02 Nov 2024

TalkAndroid

Monopoly Go – Free Dice Links Today (Updated Daily)

If you keep on running out of dice, we have just the solution! Find all the latest Monopoly Go free dice links right here!

02 Nov 2024 3:27pm GMT

Family Island Free Energy Links (Updated Daily)

Tired of running out of energy on Family Island? We have all the latest Family Island Free Energy links right here, and we update these daily!

02 Nov 2024 3:24pm GMT

Crazy Fox Free Spins & Coins (Updated Daily)

If you need free coins and spins in Crazy Fox, look no further! We update our links daily to bring you the newest working links!

02 Nov 2024 3:21pm GMT

Match Masters Free Gifts, Coins, And Boosters (Updated Daily)

Tired of running out of boosters for Match Masters? Find new Match Masters free gifts, coins, and booster links right here! Updated Daily!

02 Nov 2024 3:08pm GMT

Solitaire Grand Harvest – Free Coins (Updated Daily)

Get Solitaire Grand Harvest free coins now, new links added daily. Only tested and working links, complete with a guide on how to redeem the links.

02 Nov 2024 3:06pm GMT

Dice Dreams Free Rolls – Updated Daily

Get the latest Dice Dreams free rolls links, updated daily! Complete with a guide on how to redeem the links.

02 Nov 2024 3:04pm GMT

01 Nov 2024

TalkAndroid

Board Kings Free Rolls – Updated Every Day!

Run out of rolls for Board Kings? Find links for free rolls right here, updated daily!

01 Nov 2024 5:12pm GMT

Coin Tales Free Spins – Updated Every Day!

Tired of running out of Coin Tales Free Spins? We update our links daily, so you won't have that problem again!

01 Nov 2024 5:10pm GMT

Avatar World Codes – November 2024 – Updated Daily

Find all the latest Avatar World Codes right here in this article! Read on for more!

01 Nov 2024 5:09pm GMT

Coin Master Free Spins & Coins Links

Find all the latest Coin Master free spins right here! We update daily, so be sure to check back often!

01 Nov 2024 5:06pm GMT

Monopoly Go Events Schedule Today – Updated Daily

Current active events are the Ghostly Gains Event and the Ghost Tag Event, plus the Haunted Treasures special event. The new Marvel Monopoly Go season has started.

01 Nov 2024 5:04pm GMT

Some Google One Subscribers Are Getting $150 Off Pixel 9 Series

Only some subscribers are reporting receiving an email with this discount.

01 Nov 2024 5:00pm GMT

Google’s Fine in Russia Is a Number Too Big to Count

Google faces huge fine for blocking Russian media channels. Kremlin claims fine is symbolic, but Google must negotiate resolution.

01 Nov 2024 2:57pm GMT

A Big Shift Is Coming For The Timeline Of Major Android Releases

Android 16 is expected to launch a whole lot earlier than usual, and that might be the new normal.

01 Nov 2024 1:30pm GMT

Need More Than 120Hz? ROG Phone 9 Pro Could Be The Best Globally

Whether you're a gamer or you just want more buttery smoothness, the ROG Phone 9 Pro's screen might be made for you.

01 Nov 2024 10:38am GMT

Roblox Star Codes – November 2024

Find the latest Roblox Star codes here! Keep reading for more!

01 Nov 2024 5:41am GMT

31 Oct 2024

Android Developers Blog

#TheAndroidShow: live from Droidcon, including the biggest update to Gemini in Android Studio and more SDK releases for Android!

Posted by Matthew McCullough - Vice President, Product Management, Android Developer


We just dropped our Fall episode of #TheAndroidShow, on YouTube and on developer.android.com, and this time we're live from Droidcon in London. We bring you the latest in Android developer news, including the biggest update to Gemini in Android Studio since launch and the news that there will be more frequent SDK releases for Android, with two planned next year. Let's dive in!



Gemini in Android Studio: now helping you at every stage of the development cycle

AI has the ability to accelerate your development experience and help you be more productive. That's why we introduced Gemini in Android Studio, your AI-powered development companion, designed to make it easier and faster for you to build high quality Android apps. Today, we're launching the biggest set of updates to Gemini in Android Studio since launch: for the first time, Gemini brings the power of AI to every stage of the development lifecycle, directly into your Android Studio IDE experience.



More frequent Android SDK releases starting next year

Android has always worked to get innovation into the hands of users faster. In addition to our annual platform releases, we've invested in Project Treble, Mainline, Google Play services, monthly security updates, and the quarterly releases that help power Pixel's popular feature drop updates. Building on the success those quarterly Pixel releases have had in bringing innovation to Pixel users faster, Android will have more frequent SDK releases going forward, with two releases planned in 2025 with new developer APIs. These releases will help to drive faster innovation in apps and devices, with higher stability and polish for users and developers. Stay informed on upcoming releases for the 2025 calendar.



Make the investment in adaptive for large screens: 20% increase in app spend

Your users, especially in the premium segment, don't just buy a phone anymore, they buy into a whole ecosystem of devices. So the experiences you build should follow your users seamlessly across the many screens they own. Take large screens, for instance - foldables, tablets, ChromeOS Devices: there are now over 300 million active Android large-screen devices. This summer, Samsung released their new foldables - the Galaxy Z Fold6 and Z Flip6, and at Google we released our own - the Pixel 9 Pro Fold. We're also investing in a number of platform features to improve how users interact with these devices, like the developer preview of Desktop Windowing that we've been working on in collaboration with Samsung - optimizing these large screen devices for productivity. High quality apps optimized for large screens have several advantages on Play as well: like improved visibility in the Play Store and eligibility for featuring in curated collections and editorial articles. Apps now get separate ratings and reviews for different form factors, making positive feedback more visible.

And it's paying off for those that make the investment: we've seen that using a tablet, flip, or fold increases app spend by ~20%. FlipaClip is proof of this: they've seen a 54% growth in tablet users in the past four months. It has never been easier to build for large screens - with Compose APIs and Android Studio support specifically for building adaptive UIs.



Kotlin Multiplatform for sharing business logic across Android and iOS

Many of you build apps for multiple platforms, requiring you to write platform-specific code or make compromises in order to reuse code across platforms. We've seen the most value in reducing duplicated code for business logic. So earlier this year, we announced official support for Kotlin Multiplatform (KMP) for shared business logic across Android and iOS. KMP, developed by JetBrains, reduces development time and duplicated code, while retaining the flexibility and benefits of native programming.

At Google, we've been migrating Workspace apps, starting with the Google Docs app, to use KMP for shared business logic across Android, iOS and Web. In the community there are a growing number of companies using KMP and getting significant benefits. And it's not just apps - we've seen a 30% increase in the number of KMP libraries developed this year.

To make it easier for you to leverage KMP in your apps, we've been working on migrating many of our Jetpack libraries to take advantage of KMP. For example, Lifecycle, ViewModel, and Paging are KMP compatible libraries. Meanwhile, libraries like Room, DataStore, and Collections have KMP support, so they work out-of-the-box on Android and iOS. We've also added a new template to Android Studio so you can add a shared KMP module to your existing Android app and begin sharing business logic across platforms. Kickstart your Kotlin Multiplatform journey with this comprehensive guide.
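To make the shared-business-logic idea concrete, here is a minimal sketch of a shared KMP module using the standard expect/actual mechanism. The names (GreetingRepository, currentPlatformName) are illustrative examples, not code from the Workspace migration described above.

// commonMain: business logic written once and shared with every target.
expect fun currentPlatformName(): String

class GreetingRepository {
    // Pure Kotlin logic that compiles for both the Android and iOS targets.
    fun greeting(): String = "Hello from ${currentPlatformName()}"
}

// androidMain: Android-specific implementation of the expect declaration.
actual fun currentPlatformName(): String = "Android ${android.os.Build.VERSION.SDK_INT}"

// iosMain: iOS-specific implementation (a real app would query UIKit here).
actual fun currentPlatformName(): String = "iOS"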


Watch the Fall episode of #TheAndroidShow

That's a wrap on this quarter's episode of #TheAndroidShow. A special thanks to our co-hosts for the Fall episode, Simona Milanović and Alejandra Stamato! You can watch the full show on YouTube and on developer.android.com/events/show.

Have an idea for our next episode of #TheAndroidShow? It's your conversation with the broader community, and we'd love to hear your ideas for our next quarterly episode - you can let us know on X or LinkedIn.

31 Oct 2024 5:00pm GMT

FlipaClip optimizes for large screens and sees a 54% increase in tablet users

Posted by Miguel Montemayor - Developer Relations Engineer

FlipaClip is an app for creating dynamic and engaging 2D animations. Its powerful toolkit allows animators of all levels to bring their ideas to life, and its developers are always searching for new ways to help its users create anything they can imagine.

Increasing tablet support was pivotal in improving FlipaClip users' creativity, giving them more space and new methods of animating the stories they want to tell. Now, users on these devices can more naturally bring their visions to life thanks to Android's intuitive features, like stylus compatibility and unique large screen menu interfaces.

Large screens are a natural canvas for animation

FlipaClip initially launched as a phone app, but as tablets became more mainstream, the team knew it needed to adapt the app to take full advantage of larger screens, which are a more natural platform for animating. After updating the app, tablet users quickly became a core revenue-generating audience for FlipaClip, representing more than 40% of the app's total revenue.

"We knew we needed to prioritize the large screen experience," said Tim Meson, the lead software engineer and co-founder of FlipaClip. "We believe the tablet experience is the ideal way to use FlipaClip because it gives users more space and precision to create."

The FlipaClip team received numerous user requests for better stylus support on tablets, like pressure sensitivity, tilt, and new brush types, so it gave users exactly what they wanted. Not only did the team implement stylus support, but it also redesigned the large screen drawing area, allowing for more customization with moveable tool menus and the ability to hide extra tools.

Now, unique menu interfaces and stylus support provide a more immersive and powerful creative experience for FlipaClip's large screen users. By implementing many of the features its users requested and optimizing existing workspaces, FlipaClip increased its US tablet users by 54% in just four months. The quality of the animations made by FlipaClip artists also visibly increased, according to the team.


We knew we needed to prioritize the large screen experience...because it gives users more space and precision to create - Tim Meson, Lead Software Engineer and Co-founder of FlipaClip


Improving large screen performance

One of the key areas the FlipaClip team focused on was achieving low-latency drawing, which is critical for a smooth and responsive experience, especially with a stylus. To help with this, the team created an entire drawing engine from the ground up using Android NDK. This engine also improved the overall app responsiveness regardless of the input method.

"Focusing on GPU optimizations helped create more responsive brushes, a greater variety of brushes, and a drawing stage better suited for tablet users with more customization and more on-screen real estate," said Tim.

Previously, FlipaClip drawings were rendered using CPU-backed surfaces, resulting in suboptimal performance, especially on lower-end devices. By utilizing the GPU for rendering and consolidating touch input with the app's historical touch data, the FlipaClip team significantly improved responsiveness and fluidity across a range of devices.
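FlipaClip's engine itself isn't public, but the "historical touch data" technique referred to here is the standard platform approach: Android batches intermediate stylus samples into a single MotionEvent, and a drawing app should consume all of them rather than only the latest position. A minimal sketch, where StrokeSink is a hypothetical stand-in for a rendering engine:

import android.view.MotionEvent
import android.view.View

// Hypothetical stand-in for a native/GPU drawing engine.
interface StrokeSink {
    fun addPoint(x: Float, y: Float, pressure: Float)
}

fun handleDrawEvent(view: View, event: MotionEvent, sink: StrokeSink): Boolean {
    if (event.actionMasked == MotionEvent.ACTION_MOVE) {
        // Consume every batched sample so fast stylus strokes stay smooth.
        for (i in 0 until event.historySize) {
            sink.addPoint(
                event.getHistoricalX(i),
                event.getHistoricalY(i),
                event.getHistoricalPressure(i)
            )
        }
    }
    // The current (most recent) sample comes last.
    sink.addPoint(event.x, event.y, event.pressure)
    view.invalidate()
    return true
}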

"The improved performance enabled us to raise canvas size limits closer to 2K resolution," said Tim. "It also resolved several reported application-not-responding errors by preventing excessive drawing attempts on the screen."

After optimizing for large screens and reducing their crash rate across device types, FlipaClip's user satisfaction improved, with a 15% improvement in their Play Store rating for large screen devices. The performance enhancements to the drawing engine were particularly well received among users, leading to better engagement and overall positive feedback.

Using Android Vitals, a tool in the Google Play Console for monitoring the technical quality of Android apps, was invaluable in identifying performance issues across the devices FlipaClip users were on. This helped its engineers pinpoint specific devices lacking drawing performance and provided critical data to guide their optimizations.

FlipaClip UI examples across large screen devices


Listening to user feedback

Large screen users are Android's fastest-growing audience, reaching over 300 million users worldwide. Letting users enjoy their favorite apps across device types, while making use of the larger screen on tablets, means a more engaging experience for users to love.

"One key takeaway for us was always to take the time to review user feedback and app stability reports," said Tim. "From addressing user requests for additional stylus support to pinpointing specific devices to improve drawing performance, these insights have been invaluable for improving the app and addressing pain points of large screen users."

The FlipaClip team noted that developing for Android stood out in several ways compared to other platforms. One key difference is the libraries provided by the Android team, which are continuously updated and improved, allowing its engineers to seamlessly address and resolve any issues without requiring users to upgrade their Android OS.

"Libraries like Jetpack Compose can be updated independently of the device's system version, which is incredibly efficient," said Tim. "Plus, Android documentation has gotten a lot better over the years. The documentation for large screens is a great example. The instructions are more thorough, and all the code examples and codelabs make it so much easier to understand."

FlipaClip engineers plan to continue optimizing the app's UI for larger screens and improve its unique drawing tools. The team also wants to introduce more groundbreaking animation tools, seamless cross-device syncing, and tablet-specific gestures to improve the overall animation experience on large screen devices.

Get started

Learn how to improve your UX by optimizing for large screens.

31 Oct 2024 4:59pm GMT

Updates to power your growth on Google Play

Posted by Paul Feng - Vice President of Engineering, Product and UX, Google Play

Our annual Playtime event series kicks off this week and we're excited to share the latest product updates to help your business thrive. We're sharing new ways to grow your audience, optimize revenue, and protect your business in an ever-evolving digital landscape.

Make sure to also check out news from #TheAndroidShow to learn more about the biggest update to Gemini in Android Studio since launch that will help boost your team's developer productivity.

Growing your audience with enhanced discovery features

To help people discover apps and games they'll love, we're continuously improving our tools and personalizing app discovery so you can reach and engage your ideal audience.

Enhanced content formats: To make your video content more impactful, we're making enhancements to how it's displayed on the Play Store. Portrait videos on your store listing now have a full-screen experience to immerse users and drive conversions with a prominent "install" button. Simply keep creating amazing portrait videos for your store listing, and we'll handle the rest.

Our early results are promising: portrait videos drive a +7% increase in total watch time, a +9% increase in video completion count, and a +5% increase in conversions.

Captivate users with full-screen portrait videos on your store listing


We've also launched new features to create a more engaging and tailored experience for people exploring the Play Store.

  • Personalized query recommendations: To help users start their search journeys right, we've introduced personalized search query recommendations on Search Home. This feature is currently available in English, with expanded support for more languages coming soon this year.
Personalized search queries help tailor search results to user's interests


  • Interest pickers: Multi-select interest filters allow people to share their preferences so they can get more helpful recommendations tailored to their interests. Earlier this year, we announced this for games, and now these filters are also available for apps.

Optimizing your revenue with Google Play Commerce

We want to make it effortless for people to buy what you're selling, so we're focused on helping our 2.5 billion users in over 190 markets have a seamless and secure purchase experience. Our tools support you and your users during every step of the journey, from payment setup, to the purchase flow, to ensuring transactions are secure and compliant.

Proactive payment setup: To help more buyers be purchase ready, we've been proactively encouraging people to set up payment methods in advance, both within the Play Store and during Android device setup, and even during Google account creation. Our efforts have doubled the number of purchase-ready users this year, now reaching over half a billion users. And we're already seeing results from this approach: in September alone, we saw an almost 3% increase in global conversion rates, which means more people are completing purchases - and that translates directly into higher revenue potential for your apps and games.

Expanded payment options: Google Play already offers users over 300 local payment methods across 65+ markets, and we're regularly adding new payment methods. US users can now use Cash App eWallet alongside credit cards, PayPal, direct carrier billing, and gift cards, and users in Poland can pay with Blik banking.

Purchase flow recommendations: Our new algorithmic recommendation engine helps people discover relevant in-app purchases they're likely to buy. Simply select products to feature in Play Console, and we'll recommend a popular or related option at different moments in the purchase journey, helping users find what they need. Our early results show an average 3% increase in spend.

Purchase flow recommendations in Google Play
Purchase flow recommendations help people discover relevant in-app purchases


Cart abandonment reminders: If a user is browsing a product in your app or game, but hasn't yet made a decision to purchase, we'll remind them about it later when they browse the Play Store. These automatic, opt-out reminders help nudge users to complete their purchase.

Cart abandonment reminders in Google Play
Cart abandonment reminders help users complete their purchase


Secure bio authentication: Users can now enjoy a faster and more secure checkout experience by choosing on-device biometrics (fingerprint or face recognition) to verify their purchases, eliminating the need to enter their account password. This year, we've seen adoption triple, as more users choose bioauth to make their first purchase.

Protecting your business with the Play Integrity API

Everything we do at Google Play has safety and security at its core. That's why we're continuing to invest in more ways to reinforce user trust, protect your business, and safeguard the ecosystem. This includes actively combating bad actors who try to deceive users or spread malware, and giving you tools to combat abuse.

The Play Integrity API can help you detect and respond to potential abuse such as fraud, bots, cheating, or data theft, ensuring everyone experiences your apps and games as intended. Apps that use Play Integrity features are seeing 80% less unauthorized usage on average compared to unprotected apps.
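As a rough sketch, requesting a classic integrity verdict from the client side looks like the snippet below; the token must then be decrypted and verified on your server before you act on it. The nonce is assumed to come from your backend, and the hardware-backed and app access risk improvements described next don't change this basic call.

import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

fun requestIntegrityVerdict(context: Context, nonce: String) {
    val integrityManager = IntegrityManagerFactory.create(context)
    integrityManager
        .requestIntegrityToken(
            IntegrityTokenRequest.builder()
                .setNonce(nonce) // nonce generated by your server
                .build()
        )
        .addOnSuccessListener { response ->
            // Send the signed token to your server, which verifies it and
            // decides how to respond to the verdict.
            val token = response.token()
        }
        .addOnFailureListener { exception ->
            // Handle transient failures (e.g. no network) with a retry strategy.
        }
}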

Here's what's new with the Play Integrity API:

  • Hardware-backed security signals: In the coming months, you can opt in to improved Play Integrity API verdicts backed by hardware security and other signals on Android 13+ devices. This means faster, more reliable, and more privacy-friendly app and device verification, making it significantly harder and more costly for attackers to bypass.
  • New app access risk feature: Now out of beta, this feature allows you to detect and respond to apps that can capture the screen or control the device, so you can protect your users from scams or malicious activity.

Those are the latest updates from Google Play! We're always enhancing our tools to help address the specific challenges and opportunities of different app categories, from games and media to entertainment and social.

We're excited to see how you leverage both our new and existing features to grow your business. Check out how Spotify and SuperPlay are already taking advantage of features like Play Points and Collections to achieve powerful results:




31 Oct 2024 4:58pm GMT

More frequent Android SDK releases: faster innovation, higher quality and more polish

Posted by Matthew McCullough - Vice President, Product Management, Android Developer

Android has always worked to get innovation into the hands of users faster. In addition to our annual platform releases, we've invested in Project Treble, Mainline, Google Play services, monthly security updates, and the quarterly releases that help power Pixel Drops.

Going forward, Android will have more frequent SDK releases with two releases planned in 2025 with new developer APIs. These releases will help to drive faster innovation in apps and devices, with higher stability and polish for users and developers.

Two Android releases in 2025

Next year, we'll have a major release in Q2 and a minor release in Q4, both of which will include new developer APIs. The Q2 major release will be the only release in 2025 to include behavior changes that can affect apps. We're planning the major release for Q2 rather than Q3 to better align with the schedule of device launches across our ecosystem, so more devices can get the major release of Android sooner.

The Q4 minor release will pick up feature updates, optimizations, and bug fixes since the major release. It will also include new developer APIs, but will not include any app-impacting behavior changes.

Outside of the major and minor Android releases, our Q1 and Q3 releases will provide incremental updates to help ensure continuous quality. We're actively working with our device partners to bring the Q2 release to as many devices as possible.

2025 SDK release timeline showing a features only update in Q1 and Q3, a major SDK release with behavior changes, APIs, and features in Q2, and a minor SDK release with APIs and features in Q4

What this means for your apps

With the major release coming in Q2, you'll need to do your annual compatibility testing a few months earlier than in previous years to make sure your apps are ready. Major releases are just like the SDK releases we have today, and can include behavior changes along with new developer APIs - and to help you get started, we'll soon begin the developer preview and beta program for the Q2 major release.

The minor release in Q4 will include new APIs, but, like the incremental quarterly releases we have today, will have no planned behavior changes, minimizing the need for compatibility testing. To differentiate major releases (which may contain planned behavior changes) from minor releases, minor releases will not increment the API level. Instead, they'll increment a new minor API level value, which will be accessed through a constant that captures both major and minor API levels. A new manifest attribute will allow you to specify a minor API level as the minimum required SDK release for your app. We'll have an initial version of support for minor API levels in the upcoming Q2 developer preview, so please try building against the SDK and let us know how this works for you.
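As a purely hypothetical sketch of what this could look like in practice, the check below uses only today's major-level constant; the combined major+minor constant and the new manifest attribute aren't named in this post, so they appear only as comments and placeholders.

import android.os.Build

// Placeholder value for the 2025 Q2 major release's API level; not confirmed in this post.
const val NEXT_MAJOR_API_LEVEL = 36

fun canUseNextMajorApis(): Boolean =
    // Today, only the major API level is available at runtime.
    Build.VERSION.SDK_INT >= NEXT_MAJOR_API_LEVEL

// Once the minor-level constant ships, an API added in the Q4 minor release would be
// gated on that combined major+minor value instead, and the new manifest attribute
// would declare the minimum minor level your app requires.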

When planning your targeting for 2026, there's no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, and that will be tied to the major API level only.

How to get ready

In addition to compatibility testing on the next major release, you'll want to make sure to test your builds and CI systems with SDKs supporting major and minor API levels - some build systems (including the Android Gradle build) might need adapting. Make sure that you're compiling your apps against the new SDK, and use the compatibility framework to enable targetSdkVersion-gated behavior changes for early testing.

Meta is a great example of how to embrace and test for new releases: they improved their velocity towards targetSdkVersion adoption by 4x. They compiled apps against each platform Beta and conducted thorough automated and smoke tests to proactively identify potential issues. This helped them seamlessly adopt new platform features, and when the release rolled out to users, Meta's apps were ready - creating a great user experience.

What's next?

As always, we plan to work closely with you as we move through the 2025 releases. We will make all of our quarterly releases available to you for testing and feedback, with over-the-air Beta releases for our early testers on Pixel and downloadable system images and tools for developers.

Our aim with these changes is to enable faster innovation and a higher level of quality and polish across releases, without introducing more overhead or costs for developers. At the same time, we're welcoming an even closer collaboration with you throughout the year. Stay tuned for more information on the first developer preview of Android 16.

The shift in platform releases highlights Android's commitment to constant evolution and collaboration. By working closely with partners and listening to the needs of developers, Android continues to push the boundaries of what's possible in the mobile world. It's an exciting time to be part of the Android ecosystem, and I can't wait to see what the future holds!

31 Oct 2024 4:57pm GMT

Gemini in Android Studio, now helping you across the development lifecycle

Posted by Sandhya Mohan - Product Manager, Android Studio

This is Our Biggest Feature Release Since Launch!

AI can accelerate your development experience, and help you become more productive. That's why we introduced Gemini in Android Studio, your AI-powered coding companion. It's designed to make it easier for you to build high quality Android apps, faster. Today, we're releasing the biggest set of updates to Gemini in Android Studio since launch, and now Gemini brings the power of AI to every stage of the development lifecycle, directly within the Android Studio IDE experience. And for more updates on how to grow your apps and games businesses, check out the latest updates from Google Play.

Download the latest version of Android Studio in the canary channel to take advantage of all these new features, and read on to unpack what's new.



Gemini Can Now Write, Refactor, and Document Android Code

Gemini goes beyond just guidance. It can edit your code, helping you quickly move from prototype to implementation, implement common design patterns, and refactor your code. Gemini also streamlines your workflow with features like documentation and commit message generation, allowing you to focus more time on writing code.

Moving image demonstrating Gemini writing code for an Android Composable in real time in Android Studio


Coding features we are launching include:

  • Gemini Code Transforms - modify and refactor code using custom prompts.


using Gemini to modify code in Android Studio


  • Commit message generation - analyze changes and propose VCS commit messages to streamline version control operations.


using Gemini to analyze changes and propose VCS commit messages in Android Studio


  • Rethink and Rename - generate intuitive names for your classes, methods, and variables. This can be invoked while you're coding, or as a larger refactor action applied to existing code.


using Gemini to generate intuitive names for variables while you're coding in Android Studio


  • Prompt library - save and manage your most frequently used prompts. You can quickly recall them when you need them.


save your frequently used prompts for future use with Gemini in Android Studio


  • Generate documentation - get documentation for selected code snippets with a simple right click.


generating code documentation in Android Studio


Integrating AI into UI Tools

It's never been easier to build with Compose now that we have integrated AI into Compose workflows. Composable previews help you visualize your composables during design time in Android Studio. We understand that manually crafting mock data for the preview parameters can be time-consuming. Gemini can now help auto-generate Composable previews with relevant context using AI, simplifying the process of visualizing your UI during development.
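A generated preview is, in the end, an ordinary @Preview composable supplied with realistic mock data. A minimal hand-written example of that shape is below; ProfileCard and its data class are hypothetical, not output produced by Android Studio.

import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.tooling.preview.Preview

// Hypothetical composable under development.
data class Profile(val name: String, val followers: Int)

@Composable
fun ProfileCard(profile: Profile) {
    Text("${profile.name} - ${profile.followers} followers")
}

// A preview amounts to a @Preview wrapper that supplies mock data
// so the composable renders in the design-time preview pane.
@Preview(showBackground = true)
@Composable
fun ProfileCardPreview() {
    ProfileCard(profile = Profile(name = "Ada Lovelace", followers = 1250))
}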

Visualize your composables during design time in Android Studio


We are continuing to experiment with multimodal support to speed up your UI development cycle. Coming soon, we will allow image attachments as context, utilizing Gemini's multimodal understanding to make it easier to create beautiful and engaging user interfaces.

Deploy with Confidence

Gemini's intelligence can help you release higher quality apps with greater confidence. Gemini can analyze and test code and suggest fixes - and we are continuing to integrate AI into the IDE's App Quality Insights tool window by helping you analyze crashes reported by Google Play Console and Firebase Crashlytics. Now, with the Ladybug Feature Drop, you can generate deeper insights by using your local code context. This means that you will fix bugs faster and your users will see fewer crashes.

Generate insights using the IDE's App Quality Insights tool window


Some of the features we are launching include:

  • Unit test scenario generation generates unit test scenarios based on local code context.


generate unit test scenarios based on local code context in Android Studio


  • Build / sync error insights now provides improved coverage for build and sync errors.


build sync error insights are now available in Android Studio


  • App Quality Insights explains and suggests fixes for observed crashes from Android Vitals and Firebase Crashlytics, and now allows you to use local code context for improved insights.




A better Gemini in Android Studio for you

We recently surveyed many of you to see how AI-powered code completion has impacted your productivity, and 86% of respondents said they felt more productive. Please continue to provide feedback as you use Gemini in your day-to-day workflows. In fact, a few of you wanted to share some of your tips and tricks for how to get the most out of Gemini in Android Studio.



Along with the Gemini Nano APIs that you can integrate with your own app, Android developers now have access to Google's leading edge AI technologies across every step of their development journey - with Gemini in Android Studio central to that developer experience.

Get these new features in the latest versions of Android Studio

These features are all available to try today in the Android Studio canary channel. We expect to release many of these features in the upcoming Ladybug Feature Drop, to be released in the stable channel in late December - with the rest to follow shortly after.

  • Gemini Code Transforms - Modify and refactor your code within the editor
  • Commit message generation - Automatically generate commit messages with Gemini
  • Rethink and Rename - Get help renaming your classes, methods, and variables
  • Prompt library - Save and recall your most commonly used prompts
  • Compose Preview Generation - Generate previews for your composables with Gemini
  • Generate documentation - Have Gemini help you document your code
  • Unit test scenario generation - Generate unit test scenarios
  • Build / sync error insights - Ask Gemini for help in troubleshooting build and sync errors
  • App Quality Insights - Insights on how you can fix crashes from Android Vitals and Firebase Crashlytics

As always, Google is committed to the responsible use of AI. Android Studio won't send any of your source code to servers without your consent - which means you'll need to opt in to enable Gemini's developer assistance features in Android Studio. You can read more on Gemini in Android Studio's commitment to privacy.

Try enabling Gemini in your project and tell us what you think on social media with #AndroidGeminiEra. We're excited to see how these enhancements help you build amazing apps!

31 Oct 2024 4:56pm GMT

23 Oct 2024

Android Developers Blog

Set a reminder: tune in for our Fall episode of #TheAndroidShow on October 31, live from Droidcon!

Posted by Anirudh Dewani - Director, Android Developer Relations

In just a few days, on Thursday, October 31st at 10AM PT, we'll be dropping our Fall episode of #TheAndroidShow, on YouTube and on developer.android.com!

In our quarterly show, this time we'll be live from Droidcon in London, giving you the latest in Android Developer news with demos of Jetpack Compose and more. You can set a reminder to watch the livestream on YouTube, or click here to add to your calendar.


In our Fall episode, we'll be taking the lid off the biggest update to Gemini in Android Studio, so you don't want to miss out! We also had a number of recent wearable, foldable and large screen device launches and updates, and we'll be unpacking what you need to know to get building for these form factors.

Get your #AskAndroid questions answered live!

And we've assembled a team of experts from across Android to answer your #AskAndroid questions on building excellent apps, across devices - share your questions now and tune in to see if they are answered live on the show!

#TheAndroidShow is your conversation with the Android developer community, this time hosted by Simona Milanović and Alejandra Stamato. You'll hear the latest from the developers and engineers who build Android. Don't forget to tune in live on October 31 at 10AM PT, live on YouTube and on developer.android.com/events/show!

23 Oct 2024 4:00pm GMT

16 Oct 2024

Android Developers Blog

CameraX update makes dual concurrent camera even easier

Posted by Donovan McMurray - Developer Relations Engineer

CameraX, Android's Jetpack camera library, is getting an exciting update to its Dual Concurrent Camera feature, making it even easier to integrate this feature into your app. This feature allows you to stream from 2 different cameras at the same time. The original version of Dual Concurrent Camera was released in CameraX 1.3.0, and it was already a huge leap in making this feature easier to implement.

Starting with 1.5.0-alpha01, CameraX will now handle the composition of the 2 camera streams as well. This update is additional functionality: it doesn't remove any prior functionality, nor is it a breaking change to your existing Dual Concurrent Camera code. To tell CameraX to handle the composition, simply use the new SingleCameraConfig constructor, which has a new parameter for a CompositionSettings object. Since you'll be creating 2 SingleCameraConfigs, you should be consistent with which constructor you use.

Nothing has changed in the way you check for concurrent camera support from the prior version of this feature. As a reminder, here is what that code looks like.

// Set up primary and secondary camera selectors if supported on device.
var primaryCameraSelector: CameraSelector? = null
var secondaryCameraSelector: CameraSelector? = null

for (cameraInfos in cameraProvider.availableConcurrentCameraInfos) {
    primaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_FRONT
    }?.cameraSelector
    secondaryCameraSelector = cameraInfos.firstOrNull {
        it.lensFacing == CameraSelector.LENS_FACING_BACK
    }?.cameraSelector

    if (primaryCameraSelector == null || secondaryCameraSelector == null) {
        // If either a primary or secondary selector wasn't found, reset both
        // to move on to the next list of CameraInfos.
        primaryCameraSelector = null
        secondaryCameraSelector = null
    } else {
        // If both primary and secondary camera selectors were found, we can
        // conclude the search.
        break
    }
}

if (primaryCameraSelector == null || secondaryCameraSelector == null) {
    // Front and back concurrent camera not available. Handle accordingly.
}


Here's the updated code snippet showing how to implement picture-in-picture, with the front camera stream scaled down to fit into the lower right corner. In this example, CameraX handles the composition of the camera streams.

// If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
// and compose them in a picture-in-picture layout.
val primary = SingleCameraConfig(
    cameraSelectorPrimary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.0f, 0.0f)
        .setScale(1.0f, 1.0f)
        .build(),
    lifecycleOwner)
val secondary = SingleCameraConfig(
    cameraSelectorSecondary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(2 / 3f - 0.1f, -2 / 3f + 0.1f)
        .setScale(1 / 3f, 1 / 3f)
        .build(),
    lifecycleOwner)

// Bind to lifecycle
val concurrentCamera =
    cameraProvider.bindToLifecycle(listOf(primary, secondary))


You are not constrained to a picture-in-picture layout. For instance, you could define a side-by-side layout by setting the offsets and scaling factors accordingly. You want to keep both dimensions scaled by the same amount to avoid a stretched preview. Here's how that might look.

// If 2 concurrent camera selectors were found, create 2 SingleCameraConfigs
// and compose them in a side-by-side layout.
val primary = SingleCameraConfig(
    cameraSelectorPrimary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.0f, 0.25f)
        .setScale(0.5f, 0.5f)
        .build(),
    lifecycleOwner)
val secondary = SingleCameraConfig(
    cameraSelectorSecondary,
    useCaseGroup,
    CompositionSettings.Builder()
        .setAlpha(1.0f)
        .setOffset(0.5f, 0.25f)
        .setScale(0.5f, 0.5f)
        .build(),
    lifecycleOwner)

// Bind to lifecycle
val concurrentCamera =
    cameraProvider.bindToLifecycle(listOf(primary, secondary))


We're excited to offer this improvement to an already developer-friendly feature. Truly the CameraX way! CompositionSettings in Dual Concurrent Camera is currently in alpha, so if you have feature requests to improve upon it before the API is locked in, please give us feedback in the CameraX Discussion Group. And check out the full CameraX 1.5.0-alpha01 release notes to see what else is new in CameraX.

16 Oct 2024 9:00pm GMT

Chrome on Android to support third-party autofill services natively

Posted by Eiji Kitamura - Developer Advocate

Chrome on Android will soon allow third-party autofill services (like password managers) to natively autofill forms on websites. Developers of these services need to tell their users to toggle a setting in Chrome so they can keep using their service there.


Background

Google is the default autofill service on Chrome, providing passwords, passkeys and autofill for other information like addresses and payment data.

A third-party password manager can be set as the preferred autofill service on Android through System Settings. The preferred autofill service can fill across all Android apps. However, to autofill forms on Chrome, the autofill service needs to use "compatibility mode". This causes glitches in Chrome, such as janky page scrolling, and can show duplicate suggestions from both Google and the third-party service.

With this coming change, Chrome on Android will allow third-party autofill services to natively autofill forms giving users a smoother and simpler user experience. Third-party autofill services can autofill passwords, passkeys and other information like addresses and payment data, as they would in other Android apps.


Try the feature yourself

You can already test the functionality on Chrome 131 and later. First, set a third-party autofill service as preferred in Android 14:

Note: Instructions may vary by device manufacturer. The below steps are for a Google Pixel device running Android 15.

  1. Open Android's System Settings
  2. Select Passwords, passkeys & accounts
  3. Tap on Change button under Preferred service
  4. Select a preferred service
  5. Confirm changing the preferred autofill service


Side by side screenshots show the steps involved in enabling third-party autofill service from your device: first tap 'Change', then select the new service, and finally confirm the change.


Second, enable the third-party autofill service in Chrome:

  1. Open Chrome on Android
  2. Open chrome://flags#enable-autofill-virtual-view-structure
  3. Set the flag to "Enabled" and restart
  4. Open Chrome's Settings and tap Autofill Services
  5. Choose Autofill using another service
  6. Confirm and restart Chrome

Note: Steps 2 and 3 are not necessary after Chrome 131. Chrome 131 is scheduled to be stable on November 12th, 2024.

Side by side screenshots show the steps involved in changing your preferred password service on a smartphone: first tap 'Autofill Services', then select 'Autofill using another service', and finally restart Chrome to complete setup.


You can emulate how Chrome behaves after compatibility mode is disabled by updating chrome://flags#suppress-autofill-via-accessibility to Enabled.

Actions required from third-party autofill services

Implementation-wise, autofill service developers don't need to do any additional work as long as they have a proper integration with the Android autofill framework. Chrome will respect the preferred service and autofill forms.
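For reference, a "proper integration" here means extending the platform's AutofillService. A minimal skeleton is sketched below; the FillResponse/Dataset building that matches form fields to stored credentials is omitted.

import android.os.CancellationSignal
import android.service.autofill.AutofillService
import android.service.autofill.FillCallback
import android.service.autofill.FillRequest
import android.service.autofill.SaveCallback
import android.service.autofill.SaveRequest

// Declared in the manifest with the BIND_AUTOFILL_SERVICE permission and an
// intent filter for android.service.autofill.AutofillService.
class MyAutofillService : AutofillService() {

    override fun onFillRequest(
        request: FillRequest,
        cancellationSignal: CancellationSignal,
        callback: FillCallback
    ) {
        // Parse request.fillContexts.last().structure, match fields to saved
        // credentials, then return a FillResponse (or null if nothing matches).
        callback.onSuccess(null)
    }

    override fun onSaveRequest(request: SaveRequest, callback: SaveCallback) {
        // Persist the values the user submitted, then acknowledge.
        callback.onSuccess()
    }
}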

Chrome plans to stop supporting compatibility mode in early 2025. Users must select Autofill using another service in Chrome settings to ensure their autofill experience is unaffected. The new setting is available in Chrome 131. You should encourage your users to toggle the setting, to ensure they have the best autofill experience possible with your service and Chrome on Android.


Timeline

  • October 16th, 2024: Chrome 131 beta is available
  • November 12th, 2024: Chrome 131 is in stable
  • Early 2025: Compatibility mode is no longer available on Chrome

16 Oct 2024 5:00pm GMT

15 Oct 2024

Android Developers Blog

Creating a responsive dashboard layout for JetLagged with Jetpack Compose

Posted by Rebecca Franks - Developer Relations Engineer

This blog post is part of our series: Adaptive Spotlight Week, where we provide resources (blog posts, videos, sample code, and more) designed to help you adapt your apps to phones, foldables, tablets, ChromeOS and even cars. You can read more in the overview of the Adaptive Spotlight Week, which will be updated throughout the week.


We've heard the news: creating adaptive layouts in Jetpack Compose is easier than ever. As a declarative UI toolkit, Jetpack Compose is well suited for designing and implementing layouts that adjust themselves to render content differently across a variety of sizes. By using logic coupled with Window Size Classes, Flow layouts, movableContentOf and LookaheadScope, we can ensure fluid responsive layouts in Jetpack Compose.

Following the release of the JetLagged sample at Google I/O 2023, we decided to add more examples to it. Specifically, we wanted to demonstrate how Compose can be used to create a beautiful dashboard-like layout. This article shows how we've achieved this.

Moving image demonstrating responsive design in Jetlagged where items animate positions automatically
Responsive design in Jetlagged where items animate positions automatically


Use FlowRow and FlowColumn to build layouts that respond to different screen sizes

Flow layouts (FlowRow and FlowColumn) make it much easier to implement responsive, reflowing layouts that respond to screen sizes and automatically flow content to a new line when the available space in a row or column is full.

In the JetLagged example, we use a FlowRow with maxItemsInEachRow set to 3. This ensures we maximize the space available for the dashboard and place each individual card in a row or column where space is used wisely. On mobile devices we mostly have one card per row; only when the items are smaller are two visible per row.

Some cards leverage Modifiers that don't specify an exact size, allowing the cards to grow to fill the available width, for instance using Modifier.widthIn(max = 400.dp); others set a certain size, like Modifier.width(200.dp).

FlowRow(
    modifier = Modifier.fillMaxSize(),
    horizontalArrangement = Arrangement.Center,
    verticalArrangement = Arrangement.Center,
    maxItemsInEachRow = 3
) {
    Box(modifier = Modifier.widthIn(max = 400.dp))
    Box(modifier = Modifier.width(200.dp))
    Box(modifier = Modifier.size(200.dp))
    // etc 
}


We could also leverage the weight modifier to divide up the remaining area of a row or column; check out the documentation on item weights for more information. A quick illustration follows below.
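To make the item-weight idea concrete, here is a minimal sketch (not taken from the JetLagged sample): weights divide whatever width remains after the unweighted siblings have been measured.

import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.Row
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.foundation.layout.height
import androidx.compose.foundation.layout.width
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

@Composable
fun WeightedRow() {
    Row(modifier = Modifier.fillMaxWidth().height(100.dp)) {
        Box(modifier = Modifier.width(100.dp))  // fixed width
        Box(modifier = Modifier.weight(1f))     // 1/3 of the remaining width
        Box(modifier = Modifier.weight(2f))     // 2/3 of the remaining width
    }
}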


Use WindowSizeClasses to differentiate between devices

WindowSizeClasses are useful for building up breakpoints in our UI for when elements should display differently. In JetLagged, we use the classes to know whether we should include cards in Columns or keep them flowing one after the other.

For example, if the width size class is WindowWidthSizeClass.COMPACT, we keep items in the same FlowRow, whereas if the layout is larger than compact, they are placed in a FlowColumn nested inside a FlowRow:

            FlowRow(
                modifier = Modifier.fillMaxSize(),
                horizontalArrangement = Arrangement.Center,
                verticalArrangement = Arrangement.Center,
                maxItemsInEachRow = 3
            ) {
                JetLaggedSleepGraphCard(uiState.value.sleepGraphData)
                if (windowSizeClass == WindowWidthSizeClass.COMPACT) {
                    AverageTimeInBedCard()
                    AverageTimeAsleepCard()
                } else {
                    FlowColumn {
                        AverageTimeInBedCard()
                        AverageTimeAsleepCard()
                    }
                }
                if (windowSizeClass == WindowWidthSizeClass.COMPACT) {
                    WellnessCard(uiState.value.wellnessData)
                    HeartRateCard(uiState.value.heartRateData)
                } else {
                    FlowColumn {
                        WellnessCard(uiState.value.wellnessData)
                        HeartRateCard(uiState.value.heartRateData)
                    }
                }
            }


From the above logic, the UI will appear in the following ways on different device sizes:

Side by side comparisons of the differences in UI on three different sized devices
Different UI on different sized devices


Use movableContentOf to maintain bits of UI state across screen resizes

Movable content allows you to save the contents of a Composable to move it around your layout hierarchy without losing state. It should be used for content that is perceived to be the same - just in a different location on screen.

Imagine this, you are moving house to a different city, and you pack a box with a clock inside of it. Opening the box in the new home, you'd see that the time would still be ticking from where it left off. It might not be the correct time of your new timezone, but it will definitely have ticked on from where you left it. The contents inside the box don't reset their internal state when the box is moved around.

What if you could use the same concept in Compose to move items on screen without losing their internal state?

Take the following scenario into account: Define different Tile composables that display an infinitely animating value between 0 and 100 over 5000ms.


@Composable
fun Tile1() {
    val repeatingAnimation = rememberInfiniteTransition()

    val float = repeatingAnimation.animateFloat(
        initialValue = 0f,
        targetValue = 100f,
        animationSpec = infiniteRepeatable(repeatMode = RepeatMode.Reverse,
            animation = tween(5000))
    )
    Box(modifier = Modifier
        .size(100.dp)
        .background(purple, RoundedCornerShape(8.dp))){
        Text("Tile 1 ${float.value.roundToInt()}",
            modifier = Modifier.align(Alignment.Center))
    }
}


We then display them on screen using a Column Layout - showing the infinite animations as they go:

A purple tile stacked in a column above a pink tile. Both tiles show a counter, counting up from 0 to 100 and back down to 0


But what if we wanted to lay the tiles out differently, depending on whether the phone is in a different orientation (or has a different screen size), without the animation values stopping? Something like the following:

@Composable
fun WithoutMovableContentDemo() {
    val mode = remember {
        mutableStateOf(Mode.Portrait)
    }
    if (mode.value == Mode.Landscape) {
        Row {
           Tile1()
           Tile2()
        }
    } else {
        Column {
           Tile1()
           Tile2()
        }
    }
}


This looks pretty standard, but running this on device - we can see that switching between the two layouts causes our animations to restart.

A purple tile stacked in a column above a pink tile. Both tiles show a counter, counting upward from 0. The column changes to a row and back to a column, and the counter restarts every time the layout changes


This is the perfect case for movable content - the same Composables are on screen, just in a different location. So how do we use it? We can just define our tiles in a movableContentOf block, using remember to ensure it's saved across compositions:

val tiles = remember {
        movableContentOf {
            Tile1()
            Tile2()
        }
 }


Now, instead of calling our composables directly inside the Column and Row, we call tiles().

@Composable
fun MovableContentDemo() {
    val mode = remember {
        mutableStateOf(Mode.Portrait)
    }
    val tiles = remember {
        movableContentOf {
            Tile1()
            Tile2()
        }
    }
    Box(modifier = Modifier.fillMaxSize()) {
        if (mode.value == Mode.Landscape) {
            Row {
                tiles()
            }
        } else {
            Column {
                tiles()
            }
        }

        Button(onClick = {
            if (mode.value == Mode.Portrait) {
                mode.value = Mode.Landscape
            } else {
                mode.value = Mode.Portrait
            }
        }, modifier = Modifier.align(Alignment.BottomCenter)) {
            Text("Change layout")
        }
    }
}


This will then remember the nodes generated by those Composables and preserve the internal state that these composables currently have.

A purple tile stacked in a column above a pink tile. Both tiles show a counter, counting upward from 0 to 100. The column changes to a row and back to a column, and the counter continues seamlessly when the layout changes


We can now see that our animation state is remembered across the different compositions. Our clock in the box will now keep state when it's moved around the world.

Using this concept, we can keep the animating bubble state of our cards, by placing the cards in movableContentOf:

val timeSleepSummaryCards = remember {
    movableContentOf {
        AverageTimeInBedCard()
        AverageTimeAsleepCard()
    }
}

LookaheadScope {
    FlowRow(
        modifier = Modifier.fillMaxSize(),
        horizontalArrangement = Arrangement.Center,
        verticalArrangement = Arrangement.Center,
        maxItemsInEachRow = 3
    ) {
        //..
        if (windowSizeClass == WindowWidthSizeClass.Compact) {
            timeSleepSummaryCards()
        } else {
            FlowColumn {
                timeSleepSummaryCards()
            }
        }
        // ..
    }
}


This allows the cards' state to be remembered, and the cards won't be recomposed. This is evident when observing the bubbles in the background of the cards: on resizing the screen, the bubble animation continues without restarting.

A purple tile showing Average time in bed stacked in a column above a green tile showing average time sleep. Both tiles show moving bubbles. The column changes to a row and back to a column, and the bubbles continue to move across the tiles as the layout changes


Use Modifier.animateBounds() to have fluid animations between different window sizes

From the above example, we can see that state is maintained between changes in layout size (or layout itself), but the difference between the two layouts is a bit jarring. We'd like this to animate between the two states without issue.

In the latest compose-bom-alpha (2024.09.03), there is a new experimental custom Modifier, Modifier.animateBounds(). The animateBounds modifier requires a LookaheadScope.

LookaheadScope enables Compose to perform intermediate measurement passes of layout changes, notifying composables of the intermediate states between them. LookaheadScope is also used for the new shared element APIs that you may have seen recently.

To use Modifier.animateBounds(), we wrap the top-level FlowRow in a LookaheadScope, and then apply the animateBounds modifier to each card. We can also customize how the animation runs, by specifying the boundsTransform parameter to a custom spring spec:

val boundsTransform = { _ : Rect, _: Rect ->
   spring(
       dampingRatio = Spring.DampingRatioNoBouncy,
       stiffness = Spring.StiffnessMedium,
       visibilityThreshold = Rect.VisibilityThreshold
   )
}


LookaheadScope {
   val animateBoundsModifier = Modifier.animateBounds(
       lookaheadScope = this@LookaheadScope,
       boundsTransform = boundsTransform)
   val timeSleepSummaryCards = remember {
       movableContentOf {
           AverageTimeInBedCard(animateBoundsModifier)
           AverageTimeAsleepCard(animateBoundsModifier)
       }
   }
   FlowRow(
       modifier = Modifier
           .fillMaxSize()
           .windowInsetsPadding(insets),
       horizontalArrangement = Arrangement.Center,
       verticalArrangement = Arrangement.Center,
       maxItemsInEachRow = 3
   ) {
       JetLaggedSleepGraphCard(uiState.value.sleepGraphData, animateBoundsModifier.widthIn(max = 600.dp))
       if (windowSizeClass == WindowWidthSizeClass.Compact) {
           timeSleepSummaryCards()
       } else {
           FlowColumn {
               timeSleepSummaryCards()
           }
       }


       FlowColumn {
           WellnessCard(
               wellnessData = uiState.value.wellnessData,
               modifier = animateBoundsModifier
                   .widthIn(max = 400.dp)
                   .heightIn(min = 200.dp)
           )
           HeartRateCard(
               modifier = animateBoundsModifier
                   .widthIn(max = 400.dp, min = 200.dp),
               uiState.value.heartRateData
           )
       }
   }
}


Applying this to our layout, we can see the transition between the two states is more seamless without jarring interruptions.

A purple tile showing Average time in bed stacked in a column above a green tile showing average time sleep. Both tiles show moving bubbles. The column changes to a row and back to a column, and the bubbles continue to move across the tiles as the layout changes


Applying this logic to our whole dashboard, you will see that resizing the layout now gives a fluid UI interaction throughout the whole screen.

Moving image demonstrating responsive design in Jetlagged where items animate positions automatically


Summary

As you can see from this article, using Compose has enabled us to build a responsive dashboard-like layout by leveraging flow layouts, WindowSizeClasses, movable content and LookaheadScope. These concepts can also be used for your own layouts that may have items moving around in them too.

For more information on these different topics, be sure to check out the official documentation. For the detailed changes to JetLagged, take a look at this pull request.

15 Oct 2024 4:00pm GMT

#WeArePlay | NomadHer helps women travel the world

Posted by Robbie McLachlan, Developer Marketing


In our latest film for #WeArePlay, which celebrates the people behind apps and games, we meet Hyojeong - the visionary behind the app NomadHer. She's aiming to reshape how women explore the world by building a global community: sharing travel tips, prioritizing safety, and connecting with one another to explore new destinations.



What inspired you to create NomadHer?

Honestly, NomadHer was born out of a personal need. I started traveling solo when I was 19 and have visited over 60 countries alone, and while it was an incredibly empowering and enriching journey, it wasn't always easy, especially as a woman. There was this one moment when I was traveling in Italy that really shook me. I realized just how important it was to have a support system, not just friends or family, but other women who understand what it's like to be out there on your own. That's when the idea hit me: I wanted to create a space where women could feel safe and confident while seeing the world.


NomadHer Founder - Hyojeong Kim from South Korea, smiling, wearing a white t-shirt with green text that reads 'she can travel anywhere'


The focus on connecting women who share similar travel plans is a powerful tool. Can you share feedback from someone who has found travel buddies through NomadHer?

Absolutely! One of my favorite stories comes from a woman who was planning a solo trip to Bali. She connected with another 'NomadHer' through the app who had the exact same travel dates and itinerary. They started chatting, and by the time they met up in Bali, it was like they'd known each other forever. They ended up traveling together, trying out new restaurants, exploring hidden beaches, and even taking a surfing class! After the trip, they both messaged us saying how the experience felt safer and more fun because they had each other. It's stories like these that remind me why I started NomadHer in the first place.

How did Google Play help you grow NomadHer?

We couldn't connect with the 90,000+ women worldwide without Google Play. We've been able to reach people from Latin America, Africa, Asia, and beyond. It's incredible seeing women connect, share tips, and support each other, no matter where they are. With tools like Firebase, we can track and improve the app experience, making sure everything runs smoothly. Plus, Google Play's startup programs gave us mentorship and visibility, which really helped us grow and expand our reach faster. It's been a game-changer.

NomadHer on Google Play on a device


What are your hopes for NomadHer in the future?

I want NomadHer to be more than just an app: it's a movement. My dream is for it to become the go-to platform for women travelers everywhere. I want to see more partnerships with local women entrepreneurs, like the surf shop owner we work with in Busan. We host offline events like the She Can Travel Festival in Busan and I'm excited to host similar events in other countries like Paris, Tokyo, and Delhi. The goal is to continue creating these offline connections to build a community that empowers women, both socially and economically, through partnerships with local female businesses.

Discover more global #WeArePlay stories and share your favorites.



How useful did you find this blog post?

15 Oct 2024 1:00pm GMT

14 Oct 2024

feedAndroid Developers Blog

Here's what's happening in our latest Spotlight Week: Adaptive Android Apps

Posted by Alex Vanyo - Developer Relations Engineer

Adaptive Spotlight Week

With Android powering a diverse range of devices, users expect a seamless and optimized experience across their foldables, tablets, ChromeOS, and even cars. To meet these expectations, developers need to build their apps with multiple screen sizes and form factors in mind. Changing how you approach UI can drastically improve users' experiences across foldables, tablets, and more, while preventing tech debt that a portrait-only mindset can create - simply put, building adaptive is a great way to help future-proof your app.

The latest in our Spotlight Week series will focus on Building Adaptive Android apps all this week (October 14-18), and we'll highlight the many ways you can improve your mobile app to adapt to all of these different environments.



Here's what we're covering during Adaptive Spotlight Week

Monday: What is adaptive?

October 14, 2024

Check out the new documentation for building adaptive apps and catch up on building adaptive Android apps if you missed it at I/O 2024. Also, learn how adaptive apps can be made available on another new form factor: cars!

Tuesday: Adaptive UIs with Compose

October 15, 2024

Learn the principles for how you can use Compose to build layouts that adapt to available window size and how the Material 3 adaptive library enables you to create list-detail and supporting pane layouts with out-of-the-box behavior.

Read the blog post: Creating a responsive dashboard layout for JetLagged with Jetpack Compose


The Android robot blasts off from a cloud, leaving behind a laptop and a smartphone on a dark background with green and yellow lines.

Creating a responsive dashboard layout for JetLagged with Jetpack Compose

Use Flow layouts, WindowSizeClasses, movableContentOf, and LookaheadScope to achieve a fluid and adaptable UI that adjusts to different screen sizes.


Wednesday: Desktop windowing and productivity

October 16, 2024

Learn what desktop windowing on Android is, together with details about how to handle it in your app and build productivity experiences that let users take advantage of more powerful multitasking Android environments.

Read the blog post: Developer Preview: Desktop windowing on Android Tablets


A tablet displaying a split screen with an email inbox on the left and a chat conversation on the right. A video call is taking place in the bottom right corner.

Developer Preview: Desktop windowing on Android Tablets

Desktop windowing on Android tablets enables users to run multiple apps simultaneously and resize windows for optimal multitasking.


Thursday: Stylus

October 17, 2024

Learn the principles for how to build Adaptive layouts in Compose following the phase system, with examples of custom layouts using tips and tricks to bring designs to life.

Watch the video: Custom Adaptive layouts in Compose


And take a look at how you can build powerful drawing experiences across stylus and touch input with the new Ink API.


A laptop and a mobile phone, both displaying simplified web content, are connected by a yellow line, illustrating the concept of responsive web design.

Introducing Ink API, a new Jetpack library for stylus apps

Ink API is a new Jetpack library for stylus apps making it easy for developers to create, render, and manipulate beautiful ink strokes.


Friday: #AskAndroid

October 18, 2024

Join us for a live Q&A on making apps more adaptive. During Spotlight Week, ask your questions on X and LinkedIn with #AskAndroid.

Watch the livestream: Adaptive #AskAndroid | Spotlight Week


These are just some of the ways that you can improve your mobile app's experience beyond just smartphones with touch input. Keep checking this blog post for updates. We'll be adding links and more throughout the week. Follow Android Developers on X and Android by Google on LinkedIn to hear even more about ways to adapt your app, and send in your questions with #AskAndroid.

14 Oct 2024 4:00pm GMT

08 Oct 2024

feedAndroid Developers Blog

Introducing Ink API, a new Jetpack library for stylus apps

Posted by Chris Assigbe - Developer Relations Engineer and Tom Buckley - Product Manager

With stylus input, Android apps on phones, foldables, tablets, and Chromebooks become even more powerful tools for productivity and creativity. While there's already a lot to think about when designing for large screens - see our full guidance and inspiration gallery - styluses are especially impactful, transforming these devices into a digital notebook or sketchbook. Users expect stylus experiences to feel as fluid and natural as writing on paper, which is why Android previously added APIs to reduce inking latency to as low as 4ms, making it virtually imperceptible. However, latency is just one aspect of an inking experience - developers currently need to generate stroke shapes from stylus input, render those strokes quickly, and efficiently run geometric queries over strokes for tools like selection and erasing. These capabilities can require significant investment in geometry and graphics just to get started.

Today, we're excited to share Ink API, an alpha Jetpack library that makes it easy to create, render, and manipulate beautiful ink strokes, enabling developers to build amazing features on top of these APIs. Ink API builds upon the Android framework's foundation of low latency and prediction, providing you with a powerful and intuitive toolkit for integrating rich inking features into your apps.

moving image of a stylus writing with Ink API on a Samsung Tab S8, 4ms showing end-to-end latency
Writing with Ink API on a Samsung Tab S8, 4ms end-to-end latency


What is Ink API?

Ink API is a comprehensive stylus input library that empowers you to quickly create innovative and expressive inking experiences. It offers a modular architecture rather than a one-size-fits-all canvas, so you can tailor Ink API to your app's stack and needs. The modules encompass key functionalities like:

  • Strokes module: Represents the ink input and its visual representation.
  • Geometry module: Supports manipulating and analyzing strokes, facilitating features like erasing, and selecting strokes.
  • Brush module: Provides a declarative way to define the visual style of strokes, including color, size, and the type of tool to draw with.
  • Rendering module: Efficiently displays ink strokes on the screen, allowing them to be combined with Jetpack Compose or Android Views.
  • Live Authoring module: Handles real-time inking input to create smooth strokes with the lowest latency a device can provide.

Ink API is compatible with devices running Android 5.0 (API level 21) or later, and offers benefits on all of these devices. It can also take advantage of latency improvements in Android 10 (API 29) and improved rendering effects and performance in Android 14 (API 34).
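
If you want to try the alpha right away, here is a minimal Gradle (Kotlin DSL) sketch of pulling in the modules described above. The artifact coordinates and version below are assumptions based on this alpha announcement, so double-check them against the official androidx.ink release notes:

// build.gradle.kts (module level) - coordinates and version are assumptions, verify before use
dependencies {
    implementation("androidx.ink:ink-authoring:1.0.0-alpha01")  // low-latency live inking
    implementation("androidx.ink:ink-brush:1.0.0-alpha01")      // brush and stroke styling
    implementation("androidx.ink:ink-geometry:1.0.0-alpha01")   // erasing and selection queries
    implementation("androidx.ink:ink-rendering:1.0.0-alpha01")  // drawing strokes on screen
    implementation("androidx.ink:ink-strokes:1.0.0-alpha01")    // the stroke data model
}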

Why choose Ink API?

Ink API provides an out-of-the-box implementation for basic inking tasks so you can create a unique drawing experience for your own app. Ink API offers several advantages over a fully custom implementation:

  • Ease of Use: Ink API abstracts away the complexities of graphics and geometry, allowing you to focus on your app's unique inking features.
  • Performance: Built-in low latency support and optimized rendering ensure a smooth and responsive inking experience.
  • Flexibility: The modular design allows you to pick and choose the components you need, tailoring the library to your specific requirements.

Ink API has already been adopted across many Google apps because of these advantages, including for markup in Docs and Circle-to-Search; and the underlying technology also powers markup in Photos, Drive, Meet, Keep, and Classroom. For Circle to Search, the Ink API modular design empowered the team to utilize only the components they needed. They leveraged the live authoring and brush capabilities of Ink API to render a beautiful stroke as users circle (to search). The team also built custom geometry tools tailored to their ML models. That's modularity at its finest.

moving image of a stylus writing with Ink API on a Samsung Tab S8, 4ms showing end-to-end latency


"Ink API was our first choice for Circle-to-Search (CtS). Utilizing their extensive documentation, integrating the Ink API was a breeze, allowing us to reach our first working prototype w/in just one week. Ink's custom brush texture and animation support allowed us to quickly iterate on the stroke design."

- Jordan Komoda, Software Engineer, Google

We have also designed Ink API with our Android app partners' feedback in mind to make sure it fits with their existing app architectures and requirements.

With Ink API, building a natural and fluid inking experience on Android is simpler than ever. Ink API lets you focus on what differentiates your experience rather than on the details of paths, meshes, and shaders. Whether you are exploring inking for note-taking, photo or document markup, interactive learning, or something completely different, we hope you'll give Ink API a try!

Get started with Ink API

Ready to dive into the well of Ink API? Check out the official developer guide and explore the API reference to start building your next-generation inking app. We're eager to see the innovative experiences you create!

Note: This alpha release is just the beginning for Ink API. We're committed to continuously improving the library, adding new features and functionalities based on your feedback. We have a roadmap to add native Compose support, with an initial focus on creating a ComposeStrokeRenderer, improving input interop, and providing simple data type converters. Stay tuned for updates and join us in shaping the future of inking on Android!

08 Oct 2024 4:00pm GMT

03 Oct 2024

feedAndroid Developers Blog

Gemini API in action: showcase of innovative Android apps

Posted by Thomas Ezan, Sr Developer Relation Engineer


With the advent of Generative AI, Android developers now have access to capabilities that were previously out of reach. For instance, you can now easily add image captioning to your app without any computer vision knowledge.

With the upcoming launch of the stable version of Vertex AI in Firebase in a few weeks (available in beta since Google I/O), you'll be able to effortlessly incorporate the capabilities of Gemini 1.5 Flash and Gemini 1.5 Pro into your app. The inference runs on Google's servers, making these capabilities accessible to any device with an internet connection.

Several Android developers have already begun leveraging this technology. Let's explore some of their creations.


Generate your meals for the week

The team behind Meal Planner, a meal planner and shopping list management app, is leveraging Gemini 1.5 Flash to create original meal plans. Based on the user's diet, the number of people they are cooking for, and any food allergies or intolerances, the app automatically creates a meal plan for the selected week.

For each dish, the model lists ingredients and quantities, taking into account the number of portions. It also provides instructions on how to prepare it. The app automatically generates a shopping list for the week based on the ingredient list for each meal.

moving image of Meal Planner app user experience


To enable reliable processing of the model's response and to integrate it in the app, the team leveraged Gemini's JSON mode. They specified responseMimeType = "application/json" in the model configuration and defined the expected JSON response schema in the prompt (see API documentation).
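
As a rough sketch of what that configuration looks like with the Vertex AI in Firebase Kotlin SDK (model name illustrative, imports omitted as in the other snippets in this post):

val model = Firebase.vertexAI.generativeModel(
    modelName = "gemini-1.5-flash",
    generationConfig = generationConfig {
        // Ask the model to return well-formed JSON instead of free-form text
        responseMimeType = "application/json"
    }
)

The expected response schema is then described in the prompt itself, just as the Meal Planner team did.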

Following the launch of the meal generation feature, Meal Planner received overwhelmingly positive feedback. The feature simplified meal planning for users with dietary restrictions and helped reduce food waste. In the months after its introduction, Meal Planner experienced a 17% surge in premium users.


Journaling while chatting with Leo

A few months ago, the team behind the journal app Life wanted to provide an innovative way to let their users log entries. They created "Leo", an AI diary assistant that chats with users and converts the conversation into a journal entry.

To modify the behavior of the model and the tone of its responses, the team used system instructions to define the chatbot persona. This allows the user to set the behavior and tone of the assistant: pick "Professional and formal" and the model will keep the conversation strict; select "Friendly and cheerful" and it will lighten up the dialogue with lots of emojis!
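
As a minimal sketch of how such a persona can be wired up with system instructions (the persona text here is purely illustrative, not Life's actual prompt; imports omitted):

// The system instruction is set once at initialization and persists across chat turns
val leoModel = Firebase.vertexAI.generativeModel(
    modelName = "gemini-1.5-flash",
    systemInstruction = content {
        text("You are Leo, a friendly and cheerful diary assistant. Keep the tone light and use plenty of emojis.")
    }
)
val chat = leoModel.startChat()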

moving image of Leo app user experience


The team saw an increase in user engagement following the launch of the feature.

And if you want to know more about how the Life developers used the Gemini API in their app, we had a great conversation with Jomin from the team. This conversation is part of a new Android podcast series called Android Build Time, which you can also watch on YouTube.


Create a nickname on the fly

The HiiKER app provides offline hiking maps. The app also fosters a community by letting users rate trails and leave comments. But users signing up don't always add a username to their profile. To avoid lowering the conversion rate by making username selection mandatory at signup, the team decided to use the Gemini API to suggest unique usernames based on the user's country or area.

moving image of HiiKER app user experience


To generate original usernames, the team set a high temperature value and played with the top-K and top-P values to increase the creativity of the model.
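
As a rough sketch of that kind of tuning with the Vertex AI in Firebase Kotlin SDK (the exact values are illustrative, imports omitted):

// Higher temperature plus wider top-K / top-P sampling encourages more creative names
val usernameModel = Firebase.vertexAI.generativeModel(
    modelName = "gemini-1.5-flash",
    generationConfig = generationConfig {
        temperature = 1.2f   // more randomness than the default
        topK = 40            // consider a wider pool of candidate tokens
        topP = 0.95f         // allow lower-probability tokens into the mix
    }
)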

This AI-assisted feature led to a significant lift in the percentage of users with "complete" profiles, contributing to a positive impact on engagement and retention.

It's time to build!

Generative AI is still a very new space and we are just starting to get easy access to these capabilities on Android. Whether it's enabling advanced personalization, creating delightful interactive experiences, or simplifying signup, you might have unique challenges that you are trying to solve as an Android app developer. It is a great time to start looking at these challenges as opportunities that generative AI can help you tackle!


You can learn more about the advanced features of the Gemini Cloud models, find an introduction to generative AI for Android developers, and get started with Vertex AI in Firebase documentation.

To learn more about AI on Android, check out other resources we have available during AI on Android Spotlight Week.

Use #AndroidAI hashtag to share your creations or feedback on social media, and join us at the forefront of the AI revolution!

03 Oct 2024 6:00pm GMT

Advanced capabilities of the Gemini API for Android developers

Posted by Thomas Ezan, Sr Developer Relation Engineer


Thousands of developers across the globe are harnessing the power of the Gemini 1.5 Pro and Gemini 1.5 Flash models to infuse advanced generative AI features into their applications. Android developers are no exception, and with the upcoming launch of the stable version of Vertex AI in Firebase in a few weeks (available in beta since Google I/O), it's the perfect time to explore how your app can benefit from it. We just published a codelab to help you get started.

Let's deep dive into some advanced capabilities of the Gemini API that go beyond simple text prompting and discover the exciting use cases they can unlock in your Android app.

Shaping AI behavior with system instructions

System instructions serve as a "preamble" that you incorporate before the user prompt. This enables shaping the model's behavior to align with your specific requirements and scenarios. You set the instructions when you initialize the model, and then those instructions persist through all interactions with the model, across multiple user and model turns.

For example, you can use system instructions to:

  • Define a persona or role for a chatbot (e.g., "explain like I am 5")
  • Specify the output format of the response (e.g., Markdown, YAML, etc.)
  • Set the output style and tone (e.g., verbosity, formality, etc.)
  • Define the goals or rules for the task (e.g., "return a code snippet without further explanation")
  • Provide additional context for the prompt (e.g., a knowledge cutoff date)

To use system instructions in your Android app, pass them as a parameter when you initialize the model:

val generativeModel = Firebase.vertexAI.generativeModel(
  modelName = "gemini-1.5-flash",
  ...
  systemInstruction = 
    content { text("You are a knowledgeable tutor. Answer the questions using the socratic tutoring method.") }
)

You can learn more about system instruction in the Vertex AI in Firebase documentation.

You can also easily test your prompt with different system instructions in Vertex AI Studio, the Google Cloud console tool for rapidly prototyping and testing prompts with Gemini models.


test system instructions with your prompts in Vertex AI Studio
Vertex AI Studio lets you test system instructions with your prompts


When you are ready to go to production, it is recommended to target a specific version of the model (e.g., gemini-1.5-flash-002). As new model versions are released and previous ones are deprecated, you can use Firebase Remote Config to update the Gemini model version without releasing a new version of your app.
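
A minimal sketch of that pattern (the Remote Config parameter key and default value are illustrative, imports omitted):

// Store the model name in Remote Config so it can change without an app release
val remoteConfig = Firebase.remoteConfig
remoteConfig.setDefaultsAsync(mapOf("gemini_model_name" to "gemini-1.5-flash-002"))

remoteConfig.fetchAndActivate().addOnCompleteListener {
    val generativeModel = Firebase.vertexAI.generativeModel(
        modelName = remoteConfig.getString("gemini_model_name")
    )
    // use generativeModel as usual
}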

Beyond chatbots: leveraging generative AI for advanced use cases

While chatbots are a popular application of generative AI, the capabilities of the Gemini API go beyond conversational interfaces and you can integrate multimodal GenAI-enabled features into various aspects of your Android app.

Many tasks that previously required human intervention (such as analyzing text, image or video content, synthesizing data into a human readable format, engaging in a creative process to generate new content, etc… ) can be potentially automated using GenAI.

Gemini JSON support

Android apps don't interface well with natural language outputs. Conversely, JSON is ubiquitous in Android development, and provides a more structured way for Android apps to consume input. However, ensuring proper key/value formatting when working with generative models can be challenging.

With the general availability of Vertex AI in Firebase, we've implemented solutions to streamline JSON generation with proper key/value formatting:

Response MIME type identifier

If you have tried generating JSON with a generative AI model, it's likely you have found yourself with unwanted extra text that makes the JSON parsing more challenging.

e.g:

Sure, here is your JSON:
```
{
   "someKey": "someValue",
   ...
}
```

When using Gemini 1.5 Pro or Gemini 1.5 Flash, in the generation configuration, you can explicitly specify the model's response mime/type as application/json and instruct the model to generate well-structured JSON output.

val generativeModel = Firebase.vertexAI.generativeModel(
  modelName = "gemini-1.5-flash",
  generationConfig = generationConfig {
     responseMimeType = "application/json"
  }
)

Review the API reference for more details.

Soon, the Android SDK for Vertex AI in Firebase will enable you to define the JSON schema expected in the response.


Multimodal capabilities

Both Gemini 1.5 Flash and Gemini 1.5 Pro are multimodal models. This means that they can process input in multiple formats, including text, images, audio, and video. In addition, they both have long context windows, capable of handling up to 1 million tokens for Gemini 1.5 Flash and 2 million tokens for Gemini 1.5 Pro.

These features open doors to innovative functionalities that were previously inaccessible, such as automatically generating descriptive captions for images, identifying topics in a conversation and generating chapters from an audio file, or describing the scenes and actions in a video file.

You can pass an image to the model as shown in this example:

val contentResolver = applicationContext.contentResolver
contentResolver.openInputStream(imageUri).use { stream ->
  stream?.let {
     val bitmap = BitmapFactory.decodeStream(stream)

    // Provide a prompt that includes the image specified above and text
    val prompt = content {
       image(bitmap)
       text("How many people are on this picture?")
    }
  }
  val response = generativeModel.generateContent(prompt)
}

You can also pass a video to the model:

val contentResolver = applicationContext.contentResolver
contentResolver.openInputStream(videoUri).use { stream ->
  stream?.let {
    val bytes = stream.readBytes()

    // Provide a prompt that includes the video specified above and text
    val prompt = content {
        blob("video/mp4", bytes)
        text("What is in the video?")
    }

    val fullResponse = generativeModel.generateContent(prompt)
  }
}

You can learn more about multimodal prompting in the VertexAI for Firebase documentation.

Note: This method enables you to pass files up to 20 MB. For larger files, use Cloud Storage for Firebase and include the file's URL in your multimodal request. Read the documentation for more information.


Function calling: Extending the model's capabilities

Function calling enables you to extend the capabilities of generative models. For example, you can enable the model to retrieve information from your SQL database and feed it back into the context of the prompt. You can also let the model trigger actions by calling functions in your app's source code. In essence, function calling bridges the gap between the Gemini models and your Kotlin code.

Take the example of a food delivery application that is interested in implementing a conversational interface with Gemini 1.5 Flash. Assume that this application has a getFoodOrder(cuisine: String) function that returns the list of orders from the user for a specific type of cuisine:

fun getFoodOrder(cuisine: String): JSONObject {
    // implementation…
}

Note that, to be usable by the model, the function needs to return its response in the form of a JSONObject.

To make the response available to Gemini 1.5 Flash, create a definition of your function that the model will be able to understand using defineFunction:

val getOrderListFunction = defineFunction(
    name = "getOrderList",
    description = "Get the list of food orders from the user for a defined type of cuisine.",
    Schema.str(name = "cuisineType", description = "the type of cuisine for the order")
) { cuisineType ->
    getFoodOrder(cuisineType)
}

Then, when you instantiate the model, share this function definition with the model using the tools parameter:

val generativeModel = Firebase.vertexAI.generativeModel(
    modelName = "gemini-1.5-flash",
    ...
    tools = listOf(Tool(listOf(getOrderListFunction)))
)

Finally, when you get a response from the model, check in the response if the model is actually requesting to execute the function:

// Send the message to the generative model
var response = chat.sendMessage(prompt)

// Check if the model responded with a function call
response.functionCall?.let { functionCall ->
  // Try to retrieve the stored lambda from the model's tools and
  // throw an exception if the returned function was not declared
  val matchedFunction = generativeModel.tools?.flatMap { it.functionDeclarations }
      ?.firstOrNull { it.name == functionCall.name }
      ?: throw InvalidStateException("Function not found: ${functionCall.name}")
  
  // Call the lambda retrieved above
  val apiResponse: JSONObject = matchedFunction.execute(functionCall)

  // Send the API response back to the generative model
  // so that it generates a text response that can be displayed to the user
  response = chat.sendMessage(
    content(role = "function") {
        part(FunctionResponsePart(functionCall.name, apiResponse))
    }
  )
}

// If the model responds with text, show it in the UI
response.text?.let { modelResponse ->
    println(modelResponse)
}


To summarize, you'll provide the functions (or tools) to the model at initialization:

A flow diagram shows a green box labeled 'Generative Model' connected to a list of model parameters and a list of tools. The parameters include 'gemini-1.5-flash', 'api_key', and 'configuration', while the tools are 'getOrderList()', 'getDate()', and 'placeOrder()'.


And when appropriate, the model will request to execute the appropriate function and provide the results:

A flow diagram illustrating the interaction between an Android app and a 'Generative Model'. The app sends 'getDate()' and 'getOrderList()' requests.


You can read more about function calling in the VertexAI for Firebase documentation.

Unlocking the potential of the Gemini API in your app

The Gemini API offers a treasure trove of advanced features that empower Android developers to craft truly innovative and engaging applications. By going beyond basic text prompts and exploring the capabilities highlighted in this blog post, you can create AI-powered experiences that delight your users and set your app apart in the competitive Android landscape.

Read more about how some Android apps are already starting to leverage the Gemini API.


To learn more about AI on Android, check out other resources we have available during AI on Android Spotlight Week.

Use #AndroidAI hashtag to share your creations or feedback on social media, and join us at the forefront of the AI revolution!


The code snippets in this blog post have the following license:

// Copyright 2024 Google LLC.
// SPDX-License-Identifier: Apache-2.0

03 Oct 2024 5:59pm GMT

02 Oct 2024

feedAndroid Developers Blog

PyTorch machine learning models on Android

Posted by Paul Ruiz - Senior Developer Relations Engineer

Earlier this year we launched Google AI Edge, a suite of tools providing easy access to ready-to-use ML tasks, frameworks for building ML pipelines, and the ability to run popular LLMs and custom models, all on-device. For AI on Android Spotlight Week, the Google team is highlighting various ways that Android developers can use machine learning to help improve their applications.

In this post, we'll dive into Google AI Edge Torch, which enables you to convert PyTorch models to run locally on Android and other platforms, using the Google AI Edge LiteRT (formerly TensorFlow Lite) and MediaPipe Tasks libraries. For insights on other powerful tools, be sure to explore the rest of the AI on Android Spotlight Week content.

To make it easier to get started with Google AI Edge, we've provided samples on GitHub as an executable codelab. They demonstrate how to convert the MobileViT model for image classification (compatible with MediaPipe Tasks) and the DIS model for segmentation (compatible with LiteRT).

a red Android figurine is shown next to a black and white silhouette of the same figure, labeled 'Original Image' and 'PT Mask' respectively, demonstrating image segmentation.
DIS model output


This blog guides you through how to use the MobileViT model with MediaPipe Tasks. Keep in mind that the LiteRT runtime provides similar capabilities, enabling you to build custom pipelines and features.

Convert MobileViT model for image classification compatible with MediaPipe Tasks

Once you've installed the necessary dependencies and utilities for your app, the first step is to retrieve the PyTorch model you wish to convert, along with any other MobileViT components you might need (such as an image processor for testing).

from transformers import MobileViTImageProcessor, MobileViTForImageClassification

hf_model_path = 'apple/mobilevit-small'
processor = MobileViTImageProcessor.from_pretrained(hf_model_path)
pt_model = MobileViTForImageClassification.from_pretrained(hf_model_path)

Since the end result of this tutorial should work with MediaPipe Tasks, take an extra step to match the expected input and output shapes for image classification to what is used by the MediaPipe image classification Task.

class HF2MP_ImageClassificationModelWrapper(nn.Module):

  def __init__(self, hf_image_classification_model, hf_processor):
    super().__init__()
    self.model = hf_image_classification_model
    if hf_processor.do_rescale:
      self.rescale_factor = hf_processor.rescale_factor
    else:
      self.rescale_factor = 1.0

  def forward(self, image: torch.Tensor):
    # BHWC -> BCHW.
    image = image.permute(0, 3, 1, 2)
    # RGB -> BGR.
    image = image.flip(dims=(1,))
    # Scale [0, 255] -> [0, 1].
    image = image * self.rescale_factor
    logits = self.model(pixel_values=image).logits  # [B, 1000] float32.
    # Softmax is required for MediaPipe classification model.
    logits = torch.nn.functional.softmax(logits, dim=-1)

    return logits


hf_model_path = 'apple/mobilevit-small'
hf_mobile_vit_processor = MobileViTImageProcessor.from_pretrained(hf_model_path)
hf_mobile_vit_model = MobileViTForImageClassification.from_pretrained(hf_model_path)
wrapped_pt_model = HF2MP_ImageClassificationModelWrapper(
    hf_mobile_vit_model, hf_mobile_vit_processor).eval()


Whether you plan to use the converted MobileViT model with MediaPipe Tasks or LiteRT, the next step is to convert the model to the .tflite format.

First, match the input shape. In this example, the input shape is 1, 256, 256, 3 for a 256x256 pixel three-channel RGB image.

Then, call AI Edge Torch's convert function to complete the conversion process.

import ai_edge_torch

sample_args = (torch.rand((1, 256, 256, 3)),)
edge_model = ai_edge_torch.convert(wrapped_pt_model, sample_args)


After converting the model, you can further refine it by incorporating metadata for the image classification labels. MediaPipe Tasks will utilize this metadata to display or return pertinent information after classification.

from mediapipe.tasks.python.metadata.metadata_writers import image_classifier
from mediapipe.tasks.python.metadata.metadata_writers import metadata_writer
from mediapipe.tasks.python.vision.image_classifier import ImageClassifier
from pathlib import Path

flatbuffer_file = Path('hf_mobile_vit_mp_image_classification_raw.tflite')
edge_model.export(flatbuffer_file)
tflite_model_buffer = flatbuffer_file.read_bytes()

# Extract the image classification labels from the HF model for later integration into the TFLite model.
labels = list(hf_mobile_vit_model.config.id2label.values())

writer = image_classifier.MetadataWriter.create(
    tflite_model_buffer,
    input_norm_mean=[0.0], #  Normalization is not needed for this model.
    input_norm_std=[1.0],
    labels=metadata_writer.Labels().add(labels),
)
tflite_model_buffer, _ = writer.populate()


With all of that completed, it's time to integrate your model into an Android app. If you're following the official Colab notebook, this involves saving the model locally. For an example of image classification with MediaPipe Tasks, explore the GitHub repository. You can find more information in the official Google AI Edge documentation.
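
To give a flavor of the Android side, here is a minimal Kotlin sketch that runs the converted model with the MediaPipe Tasks image classifier. The asset file name is whatever you saved your converted model as (the one below is illustrative):

import android.content.Context
import android.graphics.Bitmap
import com.google.mediapipe.framework.image.BitmapImageBuilder
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.vision.imageclassifier.ImageClassifier

// Classify a Bitmap with the converted MobileViT model bundled in the app's assets.
fun classifyWithMobileViT(context: Context, bitmap: Bitmap) {
    val options = ImageClassifier.ImageClassifierOptions.builder()
        .setBaseOptions(
            BaseOptions.builder()
                .setModelAssetPath("hf_mobile_vit_mp_image_classification.tflite") // assumed asset name
                .build()
        )
        .setMaxResults(3)
        .build()

    val classifier = ImageClassifier.createFromOptions(context, options)
    val result = classifier.classify(BitmapImageBuilder(bitmap).build())

    // The metadata added during conversion lets MediaPipe return human-readable labels.
    result.classificationResult().classifications().firstOrNull()?.categories()?.forEach { category ->
        println("${category.categoryName()} : ${category.score()}")
    }
    classifier.close()
}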

moving image of Newly converted ViT model with MediaPipe Tasks
Newly converted ViT model with MediaPipe Tasks


After understanding how to convert a simple image classification model, you can use the same techniques to adapt various PyTorch models for Google AI Edge LiteRT or MediaPipe Tasks tooling on Android.

For further model optimization, consider methods like quantizing during conversion. Check out the GitHub example to learn more about how to convert a PyTorch image segmentation model to LiteRT and quantize it.

What's Next

To keep up to date on Google AI Edge developments, look for announcements on the Google for Developers YouTube channel and blog.

We look forward to hearing about how you're using these features in your projects. Use #AndroidAI hashtag to share your feedback or what you've built in social media and check out other content in AI on Android Spotlight Week!

02 Oct 2024 7:00pm GMT

How to bring your AI Model to Android devices

Posted by Kateryna Semenova - Senior Developer Relations Engineer and Mark Sherwood - Senior Product Manager


During AI on Android Spotlight Week, we're diving into how you can bring your own AI model to Android-powered devices such as phones, tablets, and beyond. By leveraging the tools and technologies available from Google and other sources, you can run sophisticated AI models directly on these devices, opening up exciting possibilities for better performance, privacy, and usability.

Understanding on-device AI

On-device AI involves deploying and executing machine learning or generative AI models directly on hardware devices, instead of relying on cloud-based servers. This approach offers several advantages, such as reduced latency, enhanced privacy, cost savings, and less dependence on internet connectivity.

For generative text use cases, explore Gemini Nano, which is now available in experimental access through its SDK. For many on-device AI use cases, you might want to package your own models in your app. Today we will walk through how to do so on Android.

Key resources for on-device AI

The Google AI Edge platform provides a comprehensive ecosystem for building and deploying AI models on edge devices. It supports various frameworks and tools, enabling developers to integrate AI capabilities seamlessly into their applications. The Google AI Edge platform consists of:

  • MediaPipe Tasks - Cross-platform low-code APIs to tackle common generative AI, vision, text, and audio tasks
  • LiteRT (formerly known as TensorFlow Lite) - Lightweight runtime for deploying custom machine learning models on Android
  • MediaPipe Framework - Pipeline framework for chaining multiple ML models along with pre and post processing logic


Google AI Edge Logo


How to build custom AI features on Android

1. Define your use case: Before diving into technical details, it's crucial to clearly define what you want your AI feature to achieve. Whether you're aiming for image classification, natural language processing, or another application, having a well-defined goal will guide your development process.

2. Choose the right tools and frameworks: Depending on your use case, you might be able to use an out of the box solution or you might need to create or source your own model. Look through MediaPipe Tasks for common solutions such as gesture recognition, image segmentation or face landmark detection. If you find a solution that aligns with your needs, you can proceed directly to the testing and deployment step.


Google AI Edge Logo


If you need to create or source a custom model for your use case, you will need an on-device ML framework such as LiteRT (formerly TensorFlow Lite). LiteRT is designed specifically for mobile and edge devices and provides a lightweight runtime for deploying machine learning models. Simply follow these substeps:

a. Develop and train your model: Develop your AI model using your chosen framework. Training can be performed on a powerful machine or cloud environment, but the model should be optimized for deployment on a device. Techniques like quantization and pruning can help reduce the model size and improve inference speed. Model Explorer can help you understand and explore your model as you're working with it.

b. Convert and optimize the model: Once your model is trained, convert it to a format suitable for on-device deployment. LiteRT, for example, requires conversion to its specific format. Optimization tools can help reduce the model's footprint and enhance performance. AI Edge Torch allows you to convert PyTorch models to run locally on Android and other platforms, using Google AI Edge LiteRT and MediaPipe Tasks libraries.

c. Accelerate your model: You can speed up model inference on Android by using the GPU and NPU. LiteRT's GPU delegate lets you run your model on the GPU today (see the sketch after this list). We're working hard on building the next generation of GPU and NPU delegates that will make your models run even faster, and enable more models to run on GPU and NPU. We'd like to invite you to participate in our early access program to try out this new GPU and NPU infrastructure. We will select participants on a rolling basis, so don't wait to reach out.

3. Test and deploy: To ensure that your model delivers the expected performance across various devices, rigorous testing is crucial. Deploy your app to users after completing the testing phase, offering them a seamless and efficient AI experience. We're working on bringing the benefits of Google Play and Android App Bundles to delivering custom ML models for on-device AI features. Play for On-device AI takes the complexity out of launching, targeting, versioning, downloading, and updating on-device models so that you can offer your users a better user experience without compromising your app's size and at no additional cost. Complete this form to express interest in joining the Play for On-device AI early access program.
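
To make the acceleration sub-step above a bit more concrete, here is a minimal Kotlin sketch of enabling LiteRT's GPU delegate through the Interpreter API. The package names still use org.tensorflow.lite, since LiteRT is the rebranded TensorFlow Lite runtime, and the model file name is illustrative:

import android.content.Context
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate

// Memory-map a model bundled in assets so the interpreter can read it directly.
fun loadModelFile(context: Context, assetName: String): MappedByteBuffer {
    context.assets.openFd(assetName).use { fd ->
        FileInputStream(fd.fileDescriptor).channel.use { channel ->
            return channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
        }
    }
}

// Run inference on the GPU when a compatible delegate is available.
fun createGpuInterpreter(context: Context): Interpreter {
    val gpuDelegate = GpuDelegate()
    val options = Interpreter.Options().addDelegate(gpuDelegate)
    return Interpreter(loadModelFile(context, "my_model.tflite"), options)
}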

Build trust in AI through privacy and transparency

With the growing role of AI in everyday life, ensuring models run as intended on devices is crucial. We're emphasizing a "zero trust" approach, providing developers with tools to verify device integrity and giving users control over their data. In the zero trust approach, developers need the ability to make informed decisions about the device's trustworthiness.

The Play Integrity API is recommended for developers looking to verify their app, server requests, and the device environment (and, soon, the recency of security updates on the device). You can call the API at important moments before your app's backend decides to download and run your models. You can also consider turning on integrity checks for installing your app to reduce your app's distribution to unknown and untrusted environments.

Play Integrity API makes use of Android Platform Key Attestation to verify hardware components and generate integrity verdicts across the fleet, eliminating the need for most developers to directly integrate different attestation tools and reducing device ecosystem complexity. Developers can use one or both of these tools to assess device security and software integrity before deciding whether to trust a device to run AI models.
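
As an illustration of that first check, here is a minimal Kotlin sketch of requesting a classic Play Integrity verdict before deciding to fetch an on-device model. The nonce is generated by your backend, and sendTokenToBackend is a hypothetical helper standing in for your own server call:

import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

// Request an integrity token and hand it to the backend, which verifies the verdict
// before deciding whether to serve the on-device model to this device.
fun checkIntegrityBeforeModelDownload(context: Context, nonce: String) {
    val integrityManager = IntegrityManagerFactory.create(context)
    integrityManager.requestIntegrityToken(
        IntegrityTokenRequest.builder()
            .setNonce(nonce) // nonce generated server-side for this request
            .build()
    ).addOnSuccessListener { response ->
        sendTokenToBackend(response.token()) // hypothetical helper: backend decrypts and verifies the verdict
    }.addOnFailureListener {
        // Treat failures conservatively, e.g. fall back to a server-hosted model.
    }
}

// Hypothetical helper; in a real app this would call your backend API.
fun sendTokenToBackend(token: String) { /* ... */ }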

Conclusion

Bringing your own AI model to a device involves several steps, from defining your use case to deploying and testing the model. With resources like Google AI Edge, developers have access to powerful tools and insights to make this process smoother and more effective. As on-device AI continues to evolve, leveraging these resources will enable you to create cutting-edge applications that offer enhanced performance, privacy, and user experience. We are currently seeking early access partners to try out some of our latest tools and APIs at Google AI Edge. Simply fill in this form to connect and explore how we can work together to make your vision a reality.

Dive into these resources and start exploring the potential of on-device AI: your next big innovation could be just a model away!

Use #AndroidAI hashtag to share your feedback or what you've built on social media and catch up with the rest of the updates being shared during Spotlight Week: AI on Android.

02 Oct 2024 4:00pm GMT