16 Sep 2024
TalkAndroid
Android 15 Could Sync Notifications Across Your Android Devices
If you've got more than one Android device, you won't need to worry about dismissed notifications persisting on your other device.
16 Sep 2024 1:22pm GMT
Car Dealership Tycoon Codes
Find all the latest Car Dealership Tycoon codes right here! Read on for more!
16 Sep 2024 6:17am GMT
Base Battles Codes
Here you will find all of the latest Base Battles Codes for Roblox! Read on for more!
16 Sep 2024 6:11am GMT
The Resistance Tycoon Codes
Here you will find all the latest The Resistance Tycoon Codes! Tap the link for more!
16 Sep 2024 6:09am GMT
King Legacy Codes
Find all the latest King Legacy codes right here!
16 Sep 2024 6:08am GMT
Anime Champions Simulator Codes
Find the latest Anime Champions Simulator codes here!
16 Sep 2024 6:06am GMT
All Star Tower Defense Codes
Find the latest All Star Tower Defense codes here!
16 Sep 2024 6:03am GMT
Solo Leveling: Arise – Codes
Find all the latest Solo Leveling: Arise codes here! Keep reading for more.
16 Sep 2024 3:48am GMT
15 Sep 2024
TalkAndroid
Board Kings Free Rolls – Updated Every Day!
Run out of rolls for Board Kings? Find links for free rolls right here, updated daily!
15 Sep 2024 4:36pm GMT
Coin Tales Free Spins – Updated Every Day!
Tired of running out of Coin Tales Free Spins? We update our links daily, so you won't have that problem again!
15 Sep 2024 4:34pm GMT
Avatar World Codes – September 2024 – Updated Daily
Find all the latest Avatar World Codes right here in this article! Read on for more!
15 Sep 2024 4:33pm GMT
Coin Master Free Spins & Coins Links
Find all the latest Coin Master free spins right here! We update daily, so be sure to check in daily!
15 Sep 2024 4:32pm GMT
Monopoly Go Events Schedule Today – Updated Daily
Current active events are Habitat Heroes Event, Reef Rush, and Special Event - Prize Drop.
15 Sep 2024 4:30pm GMT
Monopoly Go – Free Dice Links Today (Updated Daily)
If you keep on running out of dice, we have just the solution! Find all the latest Monopoly Go free dice links right here!
15 Sep 2024 4:24pm GMT
Family Island Free Energy Links (Updated Daily)
Tired of running out of energy on Family Island? We have all the latest Family Island Free Energy links right here, and we update these daily!
15 Sep 2024 4:23pm GMT
Crazy Fox Free Spins & Coins (Updated Daily)
If you need free coins and spins in Crazy Fox, look no further! We update our links daily to bring you the newest working links!
15 Sep 2024 4:20pm GMT
12 Sep 2024
Android Developers Blog
Developer Preview: Desktop windowing on Android Tablets
Posted by Francesco Romano - Developer Relations Engineer on Android, and Fahd Imtiaz - Product Manager, Android Developer
To empower tablet users to get more done, we're enhancing freeform windowing, allowing them to run multiple apps simultaneously and resize windows for optimal multitasking. Today, we're excited to share that desktop windowing on Android tablets is available in developer preview.
For app developers, the concept of Android apps running in freeform windows has already existed with solutions like Samsung DeX and ChromeOS. Updating your apps to support adaptive layouts, more robust multitasking, and adaptive inputs will ensure your apps work well on large screens across the Android ecosystem.
Let's explore how to optimize your apps for desktop windowing and deliver the optimal experience to users.
What is desktop windowing?
Desktop windowing allows users to run multiple apps simultaneously and resize app windows, offering a more flexible and desktop-like experience. This, along with a refreshed System UI and new APIs, allows users to be even more productive and creates a more seamless, desktop-like experience on tablets.
In Figure 1, you can see the anatomy of the screen with desktop windowing enabled. Things to make note of:
- Users can run multiple apps side-by-side, simultaneously
- The taskbar is fixed and shows the running apps; users can pin apps for quick access
- A new header bar with window controls sits at the top of each window, which apps can customize
How can users invoke desktop windowing?
By default, apps open in full screen on Android tablets. To run an app as a desktop window on Pixel Tablet, press and hold the window handle at the top center of the screen and drag it within the UI, as seen in Figure 2.
Once you are in the desktop space, all future apps will be launched as desktop windows as well.
You can also invoke desktop windowing from the menu that shows up below the window handle when you tap/click on it or use the keyboard shortcut meta key (Windows, Command, or Search) + Ctrl + Down.
You can exit desktop windowing and display an app as full screen by closing all active windows or by grabbing the window handle at the top of the window and dragging the app to the top of the screen. You can also use the meta + H keyboard shortcut to run apps as full screen again.
To return to the desktop, move a full screen app to the desktop space by using the methods mentioned above, or simply tap on the desktop space tile in the Recents screen.
What does this mean for app developers?
Desktop windowing on Android tablets creates new opportunities for your apps, particularly around productivity and multitasking. The possibility to resize and reposition multiple app windows allows users to easily compare documents, reference information while composing emails, and multitask efficiently.
By optimizing for desktop windowing, you can deliver unique user experiences to match the growing demand for tablet-based productivity. At the same time, you'll enhance the overall user experience on tablets, making your apps more versatile and adaptable to different scenarios.
If your app already meets the Tier 2 (Large Screens optimized) quality bar in the Large screen app quality guidelines, then there is minimal additional optimization required! If your app has not been optimized for large screens yet, updating it according to the Large screen app quality guidelines becomes even more crucial in the context of desktop windowing. Let's see why:
- Freeform resizing enables users to resize apps to their preference for maximized productivity. Considering this, developers should note:
- Apps with locked orientation are freely resizable. Even if an activity is locked to portrait orientation, users can still resize the app into a landscape window. In a future update, apps declared as non-resizable will have their UI scaled while keeping the same aspect ratio.
- Adaptive layouts: By adapting your UI, apps have an opportunity to effortlessly handle a wide range of window sizes, from compact to expanded screen layouts. In desktop windowing, apps can be resized down to a minimum size of 386dp x 352dp, so make sure to leverage window size classes to adjust your app's layout, content, and interactions to adapt to different window dimensions.
- State management: With freeform resizing, configuration changes happen each time the window resizes, so your app should either handle these configuration changes gracefully or make sure you are preserving the app state when the OS initiates the re-creation of the app. As a reminder, users can change the screen density while your app is running, so it's best to ensure that your app can handle screen density configuration changes as well.
Figure 3. Apps with locked orientation are freely resizable.
- Desktop windowing takes productivity on tablets to the next level with multiple apps running simultaneously. Similar to split screen, desktop windowing encourages users to have multiple windows open. Considering this, developers should note:
- Multitasking support: For enhanced productivity, users can have two or more apps open simultaneously, and they expect to easily share content between apps, so add support for drag and drop gestures. Also, ensure your app continues to function correctly even when not in focus, and if your app uses exclusive resources like camera or microphone, the app needs to handle resource loss gracefully when other apps acquire the resource.
- Multi-instance support: Users can run multiple instances of your app side-by-side; for example, a document editor application may allow users to start new documents while still being able to reference the already open documents. Apps can set this new Multi-instance property to declare that System UI should be shown for this app to allow it to be launched as multiple instances. Also note that in desktop windowing, new tasks open in a new window, so double-check the user journey if your app starts multiple tasks.
Figure 4. Start another instance of Chrome by dragging a tab out of the app window. Note: Images are examples and subject to change.
- With desktop windowing, input methods beyond touch and insets handling become even more important for a seamless user experience.
- More input methods (keyboard, mouse): Users are more likely to use your app with a variety of input methods like external keyboards, mice, and trackpads. Check that users can interact smoothly with your app using keyboard and mouse peripherals or through the emulator. Developers can add support for app shortcuts and publish them using the keyboard shortcuts API, which allows users to easily view the supported app shortcuts through a standardized surface on Android devices.
- Insets handling: All apps running in desktop windowing have a header bar, even in immersive mode. Ensure your app's content isn't obscured by it (see the sketch below). The new header bar is reported as a caption bar in Compose (androidx.compose.foundation:foundation-layout.WindowInsets.Companion.captionBar) and in Views (android.view.WindowInsets.Type.CAPTION_BAR), which is part of the system bars. API 35 also introduces a new appearance type to make the header bar transparent, allowing apps to draw custom content inside it.
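As a minimal, hedged sketch of one way to consume the caption bar inset in Compose (the composable name and layout are illustrative, not a prescribed pattern):

import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.WindowInsets
import androidx.compose.foundation.layout.captionBar
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.windowInsetsPadding
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier

@Composable
fun WindowContent() {
    Box(
        modifier = Modifier
            .fillMaxSize()
            // Pad content away from the header (caption) bar so it is never obscured,
            // including when the app runs as a desktop window in immersive mode.
            .windowInsetsPadding(WindowInsets.captionBar)
    ) {
        // App content goes here.
    }
}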
Get hands-on!
Today we're announcing a developer preview that gives you an early opportunity to experience and test desktop windowing. You can try it out on Pixel Tablet before it's released to AOSP more broadly. Update your Pixel Tablet to the latest Android 15 QPR1 Beta 2 release to try out desktop windowing. If you don't have a Pixel Tablet handy, use the Pixel Tablet emulator in Android Studio Preview and select the Android 15.0 (Google APIs Tablet) target. Once your device is set up, select the Enable freeform windows option in Developer options to explore the capabilities of desktop windowing and how your app behaves within this new environment.
By optimizing your apps for desktop windowing on Pixel Tablet, you are not only enhancing the app experience on that specific device but also future-proofing your apps for the broader Android ecosystem where freeform windowing will become prevalent. We're excited about the windows of opportunities enabled by desktop windowing, and we look forward to seeing how you adapt your apps for an enhanced user experience.
We're committed to improving the desktop windowing experience through future updates. Make sure to test your app and give us feedback. Stay tuned for more developer guides and resources!
12 Sep 2024 6:00pm GMT
11 Sep 2024
Android Developers Blog
Streamlining Android authentication: Credential Manager replaces legacy APIs
Posted by Diego Zavala and Jason Lucibello - Product Managers
In 2023, we introduced Credential Manager for Android. Credential Manager creates a unified experience for passkeys, Sign in with Google, and passwords, allowing seamless sign-in and eliminating the need for users to type in usernames or passwords.
To bring Credential Manager's benefits to more Android users and simplify developers' integration efforts, APIs that were previously deprecated will continue their phased removals and shutdowns. The affected APIs, their current status, and migration guides are listed below.
Developers with apps that still use these APIs should migrate to Credential Manager as soon as possible. Credential Manager supports all the authentication features of these legacy APIs and modernizes the experience with passkey support and streamlined user journeys. Developers looking to implement authorization functionality for Google Accounts, such as scoped access to a service like Google Drive, should continue to use the AuthorizationClient API.
Current status of APIs as of September 2024, update plans, and recommended migration guides:
- Status: Removed. Next update: Fully shut down in Q1 2025. Migration guide: Migrate from Smart Lock for Passwords to Credential Manager
- Status: Deprecated. Next update: Removed in H1 2025. Migration guide: Migrate passwords to Credential Manager
- Status: Deprecated. Next update: Removed in H1 2025. Migration guide: Migrate to the Sign in with Google button and Credential Manager
- API: One Tap Sign-in. Status: Deprecated. Next update: Removed in H2 2025. Migration guide: Authenticate users with Sign in with Google and Credential Manager
- Status: Deprecated. Next update: Removed in H2 2025. Migration guide: Migrate from legacy Google Sign-In to Credential Manager
What does each status mean?
- Deprecated: API is still in the SDK and functional, but will be removed and fully shut down in the future. Developers are recommended to migrate to Credential Manager at this time.
- Removed: API is still functional for users, but is no longer included in the SDK. New app versions compiled with the most recent SDK would fail in the build process if your code still utilizes the removed API. If your app still relies on any of these APIs, you should migrate to Credential Manager as soon as possible.
- Fully shut down: API is no longer functional, and it will fail when a request is sent by an app.
Credential Manager offers streamlined, more secure auth journeys
Credential Manager delivers multiple advantages to users and developers over the deprecated APIs.
Adopting Credential Manager is an intuitive upgrade for developers
For developers previously using our deprecated APIs, the transition to Credential Manager is smooth and intuitive. Developers like X (formerly known as Twitter) and Pinterest have already experienced the benefits of the upgrade. X shared with us that Credential Manager's unified approach made migration and maintenance effortless, while Pinterest described a smooth process for both users and engineers with Credential Manager.
Developers can use the following guides to make adopting Credential Manager even easier:
- Get started: Credential Manager Implementation Guide
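As a hedged illustration of the androidx.credentials entry point (the options your app requests will differ, and this sketch only handles saved passwords), a sign-in call might look like this:

import android.content.Context
import androidx.credentials.CredentialManager
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetPasswordOption
import androidx.credentials.PasswordCredential
import androidx.credentials.exceptions.GetCredentialException

suspend fun signIn(context: Context) {
    // Pass an Activity context so the system can show the account selector UI.
    val credentialManager = CredentialManager.create(context)
    // Ask for saved passwords; passkey and Sign in with Google options can be added alongside.
    val request = GetCredentialRequest(listOf(GetPasswordOption()))
    try {
        val response = credentialManager.getCredential(context, request)
        when (val credential = response.credential) {
            is PasswordCredential -> {
                // Validate credential.id / credential.password with your backend.
            }
            else -> {
                // Handle passkeys or federated credentials here.
            }
        }
    } catch (e: GetCredentialException) {
        // The user cancelled or no credential was available.
    }
}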
Share your feedback
Your input is very valuable to us as we continue to refine and improve our authentication services. Please keep providing us feedback on the issue tracker and share your experience integrating Credential Manager!
11 Sep 2024 4:00pm GMT
09 Sep 2024
Android Developers Blog
Jetpack Compose APIs for building adaptive layouts using Material guidance now stable
Posted by Alex Vanyo - Developer Relations Engineer
The 1.0 stable version of the Compose adaptive APIs with Material guidance is out, ready to be used in production. The library helps you build adaptive layouts that provide an optimized user experience on any window size.
The team at SAP Mobile Start were early adopters of the Compose adaptive APIs. It took their developers only five minutes to integrate the NavigationSuiteScaffold from the new Compose Material 3 adaptive library, rapidly adapting the app's navigation UI to different window sizes.
Each of the new components in the library, NavigationSuiteScaffold, ListDetailPaneScaffold, and SupportingPaneScaffold, is adaptive: based on the window size and posture, different components are displayed to the user, whichever is most appropriate in the current context. This helps build UI that adapts to a wide variety of window sizes instead of just stretching layouts.
For an overview of the components, check out the dedicated I/O session and our new documentation pages to get started.
In this post, we're going to take a more detailed look at the layering of the new library so you have a better understanding of how customisable it is, to fit a wide variety of use cases you might have.
Similar to Compose itself, the adaptive libraries are layered into multiple dependencies, so that you can choose the appropriate level of abstraction for your application. There are four new artifacts as part of the adaptive libraries (a Gradle sketch follows this list):
- For the core building blocks for building adaptive UI, including computing the window size class and the current posture, add androidx.compose.material3.adaptive:adaptive:1.0.0
- For implementing multi-pane layouts, add androidx.compose.material3.adaptive:adaptive-layout:1.0.0
- Contains the multi-pane scaffold layouts ListDetailPaneScaffold and SupportingPaneScaffold
- For standalone navigators for the multi-pane scaffold layouts, add androidx.compose.material3.adaptive:adaptive-navigation:1.0.0
- For implementing adaptive navigation UI, add androidx.compose.material3:material3-adaptive-navigation-suite:1.3.0
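For reference, declaring all four artifacts in a module's build.gradle.kts might look like the following sketch (versions are those listed above; pull in only the layers your app needs):

// Module-level build.gradle.kts - the adaptive artifacts listed above
dependencies {
    implementation("androidx.compose.material3.adaptive:adaptive:1.0.0")
    implementation("androidx.compose.material3.adaptive:adaptive-layout:1.0.0")
    implementation("androidx.compose.material3.adaptive:adaptive-navigation:1.0.0")
    implementation("androidx.compose.material3:material3-adaptive-navigation-suite:1.3.0")
}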
These libraries depend on one another in layers.
To explore this layering more, let's start with the highest level example with the most built-in functionality using a NavigableListDetailPaneScaffold from androidx.compose.material3.adaptive:adaptive-navigation:
val navigator = rememberListDetailPaneScaffoldNavigator<Any>()
NavigableListDetailPaneScaffold(
navigator = navigator,
listPane = {
// List pane
},
detailPane = {
// Detail pane
},
)
This snippet of code gives you all of our recommended adaptive behavior out of the box for a list-detail layout: determining how many panes to show based on the current window size, hiding and showing the correct pane when the window size changes depending on the previous state of the UI, and having the back button conditionally bring the user back to the list, depending on the window size and the current state.
This encapsulates a lot of behavior - and this might be all you need, and you don't need to go any deeper!
However, there may be reasons why you may want to tweak this behavior, or more directly manage the state by hoisting parts of it in a different way.
Remember, each layer builds upon the last. This snippet is at the outermost layer, and we can start unwrapping the layers to customize it where we need.
Let's drop down one layer and inline NavigableListDetailPaneScaffold. Behavior won't change at all with these direct inlinings, since we are just inlining the default behavior at each step:
(Fun fact: You can follow along with this directly in Android Studio and for any other component you desire. If you choose Refactor > Inline function, you can directly replace a component with its implementation. You can't delete the original function in the library of course.)
val navigator = rememberListDetailPaneScaffoldNavigator<Any>()
BackHandler(
enabled = navigator.canNavigateBack(BackNavigationBehavior.PopUntilContentChange)
) {
navigator.navigateBack(BackNavigationBehavior.PopUntilContentChange)
}
ListDetailPaneScaffold(
directive = navigator.scaffoldDirective,
value = navigator.scaffoldValue,
listPane = {
// List pane
},
detailPane = {
// Detail pane
},
)
With the first inlining, we see the BackHandler that NavigableListDetailPaneScaffold includes by default. If using ListDetailPaneScaffold directly, back handling is left up to the developer to include and hoist to the appropriate place.
This also reveals how the navigator provides two pieces of state to control the ListDetailPaneScaffold:
- directive -- how the panes should be arranged in the ListDetailPaneScaffold, and
- value -- the current state of the panes, as calculated from the directive and the current navigation state.
These are both controlled by the navigator, and the next unpeeling shows us the default arguments to the navigator for directive and the adapt strategy, which is used to calculate value:
val navigator = rememberListDetailPaneScaffoldNavigator<Any>(
scaffoldDirective = calculatePaneScaffoldDirective(currentWindowAdaptiveInfo()),
adaptStrategies = ListDetailPaneScaffoldDefaults.adaptStrategies(),
)
BackHandler(
enabled = navigator.canNavigateBack(BackNavigationBehavior.PopUntilContentChange)
) {
navigator.navigateBack(BackNavigationBehavior.PopUntilContentChange)
}
ListDetailPaneScaffold(
directive = navigator.scaffoldDirective,
value = navigator.scaffoldValue,
listPane = {
// List pane
},
detailPane = {
// Detail pane
},
)
The directive controls the behavior for how many panes to show and the pane spacing, based on currentWindowAdaptiveInfo, which contains the size and posture of the window.
This can be customized with a different directive, to show two panes side-by-side at a smaller medium width:
val navigator = rememberListDetailPaneScaffoldNavigator<Any>(
scaffoldDirective = calculatePaneScaffoldDirectiveWithTwoPanesOnMediumWidth(currentWindowAdaptiveInfo()),
adaptStrategies = ListDetailPaneScaffoldDefaults.adaptStrategies(),
)
By default, showing two panes at a medium width can result in UI that is too narrow, especially for complex content. However, this can be a good option to use the window space more optimally by showing two panes for less complex content.
The AdaptStrategy controls what happens to panes when there isn't enough space to show all of them. Right now, this always hides panes for which there isn't enough space.
The navigator uses this directive to drive its logic and, combined with the adapt strategy, to determine the scaffold value: the resulting target state for each of the panes.
The scaffold directive and the scaffold value are then passed to the ListDetailPaneScaffold, driving the behavior of the scaffold.
This layering allows hoisting the scaffold state away from the display of the scaffold itself. This layering also allows custom implementations for controlling how the scaffold works and for hoisting related state. For example, if you are using a custom navigation solution instead of the navigator, you could drive the ListDetailPaneScaffold directly with state derived from your custom navigation solution.
The layering is enforced in the library with the different artifacts:
- androidx.compose.material3.adaptive:adaptive contains the underlying methods to calculate the current window adaptive info
- androidx.compose.material3.adaptive:adaptive-layout contains the layouts ListDetailPaneScaffold and SupportingPaneScaffold
- androidx.compose.material3.adaptive:adaptive-navigation contains the navigator APIs (like rememberListDetailPaneScaffoldNavigator)
Therefore, if you aren't going to use the navigator and instead use a custom navigation solution, you can skip using androidx.compose.material3.adaptive:adaptive-navigation and depend on androidx.compose.material3.adaptive:adaptive-layout directly.
When adding the Compose Adaptive library to your app, start with the most fully featured layer, and then unwrap if needed to tweak behavior. As we continue to work on the library and add new features, we'll keep adding them to the appropriate layer. Using the higher-level layers will mean that you will be able to get these new features most easily. If you need to, you can use lower layers to get more fine-grained control, but that also means that more responsibility for behavior is transferred to your app, just like the layering in Compose itself.
Try out the new components today, and send us your feedback for bugs and feature requests.
09 Sep 2024 5:01pm GMT
SAP integrated NavigationSuiteScaffold in just 5 minutes to create adaptive navigation UI
Posted by Alex Vanyo - Developer Relations Engineer
SAP Mobile Start is an app that centralizes access to SAP's mobile business suite, a hub for users to keep track of their companies' processes and data so they can efficiently manage their daily to-dos while on the move.
Recently, SAP Mobile Start developers prioritized building an adaptive app that looks great across devices, including tablets and foldables, to create a more seamless user experience. Using Jetpack Compose and Material 3 design, the team efficiently implemented intuitive, user-friendly features to increase accessibility across its users' preferred devices.
Adaptive design across devices
With over 300 million daily active users on foldables, tablets, and Chromebooks today, building apps that adapt to varied screen sizes is important for providing an optimal user experience. But simply stretching the UI to fit different screen sizes can drastically alter it from its original form, obscuring the interface and impairing the user experience.
"We focused on situations where we could make better use of available space on large screens," said Laura Bergmann, UX designer for SAP. "We wanted to get rid of screens that are stretched from edge to edge, full-screen drill-downs or dialogs, and use space more efficiently."
Now, after optimizing for different devices, SAP Mobile Start dynamically adjusts its layouts by swapping components and showing or hiding content based on the available window size instead of stretching UI elements to match a device's screen.
The SAP team also implemented canonical layouts, common UI designs that split a screen into panes according to its size. By separating content into panes, SAP's users can manage their business workflows more productively. Depending on the window size class, the supporting pane adjusts the UI without additional custom logic. For example, compact windows typically utilize one pane, while larger windows can utilize multiple.
"Adopting the new canonical layouts from Google helped us focus more on designing unique app capabilities for SAP's business scenarios," said Laura. "With the available navigational elements and patterns, we can now channel our efforts into creating a more engaging user experience without reinventing the wheel."
SAP developers started by implementing supporting panes to create multi-pane layouts that efficiently utilize on-screen space. The first place developers added supporting panes was on the app's "To-Do" details page. To-dos used to be managed in a single pane, making it difficult to review the comments and tickets simultaneously. Now, tickets and comments are reviewed in primary and secondary panes on the same screen using SupportingPaneScaffold.
Fast implementation using Compose Material 3 Adaptive library
SAP Mobile Start is built entirely with Jetpack Compose, Android's modern declarative toolkit for building native UI. Compose helped SAP developers build new UI faster and easier than ever before thanks to composables, reusable code blocks for building common UI components. The team also used Compose Navigation to integrate seamless navigation between composables, optimizing travel between new UI on all screens.
It took developers only five minutes to integrate the NavigationSuiteScaffold from the new Compose Material 3 adaptive library, rapidly adapting the app's navigation UI to different window sizes, switching between a bottom navigation bar and a vertical navigation rail. It also eliminated the need for custom logic, which previously determined the navigation component based on various window size classes. The NavigationSuiteScaffold also reduced the custom navigation UI logic code by 59%, from 379 lines to 156.
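As an illustration only (not SAP's code), a minimal NavigationSuiteScaffold along these lines automatically swaps between a bottom navigation bar and a navigation rail based on the window size; the destinations and icons here are hypothetical:

import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.Home
import androidx.compose.material.icons.filled.Settings
import androidx.compose.material3.Icon
import androidx.compose.material3.Text
import androidx.compose.material3.adaptive.navigationsuite.NavigationSuiteScaffold
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.graphics.vector.ImageVector

// Hypothetical destinations for the example.
enum class AppDestination(val label: String, val icon: ImageVector) {
    Home("Home", Icons.Default.Home),
    Settings("Settings", Icons.Default.Settings),
}

@Composable
fun AppScaffold() {
    var current by remember { mutableStateOf(AppDestination.Home) }
    NavigationSuiteScaffold(
        navigationSuiteItems = {
            AppDestination.entries.forEach { destination ->
                item(
                    selected = destination == current,
                    onClick = { current = destination },
                    icon = { Icon(destination.icon, contentDescription = destination.label) },
                    label = { Text(destination.label) },
                )
            }
        }
    ) {
        // Main content for the selected destination; the navigation UI
        // (bottom bar vs. rail) is chosen from the window size automatically.
    }
}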
"Jetpack Compose simplified UI development," said Aditya Arora, lead Android developer. "Its declarative nature, coupled with built-in support for Material Design and dark theme, significantly increased our development efficiency. By simply describing the desired UI, we've reduced code complexity and improved maintainability."
SAP developers used live edit and layout inspector in Android Studio to test and optimize the app for large screens. These features were "total game changers" for the SAP team because they helped iterate and inspect layout issues faster when optimizing for new screens.
With its @PreviewScreenSizes annotation and device streaming powered by Firebase, Jetpack Compose also made testing the app's UI across various screen sizes easier. SAP developers look forward to Compose Screenshot Testing being completed, which will further streamline UI testing and ensure greater visual consistency within the app.
Using Jetpack Compose, SAP developers also quickly and easily implemented new Material 3 design concepts from the Compose M3 Adaptive library. Material 3 design emphasizes personalizing the app experience, improving interactions with modern visual aesthetics.
Compose's flexibility made replacing the standard Material Theme with their own custom Fiori Horizon Theme simple, ensuring a consistent visual appearance across SAP apps. "As early adopters of the Compose M3 Adaptive library, we collaborated with Google to refine the API," said Aditya. "Since our app is completely Compose-based, leveraging the new Compose Material 3 Adaptive library was a piece of cake."
As large-screen devices like tablets, foldables, and Chromebooks become more popular, building layouts that adapt to varied screen sizes becomes increasingly crucial. For SAP Mobile Start developers, reimagining their app across devices using Jetpack Compose and Material 3 design guidelines was simple. Using Android's collection of tools and resources, creating adaptive UIs for all the new form factors hitting the market today is faster and easier than ever.
"Optimizing for large screens is crucial. The market for tablets, foldables, and Chromebooks is booming. Don't miss out on this opportunity to improve your user experience and expand your app's reach," said Aditya.
Get started
Learn how to improve your UX by optimizing for large screens and foldables using Jetpack Compose and Material 3 design.
09 Sep 2024 4:59pm GMT
04 Sep 2024
Android Developers Blog
TalkBack uses Gemini Nano to increase image accessibility for users with low vision
Posted by Terence Zhang - Developer Relations Engineer and Lisie Lillianfeld - Product Manager
TalkBack is Android's screen reader in the Android Accessibility Suite that describes text and images for Android users who have blindness or low vision. The TalkBack team is always working to make Android more accessible. Today, thanks to Gemini Nano with multimodality, TalkBack automatically provides users with blindness or low vision more vivid and detailed image descriptions to better understand the images on their screen.
Increasing accessibility using Gemini Nano with multimodality
Advancing accessibility is a core part of Google's mission to build for everyone. That's why TalkBack has a feature to describe images when developers didn't include descriptive alt text. This feature was powered by a small ML model called Garcon. However, Garcon produced short, generic responses and couldn't specify relevant details like landmarks or products.
The development of Gemini Nano with multimodality was the perfect opportunity to use the latest AI technology to increase accessibility with TalkBack. Now, when TalkBack users opt in on eligible devices, the screen reader uses Gemini Nano's new multimodal capabilities to automatically provide users with clear, detailed image descriptions in apps including Google Photos and Chrome, even if the device is offline or has an unstable network connection.
"Gemini Nano helps fill in missing information," said Lisie Lillianfeld, product manager at Google. "Whether it's more details about what's in a photo a friend sent or the style and cut of clothing when shopping online."
Going beyond basic image descriptions
Here's an example that illustrates how Gemini Nano improves image descriptions: When Garcon is presented with a panorama of the Sydney, Australia shoreline at night, it might read: "Full moon over the ocean." Gemini Nano with multimodality can paint a richer picture, with a description like: "A panoramic view of Sydney Opera House and the Sydney Harbour Bridge from the north shore of Sydney, New South Wales, Australia."
"It's amazing how Nano can recognize something specific. For instance, the model will recognize not just a tower, but the Eiffel Tower," said Lisie. "This kind of context takes advantage of the unique strengths of LLMs to deliver a helpful experience for our users."
Using an on-device model like Gemini Nano was the only feasible solution for TalkBack to provide automatically generated, detailed image descriptions even while the device is offline.
"The average TalkBack user comes across 90 unlabeled images per day, and those images weren't as accessible before this new feature," said Lisie. The feature has gained positive user feedback, with early testers writing that the new image descriptions are a "game changer" and that it's "wonderful" to have detailed image descriptions built into TalkBack.
Balancing inference verbosity and speed
One important decision the Android accessibility team made when implementing Gemini Nano with multimodality was the tradeoff between inference verbosity and speed, which is partially determined by image resolution. Gemini Nano with multimodality currently accepts images at either 512-pixel or 768-pixel resolution.
"The 512-pixel resolution emitted its first token almost two seconds faster than 768 pixels, but the output wasn't as detailed," said Tyler Freeman, a senior software engineer at Google. "For our users, we decided a longer, richer description was worth the increased latency. We were able to hide the perceived latency a bit by streaming the tokens directly to the text-to-speech system, so users don't have to wait for the full text to be generated before hearing a response."
A hybrid solution using Gemini Nano and Gemini 1.5 Flash
TalkBack developers also implemented a hybrid AI solution using Gemini 1.5 Flash. With this server-based AI model, TalkBack can provide the best of on-device and server-based generative AI features to make the screen reader even more powerful.
When users want more details after hearing an automatically generated image description from Gemini Nano, TalkBack gives the user an option to listen to more by running the image through Gemini Flash. When users focus on an image, they can use a three-finger tap to open the TalkBack menu and select the "Describe Image" option to send the image to Gemini 1.5 Flash on the server and get even more details.
By combining the unique advantages of both Gemini Nano's on-device processing with the full power of cloud-based Gemini 1.5 Flash, TalkBack provides blind and low-vision Android users a helpful and informative experience with images. The "describe image" feature powered by Gemini 1.5 Flash launched to TalkBack users on more Android devices, so even more users can get detailed image descriptions.
Compact model, big impact
The Android accessibility team recommends that developers looking to use Gemini Nano with multimodality prototype and test on a powerful, server-side model first. There, developers can understand the UX faster, iterate on prompt engineering, and get a better idea of the highest quality possible using the most capable model available.
While Gemini Nano with multimodality can include missing context to improve image descriptions, it's still best practice for developers to provide detailed alt text for all images on their apps or websites. If the alt text is not provided, TalkBack can help fill in the gaps.
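For example, in Compose the alt text is the contentDescription parameter, which TalkBack reads directly instead of falling back to generated descriptions (a minimal sketch; the drawable resource and wording are placeholders):

import androidx.compose.foundation.Image
import androidx.compose.runtime.Composable
import androidx.compose.ui.res.painterResource

@Composable
fun ProductPhoto() {
    Image(
        painter = painterResource(R.drawable.product_photo), // hypothetical drawable
        // Descriptive alt text lets TalkBack announce the image without needing a model-generated fallback.
        contentDescription = "Blue denim jacket with brass buttons, front view",
    )
}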
The Android accessibility team's goal is to create inclusive and accessible features, and leveraging Gemini Nano with multimodality to provide vivid and detailed image descriptions automatically is a big step towards that. Furthermore, their hybrid approach towards AI, combining the strengths of both Gemini Nano on device and Gemini 1.5 Flash in the server, showcases the transformative potential of AI in promoting inclusivity and accessibility and highlights Google's ongoing commitment to building for everyone.
Get started
Learn more about Gemini Nano for app development.
This blog post is part of our series: Spotlight Week on Android 15, where we provide resources - blog posts, videos, sample code, and more - all designed to help you prepare your apps and take advantage of the latest features in Android 15. You can read more in the overview of Spotlight Week: Android 15, which will be updated throughout the week.
04 Sep 2024 12:30am GMT
03 Sep 2024
Android Developers Blog
Our first Spotlight Week: diving into Android 15
Posted by Aaron Labiaga- Android Developer Relations Engineer
By now, you've probably heard the news: Android 15 was just released earlier today to AOSP. To celebrate, we're kicking off a new series called "Spotlight Week" where we'll shine a light on technical areas across Android development and equip you with the tools you need to take advantage of each area.
The Android 15 "Spotlight Week" will provide resources - blog posts, videos, sample code, and more - all designed to help you prepare your apps and take advantage of the latest features. These changes strive to improve the Android ecosystem, but updating the OS comes with potential app compatibility implications and integrations that require detailed guidance.
Here's what we're covering this week in our Spotlight Week on Android 15:
- Android 15 goes to AOSP
- Edge-to-edge
- Building for the future of Android, an in-depth video
- Foreground services and a live Android 15 Q&A
- Passkeys and Picture-in-Picture
That's just a taste of what we're covering in our Spotlight Week on Android 15. Keep checking back to this blog post for updates, where we'll be adding links and more throughout the week. Plus, follow Android Developers on X and Android by Google on LinkedIn throughout the week to hear even more about Android 15.
03 Sep 2024 6:10pm GMT
Google Maps improved download reliability by 10% using user initiated data transfer API
Posted by Alice Yuan - Developer Relations Engineer, in collaboration with Software Engineers - Matthew Valgenti and Emma Li - at Google
What is user initiated data transfer?
In Android 14 we introduced user-initiated data transfer jobs, or UIDT. You can use the new setUserInitiated API in JobScheduler to specify that the job is a user-initiated data transfer job. This API is helpful for use cases that require long-duration (>10 minutes), user-initiated transfer of data over a network. UIDT is also an alternative API to using a dataSync foreground service, which has new timeout behavior for apps that target Android 15.
UIDT is intended to support user initiated use cases such as downloading files from a remote server, uploading files to a remote server or transferring data between two devices via Wi-Fi transport. Since the release of Android 14, the new API has been adopted by a growing number of Android apps running on hundreds of millions of user devices.
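To make the shape of the API concrete, here is a minimal sketch of scheduling a UIDT job. The JobService, job ID, and size estimate are placeholders; the app also needs the RUN_USER_INITIATED_JOBS permission and must schedule the job while it is visible to the user or otherwise acting on a user interaction.

import android.app.job.JobInfo
import android.app.job.JobParameters
import android.app.job.JobScheduler
import android.app.job.JobService
import android.content.ComponentName
import android.content.Context

const val DOWNLOAD_JOB_ID = 1 // hypothetical job id

fun scheduleOfflineDownload(context: Context, estimatedDownloadBytes: Long) {
    val jobInfo = JobInfo.Builder(
        DOWNLOAD_JOB_ID,
        ComponentName(context, DownloadJobService::class.java) // hypothetical JobService below
    )
        .setUserInitiated(true) // marks this as a user-initiated data transfer job (Android 14+)
        .setRequiredNetworkType(JobInfo.NETWORK_TYPE_ANY) // or NETWORK_TYPE_UNMETERED for Wi-Fi only
        .setEstimatedNetworkBytes(estimatedDownloadBytes, 0L)
        .build()

    val jobScheduler = context.getSystemService(JobScheduler::class.java)
    jobScheduler.schedule(jobInfo)
}

class DownloadJobService : JobService() {
    override fun onStartJob(params: JobParameters): Boolean {
        // For user-initiated jobs, attach a notification so the user can see and manage the transfer,
        // e.g. setNotification(params, NOTIFICATION_ID, notification, JOB_END_NOTIFICATION_POLICY_DETACH).
        // Start the download on a background thread, then call jobFinished(params, false).
        return true
    }

    override fun onStopJob(params: JobParameters): Boolean = true // ask the system to reschedule
}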
What are the benefits if I adopt user-initiated data transfer?
The Android team's extensive analysis found gaps in foreground services and WorkManager for long-duration, user-initiated data transfers. Although WorkManager could support retries and constraints, it could not support the long-duration work that data transfer operations often require. Developers also found foreground services challenging to use, as they did not provide an ideal user experience during interruptions of network connectivity.
JobScheduler's user initiated data transfer API helps solve for these gaps by offering developers the following benefits:
- Long duration, immediate background work execution that is not impacted by existing job quotas based on app standby buckets.
- Helps improve consistency in API behavior across all devices, and the behavior is enforced through Compatibility Test Suite (CTS).
- Improved reliability of data transfer compared to using a foreground service as indicated by Google Maps own benchmarks.
- Support for execution when certain constraints are met, such as running only on Wi-Fi or only when the device is charging.
- Gracefully manages job timeouts and retries.
- Reduced memory usage and reduced notification clutter when the app is waiting for constraints to be met.
If you're looking to do short or interruptible background work, WorkManager is still the recommended solution. Check out data transfer background task options to learn more.
Google Maps successfully launched UIDT and saw improvement in download reliability!
Google Maps decided to use UIDT for their offline maps download use case. This use case ensures that users are able to download offline maps so they have map data even when they lose network connectivity.
What was Google Maps' main motivation for adopting UIDT?
Google Maps decided to adopt UIDT to ensure that the download service works with the latest Android releases and continues to be reliable and efficient.
"We implemented several features and optimizations, such as resumable downloads so that if a user's internet connection is interrupted or they exit the app before the download is complete, the download resumes from where it left off when the user returns to the app or their connection is restored." - Emma Li, Software Engineer at Google
What is the trigger point to start UIDT in Google Maps?
The UIDT is triggered when a user decides to download a map region to have that data offline. When a user hits download, the UIDT is triggered immediately and processing of the region begins as soon as possible.
What were Google Maps' adoption results?
Google Maps rolled out the project using experiment flags to understand metrics impact after each ramp stage.
"We successfully launched UIDT on Android 14 in early 2024 migrating from our foreground service implementation. After a retroactive analysis on Android 14 vs Android 13 implementation, we now see a 10%+ improvement in download failure rate of offline downloads!" - Matthew Valgenti, Software Engineer at Google
How do I get started implementing user initiated data transfer API?
In order to adopt user initiated data transfer, you will need to integrate with the JobScheduler platform API and gate the change to Android 14 or higher.
Check out the following developer documentation to get started with user initiated data transfer:
- Integrate user initiated data transfer on how to integrate setUserInitiated API.
- Data transfer background task options on what APIs to use depending on your data transfer use case.
- Check out the Android 15 documentation to learn more about Android 15 behavior changes.
This blog post is part of our series: Spotlight Week on Android 15, where we provide resources - blog posts, videos, sample code, and more - all designed to help you prepare your apps and take advantage of the latest features in Android 15. You can read more in the overview of Spotlight Week: Android 15, which will be updated throughout the week.
03 Sep 2024 6:00pm GMT
Android 15 is released to AOSP
Posted by Matthew McCullough - VP of Product Management, Android Developer
Today we're releasing Android 15 and making the source code available at the Android Open Source Project (AOSP). Android 15 will be available on supported Pixel devices in the coming weeks, as well as on select devices from Samsung, Honor, iQOO, Lenovo, Motorola, Nothing, OnePlus, Oppo, realme, Sharp, Sony, Tecno, vivo, and Xiaomi in the coming months.
We're proud to continue our work in open source through the AOSP. Open source allows anyone to build upon and contribute to Android, resulting in devices that are more diverse and innovative. You can leverage your app development skills in Android Studio with Jetpack Compose to create applications that thrive across the entire ecosystem. You can even examine the source code for a deeper understanding of how Android works.
Android 15 continues our mission of building a private and secure platform that helps improve your productivity while giving you new capabilities to produce beautiful apps, superior media and camera experiences, and an intuitive user experience, particularly on tablets and foldables.
Starting today, we're kicking off a new educational series called Spotlight Weeks, where we dive into technical topics across Android, beginning with a week of content on Android 15. Check out what we'll be covering throughout the week, as well as today's deep dive into edge-to-edge.
Improving your developer experience
While most of our work to improve your productivity centers around tools like Android Studio, Jetpack Compose, and the Android Jetpack libraries, each new Android platform release includes quality-of-life updates to improve the development experience. For example, Android 15 gives you new insights and telemetry to allow you to further tune your app experience, so you can make changes that improve the way your app runs on any platform release.
- The ApplicationStartInfo API helps provide insight into your app startup including the startup reason, time spent in launch phases, start temperature, and more.
- The Profiling class within Android Jetpack, streamlining the use of the new ProfilingManager API in Android 15, lets your app request heap profiles, heap dumps, stack samples, or system traces, enabling a new way to collect telemetry about how your app runs on user devices.
- The StorageStats.getAppBytesByDataType([type]) API gives you new insights into how your app is using storage, including apk file splits, ahead-of-time (AOT) and speedup related code, dex metadata, libraries, and guided profiles.
- The PdfRenderer APIs now include capabilities to incorporate advanced features such as rendering password-protected files, annotations, form editing, searching, and selection with copy. Linearized PDF optimizations are supported to speed local PDF viewing and reduce resource use. The Jetpack PDF library uses these APIs to simplify adding PDF viewing capabilities to your app, with planned support for older Android releases.
- Newly-added OpenJDK APIs include support for additional math/strictmath methods, many util updates including sequenced collection/map/set, ByteBuffer support in Deflater, and security key updates. These APIs are delivered through Google Play system updates to over a billion devices running Android 12 through Android 15, so you can broadly take advantage of the latest programming features.
- Newly added SQLite APIs include support for read-only deferred transactions, new ways to retrieve the count of changed rows or the last inserted row ID without issuing an additional query, and direct support for raw SQLite statements.
- Android 15 adds new Canvas drawing capabilities, including Matrix44 to help manipulate the Canvas in 3D and clipShader/clipOutShader to enable complex shapes by intersecting either the current shader or a difference of the current shader.
Improving typography and internationalization
Android helps you make beautiful apps that work well across the global diversity of the Android ecosystem.
- You can now create a FontFamily instance from variable fonts in Android 15 without having to specify wght and ital axes using the buildVariableFamily API; the text renderer will automatically adjust the values of the wght and ital axes to match the displaying text with compatible fonts.
- The font file in Android 15 for Chinese, Japanese, and Korean (CJK) languages, NotoSansCJK, is now a variable font, opening up new possibilities for creative typography.
- Android 15 bundles the old Japanese Hiragana (also known as Hentaigana) font by default, helping add a distinctive flair to design while preserving more accurate transmission and understanding of ancient Japanese documents.
- JUSTIFICATION_MODE_INTER_CHARACTER in Android 15 improves justification for languages that use white space for segmentation such as Chinese and Japanese.
Camera and media improvements
Each Android release helps you bring superior media and camera experiences to your users.
- For screens that contain both HDR and SDR content, Android 15 allows you to control the HDR headroom with setDesiredHdrHeadroom to prevent SDR content from appearing too washed-out.
- Android 15 supports intelligently adjusting audio loudness and dynamic range compression levels for apps with AAC audio content that contains loudness metadata so that audio levels can adapt to user devices and surroundings. To enable, instantiate a LoudnessCodecController with the audio session ID from the associated AudioTrack.
- Low Light Boost in Android 15 adjusts the exposure of the Preview stream in low-light conditions, enabling enhanced image previews, scanning QR codes in low light, and more.
- Advanced flash strength adjustments in Android 15 enable precise control of flash intensity in both SINGLE and TORCH modes while capturing images.
- Android 15 extends Universal MIDI Packets support to virtual MIDI apps, enabling composition apps to control synthesizer apps as a virtual MIDI 2.0 device just like they would with a USB MIDI 2.0 device.
Improving the user experience
We continue to refine the Android user experience with every release, while working to improve performance and battery life. Here is just some of what Android 15 brings to make the experience more intuitive, performant, and accessible.
- Users can save their favorite split-screen app combinations for quick access and pin the taskbar on screen to quickly switch between apps for better large screen multitasking on Android 15; making sure your app is adaptive is more important than ever.
- Android 15 defaults to displaying apps edge-to-edge when they target SDK 35. The system bars will also be transparent or translucent, and content will draw behind them by default. To ensure your app is ready, check out "Handle overlaps using insets" (Views) or Window insets in Compose; many of the Material 3 composables also handle insets for you (see the sketch after this list).
- Android 15 enables TalkBack to support Braille displays that are using the HID standard over both USB and secure Bluetooth to help Android support a wider range of Braille displays.
- On supported Android 15 devices, NfcAdapter allows apps to request observe mode as well as register filters, enabling one-tap transactions in many cases across multiple NFC capable apps.
- Apps can declare a property to allow your Application or Activity to be presented on the small cover screens of supported flippable devices.
- Android 15 greatly enhances AutomaticZenRules to allow apps to further customize Attention Management (Do Not Disturb) rules by adding types, icons, trigger descriptions, and the ability to trigger ZenDeviceEffects.
- Android 15 now includes OS-level support for app archiving and unarchiving. Archiving removes the APK and any cached files, but persists user data and returns the app as displayable through the LauncherApps APIs, and the original installer can restore it upon a request to unarchive.
- Foreground services change on Android 15 as part of our work to improve battery life and multitasking performance, including data sync timeouts, a new media processing foreground service type, and restrictions on launching foreground services from BOOT_COMPLETED and while an app holds the SYSTEM_ALERT_WINDOW permission.
- Beginning with Android 15, 16 KB page size support will be available on select devices as a developer option. When Android uses this larger page size, our initial testing shows an overall performance boost of 5-10% while using ~9% additional memory.
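As a minimal sketch of opting in to edge-to-edge and letting Material 3 handle the insets (the activity name is illustrative, and a real app would place its own content inside the Scaffold):

import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.activity.enableEdgeToEdge
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Scaffold
import androidx.compose.material3.Text
import androidx.compose.ui.Modifier

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Explicit opt-in for older releases; Android 15 applies edge-to-edge by default at target SDK 35.
        enableEdgeToEdge()
        setContent {
            // Scaffold reports the system bar insets through innerPadding so content isn't obscured.
            Scaffold(modifier = Modifier.fillMaxSize()) { innerPadding ->
                Text("Hello, edge-to-edge", modifier = Modifier.padding(innerPadding))
            }
        }
    }
}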
Privacy and security enhancements
Privacy and security are at the core of everything we do, and we work to make meaningful improvements to protect your apps and our users with each platform release.
- Private space in Android 15 lets users create a separate space on their device where they can keep sensitive apps away from prying eyes, under an additional layer of authentication. Some types of apps such as medical apps, launcher apps, and app stores may need to take additional steps to function as expected in a user's private space.
- Android 15 supports sign-in using passkeys with a single tap, as well as support to autofill saved credentials to relevant input fields.
- Android 15 adds support for apps to detect that they are being recorded so that you can inform the user that they're being recorded if your app is performing a sensitive operation.
- Android 15 adds the allowCrossUidActivitySwitchFromBelow attribute that blocks apps that don't match the top UID on the stack from launching activities to help prevent task hijacking attacks.
- PendingIntent creators block background activity launches by default in Android 15 to help prevent apps from accidentally creating a PendingIntent that could be abused by malicious actors.
Get your apps, libraries, tools, and game engines ready!
If you develop an SDK, library, tool, or game engine, it's particularly important to prepare any necessary updates immediately to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your developers know if updates are needed to fully support Android 15.
Testing your app involves installing your production app using Google Play or other means onto a device or emulator running Android 15. Work through all your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Here are several changes to consider that apply even if you don't yet target Android 15:
- Package stopped state changes - Android 15 updates the behavior of the package FLAG_STOPPED state to keep apps stopped until the user launches or indirectly interacts with the app.
- Support for 16KB page sizes - Beginning with Android 15, 16 KB page size support will be available on select devices as a developer option. Additionally, Android Studio also offers an emulator system image with 16 KB support through the SDK manager. If your app or library uses the NDK, either directly or indirectly through a library, you can use the developer option in the QPR beta or the Android 15 emulator system image to test and fix applications to prepare for Android devices with 16 KB page sizes in the near future.
- Private space support - Test that your app/library works when installed in a private space; we have guidance for medical apps, launcher apps, and app stores.
- Removed legacy emoji font file - Some Android 15 devices, such as Pixel, will no longer include the bitmap NotoColorEmojiLegacy.ttf file (which had been kept for compatibility since Android 13) and will only have the default vector file.
Please thoroughly exercise libraries and SDKs that your app is using during your compatibility testing. You may need to update to current SDK versions or reach out to the developer for help if you encounter any issues.
Once you've published the Android 15-compatible version of your app, you can start the process to update your app's targetSdkVersion.
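For example, in a module's build.gradle.kts the bump itself is a small change once your compatibility testing passes (values shown are illustrative):

// Module-level build.gradle.kts
android {
    compileSdk = 35
    defaultConfig {
        targetSdk = 35 // opt in to Android 15 behavior changes
    }
}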
App compatibility
We're working to make updates faster and smoother with each platform release by prioritizing app compatibility. In Android 15 we've made most app-facing changes opt-in until your app targets SDK version 35. This gives you more time to make any necessary app changes.
To make it easier for you to test the opt-in changes that can affect your app, based on your feedback we've made many of them toggleable again this year. With the toggles, you can force-enable or disable the changes individually from Developer options or adb. Check out how to do this here.
To help you migrate your app to target Android 15, the Android SDK Upgrade Assistant within the latest Android Studio Koala Feature Drop release now covers Android 15 API changes and walks you through the steps to upgrade your targetSdkVersion.
Get started with Android 15
If you have a supported Pixel device, you will receive the public Android 15 over the air update when it becomes available. If you don't want to wait, you can get the most recent quarterly platform release (QPR) beta by joining the Android 15 QPR beta program at any time.
If you're already in the QPR beta program on a Pixel device that supports the next Android release, you'll likely have been offered the opportunity to install the first Android 15 QPR beta update. If you want to opt-out of the beta program without wiping your device, don't install the beta and instead wait for an update to the release version when it is made available on your Pixel device. Once you've applied the stable release update, you can opt out without a data wipe as long as you don't apply the next beta update.
Stay tuned for the next five days of our Spotlight Week on Android 15, where we'll be covering topics like edge-to-edge, passkeys, updates to foreground services, picture-in-picture, and more. Follow along on our blog, X, LinkedIn or YouTube channels. Thank you again to everyone who participated in our Android developer preview and beta program. We're looking forward to seeing how your apps take advantage of the updates in Android 15.
For complete information, visit the Android 15 developer site.
Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.
03 Sep 2024 6:00pm GMT
#WeArePlay | Estante Mágica, the app helping kids publish their own books
Posted by Robbie McLachlan, Developer Marketing
In our latest film for #WeArePlay, which celebrates the people behind apps and games, we meet Robson from Rio de Janeiro, Brazil. As the co-founder of Estante Mágica, he created an app that turns kids into published authors, sparking their imagination and love for reading. Robson's journey from the favelas to creating a platform that inspires millions of young minds is helping to revolutionize education in Brazil. Discover how Estante Mágica is making a lasting impact on education, one story at a time.
What was the inspiration behind Estante Mágica?
It's a combination of my personal journey and a deep belief in the transformative power of education. Growing up in the Rocinha favela, I saw first-hand how education could change lives - my illiterate grandparents always believed that education was the key to real change, and passed that belief down to me. I wanted to create a tool that nurtures literacy so I teamed up with my friend Pedro to create Estante Mágica.
Estante Mágica, which translates to "Magic Bookshelf," is our way of giving every child the chance to become an author, create something magical that they can be proud of, and to see their own imagination come to life. We designed the app to be a bridge, reaching not just urban schools but also rural and underserved communities, including indigenous villages and areas with fewer resources.
Can you tell us about a moment that showed you the power of Estante Mágica?
Recently, I visited a school in a small village in Rio de Janeiro. The principal apologized for being late to our meeting, explaining that she had just met with a father and mother who wanted to enroll in adult education classes. When she asked them why they wanted to learn to read and write in their 30s and 40s, they told her, "Our son wrote a book here last year, and we don't know how to read it. It's our dream to be able to read the book our son wrote." That moment hit me hard. It was proof that our app isn't just about helping kids become authors; it's about inspiring entire families to embrace literacy and education.
Tell us about the Autograph Days
These events are held at schools and are all about celebrating the young authors who have created their own books. Each child receives a printed copy of their book, and the day is set up just like a traditional book signing event. The event is not only about the children's hard work but also a moment of pride for parents and teachers, who see the joy and confidence this experience brings to the kids.
What's next for Estante Mágica?
One major focus is integrating more AI into the app. We want to make it so that when kids create their characters or stories, they can interact with them more dynamically. Imagine a character in a game being able to respond to a child's questions or comments - that's the kind of interaction we're aiming for. We're keeping an eye on some new features like text-to-video and image-to-video which could add a whole new layer to how kids can bring their stories to life. Ultimately, we're planning to bring the magic of storytelling to children around the world and expand our platform to more schools, especially in underserved areas.
Discover more global #WeArePlay stories and share your favorites.
03 Sep 2024 1:00pm GMT
29 Aug 2024
Android Developers Blog
Android Studio Koala Feature Drop is Stable!
Posted by Sandhya Mohan, Product Manager, Android Studio
Today, we are thrilled to announce the stable release of Android Studio Koala Feature Drop (2024.1.2)!🐨
Earlier this year, we announced that every Android Studio animal version will have two releases: a platform release and a feature drop release. This more frequent cadence gets important IntelliJ updates to you faster, while we focus on quality and polish for Android-specific features. The Koala platform release launched in June. Today, we'll walk through the feature drop release.
Get access to cutting-edge features like new devices in device streaming, Compose previews for Glance widgets, USB cable speed detection, support for Android 15 in the Android SDK Upgrade Assistant, and much more. All of these new features are designed to accelerate your Android app development workflow and help you build next-generation, high-quality apps.
Read on to learn more about all the updates, quality improvements, and new features across your key workflows in Android Studio Koala Feature Drop, and download the latest stable version today to try them out!
Develop
Android Device Streaming: more devices and improved sign-up
Android Device Streaming now includes the following devices, in addition to the portfolio of 20+ device models already available:
- Google Pixel 9
- Google Pixel 9 Pro
- Google Pixel 9 Pro XL
- Google Pixel 9 Pro Fold
- Google Pixel 8a
- Samsung Galaxy Fold5
- Samsung Galaxy S23 Ultra
Additionally, if you're new to Firebase, Android Studio automatically creates and sets up a no-cost Firebase project for you when you sign in to Android Studio to use Device Streaming. As a result, you can start streaming the device you need much faster. Learn more about Android Device Streaming quotas, including promotional quota for Firebase Blaze plan projects available for a limited time.
As we announced at Google I/O 2024, we're further expanding the selection of devices available by working with partners, such as Samsung, Xiaomi, and OnePlus, to allow you to connect to devices hosted in their device labs. To learn more and enroll in the upcoming Early Access Preview, see the official blog post.
Target Android 15 using Android SDK Upgrade Assistant
The Android SDK Upgrade Assistant provides a step-by-step wizard to help you upgrade your targetSdkVersion. It also pulls documentation directly into Android Studio, saving you time and effort. Android Studio Koala Feature Drop adds support for upgrading projects to Android 15 (API Level 35).
Updated sign-in flow to Google services
It's now easier to sign in to multiple Google services with one authentication step. Whether you use Gemini in Android Studio, Firebase for Android Device Streaming, Crashlytics in App Quality Insights, Google Play for Android Vitals reports, or some combination of these services, the new sign-in flow makes it easier to get up and running. With granular permissions scoping, you'll always be in control of which services have access to your account. To get started, click the profile avatar in the top-right corner and sign in with your developer account.
Wear OS Tile Preview Panel
You can now view snapshots of your Wear OS app's tiles by including version 1.4 of the Jetpack Tiles library. This preview panel is particularly useful if your tile's appearance changes based on certain conditions, such as content that depends on the device's display size, or a sports event reaching halftime.
Compose Glance widget previews
Android Studio Koala Feature Drop makes it easy to preview your Jetpack Compose Glance widgets directly within the IDE. You can even use multi-previews to preview at standard widget sizes and their designed widget breakpoints (sample code). Catch potential UI issues and fine-tune your widget's appearance early in the development process or while debugging any UI issues. Learn more.
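As a rough sketch of what such a preview can look like (assuming the androidx.glance preview tooling artifacts are on the classpath; the widget content and sizes here are illustrative, not from this post):

    import androidx.compose.runtime.Composable
    import androidx.glance.GlanceTheme
    import androidx.glance.preview.ExperimentalGlancePreviewApi
    import androidx.glance.preview.Preview
    import androidx.glance.text.Text

    // Illustrative Glance widget preview rendered in the IDE at a fixed size.
    // Stack multiple @Preview annotations to check other widget breakpoints.
    @OptIn(ExperimentalGlancePreviewApi::class)
    @Preview(widthDp = 250, heightDp = 120)
    @Composable
    fun SampleWidgetPreview() {
        GlanceTheme {
            Text(text = "Hello from a Glance preview")
        }
    }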
Live Edit (Compose)
Live Edit is now enabled in manual mode by default. It has increased stability and more robust change detection, including support for import statements. Note that starting with Android Studio Koala Feature Drop, the default shortcut to push your changes in manual mode has been updated to Control+' (Command+' on macOS). You can customize the shortcut on the Keymap settings page.
Debug
USB Cable Speed Detection
Android Studio now detects when it's possible to connect your Android device with a faster USB cable and suggests an upgrade that maximizes your device capabilities. Using an appropriate USB cable optimizes app installation time and minimizes latency when using tools such as the Android Studio debugger. USB cable speed detection is currently available for macOS and Linux. Learn more.
While most readily available USB cables are still the older USB 2.0 standard, the majority of modern devices support the significantly faster USB 3.0. Upgrading to a USB 3.0 cable can potentially increase your data transfer speeds up to 10x.
Device UI Shortcuts
To help you build and debug your UI, we've introduced a Device UI Shortcuts button in the Running Devices tool window in Android Studio. Use the shortcuts to view the effect of common UI settings such as dark theme, font size, screen size, app language, and TalkBack. You can use the shortcuts with emulators, mirrored physical devices, and devices streamed from Firebase Test Lab. Device UI shortcuts are available for devices running API level 33 or higher. Learn more.
Pixel 8a in Emulator
The Android Emulator (35.1+) now supports the Pixel 8a in the stable channel, enabling you to test your apps on more Pixel devices without needing a physical device. Find the new Pixel 8a in the phone category when you create a new virtual device. Additionally, you can find Pixel 9 devices in the canary release channel of Android Studio.
Optimize
Faster and improved Profiler with a task-centric approach
Popular performance optimization tasks like capturing a system trace with profileable apps now start up to 60% faster*. The Profiler's task-centric redesign also makes it easier to start the task you're interested in, whether it's profiling your app's CPU, memory, or power usage. For example, you can start a system trace task to profile and improve your app's startup time right from the UI as soon as you open the Profiler.
Quality improvements
Beyond new features, we also continue to improve the overall quality and stability of Android Studio. In fact, the Android Studio team addressed over 520 bugs during the Koala Feature Drop development cycle.
IntelliJ platform update
Android Studio Koala Feature Drop (2024.1.2) includes the IntelliJ 2024.1 platform release, which has many new features such as comprehensive support for the latest Java** 22 features, an improved terminal, and sticky lines in the editor to simplify working with large files and exploring new codebases.
- The improved terminal features a fresh new look, with commands separated into distinct blocks, along with an expanded set of features, such as smooth navigation between blocks, command completion, and easy access to the command history. Learn more.
- Sticky lines in the editor keep key structural elements, like the beginnings of classes or methods, pinned to the top of the editor as you scroll, and provide an option to quickly navigate through the code by clicking on a pinned line. Learn more.
- Basic IDE functionalities like code highlighting and completion now work for Java and Kotlin during project indexing, which should enhance your startup experience.
See the full release notes here.
Summary
To recap, Android Studio Koala Feature Drop includes the following enhancements and features:
Develop
- Android Device Streaming: more devices and improved sign-up
- Target Android 15 using Android SDK Upgrade Assistant
- Updated sign-in flow to Google services
- Wear OS Tile Preview Panel
- Compose Glance widget previews
- Live Edit (Compose)
Debug
- USB Cable Speed Detection
- Device UI Shortcuts
- Pixel 8a in Emulator
Optimize
- New Task UX for Profilers
Quality Improvements
- 520+ bugs addressed
IntelliJ Platform Update
- Improved terminal
- Sticky lines in the editor to simplify working with large codebases
- Enhanced startup experience
Getting Started
Ready for next-level Android development? Download Android Studio Koala Feature Drop and unlock these cutting-edge features today! As always, your feedback is important to us - check known issues, report bugs, suggest improvements, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Let's build the future of Android apps together!
**Java is a trademark or registered trademark of Oracle and/or its affiliates.
29 Aug 2024 6:00pm GMT
27 Aug 2024
Android Developers Blog
Instagram’s early adoption of Ultra HDR transforms user experience in only 3 months
Posted by Mayuri Khinvasara Khabya - Developer Relations Engineer, Google; in partnership with Bismark Ito - Android Developer, Rex Jin - Android Developer and Bei Yi - Partner Engineering
Meta's Instagram is one of the world's most popular social networking apps that helps people connect, find communities, and grow their businesses in new and innovative ways. Since its release in 2010, photographers and creators alike have embraced the platform, making it a go-to hub of artistic expression and creativity.
Instagram developers saw an opportunity to build a richer media experience by becoming an early adopter of the Ultra HDR image format, a new feature introduced with Android 14. With its adoption of Ultra HDR, Instagram completely transformed and improved its user experience in just three months.
Enhancing Instagram photo quality with Ultra HDR
The development team wanted to be an early adopter of Ultra HDR because photos and videos are Instagram's most important form of interaction and expression, and improving image quality aligns with Meta's goal of connecting people, communities, and businesses. "Android rapidly adopts the latest media technology so that we can bring the benefits to users," said Rex Jin, an Android developer on the Instagram Media Platform team.
Instagram developers started implementing Ultra HDR in late September 2023. Ultra HDR images store more information about light intensity, giving more detailed highlights and shadows and crisper colors. The format also enables capturing, editing, sharing, and viewing HDR photos, a significant improvement over standard dynamic range (SDR) photos, while remaining backward compatible. Users can seamlessly post, view, edit, and apply filters to Ultra HDR photos without compromising image quality.
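To illustrate the backward-compatible shape of the format (a generic sketch, not Instagram's code): on Android 14 and later, an Ultra HDR JPEG decodes as an ordinary Bitmap that also carries a gain map, and opting the window into HDR color mode lets the system apply that gain map on capable displays:

    import android.app.Activity
    import android.content.pm.ActivityInfo
    import android.graphics.BitmapFactory
    import android.os.Build
    import android.widget.ImageView

    // Illustrative only: decode an image file and, if it carries an Ultra HDR
    // gain map on Android 14+ (API 34), switch the window to HDR color mode so
    // the gain map is applied; otherwise the same file renders as plain SDR.
    fun showPossiblyUltraHdr(activity: Activity, imageView: ImageView, path: String) {
        val bitmap = BitmapFactory.decodeFile(path) ?: return
        if (Build.VERSION.SDK_INT >= 34 && bitmap.hasGainmap()) {
            activity.window.colorMode = ActivityInfo.COLOR_MODE_HDR
        }
        imageView.setImageBitmap(bitmap)
    }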
Since the update, Instagram has seen a large surge in Ultra HDR photo uploads. Users have also embraced their new ability to edit up to 10 Ultra HDR images simultaneously and share photos that retain the full color and dynamic camera capture range. Instagram's pioneering integration of Ultra HDR earned industry-wide recognition and praise when it was announced at Samsung Unpacked and in a Pixel Feature Drop.
Pioneering Ultra HDR integrations
Being early adopters of Android 14 meant working with beta versions of the operating system and addressing the challenges associated with implementing a brand-new feature that's never been tested publicly. For example, Instagram developers needed to find innovative solutions to handle the expanded color space and larger file sizes of Ultra HDR images while maintaining compatibility with Instagram's diverse editing features and filters.
The team found solutions during the development process by using code examples for HDR photo capture and rendering. Instagram also partnered with Google's Android Camera & Media team to address the challenges of displaying Ultra HDR images, share its developer experience, and provide feedback during integration. The partnership helped speed up the integrations, and the feedback shared was implemented faster.
"With Android being an open source project, we can build more optimized media solutions with better performance on Instagram," said Bismark Ito, an Android developer at Instagram. "I feel accomplished when I find a creative solution that works on a range of devices with different hardware capabilities."
Building for the future with Android 15
Ultra HDR has significantly enhanced Instagram's photo-sharing experience, and Meta is already planning to expand support to more devices and add future image and video quality improvements. With the upcoming Android 15 release, the company plans to explore new APIs and features that amplify its mission of connecting people, communities, and businesses.
As the Ultra HDR development process showed, being the first to adopt a new feature involves navigating new challenges to give users the best possible experience. However, collaborating with Google teams and Android's open source community can help make the process smoother.
Get started
Learn how to revolutionize your app's user experience with Ultra HDR images.
27 Aug 2024 5:02pm GMT
The Recorder app on Pixel sees a 24% boost in engagement with Gemini Nano-powered feature
Posted by Terence Zhang - Developer Relations Engineer and Kristi Bradford - Product Manager
Google Pixel's Recorder app allows people to record, transcribe, save, and share audio. To make it easier for users to manage and revisit their recordings, Recorder's developers turned to Gemini Nano, a powerful on-device large language model (LLM). This integration introduces an AI-powered audio summarization feature to help users more easily find the right recordings and quickly grasp key points.
Earlier this month, Gemini Nano got a power boost with the introduction of the new Gemini Nano with Multimodality model. The Recorder app is already leveraging this upgrade to summarize longer voice recordings, with improved processing for grammar and nuance.
Meeting user needs with on-device AI
Recorder developers initially experimented with a cloud-based solution, achieving impressive levels of performance and quality. However, to prioritize accessibility and privacy for their users, they sought an on-device solution. The development of Gemini Nano presented a perfect opportunity to build the concise audio summaries users were looking for, all while keeping data processing on the device.
Gemini Nano is Google's most efficient model for on-device tasks. "Having the LLM on-device is beneficial to users because it provides them with more privacy, less latency, and it works wherever they need since there's no internet required," said Kristi Bradford, the product manager for Pixel's essential apps.
To achieve better results, Recorder also fine-tuned the model using data that matches its use case. This was done using low-rank adaptation (LoRA), which enables Gemini Nano to consistently output three-bullet-point descriptions of the transcript that include any speaker names, key takeaways, and themes.
AICore, an Android system service that centralizes runtime, delivery, and critical safety components for LLMs, significantly streamlined Recorder's adoption of Gemini Nano. The availability of a developer SDK for running GenAI workloads allowed the team to build the transcription summary feature in just four months with only four developers, an efficiency gained by eliminating the need to maintain in-house models.
Since its release, Recorder users have been using the new AI-powered summarization feature an average of 2 to 5 times daily, and the number of overall saved recordings has increased by 24%. This feature has contributed to a significant increase in app engagement and user retention overall. The Recorder team also noted that feedback about the new feature has been positive, with many users citing the time the AI-powered summarization saves them.
The next big evolution: Gemini Nano with multimodality
Recorder developers also implemented the latest Gemini Nano model, known as Gemini Nano with multimodality, to further improve its summarization feature on Pixel 9 devices. The new model is significantly larger than the previous one on Pixel 8 devices, and it's more capable, accurate, and scalable. The new model also has expanded token support that lets Recorder summarize much longer transcripts than before. Gemini Nano with multimodality is currently only available on Pixel 9 devices.
Integrating Gemini Nano with multimodality required another round of fine-tuning. However, Recorder developers were able to use the original Gemini Nano model's fine-tuning dataset as a foundation, streamlining the development process.
To fully leverage the new model's capabilities, Recorder developers expanded their dataset with support for longer voice recordings, implemented refined evaluation methods, and established launch criteria metrics focused on grammar and nuance. The inclusion of grammar as a new metric for assessing inference quality was made possible solely by the enhanced capabilities of Gemini Nano with Multimodality.
Doing more with on-device AI
"Given the novelty of GenAI, the whole team had fun learning how to use it," said Kristi. "Now, we're empowered to push the boundaries of what we can accomplish while meeting emerging user needs and opportunities. It's truly brought a new level of creativity to problem-solving and experimentation. We've already demoed at least two more GenAI features that help people get time back internally for early feedback, and we're excited about the possibilities ahead."
Get started
Learn more about how to bring the benefits of on-device AI with Gemini Nano to your apps.
27 Aug 2024 5:00pm GMT
#TheAndroidShow: diving into the latest from Made by Google, including wearables, foldables, Gemini and more!
Posted by Anirudh Dewani, Director - Android Developer Relations
We just dropped our summer episode of #TheAndroidShow, on YouTube and on developer.android.com, where we unpacked all of the goodies coming out of this month's Made by Google event and what you as Android developers need to know. With two new Wear OS 5 watches, we show you how to get building for the wrist. And with the latest foldable from Google, the Pixel 9 Pro Fold, we show how you can leverage out of the box APIs and multi-window experiences to make your apps adaptive for this new form factor.
Building for Pixel 9 Pro Fold with Adaptive UIs
With foldables like the Pixel 9 Pro Fold, users have options for how to engage and multitask based on the display they are using and the folded state of their device. Building apps that adapt based on screen size and device postures allows you to scale your UI for mobile, foldables, tablets and beyond. You can read more about how to get started building for devices like the Pixel 9 Pro Fold, or learn more about building for large screens.
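One common way to drive that kind of adaptive layout in Compose is to branch on the window size class. This is a hedged sketch, assuming the material3-window-size-class artifact is on the classpath; the composables named ExpandedLayout and CompactLayout are placeholders, not from this post:

    import android.app.Activity
    import androidx.compose.material3.windowsizeclass.ExperimentalMaterial3WindowSizeClassApi
    import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
    import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
    import androidx.compose.runtime.Composable

    // Illustrative only: pick a layout based on the current window width, which
    // changes as a foldable like the Pixel 9 Pro Fold is folded, unfolded, or
    // used in multi-window mode.
    @OptIn(ExperimentalMaterial3WindowSizeClassApi::class)
    @Composable
    fun AdaptiveScreen(activity: Activity) {
        val sizeClass = calculateWindowSizeClass(activity)
        when (sizeClass.widthSizeClass) {
            WindowWidthSizeClass.Expanded -> ExpandedLayout()  // placeholder composable
            else -> CompactLayout()                            // placeholder composable
        }
    }

    @Composable fun ExpandedLayout() { /* two-pane UI, placeholder */ }
    @Composable fun CompactLayout() { /* single-pane UI, placeholder */ }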
Preparing for Pixel Watch 3: Wear OS 5 and Larger Displays
With Pixel Watch 3 ringing in the stable release of Wear OS 5, there's never been a better time to prepare your app for the behavior changes from Wear OS 5 and larger screen sizes from Pixel. We covered how to get started building for wearables like Pixel Watch 3, and you can learn more about building for Wear OS 3.
Gemini Nano, with multi-modality
We also took you behind the scenes with Gemini Nano with multimodality, Google's latest model for on-device AI. Gemini Nano, the smallest version of the Gemini model family, can be executed on-device on capable Android devices including the latest Pixel 9. We caught up with the team to hear more about how the Pixel Recorder team used Gemini Nano to summarize users' transcripts of audio recordings, with data remaining on-device.
And some voices from Android devs like you!
Across the show, we heard from some amazing developers building excellent apps across devices. Like Rex Jin and Bismark Ito, Android Developers at Meta: they told us how the team at Instagram was able to add Ultra HDR in less than three months, dramatically improving the user experience. Later, SAP told us how, within 5 minutes, they integrated NavigationSuiteScaffold, swiftly adapting their navigation UI to different window sizes. And AllTrails told us they are seeing 60% higher monthly retention from Wear OS users… pretty impressive!
Have an idea for our next episode of #TheAndroidShow? It's your conversation with the broader Android developer community, this time hosted by Huyen Tue Dao and John Zoeller. You'll hear the latest from the developers and engineers who build Android. You can watch the full show on YouTube and on developer.android.com/events/show!
27 Aug 2024 4:55pm GMT
23 Aug 2024
Android Developers Blog
Adding 16 KB Page Size to Android
Posted by Steven Moreland - Staff Software Engineer, Sandeep Patil - Principal Software Engineer
A page is the granularity at which an operating system manages memory. Most CPUs today support a 4 KB page size and so the Android OS and applications have historically been built and optimized to run with a 4 KB page size. ARM CPUs support the larger 16 KB page size. When Android uses this larger page size, we observe an overall performance boost of 5-10% while using ~9% additional memory.
In order to improve the operating system performance overall and to give device manufacturers an option to make this trade-off, Android 15 can run with 4 KB or 16 KB page sizes.
The very first 16 KB-enabled Android system will be made available on select devices as a developer option, so you can use it to test and fix your applications (if needed) to prepare for Android devices with 16 KB page sizes in the near future.
Details
In most CPUs, dedicated hardware called memory management units (MMUs) translate addresses from what a program is using to a physical location in memory. This translation is done on a page-size basis. Every time a program needs more memory, the operating system needs to get involved and fill out a "page table" entry, assigning that piece of memory to a process. When the page size is 4 times larger, there is 4 times less bookkeeping. So, the system can spend more time making sure your videos look great, games play well, and applications run smoothly, and less time filling out low-level operating system paperwork.
Unlike 32-bit/64-bit mode, a page size is not an Application Binary Interface (ABI). In other words, once an application is fixed to be page size agnostic, the same application binary can run on both 4 KB and 16 KB devices.
In Android 15, we've refactored Android from the ground up to support running at different page sizes, thus making it page-size agnostic.
Major OS Changes
On new Android 15 based devices:
- The compile-time PAGE_SIZE macro has been replaced with calls to getpagesize(2) at runtime.
- All OS binaries are 16 KB aligned (-Wl,-z,max-page-size=16384). Third-party applications and libraries may not be 16 KB aligned.
- All OS binaries are built with separate loadable segments (-Wl,-z,separate-loadable-segments) to ensure all memory regions mapped into a process are readable, which some applications depend on.
Many of our other OS components have been rewritten to avoid assuming the page size and to optimize for larger page size when available.
Filesystems
For performant operation, file system block size must match the page size. EROFS and F2FS file systems have been made 16 KB compatible, as has the UFS storage layer.
On 4 KB systems, ELF executable file size increases due to additional padding added for 16 KB alignment (-Wl,-z,max-page-size=16384 option), but several optimizations help us avoid this cost.
- Sparse read-only file systems ensure that zero pages created for additional padding for 16 KB alignment are not written to disk. For example, EROFS knows a certain range of a file is zero filled, and it will not need to do any IO if this part of the file is accessed.
- Read-writeable file systems handle zero pages on a case-by-case basis. For example, in Android 15, PackageManager reclaims this space for files installed as part of applications.
Memory Management
- The Linux page cache has been modified not to read ahead for these extra padding spaces, thereby saving unnecessary memory load.
- These pages are blank padding that programs never read; they are the space in between the usable parts of the program, present purely for alignment reasons.
Linux Kernel
The Linux kernel is deeply tied to a specific page size, so we must choose which page size to use when building the kernel, while the rest of the operating system remains the same.
Android Applications
All applications with native code or native dependencies need to be recompiled for compatibility with 16 KB page size devices.
Since most native code within Android applications and SDKs has been built with a 4 KB page size in mind, it needs to be re-aligned to 16 KB so the binaries are compatible with both 4 KB and 16 KB devices. For most applications and SDKs, this is a two-step process:
- Rebuild the native code with 16 KB alignment.
- Test and fix on a 16 KB device or emulator in case there are hardcoded assumptions about page size (see the sketch below).
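For the second step, a common fix is to derive sizes from the runtime page size rather than a hardcoded 4096. A minimal Kotlin sketch, with illustrative helper names:

    import android.system.Os
    import android.system.OsConstants

    // Illustrative helpers: query the page size at runtime so the same code
    // behaves correctly on both 4 KB and 16 KB devices.
    fun pageSizeBytes(): Long = Os.sysconf(OsConstants._SC_PAGESIZE)

    // Round a requested size up to a whole number of pages, e.g. when sizing
    // buffers or mapped regions that must be page-aligned.
    fun alignUpToPageSize(sizeBytes: Long): Long {
        val page = pageSizeBytes()
        return ((sizeBytes + page - 1) / page) * page
    }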
Please see our developer documentation for more information.
NOTE: If you are an SDK or tools developer, you should add 16 KB support as soon as possible so that applications can work on 16 KB using your SDK or tools.
Developing for 16 KB devices
There are no production Android devices available today, or expected for the Android 15 release, that support a 16 KB page size. To bridge this gap, we are working with our partners to make a developer option available on existing devices. This developer option is meant for application development and testing. We are also making a 16 KB emulator target available for developers in Android Studio.
16 KB Developer option on device
In Android 15, we implemented a developer option that lets users switch between 16 KB and 4 KB page size on the device in order to test their application with either of the page sizes. This option is available on Pixel 8 and Pixel 8 Pro starting in the Android 15 QPR1 Beta, and we're collaborating closely with SoC and OEM partners to enable the option on additional devices soon.
When built for 16 KB pages, the same binary works with both 4 KB and 16 KB devices; the Linux kernel, however, has to be separate. To solve this, we've added a way to include an extra kernel that you can switch to as a developer option. The kernels are incrementally compressed, with one copy for each page size, and take roughly 12-16 MB of space on disk.
Using the 16 KB developer option requires wiping the device once and an unlocked bootloader. After flashing, developers can switch between 4 KB and 16 KB mode by toggling the developer option over a reboot.
If you are a device manufacturer or SoC developer, see our instructions on how to enable and use this.
16 KB on x86_64 desktops
While 16 KB pages are an ARM-only feature, we recognize that many developers are using emulators on x86_64 hardware. To bridge this gap for developers, we've added support to emulate a 16 KB page size for applications on x86_64 emulators. In this mode, the kernel runs in 4 KB mode, but all addresses exposed to applications are aligned to 16 KB, and arguments to function calls such as mmap(...MAP_FIXED...) are verified to be 16 KB aligned.
To get started, you can download and run the 16 KB pages emulator inside the Android Studio SDK manager. This way, even if you don't have access to ARM hardware, you can still ensure your applications will work with 16 KB page size.
Future
In this post, we've discussed the technical details of how we are restructuring memory in Android to get faster, more performant devices. Android 15 and AOSP work with 16 KB pages, and devices can now implement 16 KB pages as a development option. This required changes from the bottom to the top of the operating system, in our development tooling, and throughout the Android ecosystem.
We look forward to seeing application and SDK developers take advantage of these options and prepare for more performant and efficient Android devices in the near future.
23 Aug 2024 4:00pm GMT
20 Aug 2024
Android Developers Blog
Tune in for our summer episode of #TheAndroidShow on August 27!
In just a few days, on Tuesday, August 27 at 10AM PT, we'll be dropping our summer episode of #TheAndroidShow, on YouTube and on developer.android.com. In this quarterly show, we'll be unpacking all of the goodies coming out of this month's Made by Google event and what you as Android developers need to know!
With two new Wear OS 5 watches, we'll show you how to get building for the wrist. And with the latest foldable from Google, the Pixel 9 Pro Fold, we'll show how you can leverage out of the box APIs and multi-window experiences to make your apps adaptive for this new form factor.
Plus, Gemini Nano now has Multimodality, and we'll be going behind-the-scenes to show you how teams at Google are using the latest model for on-device AI.
#TheAndroidShow is your conversation with the Android developer community, this time hosted by Huyen Tue Dao and John Zoeller. You'll hear the latest from the developers and engineers who build Android.
Don't forget to tune in live on August 27 at 10AM PT, live on YouTube and on developer.android.com/events/show!
20 Aug 2024 5:00pm GMT
#WeArePlay | Meet the founders turning their passions into thriving businesses
Posted by Robbie McLachlan, Developer Marketing
Our celebration of app and game businesses continues with #WeArePlay stories from founders around the world. Today, we're spotlighting the people who turned their passions into thriving businesses - from a passion for art and design from one game creator, to a passion for saving the environment from an app maker.
Sydney, Australia
During a game developer competition, Brian - alongside his wife and three other participants - built a challenging monster- and bullet-dodging game called No Humanity within 48 hours, winning first place. From this, Brian founded his gaming company SweatyChair. No Humanity was improved and launched a week later and has grown to over 9 million downloads. His passion for technology and art drives him to champion a more interactive gaming experience, where players can create their own elements and play them in the game.
Pune, India
When Prachi travelled to Maharashtra, she saw first-hand how the effects of climate change impacted the locals. Her passion for protecting the environment led her to ask herself "What can I do about climate change?". She vowed to reduce her carbon footprint and went on to create Cool The Globe, an app that helps people track daily actions to lower their emissions. Her dedication earned her the Young Changemaker Award in India. Next, she aims to add community dashboards for schools and organizations to follow their collective climate efforts.
Chatou, France
Benoit is passionate about providing nutritious food for his children, so he went on a mission to buy healthier food for his family. Whilst shopping, he found label-reading tiring and wished for a tool to check ingredients automatically. He shared his idea with his brother François and close friend Julie. Together, the trio saw a real need to combine their passions for nutrition and technology and spent a weekend hammering out their concept before presenting the idea in a food hackathon they went on to win. Their winning project laid the groundwork for their app Yuka, which scans product labels to reveal their ingredients and health impact.
London, UK
When the loneliness of early motherhood hit after her first child, Michelle sought community and answers from online forums. When the forums didn't provide the safe space she was looking for, her passion for building community along with her 10 years of experience in social networking inspired her to create Peanut. The app helps moms to connect, make friends, and find support. With over 2.3 million downloads and a budding global community, the Peanut team recently revamped the main feed for greater personalization and introduced an ad-free option.
Discover more global #WeArePlay stories and share your favorites.
20 Aug 2024 4:00pm GMT