06 Nov 2025

Android Developers Blog

#WeArePlay: Meet the people making apps & games to improve your health

Posted by Robbie McLachlan - Developer Marketing

In our latest #WeArePlay stories, we meet the founders building apps and games that make health and wellness fun and easy for everyone on Google Play, from getting heavy sleepers jumping into their mornings to turning mental wellness into an immersive adventure game.

Here are a few of our favorites:

Jay, founder of Delightroom

Seoul, South Korea

With over 90 million downloads, Jay's app Alarmy helps heavy sleepers to get moving with smart, challenge-based alarms.

While studying computer science, Jay's biggest challenge wasn't debugging code, it was waking up for his morning classes. This struggle sparked an idea: what if there were an app that could help anyone get out of bed? Jay built a basic version and showcased it at a tech event, where it quickly drew attention. That prototype evolved into Alarmy, an app that uses creative missions, like solving math problems, doing squats, or snapping a photo, to get people moving so they fully wake up. Now available in over 30 languages and 170+ countries, Jay and his team are expanding beyond alarms, adding sleep tracking and wellness features to help even more people start their day right.


Ellie and Hazel, co-founders of Mind Monsters Games

Cambridge, UK

Ellie and Hazel's game, Betwixt, makes mental wellness more fun by using an interactive story to reduce anxiety.

While working in London's tech scene and later writing about psychology, Ellie noticed a pattern: many people turned to video games to ease stress but struggled to engage with traditional meditation. That's when she came up with the idea to combine the two. While curating a book on mental health, she met Hazel, a therapist, former world champion boxer, and game lover, and together they created Betwixt, an interactive fantasy adventure that guides players on a journey of self-discovery. By blending storytelling with evidence-based techniques, the game helps reduce anxiety and promote well-being. Now, with three new projects in development, Ellie and Hazel strive to turn play into a mental health tool.



Kevin and Robin, co-founders of MapMyFitness

Boulder, Colorado, U.S.

Kevin and Robin's app, MapMyFitness, helps a global community of runners and cyclists map their routes and track their training.

Growing up across the Middle East, the Philippines, and Africa, Kevin developed a fascination with maps. In San Diego, while training for his second marathon, he built a simple MapMyRun website to map his routes. When other runners joined, former professional cyclist Robin reached out with a vision to also help cyclists discover and share maps. Together they founded MapMyFitness in 2007 and launched MapMyRide soon after, blending Kevin's technical expertise and Robin's athletic know-how. Today, the MapMy suite powers millions of walkers, runners, and riders with adaptive training plans, guided workouts, live safety tracking, and community challenges, all in support of their mission to "get everybody outside".


Discover more #WeArePlay stories from founders across the globe.

06 Nov 2025 5:00pm GMT

03 Nov 2025

Android Developers Blog

Health Connect Jetpack v1.1.0 is now available!

Posted by Brenda Shaw, Health & Home Partner Engineering Technical Writer

Health Connect is Android's on-device platform designed to simplify connectivity between health and fitness apps, allowing developers to build richer experiences with secure, centralized data. Today, we're thrilled to announce two major updates that empower you to create more intelligent, connected, and nuanced applications: the stable release of the Health Connect Jetpack library 1.1.0 and expanded device type support.

Health Connect Jetpack Library 1.1.0 is Now Stable

We are excited to announce that the Health Connect Jetpack library has reached its 1.1.0 stable release. This milestone provides you with the confidence and reliability needed to build production-ready health and fitness experiences at scale.

Since its inception, Health Connect has grown into a robust platform supporting over 50 different data types across activity, sleep, nutrition, medical records, and body measurements. The journey to this stable release has been marked by significant advancements driven by developer feedback. Throughout the alpha and beta phases, we introduced critical features like background reads for continuous data monitoring, historical data sync to provide users with a comprehensive long-term view of their health, and support for new data types like Personal Health Records, Exercise Routes, Training Plans, and Skin Temperature. This stable release encapsulates all of these enhancements, offering a powerful and dependable foundation for your applications.
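
To make this concrete, here is a minimal sketch (not from the post) of a basic read through the Jetpack client. It assumes the user has already granted the steps read permission and uses the library's HealthConnectClient, ReadRecordsRequest, and TimeRangeFilter APIs.

import android.content.Context
import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.StepsRecord
import androidx.health.connect.client.request.ReadRecordsRequest
import androidx.health.connect.client.time.TimeRangeFilter
import java.time.Instant
import java.time.temporal.ChronoUnit

// Reads the last 24 hours of steps and sums the counts.
suspend fun readLastDayOfSteps(context: Context): Long {
    val client = HealthConnectClient.getOrCreate(context)
    val response = client.readRecords(
        ReadRecordsRequest(
            recordType = StepsRecord::class,
            timeRangeFilter = TimeRangeFilter.between(
                Instant.now().minus(1, ChronoUnit.DAYS),
                Instant.now()
            )
        )
    )
    return response.records.sumOf { it.count }
}

The same read pattern applies to the other record types mentioned above.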

Expanded Device Type Support

Accurate data representation is key to building trust and delivering precise insights. To that end, we have significantly expanded the list of supported device types in Health Connect; the expanded list will be available in 1.2.0-alpha02. When data is written to the platform, specifying the source device is crucial metadata that helps data readers understand its context and quality.

The newly supported device types include:

This expansion ensures data is represented more accurately, allowing you to build more nuanced experiences based on the specific hardware used to record it.
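
As an illustration (not from the post), the sketch below writes a steps record tagged with a watch as its source device. The Metadata.autoRecorded factory and the exact construction details are assumptions about the 1.1.0 metadata API and may differ slightly, particularly once the expanded device types land in 1.2.0-alpha02.

import androidx.health.connect.client.HealthConnectClient
import androidx.health.connect.client.records.StepsRecord
import androidx.health.connect.client.records.metadata.Device
import androidx.health.connect.client.records.metadata.Metadata
import java.time.Instant
import java.time.ZoneOffset
import java.time.temporal.ChronoUnit

// Writes an automatically recorded steps record, tagging the wearable that
// produced it so readers can judge the data's context and quality.
suspend fun writeStepsFromWatch(client: HealthConnectClient) {
    val watch = Device(type = Device.TYPE_WATCH)   // source-device metadata
    val record = StepsRecord(
        count = 1200,
        startTime = Instant.now().minus(30, ChronoUnit.MINUTES),
        startZoneOffset = ZoneOffset.UTC,
        endTime = Instant.now(),
        endZoneOffset = ZoneOffset.UTC,
        metadata = Metadata.autoRecorded(device = watch),  // assumed factory name
    )
    client.insertRecords(listOf(record))
}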

What's Next?

We encourage all developers to upgrade to the stable 1.1.0 Health Connect Jetpack library to take full advantage of these new features and improvements.
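
For most projects the upgrade is a one-line dependency change. The snippet below assumes a Kotlin DSL Gradle build and the standard androidx.health.connect:connect-client artifact.

// Module-level build.gradle.kts
dependencies {
    implementation("androidx.health.connect:connect-client:1.1.0")
}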

We are committed to the continued growth of the Health Connect platform. We can't wait to see the incredible experiences you build!

03 Nov 2025 5:00pm GMT

30 Oct 2025

Android Developers Blog

ML Kit’s Prompt API: Unlock Custom On-Device Gemini Nano Experiences

Posted by Caren Chang, Developer Relations Engineer, Chengji Yan, Software Engineer, and Penny Li, Software Engineer

AI is making it easier to create personalized app experiences that transform content into the right format for users. We previously enabled developers to integrate with Gemini Nano through ML Kit GenAI APIs tailored for specific use cases like summarization and image description.

Today marks a major milestone for Android's on-device generative AI. We're announcing the Alpha release of the ML Kit GenAI Prompt API. This API allows you to send natural language and multimodal requests to Gemini Nano, addressing the demand for more control and flexibility when building with generative models.


Partners like Kakao are already building with Prompt API, creating unique experiences with real-world impact. You can experiment with Prompt API's powerful features today with minimal code.



Move beyond pre-built to custom on-device GenAI

Prompt API moves beyond pre-built functionality to support custom, app-specific GenAI use cases, allowing you to create unique features with complex data transformation. Prompt API uses Gemini Nano on-device to process data locally, enabling offline capability and improved user privacy.


Key use cases for Prompt API:

Prompt API allows for highly customized GenAI use cases. Here are some recommended examples:

  • Image understanding: Analyzing photos for classification (e.g., creating a draft social media post or identifying tags such as "pets," "food," or "travel").

  • Intelligent document scanning: Using a traditional ML model to extract text from a receipt, and then categorizing each item with Prompt API (see the sketch after this list).

  • Transforming data for the UI: Analyzing long-form content to create a short, engaging notification title.

  • Content prompting: Suggesting topics for new journal entries based on a user's preference for themes.

  • Content analysis: Classifying customer reviews into a positive, neutral, or negative category.

  • Information extraction: Extracting important details about an upcoming event from an email thread.
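
Here is a sketch of the intelligent document scanning use case above (not from the post): ML Kit Text Recognition extracts the receipt text, then the extracted text is passed to Prompt API for categorization. The Prompt API call mirrors the snippet in the Implementation section below; the coroutine wiring and the prompt wording are assumptions about how you might glue the two steps together.

import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions
import kotlinx.coroutines.tasks.await
// Generation, generateContentRequest, and TextPart come from the ML Kit GenAI
// Prompt library, as in the Implementation snippet below (imports omitted).

suspend fun categorizeReceipt(receiptBitmap: Bitmap) {
    // Step 1: a traditional ML model extracts the raw text from the receipt.
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    val receiptText = recognizer.process(InputImage.fromBitmap(receiptBitmap, 0))
        .await()
        .text

    // Step 2: Prompt API categorizes each line item on-device.
    Generation.getClient().generateContent(
        generateContentRequest(
            TextPart(
                "For each line item in this receipt, return the item and a category " +
                    "(groceries, dining, transport, other):\n$receiptText"
            ),
        ) {
            temperature = 0.2f
        },
    )
}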


Implementation

Prompt API lets you create custom prompts and set optional generation parameters with just a few lines of code:


Generation.getClient().generateContent(
   generateContentRequest(
       // Multimodal input: an image part plus a text instruction
       ImagePart(bitmapImage),
       TextPart("Categorize this image as one of the following: car, motorcycle, bike, scooter, other. Return only the category as the response."),
   ) {
       // Optional generation parameters
       temperature = 0.2f     // lower values give more deterministic output
       topK = 10              // sample from the 10 most likely tokens
       candidateCount = 1     // request a single response candidate
       maxOutputTokens = 10   // keep the response short
   },
)

For more detailed examples of implementing Prompt API, check out the official documentation and sample on GitHub.


Gemini Nano, performance, and prototyping


Prompt API currently performs best on the Pixel 10 device series, which runs the latest version of Gemini Nano (nano-v3). This version of Gemini Nano is built on the same architecture as Gemma 3n, the model we first shared with the open model community at I/O.


The shared foundation between Gemma 3n and nano-v3 enables developers to more easily prototype features. For those without a Pixel 10 device, you can start experimenting with prompts today by prototyping with Gemma 3n locally.


For the full list of devices that support GenAI APIs, refer to our device support documentation.


Learn more

Start implementing Prompt API in your Android apps today with guidance from our official documentation and the sample on GitHub.


30 Oct 2025 7:51pm GMT