13 Dec 2018

feedTalkAndroid

LG V40 ThinQ review: Most underrated phone of the year

LG has had a pretty strange flagship phone strategy lately, with near constant releases and a blurred line between their G and V series of devices. Just this past year we've seen multiple flagship devices that are barely differentiated at all, and with the V30 dropping the secondary display it's pretty much just a slightly […]



13 Dec 2018 6:35pm GMT

YouTube Rewind 2018 is now the most disliked video ever

Google dropped YouTube Rewind 2018 less than a week ago as a look back at the year in YouTube videos, and the internet really did not take it well. It took literally 6 days for the 8-minute video to become the most disliked video in YouTube history, beating out Justin Bieber's Baby music video, and with a significantly higher […]



13 Dec 2018 4:45pm GMT

NVIDIA Shield picks up Amazon Music, 5.1 YouTube, and some holiday discounts

The NVIDIA Shield is easily our favorite media streaming box, and it just keeps getting better. NVIDIA has announced that the box is receiving its latest software update that brings a few new apps and features, and it's getting a holiday discount to boot. Shield Experience 7.2 is already rolling out and adds Amazon Music […]



13 Dec 2018 3:29pm GMT

After arrest, Huawei CFO now out on bail

Earlier this month Huawei CFO Meng Wanzhou was arrested in Canada pursuant to an arrest warrant issued by the U.S. government. Meng was accused of violating U.S. sanctions against Iran by tricking banks into helping Huawei do business with Iran. This is not the first time a Chinese company has run afoul of U.S. sanctions […]



13 Dec 2018 1:40pm GMT

12 Dec 2018

feedTalkAndroid

Talk Android’s 2018 Gift Guide for the Holidays

Need to get started on your holiday shopping? Don't panic, Talk Android is here to help you out. We've rounded up our favorite phones, smart home devices, and accessories to give you some ideas of what to get for the holiday season, and we've tagged some reviews so you can read up on everything before […]



12 Dec 2018 7:00pm GMT

feedAndroid Developers Blog

New Keystore features keep your slice of Android Pie a little safer

Posted by Brian Claire Young and Shawn Willden, Android Security; and Frank Salim, Google Pay

New Android Pie Keystore Features

The Android Keystore provides application developers with a set of cryptographic tools that are designed to secure their users' data. Keystore moves the cryptographic primitives available in software libraries out of the Android OS and into secure hardware. Keys are protected and used only within the secure hardware to protect application secrets from various forms of attacks. Keystore gives applications the ability to specify restrictions on how and when the keys can be used.

Android Pie introduces new capabilities to Keystore. We will be discussing two of these new capabilities in this post. The first enables restrictions on key use so as to protect sensitive information. The second facilitates secure key use while protecting key material from the application or operating system.

Keyguard-bound keys

There are times when a mobile application receives data but doesn't need to immediately access it if the user is not currently using the device. Sensitive information sent to an application while the device screen is locked must remain secure until the user wants access to it. Android Pie addresses this by introducing keyguard-bound cryptographic keys. When the screen is locked, these keys can be used in encryption or verification operations, but are unavailable for decryption or signing. If the device is currently locked with a PIN, pattern, or password, any attempt to use these keys will result in an invalid operation. Keyguard-bound keys protect the user's data while the device is locked, and are only available when the user needs them.

Keyguard binding and authentication binding both function in similar ways, except with one important difference. Keyguard binding ties the availability of keys directly to the screen lock state while authentication binding uses a constant timeout. With keyguard binding, the keys become unavailable as soon as the device is locked and are only made available again when the user unlocks the device.

It is worth noting that keyguard binding is enforced by the operating system, not the secure hardware. This is because the secure hardware has no way to know when the screen is locked. Hardware-enforced Android Keystore protection features, like authentication binding, can be combined with keyguard binding for a higher level of security. Furthermore, since keyguard binding is an operating system feature, it's available to any device running Android Pie.

Keys for any algorithm supported by the device can be keyguard-bound. To generate or import a key as keyguard-bound, call setUnlockedDeviceRequired(true) on the KeyGenParameterSpec or KeyProtection builder object at key generation or import.
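
For illustration, a minimal Kotlin sketch of generating a keyguard-bound AES key is shown below; the key alias and the GCM block mode are arbitrary choices for this example, and setUnlockedDeviceRequired(true) is the only keyguard-binding-specific call.

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

fun generateKeyguardBoundKey(): SecretKey {
    val keyGenerator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
    keyGenerator.init(
        KeyGenParameterSpec.Builder(
            "keyguard_bound_key",  // hypothetical key alias
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            // Decryption and signing with this key fail while the device is locked;
            // encryption and verification remain available.
            .setUnlockedDeviceRequired(true)
            .build())
    return keyGenerator.generateKey()
}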

Secure Key Import

Secure Key Import is a new feature in Android Pie that allows applications to provision existing keys into Keystore in a more secure manner. The origin of the key, a remote server that could be sitting in an on-premise data center or in the cloud, encrypts the secure key using a public wrapping key from the user's device. The encrypted key in the SecureKeyWrapper format, which also contains a description of the ways the imported key is allowed to be used, can only be decrypted in the Keystore hardware belonging to the specific device that generated the wrapping key. Keys are encrypted in transit and remain opaque to the application and operating system, meaning they're only available inside the secure hardware into which they are imported.

Secure Key Import is useful in scenarios where an application intends to share a secret key with an Android device, but wants to prevent the key from being intercepted or from leaving the device. Google Pay uses Secure Key Import to provision some keys on Pixel 3 phones, to prevent the keys from being intercepted or extracted from memory. There are also a variety of enterprise use cases, such as S/MIME encryption keys being recovered from a Certificate Authority's escrow so that the same key can be used to decrypt emails on multiple devices.

To take advantage of this feature, please review this training article. Please note that Secure Key Import is a secure hardware feature, and is therefore only available on select Android Pie devices. To find out if the device supports it, applications can generate a KeyPair with PURPOSE_WRAP_KEY.
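
As a rough sketch of that check (not taken from the training article), an app might try to generate a wrapping key pair and treat failure as absence of support; the alias and the RSA / OAEP parameters below are assumptions.

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator

fun supportsSecureKeyImport(): Boolean =
    try {
        val generator = KeyPairGenerator.getInstance(
            KeyProperties.KEY_ALGORITHM_RSA, "AndroidKeyStore")
        generator.initialize(
            KeyGenParameterSpec.Builder(
                "wrapping_key",  // hypothetical alias for the wrapping key
                KeyProperties.PURPOSE_WRAP_KEY)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_RSA_OAEP)
                .build())
        generator.generateKeyPair()
        true  // the device accepted a PURPOSE_WRAP_KEY key pair
    } catch (e: Exception) {
        false  // Secure Key Import is not supported on this device
    }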

12 Dec 2018 6:39pm GMT

feedTalkAndroid

[Deal] Check out these Holiday Deals from Anker (US/UK)

With Christmas Day less than two weeks away, time is running out to find the perfect gift this festive season. If the strain of shopping for others is stressing you out, there's no rule that says you can't treat yourself to a little something along the way. With these holiday deals from Anker ranging from projectors, wireless […]



12 Dec 2018 5:16pm GMT

[TA Deals] Learn to build your own robot with the Complete Robotics eBook Bundle (90% off)

If you want to dig into building your own robotics projects, you'll need somewhere to start, and we're offering a bundle of ebooks to help you out. The Complete Robotics eBook Bundle has 5 comprehensive ebooks that will show you the ins and outs of Robot Operating Systems and give you a few projects to […]



12 Dec 2018 4:04pm GMT

Latest case renders add to the picture of multiple Galaxy S10 cameras

The latest leak regarding the forthcoming Samsung Galaxy S10 smartphones seems to confirm much of the information we have seen regarding the camera configurations for three of the Galaxy S10 models believed to be launching this spring. The renders are for cases from Olixar via retailer Mobile Fun, both of which are considered reputable sources. […]



12 Dec 2018 2:20pm GMT

Open Beta OTA brings Android Pie to OnePlus 5 and OnePlus 5T

A day after launching the faster, more expensive McLaren Edition of the OnePlus 6T, OnePlus has announced an Open Beta program that will bring Android Pie to the OnePlus 5 and 5T. Besides the bump to the latest version of Android, the OxygenOS Open Beta includes new navigational gestures, a new user interface, and the […]



12 Dec 2018 2:19pm GMT

Vivo’s latest phone drops the front-facing camera, but adds a full screen on the back

Display notches are good for cutting the screen around the front-facing cameras on smartphones, so what should you do to get an all-body display without a notch? Drop the front-facing camera of course! But what about when you want to take a selfie? Simply use the rear cameras, because Vivo is putting a screen back […]



12 Dec 2018 12:35am GMT

11 Dec 2018

feedAndroid Developers Blog

Effective foreground services on Android

Posted by Keith Smyth

This is the fourth in a series of blog posts in which we outline strategies and guidance for Android with regard to power.

A process is not forever

Android is a mobile operating system designed to work with constrained memory and battery. For this reason, a typical Android application can have its process killed by the system to recover memory. The process being killed is chosen based on a ranking system of how important that process is to the user at the time. Here, in descending order, is the ranking of each class of process. The higher the rank, the less likely that process is to be killed.

Native: Native Linux daemon processes are responsible for running everything (including the process killer itself).
System: The system_server process, which is responsible for maintaining this list.
Persistent apps: Persistent apps like Phone, Wi-Fi, and Bluetooth are crucial to keeping your device connected and able to provide its most basic features.
Foreground app: A foregrounded / top (user-visible) app is the app the user is currently using.
Perceptible apps: These are apps that the user can perceive are running. For example, an app with a foreground service playing audio, or an app set as the preferred voice interaction service, will be bound to the system_server, effectively promoting it to the Perceptible level.
Service: Background services like the download manager and sync manager.
Home: The Launcher app, containing the desktop wallpaper.
Previous app: The previous foreground app the user was using. The previous app lives above the cached apps as it's the most likely app the user will switch to next.
Cached apps: These are the remaining apps that have been opened by the user and then backgrounded. They will be killed first to recover memory, and have the most restrictions applied to them on modern releases. You can read about them on the Behavior Changes pages for Nougat, Oreo, and Pie.



The foreground service

There is nothing wrong with becoming a cached app: Sharing the user's device is part of the lifecycle that every app developer must accept to keep a happy ecosystem. On a device with a dead battery, 100% of the apps go unused. And an app blamed for killing the battery could even be uninstalled.

However, there are valid scenarios for promoting your app to the foreground. The prerequisites for using a foreground service are that your app is executing a task that is immediate, important (it must complete), perceptible to the user (most often because the user started it), and has a well-defined start and finish. If a task in your app meets these criteria, then it can be promoted to the foreground until the task is complete.

There are some guidelines around creating and managing foreground services. For all API levels, a persistent notification with at least PRIORITY_LOW must be shown while the service is running. When targeting API 26+ you will also need to set the notification channel to at least IMPORTANCE_LOW. The notification must give the user a way to cancel the work; this cancellation can be tied to the action itself: for example, stopping a music track can also stop the music-playback service. Lastly, the title and description of the foreground service notification must accurately describe what the foreground service is doing.
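
As a minimal sketch of those guidelines, assuming API 26+ and a hypothetical music-playback service, the notification below uses a low-importance channel, describes the work accurately, and exposes a Stop action so the user can cancel it:

import android.app.NotificationChannel
import android.app.NotificationManager
import android.app.PendingIntent
import android.app.Service
import android.content.Intent
import android.os.IBinder
import androidx.core.app.NotificationCompat

class PlaybackService : Service() {

    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        // A low-importance channel keeps the persistent notification unobtrusive.
        getSystemService(NotificationManager::class.java).createNotificationChannel(
            NotificationChannel("playback", "Playback", NotificationManager.IMPORTANCE_LOW))

        // The Stop action gives the user a way to cancel the work.
        val stopIntent = PendingIntent.getService(
            this, 0,
            Intent(this, PlaybackService::class.java).setAction("ACTION_STOP"),
            PendingIntent.FLAG_UPDATE_CURRENT)

        val notification = NotificationCompat.Builder(this, "playback")
            .setSmallIcon(R.drawable.ic_play)         // hypothetical drawable resources
            .setContentTitle("Playing: Morning mix")  // accurate description of the work
            .setContentText("Track 3 of 12")
            .setPriority(NotificationCompat.PRIORITY_LOW)
            .addAction(R.drawable.ic_stop, "Stop", stopIntent)
            .build()

        startForeground(1, notification)
        return START_STICKY
    }

    override fun onBind(intent: Intent?): IBinder? = null
}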

To read more about foreground services, including several important updates in recent releases, see Running a service in the foreground.

Foreground service use cases

Some good example usages of foreground services are playing music, completing a purchase transaction, high-accuracy location tracking for exercise, and logging sensor data for sleep. The user initiates all of these activities; they must happen immediately, have an explicit beginning and end, and can all be cancelled by the user at any time.

Another good use case for a foreground service is to ensure that critical, immediate tasks (e.g. saving a photo, sending a message, processing a purchase) are completed if the user switches away from the application and starts a new one. If the device is under high memory pressure, it could kill the previous app while it is still processing, causing data loss or unexpected behavior. An elegantly written app will detect that it has been backgrounded and respond by promoting its short, critical task to the foreground to complete.

If you feel you need your foreground service to stay alive permanently, then this is an indicator that a foreground service is not the right answer. Many alternatives exist to both meet the requirements of your use case, and be the most efficient with power.

Alternatives

Passive location tracking is a bad use case for foreground services. If the user has consented to being tracked, use the FusedLocationProvider API to receive bundled location updates at longer intervals, or use the geofencing API to be efficiently notified when a user enters or leaves a specified area. Read more about how to optimize location for battery.
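
For illustration only (not from the post), batched, lower-power updates with the fused location provider might look like the sketch below; the interval values and the receiver class are assumptions.

import android.app.PendingIntent
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import com.google.android.gms.location.LocationRequest
import com.google.android.gms.location.LocationResult
import com.google.android.gms.location.LocationServices
import java.util.concurrent.TimeUnit

// Hypothetical receiver for batched location updates.
class LocationUpdatesReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        LocationResult.extractResult(intent)?.locations?.forEach { location ->
            // Record the location.
        }
    }
}

fun startPassiveTracking(context: Context) {
    val request = LocationRequest.create()
        .setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY)
        .setInterval(TimeUnit.MINUTES.toMillis(15))   // ask for updates infrequently
        .setMaxWaitTime(TimeUnit.HOURS.toMillis(1))   // allow updates to be batched

    val pendingIntent = PendingIntent.getBroadcast(
        context, 0,
        Intent(context, LocationUpdatesReceiver::class.java),
        PendingIntent.FLAG_UPDATE_CURRENT)

    LocationServices.getFusedLocationProviderClient(context)
        .requestLocationUpdates(request, pendingIntent)
}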

If you wish to pair with a Bluetooth companion device, use CompanionDeviceManager. For reconnecting to the device, BluetoothLeScanner has a startScan method that takes a PendingIntent that will fire when a narrow filter is met.

If your app has work that must be done but does not have to happen immediately, WorkManager or JobScheduler will schedule the work at the best time for the entire system. If the work must start immediately but can stop if the user stops using the app, we recommend ThreadPools or Kotlin Coroutines.
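
For example, a deferrable upload might be expressed as a WorkManager request along these lines; this is a sketch, and the worker name and network constraint are assumptions:

import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// Hypothetical worker that uploads queued analytics logs.
class UploadLogsWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        val uploaded = true  // stand-in for the real upload call
        return if (uploaded) Result.success() else Result.retry()
    }
}

fun scheduleLogUpload() {
    val request = OneTimeWorkRequestBuilder<UploadLogsWorker>()
        .setConstraints(
            Constraints.Builder()
                .setRequiredNetworkType(NetworkType.CONNECTED)  // wait for connectivity
                .build())
        .build()
    // WorkManager picks the best time for the system to run this work.
    WorkManager.getInstance().enqueue(request)
}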

DownloadManager facilitates handling long running downloads in the background. It will even handle retries over poor connections and system reboots for you.

If you believe you have a use case that isn't handled, let us know!

Conclusion

Used correctly, the foreground service is a great way to tell Android that your app is doing something important to the user. Making the right decision on which tool to use remains the best way to provide a premium experience on Android for all users. Use the community and Google to help with these important decisions, and always respect the user first.

11 Dec 2018 10:35pm GMT

feedTalkAndroid

[TA Deals] Maximize your time with the Complete Productivity Booster bundle! (98% off)

You've only got 24 hours in a day and you have to spend some of it sleeping, so until someone figures out how to either add more hours or eliminate the need for resting, we're just going to have to figure out how to maximize what time we do have. Fortunately for all of us, […]



11 Dec 2018 8:04pm GMT

Samsung launches Galaxy A8s with a punch-hole display and triple rear cameras

Notched displays are yesterday's trend; the next display craze involves a punch-hole that sits in the upper reaches of the screen that houses the front-facing camera. After the Honor View 20 was confirmed to launch on December 26 in China with a punch-hole display, Samsung has taken the opportunity to announce a punch-hole design of […]



11 Dec 2018 3:56pm GMT

A customer service nightmare with Google Fi

Google Fi is a pretty good deal, on the surface. It's a cheap, flexible carrier plan that utilizes some fancy newer technology, and it's clearly been somewhat successful for Google. But if there's one thing Google seems to struggle with, it's customer service. Not too long ago we had a write-in from one of our […]



11 Dec 2018 3:02pm GMT

The OnePlus 6T McLaren Edition officially announced

This morning OnePlus made their latest version of the OnePlus 6T official with the announcement of the OnePlus 6T McLaren Edition. The partnership with McLaren was revealed a couple of weeks ago, and like other partnerships we have seen between phone makers and non-phone brands and products, OnePlus hopes the tweaks to the OnePlus 6T will […]



11 Dec 2018 1:29pm GMT

10 Dec 2018

feedTalkAndroid

[TA Deals] Improve your writing skills with a discounted WhiteSmoke subscription

WhiteSmoke Writing Assistant is a program that will help you clean up and improve your writing skills by checking your grammar, diction, and spelling, which guarantees you'll be writing better emails and stories in no time. Whether you're using this for business or personal projects, WhiteSmoke offers a plugin for all major browsers and checks […]



10 Dec 2018 11:43pm GMT

06 Dec 2018

feedAndroid Developers Blog

Google Play services discontinuing updates for API levels 14 and 15

Posted by Sam Spencer, Technical Program Manager, Google Play

The Android Ice Cream Sandwich (ICS) platform is seven years old and the active device count has been below 1% for some time. Consequently, we are deprecating support for ICS in future releases of Google Play services. For devices running ICS, the Google Play Store will no longer update the Play services APK beyond version 14.7.99.

What does this mean for you as an application developer?

The Google Play services SDK contains the interfaces to the functionality provided by the Google Play services APK, running as background services. The functionality required by the current, released SDK versions is already present on ICS devices with Google Play services and will continue to work without change.

With the SDK version changes earlier this year, each library can be independently released and may update its own minSdkVersion. Individual libraries are not required to change based on this deprecation. Newer SDK components may continue to support API levels 14 and 15 but many will update to require the higher API level. For applications that support API level 16 or greater, you will not need to make any changes to your build. For applications that support API levels 14 or 15, you may continue to build and publish your app to devices running ICS, but you will encounter build errors when updating to newer SDK versions. The error will look like this:

Error:Execution failed for task ':app:processDebugManifest'.
> Manifest merger failed : uses-sdk:minSdkVersion 14 cannot be smaller than version 16 declared in library [com.google.android.gms:play-services-FOO:16.X.YY]
        Suggestion: use tools:overrideLibrary="com.google.android.gms:play_services" to force usage

Unfortunately, the stated suggestion will not help you successfully run your app on older devices. In order to use the newer SDK, you will need to use one of the following options:

1. Target API level 16 as the minimum supported API level.

This is the recommended course of action. To discontinue support for API levels that will no longer receive Google Play services updates, simply increase the minSdkVersion value in your app's build.gradle to at least 16. If you update your app in this way and publish it to the Play Store, users of devices with less than that level of support will not be able to see or download the update. However, they will still be able to download and use the most recently published version of the app that does target their device.

A very small percentage of all Android devices are using API levels less than 16. You can read more about the current distribution of Android devices. We believe that many of these old devices are not actively being used.

If your app still has a significant number of users on older devices, you can use multiple APK support in Google Play to deliver an APK that uses Google Play services 14.7.99. This is described below.

2. Build multiple APKs to support devices with an API level less than 16.

Along with some configuration and code management, you can build multiple APKs that support different minimum API levels, with different versions of Google Play services. You can accomplish this with build variants in Gradle. First, define build flavors for legacy and newer versions of your app. For example, in your build.gradle, define two different product flavors, with two different compile dependencies for the stand-in example play-services-FOO component:

productFlavors {
    legacy {
        minSdkVersion 14
        versionCode 1401  // Min API level 14, v01
    }
    current {
        minSdkVersion 16
        versionCode 1601  // Min API level 16, v01
    }
}

dependencies {
    legacyCompile 'com.google.android.gms:play-services-FOO:16.0.0'
    currentCompile 'com.google.android.gms:play-services-FOO:17.0.0'
}

In the above situation, there are two product flavors being built against two different versions of play-services-FOO. This will work fine as long as you only call APIs that are available in the 16.0.0 library. If you need to call newer APIs made available in 17.0.0, you will have to create your own compatibility library for the newer API calls so that they are only built into the version of the application that can use them (a sketch follows the list below):

  1. Declare a Java interface that exposes the higher-level functionality you want to perform that is only available in current versions of Play services.
  2. Build two Android libraries that implement that interface. The "current" implementation should call the newer APIs as desired. The "legacy" implementation should no-op or otherwise act as desired with older versions of Play services. The interface should be added to both libraries.
  3. Conditionally compile each library into the app using "legacyCompile" and "currentCompile" dependencies as illustrated for play-services-FOO above.
  4. In the app's code, call through to the compatibility library whenever newer Play APIs are required.
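
A minimal Kotlin sketch of that pattern follows; FooCompat and doNewFooThing() are hypothetical names standing in for whatever play-services-FOO functionality you need to gate. In practice each implementation lives in its own library module (selected by the legacyCompile / currentCompile dependencies above), typically under the same class name so the app code is identical across flavors.

import android.content.Context

// Shared interface, added to both the "legacy" and the "current" library.
interface FooCompat {
    fun doNewFooThing(context: Context)
}

// In the "current" library, compiled against play-services-FOO:17.0.0.
class FooCompatCurrent : FooCompat {
    override fun doNewFooThing(context: Context) {
        // Call the newer 17.0.0 API here.
    }
}

// In the "legacy" library, compiled against play-services-FOO:16.0.0.
class FooCompatLegacy : FooCompat {
    override fun doNewFooThing(context: Context) {
        // No-op, or fall back to behavior that 16.0.0 supports.
    }
}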

After building a release APK for each flavor, you then publish them both to the Play Store, and the device will update with the most appropriate version for that device. Read more about multiple APK support in the Play Store.

06 Dec 2018 11:11pm GMT

05 Dec 2018

feedAndroid Developers Blog

Improve media and messaging app integrations with Android Auto

Posted by John Posavatz, Product Manager, Android Auto

At Google I/O this past May, we provided a sneak preview of several new media and messaging features for Android Auto. We are happy to announce that these features are now ready in our latest version of Android Auto, and we encourage you to update your Android Auto implementations to take advantage of them!

New Media Features

Several new features make it easier for users to find the media content that they're looking for. Check out the full documentation for them on our Android developer site.

Search results

After performing an Assistant-based search (e.g. "OK Google, play [artist / album / playlist / book / song / genre]"), music auto-plays as before, and in addition you can now provide your own list of categorized results. First, you'll need to declare support for onSearch() in your MediaBrowserServiceCompat implementation, and then override it. Android Auto forwards a user's search terms to this method whenever a user invokes the "Show more results" affordance.

Android Auto calls onSearch() with the same Bundle of extras as the one defined for Android Auto's playFromSearch() calls. Unlike playFromSearch(), onSearch() includes a Result<List<MediaItem>> that can be used to return multiple MediaItems back to Android Auto for display.

You can then categorize search results using title items. For example, music apps may include categories such as "Artists", "Albums" and "Songs".
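
A minimal sketch of a MediaBrowserServiceCompat that overrides onSearch() is shown below; findMediaItems() is a hypothetical helper, and onGetRoot() / onLoadChildren() are reduced to stubs.

import android.os.Bundle
import android.support.v4.media.MediaBrowserCompat
import androidx.media.MediaBrowserServiceCompat

class MyMediaService : MediaBrowserServiceCompat() {

    override fun onSearch(
        query: String,
        extras: Bundle?,
        result: Result<List<MediaBrowserCompat.MediaItem>>
    ) {
        // findMediaItems() is a hypothetical helper that queries your catalog and
        // returns items grouped under title items such as "Artists" and "Albums".
        result.sendResult(findMediaItems(query))
    }

    private fun findMediaItems(query: String): List<MediaBrowserCompat.MediaItem> =
        emptyList()  // stand-in for a real catalog lookup

    override fun onGetRoot(clientPackageName: String, clientUid: Int, rootHints: Bundle?) =
        BrowserRoot("root", null)

    override fun onLoadChildren(
        parentId: String,
        result: Result<List<MediaBrowserCompat.MediaItem>>
    ) {
        result.sendResult(emptyList())
    }
}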

Improved browse

Content has been "brought forward" out of the drawer, and now resides within the main view of the media screen. Within this new layout, you have the option of displaying your browse tree either as a simple list or as a large grid of album art / icons. We recommend using lists wherever the text description is most useful in describing content (e.g. a list of track names or podcast episodes), whereas the larger grid view is most appropriate where the album art / icon aids quick identification and selection.

To start applying content styles, you should set a global default for how your media items are displayed by applying specific constants in the BrowserRoot extras bundle returned by the onGetRoot() function. Android Auto reads the extras associated with each item in the browse tree, looks for specific constants (detailed in our documentation), and then uses the presence/value of each key to apply the appropriate content style.
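
As an illustrative sketch only, this might look like the following from onGetRoot(); the extra key strings and hint values below are assumptions based on that documentation and should be confirmed against it.

import android.os.Bundle
import androidx.media.MediaBrowserServiceCompat.BrowserRoot

// Assumed constants; confirm the exact keys and values in the Content Style documentation.
const val CONTENT_STYLE_SUPPORTED = "android.media.browse.CONTENT_STYLE_SUPPORTED"
const val CONTENT_STYLE_BROWSABLE_HINT = "android.media.browse.CONTENT_STYLE_BROWSABLE_HINT"
const val CONTENT_STYLE_PLAYABLE_HINT = "android.media.browse.CONTENT_STYLE_PLAYABLE_HINT"
const val CONTENT_STYLE_LIST = 1
const val CONTENT_STYLE_GRID = 2

fun buildStyledRoot(): BrowserRoot {
    val extras = Bundle().apply {
        putBoolean(CONTENT_STYLE_SUPPORTED, true)
        putInt(CONTENT_STYLE_BROWSABLE_HINT, CONTENT_STYLE_LIST)  // lists for browsable nodes
        putInt(CONTENT_STYLE_PLAYABLE_HINT, CONTENT_STYLE_GRID)   // grids for playable items
    }
    return BrowserRoot("root", extras)
}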

In order to change the default behavior for a specific node, the Content Style API supports overriding the default global hint for any browsable node's children. The same extras as above can be supplied as extras in the MediaDescription. If these extras are present, then the children of that browsable node will have the new Content Style hint.

Finally, you can organize content using title items to group media in a list. To do this, every media item in the group needs to declare an extra in its media description with the same string value, which you can localize. This value is used as the group title. You also need to pass the media items together and in the order you want them displayed.

Additional metadata icons

In both browsing and playback views, you are now able to show icons next to media items that have explicit language, have been downloaded to the user's device, or are unplayed / partially played / completed (e.g. for audiobooks and podcasts).

Android Auto inspects extras for each item in the browse tree and looks for the specific keys for the indicators, and then uses the presence/value of each key to add the appropriate indicator.

You should add these extras to content returned by your MediaBrowserService. "Explicit" and "Downloaded" are boolean extras (set to true to show the indicator), while "Completion State" is an integer extra set to the appropriate value. Apps should create an extras bundle that includes one or more of these keys and pass that to MediaDescription.Builder.setExtras().

Updates to Messaging

We are deprecating CarExtender, in favor of the more robust and broadly beneficial MessagingStyle API. Migrating to MessagingStyle is very straightforward, and not only extends messaging support beyond Android Auto (for example, to Google Assistant), but it also brings the following immediate benefits for Android Auto:

Group messaging

Formerly, Android Auto's support for group messages was lacking - in most cases, notifications were never shown. MessagingStyle addresses that, so that your users never miss a message.

MMS / RCS support

Android Auto only supports SMS natively through the system SMS broadcast. MessagingStyle allows support for SMS apps that themselves support RCS and MMS.

To enable your app to provide messaging services on Android Auto, it must do the following (a minimal sketch follows the list):

  1. Build and send NotificationCompat.MessagingStyle objects that contain reply and mark-as-read Action objects.
  2. Handle replying and marking a conversation as read with a Service.
  3. Configure your manifest to indicate the app supports Android Auto.
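
A condensed sketch of step 1 follows; the channel id, drawables, conversation content, and MyMessagingService are hypothetical. The reply and mark-as-read actions carry semantic actions so that Android Auto (and the Assistant) can invoke them without showing UI.

import android.app.PendingIntent
import android.app.Service
import android.content.Context
import android.content.Intent
import android.os.IBinder
import androidx.core.app.NotificationCompat
import androidx.core.app.Person
import androidx.core.app.RemoteInput

// Hypothetical Service that handles the reply and mark-as-read intents (step 2).
class MyMessagingService : Service() {
    override fun onBind(intent: Intent?): IBinder? = null
}

fun buildConversationNotification(context: Context): android.app.Notification {
    fun serviceIntent(action: String): PendingIntent = PendingIntent.getService(
        context, 0,
        Intent(context, MyMessagingService::class.java).setAction(action),
        PendingIntent.FLAG_UPDATE_CURRENT)

    val replyAction = NotificationCompat.Action.Builder(
            R.drawable.ic_reply, "Reply", serviceIntent("ACTION_REPLY"))
        .setSemanticAction(NotificationCompat.Action.SEMANTIC_ACTION_REPLY)
        .setShowsUserInterface(false)
        .addRemoteInput(RemoteInput.Builder("extra_reply_text").build())
        .build()

    val markAsReadAction = NotificationCompat.Action.Builder(
            R.drawable.ic_done, "Mark as read", serviceIntent("ACTION_MARK_AS_READ"))
        .setSemanticAction(NotificationCompat.Action.SEMANTIC_ACTION_MARK_AS_READ)
        .setShowsUserInterface(false)
        .build()

    val style = NotificationCompat.MessagingStyle(Person.Builder().setName("Me").build())
        .addMessage("Are we still on for tonight?", System.currentTimeMillis(),
            Person.Builder().setName("Alice").build())

    return NotificationCompat.Builder(context, "messages")  // hypothetical channel id
        .setSmallIcon(R.drawable.ic_message)
        .setStyle(style)
        .addAction(replyAction)
        .addInvisibleAction(markAsReadAction)  // surfaced by Auto/Assistant, hidden on the phone
        .build()
}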

By moving to MessagingStyle, your app will not only gain automotive support, but also a richer mobile notification experience, including inline replying, image previews, and conversation history, all within the notification shade.

An in-depth guide to implementing (or updating to) MessagingStyle can be found in our online developer documentation.

Thanks for continuing to support Android Auto!

05 Dec 2018 11:17pm GMT

04 Dec 2018

feedAndroid Developers Blog

Android codelab courses are here!

Posted by Jocelyn Becker, Senior Program Manager, Google Developer Training

The Google Developers Training team recently published an updated version of our Android Developer Fundamentals course as a series of Google codelabs.

Android Developer Fundamentals course landing page

Codelabs made their debut as onsite tutorials at Google I/O in 2015, and have skyrocketed in popularity as a way for developers to learn how to use Google technologies, APIs, and SDKs. A codelab is a short, self-contained tutorial that walks you through how to do a specific task. More than 2 million users have worked through Google codelabs this year.

Our Android courses were originally intended as classroom-based courses. However, we found that many people work through the courses on their own, outside of formal teaching programs. So, when we updated the Android Developer Fundamentals course, in addition to supporting classroom-based learning, we made the material available as a sequential series of codelabs.

Android Developer Fundamentals course

The updated Android Developer Fundamentals (V2) course includes lessons on using the Room database and other Architecture Components. All the apps have been updated to reflect that the Empty Activity template in Android Studio uses ConstraintLayout, and we've updated all apps to a later version of Android Studio. For more details on the differences, see the release notes.

Advanced Android course

We've also re-published the Advanced Android Developer course as a series of codelabs. This course provides step-by-step instructions for learning about more advanced topics, and adding features to your app to improve user engagement and delight. Learn how to add maps to your apps, create custom views, use SurfaceView to draw directly to the screen, and much more.

Screenshots for apps that display a customized map marker, a customized fan controller view, and an Android hiding in the dark

Teaching materials for both courses

Android logo wearing graduation cap

If you want to teach either course in the classroom, or use it as the basis of a study jam, the complete package is still available, including slide decks, source code in GitHub, and concept guides, in addition to the step-by-step codelabs.

04 Dec 2018 7:00pm GMT

03 Dec 2018

feedAndroid Developers Blog

Celebrating the developers behind the best apps and games of 2018

Posted by Purnima Kochikar, Director, Business Development, Games & Applications

Today, we announced our annual Best of 2018 list, highlighting the best content on Google Play. But ever wonder about the makers behind your favorite apps and games like PUBG MOBILE or Tasty? Well, we wanted to take a moment to celebrate the developers that brought to life the best U.S. apps and games of 2018. And this year was jam-packed with entertainment - all thanks to the developers who pushed the envelope and sparked our imaginations.

Check out the full rundown of the developers behind the best apps and games of 2018 on Google Play:

Best App of 2018

Most Entertaining Apps

Best Self Improvement Apps

Best Daily Helper Apps

Best Hidden Gem Apps

Best Game of 2018

Most Competitive Games

Most Innovative Games

Best Indie Games

Most Casual Games

03 Dec 2018 6:00pm GMT

30 Nov 2018

feedAndroid Developers Blog

SDK Developers: sign up to stay up to date with latest tips, news and updates

Posted by Parul Soi, Strategic Partner Development Manager, Google Play

Android is fortunate to have an incredibly rich ecosystem of SDKs and libraries to help developers build great apps more efficiently. These SDKs can range from developer tools that simplify complicated feature development to end-to-end services such as analytics, attribution, engagement, etc. All of these tools can help Android developers reduce cost and ship products faster.

For the past few months, various teams at Google have been working together on new initiatives to expand the resources and support we offer for the developers of these tools. Today, SDK developers can sign up and register their SDK with us to receive updates that will keep them informed about Google Play policy changes, updates to the platform, and other useful information.

Our goal is to provide you with whatever you need to better serve your technical and business goals in helping your partners create better apps or games. Going forward we will be sharing further resources to help SDK developers, so stay tuned for more updates.

If you develop an SDK or library for Android, make sure you sign up and register your SDK to receive updates about the latest tools and information to help serve customers better. And, if you're an app developer, share this blogpost with the developers of the SDKs that you use!



30 Nov 2018 6:37pm GMT

27 Nov 2018

feedAndroid Developers Blog

Fast Pair Update

Posted by Seang Chau (VP Engineering)

Last year we announced Fast Pair, a set of specs that make it easier to connect Bluetooth headsets and speakers to Android devices.

Today, we're making it easier for people to connect Fast Pair compatible accessories to devices associated with the same Google Account. Fast Pair will connect accessories to a user's current and future Android phones (6.0+), and we're adding support for Chromebooks in 2019.

Fast Pair provides stress-free Bluetooth pairing for your Android phone.

We have been working closely with dozens of manufacturers, many of which are bringing new Fast Pair compatible devices to market over the coming months. This includes Jaybird, who is already selling the Tarah Wireless Sport Headphones, as well as upcoming products from prominent brands such as Anker SoundCore, Bose, and many more.

The Jaybird Tarah, Fast Pair compatible sport headphones already available in the market.

We also want to make it easy for manufacturers to ship compatible products with minimal additional engineering effort. We collaborated with industry leading Bluetooth audio companies such as Airoha Technology Corp., BES and Qualcomm Technologies International, Ltd. (QTIL) to add native Fast Pair support to their software development kits.

If you are a manufacturer interested in creating Fast Pair compatible Bluetooth devices, just head to our Nearby Devices console to register your product and validate that it has correctly implemented the Fast Pair specs.

27 Nov 2018 6:00pm GMT

20 Nov 2018

feedAndroid Developers Blog

Wear OS by Google: final API 28 emulator with new redesigned UI

Posted by Hoi Lam, Lead Developer Advocate

Today, we are launching the final API 28 emulator image for developers. This image will also contain the UI redesign we announced in August. You should verify that your app's notifications work well with the new notification stream, and that your app works well against the changes previously announced for API 28.

What's new in API 28?

Here are the highlights of the API 28 emulator:

Please note that changes related to the new notification stream are being rolled out to devices supporting API 25 and up. You can test how your notification will behave now, before roll-out is complete, by using the API 28 emulator image.

Keep your feedback coming

Just because we have now reached the release build does not mean that our work stops here. Please continue to submit all bug and enhancement requests via the Wear OS by Google issue tracker.

Finally, we are grateful for all of your valuable feedback during the developer preview. It played an important role in our decision making process - especially concerning App Standby Buckets. Thank you!

20 Nov 2018 5:56pm GMT

19 Nov 2018

feedAndroid Developers Blog

Getting screen brightness right for every user

Posted by Ben Murdoch, Software Engineer and Michael Wright, Android Framework Engineer

The screen on a mobile device is critical to the user experience. The improved Adaptive Brightness feature in Android P automatically manages the display to match your preferences for brightness level so you get the best experience, whatever the current lighting environment.

Screen brightness in Android is managed via Quick Settings or via the settings app

(Settings → Display → Brightness Level).

In Android Pie, Adaptive Brightness is enabled by default (Settings → Display → Adaptive Brightness).

While enabled, Android automatically selects a screen brightness that's suitable for the user's current ambient light conditions. Prior to Android Pie, the brightness slider didn't represent an absolute screen brightness level, but a global adjustment factor for boosting or reducing the device manufacturer's preset screen brightness curve across all ambient light levels:

* Setting the slider to center resulted in the device using the preset.

* Setting the slider to the left of center applied a negative scale factor, making the screen dimmer than the preset.

* Setting the slider to the right of center applied a positive scale factor, making the screen brighter than the preset.

So, under low ambient light conditions, you might prefer a brighter screen than the preset level and move the brightness slider up accordingly. But, because that adjustment would boost the brightness at all ambient light levels, you might find yourself needing to move the brightness slider back down in brighter ambient light. And so on, back and forth.

To improve this experience, we've introduced two important changes to screen brightness in Android Pie:

  1. Better slider control
  2. Personalization of the brightness level

Better slider control

The slider control now represents absolute screen brightness rather than the global adjustment factor. That means that you may see it move on its own while Adaptive Brightness is on. This is expected behavior!

Humans perceive brightness on a logarithmic rather than linear scale [1]. That means changes in screen brightness are much more noticeable when the screen is dark versus bright. To match this difference in perception, we updated the brightness slider UI in the notification shade and System Settings app to work on a more human-like scale. This means you may need to move the slider farther to the right than you did on previous versions of Android for the same absolute screen brightness, and that when setting a dark screen brightness you have more precise control over exactly which brightness to set.

Personalization of screen brightness

Prior to Android P, when developing a new Android device the device manufacturer would determine a baseline mapping from ambient brightness to screen brightness based on the display manufacturer's recommendation and a bit of experimentation. All users of that device would receive the same baseline mapping and, while using the device, move the brightness slider around to set their global adjustment factor. To determine the final screen brightness, the system would first look at the room brightness and the baseline mapping to find the default screen brightness for that situation, and then apply the global adjustment factor.

What we found is that in many cases this global adjustment factor didn't adequately capture personal preferences - that is, users tended to change the slider often for new lighting environments.

For Android Pie we worked with researchers from DeepMind to build a machine learning model that will observe the interactions that a user makes with the screen brightness slider, and train on-device to personalize the mapping of ambient light level to screen brightness.

This means that Android will learn what screen brightness is comfortable for a user in a given lighting environment. The user teaches it by manually adjusting the slider, and, as the software trains over time, the user should need to make fewer manual adjustments. In testing the feature, we've observed that after a week almost half our test users are making fewer manual adjustments while the total number of slider interactions across all internal test users was reduced by over 10%. The model that we've developed is updatable and will be tuned based on real world usage now that Android Pie has been released. This means that the model will continue to get better over time.

We believe that screen brightness is one of those things that should just work, and these changes in Android Pie are a step towards realizing that. For the best performance no matter where you are, models run directly on the device rather than in the cloud, and train overnight while the device charges.

The improved Adaptive Brightness feature is now available on Pixel devices and we are working with our OEM partners now to incorporate Adaptive Brightness into Android Pie builds for their devices.

Notes


  1. https://en.wikipedia.org/wiki/Weber%E2%80%93Fechner_law#Vision

19 Nov 2018 7:07pm GMT

15 Nov 2018

feedAndroid Developers Blog

Combating Potentially Harmful Applications with Machine Learning at Google: Datasets and Models

Posted by Mo Yu, Damien Octeau, and Chuangang Ren, Android Security & Privacy Team

In a previous blog post, we talked about using machine learning to combat Potentially Harmful Applications (PHAs). This blog post covers how Google uses machine learning techniques to detect and classify PHAs. We'll discuss the challenges in the PHA detection space, including the scale of data, the correct identification of PHA behaviors, and the evolution of PHA families. Next, we will introduce two of the datasets that make the training and implementation of machine learning models possible, such as app analysis data and Google Play data. Finally, we will present some of the approaches we use, including logistic regression and deep neural networks.

Using machine learning to scale

Detecting PHAs is challenging and requires a lot of resources. Our security experts need to understand how apps interact with the system and the user, analyze complex signals to find PHA behavior, and evolve their tactics to stay ahead of PHA authors. Every day, Google Play Protect (GPP) analyzes over half a million apps, which makes a lot of new data for our security experts to process.

Leveraging machine learning helps us detect PHAs faster and at a larger scale. We can detect more PHAs just by adding additional computing resources. In many cases, machine learning can find PHA signals in the training data without human intervention. Sometimes, those signals are different than signals found by security experts. Machine learning can take better advantage of this data, and discover hidden relationships between signals more effectively.

There are two major parts of Google Play Protect's machine learning protections: the data and the machine learning models.

Data sources

The quality and quantity of the data used to create a model are crucial to the success of the system. For the purpose of PHA detection and classification, our system mainly uses two anonymous data sources: data from analyzing apps and data from how users experience apps.

App data

Google Play Protect analyzes every app that it can find on the internet. We created a dataset by decomposing each app's APK and extracting PHA signals with deep analysis. We execute various processes on each app to find particular features and behaviors that are relevant to the PHA categories in scope (for example, SMS fraud, phishing, privilege escalation). Static analysis examines the different resources inside an APK file while dynamic analysis checks the behavior of the app when it's actually running. These two approaches complement each other. For example, dynamic analysis requires the execution of the app regardless of how obfuscated its code is (obfuscation hinders static analysis), and static analysis can help detect cloaking attempts in the code that may in practice bypass dynamic analysis-based detection. In the end, this analysis produces information about the app's characteristics, which serve as a fundamental data source for machine learning algorithms.

Google Play data

In addition to analyzing each app, we also try to understand how users perceive that app. User feedback (such as the number of installs, uninstalls, user ratings, and comments) collected from Google Play can help us identify problematic apps. Similarly, information about the developer (such as the certificates they use and their history of published apps) contribute valuable knowledge that can be used to identify PHAs. All these metrics are generated when developers submit a new app (or new version of an app) and by millions of Google Play users every day. This information helps us to understand the quality, behavior, and purpose of an app so that we can identify new PHA behaviors or identify similar apps.

In general, our data sources yield raw signals, which then need to be transformed into machine learning features for use by our algorithms. Some signals, such as the permissions that an app requests, have a clear semantic meaning and can be directly used. In other cases, we need to engineer our data to make new, more powerful features. For example, we can aggregate the ratings of all apps that a particular developer owns, so we can calculate a rating per developer and use it to validate future apps. We also employ several techniques to focus on interesting data. To create compact representations of sparse data, we use embeddings. To help streamline the data and make it more useful to models, we use feature selection. Depending on the target, feature selection helps us keep the most relevant signals and remove irrelevant ones.

By combining our different datasets and investing in feature engineering and feature selection, we improve the quality of the data that can be fed to various types of machine learning models.

Models

Building a good machine learning model is like building a skyscraper: quality materials are important, but a great design is also essential. Like the materials in a skyscraper, good datasets and features are important to machine learning, but a great algorithm is essential to identify PHA behaviors effectively and efficiently.

We train models to identify PHAs that belong to a specific category, such as SMS-fraud or phishing. Such categories are quite broad and contain a large number of samples given the number of PHA families that fit the definition. Alternatively, we also have models focusing on a much smaller scale, such as a family, which is composed of a group of apps that are part of the same PHA campaign and that share similar source code and behaviors. On the one hand, having a single model to tackle an entire PHA category may be attractive in terms of simplicity but precision may be an issue as the model will have to generalize the behaviors of a large number of PHAs believed to have something in common. On the other hand, developing multiple PHA models may require additional engineering efforts, but may result in better precision at the cost of reduced scope.

We use a variety of modeling techniques to modify our machine learning approach, including supervised and unsupervised ones.

One supervised technique we use is logistic regression, which has been widely adopted in the industry. These models have a simple structure and can be trained quickly. Logistic regression models can be analyzed to understand the importance of the different PHA and app features they are built with, allowing us to improve our feature engineering process. After a few cycles of training, evaluation, and improvement, we can launch the best models in production and monitor their performance.

For more complex cases, we employ deep learning. Compared to logistic regression, deep learning is good at capturing complicated interactions between different features and extracting hidden patterns. The millions of apps in Google Play provide a rich dataset, which is advantageous to deep learning.

In addition to our targeted feature engineering efforts, we experiment with many aspects of deep neural networks. For example, a deep neural network can have multiple layers and each layer has several neurons to process signals. We can experiment with the number of layers and neurons per layer to change model behaviors.

We also adopt unsupervised machine learning methods. Many PHAs use similar abuse techniques and tricks, so they look almost identical to each other. An unsupervised approach helps define clusters of apps that look or behave similarly, which allows us to mitigate and identify PHAs more effectively. We can automate the process of categorizing that type of app if we are confident in the model or can request help from a human expert to validate what the model found.

PHAs are constantly evolving, so our models need constant updating and monitoring. In production, models are fed with data from recent apps, which helps them stay relevant. However, new abuse techniques and behaviors need to be continuously detected and fed into our machine learning models to be able to catch new PHAs and stay on top of recent trends. This is a continuous cycle of model creation and updating that also requires tuning to ensure that the precision and coverage of the system as a whole matches our detection goals.

Looking forward

As part of Google's AI-first strategy, our work leverages many machine learning resources across the company, such as tools and infrastructures developed by Google Brain and Google Research. In 2017, our machine learning models successfully detected 60.3% of PHAs identified by Google Play Protect, covering over 2 billion Android devices. We continue to research and invest in machine learning to scale and simplify the detection of PHAs in the Android ecosystem.

Acknowledgements

This work was developed in joint collaboration with Google Play Protect, Safe Browsing and Play Abuse teams with contributions from Andrew Ahn, Hrishikesh Aradhye, Daniel Bali, Hongji Bao, Yajie Hu, Arthur Kaiser, Elena Kovakina, Salvador Mandujano, Melinda Miller, Rahul Mishra, Sebastian Porst, Monirul Sharif, Sri Somanchi, Sai Deep Tetali, and Zhikun Wang.

15 Nov 2018 7:08pm GMT

An Update on Project Treble

Posted by Iliyan Malchev, Project Treble Architect

Last week at the 2018 Android Dev Summit, we demonstrated the benefits of Project Treble by showing the same Generic System Image (GSI) running on devices from different OEMs. We highlighted the availability of GSI for Android 9 Pie that app developers can use to develop and test their apps with Android 9 on any Treble-compliant device.

Launched with Android Oreo in 2017, Project Treble has enabled OEMs and silicon vendors to develop and deploy Android updates faster than what was previously possible. Since then, we've been working with device manufacturers to define Vendor Interfaces (VINTF) and draw a clear separation between vendor and framework code on Android devices.

Going forward, all devices launching with Android 9 Pie or later will be Treble-compliant and take full advantage of the Treble architecture to deliver faster upgrades. Thanks to Treble, we expect to see more devices from OEMs running Android 9 Pie at the end of 2018 as compared to the number of devices that were running Android Oreo at the end of 2017.

The GSI is built from the latest available AOSP source code, including the latest bug fixes contributed by OEMs. Device manufacturers already use GSI to validate the implementation of the vendor interface on their devices, and Android app developers can now harness the power of the GSI to test their apps across different devices. With GSI, you can test your apps on a pure AOSP version of the latest Android dessert, including the latest features and behavior changes, on any Treble-compliant device that's unlocked for flashing.

We're continuing to work on making GSI even more accessible and useful for app developers. For example, the GSI could enable early access to future Android platform builds that you can run on a Treble-compliant Android 9 device, so you could start app development and validation before the AOSP release.

If you are interested in trying GSI today, check out the documentation for full instructions on how to build GSI yourself and flash it to your Treble-compliant device.

15 Nov 2018 1:06am GMT

09 Nov 2018

feedAndroid Developers Blog

Get your app ready for foldable phones

Posted by Leo Sei, Product Manager on Android

As you may have heard from the Android Dev Summit, we announced that we're expanding support in Android to include Foldables, in preparation for upcoming devices from hardware partners like Samsung.

Here is a set of recommendations and information to make sure your application provides a great user experience on this new form factor (you can also check out the dedicated Android Dev Summit session here).

1. Screen continuity

On this new form factor, your application could be transitioned from one screen to another automatically (e.g. when folding / unfolding a foldable phone).

During this transition, your app will receive a configuration change for the new layout (and possibly density in some cases).

To provide a great user experience when changing from one screen to the other, you want to make sure your app properly supports runtime configuration changes.
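
For instance, state kept in a ViewModel survives the configuration change triggered by folding or unfolding, so the activity can rebuild its layout for the new screen without losing where the user was; the playback position here is a hypothetical example.

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.lifecycle.ViewModel
import androidx.lifecycle.ViewModelProviders

// Hypothetical state holder for a video player screen.
class PlayerViewModel : ViewModel() {
    var positionMs: Long = 0L
}

class PlayerActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val model = ViewModelProviders.of(this).get(PlayerViewModel::class.java)
        // model.positionMs is retained across the fold / unfold configuration change,
        // so playback can resume at the same position on the new screen.
    }
}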

How to test: Emulators for various devices should become available soon (e.g. Samsung will publish a folding / unfolding emulator APK later in Q4, which should work on Samsung Galaxy S4 tablets as well as the AOSP emulator in Android Studio).

2. Multi-resume

Today, when an app is in multi-window but not focused, it is in the onPause state.

While we provide recommendations on how to support multi-window, we noticed a significant number of apps are not handling the onPause state according to those recommendations (video paused or stopped, instant messages not displayed, etc.).

To help developers provide the best user experience on multi-window with minimal effort, we're allowing device manufacturers to keep all apps resumed when in multi-window in Android P.

To opt-in to this behavior in Android P, add the following meta-data in your app manifest:

<meta-data android:name="android.allow_multiple_resumed_activities" android:value="true" />

Note: With the next Android version we're looking into how to optimize compatibility for this behavior.

How to test: There are no devices with this behavior at the moment, but device manufacturers are working to update existing devices to allow developers to test. Stay tuned for more details from device manufacturers.

3. Multi-display

Beginning with Android 8.0 (API level 26), the platform offers enhanced support for multiple displays. If an activity supports multi-window mode and is running on a device with multiple displays, users can move the activity from one display to another. When an app launches an activity, the app can specify which display the activity should run on. See here for the full documentation
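
A brief sketch of launching an activity on a chosen display (API 26+) is shown below; MapActivity is a hypothetical activity, and the target display is simply the first non-default one found.

import android.app.Activity
import android.app.ActivityOptions
import android.content.Context
import android.content.Intent
import android.hardware.display.DisplayManager
import android.view.Display

class MapActivity : Activity()  // hypothetical activity to launch

fun launchOnSecondaryDisplay(context: Context) {
    val displayManager = context.getSystemService(DisplayManager::class.java)
    val secondary = displayManager.displays
        .firstOrNull { it.displayId != Display.DEFAULT_DISPLAY } ?: return

    val options = ActivityOptions.makeBasic().setLaunchDisplayId(secondary.displayId)
    context.startActivity(
        Intent(context, MapActivity::class.java).addFlags(Intent.FLAG_ACTIVITY_NEW_TASK),
        options.toBundle())
}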

How to test: You can try it out by using the "Developer options > Simulate secondary displays" option. Keep in mind that those simulated displays do not process input.

09 Nov 2018 6:47pm GMT

07 Nov 2018

feedAndroid Developers Blog

Unfolding right now at #AndroidDevSummit!

Posted by Stephanie Cuthbertson, Director of Product Management

Today, at the Computer History Museum in Mountain View, CA, we kicked off the Android Dev Summit, taking a look back at the last 10 years of Android and then jumping into some important new features for Android developers. Here's a look at some of the things we shared!

Unfolding Android into new experiences

As early as Android 1.6, Android and our partners have contemplated different screen sizes and densities, enabling the platform to power a broad category of form factors and new experiences like Android TV, Android Auto, Wear OS and even Android apps on Chromebooks. Phone screens are an area where Android partners set the bar, introducing "phablets" when phone screens were small. Fast forward to today, when a phablet is... just a phone, a standard size users have come to love.

Now we see Android device makers creating a new category: Foldables. Taking advantage of new flexible display technology, the screen can literally bend and fold.

There are two variants broadly speaking: two-screen devices and one-screen devices. When folded, foldables look like phones, fitting in your pocket or purse. When unfolded, their defining feature is what we call screen continuity. For example, start a video on the folded smaller screen - and later you can sit down and unfold the device to get a larger tablet-sized screen for a beautiful, immersive experience. As you unfold, the app seamlessly transfers to the bigger screen without missing a beat. We're optimizing Android for this new form factor, and making changes to help developers everywhere take advantage of the possibilities this creates for amazing new experiences and new ways to engage and delight your users. Tune in to the Foldables session at Dev Summit this week to learn more. Expect to see foldables coming from several Android manufacturers, including one that Samsung previewed today and plans to offer next year.

Kotlin: updates to the fastest growing language

We made Kotlin a first-class language on Android in 2017. This month we had over 118,000 new projects using Kotlin started in Android Studio - from those users who opt in to share metrics. That's a 10x increase from last year. It's become the fastest growing language in terms of the number of contributors on GitHub, and it was voted the #2 most loved language on Stack Overflow. In our surveys, the more developers use Kotlin, the higher their satisfaction.

Last week, JetBrains released the latest version of Kotlin, 1.3, which brings new language features - including stable coroutines, contracts, and inline classes - along with new APIs, bug fixes, and performance improvements.

These new features of Kotlin 1.3 will be integrated into the Kotlin-specific APIs that we provide - a majority of which are through KTX extensions as part of Jetpack.
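
As a small illustration, not from this post, of two of those 1.3 additions - stable coroutines and capturing the subject of a when expression in a variable:

import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext

sealed class Response {
    data class Success(val body: String) : Response()
    data class Error(val code: Int) : Response()
}

// Coroutines graduated to stable in Kotlin 1.3.
suspend fun fetch(): Response = withContext(Dispatchers.Default) {
    Response.Success("hello")   // placeholder result
}

fun main() = runBlocking {
    when (val response = fetch()) {   // capturing the when subject is new in 1.3
        is Response.Success -> println(response.body)
        is Response.Error -> println("Failed with code ${response.code}")
    }
}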

Android Jetpack: Navigation, Work Manager, and Slices

At Google I/O we announced Jetpack, the next generation of tools and Android APIs to accelerate Android application development. Jetpack builds on the foundations laid out by the Support Library and the Architecture Components. Already, 80% of the top 1,000 apps and games are using one of the new Jetpack libraries in production.

This summer we moved AndroidX - Jetpack's evolution of the original Android Support Library - to public AOSP. This means you can see features and bug fixes implemented in real-time, and contribute to any of the AndroidX libraries. You can learn more about contributing here.

We've been working to get as much feedback and refinement as possible on two new Architecture Component libraries, Navigation and WorkManager, and we plan to move both to beta this month. The Navigation Architecture Component offers a simplified way to implement Android's navigation principles in your application, using a single Activity. Plus, the new Navigation Editor in Android Studio lets you create and edit your navigation architecture. This eliminates navigation boilerplate and gives you atomic navigation operations, easier animated transitions, and more. WorkManager makes it easy to perform background tasks in the most efficient manner, choosing the most appropriate solution based on the application state and device API level.

Navigation Editor
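
As a brief sketch, not from this post, of the WorkManager pattern described above - a one-off background task that only runs while the device is online; UploadWorker and its body are hypothetical:

import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequest
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

class UploadWorker(appContext: Context, params: WorkerParameters) : Worker(appContext, params) {
    override fun doWork(): Result {
        // ... perform the actual upload here ...
        return Result.success()
    }
}

fun scheduleUpload() {
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.CONNECTED)   // only run while online
        .build()

    val request = OneTimeWorkRequest.Builder(UploadWorker::class.java)
        .setConstraints(constraints)
        .build()

    // WorkManager chooses the most appropriate scheduler (JobScheduler, AlarmManager, ...)
    // for the current device and API level.
    WorkManager.getInstance().enqueue(request)
}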

We're also excited to see Android Slices move to public Search experiments! At I/O this year we introduced Slices, a new way to bring users to your app. Slices are like mini snippets of your app, where you can surface content and actions: you can book a flight, play a video, or call a ride. Slices is another example where we want to be open very early, but we also want to take the time to get it right. We're moving into a public EAP this month with Doist, Kayak, and others, and we'll run experiments surfacing Slices in Google Search results. To learn more, there's also a session today at Dev Summit with more info and best practices.

Android Studio: focusing on productivity, build speed, quality and fundamentals

Android Studio is our official IDE for Android development. We asked: where do you spend the most time? When we gather data from Android Studio's opted-in users, we see that build times are getting faster with every release, sometimes by as much as 20%, but we also see build times getting slower and slower over time. So, how can both things be true? We've been digging in hard to understand.

It turns out that build is a pretty complicated ecosystem, and developer choices make a huge difference. Our developers use a very broad (and growing) combination of OSes, custom plugins, annotation processors, and languages, all of which can significantly affect build times. In one case, a plugin some users like to add was silently slowing build speeds by up to 45%. Learning this, we realized we need build profiling and analysis tools so you can easily understand what's slowing your build down. We're also investing more in our own plugins to make sure we continue to improve the performance of the core build.

Android Studio 3.3 launches beta 3 today. In coming releases expect to see a strong focus on quality and fundamentals: reducing the number of crashes and hangs, optimizing memory usage, and fixing user-impacting bugs. We also announced today that we're making Android Studio an officially supported IDE on Chrome OS early next year; learn more here.

Android App Bundles and dynamic features

App sizes have grown dramatically, up 5x since 2012. But larger apps have downsides: lower install conversion rates, lower update rates, and higher uninstalls. This is why we built the Android App Bundle, the new publishing format that serves only the code and resources a user needs to run your app on their specific device; on average apps see 35% size savings compared to a universal APK. The app bundle also saves you time and effort with each release since you don't need to use incomplete solutions like multi-APK. Android Studio 3.2 brought full IDE support of app bundles, and there are now thousands of app bundles in production totaling billions of installs, including Google's apps like YouTube, Google Maps, Google Photos, and Google News.

The app bundle now supports uncompressed native libraries; with no additional developer work needed, it makes apps using native libraries an average of 8% smaller to download and 16% smaller on disk on M+ devices.

Once you switch to the app bundle you can also start modularizing your app. With dynamic feature modules, you can load any app functionality on demand instead of at install time. You don't need to keep big features that are only used once on every single device forever; dynamic features can be installed and uninstalled dynamically when your app requests them.
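
For illustration, and not taken from this post, requesting a dynamic feature module at runtime goes through the Play Core SplitInstallManager; the module name "video_editor" here is hypothetical:

import android.content.Context
import com.google.android.play.core.splitinstall.SplitInstallManagerFactory
import com.google.android.play.core.splitinstall.SplitInstallRequest

fun installVideoEditor(context: Context) {
    val manager = SplitInstallManagerFactory.create(context)
    val request = SplitInstallRequest.newBuilder()
        .addModule("video_editor")   // name of the dynamic feature module to download
        .build()

    manager.startInstall(request)
        .addOnSuccessListener { sessionId -> /* download started; track progress via sessionId */ }
        .addOnFailureListener { e -> /* handle errors such as insufficient storage or no network */ }
}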

In-app Updates API

We've heard that you'd like more controls to ensure that users are running the latest and greatest version of your app. To address this, we're launching an In-app Updates API. We're testing the API with early access partners and will be launching it to all developers soon.

You'll have two options with this API; the first is a full-screen experience for critical updates when you expect the user to wait for the update to be applied immediately. The second option is a flexible update, which means the user can keep using the app while the update is downloaded. You can completely customize the update flow so it feels like part of your app.
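
The API was still in early access when this was posted, so the sketch below is based on the Play Core in-app updates API as it later shipped - treat the class and method names as assumptions, and UPDATE_REQUEST_CODE as a hypothetical constant:

import android.app.Activity
import com.google.android.play.core.appupdate.AppUpdateManagerFactory
import com.google.android.play.core.install.model.AppUpdateType
import com.google.android.play.core.install.model.UpdateAvailability

const val UPDATE_REQUEST_CODE = 1001   // hypothetical request code for onActivityResult

fun checkForImmediateUpdate(activity: Activity) {
    val manager = AppUpdateManagerFactory.create(activity)
    manager.appUpdateInfo.addOnSuccessListener { info ->
        if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE &&
            info.isUpdateTypeAllowed(AppUpdateType.IMMEDIATE)
        ) {
            // Full-screen, blocking flow; AppUpdateType.FLEXIBLE is the background
            // variant that lets the user keep using the app while it downloads.
            manager.startUpdateFlowForResult(info, AppUpdateType.IMMEDIATE, activity, UPDATE_REQUEST_CODE)
        }
    }
}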

Instant discovery

We're also making instant apps easier than ever to adopt. We recently made using web URLs optional, enabling you to take your existing Play Store deep link traffic and send users to your instant experience if it's available. Additionally, we've raised the instant app size limit to 10MB for the Try Now button on the Play Store and for web banners, to make adoption even easier.

In the Android Studio 3.3 beta, you can now build an instant-enabled app bundle. This means that you can build and deploy both your instant and installed experiences from a single Android Studio project, and include them in a single Android App Bundle. You only have to upload one artifact for both the instant and the installed app.

As developers, your feedback has been critical in shaping these investment areas; you are part of how we work, from early ideas, to EAPs and canaries, to beta, and iterating after launch. We hope you join us for the next two days, whether you're watching the 30+ sessions on the livestream, joining the conversation on social media, or with us in person in Mountain View. From the team, a sincere thank you for all your thoughtful feedback and contributions. We hope you enjoy Android Dev Summit.

07 Nov 2018 6:53pm GMT

05 Nov 2018

feedAndroid Developers Blog

R8, the new code shrinker from Google, is available in Android Studio 3.3 beta

Posted by Leo Sei, Product Manager on Android Studio and R8

Android developers know that APK size is an important factor in user engagement. Code shrinking helps reduce the size of your APK by getting rid of unused code and resources as well as making your actual code take less space (also known as minification or obfuscation).

That's why we're investing in improving code shrinking to make it faster and more efficient. We're excited to announce that the next generation code shrinker, R8, is available for preview as part of Android Studio 3.3 beta.

R8 does all of the shrinking, desugaring, and dexing in one step. Compared to the current code shrinking solution, ProGuard, R8 shrinks code faster while improving the output size.

The following data comes from a benchmark on the Santa Tracker app; you can find the project with benchmark details in this GitHub repository.

How to try it

R8 is available with Android Studio 3.3 beta and works with ProGuard rules. To try it, set the following in your project's gradle.properties file:

android.enableR8=true
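
R8 runs as part of the existing shrinking step, so it is driven by the same minification settings and rule files you already have. A minimal sketch, assuming the Gradle Kotlin DSL (build.gradle.kts) in your app module:

android {
    buildTypes {
        getByName("release") {
            // Shrinking must be enabled for R8 (or ProGuard) to run at all.
            isMinifyEnabled = true
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"   // existing ProGuard rules are honored by R8
            )
        }
    }
}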

For the more adventurous, R8 also has a full mode that is not directly compatible with ProGuard. To try that out, you can additionally set the following in your gradle.properties file:

android.enableR8.fullMode=true

This turns on additional optimizations that can further reduce app size. However, you might need a few extra keep rules to make it work.

We have tested R8's correctness and performance on a number of apps, and the results are promising, so we plan to make R8 the default shrinker in Android Studio soon.

Please give R8 a try and we would love to hear your feedback. You can file a bug report using this link.

05 Nov 2018 7:03pm GMT

31 Oct 2018

feedAndroid Developers Blog

The Android Dev Summit app is live! Get ready for November 7-8

Posted by Matt Pearring, Associate Product Marketing Manager, Developer Marketing

In just a week, we'll be kicking off Android Dev Summit 2018, broadcasting live from the Computer History Museum in Mountain View, CA on November 7 and 8. We'll have two days of deep technical sessions from the Android engineering team, with over 30 sessions livestreamed. The app just went live; download it on Google Play and start planning.

With the app you can explore the conference schedule with details on keynotes, sessions, and lightning talks. You can also plan your summit experience by saving events to your personalized schedule. This year's app is also an Instant app, so you can try it out before installing it!

Android Dev Summit app screenshots

If you can't join in person, you can always join us online - we'll be livestreaming all of the sessions on the Android Dev Summit website or app and making them available on YouTube throughout the conference so you can watch at your own pace. Plus, we will share updates directly from the Computer History Museum to our social channels, so be sure to follow along!

31 Oct 2018 9:26pm GMT

10 Nov 2011

feedAndroid Forums

Latest action game INC from OrangePixel now available!

From the developer of Meganoid and Stardash comes a new action arcade game: INC! http://www.youtube.com/watch?v=9j5OEG-3RyM Get it from the...

10 Nov 2011 9:31am GMT


Layout problem

Hi friends, I decided to work with a tab layout application. The program consists of 3 tabs and a button. I'd like to place the button below the tabs. ...

10 Nov 2011 5:20am GMT

[ANDROID] 5 New Live Wallpapers for ANDROID!

1) Spectrum ICS - Image: http://i.imgur.com/IjE5B.jpg 2) Alien Shapes - Image: http://i.imgur.com/7hQHA.jpg

10 Nov 2011 12:50am GMT

09 Nov 2011

feedAndroid Forums

New to Android, thinking of getting Asus Transformer

Hey all, new to this site and Android. I'm a 50-year-old fireman who has resisted the newest tech gadgets but now wants a tablet for use at home....

09 Nov 2011 10:33pm GMT

Island Fortress - "reverse Angry Birds" (FREE GAME)

Island Fortress is a free physics-based puzzle/construction game where the player has to defend the treasure from the pirates' cannonballs....

09 Nov 2011 8:42pm GMT

Unlock Code Question (MyTouch 3G)

I have a question about using an unlock code with an HTC T-Mobile MyTouch 3G. So I got the phone from a guy on Craigslist, and I have AT&T. In order...

09 Nov 2011 8:28pm GMT

[Game] Mini-Bubbles

Free Mini-Bubbles Android Market Link: https://market.android.com/details?id=br.com.dotfive.minibubbles Pop the most bubbles you can within...

09 Nov 2011 6:39pm GMT

Top 6 Android Tablets For 2011

Well, nowadays we are seeing new tablets coming out every day, and we see new upcoming tablet leaks too! It's difficult to choose the best one which works...

09 Nov 2011 4:15pm GMT

unlock code

Hello, I need an unlock code for a T-Mobile MyTouch 3G. Thanks.

09 Nov 2011 2:56pm GMT