21 Oct 2019

Android Developers Blog

Android Automotive OS updates for developers

Posted by Madan Ankapura, Product Manager, Android

Google's vision is to bring a safe and seamless connected experience to every car. Since 2017, we have announced collaborations with vehicle manufacturers like Volvo Car Group, General Motors and others to power infotainment systems with Android Automotive OS, Google's open-source Android platform, and to enable integration of Google technology and services. Now, with the reveal of Volvo's XC40 Recharge and the previously announced Polestar 2, we are making progress on that vision with brand-new, customized infotainment systems that feature real-time updates to the Google Assistant, Google Maps, and automotive apps created by Google and the global developer community.

Volvo XC40 Recharge & its infotainment unit

With more manufacturers adding Android Automotive OS-based infotainment systems to their vehicles, app developers have an opportunity to reach even more users with innovative, drive-optimized experiences.

Concept image from GM on Maps & Media integration

Developing & testing media apps on emulator

At Google I/O 2019, we published design guidelines for developing media apps for cars, added wizard support to Android Studio, updated the emulator with car-specific controls, and released the Android Automotive OS emulator system image. These features helped Android developers start to design, develop, and test their existing media apps for Android Automotive OS (review the developer documentation here).

Today, we're announcing that developers can download an updated Android Automotive OS emulator system image that includes the Google Play Store. This means developers no longer have to wait to get their hands on a vehicle: they can design, develop, and run apps right within the emulator, and can now test distribution via the Play Console by requesting access.

In addition to the apps announced at Google I/O, more media app developers, including Amazon Music, Audioburst and YouTube Music, are adapting their apps for Android Automotive OS. The process of porting existing media apps that support Android Auto to this platform is simple and requires minimal development resources.

Audioburst, Amazon Music and YouTube Music running on the Android Automotive OS emulator

And if you want to learn more about creating apps for Android Automotive OS, join us at Android Dev Summit 2019. Come talk to us in our sandbox, tune in via the livestream on YouTube, or post on the automotive-developers Google Group or Stack Overflow using the android-automotive tag.

We hope to see you there!

21 Oct 2019 5:13pm GMT

Android Community

Android 10 update now lets 3rd party launchers access gesture navigation

One of the new things that came with Android 10 is the gesture navigation system. While it has been pretty divisive among users and tech writers, one of the foremost criticisms is that it did not work with third-party launchers like Nova, Action Launcher, etc. Google promised that they would fix this in […]

21 Oct 2019 4:30pm GMT

TalkAndroid

Google defends their decision to not allow 4K60fps on the Pixel 4

Google announced the Pixel 4 with some serious camera capabilities, but it lacked one video feature that many users want. The Pixel 4 can shoot 4K video, but it's not able to do so in 60fps. It's not because phones can't do that, either; Samsung, Huawei, Apple, and others are all able to cram 4K60 […]


Come comment on this article: Google defends their decision to not allow 4K60fps on the Pixel 4

Visit TalkAndroid

21 Oct 2019 4:19pm GMT

[Deal] Grab a powerful saving on these Anker charging accessories

With car chargers, wireless chargers, and even batteries on offer in today's deals, Anker has you covered with some great savings to be had. There's also a wireless leaf blower, power banks, a Bluetooth speaker, portable projectors, and a dashcam available at reduced prices from your favorite e-tailer. All you need to do is join us […]


21 Oct 2019 4:04pm GMT

Android Community

LG Stylo 5+ available on AT&T as an enhanced Stylo 5

The LG Stylo 5 was first made available on Cricket Wireless and then a few months later, it hit Metro and T-Mobile. The phone is getting a minor upgrade and follow up in the form of the LG Stylo 5+. This time, it's AT&T releasing the new Android smartphone that boasts a large 6.2-inch FHD+ […]

21 Oct 2019 3:00pm GMT

TalkAndroid

In search of new revenue, Netflix plans to crack down on password sharing

I know that I've been guilty of doing this, and there's a good chance that you know someone who has also committed the same deed: sharing their Netflix login details with friends and family. Whether it's sharing your Netflix account with someone who is, in turn, sharing their Prime Video or Spotify account with you, […]


21 Oct 2019 1:35pm GMT

Android Community

OnePlus 6, 6T getting OxygenOS Android 10 Open Beta 1

The OxygenOS update is available from OnePlus. It's not for the latest OnePlus 7T series but for the older OnePlus 6 and OnePlus 6T. OxygenOS Android 10 Open Beta 1 is now available for the two phones. It's only a beta build, so a lot of things can still be improved. This build specifically upgrades […]

21 Oct 2019 1:30pm GMT

17 Oct 2019

Android Developers Blog

Introducing NDK r21: our first Long Term Support release

Posted by Dan Albert, Android NDK Tech Lead

Android NDK r21 is now in beta! It's been a longer than usual development cycle (four months since NDK r20), so there's quite a lot to discuss for this release.

We have the usual toolchain updates, improved defaults for better security and performance, and are making changes to our release process to better accommodate users that need stability without hindering those that want new features.

New Minimum System Requirements

This release comes with new minimum system requirements. Following Android Studio and the SDK, 32-bit Windows is no longer supported. While this will not affect most developers, it does have an impact if you use a 32-bit version of Microsoft® Windows®. Linux users must have glibc 2.17 or newer.

Release Cycle Changes and LTS

One release a year will be our Long Term Support (LTS) release, for users that want stability more than they need new features. The release will undergo a longer beta cycle before being released, and will receive bug fixes as backports until next year's LTS release. LTS releases will generally ship in Q4, and our first will be NDK r21.

The other releases each year, which we call "rolling" releases, will be similar to our current process. These will be approximately quarterly releases of our latest set of features, which will only be patched later for critical toolchain fixes. If you want the latest features from Clang and libc++, this is the release for you.

More detail, including the criteria we will use to determine what will be backported, what kinds of bugs will trigger a point release, and the bar we hold each release to, is documented on our GitHub Wiki.

New Features and Updates

There are quite a lot of new things in this release, resolving bugs and helping you write better, safer code.

We've updated GNU Make to version 4.2, which enables --output-sync to avoid interleaving output with error messages. This is enabled by default with ndk-build. This also includes a number of bug fixes, including fixing the pesky CreateProcess errors on Windows.

GDB has been updated to version 8.3, which includes fixes for debugging modern Intel CPUs.

Updated LLVM

As always, we've updated LLVM and all of its components (Clang, lld, libc++, etc) which includes many improvements.

The toolchain has been updated to r365631 (the master branch as of 10 July 2019). This includes fixes for quite a few bugs in the previous release, perhaps most importantly LLD no longer hangs when using multithreaded linking on Windows. OpenMP is now available as a dynamic library (and this is the new default behavior, so link with -static-openmp if you want to stick with the static runtime).

A handful of driver improvements have been made to reduce the amount of compiler configuration required by each build system as well. Build system owners should check the updated Build System Maintainers guide.

libc++ has been updated to r369764.

Fortify

Fortify is now enabled by default when using ndk-build or the CMake toolchain file (this includes ExternalNativeBuild users). Fortify enables additional checks in the standard library that can help catch bugs sooner and mitigate security issues. For example, without fortify the following code compiles fine:

    const char src[] = "this string is too long";
    char dst[10];
    strcpy(dst, src);

With fortify, the buffer overflow is diagnosed at compile-time:

    test.cpp:10:18: error: 'strcpy' called with string bigger than buffer
      strcpy(dst, src);
                     ^

It is not always possible for the compiler to detect this issue at compile-time. In those cases, a run-time check will be used instead that will cause the program to abort rather than continue unsafely.

If you're using a build system other than ndk-build or CMake via the NDK's toolchain file, this will not be enabled by default. To enable, simply define _FORTIFY_SOURCE=2. The most reliable way to do this is by adding -D_FORTIFY_SOURCE=2 to your compiler flags.

Clang can also statically detect some of these issues via -Wfortify-source (also new in r21). This is on by default, but it's recommended to enforce fixing issues with -Werror=fortify-source. Use this in addition to, not instead of, the C library features, since the warnings do not cover the same cases as the C library extension, nor can they add run-time checks.
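For build systems other than ndk-build and CMake, the pieces from the last few paragraphs might be combined like this (a sketch only; note that fortify's checks require optimization to be enabled):

```makefile
# Enable fortify's compile-time and run-time checks (requires -O1 or
# higher to take effect), and promote the static diagnostics to errors.
CFLAGS += -O2 -D_FORTIFY_SOURCE=2 -Werror=fortify-source
```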

Note that because run-time support is required for fortify and the feature was gradually added to Android over time, the exact set of APIs protected by fortify depends on your minSdkVersion. Fortify is an improvement, but it is not a replacement for good tests, ASan, and writing safe code.

See FORTIFY in Android for an in-depth explanation of fortify.

Other updates

64-bit requirement

Since August 2019, all existing and new apps are now required to support 64-bit before they can be released to Google Play; there's an extension for a limited set of apps. For more information and help on adding support for 64-bit, see our guide.
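For Gradle-based projects, packaging the 64-bit libraries alongside the 32-bit ones usually comes down to the ABI filter list; a sketch of a module-level build.gradle (the exact list is illustrative):

```groovy
android {
    defaultConfig {
        ndk {
            // Ship the 64-bit ABIs alongside the 32-bit ones.
            abiFilters 'armeabi-v7a', 'arm64-v8a', 'x86', 'x86_64'
        }
    }
}
```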

Neon by default

Arm code is now built with Neon by default. In a previous release we enabled it conditionally based on minSdkVersion, but given the very small number of devices that don't support Neon we now enable it unconditionally. This offers improved performance on all 32-bit Arm devices (64-bit Arm always had this, and it does not affect Intel ABIs).

As before, this behavior can be disabled for apps that need to continue supporting devices without Neon. Alternatively, those devices can be blacklisted in the Play Console. See https://developer.android.com/ndk/guides/cpu-arm-neon for more information.
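The opt-out is per build system; a sketch (variable names as documented for the NDK's build systems; verify against your NDK version):

```makefile
# Android.mk: disable Neon for a single ndk-build module
LOCAL_ARM_NEON := false

# With CMake, the equivalent is passing -DANDROID_ARM_NEON=false
# when configuring the project.
```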

Up Next

Have a look at our roadmap to see what we're working on next. The next few big things coming up are package management and better CMake integration.

17 Oct 2019 5:25pm GMT

13 Oct 2019

Planet Maemo

Implementing "Open with…" on MacOS with Qt

I just released PhotoTeleport 0.12, which includes the feature mentioned in the title of this blog post. Given that it took me some time to understand how this could work with Qt, I think it might be worth spending a couple of lines on how to implement it.

In the target application

The first step (and the easiest one) is adding the proper information to your .plist file: this is needed to tell MacOS what file types are supported by your application. The official documentation is here, but given that an example is better than a thousand words, here's what I had to add to PhotoTeleport.plist in order to have it registered as a handler for TIFF files:

  <key>CFBundleDocumentTypes</key>
  <array>
    <dict>
      <key>CFBundleTypeExtensions</key>
      <array>
        <string>tiff</string>
        <string>TIFF</string>
        <string>tif</string>
        <string>TIF</string>
      </array>
      <key>CFBundleTypeMIMETypes</key>
      <array>
        <string>image/tiff</string>
      </array>
      <key>CFBundleTypeName</key>
      <string>NSTIFFPboardType</string>
      <key>CFBundleTypeOSTypes</key>
      <array>
        <string>TIFF</string>
        <string>****</string>
      </array>
      <key>CFBundleTypeRole</key>
      <string>Viewer</string>
      <key>LSHandlerRank</key>
      <string>Default</string>
      <key>LSItemContentTypes</key>
      <array>
        <string>public.tiff</string>
      </array>
      <key>NSDocumentClass</key>
      <string>PVDocument</string>
    </dict>
    …more dict entries for other supported file formats…
  </array>

This is enough to have your application appear in Finder's "Open with…" menu and be started when the user selects it from the context menu, but it's only half of the story: to my big surprise, the selected files are not passed to your application as command line parameters, but via some MacOS-specific event which needs to be handled.

By grepping into the Qt source code, I've found out that Qt already handles the event, which is then transformed into a QFileOpenEvent. The documentation here is quite helpful, so I won't waste your time repeating it; what was hard for me was to actually find out that this functionality exists and is supported by Qt.
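For reference, handling it boils down to overriding event() in a QApplication subclass; a minimal sketch (the class name and the hand-off comment are mine, not from PhotoTeleport):

```cpp
#include <QApplication>
#include <QFileOpenEvent>

class OpenWithApplication : public QApplication {
public:
    using QApplication::QApplication;

protected:
    bool event(QEvent *e) override {
        if (e->type() == QEvent::FileOpen) {
            // macOS delivers the file via Launch Services, not argv
            const QString path =
                static_cast<QFileOpenEvent *>(e)->file();
            // ...hand `path` over to your document loader here...
            return true;
        }
        return QApplication::event(e);
    }
};
```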

In the source application

The above is only half of the story: what if you are writing an application which wants to send some files to some other application? Because of the sandboxing, you cannot just start the desired application in a QProcess and pass the files as parameters: again, we need to use the Apple Launch Services so that the target application would receive the files through the mechanism described above.

Unfortunately, as far as I could find, this is not something that Qt supports; sure, with QDesktopServices::openUrl() you can start the default handler for the given url, but what if you need to open more than one file at once? And what if you want to open the files in a specific application, and not just in the default one? Well, you need to get your hands dirty and use some MacOS APIs:

#import <CoreFoundation/CoreFoundation.h>
#import <ApplicationServices/ApplicationServices.h>

#include <QList>
#include <QString>
#include <QUrl>

void MacOS::runApp(const QString &app, const QList<QUrl> &files)
{
    CFURLRef appUrl = QUrl::fromLocalFile(app).toCFURL();

    CFMutableArrayRef cfaFiles =
        CFArrayCreateMutable(kCFAllocatorDefault,
                             files.count(),
                             &kCFTypeArrayCallBacks);
    for (const QUrl &url: files) {
        CFURLRef u = url.toCFURL();
        CFArrayAppendValue(cfaFiles, u);
        CFRelease(u); // the array holds its own reference now
    }

    LSLaunchURLSpec inspec;
    inspec.appURL = appUrl;
    inspec.itemURLs = cfaFiles;
    inspec.asyncRefCon = NULL;
    inspec.launchFlags = kLSLaunchDefaults | kLSLaunchAndDisplayErrors;
    inspec.passThruParams = NULL;

    OSStatus ret = LSOpenFromURLSpec(&inspec, NULL);
    // A real application should check `ret` for failure here
    CFRelease(cfaFiles);
    CFRelease(appUrl);
}

In Imaginario I've saved this into a macos.mm file, added it to the source files, and also added the native MacOS libraries to the build (qmake):

LIBS += -framework CoreServices

You can see the commit implementing all this, it really doesn't get more complex than this. The first parameter to the MacOS::runApp() function is the name of the application; I've verified that the form /Applications/YourAppName.app works, but it may be that more human-friendly variants work as well.

13 Oct 2019 6:51pm GMT

09 Oct 2019

Android Developers Blog

Previewing #AndroidDevSummit: Sessions, App, & Livestream Details

Posted by The #AndroidDevSummit team

In two weeks, we'll be welcoming Android developers from around the world at Android Dev Summit 2019, broadcasting live from the Google Events Center (MP7) in Sunnyvale, CA on October 23 & 24. Whether you're joining us in person or via the livestream, we've got a great show planned for you; starting today, you can read more details about the keynote, sessions, codelabs, sandbox, the mobile app, and how online viewers can participate.

The keynote, sessions, sandbox and more, kicking off at 10AM PDT

The summit kicks off on October 23 at 10 a.m. PDT with a keynote, where you'll hear from Dave Burke, VP Engineering for Android, and others on the present and future of Android development. From there, we'll dive into two days of deep technical content from the Android engineering team, on topics such as Android platform, Android Studio, Android Jetpack, Kotlin, Google Play, and more.

The full agenda is now available, so you can start to plan your summit experience. We'll also have demos in the sandbox, hands-on learning with codelabs, plus exclusive content for those watching on the livestream.

Get the Android Dev Summit app on Google Play!

The official app for Android Dev Summit 2019 has started rolling out on Google Play. With the app, you can explore the agenda by searching through topics and speakers. Plan your summit experience by saving events to your personalized schedule. You'll be able to watch the livestream and find recordings after sessions occur, and more. (By the way, the app is also an Instant app, so with one tap you can try it out first before installing!)

2019 #AndroidDevSummit app


A front-row seat from your desk, and #AskAndroid your pressing questions!

We'll be broadcasting live on YouTube and Twitter starting October 23 at 10 a.m. PDT. In addition to a front row seat to over 25 Android sessions, there will be exclusive online-only content and an opportunity to have your most pressing Android development questions answered live, broadcast right from the event.

Tweet us your best questions using the hashtag #AskAndroid in the lead-up to the Android Dev Summit. We've gathered experts from Jetpack to Kotlin to Android 10, so we've got you covered. We'll be answering your questions live between sessions on the livestream. Plus, we will share updates directly from the Google Events Center to our social channels, so be sure to follow along!

09 Oct 2019 10:45pm GMT

04 Oct 2019

Planet Maemo

Bussator: implementing webmentions as comments

Recently I've grown an interest in the indieweb: as big corporations are trying to dictate the way we live our digital life, I'm feeling the need to take a break from at least some of them and to get somewhat more control over the technologies I use.

Some projects have been born which are very helpful with that (one above all: NextCloud), but there are also many older technologies which enable us to live the internet as a free distributed network with no owners: I'm referring here to protocols such as HTTP, IMAP, RSS, which I perceive to be under threat of being pushed aside in favor of newer, more convenient, but also more oppressive solutions.

Anyway. The indieweb community is promoting the empowerment of users, by teaching them how to regain control of their online presence: this pivots around having one's own domain and using self-hosted or federated solutions as much as possible.

One of the lesser known technologies (yet widely used in the indieweb community) is webmentions: in simple terms, it's a way to reply to other people's blog posts by writing a reply in your own blog, and have it shown also on the original article you are replying to. The protocol behind this feature is a recommendation approved by the W3C, and it's actually one of the simplest protocols to implement. So, why not give it a try?

I had already added support for comments in my blog (statically generated with Nikola) by deploying Isso, a self-hosted commenting system which can even run as a FastCGI application (hence, it can be deployed on shared hosting with no support for long-running processes), so I was looking for a solution to somehow convert webmentions into comments, in order not to have to deal with two different commenting systems.

As expected, there was no ready solution for this; so I sat down and hacked up Bussator, a WSGI application which implements a webmention receiver and publishes the reply posts as Isso comments. The project is extensible, and Isso is only one of the possible commenting systems; sure, at the moment it's indeed the only one available, but there's no reason why a plugin for Staticman, Commento, Remark or others couldn't be written. I'll happily accept merge requests, don't be shy - or I can write it myself, if you convince me to (a nice Lego box would make me do anything).

04 Oct 2019 8:36am GMT

17 Sep 2019

Planet Maemo

On Richard Stallman and people who cannot read

We live in strange times. People are so filled with hatred and prejudices that their brains become unable to parse the simplest sentences. I take this issue to heart, because it could happen to anyone - it has happened to me before (luckily, only in private online conversations), when an acquaintance of mine accused me of saying things I never said. And it happens to famous people all the time. Guys, just because you hate person X, you should not skip over parts of their speech or suppress context in order to make it look like they said something terrible or stupid, when they didn't.

Now it happened to Richard Stallman, with a whole wave of hateful people accusing him of saying something that he didn't say. Let's start with the VICE article, titled "Famed Computer Scientist Richard Stallman Described Epstein Victims As 'Entirely Willing'", which insists on quoting only two words out of Stallman's sentence:

Early in the thread, Stallman insists that the "most plausible scenario" is that Epstein's underage victims were "entirely willing" while being trafficked.

Except that he didn't say that. Why not quote the whole sentence? It's not such a long sentence, really! Just follow the link to the source, which provides a complete excerpt of Stallman's words:

We can imagine many scenarios, but the most plausible scenario is that she presented herself to him as entirely willing. Assuming she was being coerced by Epstein, he would have had every reason to tell her to conceal that from most of his associates.

Now, English is not my native language, but I read it well enough to understand that "to present oneself as" and "to be" are different expressions with very different meanings (and, in most contexts, actually opposite ones!). You don't need to be Shakespeare to understand that. You only need to either stop hating or, if you really cannot help it, at least stop projecting your prejudices onto the people you hate. Hate makes you blind.

It's sad to see otherwise intelligent people take stupid decisions because of such misunderstandings.

I for one, stand in solidarity with Richard Stallman and with the English language.

(please note that this is not an endorsement of everything Stallman might have said in the past; I don't follow him that closely, and it may be that he also happened to say terrible things in this very thread. I'm only commenting this very specific issue, and I know that in this very specific issue he's being wrongly accused)

17 Sep 2019 4:38pm GMT

25 Sep 2017

Planet Openmoko

Holger "zecke" Freyther: Brain dump what fascinates me

A small brain dump of topics that currently fascinate me. These are mostly pointers, and maybe it is interesting to follow them.

Books/Reading:

My Kobo ebook reader has the Site Reliability Engineering book and I am now mostly done. It is kind of a revelation and explains my interest in writing code but also in operating infrastructure (like struggling with ruby, rmagick, nginx…). I have been interested in backends since… well, ever. The first time I noticed it was when we talked about Kolab at LinuxTag and I was more interested in the backend than the KDE client. At sysmocom we built an IoT product and the backend was quite some fun, especially the scale of one instance and many devices/users, capacity planning and disk commissioning, lossless upgrades.

It can be seen in my non-FOSS SS7 MAP work on traffic masquerading and transparent rewriting. It is also clear to see which part of engineering is needed for scale (instead of just installing and restarting servers).

Lang VM design

One technology that made Java fast (HotSpot) and has found its way into JavaScript is dynamic optimization. Most just-in-time compilers start with generating native code per method, either directly or after the first couple of calls when the method's size is significant enough. The VM records which call paths are hot and which types are used, and then can generate optimized code (e.g. specialized for integers, with type checks removed). A technique pioneered at Sun for the "Self" language (then implemented for Strongtalk and later brought to Java) was "adaptive optimization and deoptimization", which was the PhD topic of Urs Hoelzle (Google's VP of Engineering). One of the key aspects is inlining across method boundaries, as this removes method look-up and call stack handling, and opens the way for code optimization across method boundaries (at the cost of RAM usage).

In OpenJDK, V8 and JavaScriptCore this adaptive optimization is typically implemented in C++ and requires quite some code. The code is complicated as it needs to optimize but also needs to be able to fall back to the baseline code (deoptimize, e.g. if a method changed or the types passed don't match anymore), e.g. in the middle of a for loop with tons of inlined code (think of Array.map being inlined but then needing to be de-inlined). A nice and long blog post about JSC can be found here, describing On Stack Replacement (OSR).

Long introduction, and now to the new thing. In the OpenSmalltalk VM a new approach called Sista has been picked, and I find it genius. Like with many problems, the point of view and approach really matter. Instead of writing a lot of code in the VM, the optimizer runs next to the application code. The key parts seem to be:

The revelation is the last part. By just optimizing from bytecode to bytecode, the VM remains in charge of creating and managing machine code. The next part is that tooling in the higher language is better, or at least the roundtrip is quicker (edit code and just compile the new method instead of running make, c++, ld). The productivity thanks to the abstraction and tooling is likely higher.

Finally, the OSR is easier as well. In Smalltalk, thisContext (the current stack frame, activation record) is an object as well. At the right point (when the JIT has either written back variables from registers to the stack or at least knows where the values are) one can just manipulate thisContext, create and link new ones, and then resume execution without all the magic needed in other VMs.

Go, Go and escape analysis

Ken Thompson and Rob Pike are well known persons, and their Go programming language is a very interesting system programming language. Like with all new languages I try to get real world experience with the language, the tooling, and what kinds of problems can be solved with it. I have debugged and patched some bigger projects and written two small applications with it.

There is plenty I like. The escape analysis of the compiler is fun (especially now that I know it was translated from the Plan9 C compiler from C to Go), the concurrency model is good (though allowing shared state), the module system makes sense (but makes forking harder than necessary), being able to cross compile to any target from any system.

Knowing a bit of Erlang (and continuing to read the PhD thesis of Joe Armstrong) and being a heavy Smalltalk user, there are plenty of things missing. It starts with vague runtime error messages (e.g. panicslice not having parameters) and goes on to runtime and post-runtime inspection. In Smalltalk, thanks to the abstraction, a lot of hard things are easy, and I would have wished for some of them to be in Go. Serialize all unrecovered panics? Debugging someone else's code seems like pre-1980…

So for many developers Go is a big improvement but for some people with a wider view it might look like a lost opportunity. But that can only be felt by developers that have experienced higher abstraction and productivity.

Unsupervised machine learning

but that is for another dump…

25 Sep 2017 10:11am GMT

02 Sep 2017

Planet Openmoko

Harald "LaF0rge" Welte: Purism Librem 5 campaign

There's a new project currently undergoing crowd funding that might be of interest to the former Openmoko community: The Purism Librem 5 campaign.

Similar to Openmoko a decade ago, they are aiming to build a FOSS based smartphone built on GNU/Linux without any proprietary drivers/blobs on the application processor, from bootloader to userspace.

Furthermore (just like Openmoko) the baseband processor is fully isolated, with no shared memory and with the Linux-running application processor being in full control.

They go beyond what we wanted to do at Openmoko in offering hardware kill switches for camera/microphone/baseband/bluetooth. During Openmoko days we assumed it was sufficient to simply control all those bits from the trusted Linux domain, but of course once that might be compromised, a physical kill switch provides a completely different level of security.

I wish them all the best, and hope they can leave a better track record than Openmoko. Sure, we sold some thousands of phones, but the company quickly died, and the state of software was far from end-user-ready. I think the primary obstacles/complexities are verification of the hardware design as well as the software stack all the way up to the UI.

The budget of ~ 1.5 million seems extremely tight from my point of view, but then I have no information about how much Puri.sm is able to invest from other sources outside of the campaign.

If you're a FOSS developer with a strong interest in a Free/Open privacy-first smartphone, please note that they have several job openings, from Kernel Developer to OS Developer to UI Developer. I'd love to see some talents at work in that area.

It's a bit of a pity that almost all of the actual technical details are unspecified at this point (except RAM/flash/main-cpu). No details on the cellular modem/chipset used, no details on the camera, nor on the bluetooth chipset, wifi chipset, etc. This might be an indication of the early stage of their planning. I would have expected that one had ironed out those questions before looking for funding - but then, it's their campaign and they can run it as they see fit!

I for my part have just put in a pledge for one phone. Let's see what will come of it. In case you feel motivated by this post to join in: Please keep in mind that any crowdfunding campaign bears significant financial risks. So please make sure you made up your mind and don't blame my blog post for luring you into spending money :)

02 Sep 2017 10:00pm GMT

01 Sep 2017

Planet Openmoko

Harald "LaF0rge" Welte: First actual XMOS / XCORE project

For many years I've been fascinated by the XMOS XCore architecture. It offers a surprisingly refreshing alternative to virtually any other classic microcontroller architecture out there. However, despite reading a lot about it years ago, being fascinated by it, and even giving a short informal presentation about it once, I've so far never used it. Too much "real" work imposes a high barrier to spending time learning about new architectures, languages, toolchains and the like.

Introduction into XCore

Rather than having lots of fixed-purpose built-in "hard core" peripherals for interfaces such as SPI, I2C, I2S, etc., the XCore controllers have a combination of:

  • I/O ports for 1/4/8/16/32 bit wide signals, with SERDES, FIFO, hardware strobe generation, etc
  • Clock blocks for using/dividing internal or external clocks
  • hardware multi-threading that presents 8 logical threads on each core
  • xCONNECT links that can be used to connect multiple processors over 2 or 5 wires per direction
  • channels as a means of communication (similar to sockets) between threads, whether on the same xCORE or a remote core via xCONNECT
  • an extended C (xC) programming language to make use of parallelism, channels and the I/O ports

In spirit, it is like a 21st century implementation of some of the concepts established first with Transputers.
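As a rough analogy (this is not XMOS/xC code, just a sketch in Python, with queues standing in for xC channels), two logical threads exchanging data over a channel might look like:

```python
import threading
import queue

# A queue stands in for an xC channel: a communication endpoint
# between two logical threads (similar in spirit to a socket).
chan = queue.Queue()

def producer():
    # Send a few "samples" over the channel, then an end-of-stream marker.
    for sample in [1, 2, 3]:
        chan.put(sample)
    chan.put(None)

def consumer(results):
    # Receive samples until the end-of-stream marker arrives.
    while True:
        sample = chan.get()
        if sample is None:
            break
        results.append(sample)

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [1, 2, 3]
```

On an actual XCore the threads are hardware-scheduled and the channel can cross xCONNECT links to another chip, which is what makes the architecture interesting.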

My main interest in XMOS has been the flexibility that you get in implementing not-so-standard electronics interfaces. For regular I2C, UART, SPI, etc. there is of course no such need. But every so often one encounters some interface that's very rarely found (like the output of an E1/T1 Line Interface Unit).

Also, quite often I run into use cases where it's simply impossible to find a microcontroller with a sufficient number of the related peripherals built-in. Try finding a microcontroller with 8 UARTs, for example. Or one with four different PCM/I2S interfaces, which all can run in different clock domains.

The existing options of solving such problems basically boil down to either implementing it in hard-wired logic (unrealistic, complex, expensive) or going to programmable logic with CPLD or FPGAs. While the latter is certainly also quite interesting, the learning curve is steep, the tools anything but easy to use and the synthesising time (and thus development cycles) long. Furthermore, your board design will be more complex as you have that FPGA/CPLD and a microcontroller, need to interface the two, etc (yes, in high-end use cases there's the Zynq, but I'm thinking of several orders of magnitude less complex designs).

Of course one can also take a "pure software" approach and go for high-speed bit-banging. There are some ARM SoCs that can toggle their GPIO pins quite fast; people have reported rates of around 14 MHz being possible on a Raspberry Pi. However, when running a general-purpose OS in parallel, this kind of speed is hard to sustain reliably over the long term, and the related software implementations are going to be anything but nice to write.
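To put rough numbers on this (my own back-of-the-envelope estimate, using the reported Raspberry Pi figure above and an E1/PCM-style bit clock, not values from any datasheet):

```python
# Back-of-the-envelope: how much headroom does bit-banging leave per bit?
gpio_toggle_rate_hz = 14_000_000   # reported max GPIO toggle rate (Raspberry Pi)
bit_clock_hz = 2_048_000           # E1/PCM-style bit clock

# Number of maximum-speed GPIO operations available per bit period.
ops_per_bit = gpio_toggle_rate_hz / bit_clock_hz
print(round(ops_per_bit, 1))  # 6.8
```

Fewer than seven GPIO operations per bit, before the OS scheduler even gets a say - which is why bit-banging such interfaces from userspace is so fragile.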

So the XCore looks like a nice alternative for a lot of those use cases: where you want a microcontroller with more programmability in terms of its I/O capabilities, but don't want to go full-on into FPGA/CPLD development in Verilog or VHDL.

My current use case

My current use case is to implement a board that can accept four independent PCM inputs (all in slave mode, i.e. clock provided by external master) and present them via USB to a host PC. The final goal is to have a board that can be combined with the sysmoQMOD and which can interface the PCM audio of four cellular modems concurrently.

While XMOS is quite strong in the Audio field and you can find existing examples and app notes for I2S and S/PDIF, I couldn't find any existing code for a PCM slave of the given requirements (short frame sync, 8kHz sample rate, 16bit samples, 2.048 MHz bit clock, MSB first).
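For reference, the interface parameters above fit together like this (a quick arithmetic sanity check, not XMOS code):

```python
# Sanity-check the PCM interface parameters from the requirements.
bit_clock_hz = 2_048_000   # bit clock provided by the external master
sample_rate_hz = 8_000     # frame sync (short frame sync) rate
bits_per_sample = 16       # one 16-bit sample, MSB first

# Bit clock cycles between consecutive frame syncs.
bits_per_frame = bit_clock_hz // sample_rate_hz
print(bits_per_frame)  # 256

# Only 16 of those 256 cycles carry the sample; in classic telephony
# terms the frame has room for this many 16-bit timeslots.
timeslots = bits_per_frame // bits_per_sample
print(timeslots)  # 16
```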

I wanted to get a feeling how well one can implement the related PCM slave. In order to test the slave, I decided to develop the matching PCM master and run the two against each other. Despite having never written any code for XMOS before, nor having used any of the toolchain, I was able to implement the PCM master and PCM slave within something like ~6 hours, including simulation and verification. Sure, one can certainly do that in much less time, but only once you're familiar with the tools, programming environment, language, etc. I think it's not bad.

The biggest problem was that the clock phase for a clocked output port cannot be configured, i.e. the XCore insists on always clocking out a new bit at the falling edge, while my use case of course required the opposite: clocking out new bits at the rising edge. I had to use a second clock block to generate an inverted clock in order to achieve that goal.
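The idea behind the workaround can be illustrated with a small model (plain Python, just to show the logic): falling edges of an inverted clock coincide with rising edges of the original, so driving the output port from the inverted clock makes the hardware's "clock out on falling edge" behave like "clock out on rising edge":

```python
# Model a clock as a list of levels sampled at twice the bit rate.
clock = [0, 1, 0, 1, 0, 1, 0, 1]
inverted = [1 - level for level in clock]

def falling_edges(signal):
    # Indices where the signal transitions from high to low.
    return [i for i in range(1, len(signal))
            if signal[i - 1] == 1 and signal[i] == 0]

def rising_edges(signal):
    # Indices where the signal transitions from low to high.
    return [i for i in range(1, len(signal))
            if signal[i - 1] == 0 and signal[i] == 1]

# Falling edges of the inverted clock line up exactly with
# rising edges of the original clock.
print(falling_edges(inverted) == rising_edges(clock))  # True
```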

Beyond that 4xPCM use case, I also have other ideas like finally putting the osmo-e1-xcvr to use by combining it with an XMOS device to build a portable E1-to-USB adapter. I have no clue if and when I'll find time for that, but if somebody wants to join in: Let me know!

The good parts

Documentation excellent

I found the various pieces of documentation extremely useful and very well written.

Fast progress

I was able to make fast progress in solving the first task using the XMOS / Xcore approach.

Soft Cores developed in public, with commit log

You can find plenty of soft cores that XMOS has been developing on github at https://github.com/xcore, including the full commit history.

This type of development is a big improvement over what most vendors of smaller microcontrollers like Atmel are doing (infrequent tar-ball code-drops without commit history). And in the case of the classic uC vendors, we're talking about drivers only. In the XMOS case it's about the entire logic of the peripheral!

You can for example see that for their I2C core, the very active commit history goes back to January 2011.

xSIM simulation extremely helpful

The xTIMEcomposer IDE (based on Eclipse) contains extensive tracing support and an extensible, near cycle-accurate simulator (xSIM). I implemented a PCM master and a PCM slave in xC and was able to simulate the program while looking at the waveforms of the logic signals between the two.

The bad parts

Unfortunately, my extremely enthusiastic reception of XMOS has suffered quite a bit over time. Let me explain why:

Hard to get XCore chips

While the product portfolio on the XMOS website looks extremely comprehensive, the vast majority of the parts are not available from stock at distributors. You won't even get samples, and lead times are 12 weeks (!). If you check at Digi-Key, they have listed a total of 302 different XMOS controllers, but only 35 of them are in stock. Only 15 of those are USB-capable. With other distributors like Farnell it's even worse.

I've seen this with other semiconductor vendors before, but never to such a large extent. Sure, some packages/configurations are not standard products, but having only 11% of the portfolio actually available is pretty bad.

In such situations, where it's difficult to convince distributors to stock parts, it would be a good idea for XMOS to stock parts themselves and provide samples / low quantities directly. Not everyone is able to order large trays and/or willing to wait 12 weeks, especially during the R&D phase of a board.

Extremely limited number of single-bit ports

In the smaller / lower pin-count parts, like the XU[F]-208 series in QFN/LQFP-64, the number of usable, exposed single-bit ports is ridiculously low. Out of the total 33 I/O lines available, only 7 can be used as single-bit I/O ports. All other lines can only be used for 4-, 8-, or 16-bit ports. If you're dealing primarily with serial interfaces like I2C, SPI, I2S, UART/USART and the like, those parallel ports are of no use, and you have to go for a mechanically much larger part (like the XU[F]-216 in TQFP-128) in order to have a decent number of single-bit ports exposed. Those parts also come with twice the number of cores, memory, etc., which you don't need for slow-speed serial interfaces...

Insufficient number of exposed xLINKs

The smaller parts like XU[F]-208 only have one xLINK exposed. Of what use is that? If you don't have at least two links available, you cannot daisy-chain them to each other, let alone build more complex structures like cubes (at least 3 xLINKs).

So once again you have to go to much larger packages, where you will not use 80% of the pins or resources, just to get the required number of xLINKs for interconnection.

Change to a non-FOSS License

XMOS deserved a lot of praise for releasing all their soft IP cores as Free / Open Source Software on github at https://github.com/xcore. The License has basically been a 3-clause BSD license. This was a good move, as it meant that anyone could create derivative versions, whether proprietary or FOSS, and there would be virtually no license incompatibilities with whatever code people wanted to write.

However, to my very big disappointment, more recently XMOS seems to have changed their policy on this. New soft cores (released at https://github.com/xmos as opposed to the old https://github.com/xcore) are made available under a non-free license. This license is nothing like the BSD 3-clause license or any other Free Software or Open Source license. It restricts use of the code to XMOS products, requires the user to contribute fixes back to XMOS, and contains references to import and export control. This license is incompatible with probably any FOSS license in existence, making it impossible to write FOSS code on XMOS while using any of the new soft cores released by XMOS.

But even beyond that license change, not even all code is provided in source code format anymore. The new USB library (lib_usb) is provided as binary-only library, for example.

If you know anyone at XMOS management or XMOS legal with whom I could raise this topic of license change when transitioning from older sc_* software to later lib_* code, I would appreciate this a lot.

Proprietary Compiler

While a lot of the toolchain and IDE is based on open source (Eclipse, LLVM, ...), the actual xC compiler is proprietary.

Further Reading

01 Sep 2017 10:00pm GMT

12 Nov 2011

feedPlanet Linux-to-go

Paul 'pfalcon' Sokolovsky: Shopping for 3D TV...

Shopping for 3D TV (again), a few findings:

12 Nov 2011 6:55pm GMT

Paul 'pfalcon' Sokolovsky: Hacking Luxeon SP-1

I'm finally going to get an Arduino, and while I'm choosing a flavor and waiting for it, I can't help disassembling all the devices I have at home, each time saying: "This must have an Arduino inside!" (meaning of course that I expect it to be based on a general-purpose MCU). Gosh, I usually get a "blob chip" (an uncased chip with a blob of epoxy on top).

Well, I finally had my expectations fulfilled - the Luxeon SP-1 voltage stabilizer/cutter features an ATMEGA48V-10PU (Flash: 4k, EEPROM: 256, RAM: 512). Not only that, it is installed in a DIP socket! Buy from Luxeon, they're hacker-friendly ;-).

I actually bought the device for the wattmeter it features (a fact that is hard to figure out from the common specs found in shops; I accidentally read somebody mentioning it on a forum). The wattmeter is of course not brilliant - for a lamp rated 100W it shows 88W, and for more powerful equipment (like a perforator/hammer drill) it understates wattage even more (maybe it's the difference between real and apparent power, i.e. the power factor).
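That speculation can be made concrete (my own illustration with hypothetical numbers - the voltage, current and power factor below are assumptions, not measurements from the SP-1):

```python
# Real vs. apparent power: for a non-resistive load (e.g. a motor),
# real power = apparent power * power factor, so a meter measuring
# real power reads less than the plain V*I product would suggest.
voltage_v = 230.0      # mains voltage (assumed)
current_a = 0.5        # drawn current (hypothetical)
power_factor = 0.7     # typical-ish for a small motor (assumed)

apparent_power_va = voltage_v * current_a
real_power_w = apparent_power_va * power_factor
print(apparent_power_va)  # 115.0
print(real_power_w)       # 80.5
```

So a 115 VA load would legitimately show only ~80 W on a real-power meter, which would explain why the understatement grows for motorized equipment.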

Still, for $17 you get an Arduino-alike with a voltage/current sensor and hacking possibilities. Woot!

BOM:
High-power board:

MCU board:


12 Nov 2011 5:58pm GMT

10 Nov 2011

feedPlanet Linux-to-go

Paul 'pfalcon' Sokolovsky: Links for November 2011

Kindle:


Linux kernel module tricks:

10 Nov 2011 3:21pm GMT

19 Oct 2011

feedPlanet OpenEZX

Antonio Ospite: Gnome 3: go to Shell? Not just yet, thanks.

In Debian Unstable the transition to Gnome 3 is taking place; when Gnome 3.0 first came out, some unnamed geeky users complained loudly about the design decisions of the development team to push strongly towards gnome-shell as the new default UI; gnome-shell was designed focusing on usability (usability is a metric relative to a certain target audience, BTW) and simplicity, hiding a lot of details from the users. Obviously you can never make everyone happy, so some of us simply happened to be "out of target": you know us computer people (*cough cough*), we like to be in charge and control The Machine... I must admit I still don't have a definitive opinion about the gnome-shell concept; for now I just know that it does not suit me. I am going to try it eventually, and maybe I'll get used to it, but in the meantime I need my desktop back just like I shaped it through the years; can this be done without losing all the good Gnome technologies (Empathy above all)?

To be completely fair, I have to say that there is little to complain about with the Gnome developers: we can still get our good old GNOME desktop fully back by using the fall-back mode based on gnome-panel and live happily ever after. Let's take a look at how this can be accomplished.

NOTE: GNOME people state that the fall-back mode is meant for systems with older graphic cards which cannot run gnome-shell, however it can very well be seen as a good opportunity for those who do not want to run gnome-shell just yet.

Getting back to the topic: some minor touches are needed to make the panel look more like what we are used to, maybe some of these settings could even become default for fall-back mode, we'll see.

First, enable fall-back mode (on Debian there is a dedicated session you can choose from the Log-in Manager for that) and change some desktop settings, in a terminal type:

$ gsettings set org.gnome.desktop.session session-name 'gnome-fallback'
$ gsettings set org.gnome.desktop.interface 'menus-have-icons' true
$ gsettings set org.gnome.desktop.interface 'buttons-have-icons' true
$ gsettings set org.gnome.desktop.background 'show-desktop-icons' true

gnome-tweak-tool can be used for some of these settings like shown in the attached images.

Then rearrange the applets on the panel as you please (use Alt-RightClick to access the panel properties), and fix the theming using this patch to have a light panel again (against gnome-themes-standard=3.0.2-1):

$ mkdir $HOME/.themes
$ cd $HOME/.themes
$ cp -r /usr/share/themes/Adwaita Adwaita-fallback
$ cd Adwaita-fallback
$ patch -p1 < $HOME/adwaita-fallback-panel-theme.patch
$ gsettings set org.gnome.desktop.interface 'gtk-theme' 'Adwaita-fallback'

Some final touches for the Metacity window manager and to the clock applet, and we are all set:

$ gconftool-2 --type string --set /apps/metacity/general/focus_mode mouse
$ gconftool-2 --type boolean --set /apps/metacity/general/compositing_manager true
$ gconftool-2 --type string --set /apps/panel3-applets/clock/custom_format '<span color="#333">%a %d %b</span> <b>%H:%M</b>'
$ gconftool-2 --type string --set /apps/panel3-applets/clock/format custom

Ah, in the new gnome-panel based on Gtk3 there are still some details to take care of, I hope issues like that will be addressed and that the panel will be supported for quite some time.

Attached images:
Gnome Shell default look on Debian
gnome-tweak-tool show desktop icons
Gnome 3 fall-back mode default look on Debian
Gnome 3 fall-back mode applets rearranged
Gnome 3 fall-back mode rethemed to have a light panel
Attached files:
text/x-diff iconAdwaita theme patch for fall-back mode

19 Oct 2011 9:37pm GMT

09 Jun 2011

feedPlanet OpenEZX

Michael Lauer: The Eagle Has Landed!

After letting us wait for a bit longer than scheduled (13 days), the hospital induced labor. For the first couple of hours everything went just perfectly, but then the little one got stuck on the way and we had to resort to a cesarean section. Lara Marie Lauer was born on the 8th of June at 04:41 (AM), weighing 3460 grams and measuring 49 cm.

Mummy was still in intensive care, and so they gave her to me. I can't express the feelings I had in that very moment. I'm still kind of overwhelmed every time I see her. Thanks to all of you who waited anxiously with me and those who prayed for us. The most important tasks for the near future are getting Mummy to recover and helping Lara Marie become accustomed to us and the rest of the outside world.

Please bear with me if for a while I'm not as responsive as usual :)

Lara Marie Lauer

09 Jun 2011 4:06pm GMT

30 May 2011

feedPlanet OpenEZX

Michael Lauer: German Post on time!

And now for something completely different… while we are all waiting for my baby to arrive (she was scheduled for the 25th of May), she just received her first greeting card - together with a personalized bib and a towel (with an integrated hood - pretty fancy!) from my good friends at #openmoko-cdevel.

Guys, seeing this card was very heartwarming - it means a lot to me that you share my anticipation, thanks a lot! And I'm 100% sure she will appreciate her gifts… now let's cross fingers it won't take much longer… waiting is the hardest part of it :)

Yours,

Mickey.

30 May 2011 8:54am GMT