13 Jun 2019


WIP: changing the backend for contacts in Ubports

More than one year has passed since the initial announcement of my plan to investigate using a different backend for contact storage. If you want to get a better understanding of the plan, that mail is still a good read -- not much has changed since then, planning-wise.

The reason for this blog post is to give a small update on what has happened since then, and as a start nothing can be better than a couple of screenshots:

Adding CardDAV accounts in the Addressbook application
Aggregated contact details from multiple sources

In other words, contact synchronisation works, both over the new CardDAV protocol (for which we'll have preconfigured setups for NextCloud and OwnCloud accounts) and with Google Contacts, for which we are now using a different engine. What you see in the second screenshot (although it's admittedly not obvious at all) is that the new qtcontacts-sqlite backend performs automatic contact merging based on some simple heuristics, meaning that when synchronising the same contact from multiple sources you should not end up with a multitude of semi-identical copies of the contact, but with a single one carrying all the aggregated details.

Before you get too excited, I have to say that this code is pre-alpha quality and that it's not even available for testing yet. The next step is indeed to set up CI so that the packages get automatically built and published to a public repository, at which point I'll probably post another update here on my blog.

The boring stuff

And now some detail for those who might wonder why this feature is not ready yet, or would like to get an idea on the time-frame for its completion.

Apart from a chronic lack of time on my part, the feature's complexity is due to the large number of components involved:

  • qtcontacts-sqlite: the QtContacts backend we are migrating to. This is a backend for the QtContacts API (used by our Addressbook application) which uses a SQLite database as storage for your contacts.
  • buteo-sync-plugin-carddav: the CardDAV plugin for Buteo (our synchronisation manager). This plugin is loaded by Buteo and synchronises the contacts between a CardDAV remote source and the qtcontacts-sqlite database.
  • buteo-sync-plugins-social: a Buteo plugin which can synchronise contacts from a multitude of sources, including Google, Facebook and Vk. At the moment we only care about Google, but once this feature has landed we can easily extend it to work with the other two as well.
  • address-book-app: this is our well-known Contacts application. It needs some minor changes to adapt to the qtcontacts-sqlite backend and to support the creation of new CardDAV, NextCloud and OwnCloud accounts.
  • QtPim: the contacts and calendar API developed by the Qt project. Our Contacts application is using the front-end side of this API, and the qtcontacts-sqlite component implements the backend side. There are some improvements proposed by Jolla, which we need to include in order to support grouping contacts by their initials.

The other tricky aspect is that the first three projects are maintained by Jolla as part of Sailfish OS, and while on one side this means that we can share the development and maintenance burden with Jolla, on the other side of the coin it means that we need to apply extra care when submitting changes, in order not to step on each other's toes. Specifically, Sailfish OS is using a much older version of QtPim than Ubports is, and the APIs of the two versions have changed in incompatible ways, so it's nearly impossible to have a single code base working with both versions of QtPim. Luckily git supports branches, and Chris from Jolla was kind enough to create a branch for us in their upstream repository where I've proposed our changes (and there are a lot of them!).

However, this is not as bad as it sounds, and the fact that I have a roughly working version on my development device is a good sign that things are moving forwards.


13 Jun 2019 2:59pm GMT

13 May 2019


Ubuntu on the Lenovo D330

The Lenovo D330 2-in-1 convertible (or netbook, as we used to say) is a quite interesting device. It is based on Intel's current low-power core platform, Gemini Lake (GLK), and thus offers great battery life and a fanless design.

This is similar to what you would get from an ARM-based tablet. However, being x86-based and Windows-focused, we can expect to get Ubuntu Linux running without requiring any out-of-tree drivers or custom kernels that never get updated, as we are used to from the ARM world.
This post is about my experience doing exactly that.

For this I will use the most recent Ubuntu 19.04 release, as it contains fractional scaling support, which is essential for a 10″ 1920x1200px device. Also, the orientation sensor (mostly) works out of the box, in contrast to the 18.04 LTS release.

Getting to the desktop

After booting the live USB, you will notice that the screen stays black. This is caused by the i915 driver not correctly setting up the internal (DSI) screen (see FDO#109267).

A quick workaround is to rotate the device, which causes the i915 driver to re-initialise the screen; at that point it will work. Alternatively, you can suspend/resume the device.

Wrong screen orientation

Once on the desktop, you will notice that it is rotated by 90°, which is caused by a missing mount-matrix entry in the udev hwdb shipped with 19.04.

First we manually rotate the screen to landscape mode by opening a terminal via Ctrl+Alt+T and typing

xrandr -o right

Now we can continue to add the missing accelerometer mount matrix. Create the file /etc/udev/hwdb.d/61-sensor-local.hwdb with the following content:

# IdeaPad D330
sensor:modalias:acpi:BOSC0200*:dmi:*:svnLENOVO:pn81H3:*
    ACCEL_MOUNT_MATRIX=0, 1, 0; -1, 0, 0; 0, 0, 1

To immediately apply the changes do

sudo systemd-hwdb update
sudo udevadm trigger -v -p DEVNAME=/dev/iio:device0
sudo service iio-sensor-proxy restart

Touch input rotation

Now you can rotate the device and the screen content will be correctly oriented. However, if you try to use the touchscreen, you will notice that it only behaves correctly in portrait mode.

This is caused by a bug in GNOME's mutter introduced in the 3.32 release. It will be fixed in the 3.32.2 point release. In the meantime, you can use the updated mutter packages, where I have backported the fix.

Enabling fractional scaling

Finally, we need to magnify the UI by 50% in order not to damage our eyes. Unfortunately, fractional scaling in Ubuntu 19.04 is hidden by default. To enable it, enter the following in a console:

gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer', 'x11-randr-fractional-scaling']"

This will enable fractional scaling for both Wayland and Xorg sessions. You will get a noticeable performance hit when using Xorg, but you should probably choose it regardless, given the issues that remain on Wayland.

This probably also helps explain why Xorg is still the default on the majority of Linux distributions.

What does not work

Surprisingly little, actually. The only remaining issue is that you will lose sound after a suspend/resume cycle, due to a bug in snd-hda-intel-realtek.

What does work

State of touch UI on Linux

However, there is more to using the device on a daily basis than getting the hardware to work - you will also have to fight the Linux apps. Currently only GTK3 apps have some understanding of touch events - most other toolkits need updating. To estimate how long this will take, keep in mind that touch on Linux is a niche inside a niche.

Firefox for instance does not understand touch events and you will have to manually enable them as described here.
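At the time of writing, enabling touch in Firefox essentially means turning on xinput2 support. A minimal sketch of one way to do this (an assumption on my side; the linked instructions are authoritative) is to set the MOZ_USE_XINPUT2 environment variable before launching Firefox:

MOZ_USE_XINPUT2=1 firefox

To make it permanent, you can add export MOZ_USE_XINPUT2=1 to ~/.profile.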

But even then you will not be able to drag the window around: thanks to client-side decorations (CSD), window placement is handled by Firefox itself, but it does not react to touch events (#1530070; the same applies to Chrome, by the way).

Next, all apps continue to launch in windowed mode. On a device with limited screen space, you will soon understand why everything is full-screen on Android (and in Windows' tablet mode).

Finally, the GNOME on-screen keyboard feels quite sluggish and lacks visual feedback, so you will often end up with missing letters.

So in summary it is not a very pleasant experience right now.

Fortunately, GNOME/Ubuntu are supposedly on it for the next release.

Until then I would recommend sticking with Windows 10; fractional scaling just works there, and you will really value its on-screen keyboard and tablet mode when the physical keyboard is detached.


13 May 2019 5:03pm GMT

24 Apr 2019


A critical view on the blockchain

At the beginning of this month I participated in the foss-north conference in Gothenburg, and took the stage to give a short presentation on the blockchain technology. Given that my talk was somewhat critical of the blockchain (or rather, of the projects using it without due reason), I was prepared to receive a wave of negative remarks, assuming that all the hype surrounding this technology would have infected a good part of my audience as well. I was therefore positively surprised when several people came to me afterwards to express their appreciation for my speech, appreciation that now makes me confident enough to share the video of the presentation here too:

I want to publicly thank Johan Thelin and all the other foss-north staff and volunteers who organized such a successful conference. They also managed to get the video recordings out in a surprisingly short time. Indeed, the above video is taken from the foss-north YouTube channel, which I recommend visiting, as there were a lot of good talks at the conference; the topics were so varied that I'm sure you'll find at least a couple of talks of interest to you.


24 Apr 2019 7:59pm GMT

13 Mar 2019


Ubports at the LinuxPiter conference

Last November I was invited to talk at the LinuxPiter conference. I gave a presentation about the Ubports project, to which I still contribute in what little spare time I have.

The video recording from the conference has finally been published:

(there's also a version in Russian)

There was not a big audience, to be honest, but those that were there expressed a lot of interest in the project.


13 Mar 2019 4:07pm GMT

25 Feb 2019


Review of Igalia’s Graphics activities (2018)

This is the first report about Igalia's activities around Computer Graphics, specifically 3D graphics and, in particular, the Mesa3D Graphics Library (Mesa), focusing on the year 2018.

GL_ARB_gl_spirv and GL_ARB_spirv_extensions

GL_ARB_gl_spirv is an OpenGL extension whose purpose is to enable an OpenGL program to consume SPIR-V shaders. GL_ARB_spirv_extensions, in turn, provides a mechanism by which an OpenGL implementation can announce which particular SPIR-V extensions it supports, a nice complement to GL_ARB_gl_spirv.

As both extensions, GL_ARB_gl_spirv and GL_ARB_spirv_extensions, are core functionality in OpenGL 4.6, the drivers need to provide them in order to be compliant with that version.

Although Igalia picked up the already started implementation of these extensions in Mesa back in 2017, 2018 is the year in which we put in a great deal of work to provide the push needed to get all the remaining bits in place. Much of this effort provides general support to all the drivers under the Mesa umbrella but, in particular, Igalia implemented the backend code for Intel's i965 driver (gen7+). Assuming that the review process for the remaining patches goes without important bumps, it is expected that the whole implementation will land in Mesa during the beginning of 2019.

Throughout the year, Alejandro Piñeiro gave status updates of the ongoing work through his talks at FOSDEM and XDC 2018. This is a video of the latter:

ETC2/EAC

The ETC and EAC formats are lossy compressed texture formats used mostly in embedded devices. OpenGL implementations of version 4.3 and upwards, and OpenGL ES implementations of version 3.0 and upwards, must support them in order to be conformant with the standard.

Most modern GPUs are able to work directly with the ETC2/EAC formats. Implementations for older GPUs that don't have that support but want to be conformant with the latest versions of the specs need to provide that functionality through the software parts of the driver.

During 2018, Igalia implemented the missing bits to support GL_OES_copy_image in Intel's i965 for gen7+, while gen8+ was already complying through its HW support. As we were writing this entry, the work has finally landed.

VK_KHR_16bit_storage

Igalia finished the work to provide support for the Vulkan extension VK_KHR_16bit_storage into Intel's Anvil driver.

This extension allows the use of 16-bit types (half floats, 16-bit ints, and 16-bit uints) in push constant blocks and buffers (shader storage buffer objects). This feature can help to reduce the memory bandwidth for Uniform and Storage Buffer data accessed from the shaders and/or optimize Push Constant space, of which there are only a few bytes available, making it a precious shader resource.

shaderInt16

Igalia added Vulkan's optional feature shaderInt16 to Intel's Anvil driver. This new functionality provides the means to operate with 16-bit integers inside a shader which, ideally, would lead to better performance when you don't need a full 32-bit range. However, not all HW platforms may have native support, still needing to run in 32-bit and, hence, not benefiting from this feature. Such is the case for operations associated with integer division in the case of Intel platforms.

shaderInt16 complements the functionality provided by the VK_KHR_16bit_storage extension.

SPV_KHR_8bit_storage and VK_KHR_8bit_storage

SPV_KHR_8bit_storage is a SPIR-V extension that complements the VK_KHR_8bit_storage Vulkan extension to allow the use of 8-bit types in uniform and storage buffers, and push constant blocks. Similarly to the VK_KHR_16bit_storage extension, this feature can help to reduce the needed memory bandwidth.

Igalia implemented its support into Intel's Anvil driver.

VK_KHR_shader_float16_int8

Igalia implemented the support for VK_KHR_shader_float16_int8 into Intel's Anvil driver. This is an extension that enables Vulkan to consume SPIR-V shaders that use Float16 and Int8 types in arithmetic operations. It extends the functionality included with VK_KHR_16bit_storage and VK_KHR_8bit_storage.

In theory, applications that do not need the range and precision of regular 32-bit floating point and integers, can use these new types to improve performance. Additionally, its implementation is mostly API agnostic, so most of the work we did should also help to have a proper mediump implementation for GLSL ES shaders in the future.

The review process for the implementation is still ongoing and is on its way to land in Mesa.

VK_KHR_shader_float_controls

VK_KHR_shader_float_controls is a Vulkan extension which allows applications to query and override the implementation's default floating point behavior for rounding modes, denormals, signed zero and infinity.

Igalia has coded its support into Intel's Anvil driver and it is currently under review before being merged into Mesa.

VkRunner

VkRunner is a Vulkan shader tester based on shader_runner in Piglit. Its goal is to make it feasible to run test scripts that are as similar as possible to Piglit's shader_test format.

Igalia initially created VkRunner as a tool to get more test coverage during the implementation of GL_ARB_gl_spirv. Soon it was clear that it was useful well beyond this specific extension, as a generic way of testing SPIR-V shaders.

Since then, VkRunner has been enabled as an external dependency to run new tests added to the Piglit and VK-GL-CTS suites.

Neil Roberts introduced VkRunner at XDC 2018. This is his talk:

freedreno

During 2018, Igalia has also started contributing to the freedreno Mesa driver for Qualcomm GPUs. Among the work done, we have tackled multiple bugs identified through the usual testing suites used in the graphic drivers development: Piglit and VK-GL-CTS.

Khronos Conformance

The Khronos conformance program is intended to ensure that products that implement Khronos standards (such as OpenGL or Vulkan drivers) do what they are supposed to do and they do it consistently across implementations from the same or different vendors.

This is achieved by producing an extensive test suite, the Conformance Test Suite (VK-GL-CTS or CTS for short), which aims to verify that the semantics of the standard are properly implemented by as many vendors as possible.

In 2018, Igalia has continued its work ensuring that the Intel Mesa drivers for both Vulkan and OpenGL are conformant. This work included reviewing and testing patches submitted for inclusion in VK-GL-CTS and continuously checking that the drivers passed the tests. When failures were encountered we provided patches to correct the problem either in the tests or in the drivers, depending on the outcome of our analysis or, even, brought a discussion forward when the source of the problem was incomplete, ambiguous or incorrect spec language.

The most important result out of this significant dedication has been successfully passing conformance applications.

OpenGL 4.6

Igalia helped make Intel's i965 driver conformant with OpenGL 4.6 from day zero. This was a significant achievement since, besides Intel's Mesa driver, only NVIDIA managed to do this.

Igalia specifically contributed to achieve the OpenGL 4.6 milestone providing the GL_ARB_gl_spirv implementation.

Vulkan 1.1

Igalia also helped make Intel's Anvil driver conformant with Vulkan 1.1 from day zero.

Igalia specifically contributed to achieve the Vulkan 1.1 milestone providing the VK_KHR_16bit_storage implementation.

Mesa Releases

Igalia continued the work it was already carrying out in Mesa's Release Team throughout 2018. This effort involved continuous dedication to tracking the general status of Mesa against the usual test suites and benchmarks, but also to reacting quickly to detected regressions, especially coordinating with the Mesa developers and the distribution packagers.

The work was most visibly reflected in the multiple bugfix releases we published, as well as in doing the branching and creating a feature release.

CI

Continuous Integration is a must in any serious SW project. In the case of API implementations it is even critical since there are many important variables that need to be controlled to avoid regressions and track the progress when including new features: agnostic tests that can be used by different implementations, different OS platforms, CPU architectures and, of course, different GPU architectures and generations.

Igalia has kept a sustained effort to keep Mesa (and Piglit) CI integrations in good health with an eye on the reported regressions to act immediately upon them. This has been a key tool for our work around Mesa releases and the experience allowed us to push the initial proposal for a new CI integration when the FreeDesktop projects decided to start its migration to GitLab.

This work, along with the one done on the Mesa releases, led to a shared presentation, given by Juan Antonio Suárez during XDC 2018. This is the video of the talk:

XDC 2018

2018 was the year that saw A Coruña hosting the X.Org Developer's Conference (XDC) and Igalia as Platinum Sponsor.

The conference was organized by GPUL (Galician Linux User and Developer Group) together with University of A Coruña, Igalia and, of course, the X.Org Foundation.

Since A Coruña is the town in which the company originated and where we have our headquarters, Igalia had a key role in the organization, which benefited greatly from our vast experience running events. Moreover, several Igalians joined the conference crew and, as mentioned above, we delivered talks around GL_ARB_gl_spirv, VkRunner, and Mesa releases and CI testing.

The feedback from the attendees was very rewarding and we believe the conference was a great event. Here you can see the Closing Session speech given by Samuel Iglesias:

Other activities

Conferences

As usual, Igalia was present in many graphics related conferences during the year:

New Igalians in the team

Igalia's graphics team kept growing. Two new developers joined us in 2018:

Conclusion

Thank you for reading this blog post and we look forward to more work on graphics in 2019!

Igalia


25 Feb 2019 2:50pm GMT

30 Jan 2019


Looking forward to your comments

It took a few days, but I've finally migrated my site to Nikola. I used to have blog.mardy.it served by Google's Blogger, the main sections of www.mardy.it generated with Jekyll, the image gallery served by the old and glorious Gallery2, plus a few leftovers from the old Drupal site.

discussion by Nicolas Alejandro, on Flickr

While Jekyll is cool, I was immediately captivated by Nikola's ease of use and by its developers' promptness in answering questions in the forum. Also, one nice thing about Nikola (and Pelican, too) which I forgot to mention in my previous post is its support for multilingual sites. I guess I'll have to translate this post into Interlingua too, to give you a demonstration. :-)

Anyway, while I've fallen in love with static site generators, I still would like to give people the chance of leaving comments. Services like Disqus are easy to integrate, but given the way they can be (ab)used to track users, I preferred to go for something self-hosted. So, enter Isso.

Isso is a Python server to handle comments; it's simple to install and configure, and offers some nice features like e-mail notifications on new replies.

My Isso setup

Integrating Isso with Nikola was relatively easy, but the desire to keep a multilingual site and some hosting limitations make the process worth spending a couple of words on.

FastCGI

First, my site is hosted by Dreamhost with a very basic subscription that doesn't allow me to keep long-running processes. After reading Isso's quickstart guide I was left quite disappointed, because it seemed that the only way to use Isso is to have it running all the time, or to have an nginx server (Dreamhost offers Apache). Luckily, that's not quite the case, and more deployment approaches are described on a separate page, including one for FastCGI (which is supported by Dreamhost). Those instructions are a bit wrong, but yours truly submitted some amendments to the documentation which will hopefully go live soon.

Importing comments

Isso can import comments from other sites, but an importer for Blogger (a.k.a. blogspot.com) was missing. So I wrote a quick and dirty tool for that job, and shared it in case it could be useful to someone else, too.

Multilingual sites

The default configuration of Nikola + Isso binds the comments to the exact URL that they were entered on. What I mean is that if your site supports multiple languages, and a user has entered a comment on an entry while visiting the English version of the site, users visiting the Italian version of the site would see the same blog entry, but without that comment. That happens regardless of whether the blog entry has been translated into multiple languages or not: it's enough that the site has been configured for multiple languages.

My solution to fix the issue could not be accepted into Nikola as it would break old comments in existing sites, but if you are starting a new multilingual site you should definitely consider it.
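For reference, the general idea is to bind comments to a language-independent identifier rather than to the full, language-prefixed URL. Isso itself supports this through the data-isso-id attribute on the comment thread element; here is a minimal sketch of the markup (the identifier value is purely hypothetical, pick whatever canonical form fits your site):

<section id="isso-thread" data-isso-id="/posts/looking-forward-to-your-comments/"></section>

With such an override, the English and Italian versions of the same entry share a single comment thread.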

Testers welcome

Given that I've deployed Isso as a CGI, it's understandable that it's not the fastest thing ever: it takes some time to startup, so comments don't appear immediately when you open a page. However, once it's started it stays alive for several seconds, and that seems to help with performance when commenting.

Anyway, the real reason why I've written all this is to kindly ask you to write a comment on this post :-) Extra points if you leave your e-mail address and enable the reply notifications, and let me know if you receive a notification once I reply to your comment. As far as I understand, you won't get a notification when someone adds an unrelated comment, but only when the "reply" functionality is used.

But really, should the commenting system be completely broken, I'm sure you'll find a way to contact me, if you need to. :-)


30 Jan 2019 8:21pm GMT

20 Jan 2019


Choosing a static site generator

In the last few days I've been writing a simple website for Imaginario. I'm a terrible site designer, and I can't really say that I enjoy writing websites, but it's something that from time to time people might need to do. While the PhotoTeleport website is built with Jekyll, this time I decided to try some other static site generator, in order to figure out if Jekyll is indeed the best for me, or if there are better alternatives for my (rather basic) needs.

I set out trying a couple of Python-based generators, Pelican and Nikola. Here is a brief review of them (and of Jekyll), in case it helps someone else make their own choice.

Jekyll

I've been using it for several months for the PhotoTeleport website, which features a news section and a handful of static pages. It does the job very well and I don't have any major complaints. It's very popular and there are plenty of plugins to customize its behaviour or add new functionality. The documentation is sufficient for basic usage, and information on how to solve more specific issues can easily be found on the internet.

My only issue is that it's not totally intuitive to use, and in order to customize the interactions for your own needs you have to write your own scripts - at least, I didn't find a ready-made solution to create a new post or to deploy the generated content to my site.

Pelican

My first impression of Pelican was extremely positive: it's very easy to set up and start a blog. It's also quite popular, even though not as much as Jekyll, and there are many themes for it. By looking at the themes, though, I quickly realized that Pelican is meant to be used for blogs, and not for simple static sites. I'm almost sure that there must be a way to use it to create a static site, maybe with some tweaking, but I couldn't find information about this in its documentation. A quick search on the internet didn't help either, so I gave up and moved on to the next one.

If I had to write a blog I'd certainly consider it, though.

Nikola

Nikola is definitely less popular than Jekyll or Pelican, at least if we trust the number of stars and forks on GitHub, but it's still a popular and maintained project, with many plugins. Like Jekyll, it can handle both blogs and sites, or a combination of the two. It's well documented, the people in the forum are helpful, and its command line interface is simpler and more intuitive than Jekyll's. Also, the live preview functionality seems to be more advanced than Jekyll's, in that the browser is told to automatically reload the page whenever the site is rebuilt.

You can see my progress with the Imaginario website by inspecting the commits in its repository; you'll see how easy it was to set it up, and hopefully following my steps you'll save some time should you decide to create your own site with Nikola.

Overall, I'd rate Jekyll and Nikola on the same level: Jekyll wins for the wider community and amount of available plugins, while Nikola wins for the better command line interactions, and the fact that it's in Python gives me better confidence should I ever need to modify it deeply (though, admittedly, the latter is just a personal preference - Ruby developers will say the opposite).


20 Jan 2019 11:44am GMT

11 Dec 2018


A Pathetic Human Being

A Venetian gondoliere thought it a good idea to decorate his gondola with fascist symbols, yet he can't handle that others think it not a good "joke"

The post A Pathetic Human Being appeared first on René Seindal.


11 Dec 2018 3:40pm GMT

06 Dec 2018


Venice Kayak

Kayaking in Venice is a unique experience. Venice Kayak offers guided kayak tours in the city of Venice and in the lagoon.

The post Venice Kayak appeared first on René Seindal.


06 Dec 2018 4:34pm GMT

Venice Street Photography

I have put up a separate site with my street photography from Venice

The post Venice Street Photography appeared first on René Seindal.


06 Dec 2018 4:29pm GMT

Photo walks in Venice

The locals know Venice

The post Photo walks in Venice appeared first on René Seindal.


06 Dec 2018 4:18pm GMT

19 Nov 2018


Brexit from a distance

Brexit doesn't influence me directly, but being Danish living in Italy means my existence relies on freedom of movement. Brexit attacks that freedom.

The post Brexit from a distance appeared first on René Seindal.


19 Nov 2018 6:54pm GMT

08 Nov 2018


From Blender to OpenCV Camera and back

In case you want to employ Blender for computer vision, e.g. for generating synthetic data, you will need to map the parameters of a calibrated camera to Blender, as well as map the Blender camera parameters to those of a calibrated camera.

Calibrated cameras typically build on the pinhole camera model, at the core of which are the camera matrix and the image size in pixels:

K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \quad (w, h)

But if we look at the Blender camera, we find lots of non-standard and duplicate parameters, some with arbitrary units and some without any: the focal length in millimetres, the sensor width, the shift values and the per-axis pixel aspect settings.

After doing some research on their meaning and fixing various bugs in the proposed conversion formulas, I came up with the following Python code to do the conversion from Blender to OpenCV:


import bpy  # Blender's Python API; this script is meant to run inside Blender

# get the relevant data
cam = bpy.data.objects["cameraName"].data
scene = bpy.context.scene
# assume image is not scaled
assert scene.render.resolution_percentage == 100
# assume angles describe the horizontal field of view
assert cam.sensor_fit != 'VERTICAL'

f_in_mm = cam.lens
sensor_width_in_mm = cam.sensor_width

w = scene.render.resolution_x
h = scene.render.resolution_y

pixel_aspect = scene.render.pixel_aspect_y / scene.render.pixel_aspect_x

f_x = f_in_mm / sensor_width_in_mm * w
f_y = f_x * pixel_aspect

# yes, shift_x is inverted. WTF blender?
c_x = w * (0.5 - cam.shift_x)
# and shift_y is still a percentage of width..
c_y = h * 0.5 + w * cam.shift_y

K = [[f_x, 0, c_x],
     [0, f_y, c_y],
     [0,   0,   1]]

So, to summarize the above code:

  • f_x is the focal length converted from millimetres to pixels via the sensor width and the render width,
  • f_y is f_x scaled by the pixel aspect ratio,
  • the principal point c_x uses an inverted shift_x,
  • and c_y applies shift_y as a fraction of the width rather than the height.

The reverse transform can now be derived trivially as


cam.shift_x = -(c_x / w - 0.5)
cam.shift_y = (c_y - 0.5 * h) / w

cam.lens = f_x / w * sensor_width_in_mm

pixel_aspect = f_y / f_x
scene.render.pixel_aspect_x = 1.0
scene.render.pixel_aspect_y = pixel_aspect


08 Nov 2018 5:12pm GMT

19 Oct 2018


Ubports at the Linux Piter conference

I'm happy (and thankful) for having been invited to speak at the Linux Piter conference in Saint Petersburg on November 2nd. I'll be talking about the Ubports project, which is the community-driven continuation of the Ubuntu Touch effort, driven by Canonical until April 7th, when the project was cancelled.

Demo of Ubuntu convergence in action

The conference talks will be in English and Russian, with simultaneous translation into the other language. The videos will appear a couple of weeks after the conference on the organization's YouTube channel, but in any case I will write a post here - unless, of course, something goes terribly wrong and I feel ashamed of my performance ;-). In order to minimize this risk, I won't be giving a live demo (at least, not before I finish going through my slides), but I'll take a couple of Ubports devices with me and people are very welcome to come to me and check them out.

As far as I've understood, most of the audience will not be very familiar with Linux-based mobile devices, but I guess that could turn into an advantage for me: no difficult questions, yay! ;-)
And I really hope that some member of the audience gets interested in the project and decides to become part of it. We'll see. :-)


19 Oct 2018 12:20pm GMT

07 Aug 2018


Doing It Right examples on autotools, qmake, cmake and meson

About

I finished my earlier work on build environment examples, illustrating how to do versioning of shared object files right with autotools, qmake, cmake and meson. You can find it here.

The DIR examples show, for various build environments, how to create a good project structure that builds libraries which are versioned with libtool (or have versioning equivalent to what libtool would deliver), ship a pkg-config file and carry a so-called API version in the library's name.

What is right?

Information on this can be found in the autotools mythbuster docs, the libtool docs on versioning and freeBSD's chapter on shared libraries. I tried to ensure that what is written here works with all of the build environments in the examples.

libpackage-4.3.so.2.1.0, what is what?

You'll notice that a library called 'package' will in your LIBDIR often be called something like libpackage-4.3.so.2.1.0

We call the 4.3 part the APIVERSION, and the 2.1.0 part the VERSION (the ABI version).

I will explain these examples using semantic versioning as APIVERSION and either libtool's current:revision:age or a semantic versioning alternative as the field for VERSION (like in FreeBSD, and for build environments where compatibility with libtool's -version-info feature isn't a requirement).

Noting that with libtool's -version-info feature the values that you fill in for current, age and revision will not necessarily be identical to what ends up as suffix of the soname in LIBDIR. The formula to form the filename's suffix is, for libtool, "(current - age).age.revision". This means that for soname libpackage-APIVERSION.so.2.1.0, you would need current=3, revision=0 and age=1.

The VERSION part

In case you want compatibility with or use libtool's -version-info feature, the document libtool/version.html on autotools.io states:

The rules of thumb, when dealing with these values are:

  • Increase the current value whenever an interface has been added, removed or changed.
  • Always increase the revision value.
  • Increase the age value only if the changes made to the ABI are backward compatible.

The libtool's -version-info feature's updating-version-info part of libtool's docs states:

  1. Start with version information of '0:0:0' for each libtool library.
  2. Update the version information only immediately before a public release of your software. More frequent updates are unnecessary, and only guarantee that the current interface number gets larger faster.
  3. If the library source code has changed at all since the last update, then increment revision ('c:r:a' becomes 'c:r+1:a').
  4. If any interfaces have been added, removed, or changed since the last update, increment current, and set revision to 0.
  5. If any interfaces have been added since the last public release, then increment age.
  6. If any interfaces have been removed or changed since the last public release, then set age to 0.

When you don't care about compatibility with libtool's -version-info feature, then you can take the following simplified rules for VERSION:

  • SOVERSION = Major version
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

Examples of when these simplified rules are (or can be) applicable are build environments like cmake, meson and qmake. When you use autotools you will be using libtool, and then they aren't applicable.

The APIVERSION part

For the API version I will use the rules from semver.org. You can also use the semver rules for your package's version:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

When you have an API, that API can change over time. You typically want to version those API changes so that the users of your library can adapt to newer versions of the API while, at the same time, other users still use older versions of it. For this we can follow section 4.3, called "multiple libraries versions", of the Autotools Mythbuster documentation. It states:

In this situation, the best option is to append part of the library's version information to the library's name, which is exemplified by GLib's libglib-2.0.so.0 soname. To do so, the declaration in the Makefile.am has to be like this:

lib_LTLIBRARIES = libtest-1.0.la

libtest_1_0_la_LDFLAGS = -version-info 0:0:0

The pkg-config file

Many people use many build environments (autotools, qmake, cmake, meson, you name it). Nowadays almost all of those build environments support pkg-config out of the box. Both for generating the file as for consuming the file for getting information about dependencies.

I consider it a necessity to ship a useful and correct pkg-config .pc file. The filename should be /usr/lib/pkgconfig/package-APIVERSION.pc for soname libpackage-APIVERSION.so.VERSION. In our example that means /usr/lib/pkgconfig/package-4.3.pc. We'd use the command pkg-config package-4.3 --cflags --libs, for example.

Examples are GLib's pkg-config file, located at /usr/lib/pkgconfig/glib-2.0.pc
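For illustration, a hypothetical package-4.3.pc for our running example could look roughly like this (a sketch only; the prefix, the Requires line and the Version value depend on your project):

prefix=/usr
libdir=${prefix}/lib
includedir=${prefix}/include

Name: package-4.3
Description: Example library
Version: 4.3.0
Libs: -L${libdir} -lpackage-4.3
Cflags: -I${includedir}/package-4.3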

The include path

I consider it a necessity to ship API headers in a location that differs per API version (like, for example, GLib's at /usr/include/glib-2.0). This means that your API version number must be part of the include path.

For example using earlier mentioned API-version 4.3, /usr/include/package-4.3 for /usr/lib/libpackage-4.3.so(.2.1.0) having /usr/lib/pkg-config/package-4.3.pc

What will the linker typically link with?

The linker will for -lpackage-4.3 typically link with /usr/lib/libpackage-4.3.so.2 or with libpackage-APIVERSION.so.(current - age). Noting that the part that is calculated as (current - age) in this example is often, for example in cmake and meson, referred to as the SOVERSION. With SOVERSION the soname template in LIBDIR is libpackage-APIVERSION.so.SOVERSION.
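If you want to double-check this on a built library, the soname is recorded in the ELF dynamic section, so you can inspect both sides with readelf (a quick sketch, using the example names from this post):

$ readelf -d libpackage-4.3.so.2.1.0 | grep SONAME
$ readelf -d your-program | grep NEEDED

The first command shows the soname the library advertises (libpackage-4.3.so.2 in our example); the second shows which sonames a consuming binary will look up at runtime.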

What is wrong?

Not doing any versioning

Without versioning you can't make any API or ABI changes without breaking all your users' code in ways that would be unmanageable for them. If you do decide not to do any versioning, then at least also don't put anything behind the .so part of your library's filename. That way, at least you won't break things in spectacular ways.

Coming up with your own versioning scheme

Thinking you know better than the rest of the world will, in spectacular ways, make everything you do break with what the entire rest of the world does. You shouldn't congratulate yourself for that. The only thing that can be said about it is that it probably makes little sense, and that others will probably start ignoring your work. Your mileage may vary. Keep in mind that without a correct SOVERSION, certain things will simply not work correctly.

In case of libtool: using your package's (semver) release numbering for current, revision, age

This is similarly wrong to 'Coming up with your own versioning scheme'.

The Libtool documentation on updating version info is clear about this:

Never try to set the interface numbers so that they correspond to the release number of your package. This is an abuse that only fosters misunderstanding of the purpose of library versions.

This basically means that once you are using libtool, also use libtool's versioning rules.

Refusing or forgetting to increase the current and/or SOVERSION on breaking ABI changes

The current part of the VERSION (current, revision and age) minus age, that is, the SOVERSION, is the most significant field. The current and age are usually involved in forming the so-called SOVERSION, which in turn is used by the linker to know which ABI version to link with. That makes it … damn important.

Some people think 'all this is just too complicated for me', 'I will just refuse to do anything and always release using the same version numbers'. That goes spectacularly wrong whenever you make ABI-incompatible changes. It's similarly wrong to 'Coming up with your own versioning scheme'.

That way, all programs that link with your shared library can easily crash after your shared library gets updated, can corrupt data, and might or might not work.

By updating the current and age, or the SOVERSION, you basically trigger the people who manage packages, and their tooling, to rebuild the programs that link with your shared library. You actually want that the moment you make breaking ABI changes in a newer version of it.

When you don't want to care about libtool's -version-info feature, then there is also a set of more simple to follow rules. Those rules are for VERSION:

  • SOVERSION = Major version (with these simplified set of rules, no subtracting of current with age is needed)
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

What isn't wrong?

Not using libtool (but nonetheless doing ABI versioning right)

GNU libtool was made to make certain things easier. Nowadays many popular build environments also make things easier. Meanwhile, GNU libtool has been around for a long time, and its versioning rules, commonly known as the current:revision:age field passed as parameter to -version-info, got widely adopted.

What GNU libtool did was, however, not really a standard. It is one interpretation of how to do it. And a rather complicated one, at that.

Please let it be crystal clear that not using libtool does not mean that you can do ABI versioning wrong. Because very often people seem to think that they can, and think they'll still get out safely while doing ABI versioning completely wrong. This is not the case.

Not having a APIVERSION at all

It isn't wrong not to have an APIVERSION in the soname. It does, however, mean that you promise never to break the API. Because the moment you break the API, you take away your users' option to stay on the old API a little longer. They might have programs that use the old API and programs that use the new one. Now what?

When you have an APIVERSION then you can allow the introduction of a new version of the API while simultaneously the old API remains available on a user's system.

Using a different naming-scheme for APIVERSION

I used the MAJOR.MINOR version numbers from semver to form the APIVERSION. I did this because only the MAJOR and the MINOR are technically involved in API changes (unless you are doing semantic versioning wrong - in which case see 'Coming up with your own versioning scheme').

Some projects only use MAJOR. An example is Qt, which puts the MAJOR number behind the Qt part, for example libQt5Core.so.VERSION (so that's "Qt" + MAJOR + Module). The GLib world, however, uses "g" + Module + "-" + MAJOR + ".0", as they have releases like 2.2, 2.3, 2.4 that are all called libglib-2.0.so.VERSION. I guess they figured that maybe someday in their 2.x series, they could use that MINOR field?

DBus seems to be using a similar thing to GLib, but then without the MINOR suffix: libdbus-1.so.VERSION. For their GLib integration they also use it as libdbus-glib-1.so.VERSION.

Who is right, who is wrong? It doesn't matter too much for your APIVERSION naming scheme. As long as there is a way to differentiate the API in a) the include path, b) the pkg-config filename and c) the library that will be linked with (the -l parameter during linking/compiling). Maybe someday a standard will be defined? Let's hope so.

Differences in interpretation per platform

FreeBSD

FreeBSD's Shared Libraries of Chapter 5. Source Tree Guidelines and Policies states:

The three principles of shared library building are:

  1. Start from 1.0
  2. If there is a change that is backwards compatible, bump minor number (note that ELF systems ignore the minor number)
  3. If there is an incompatible change, bump major number

For instance, added functions and bugfixes result in the minor version number being bumped, while deleted functions, changed function call syntax, etc. will force the major version number to change.

I think that when using libtool on FreeBSD (when you use autotools), the platform will provide a variant of libtool's scripts that converts the earlier-mentioned current, revision and age rules to FreeBSD's. The same goes for the VERSION variable in cmake and qmake. Meaning that with those three build environments, you can just use the rules for GNU libtool's -version-info.

I could be wrong on this, but I did find mailing list E-mails from ~ 2011 stating that this SNAFU is dealt with. Besides, the *BSD porters otherwise know what to do and you could of course always ask them about it.

Note that FreeBSD's rules are or seem to be compatible with the rules for VERSION when you don't want to care about libtool's -version-info compatibility. However, when you are porting from a libtoolized project, then of course you don't want to let newer releases break against releases that have already happened.

Modern Linux distributions

Nowadays you sometimes see things like /usr/lib/$ARCH/libpackage-APIVERSION.so linking to /lib/$ARCH/libpackage-APIVERSION.so.VERSION. I have no idea how this mechanism works. I suppose this is being done by packagers of various Linux distributions? I also don't know if there is a standard for this.

I will update the examples and this document the moment I know more and/or if upstream developers need to worry about it. I think that using GNUInstallDirs in cmake, for example, makes everything go right. I have not found much for this in qmake, meson seems to be doing this by default and in autotools you always use platform variables for such paths.

As usual, I hope standards will be made and that the build environment and packaging community comes to its senses and stops leaving this in the hands of developers. I especially think of qmake, which seems to have very little to say about requiring standardized installation paths (not even a proper way to define a prefix).

Questions that I can imagine already exist

Why is there a difference between APIVERSION and VERSION?

The API version is the version of your programmable interfaces. This means the version of your header files (if your programming language has such header files), the version of your pkgconfig file, the version of your documentation. The API is what software developers need to utilize your library.

The ABI version can definitely be different and it is what programs that are compiled and installable need to utilize your library.

An API breaks when recompiling a program that consumes a libpackage-4.3.so.2, without any changes to that program, is no longer going to succeed at compile time. The API is broken the moment any possible way the package's API was being used no longer compiles. Yes, any way. It means that a libpackage-5.0.so.0 should be started.

An ABI breaks when, without recompiling the program, replacing a libpackage-4.3.so.2.1.0 with a libpackage-4.3.so.2.2.0 or a libpackage-4.3.so.2.1.1 (or later) as libpackage-4.3.so.2 is not going to succeed at runtime: for example because it would crash, or because the results would be wrong (in any way). It implies that libpackage-4.3.so.2 shouldn't be overwritten; instead, a libpackage-4.3.so.3 should be started.

For example, when you change a function parameter in C from an integer to a floating point (or the other way around), that's an ABI change but not necessarily an API change.
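To make that example concrete, here is a small hypothetical sketch (the function name is made up and this is not taken from the example projects):

// In API 4.3 this function used to take an int; in this newer revision it
// takes a double instead.
void set_speed(double value) { /* ... */ }

int main() {
    set_speed(3);  // existing calling code still compiles, thanks to the
                   // implicit int-to-double conversion: the API survives
    return 0;
}

// A program that was compiled against the older, int-taking set_speed(),
// however, passes its argument differently at the machine level. Running it
// against the new library without recompiling is the ABI break, and it is
// what requires bumping current / the SOVERSION.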

What is this SOVERSION about?

In most projects that got ported from an environment that uses GNU libtool (for example autotools) to for example cmake or meson, and in the rare cases that they did anything at all in a qmake based project, I saw people converting the current, revision and age parameters that they passed to the -version-info option of libtool to a string concatenated together using (current - age), age, revision as VERSION, and (current - age) as SOVERSION.

I wanted to use the exact same rules for versioning for all these examples, including autotools and GNU libtool. When you don't have to (or want to) care about libtool's set of (for some people, needlessly complicated) -version-info rules, then it should be fine using just SOVERSION and VERSION using these rules:

  • SOVERSION = Major version
  • Major version: increase it if you break ABI compatibility
  • Minor version: increase it if you add ABI compatible features
  • Patch version: increase it for bug fix releases.

I, however, also sometimes saw variations that are incomprehensible with little explanation and magic foo invented on the spot. Those variations are probably wrong.

In the examples I made it so that you can change the numbers, and the calculation of the numbers, in the root build file of the project. However, do follow the rules for those correctly, as this versioning is about ABI compatibility. Doing this wrong can make things blow up in spectacular ways.

The examples

qmake in the qmake-example

Note that the VERSION variable must be filled in as "(current - age).age.revision" for qmake (to get 2.1.0 at the end, you need VERSION=2.1.0 when current=3, revision=0 and age=1)

To try this example out, go to the qmake-example directory and type

$ cd qmake-example
$ mkdir _test
$ qmake PREFIX=$PWD/_test
$ make
$ make install

This should give you this:

$ find _test/
_test/
├── include
│   └── qmake-example-4.3
│       └── qmake-example.h
└── lib
    ├── libqmake-example-4.3.so -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2 -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2.1 -> libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.3.so.2.1.0
    ├── libqmake-example-4.la
    └── pkgconfig
        └── qmake-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I'm replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config qmake-example-4.3 --cflags
-I$PWD/_test/include/qmake-example-4.3
$ pkg-config qmake-example-4.3 --libs
-L$PWD/_test/lib -lqmake-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment).

$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ echo -en "#include <qmake-example.h>\nmain() {} " > test.cpp
$ g++ -fPIC test.cpp -o test.o `pkg-config qmake-example-4.3 --libs --cflags`

You can see that it got linked to libqmake-example-4.3.so.2, where that 2 at the end is (current - age).

$ ldd test.o
    linux-gate.so.1 (0xb77b0000)
    libqmake-example-4.3.so.2 => $PWD/_test/lib/libqmake-example-4.3.so.2 (0xb77a6000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb75f5000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb759e000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb7580000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73c9000)
    /lib/ld-linux.so.2 (0xb77b2000)

cmake in the cmake-example

Note that the VERSION property on your library target must be filled in with "(current - age).age.revision" for cmake (to get 2.1.0 at the end, you need VERSION=2.1.0 when current=3, revision=0 and age=1. Note that in cmake you must also fill in the SOVERSION property as (current - age), so SOVERSION=2 when current=3 and age=1).

To try this example out, go to the cmake-example directory and do

$ cd cmake-example
$ mkdir _test
$ cmake -DCMAKE_INSTALL_PREFIX:PATH=$PWD/_test
-- Configuring done
-- Generating done
-- Build files have been written to: .
$ make
[ 50%] Building CXX object src/libs/cmake-example/CMakeFiles/cmake-example.dir/cmake-example.cpp.o
[100%] Linking CXX shared library libcmake-example-4.3.so
[100%] Built target cmake-example
$ make install
[100%] Built target cmake-example
Install the project...
-- Install configuration: ""
-- Installing: $PWD/_test/lib/libcmake-example-4.3.so.2.1.0
-- Up-to-date: $PWD/_test/lib/libcmake-example-4.3.so.2
-- Up-to-date: $PWD/_test/lib/libcmake-example-4.3.so
-- Up-to-date: $PWD/_test/include/cmake-example-4.3/cmake-example.h
-- Up-to-date: $PWD/_test/lib/pkgconfig/cmake-example-4.3.pc

This should give you this:

$ tree _test/
_test/
├── include
│   └── cmake-example-4.3
│       └── cmake-example.h
└── lib
    ├── libcmake-example-4.3.so -> libcmake-example-4.3.so.2
    ├── libcmake-example-4.3.so.2 -> libcmake-example-4.3.so.2.1.0
    ├── libcmake-example-4.3.so.2.1.0
    └── pkgconfig
        └── cmake-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I'm replacing the current path with $PWD in the output each time):

$ pkg-config cmake-example-4.3 --cflags
-I$PWD/_test/include/cmake-example-4.3
$ pkg-config cmake-example-4.3 --libs
-L$PWD/_test/lib -lcmake-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -en "#include <cmake-example.h>\nmain() {} " > test.cpp
$ g++ -fPIC test.cpp -o test.o `pkg-config cmake-example-4.3 --libs --cflags`

You can see that it got linked to libcmake-example-4.3.so.2, where that 2 at the end is the SOVERSION. This is (current - age).

$ ldd test.o
    linux-gate.so.1 (0xb7729000)
    libcmake-example-4.3.so.2 => $PWD/_test/lib/libcmake-example-4.3.so.2 (0xb771f000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb756e000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb7517000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74f9000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7342000)
    /lib/ld-linux.so.2 (0xb772b000)

autotools in the autotools-example

Note that you pass -version-info current:revision:age directly with autotools. The libtool will translate that to (current - age).age.revision to form the so's filename (to get 2.1.0 at the end, you need current=3, revision=0, age=1).

To try this example out, go to the autotools-example directory and do

$ cd autotools-example
$ mkdir _test
$ libtoolize
$ aclocal
$ autoheader
$ autoconf
$ automake --add-missing
$ ./configure --prefix=$PWD/_test
$ make
$ make install

This should give you this:

$ tree _test/
_test/
├── include
│   └── autotools-example-4.3
│       └── autotools-example.h
└── lib
    ├── libautotools-example-4.3.a
    ├── libautotools-example-4.3.la
    ├── libautotools-example-4.3.so -> libautotools-example-4.3.so.2.1.0
    ├── libautotools-example-4.3.so.2 -> libautotools-example-4.3.so.2.1.0
    ├── libautotools-example-4.3.so.2.1.0
    └── pkgconfig
        └── autotools-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I'm replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/pkgconfig
$ pkg-config autotools-example-4.3 --cflags
-I$PWD/_test/include/autotools-example-4.3
$ pkg-config autotools-example-4.3 --libs
-L$PWD/_test/lib -lautotools-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -en "#include <autotools-example.h>\nmain() {} " > test.cpp
$ export LD_LIBRARY_PATH=$PWD/_test/lib
$ g++ -fPIC test.cpp -o test.o `pkg-config autotools-example-4.3 --libs --cflags`

You can see that it got linked to libautotools-example-4.3.so.2, where that 2 at the end is (current - age).

$ ldd test.o
    linux-gate.so.1 (0xb778d000)
    libautotools-example-4.3.so.2 => $PWD/_test/lib/libautotools-example-4.3.so.2 (0xb7783000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb75d2000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb757b000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb755d000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb73a6000)
    /lib/ld-linux.so.2 (0xb778f000)

meson in the meson-example

Note that the version property on your library target must be filled in with "(current - age).age.revision" for meson (to get 2.1.0 at the end, you need version=2.1.0 when current=3, revision=0 and age=1. Note that in meson you must also fill in the soversion property as (current - age), so soversion=2 when current=3 and age=1).

To try this example out, go to the meson-example directory and do

$ cd meson-example
$ mkdir -p _build/_test
$ cd _build
$ meson .. --prefix=$PWD/_test
$ ninja
$ ninja install

This should give you this:

$ tree _test/
_test/
├── include
│   └── meson-example-4.3
│       └── meson-example.h
└── lib
    └── i386-linux-gnu
        ├── libmeson-example-4.3.so -> libmeson-example-4.3.so.2.1.0
        ├── libmeson-example-4.3.so.2 -> libmeson-example-4.3.so.2.1.0
        ├── libmeson-example-4.3.so.2.1.0
        └── pkgconfig
            └── meson-example-4.3.pc

When you now use pkg-config, you get a nice CFLAGS and LIBS line back (I'm replacing the current path with $PWD in the output each time):

$ export PKG_CONFIG_PATH=$PWD/_test/lib/i386-linux-gnu/pkgconfig
$ pkg-config meson-example-4.3 --cflags
-I$PWD/_test/include/meson-example-4.3
$ pkg-config meson-example-4.3 --libs
-L$PWD/_test/lib -lmeson-example-4.3

And it means that you can do things like this now (and people who know about pkg-config will now be happy to know that they can use your library in their own favorite build environment):

$ echo -en "#include <meson-example.h>\nmain() {} " > test.cpp
$ export LD_LIBRARY_PATH=$PWD/_test/lib/i386-linux-gnu
$ g++ -fPIC test.cpp -o test.o `pkg-config meson-example-4.3 --libs --cflags`

You can see that it got linked to libmeson-example-4.3.so.2, where that 2 at the end is the soversion. This is (current - age).

$ ldd test.o
    linux-gate.so.1 (0xb772e000)
    libmeson-example-4.3.so.2 => $PWD/_test/lib/i386-linux-gnu/libmeson-example-4.3.so.2 (0xb7724000)
    libstdc++.so.6 => /usr/lib/i386-linux-gnu/libstdc++.so.6 (0xb7573000)
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xb751c000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xb74fe000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xb7347000)
    /lib/ld-linux.so.2 (0xb7730000)


07 Aug 2018 2:30pm GMT

11 Jul 2018


Doing it right, making libraries using popular build environments

Enough with the political posts!

Making libraries that are both API and libtool versioned with qmake, how do they do it?

I started a project on github that will collect what I will call "doing it right" project structures for various build environments.

By right I mean that the library will have an API version in its library name, that the library will be libtoolized, and that a pkg-config .pc file gets installed for it.

I have in mind, for example, autotools, cmake, meson, qmake and plain make. The first example that I have finished is the one for qmake.

Let's get started working on a libqmake-example-3.2.so.3.2.1

We get the PREFIX, MAJOR_VERSION, MINOR_VERSION and PATCH_VERSION from a project-wide include

include(../../../qmake-example.pri)

We will use the standard lib template of qmake

TEMPLATE = lib

We need to set VERSION to a semver.org version for compile_libtool (in reality it should use what is called current, revision and age to form an API and ABI version number. In the actual example it's explained in the comments, as this is too much for a small blog post).

VERSION = $${MAJOR_VERSION}"."$${MINOR_VERSION}"."$${PATCH_VERSION}

According to section 4.3 of Autotools Mythbuster, we should include the API version in the library's name via the target name

TARGET = qmake-example-$${MAJOR_VERSION}"."$${MINOR_VERSION}

We will write a define in config.h for access to the semver.org version as a double quoted string

QMAKE_SUBSTITUTES += config.h.in
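The generated config.h will then contain the version as a double-quoted string, roughly like this (a sketch; the macro name is made up, and the config.h.in template in the actual example project is authoritative for the exact QMAKE_SUBSTITUTES syntax):

#define QMAKE_EXAMPLE_VERSION "3.2.1"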

Our example happens to use QDebug, so we need QtCore here

QT = core

This is of course optional

CONFIG += c++14

We will be using libtool style libraries

CONFIG += compile_libtool
CONFIG += create_libtool

These will create a pkg-config .pc file for us

CONFIG += create_pc create_prl no_install_prl

Project sources

SOURCES = qmake-example.cpp

Project's public and private headers

HEADERS = qmake-example.h

We will install the headers in a API specific include path

headers.path = $${PREFIX}/include/qmake-example-$${MAJOR_VERSION}"."$${MINOR_VERSION}

Here put only the publicly installed headers

headers.files = $${HEADERS}

Here we will install the library to

target.path = $${PREFIX}/lib

This is the configuration for generating the pkg-config file

QMAKE_PKGCONFIG_NAME = $${TARGET}
QMAKE_PKGCONFIG_DESCRIPTION = An example that illustrates how to do it right with qmake
# This is our libdir
QMAKE_PKGCONFIG_LIBDIR = $$target.path
# This is where our API specific headers are
QMAKE_PKGCONFIG_INCDIR = $$headers.path
QMAKE_PKGCONFIG_DESTDIR = pkgconfig
QMAKE_PKGCONFIG_PREFIX = $${PREFIX}
QMAKE_PKGCONFIG_VERSION = $$VERSION
# These are dependencies that our library needs
QMAKE_PKGCONFIG_REQUIRES = Qt5Core

Installation targets (the pkg-config file seems to be installed automatically)

INSTALLS += headers target

This will be the result after make-install

├── include
│   └── qmake-example-3.2
│       └── qmake-example.h
└── lib
    ├── libqmake-example-3.2.so -> libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.2.so.3 -> libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.2.so.3.2 -> libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.2.so.3.2.1
    ├── libqmake-example-3.la
    └── pkgconfig
        └── qmake-example-3.pc

ps. Dear friends working at your own customers: when I visit your customer, I no longer want to see that you produced completely, stupidly wrong qmake-based projects for them. Libtoolize it all, get an API version in your library's so-name, and do distribute a pkg-config .pc file. That's the very least needed to pass your exam. Also read this document (and stop pretending that you don't need to know this while at the same time charging them real money, pretending that you know something about modern UNIX software development).


11 Jul 2018 10:25pm GMT