17 Oct 2018


Michal Čihař: wlc 0.9

wlc 0.9, a command line utility for Weblate, has just been released. There are several new commands, such as translation file upload and repository cleanup. The codebase has also been migrated to use requests instead of urllib.

Full list of changes:

wlc is built on the API introduced in Weblate 2.6, which is still under development; you need at least Weblate 2.10 (or use our hosted offering). You can find usage examples in the wlc documentation.
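
To give a flavour of what this looks like in use, here is a minimal sketch driving the API through the wlc Python module; the Weblate class and the low-level get() helper follow the examples in the wlc documentation, so treat the exact names and parameters as assumptions and check the docs linked above.

import wlc

# Hypothetical credentials: point these at your Weblate instance and API key.
weblate = wlc.Weblate(key='YOUR-API-KEY', url='https://hosted.weblate.org/api/')

# Low-level GET against the REST API; assumed to return the decoded JSON response.
print(weblate.get('projects'))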

Filed under: Debian English SUSE Weblate

17 Oct 2018 3:00pm GMT

16 Oct 2018


Matthew Garrett: Initial thoughts on MongoDB's new Server Side Public License

MongoDB just announced that they were relicensing under their new Server Side Public License. This is basically the Affero GPL except with section 13 largely replaced with new text, as follows:

If you make the functionality of the Program or a modified version available to third parties as a service, you must make the Service Source Code available via network download to everyone at no charge, under the terms of this License. Making the functionality of the Program or modified version available to third parties as a service includes, without limitation, enabling third parties to interact with the functionality of the Program or modified version remotely through a computer network, offering a service the value of which entirely or primarily derives from the value of the Program or modified version, or offering a service that accomplishes for users the primary purpose of the Software or modified version.

"Service Source Code" means the Corresponding Source for the Program or the modified version, and the Corresponding Source for all programs that you use to make the Program or modified version available as a service, including, without limitation, management software, user interfaces, application program interfaces, automation software, monitoring software, backup software, storage software and hosting software, all such that a user could run an instance of the service using the Service Source Code you make available.


MongoDB admit that this license is not currently open source in the sense of being approved by the Open Source Initiative, but say: "We believe that the SSPL meets the standards for an open source license and are working to have it approved by the OSI."

At the broadest level, AGPL requires you to distribute the source code to the AGPLed work[1] while the SSPL requires you to distribute the source code to everything involved in providing the service. Having a license place requirements around things that aren't derived works of the covered code is unusual but not entirely unheard of - the GPL requires you to provide build scripts even if they're not strictly derived works, and you could probably make an argument that the anti-Tivoisation provisions of GPL3 fall into this category.

A stranger point is that you're required to provide all of this under the terms of the SSPL. If you have any code in your stack that can't be released under those terms then it's literally impossible for you to comply with this license. I'm not a lawyer, so I'll leave it up to them to figure out whether this means you're now only allowed to deploy MongoDB on BSD because the license would require you to relicense Linux away from the GPL. This feels sloppy rather than deliberate, but if it is deliberate then it's a massively greater reach than any existing copyleft license.

You can definitely make arguments that this is just a maximalist copyleft license, the AGPL taken to extreme, and therefore it fits the open source criteria. But there's a point where something is so far from the previously accepted scenarios that it's actually something different, and should be examined as a new category rather than forced into already approved categories. I suspect that this license has been written to conform to a strict reading of the Open Source Definition, and that any attempt by OSI to declare it as not being open source will receive pushback. But definitions don't exist to be weaponised against the communities that they seek to protect, and a license that has overly onerous terms should be rejected even if that means changing the definition.

In general I am strongly in favour of licenses ensuring that users have the freedom to take advantage of modifications that people have made to free software, and I'm a fan of the AGPL. But my initial feeling is that this license is a deliberate attempt to make it practically impossible to take advantage of the freedoms that the license nominally grants, and this impression is strengthened by it being something that's been announced with immediate effect rather than something that's been developed with community input. I think there's a bunch of worthwhile discussion to have about whether the AGPL is strong and clear enough to achieve its goals, but I don't think that this SSPL is the answer to that - and I lean towards thinking that it's not a good faith attempt to produce a usable open source license.

(It should go without saying that this is my personal opinion as a member of the free software community, and not that of my employer)

[1] There's some complexities around GPL3 code that's incorporated into the AGPLed work, but if it's not part of the AGPLed work then it's not covered


16 Oct 2018 10:43pm GMT

Reproducible builds folks: Reproducible Builds: Weekly report #181

Here's what happened in the Reproducible Builds effort between Sunday October 7 and Saturday October 13 2018:

Another brief reminder that the next Reproducible Builds summit will be taking place between 11th-13th December 2018 in Mozilla's offices in Paris. If you are interested in attending, please send an email to holger@layer-acht.org. More details can also be found on the corresponding event page of our website.

diffoscope development

diffoscope (our in-depth "diff-on-steroids" utility which helps us diagnose reproducibility issues in packages) was updated this week, including contributions from:

Packages reviewed and fixed, and bugs filed

Test framework development

There were a large number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org by Holger Levsen this month, including:

In addition, Mattia Rizzolo performed some node administration (1, 2).

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

16 Oct 2018 8:23pm GMT

Julien Danjou: More GitHub workflow automation


The more you use computers, the more you see the potential for automating everything. Who doesn't love that? While building Mergify over these last months, we decided it was time to bring more automation to the development workflow.

Mergify's first version was a minimum viable product built around automating the merge of pull requests. As I wrote a few months ago, we wanted to merge pull requests automatically once they were ready. For most projects, this is easy and consists of a simple rule: "it must be approved by a developer and pass the CI".

Evolving on Feedback

For the first few months, we received a lot of feedback from our users. They were enthusiastic about the product but were frustrated by a couple of things.

First, Mergify would mess with branch protections. We thought that people wanted the GitHub UI to match their rules; as I'll explain later, that turned out to be only partially true, and we found a workaround.

Then, Mergify's abilities were capped by some of the limitations of the GitHub workflow and API. For example, GitHub would only allow rules per branch, whereas our users wanted to have rules applied based on a lot of different criteria.

Building the Next Engine

We rolled up our sleeves and started to build that new engine. The first thing was to get rid of the GitHub branch protection feature altogether and leverage the Checks API to render something useful to users. You now get a complete overview in the UI of the rules that will be applied to your pull requests, making it easy to understand what's happening.


Then, we wrote a new matching engine that can match any pull request based on any of its attributes. You can now automate your workflow with a finer-grained configuration.

What Does It Look Like?

Here's a simple rule you could write:

pull_request_rules:
  - name: automatic merge on approval and CI pass
    conditions:
     - "#approved-reviews-by>=1"
     - status-success=continuous-integration/travis-ci/pr
     - label!=work-in-progress
    actions:
      merge:
        method: merge

With that, any pull request that has been approved by a collaborator, passes the Travis CI job and does not have the label work-in-progress will be automatically merged by Mergify.

You can use even more actions to backport the pull request to another branch, close it, or add/remove labels, as sketched below. We're starting to see users building amazing workflows with that engine!
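
For example, here is a sketch of what such rules could look like; the branch name and labels are made up, and the exact action parameters should be checked against the Mergify documentation:

pull_request_rules:
  - name: backport merged pull requests to the stable branch
    conditions:
      - merged
      - label=backport-stable
    actions:
      backport:
        branches:
          - stable
  - name: remove the work-in-progress label when CI passes
    conditions:
      - status-success=continuous-integration/travis-ci/pr
    actions:
      label:
        remove:
          - work-in-progress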

We're thrilled by this new version we launched this week and glad we're getting amazing feedback (again) from our users.

When you give it a try, drop me a note and let me know what you think about it!

16 Oct 2018 12:39pm GMT

15 Oct 2018


Michal Čihař: uTidylib 0.4

Two years ago, I took over uTidylib maintainership. Two years have passed without any bigger contribution, but today there is a new version with support for recent html-tidy and Python 3.

The release still can't be uploaded to PyPI (see https://github.com/pypa/warehouse/issues/4860), but it's available for download from my website or tagged in the GitHub repository.

The full list of changes is quite small:

Anyway, as I cannot update the PyPI entry, the downloads are currently available only on my website: https://cihar.com/software/utidylib/
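
For anyone who has not used the library before, here is a minimal usage sketch; it assumes the tidy.parseString() entry point and html-tidy option names as shown in the uTidylib documentation, so double-check them there.

import tidy

# Some deliberately broken markup to clean up.
markup = "<html><body><p>unclosed paragraph<br></body></html>"

# Keyword arguments map to html-tidy configuration options; treat these as illustrative.
document = tidy.parseString(markup, output_xhtml=1, indent=1, tidy_mark=0)

print(document)         # the cleaned-up markup
print(document.errors)  # warnings and errors reported by html-tidy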

Filed under: Debian English SUSE uTidylib

15 Oct 2018 2:30pm GMT

Robert McQueen: Flatpaks, sandboxes and security

Last week the Flatpak community woke to the "news" that we are making the world a less secure place and we need to rethink what we're doing. Personally, I'm not sure this is a fair assessment of the situation. The "tl;dr" summary is: Flatpak confers many benefits besides the sandboxing, and even looking just at the sandboxing, improving app security is a huge problem space and so is a work in progress across multiple upstream projects. Much of what has been achieved so far already delivers incremental improvements in security, and we're making solid progress on the wider app distribution and portability problem space.

Sandboxing, like security in general, isn't a binary thing - you can't just say because you have a sandbox, you have 100% security. Like having two locks on your front door, two front doors, or locks on your windows too, sensible security is about defense in depth. Each barrier that you implement precludes some invalid or possibly malicious behaviour. You hope that in total, all of these barriers would prevent anything bad, but you can never really guarantee this - it's about multiplying together probabilities to get a smaller number. A computer which is switched off, in a locked faraday cage, with no connectivity, is perfectly secure - but it's also perfectly useless because you cannot actually use it. Sandboxing is very much the same - whilst you could easily take systemd-nspawn, Docker or any other container technology of choice and 100% lock down a desktop app, you wouldn't be able to interact with it at all.

Network services have incubated and driven most of the container usage on Linux up until now but they are fundamentally different to desktop applications. For services you can write a simple list of permissions like, "listen on this network port" and "save files over here" whereas desktop applications have a much larger number of touchpoints to the outside world which the user expects and requires for normal functionality. Just thinking off the top of my head you need to consider access to the filesystem, display server, input devices, notifications, IPC, accessibility, fonts, themes, configuration, audio playback and capture, video playback, screen sharing, GPU hardware, printing, app launching, removable media, and joysticks. Without making holes in the sandbox to allow access to these in to your app, it either wouldn't work at all, or it wouldn't work in the way that people have come to expect.

What Flatpak brings to this is understanding of the specific desktop app problem space - most of what I listed above is to a greater or lesser extent understood by Flatpak, or support is planned. The Flatpak sandbox is very configurable, allowing the application author to specify which of these resources they need access to. The Flatpak CLI asks the user about these during installation, and we provide the flatpak override command to allow the user to add or remove these sandbox escapes. Flatpak has introduced portals into the Linux desktop ecosystem, which we're really pleased to be sharing with snap since earlier this year, to provide runtime access to resources outside the sandbox based on policy and user consent. For instance, document access, app launching, input methods and recursive sandboxing ("sandbox me harder") have portals.

The starting security position on the desktop was quite terrible - anything in your session had basically complete access to everything belonging to your user, and many places to hide.

Even with these caveats, Flatpak brings a bunch of default sandboxing - IPC filtering, a new filesystem, process and UID namespace, seccomp filtering, an immutable /usr and /app - and each of these is already a barrier to certain attacks.

Looking at the specific concerns raised:

Zooming out a little bit, I think it's worth also highlighting some of the other reasons why Flatpak exists at all - these are far bigger problems with the Linux desktop ecosystem than app security alone, and Flatpak brings a huge array of benefits to the table:

Nobody is trying to claim that Flatpak solves all of the problems at once, or that what we have is anywhere near perfect or completely secure, but I think what we have is pretty damn cool (I just wish we'd had it 10 years ago!). Even just in the security space, the overall effort we need is huge, but this is a journey that we are happy to be embarking together with the whole Linux desktop community. Thanks for reading, trying it out, and lending us a hand.

15 Oct 2018 1:40pm GMT

Lars Wirzenius: Rewrote summain from Python to Rust

I've been learning Rust lately. As part of that, I rewrote my summain program from Python to Rust (see summainrs). It's not quite a 1:1 rewrite: the Python version outputs RFC822-style records, the Rust one uses YAML. The Rust version is my first attempt at using multithreading, something I never added to the Python version.

Results:

A nice speed improvement, I think. Especially since the difference between the single-threaded and multithreaded versions of the Rust program is four characters (par_iter instead of iter in the process_chunk function).

15 Oct 2018 8:13am GMT

Louis-Philippe Véronneau: A Good Harvest of the Devil's Lettuce

Hop cones laid out for drying

You might have heard that Canada's legalising marijuana in 2 days. Even though I think it's a pretty good idea, this post is not about pot, but about another type of Devil's Lettuce: hops.

As we all know, homebrewing beer is a gateway into growing hops, a highly suspicious activity that attracts only marginals and deviants. Happy to say I've been successfully growing hops for two years now and this year's harvest has been bountiful.

Two years ago, I planted two hop plants, one chinook and one triple pearl. A year prior to this I had tried to grow a cascade plant in a container on my balcony, but it didn't work out well. This time I got around to planting the rhizomes in the ground under my balcony and had the bines grow on ropes.

Although I've been having trouble with the triple pearl (the soil where I live is thick and heavy clay - not the best for hops), the chinook has been growing pretty well.

Closeup of my chinook hops on the bines

Harvest time is always fun, and before taking the bines down I didn't know how many cones I would get this year. Compared to last year, I'd say I tripled my yield. With some luck (and better soil), I should be able to get my triple pearl to produce cones next year.

Here's a nice poem about the usefulness of hops, written by Thomas Tusser in 1557:

      The hop for his profit I thus do exalt,
      It strengtheneth drink and it flavoureth malt;
      And being well-brewed long kept it will last,
      And drawing abide, if ye draw not too fast.

So remember kids, don't drink and upload and if you decide to grow some of the Devil's Lettuce, make sure you use it to flavoureth malt and not your joint. The ones waging war on drugs might not like it.

15 Oct 2018 3:30am GMT

14 Oct 2018


Dirk Eddelbuettel: RcppCCTZ 0.2.5

A new bugfix release 0.2.5 of RcppCCTZ got onto CRAN this morning - just a good week after the previous release.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries. One for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do, but they decided in their infinite wisdom to copy the sources yet again into their packages. Sigh.

This version corrects two bugs. We were not properly accounting for those poor systems that do not natively have nanosecond resolution. And I missed a feature in the Rcpp DatetimeVector class by not setting the timezone on newly created variables; this too has been fixed.

Changes in version 0.2.5 (2018-10-14)

  • Parsing to Datetime was corrected on systems that do not have nanosecond support in C++11 chrono (#28).

  • DatetimeVector objects are now created with their timezone attribute when available.

  • The toTz function is now vectorized (#29).

  • More unit tests were added, and some conditioning on Solaris (mostly due to missing timezone info) was removed.

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 Oct 2018 11:04pm GMT

Jeremy Bicha: Google Cloud Print in Ubuntu

There is an interesting hidden feature available in Ubuntu 18.04 LTS and newer. To enable this feature, first install cpdb-backend-gcp.

sudo apt install cpdb-backend-gcp

Make sure you are signed in to Google with GNOME Online Accounts. Open the Settings app (gnome-control-center) to the Online Accounts page. If your Google account is near the top above the Add an account section, then you're all set.

Currently, only LibreOffice is supported. Hopefully, for 19.04, other GTK+ apps will be able to use the feature.

This feature was developed by Nilanjana Lodh and Abhijeet Dubey when they were Google Summer of Code 2017 participants. Their mentors were Till Kamppeter, Aveek Basu, and Felipe Borges.

Till has been trying to get this feature installed by default in Ubuntu since 18.04 LTS, but it looks like it won't make it in until 19.04.

I haven't seen this feature packaged in any other Linux distros yet. That might be because people don't know about this feature, so that's why I'm posting about it today! If you are a distro packager, the 3 packages you need are cpdb-libs, cpdb-backend-gcp, and cpdb-backend-cups. The final package enables easy printing to any IPP printer. (I didn't mention it earlier because I believe Ubuntu 18.04 LTS already supports that feature through a different package.)

Save to Google Drive

In my original blog post, I confused the cpdb feature with a feature that already exists in GTK3 built with GNOME Online Accounts support. This should already work on most distros.

When you print a document, there will be an extra Save to Google Drive option. Saving to Google Drive saves a PDF of your document to your Google Drive account.

This post was edited on October 16 to mention that cpdb only supports LibreOffice now and that Save to Google Drive is a GTK3 feature instead.

October 17: Please see Felipe's comments. It turns out that even Google Cloud Print works fine in distros with recent GTK3. The point of the cpdb feature is to make this work in apps that don't use GTK3. So I guess the big benefit now is that you can use Google Cloud Print or Save to Google Drive from LibreOffice.

14 Oct 2018 2:31pm GMT

13 Oct 2018


Julian Andres Klode: The demise of G+ and return to blogging (w/ mastodon integration)

I'm back to blogging, after shutting down my wordpress.com hosted blog in spring. This time, fully privacy aware, self hosted, and integrated with mastodon.

Let's talk details: In spring, I shut down my wordpress.com hosted blog due to concerns about GDPR implications with comment hosting and ads and stuff. I'd like to apologize for using that; back when I did this (in 2007), it was the easiest way to get into blogging. Please forgive me for subjecting you to that!

Recently, Google announced the end of Google+. As some of you might know, I posted a lot of medium-long posts there, rather than doing blog posts; especially after I disabled the wordpress site.

With the end of Google+, I want to try something new: I'll host longer pieces on this blog, and post shorter messages on @juliank@mastodon.social. If you follow the Mastodon account, you will see toots for each new blog post as well, linking to the blog post.

Mastodon integration and privacy

Now comes the interesting part: If you reply to the toot, your reply will be shown on the blog itself. This works with a tiny bit of JavaScript that talks to a simple server-side script, which finds toots from me mentioning the blog post and then collects the replies to that toot.
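
Not the actual script used here, but a minimal sketch of how such a server-side lookup can work against the standard Mastodon API; the instance URL and toot ID below are hypothetical.

import requests

INSTANCE = "https://mastodon.social"
TOOT_ID = "100000000000000000"  # hypothetical ID of the toot announcing the post

def fetch_replies(instance=INSTANCE, toot_id=TOOT_ID):
    # /api/v1/statuses/:id/context returns the ancestors and descendants of a status.
    response = requests.get(f"{instance}/api/v1/statuses/{toot_id}/context", timeout=10)
    response.raise_for_status()
    # Keep only the fields the blog needs; content is HTML and must be sanitised before display.
    return [
        {
            "author": status["account"]["display_name"],
            "avatar": status["account"]["avatar"],
            "content": status["content"],
        }
        for status in response.json()["descendants"]
    ]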

This protects your privacy, because mastodon.social does not see which blog post you are looking at, because it is contacted by the server, not by you. Rendering avatars requires loading images from mastodon.social's file server, however - to improve your privacy, all avatars are loaded with referrerpolicy='no-referrer', so assuming your browser is half-way sane, it should not be telling mastodon.social which post you visited either. In fact, the entire domain also sets Referrer-Policy: no-referrer as an http header, so any link you follow will not have a referrer set.

The integration was originally written by @bjoern@mastodon.social - I have done some moderate improvements to adapt it to my theme, make it more reusable, and replace and extend the caching done in a JSON file with a Redis database.

Source code

This blog is free software; generated by the Hugo snap. All source code for it is available:

(Yes I am aware that hosting the repositories on GitHub is a bit ironic given the whole focus on privacy and self-hosting).

The theme makes use of Hugo pipes to minify and fingerprint JavaScript, and vendorizes all dependencies instead of embedding CDN links, to, again, protect your privacy.

Future work

I think I want to make the theme dark, to be more friendly to the eyes. I also might want to make the Mastodon integration a bit more friendly to use. And I want to get rid of jQuery; it's only used for a handful of calls in the Mastodon integration JavaScript.

If you have any other idea for improvements, feel free to join the conversation in the mastodon toot, send me an email, or open an issue at the github projects.

Closing thoughts

I think the end of Google+ will be an interesting time, requiring a lot of people in the open source world to replace one of their main communication channels with a different approach.

Mastodon and Diaspora are both in the race, and I fear the community will split or everyone will have two accounts in the end. I personally think that Mastodon + syndicated blogs provide a good balance: You can quickly write short posts (up to 500 characters), and you can host long articles on your own and link to them.

I hope that one day diaspora* and mastodon federate together. If we end up with one federated network that would be the best outcome.

13 Oct 2018 9:03pm GMT

Ingo Juergensmann: Xen & Databases

I'm running PostgreSQL and MySQL on my server; both serve different databases to Wordpress, Drupal, Piwigo, Friendica, Mastodon, whatever...

In the past the databases were colocated in my mailserver VM, whereas the webserver was running on a different VM. At some point I moved the databases from domU to dom0, maybe because I thought that the databases would be faster running with direct disk I/O in the dom0 environment, but I can't remember the exact reasons anymore.

However, in the meantime the size of the databases grew and the number of the VMs did, too. MySQL and PostgreSQL are both configured/optimized to run with 16 GB of memory in dom0, but in recent months I have experienced high disk I/O, especially for MySQL, and because of that slow I/O performance in all the domU VMs.

Currently iotop shows something like this:

Total DISK READ : 131.92 K/s | Total DISK WRITE : 1546.42 K/s
Actual DISK READ: 131.92 K/s | Actual DISK WRITE: 2.40 M/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
6424 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 60.90 % mysqld
18536 be/4 mysql 43.97 K/s 80.62 K/s 0.00 % 35.59 % mysqld
6499 be/4 mysql 0.00 B/s 29.32 K/s 0.00 % 13.18 % mysqld
20117 be/4 mysql 0.00 B/s 3.66 K/s 0.00 % 12.30 % mysqld
6482 be/4 mysql 0.00 B/s 0.00 B/s 0.00 % 10.04 % mysqld
6495 be/4 mysql 0.00 B/s 3.66 K/s 0.00 % 10.02 % mysqld
20144 be/4 postgres 0.00 B/s 73.29 K/s 0.00 % 4.87 % postgres: hubzilla hubzi~
2920 be/4 postgres 0.00 B/s 1209.28 K/s 0.00 % 3.52 % postgres: wal writer process
11759 be/4 mysql 0.00 B/s 25.65 K/s 0.00 % 0.83 % mysqld
18736 be/4 mysql 0.00 B/s 14.66 K/s 0.00 % 0.17 % mysqld
21768 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.02 % [kworker/1:0]
2922 be/4 postgres 0.00 B/s 69.63 K/s 0.00 % 0.00 % postgres: stats collector process

The MySQL data size is below the configured maximum memory size for MySQL, so everything should more or less fit into memory. Yet there is still a large amount of disk I/O by MySQL, much more than by PostgreSQL. Of course, much of that I/O comes from writes to the database.

However, I'm thinking of moving back to a domU-based database setup, maybe with one dedicated VM for both DBMSes or even two dedicated VMs, one for each of them. I'm not quite sure how Xen reacts to the current workload.

Back in the days when I did 3D computer graphics I did a lot of testing with different priority settings and such. Basically one would think that giving the renderer more CPU time would speed up the rendering, but this turned out to be wrong: the higher the render task's priority was, the slower the rendering got, because disk I/O (and other tasks that were necessary for the render task to work) got slowed down. When running the render task at the lowest priority, all the other necessary tasks could run at higher speed and return the CPU more quickly, which resulted in shorter render times.

So maybe I'm experiencing something similar with the databases on dom0 here as well: dom0 is busy doing database work, and this slows down all the other tasks (== domU VMs). If I moved the databases back to a domU, would that let dom0 better do its basic job of taking care of the domUs?

Of course, this is also a quite philosophical question, but what is the recommended setup? Is it better to separate the databases in two different VMs or just one? Or is running the databases on dom0 the best option?

I'm interested in your feedback, so please comment! :-)

UPDATE: you can also contact me @ij@nerdculture.de on Mastodon or on Friendica at https://nerdica.net/profile/ij

Category: Debian
Tags: Debian, MySQL, PostgreSQL, Xen

13 Oct 2018 7:21pm GMT

Jeremy Bicha: Shutter removed from Debian & Ubuntu

This week, the popular screenshot app Shutter was removed from Debian Unstable & Ubuntu 18.10. (It had already been removed from Debian "Buster" 6 months ago and some of its "optional" dependencies had already been removed from Ubuntu 18.04 LTS).

Shutter will need to be ported to gtk3 before it can return to Debian. (Ideally, it would support Wayland desktops too but that's not a blocker for inclusion in Debian.)

See the Debian bug for more discussion.

I am told that flameshot is a nice well-maintained screenshot app.

I believe Snap or Flatpak are great ways to make apps that use obsolete libraries available on modern distros that can no longer keep those libraries around. There isn't a Snap or Flatpak version of Shutter yet, so hopefully someone interested in that will help create one.

13 Oct 2018 6:29pm GMT

Dirk Eddelbuettel: RcppNLoptExample 0.0.1: Use NLopt from C/C++

A new package of ours, RcppNLoptExample, arrived on CRAN yesterday after a somewhat longer-than-usual wait for new packages as CRAN seems really busy these days. As always, a big and very grateful Thank You! for all they do to keep this community humming.

So what does it do?

NLopt is a very comprehensive library for nonlinear optimization. The nloptr package by Jelmer Ypma has long been providing an excellent R interface.

Starting with its 1.2.0 release, the nloptr package now exports several C symbols in a way that makes them accessible to other R packages without linking, easing the installation on all operating systems.

The new package RcppNLoptExample illustrates this facility with an example drawn from the NLopt tutorial. See the (currently single) file src/nlopt.cpp.

How / Why?

R uses C interfaces. These C interfaces can be exported between packages. So when the usual library(nloptr) (or an import via NAMESPACE) happens, we now also get a number of C functions registered.

And those are enough to run optimization from C++ as we simply rely on the C interface provided. Look carefully at the example code: the objective function and the constraint functions are C functions, and the body of our example invokes C functions from NLopt. This just works, for either C code or C++ (where we rely on Rcpp to marshal data back and forth with ease).

On the other hand, if we tried to use the NLopt C++ interface, which brings with it some interface code, we would require linking to that code (which R cannot easily export across packages using its C interface). So C it is.

Status

The package is pretty basic but fully workable. Some more examples should get added, and a helper class or two for state would be nice. The (very short) NEWS entry follows:

Changes in version 0.0.1 (2018-10-01)

  • Initial basic package version with one example from NLopt tutorial

Code, issue tickets etc are at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 Oct 2018 2:56pm GMT

Elana Hashman: PyGotham 2018 Talk Resources

At PyGotham in 2018, I gave a talk called "The Black Magic of Python Wheels". I based this talk on my two years of work on auditwheel and the manylinux platform, hoping to share some dark details of how the proverbial sausage is made.
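
For readers who have not met these tools, here is a rough sketch of the workflow the talk revolves around; the wheel filename is hypothetical, and only the auditwheel show and repair subcommands are assumed.

import subprocess

# Hypothetical wheel built against the local system's shared libraries.
wheel = "dist/mypkg-1.0-cp36-cp36m-linux_x86_64.whl"

# Inspect which external shared libraries the wheel links against and which
# manylinux policy, if any, it already satisfies.
subprocess.run(["auditwheel", "show", wheel], check=True)

# Bundle the required external libraries into the wheel and retag it for manylinux1.
subprocess.run(["auditwheel", "repair", "--plat", "manylinux1_x86_64", wheel], check=True)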

It was a fun talk, and I enjoyed the opportunity to wear my Python Packaging Authority hat:

A very witchy @PyGotham talk from @ehashdn about dark ELF magic pic.twitter.com/W8JMEVW8GE

- Geoffrey Thomas, but spooky (@geofft) October 6, 2018

The Black Magic of Python Wheels

Follow-up readings

All the PEPs referenced in the talk

In increasing numeric order.

Image licensing info

13 Oct 2018 4:00am GMT

Dirk Eddelbuettel: GitHub Streak: Round Five

Four years ago I referenced the Seinfeld Streak used in an earlier post of regular updates to the Rcpp Gallery:

This is sometimes called Jerry Seinfeld's secret to productivity: Just keep at it. Don't break the streak.

and then showed the first chart of GitHub streaking

github activity october 2013 to october 2014

And three years ago a first follow-up appeared in this post:

github activity october 2014 to october 2015

And two years ago we had a follow-up:

github activity october 2015 to october 2016

And last year we had another one:

github activity october 2016 to october 2017

As today is October 12, here is the newest one from 2017 to 2018:

github activity october 2017 to october 2018

Again, special thanks go to Alessandro Pezzè for the Chrome add-on GithubOriginalStreak.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

13 Oct 2018 1:11am GMT