26 May 2017

Planet Debian

Michal Čihař: Running Bitcoin node on Turris Omnia

For quite some time I've been a happy user of the Turris Omnia router. The router has quite good hardware, so I decided to try whether I could run a Bitcoin node and an ElectrumX server on it.

To make things easier to manage, I've decided to use LXC and run all of this in a separate container. First of all you need LXC on the router. This is the default setup, but in case you've removed it, you can add it back in the Updater settings.

Now we will create a Debian container. The Turris Documentation has basic information on how to create the container; in the rest of this post I assume it is called debian.
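For reference, creating the container from the command line could look roughly like this (a sketch using the standard LXC download template; the Turris documentation also describes doing the same through the LuCI web interface, and the exact image choices offered come from the Turris image server):

# run on the router as root
lxc-create -t download -n debian
# then pick the Debian distribution and release from the interactive menu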

It's also a good idea to enable LXC autostart; to do so, add your container to /etc/config/lxc-auto:

config container
    option name debian

You might also want to edit the LXC container configuration to enable clean shutdown:

# Send SIGRTMIN+3 to shutdown systemd
lxc.haltsignal = 37

To make the system more recent, I've decided to use Debian Stretch (one of the reasons was that ElectrumX needs Python 3.5.3 or newer). That is probably a sane choice right now anyway, given that Stretch is already frozen and will soon become stable. As Stretch is not available as a download option on the Omnia, I've chosen to install Debian Jessie and upgrade it later:

$ lxc-attach  --name debian
$ sed -i s/jessie/stretch/ /etc/apt/sources.list
$ apt update
$ apt full-upgrade

Now you have an up to date system and we can start installing dependencies. The first thing to install is Bitcoin Core; just follow the instructions on their website to do that. Then it's time to set it up and wait for it to download the full blockchain:

$ adduser bitcoin
$ su - bitcoin
$ bitcoind -daemon

Depending on your connection speed, the download will take a few hours. You can monitor the progress using bitcoin-cli; you're waiting for roughly 450k blocks:

$ bitcoin-cli getinfo
{
  "version": 140000,
  "protocolversion": 70015,
  "walletversion": 130000,
  "balance": 0.00000000,
  "blocks": 301242,
  "timeoffset": -1,
  "connections": 8,
  "proxy": "",
  "difficulty": 8853416309.1278,
  "testnet": false,
  "keypoololdest": 1490267950,
  "keypoolsize": 100,
  "paytxfee": 0.00000000,
  "relayfee": 0.00001000,
  "errors": ""
}

Depending on how much memory you have (mine has 2G) and what else you run on the router, you will have to tweak the bitcoind configuration to consume less memory. This can be done by editing .bitcoin/bitcoin.conf; I've ended up with the following settings:

par=1
dbcache=150
maxmempool=150
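If bitcoind is already running, the new settings only take effect after a restart; assuming the daemon was started manually as above, that is simply:

$ su - bitcoin
$ bitcoin-cli stop      # ask the running daemon to shut down cleanly
$ bitcoind -daemon      # start it again with the new bitcoin.conf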

You can also create a startup unit for the Bitcoin daemon (place it as /etc/systemd/system/bitcoind.service):

[Unit]
Description=Bitcoind
After=network.target

[Service]
ExecStart=/opt/bitcoin/bin/bitcoind
User=bitcoin
TimeoutStopSec=30min
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target

Now we can enable the service to start whenever the container starts:

systemctl enable bitcoind.service
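To pick up the freshly created unit and check that it works without restarting the container, the usual systemd commands apply:

systemctl daemon-reload
systemctl start bitcoind.service
systemctl status bitcoind.service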

Then I wanted to set up ElectrumX as well, but I quickly realized that it uses way more memory than my router has, so there is no way to run it without using swap, which would probably make it quite slow (I haven't tried that).

Filed under: Debian English OpenWrt

26 May 2017 10:00am GMT

Michael Prokop: The #newinstretch game: dbgsym packages in Debian/stretch

Debug packages include debug symbols and so far were usually named <package>-dbg in Debian. Those packages are essential if you have to debug failing (especially: crashing) programs. Since December 2015 Debian has automatic dbgsym packages, built by default. Those packages are available as <package>-dbgsym, so starting with Debian/stretch you should no longer look for -dbg packages but for -dbgsym instead. Currently there are 13,369 dbgsym packages available for the amd64 architecture of Debian/stretch; compared to the 2,250 packages I counted being available for Debian/jessie this is really a huge improvement. (If you're interested in the details of dbgsym packages as a package maintainer, take a look at the Automatic Debug Packages page in the Debian wiki.)

The dbgsym packages are NOT provided by the usual Debian archive though (which is a good thing, since those packages consume quite a lot of disk space; e.g. just the amd64 stretch mirror of debian-debug consumes 47GB). Instead there's a new archive called debian-debug. To get access to the dbgsym packages via the debian-debug suite on your Debian/stretch system, include the following entry in your apt sources.list configuration (replace deb.debian.org with whatever mirror you prefer):

deb http://deb.debian.org/debian-debug/ stretch-debug main

If you're not yet familiar with the usage of such debug packages, let me give you a short demo.
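One small assumption for the demo below: your shell must allow core dumps to be written at all (and the kernel's core pattern must point at a plain file). If no core file shows up, raising the core file size limit in the current shell usually helps:

% ulimit -c unlimited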

Let's start with sending SIGILL (Illegal Instruction) to a running sha256sum process, causing it to generate a so-called core dump file:

% sha256sum /dev/urandom &
[1] 1126
% kill -4 1126
% 
[1]+  Illegal instruction     (core dumped) sha256sum /dev/urandom
% ls
core
% file core
core: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 'sha256sum /dev/urandom', real uid: 1000, effective uid: 1000, real gid: 1000, effective gid: 1000, execfn: '/usr/bin/sha256sum', platform: 'x86_64'

Now we can run the GNU Debugger (gdb) on this core file, executing:

% gdb sha256sum core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...(no debugging symbols found)...done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in ?? ()
(gdb) bt
#0  0x000055fe9aab63db in ?? ()
#1  0x000055fe9aab8606 in ?? ()
#2  0x000055fe9aab4e5b in ?? ()
#3  0x000055fe9aab42ea in ?? ()
#4  0x00007faec30872b1 in __libc_start_main (main=0x55fe9aab3ae0, argc=2, argv=0x7ffc512951f8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffc512951e8) at ../csu/libc-start.c:291
#5  0x000055fe9aab4b5a in ?? ()
(gdb) 

As you can see by the several "??" question marks, the "bt" command (short for backtrace) doesn't provide useful information.
So let's install the corresponding debug package, which is coreutils-dbgsym in this case (since the sha256sum binary which generated the core file is part of the coreutils package).
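Assuming the debian-debug entry from above is already in your sources.list, installing it is the usual apt routine:

% sudo apt update
% sudo apt install coreutils-dbgsym

Then let's rerun the same gdb steps: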

% gdb sha256sum core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...Reading symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug...done.
done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
526     lib/sha256.c: No such file or directory.
(gdb) bt
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
#1  0x000055fe9aab8606 in sha256_stream (stream=0x55fe9be95060, resblock=0x7ffc51295080) at lib/sha256.c:230
#2  0x000055fe9aab4e5b in digest_file (filename=0x7ffc51295f3a "/dev/urandom", bin_result=0x7ffc51295080 "\001", missing=0x7ffc51295078, binary=<optimized out>) at src/md5sum.c:624
#3  0x000055fe9aab42ea in main (argc=<optimized out>, argv=<optimized out>) at src/md5sum.c:1036

As you can see it's reading the debug symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug and this is what we were looking for.
gdb now also tells us that we don't have lib/sha256.c available. For even better debugging it's useful to have the corresponding source code available. This is also just an `apt-get source coreutils ; cd coreutils-8.26/` away:

~/coreutils-8.26 % gdb sha256sum ~/core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...Reading symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug...done.
done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
526           R( h, a, b, c, d, e, f, g, K(25), M(25) );
(gdb) bt
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
#1  0x000055fe9aab8606 in sha256_stream (stream=0x55fe9be95060, resblock=0x7ffc51295080) at lib/sha256.c:230
#2  0x000055fe9aab4e5b in digest_file (filename=0x7ffc51295f3a "/dev/urandom", bin_result=0x7ffc51295080 "\001", missing=0x7ffc51295078, binary=<optimized out>) at src/md5sum.c:624
#3  0x000055fe9aab42ea in main (argc=<optimized out>, argv=<optimized out>) at src/md5sum.c:1036
(gdb) 

Now we're ready for all the debugging magic. :)

Thanks to everyone who was involved in getting us the automatic dbgsym package builds in Debian!

26 May 2017 9:37am GMT

25 May 2017

Planet Debian

Michael Prokop: The #newinstretch game: new forensic packages in Debian/stretch

Repeating what I did for the last Debian releases with the #newinwheezy and #newinjessie games, it's time for the #newinstretch game:

Debian/stretch AKA Debian 9.0 will include a bunch of packages for people interested in digital forensics. These are the packages maintained within the Debian Forensics team which are new in the Debian/stretch release compared to Debian/jessie (ignoring jessie-backports):

Join the #newinstretch game and present packages and features which are new in Debian/stretch.

25 May 2017 7:48am GMT

Jaldhar Vyas: For Downtown Hoboken

Q: What should you do if you see a spaceman?

A: Park there before someone takes it, man.

25 May 2017 4:34am GMT

24 May 2017

Planet Debian

Steve Kemp: Getting ready for Stretch

I run about 17 servers. Of those about six are very personal and the rest are a small cluster which is used for a single website. (Partly because the code is old and in some ways a bit badly designed, partly because "clustering!", "high availability!", "learning!", "fun!" - seriously, I had a lot of fun putting together a fault-tolerant deployment with haproxy, ucarp, etc, etc. If I were paying for it the site would be both retired and static!)

I've started the process of upgrading to stretch by picking a bunch of hosts that do things I could live without for a few days - in case there were big problems, or I needed to restore from backups.

So far I've upgraded:

All upgrades were painless, with only one real surprise - the attic-backup software was removed from Debian.

Although I do intend to retry using Lars's excellent obnam in the near future, pragmatically I wanted to stick with what I'm familiar with. Borg backup is a fork of attic which I've been aware of for a long time, but I never quite had a reason to try it out. Setting it up pretty much just meant editing my backup-script:

s/attic/borg/g

Once I did that, and created some new destinations all was good:

borg@rsync.io ~ $ borg init /backups/git.steve.org.uk.borg/
borg@rsync.io ~ $ borg init /backups/master.steve.org.uk.borg/
borg@rsync.io ~ $ ..
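Once the repositories exist, the actual backup call in such a script can stay very close to the attic one; a sketch only (the backed-up path and archive naming below are made up, the real script obviously has its own):

borg create --stats borg@rsync.io:/backups/git.steve.org.uk.borg::{now} /srv/git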

Upgrading other hosts, for example my website(s), and my email-box, will be more complex and fiddly. On that basis they will definitely wait for the formal stretch release.

But having a couple of hosts running the frozen distribution is good for testing, and to let me see what is new.

24 May 2017 9:00pm GMT

Jonathan Dowland: yakking

I've written a guest post for the Yakking Blog - "A WadC successor in Haskell?". It's mainly on the topic of Haskell, with WadC as a use-case for a thought experiment.

Yakking is a collaborative blog geared towards beginner software engineers that is put together by some friends of mine. I was talking to them about contributing a blog post on a completely different topic a while ago, but that has not come to fruition (there or anywhere, yet). When I wrote up the notes that formed the basis of this blog post, I realised it might be a good fit.

Take a look at some of their other posts, and if you find it interesting, subscribe!

24 May 2017 1:07pm GMT

Michal Čihař: Weblate 2.14.1

Weblate 2.14.1 has been released today. It is a bugfix release fixing possible migration issues, search results navigation and some minor security issues.

Full list of changes:

If you are upgrading from an older version, please follow our upgrading instructions.
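For a pip-based installation the version bump itself is typically just the following; this is only a sketch and no substitute for the upgrading instructions, which also cover the required database migrations:

pip install --upgrade Weblate==2.14.1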

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can login there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or to help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

24 May 2017 8:00am GMT

23 May 2017

Planet Debian

Dirk Eddelbuettel: Rcpp 0.12.11: Loads of goodies

The eleventh update in the 0.12.* series of Rcpp landed on CRAN yesterday following the initial upload on the weekend, and the Debian package and Windows binaries should follow as usual. The 0.12.11 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, the 0.12.8 release in November, the 0.12.9 release in January, and the 0.12.10 release in March --- making it the fifteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1026 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release follows on the heels of R's 3.4.0 release and addresses one or two issues from the transition, along with a literal boatload of other fixes and enhancements. James "coatless" Balamuta was once again tireless in making the documentation better, Kirill Mueller addressed a number of more obscure compiler warnings (triggered under -Wextra and the like), Jim Hester improved exception handling, and much more was contributed, mostly by the Rcpp Core team. All changes are listed below in some detail.

One big change that JJ made is that Rcpp Attributes also generate the now-almost-required package registration. (For background, I blogged about this one, two, three times.) We tested this, and do not expect it to throw curveballs. If you have an existing src/init.c, or if you do not have registration set in your NAMESPACE, it should cover most cases. But one never knows, and one first post-release buglet related to how devtools tests things has already been fixed in this PR by JJ.

Changes in Rcpp version 0.12.11 (2017-05-20)

  • Changes in Rcpp API:

    • Rcpp::exceptions can now be constructed without a call stack (Jim Hester in #663 addressing #664).

    • Somewhat spurious compiler messages under very verbose settings are now suppressed (Kirill Mueller in #670, #671, #672, #687, #688, #691).

    • Refreshed the included tinyformat template library (James Balamuta in #674 addressing #673).

    • Added printf-like syntax support for exception classes and variadic templating for Rcpp::stop and Rcpp::warning (James Balamuta in #676).

    • Exception messages have been rewritten to provide additional information. (James Balamuta in #676 and #677 addressing #184).

    • One more instance of Rf_mkString is protected from garbage collection (Dirk in #686 addressing #685).

    • Two exception specification that are no longer tolerated by g++-7.1 or later were removed (Dirk in #690 addressing #689)

  • Changes in Rcpp Documentation:

  • Changes in Rcpp Sugar:

    • Added sugar function trimws (Nathan Russell in #680 addressing #679).

  • Changes in Rcpp Attributes:

    • Automatically generate native routine registrations (JJ in #694)

    • The plugins for C++11, C++14, C++17 now set the values R 3.4.0 or later expects; a plugin for C++98 was added (Dirk in #684 addressing #683).

  • Changes in Rcpp support functions:

    • The Rcpp.package.skeleton() function now creates a package registration file provided R 3.4.0 or later is used (Dirk in #692)

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

23 May 2017 7:59pm GMT

Reproducible builds folks: Reproducible Builds: week 108 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday May 14 and Saturday May 20 2017:

News and Media coverage

IRC meeting

Our next IRC meeting has been scheduled for Thursday June 1 at 16:00 UTC.

Packages reviewed and fixed, bugs filed, etc.

Bernhard M. Wiedemann:

Chris Lamb:

Reviews of unreproducible packages

35 package reviews have been added, 28 have been updated and 12 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been added:

diffoscope development

strip-nondeterminism development

tests.reproducible-builds.org

Holger wrote a new systemd-based scheduling system replacing 162 constantly running Jenkins jobs which were slowing down job execution in general:

Misc.

This week's edition was written by Chris Lamb, Holger Levsen, Bernhard M. Wiedemann, Vagrant Cascadian and Maria Glukhova & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

23 May 2017 6:43pm GMT

Tianon Gravi: Debuerreotype

Following in the footsteps of one of my favorite Debian Developers, Chris Lamb / lamby (who is quite prolific in the reproducible builds effort within Debian), I've started a new project based on snapshot.debian.org (time-based snapshots of the Debian archive) and some of lamby's work for creating reproducible Debian (debootstrap) rootfs tarballs.

The project is named "Debuerreotype" as an homage to the photography roots of the word "snapshot" and the daguerreotype process which was an early method of taking photographs. The essential goal is to create "photographs" of a minimal Debian rootfs, so the name seemed appropriate (even if it's a bit on the "mouthful" side).

The end-goal is to create and release Debian rootfs tarballs for a given point-in-time (especially for use in Docker) which should be fully reproducible, and thus improve confidence in the provenance of the Debian Docker base images.

For more information about reproducibility and why it matters, see reproducible-builds.org, which has more thorough explanations of the why and how and links to other important work such as the reproducible builds effort in Debian (for Debian package builds).

In order to verify that the tool actually works as intended, I ran builds against seven explicit architectures (amd64, arm64, armel, armhf, i386, ppc64el, s390x) and eight explicit suites (oldstable, stable, testing, unstable, wheezy, jessie, stretch, sid).

I used a timestamp value of 2017-05-16T00:00:00Z, and skipped combinations that don't exist (such as wheezy on arm64) or aren't supported anymore (such as wheezy on s390x). I ran the scripts repeatedly over several days, using diffoscope to compare the results.
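Comparing two such builds with diffoscope is a single command; the tarball names below are only illustrative:

diffoscope stretch-amd64-rootfs.run1.tar.xz stretch-amd64-rootfs.run2.tar.xz

No output (and a zero exit status) means the two builds are bit-for-bit identical.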

While doing said testing, I ran across #857803, and added a workaround. There's also a minor outstanding issue with wheezy's reproducibility that I haven't had a chance to dig very deeply into yet (but it's pretty benign and Wheezy's LTS support window ends 2018-05-31, so I'm not too stressed about it).

I've also packaged the tool for Debian, and submitted it into the NEW queue, so hopefully the FTP Masters will look favorably upon this being a tool that's available to install from the Debian archive as well. 😇

Anyhow, please give it a try, have fun, and as always, report bugs!

23 May 2017 6:00am GMT

22 May 2017

Planet Debian

Gunnar Wolf: Open Source Symposium 2017

I travelled (for three days only!) to Argentina, to be a part of the Open Source Symposium 2017, a co-located event of the International Conference on Software Engineering.

This is, all in all, an interesting although small conference - we are around 30 people in the room. It is quite an unusual conference for me, as it is among the first "formal" academic conferences I have been part of. Sessions have so far been quite interesting.
What am I linking to from this image? Of course, the proceedings! They managed to publish the proceedings via the "formal" academic channels (a nice hard-cover Springer volume) under an Open Access license (which is sadly not usual, and is unbelievably expensive). So, you can download the full proceedings, or article by article, in EPUB or in PDF...
...Which is very, very nice :)
Previous editions of this symposium also have their respective proceedings available, but AFAICT they have not been downloadable.
So, get the book; it provides very interesting and original insights into our community, seen from several quite novel angles!


22 May 2017 5:21pm GMT

Michal Čihař: HackerOne experience with Weblate

Weblate started using HackerOne Community Edition some time ago, and I think it's good to share my experience with it. Do you have an open source project and want to get more attention from the security community? This post will show how it looks from the perspective of a pretty small project.

I applied with Weblate to HackerOne Community Edition at the end of March and it was approved early in April. Based on their recommendations I started in invite-only mode, but that really didn't bring much attention (exactly zero reports), so I decided to go public.

I asked for the project to be made public just after coming back from two weeks of vacation, expecting the approval to take some time, during which I would settle the things that had popped up during the vacation. In the end it was approved within a single day, so I was immediately under fire from incoming reports:

Reports on HackerOne

I was surprised to find that they didn't lie - you really will get a huge amount of issues right after making your project public. Most of them were quite simple and repetitive (as you can see from the number of duplicates), but they really provided valuable input.

Even more surprisingly, there was a second peak coming in when I started to disclose resolved issues (once Weblate 2.14 had been released).

Overall the issues could be divided into a few groups:

In the end it was a really challenging week to cope with all the incoming reports, but I think I've managed it quite well. The HackerOne metrics state that my average time to respond to incoming reports is 2 hours, which I think will not work in the long term :-).

Anyway, thanks to this, you can now enjoy Weblate 2.14, which is more secure than any release before. If you have not yet upgraded, you might consider doing that now, or look into our support offering for self-hosted Weblate.

The downside of all this was that the initial publishing on HackerOne made our website the target of a lot of automated tools, and the web server was not really ready for that. I'm really sorry to all Hosted Weblate users who were affected by this. This has also been addressed now, but the infrastructure really should have been prepared for it beforehand. To show how it looked, here is the number of requests to the nginx server:

nginx requests

I'm really glad I could make Weblate available on HackerOne as it will clearly improve its security and the security of the hosted offering we have. I will certainly consider providing swag and/or bounties for further severe reports, but that won't be possible without enough funding for Weblate.

Filed under: Debian English SUSE Weblate

22 May 2017 10:00am GMT

21 May 2017

Planet Debian

Ritesh Raj Sarraf: apt-offline 1.8.0 released

I am pleased to announce the release of apt-offline, version 1.8.0. This release is mainly a forward port of apt-offline to Python 3 and PyQt5. There are some glitches related to Python 3 and PyQt5, but overall the CLI interface works fine. Other than the porting, there's also an important bug fix related to a memory leak when using the MIME library. And then there are some updates to the documentation (user examples) based on feedback from users.

The release is available from GitHub and Alioth.

What is apt-offline ?

Description: offline APT package manager
apt-offline is an Offline APT Package Manager.
.
apt-offline can fully update and upgrade an APT based distribution without
connecting to the network, all of it transparent to APT.
.
apt-offline can be used to generate a signature on a machine (with no network).
This signature contains all download information required for the APT database
system. This signature file can be used on another machine connected to the
internet (which need not be a Debian box and can even be running windows) to
download the updates.
The downloaded data will contain all updates in a format understood by APT and
this data can be used by apt-offline to update the non-networked machine.
.
apt-offline can also fetch bug reports and make them available offline.
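To illustrate the workflow from the description above, a typical round-trip looks roughly like this (the file names are just examples; see the user examples in the documentation for the full set of options):

# on the disconnected machine: record what needs to be downloaded
apt-offline set /tmp/offline.sig --update --upgrade

# on a machine with internet access: download everything into a single bundle
apt-offline get /tmp/offline.sig --bundle /tmp/offline-bundle.zip

# back on the disconnected machine: feed the bundle to APT
apt-offline install /tmp/offline-bundle.zip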


21 May 2017 9:17pm GMT

Holger Levsen: 20170521-this-time-of-the-year

It's this time of the year again…

So it seems summer has finally arrived here and for the first time this year I've been offline for more than 24h, even despite having wireless network coverage. The lake, the people, the bonfire, the music, the mosquitos and the fireworks at 3.30 in the morning were totally worth it! ;-)

21 May 2017 6:26pm GMT

Russ Allbery: Review: Sector General

Review: Sector General, by James White

Series: Sector General #5
Publisher: Orb
Copyright: 1983
Printing: 2002
ISBN: 0-312-87770-6
Format: Trade paperback
Pages: 187

Sector General is the fifth book (or, probably more accurately, collection) in the Sector General series. I blame the original publishers for the confusion. The publication information is for the Alien Emergencies omnibus, which includes the fourth through the sixth books in the series.

Looking back on my previous reviews of this series (wow, it's been eight years since I read the last one?), I see I was reviewing them as novels rather than as short story collections. In retrospect, that was a mistake, since they're composed of clearly stand-alone stories with a very loose arc. I'm not going to go back and re-read the earlier collections to give them proper per-story reviews, but may as well do this properly here.

Overall, this collection is more of the same, so if that's what you want, there won't be any negative surprises. It's another four engineer-with-a-wrench stories about biological and medical puzzles, with only a tiny bit of characterization and little hint to any personal life for any of the characters outside of the job. Some stories are forgettable, but White does create some memorable aliens. Sadly, the stories don't take us to the point of real communication, so those aliens stop at biological puzzles and guesswork. "Combined Operation" is probably the best, although "Accident" is the most philosophical and an interesting look at the founding principle of Sector General.

"Accident": MacEwan and Grawlya-Ki are human and alien brought together by a tragic war, and forever linked by a rather bizarre war monument. (It's a very neat SF concept, although the implications and undiscussed consequences don't bear thinking about too deeply.) The result of that war was a general recognition that such things should not be allowed to happen again, and it brought about a new, deep commitment to inter-species tolerance and politeness. Which is, in a rather fascinating philosophical twist, exactly what MacEwan and Grawlya-Ki are fighting against: not the lack of aggression, which they completely agree with, but with the layers of politeness that result in every species treating all others as if they were eggshells. Their conviction is that this cannot create a lasting peace.

This insight is one of the most profound bits I've read in the Sector General novels and supports quite a lot of philosophical debate. (Sadly, there isn't a lot of that in the story itself.) The backdrop against which it plays out is an accidental crash in a spaceport facility, creating a dangerous and potentially deadly environment for a variety of aliens. Given the collection in which this is included and the philosophical bent described above, you can probably guess where this goes, although I'll leave it unspoiled if you can't. It's an idea that could have been presented with more subtlety, but it's a really great piece of setting background that makes the whole series snap into focus. A much better story in context than its surface plot. (7)

"Survivor": The hospital ship Rhabwar rescues a sole survivor from the wreck of an alien ship caused by incomplete safeguards on hyperdrive generators. The alien is very badly injured and unconscious and needs the full attention of Sector General, but on the way back, the empath Prilicla also begins suffering from empathic hypersensitivity. Conway, the protagonist of most of this series, devotes most of his attention to that problem, having delivered the rescued alien to competent surgical hands. But it will surprise no regular reader that the problems turn out to be linked (making it a bit improbable that it takes the doctors so long to figure that out). A very typical entry in the series. (6)

"Investigation": Another very typical entry, although this time the crashed spaceship is on a planet. The scattered, unconscious bodies of the survivors, plus signs of starvation and recent amputation on all of them, convinces the military (well, police is probably more accurate) escort that this is may be a crime scene. The doctors are unconvinced, but cautious, and local sand storms and mobile vegetation add to the threat. I thought this alien design was a bit less interesting (and a lot creepier). (6)

"Combined Operation": The best (and longest) story of this collection. Another crashed alien spacecraft, but this time it's huge, large enough (and, as they quickly realize, of a design) to indicate a space station rather than a ship, except that it's in the middle of nowhere and each segment contains a giant alien worm creature. Here, piecing together the biology and the nature of the vehicle is only the beginning; the conclusion points to an even larger problem, one that requires drawing on rather significant resources to solve. (On a deadline, of course, to add some drama.) This story requires the doctors to go unusually deep into the biology and extrapolated culture of the alien they're attempting to rescue, which made it more intellectually satisfying for me. (7)

Followed by Star Healer.

Rating: 6 out of 10

21 May 2017 5:21pm GMT

Adnan Hodzic: Automagically deploy & run containerized WordPress (PHP7 FPM, Nginx, MariaDB) using Ansible + Docker on AWS

In this blog post, I describe what started as a simple migration of a WordPress blog to AWS and ended up as an automation project consisting of publishing multiple Ansible roles that deploy and run multiple Docker images.

If you're not interested in reading about my entire journey, the insights gained along the way and how this process came to be, please skip down to the "Birth of: containerized-wordpress-project (TL;DR)" section.

Migrating WordPress blog to AWS (EC2, Lightsail?)

I've been sold on Amazon's AWS idea of cloud computing "services" for a couple of years now. I've wanted to, and been trying to, migrate this (WordPress) blog to AWS, but somehow it never worked out.

Moving it to an EC2 instance, with its own EBS volumes, AMI, EIP, Security Group … it just seemed like overkill.

When AWS Lightsail was first released, it seemed that was an answer to all my problems.

But it wasn't, even disregarding its somewhat restrictive/dumbed-down versions of the original features. Living in Amsterdam, my main problem with it was that it was only available in a single US region.

Regardless, I thought it had everything I needed for a WordPress site, and as a new service, it had great potential.

Its regional limitations were also good in one sense: they made me realize one important thing. Once I migrate my blog to AWS, I want to be able to seamlessly move/migrate it across different EC2 instances and different regions as they become available.

If done properly, it meant I could even move it across different clouds (I'm talking to you, Google Cloud).

P.S: AWS Lightsail is now available in a couple of different regions across Europe. The rollout was almost seamless.

Fundamental problem of every migration … is migration

Phase 1: Don't reinvent the wheel?

When you have a WordPress site that's not self-hosted, you want everything to work, yet you really don't want to spend any time managing the infrastructure it runs on.

And as soon as I started looking for what could fit these criteria, I found that there were pre-configured, working-out-of-the-box WordPress EC2 images available on the AWS Marketplace. Great!

But when I took a look, although everything ran out of the box, I wasn't happy with the software stack it was all built on. Namely Ubuntu 14.04 and Apache, and all of the services were started using custom scripts. Yuck.

With this setup, when it was time to upgrade (and it's already that time), you wouldn't be thinking about an upgrade. You'd only be thinking about another migration.

Phase 2: What if I built everything myself?

Installing and configuring everything manually, and then writing a huge HowTo which I would follow whenever I needed to re-create the whole stack, was not an option. The same went for scripting the whole process, as the overhead of changes that had to be tracked was way too big.

Being a huge Ansible fan, automating this was the natural next step.

I even found an awesome Ansible role which seemed like it was going to do everything I needed. Except I realized I would need to update all the software deployed with it, and customize it, since the configuration it was built for wasn't generic enough.

So I forked it and got to work. But soon enough, I was knee-deep in making and fiddling with various system changes - something I was trying to get away from in this case, and most importantly something I was trying to avoid when the time came for the next update.

Phase 3: Marriage made in heaven: Ansible + Docker + AWS

The idea to have everything Dockerized had been around from the very start. However, it never made a lot of sense until I put Ansible into the same picture. And it was at this point that my final idea and requirements became crystal clear.

Use Ansible to configure and set up a host ready for the Docker ecosystem: an ecosystem consisting of multiple separate containers, one for each required service (WordPress + Nginx + MariaDB), all linked together as a single service using Docker Compose.

The idea was backed by the goal of spending minimal to no time (and effort) on manual configuration of anything on the server. My level of attachment to this server was to be so low that I didn't even want to SSH into it.

If there was something wrong, I could just nuke the whole thing and deploy the code on a freshly rolled out, healthy server, with everything working out of the box.

After it was clear what needed to be done, I got to work.

Birth of: containerized-wordpress-project (TL;DR)

After a lot of work, the end result is a project which allows you to automagically deploy & run a containerized WordPress instance consisting of 3 separate containers: WordPress (PHP 7 FPM), Nginx and MariaDB.

Once run, the containerized-wordpress playbook will guide you through an interactive setup of all 3 containers, after which it will run all the Ansible roles created for this project. The end result is that a host you have never even SSH-ed into will be fully configured and running a containerized WordPress instance out of the box.

Most importantly, this whole process will be completed in <= 5 minutes and doesn't require any Docker or Ansible knowledge!
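For reference, kicking it off is the standard Ansible invocation; the playbook and inventory names below are only illustrative, the project's README has the real ones:

ansible-playbook -i hosts containerized-wordpress.yml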

containerized-wordpress demo

Console output of running "containerized-wordpress" Ansible Playbook:


Accessing WordPress instance created from "containerized-wordpress" Ansible Playbook:


Did I end up migrating to AWS in the end?

You bet. Thanks to the efforts made in the containerized-wordpress-project, I'm happy to report that my whole WordPress migration to AWS was completed in a matter of minutes and that this blog is now running on Docker and on AWS!

I hope this project will help you take the leap with your own migration.

Happy hacking!

21 May 2017 4:28pm GMT