23 May 2015
Die Troisdorfer Linux User Group (kurz TroLUG) veranstaltet
am Samstag den 01.08.2015
in Troisdorf nahe Köln/Bonn
einen Gentoo-Workshop, der sich an fortgeschrittene User richtet.
23 May 2015 10:54pm GMT
We were also affected by this issue at work, so I'm glad that the tension it was causing between ops and devs will finally be over.
23 May 2015 6:38pm GMT
20 May 2015
Late last night, I decided to apply some needed updates to my personal mail server, which is running Gentoo Linux (OpenRC) with a mail stack of Postfix & Dovecot with AMaViS (filtering based on SpamAssassin, ClamAV, and Vipul's Razor). After applying the updates, and restarting the necessary components of the mail stack, I ran my usual test of sending an email from one of my accounts to another one. It went through without a problem.
However, I realised that it isn't a completely valid test to send an email from one internal account to another because I have amavisd configured to not scan anything coming from my trusted IPs and domains. I noticed several hundred mails in the queue when I ran postqueue -p, and they all had notices similar to:
status=deferred (delivery temporarily suspended:
connect to 127.0.0.1[127.0.0.1]:10024: Connection refused)
That indicated to me that it wasn't a problem with Postfix (and I knew it wasn't a problem with Dovecot, because I could connect to my accounts via IMAP). Seeing as amavisd is running on localhost:10024, I figured that that is where the problem had to be. A lot of times, when there is a "connection refused" notification, it is because no service is listening on that port. You can test to see what ports are in a listening state and what processes, applications, or daemons are listening by running:
netstat -tupan | grep LISTEN
When I did that, I noticed that amavisd wasn't listening on port 10024, which made me think that it wasn't running at all. That's when I ran into the strange part of the problem - the init script output:
# /etc/init.d/amavisd start
* WARNING: amavisd has already been started
# /etc/init.d/amavisd stop
The amavisd daemon is not running [ !! ]
* ERROR: amavisd failed to start
So, apparently it is running and not running at the same time (sounds like a Linux version of Schrödinger's cat to me)! It was obvious, though, that it wasn't actually running (which could be verified with 'ps -elf | grep -i amavis'). So, what to do? I tried manually removing the PID file, but that actually just made matters a bit worse. Ultimately, this combination is what fixed the problem for me:
It seems that the SpamAssassin rules file had gone missing, and that was causing amavisd to not start properly. Manually updating the rules file (with 'sa-update') regenerated it, and then I zapped amavisd completely, and lastly restarted the daemon.
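For reference, the combination boils down to a few commands, sketched here as a dry run (the init-script path is the Gentoo/OpenRC default; the final queue flush is an extra step I'd suggest once amavisd is listening again):

```shell
#!/bin/sh
# Dry-run sketch of the recovery sequence; change RUN=echo to RUN='' to
# actually execute it on a Gentoo/OpenRC system.
RUN=echo
$RUN sa-update                    # regenerate the missing SpamAssassin rules file
$RUN /etc/init.d/amavisd zap      # reset the stale "started" state in OpenRC
$RUN /etc/init.d/amavisd start    # start the daemon cleanly
$RUN postqueue -f                 # flush the deferred mail queue
```

The zap is the key step: it clears OpenRC's bookkeeping, so the "already started / not running" contradiction goes away.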
Hope that helps anyone running into the same problem.
20 May 2015 2:32pm GMT
13 May 2015
This is a quick heads-up post about a behaviour change when running a gevent based application using the new pymongo 3 driver under uWSGI and its gevent loop.
I was naturally curious about testing this brand new and major update of the python driver for mongoDB, so I just played it dumb: I updated and gave it a try on our existing code base.
The first thing I noticed was that a vast majority of our applications were suddenly unable to reload gracefully and were force killed by uWSGI after some time!
worker 1 (pid: 9839) is taking too much time to die...NO MERCY !!!
All our applications must be able to be gracefully reloaded at any time. Some of them spawn quite a few greenlets of their own, so as an added measure to make sure we never lose any running greenlet, we use the gevent-wait-for-hub option, which is described as follows:
wait for gevent hub's death instead of the control greenlet
… which does not mean a lot, but is explained in a previous uWSGI changelog:
During shutdown only the greenlets spawned by uWSGI are taken in account, and after all of them are destroyed the process will exit. This is different from the old approach where the process wait for ALL the currently available greenlets (and monkeypatched threads). If you prefer the old behaviour just specify the option gevent-wait-for-hub
Compared to the previous 2.x versions, one of the key aspects of the new pymongo 3 driver is its intensive use of threads to handle server discovery and connection pools.
Now we can relate this fact to the gevent-wait-for-hub behaviour explained above:
the process wait for ALL the currently available greenlets (and monkeypatched threads)
This explains why our applications were hanging until uWSGI's reload-mercy (force kill) timeout kicked in!
When using pymongo 3 with the gevent-wait-for-hub option, you have to keep in mind that all of pymongo's threads (monkey-patched into greenlets) are considered active and will thus be waited on for termination before uWSGI recycles the worker!
Two options come to mind to handle this properly:
- stop using the gevent-wait-for-hub option and change your code to use a gevent pool group to make sure that all of your important greenlets are taken care of when a graceful reload happens (this is how we do it today; the gevent-wait-for-hub option was just overprotective for us).
- modify your code to properly close all your pymongo connections on graceful reloads.
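For illustration, here is roughly what the first option looks like on the uWSGI side (a sketch, assuming the gevent loop is configured in the ini file; the async-core count is arbitrary):

```ini
[uwsgi]
; gevent loop with 100 async cores
gevent = 100
; gevent-wait-for-hub deliberately NOT set: on graceful reload, only the
; greenlets spawned by uWSGI are awaited, so pymongo 3's monkey-patched
; monitor threads no longer block the worker from dying. Your own
; long-lived greenlets must then be tracked in a gevent pool in the app.
```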
Hope this will save some people the trouble of debugging this.
13 May 2015 2:56pm GMT
09 May 2015
Sadly, over the last few months, support for VMware Workstation and friends in Gentoo has dropped a lot. Why? Well, I was the only developer left who cared, and it's definitely not at the top of my Gentoo priorities list. To be honest, that has not really changed. However... let's try to harness the power of the community now.
I've pushed a mirror of the Gentoo vmware overlay to Github, see
If you have improvements, version bumps, ... - feel free to open pull requests. Everything related to VMware products is acceptable. I hope more people will sign up over time and help with merging. Just be careful when using the overlay; it likely won't get the same level of review as ebuilds in the main tree.
09 May 2015 11:41pm GMT
07 May 2015
We've been running a nice mongoDB cluster in production for several years now in my company.
This cluster suits quite a wide range of use cases, from very simple configuration collections to complex, heavily queried ones and real-time analytics. This versatility has been mongoDB's strong point for us since the start, as it allows different teams to address their different problems using the same technology. We also run some dedicated replica sets for other purposes and for network segmentation reasons.
We've waited a long time to see the latest 3.0 release features happen. The new WiredTiger storage engine arrived at the right time for us, since we had reached the limits of our main production cluster and were considering alternatives.
So, as surprising as it may seem, this is the first part of our mongoDB architecture we're upgrading to v3.0, as it has become a real necessity.
This post shares our first experience with an ongoing and carefully planned major upgrade of a production cluster; it does not claim to be a definitive migration guide.
Upgrade plan and hardware
The upgrade process is well covered in the mongoDB documentation already but I will list the pre-migration base specs of every node of our cluster.
- mongodb v2.6.8
- RAID1 spinning HDD 15k rpm for the OS (Gentoo Linux)
- RAID10 4x SSD for mongoDB files under LVM
- 64 GB RAM
Our overall philosophy is to keep most of the configuration parameters to their default values to start with. We will start experimenting with them when we have sufficient metrics to compare with later.
Disk (re)partitioning considerations
The master-gets-all-the-writes architecture is still one of the main limitations of mongoDB, and this does not change with v3.0, so you obviously need to challenge your current disk layout to take advantage of the new WiredTiger engine.
mongoDB 2.6 MMAPv1
Considering our cluster data size, we were forced to use our four SSDs in a RAID10, as it was the best compromise to preserve performance while providing sufficient data storage capacity.
We often reached the limits of our I/O and moved the journal out of the RAID10 to the mostly idle OS RAID1, with no significant improvement.
mongoDB 3.0 WiredTiger
The main consideration point for us is the new feature allowing indexes to be stored in a separate directory. We anticipated the reduction in data storage consumption thanks to snappy compression and decided to split our RAID10 into two dedicated RAID1 arrays.
Our test layout so far is:
- RAID1 SSD for the data
- RAID1 SSD for the indexes and journal
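On the mongod side, this split relies on WiredTiger's directoryForIndexes option; a sketch of the relevant YAML configuration (paths are illustrative, the mount points would match the RAID1 pairs above; the journal lives in a journal/ subdirectory of dbPath and can similarly be mounted separately):

```yaml
storage:
  dbPath: /var/lib/mongodb        # first RAID1 SSD pair (data)
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      # places indexes in an "index" subdirectory of dbPath,
      # which can be mounted on the second RAID1
      directoryForIndexes: true
```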
Our first node migration
After migrating our mongos and config servers to 3.0, we picked our worst performing secondary node to test the actual migration to WiredTiger. After all, we couldn't do worse, right?
We are aware that WiredTiger's strong suit is actually handling writes directed at it, and we will surely share our experience of that aspect later.
compression is bliss
To make this comparison accurate, we fully resynchronized this node before migrating to WiredTiger, so we could compare non-fragmented MMAPv1 disk usage with the WiredTiger compressed equivalent.
While I can't disclose the actual values, compression worked like a charm for us, with a gain ratio of 3.2 on disk usage (data + indexes), which is way beyond our expectations!
This is the DB Storage graph from MMS, showing a gain ratio of 4, surely due to the indexes now living on a separate disk.
As with the disk usage, the node had been running hot on MMAPv1 before the actual migration so we can compare memory allocation/consumption of both engines.
There again the memory management of WiredTiger and its cache shows great improvement. For now, we left the default setting which has WiredTiger limit its cache to half the available memory of the system. We'll experiment with this setting later on.
I'm still not sure of the actual cause yet, but the connection count is higher and steadier than before on this node.
The node has been running smoothly for several hours now. We are getting acquainted with the new metrics and statistics from WiredTiger. The overall node and I/O load is better than before!
While all the above graphs show huge improvements, there is no major change from our applications' point of view. We didn't expect any, since this is only one node in a whole cluster and the main benefits will also come from master node migrations.
I'll continue to share our experience and progress about our mongoDB 3.0 upgrade.
07 May 2015 2:33pm GMT
29 Apr 2015
Some convenient Makefile targets that make it very easy to keep code clean:
scan:
	scan-build clang foo.c -o foo

indent:
	indent -linux *.c
scan-build is llvm/clang's static analyzer and generates some decent warnings. Using clang to build (in addition to 'default' gcc in my case) helps diversity and sometimes catches different errors.
indent makes code pretty; the 'linux' default settings are not exactly what I want, but close enough that I don't care to fine-tune yet.
Every commit should be properly indented and not cause more warnings to appear!
29 Apr 2015 3:03am GMT
27 Apr 2015
The SELinux userspace project released version 2.4 in February this year, after release candidates had been tested for half a year. Since its release, we at the Gentoo Hardened project have been working hard to integrate it within Gentoo. This effort has been made more difficult by the migration of the policy store from one location to another while at the same time switching to HLL- and CIL-based builds.
Lately, 2.4 itself has been pretty stable, and we're focusing on the proper migration from 2.3 to 2.4. The SELinux policy has been adjusted to allow the migrations to work, and a few final fixes are being tested so that we can safely transition our stable users from 2.3 to 2.4. Hopefully we'll be able to stabilize the userspace this month or beginning of next month.
27 Apr 2015 5:18pm GMT
Since the latest release is affected and is the version I am using, I have been looking for a way to disable comments globally, at least until a fix has been released.
I'm surprised how difficult disabling comments globally is.
Option "Allow people to post comments on new articles" is filed under "Default article settings", so it applies to new posts only. Let's disable that.
There is a plug-in Disable comments, but since it claims to not alter the database (unless in persistent mode), I have a feeling that it may only remove commenting forms but leave commenting active to hand-made GET/POST requests, so that may not be safe.
So without studying WordPress code in depth my impression is that I have two options:
- a) restrict comments to registered users, deactivate registration (hoping that all existing users are friendly and that disabled registration is watertight in 4.2) and/or
- b) disable comments for future posts in the settings (in case I post again before an update) and for every single post from the past.
At the database level, the former can be seen here:
mysql> SELECT option_name, option_value FROM wp_options WHERE option_name LIKE '%regist%';
+----------------------+--------------+
| option_name          | option_value |
+----------------------+--------------+
| users_can_register   | 0            |
| comment_registration | 1            |
+----------------------+--------------+
2 rows in set (0.01 sec)
For the latter, this is how to disable comments on all previous posts:
mysql> UPDATE wp_posts SET comment_status = 'closed';
Query OK, .... rows affected (.... sec)
Rows matched: ....  Changed: ....  Warnings: 0
If you have comments to share, please use e-mail this time. Upgraded to 4.2.1 now.
27 Apr 2015 1:21pm GMT
26 Apr 2015
I announced in February that Excelsior, which ran the Tinderbox, was no longer at Hurricane Electric. I also said I'd start working on a new generation Tinderbox, and to do that I need a new devbox, as the only three Gentoo systems I have at home are the laptops and my HTPC - not exactly hardware to run compilation on all the freaking time.
So after thinking of options, I decided that it was much cheaper to just rent a single dedicated server, rather than a full cabinet, and after asking around for options I settled for Online.net, because of price and recommendation from friends. Unfortunately they do not support Gentoo as an operating system, which makes a few things a bit more complicated. They do provide you with a rescue system, based on Ubuntu, which is enough to do the install, but not everything is easy that way either.
Luckily, most of the configuration (but not all) was stored in Puppet - so I only had to rename the hosts there, change the MAC addresses for the LAN and WAN interfaces (I use static naming of the interfaces as wan0, which makes many other pieces of configuration much easier to deal with), change the IP addresses, and so on. Unfortunately, since I didn't originally set up that machine through Puppet, it did not carry all the information needed to replicate the system, so it required some iteration and fixing of the configuration. This also means that the next move is going to be easier.
The biggest problem has been setting up the MDRAID partitions correctly, because of GRUB2: if you didn't know, grub2 has an automagic dependency on mdadm - if you don't install mdadm, grub2 won't be able to install itself on a RAID device, even though it can detect it; the maintainer refused to add a USE flag for it, so you have to know about it.
Given what can and cannot be autodetected by the kernel, I had to fight a little more than usual, and I just gave up and rebuilt the two arrays (/ - yes, laugh at me, but when I installed Excelsior it was the only way to get GRUB2 not to throw up) as metadata 0.90. But the problem was being able to tell what the boot-up errors were, as I have no physical access to the device, of course.
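For reference, rebuilding an array with the old superblock format looks roughly like this (device names are illustrative; metadata 0.90 keeps the superblock at the end of the device, which is what the kernel's RAID autodetection can cope with). Sketched as a dry run:

```shell
#!/bin/sh
# Dry run: change RUN=echo to RUN='' to execute for real.
# WARNING: mdadm --create destroys the existing contents of the array.
RUN=echo
$RUN mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
```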
The Online.net server I rented is a Dell server that comes with iDRAC for remote management (Dell's own name for IPMI, essentially), and Online.net allows you to set up connections to it through your browser, which is pretty neat - they use a pool of temporary IP addresses and only authorize your own IP address to connect to them. On the other hand, they do not change the default certificates, which means you end up with the same untrusted Dell certificate every time.
From the iDRAC console you can't do much, but you can start up the remote, JavaWS-based console, which reminded me of something. Unfortunately, the JNLP file that you can download from iDRAC did not work on the Sun, Oracle, or IcedTea JREs, segfaulting (no kidding) with an X.509 error log as last output - I seriously thought the problem was with the certificates until I decided to dig deeper and found this set of entries in the JNLP file:
<resources os="Windows" arch="x86">
  <nativelib href="https://idracip/software/avctKVMIOWin32.jar" download="eager"/>
  <nativelib href="https://idracip/software/avctVMAPI_DLLWin32.jar" download="eager"/>
</resources>
<resources os="Windows" arch="amd64">
  <nativelib href="https://idracip/software/avctKVMIOWin64.jar" download="eager"/>
  <nativelib href="https://idracip/software/avctVMAPI_DLLWin64.jar" download="eager"/>
</resources>
<resources os="Windows" arch="x86_64">
  <nativelib href="https://idracip/software/avctKVMIOWin64.jar" download="eager"/>
  <nativelib href="https://idracip/software/avctVMAPI_DLLWin64.jar" download="eager"/>
</resources>
<resources os="Linux" arch="x86">
  <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
  <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
</resources>
<resources os="Linux" arch="i386">
  <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
  <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
</resources>
<resources os="Linux" arch="i586">
  <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
  <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
</resources>
<resources os="Linux" arch="i686">
  <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
  <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
</resources>
<resources os="Linux" arch="amd64">
  <nativelib href="https://idracip/software/avctKVMIOLinux64.jar" download="eager"/>
  <nativelib href="https://idracip/software/avctVMAPI_DLLLinux64.jar" download="eager"/>
</resources>
<resources os="Linux" arch="x86_64">
  <nativelib href="https://idracip/software/avctKVMIOLinux64.jar" download="eager"/>
  <nativelib href="https://idracip/software/avctVMAPI_DLLLinux64.jar" download="eager"/>
</resources>
<resources os="Mac OS X" arch="x86_64">
  <nativelib href="https://idracip/software/avctKVMIOMac64.jar" download="eager"/>
  <nativelib href="https://idracip/software/avctVMAPI_DLLMac64.jar" download="eager"/>
</resources>
Turns out if you remove everything but the Linux/x86_64 option, it does fetch the right jar and execute the right code without segfaulting. Mysteries of Java Web Start I guess.
So after finally getting the system to boot, the next step is setting up networking - as I said I used Puppet to set up the addresses and everything, so I had working IPv4 at boot, but I had to fight a little longer to get IPv6 working. Indeed IPv6 configuration with servers, virtual and dedicated alike, is very much an unsolved problem. Not because there is no solution, but mostly because there are too many solutions - essentially every single hosting provider I ever used had a different way to set up IPv6 (including none at all in one case, so the only option was a tunnel) so it takes some fiddling around to set it up correctly.
To be honest, Online.net has a better setup than OVH or Hetzner (the latter being very flaky), and a more self-service one than Hurricane, which was very flexible and easy to set up but required me to mail them whenever I wanted to make changes. Their documentation covers dibbler, as they rely on DHCPv6 with DUID for delegation - they give you a single /56 v6 net that you can then split up into subnets and delegate independently.
What DHCPv6 in this configuration does not give you is routing - which kinda makes sense, as you can use RA (Router Advertisement) for that. Unfortunately, at first I could not get it to work. Since I use subnets for the containerized network, I had enabled IPv6 forwarding, through Puppet of course. Turns out that Linux will ignore Router Advertisement packets when forwarding IPv6 unless you ask it nicely to, by setting accept_ra=2 as well. Yay!
Again, this is the kind of problem where finding the information took much longer than it should have; Linux does not really tell you that it's ignoring RA packets, and it is far from obvious that setting one sysctl will disable another - unless you go and look for it.
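Concretely, the combination that makes RA processing work alongside forwarding looks like this in /etc/sysctl.conf (wan0 here is just an illustrative interface name):

```
# Forwarding enabled for the containers' subnets...
net.ipv6.conf.all.forwarding = 1
# ...but still accept Router Advertisements on the upstream interface;
# the usual accept_ra=1 is ignored once forwarding is enabled, while
# accept_ra=2 accepts RAs even when forwarding.
net.ipv6.conf.wan0.accept_ra = 2
```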
Luckily this was the last problem; after that the server was set up fine and I just had to finish configuring the domain's zone file, the reverse DNS, and the SPF records… yes, this is all the kind of trouble you go through if you run your whole infrastructure yourself rather than going fully cloud - which is why I don't consider self-hosting a general solution.
What remained was just bits and pieces. The first was me realizing that Puppet does not remove entries from /etc/fstab by default, so I noticed that the Gentoo default /etc/fstab file still contains the entries for CD-ROM drives as well as /dev/fd0. I don't remember which was the last computer with a floppy disk drive that I used, let alone owned.
The other fun bit has been setting up the containers themselves - similarly to the server itself, they are set up with Puppet. Since the server used to run a tinderbox, it also used to host a proper rsync mirror - it was just easier - but I didn't want to repeat that here, and since I was unable to find a good mirror through mirrorselect (longer story), I configured Puppet to just provide all the containers with distfiles.gentoo.org as their sync server, which did not work. Turns out that our default mirror address does not have any IPv6 hosts on it - when I asked Robin about it, it seems we just don't have any IPv6-hosted mirror that can handle that traffic, which is sad.
So anyway, I now have a new devbox and I'm trying to set up the rest of my repositories and access (I have not yet set up access to Gentoo's repositories, which is kind of the point here). Hopefully this will also lead to more technical blogging in the next few weeks, as I'm cutting down on the overwork to relax a bit.
26 Apr 2015 4:31pm GMT
25 Apr 2015
As previously announced, and as previously discussed when merging Overlays with Gentoo's primary SCM hosting (CVS+Git): the old overlays hostnames (overlays.gentoo.org) have now been disabled, as well as non-SSH traffic to git.gentoo.org. This was a deliberate move to separate anonymous from authenticated Git traffic, and to ensure that anonymous Git traffic can continue to be scaled when we go ahead with switching away from CVS. Anonymous and authenticated Git are now served by separate systems, and no anonymous Git traffic is permitted to the authenticated Git server.
If you have anonymous Git checkouts from any of the affected hostnames, you should switch them to using one of these new URLs:
If you have authenticated Git checkouts from the same hosts, you should switch them to this new URL:
In either case, you can trivially update any existing checkout with:
git remote set-url origin git+ssh://firstname.lastname@example.org/$REPO
(be sure to adjust the path of the repository and the name of the remote as needed).
25 Apr 2015 12:00am GMT
23 Apr 2015
In a previous post I described how to patch QEMU to allow building binutils in a cross chroot. In there I increased the maximal number of argument pages to 64 because I was just after a quick fix. Today I finally bisected that, and the result is you need at least 46 for MAX_ARG_PAGES in order for binutils to build.
In bug 533882 it is discussed that LibreOffice requires an even larger number of pages. It is possible other packages also require such a large limit. Note that it may not be a good idea to increase the MAX_ARG_PAGES limit to an absurdly high number and leave it at that. A large amount of memory will be allocated in the target's memory space and that may be a problem.
Hopefully QEMU will switch to a dynamic limit someday, like the kernel. In the meantime, my upcoming crossroot tool will offer a way to deal with that more easily.
23 Apr 2015 2:50pm GMT
21 Apr 2015
I give a lot of talks. Often I'm paid to give them, and I regularly get very high ratings or even awards. But every time I listen to people speaking in public for the first time, or maybe the first few times, I think of some very easy ways for them to vastly improve their talks.
Here, I wanted to share my top tips to make your life (and, selfishly, my life watching your talks) much better:
- Presenter mode is the greatest invention ever. Use it. If you ignore or forget everything else in this post, remember the rainbows and unicorns of presenter mode. This magical invention keeps the current slide showing on the projector while your laptop shows something different - the current slide, a small image of the next slide, and your slide notes. The last bit is the key. What I put on my notes is the main points of the current slide, followed by my transition to the next slide. Presentations look a lot more natural when you say the transition before you move to the next slide rather than after. More than anything else, presenter mode dramatically cut down on my prep time, because suddenly I no longer had to rehearse. I had seamless, invisible crib notes while I was up on stage.
- Plan your intro. Starting strong goes a long way, as it turns out that making a good first impression actually matters. It's time very well spent to literally script your first few sentences. It helps you get the flow going and get comfortable, so you can really focus on what you're saying instead of how nervous you are. Avoid jokes unless most of your friends think you're funny almost all the time. (Hint: they don't, and you aren't.)
- No bullet points. Ever. (Unless you're an expert, and you probably aren't.) We've been trained by too many years of boring, sleep-inducing PowerPoint presentations that bullet points equal naptime. Remember presenter mode? Put the bullet points in the slide notes that only you see. If for some reason you think you're the sole exception to this, at a minimum use visual advances/transitions. (And the only good transition is an instant appear. None of that fading crap.) That makes each point appear on-demand rather than all of them showing up at once.
- Avoid text-filled slides. When you put a bunch of text in slides, people inevitably read it. And they read it at a different pace than you're reading it. Because you probably are reading it, which is incredibly boring to listen to. The two different paces mean they can't really focus on either the words on the slide or the words coming out of your mouth, and your attendees consequently leave having learned less than either of those options alone would've left them with.
- Use lots of really large images. Each slide should be a single concept with very little text, and images are a good way to force yourself to do so. Unless there's a very good reason, your images should be full-bleed. That means they go past the edges of the slide on all sides. My favorite place to find images is a Flickr advanced search for Creative Commons licenses. Google also has this capability within Search Tools. Sometimes images are code samples, and that's fine as long as you remember to illustrate only one concept - highlight the important part.
- Look natural. Get out from behind the podium, so you don't look like a statue or give the classic podium death-grip (one hand on each side). You'll want to pick up a wireless slide advancer and make sure you have a wireless lavalier mic, so you can wander around the stage. Remember to work your way back regularly to check on your slide notes, unless you're fortunate enough to have them on extra monitors around the stage. Talk to a few people in the audience beforehand, if possible, to get yourself comfortable and get a few anecdotes of why people are there and what their background is.
- Don't go over time. You can go under, even a lot under, and that's OK. One of the best talks I ever gave took 22 minutes of a 45-minute slot, and the rest filled up with Q&A. Nobody's going to mind at all if you use up 30 minutes of that slot, but cutting into their bathroom or coffee break, on the other hand, is incredibly disrespectful to every attendee. This is what watches, and the timer in presenter mode, and clocks, are for. If you don't have any of those, ask a friend or make a new friend in the front row.
- You're the centerpiece. The slides are a prop. If people are always looking at the slides rather than you, chances are you've made a mistake. Remember, the focus should be on you, the speaker. If they're only watching the slides, why didn't you just post a link to Slideshare or Speakerdeck and call it a day?
I've given enough talks that I have a good feel on how long my slides will take, and I'm able to adjust on the fly. But if you aren't sure of that, it might make sense to rehearse. I generally don't rehearse, because after all, this is the lazy way.
If you can manage to do those 8 things, you've already come a long way. Good luck!
21 Apr 2015 3:42pm GMT
16 Apr 2015
I already sent the last-rites announcement a few days ago, but here is a more detailed post on the upcoming removal of "old" NX packages. Long story short: migrate to X2Go if possible, or use the NX overlay ("best-effort" support provided).
2015/04/26 note: treecleaning done!
Basically, all NX clients and servers except x2go and nxplayer! Here is the complete list with some specific last rites reasons:
- net-misc/nxclient, net-misc/nxnode, net-misc/nxserver-freeedition: binary-only original NX client and server. Upstream has moved on to a closed-source technology, and this version bundles potentially vulnerable binary code. It also does not work as well as before with current libraries (like Cairo).
- net-misc/nxserver-freenx, net-misc/nxsadmin: the first open-source alternative server. It could be tricky to get working, and is not updated anymore (last upstream activity around 2009)
- net-misc/nxcl, net-misc/qtnx: an open-source alternative client (last upstream activity around 2008)
- net-misc/neatx: Google's take on a NX server, sadly it never took off (last upstream activity around 2010)
- app-admin/eselect-nxserver (an eselect module to switch active NX server, useless without these servers in tree)
Continue using these packages on Gentoo
These packages will be dropped from the main tree by the end of this month (2015/04), and then only available in the NX overlay. They will still be supported there in a "best effort" way (no guarantee how long some of these packages will work with current systems).
So, if one of these packages still works better for you, or you need to keep them around before migrating, just run:
# layman -a nx
While it is not a direct drop-in replacement, x2go is the most complete solution currently in the Gentoo tree (and my recommendation), with a lot of possible advanced features, active upstream development, … You can connect to net-misc/x2goserver with net-misc/x2goclient, net-misc/pyhoca-gui, or net-misc/pyhoca-cli.
If you want to try the new system from NoMachine (the company that created NX), the client is available in Portage as net-misc/nxplayer. The server itself is not packaged yet; if you are interested in it, see bug #488334.
16 Apr 2015 9:30pm GMT
11 Apr 2015
I was pointed to this Mozilla Security Advisory:
Certificate verification bypass through the HTTP/2 Alt-Svc header
While it doesn't say whether all versions prior to 37.0.1 are affected, it does say that sending a certain server response header disabled warnings about invalid SSL certificates for that domain. Ooops.
I'm not sure how relevant HTTP/2 is by now.
11 Apr 2015 7:23pm GMT
Slot conflicts can be annoying. It's worse when an attempt to fix them leads to an even bigger mess. I hope this post helps you with some cases - and that portage will keep getting smarter about resolving them automatically.
Read more »
11 Apr 2015 4:09pm GMT