28 Apr 2016


Gentoo News: GSoC 2016: Five projects accepted

We are excited to announce that 5 students have been selected to participate with Gentoo during the Google Summer of Code 2016!

You can follow our students' progress on the gentoo-soc mailing list and chat with us regarding our GSoC projects via IRC in #gentoo-soc on freenode.
Congratulations to all the students. We look forward to their contributions!


Accepted projects

Clang native support - Lei Zhang: Bring native clang/LLVM support in Gentoo.

Continuous Stabilization - Pallav Agarwal: Automate the package stabilization process using continuous integration practices.

kernelconfig - André Erdmann: Consistently generate custom Linux kernel configurations from curated sources.

libebuild - Denys Romanchuk: Create a common shared C-based implementation for package management and other ebuild operations in the form of a library.

Gentoo-GPG - Angelos Perivolaropoulos: Code the new Meta-Manifest system for Gentoo and improve Gentoo Keys capabilities.

28 Apr 2016 12:00am GMT

Gentoo News: Events: Gentoo Miniconf 2016

Gentoo Miniconf 2016 will be held in Prague, Czech Republic during the weekend of 8 and 9 October 2016. Like last time, it is hosted together with the LinuxDays by the Faculty of Information Technology of the Czech Technical University.

Want to participate? The call for papers is open until 1 August 2016.

28 Apr 2016 12:00am GMT

25 Apr 2016


Gentoo Miniconf 2016: Announcing Gentoo Miniconf 2016 and Call for Papers

Gentoo Miniconf 2016 will be held in Prague, Czech Republic during the weekend of 8 and 9 October 2016. Like last time, it is hosted together with the LinuxDays by the Faculty of Information Technology of the Czech Technical University in Prague (FIT ČVUT).

The call for papers is now open; you can submit your session proposal until 1 August 2016. Want to hold a meeting, discussion, presentation, or workshop, do ebuild hacking, or anything else? Tell us!


25 Apr 2016 2:23pm GMT

15 Apr 2016


Michał Górny: Why automated gentoo-mirror commits are not signed and how to verify them

Those of you who use my Gentoo repository mirrors may have noticed that the repositories are constructed of original repository commits automatically merged with cache updates. While the original commits are signed (at least in the official Gentoo repository), the automated cache updates and merge commits are not. Why?

Actually, I have considered signing them more than once, and even discussed it a bit with Kristian. However, each time I decided against it. I was seriously concerned that those automatic signatures would not be able to provide a sufficient level of security - and could cause users to believe the commits are authentic even if they were not. I think it would be useful to explain why.

Verifying the original commits

While this may not be entirely clear, by signing the merge commits I would implicitly approve the original commits as well. While this might be worked around via some kind of policy requiring the developer to perform additional verification, such a policy would be impractical and confusing. Therefore, it only seems reasonable to verify the original commits before signing merges.

The problem with that is that we still do not have an official verification tool for repository commits. There's the whole Gentoo-keys project that aims to eventually solve the problem but it's not there yet. Maybe this year's Summer of Code will change that…

Not having official verification routines, I would have to implement my own. I'm not saying it would be that hard - but it would always be semi-official, at best. Of course, I could spend a day or two contributing the needed code to Gentoo-keys and preventing some student from getting the $5500 of Google money… but that would be the non-enterprise way of solving the urgent problem.

Protecting the signing key

The other important point is the security of the key used to sign commits. For the whole effort to make any sense, it needs to be strongly protected against compromise. Keeping the key (or even a subkey) unencrypted on the server really diminishes the whole effort (I'm not pointing fingers here!).

Basic rules first: the primary key is kept off-line and used only to generate a signing subkey. The signing subkey is stored encrypted on the server and used via gpg-agent, so that it is never kept unencrypted outside of memory. All nice and shiny.
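
In GnuPG terms, that setup looks roughly like this (a sketch; KEYID and the output filename are placeholders):

# on an off-line machine: generate the primary key
gpg --gen-key

# add a dedicated signing subkey in the interactive key editor
gpg --edit-key KEYID
gpg> addkey

# export only the secret subkey, to be copied to the server
gpg --export-secret-subkeys KEYID > signing-subkey.gpg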

The problem is - this means someone needs to type the password in. Which means there needs to be an interactive bootstrap process. Which means that every time the server reboots for some reason, or gpg-agent dies, or whatever, the mirrors stop and wait for me to come and type the password in - hopefully when I'm around some semi-secure device.

Protecting the software

Even with all those points considered and solved satisfactorily, there's one more issue: the software. I won't be running all those scripts at home. So it's not just me you have to trust - you have to trust all the other people with administrative access to the machine that's running the scripts, and you have to trust the employees of the hosting company who have physical access to the machine.

I mean, any one of them could go and attempt to alter the data somehow. Even if I tried hard, I wouldn't be able to protect my scripts from this. In the worst case, they are going to add a valid, verified signature to data that has been altered externally. What's the value of that signature then?

And this is the exact reason why I don't do automatic signatures.

How to verify the mirrors then?

So if automatic signatures are not the way, how can you verify the commits on repository mirrors? The answer is not that complex.

As I've mentioned, the mirrors use merge commits to combine metadata updates with original repository commits. What's important is that this preserves the original commits, along with their valid signatures and therefore provides a way to verify them. What's the use of that?

Well, you can look for the last merge commit to find the matching upstream commit. Then you can use the usual procedure to verify the upstream commit. And then, you can diff it against the mirror HEAD to see that only caches and other metadata have been altered. While this doesn't guarantee that the alterations are genuine, the danger coming from them is rather small (if any).
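
Concretely, that procedure boils down to something like this (a sketch; it assumes the merge commits carry the original upstream commit as their second parent, and that you already have the signer's key in your GnuPG keyring):

merge=$(git rev-list --merges -n 1 HEAD)   # the last merge commit
upstream=$(git rev-parse "${merge}^2")     # its upstream parent

git verify-commit "${upstream}"            # the usual signature check

git diff --stat "${upstream}" HEAD         # only caches/metadata should differ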

15 Apr 2016 9:46pm GMT

11 Apr 2016


Nathan Zachary: Linux firmware for iwlwifi ucode failed with error -2

Important!

My tech articles, especially Linux ones, are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

A couple weeks ago, I decided to update my primary laptop's kernel from 4.0 to 4.5. Everything went smoothly with the exception of my wireless networking. This particular laptop uses a wifi chipset that is controlled by the Intel Wireless DVM Firmware:


# lspci | grep 'Network controller'
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 [Taylor Peak] (rev 34)

According to the Intel Linux support for wireless networking page, I need kernel support for the 'iwlwifi' driver. I remembered this requirement from building the previous kernel, so I included it in the new 4.5 kernel. The new kernel had some additional options, though, and they were:


[*] Intel devices
...
< > Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi)
< > Intel Wireless WiFi DVM Firmware support
< > Intel Wireless WiFi MVM Firmware support
Debugging Options --->

As previously mentioned, the Kernel page for iwlwifi indicates that I need the DVM module for my particular chipset, so I selected it. Previously, I chose to build support for the driver into the kernel, and then use the firmware for the device. However, this time, I noticed that it wasn't loading:


[ 3.962521] iwlwifi 0000:03:00.0: can't disable ASPM; OS doesn't have ASPM control
[ 3.970843] iwlwifi 0000:03:00.0: Direct firmware load for iwlwifi-6000g2a-6.ucode failed with error -2
[ 3.976457] iwlwifi 0000:03:00.0: loaded firmware version 18.168.6.1 op_mode iwldvm
[ 3.996628] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUG enabled
[ 3.996640] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEBUGFS disabled
[ 3.996647] iwlwifi 0000:03:00.0: CONFIG_IWLWIFI_DEVICE_TRACING enabled
[ 3.996656] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Advanced-N 6205 AGN, REV=0xB0
[ 3.996828] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 4.306206] iwlwifi 0000:03:00.0 wlp3s0: renamed from wlan0
[ 9.632778] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.633025] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.633133] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 9.898531] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.898803] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 9.898906] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 20.605734] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.605983] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.606082] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0
[ 20.873465] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.873831] iwlwifi 0000:03:00.0: L1 Enabled - LTR Disabled
[ 20.873971] iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0

The strange thing, though, is that the firmware was right where it should be:


# ls -lh /lib/firmware/
total 664K
-rw-r--r-- 1 root root 662K Mar 26 13:30 iwlwifi-6000g2a-6.ucode

After digging around for a while, I finally figured out the problem. The kernel was trying to load the firmware for this device/driver before it was actually available. There are definitely ways to build the firmware into the kernel image as well, but instead of going that route, I just chose to rebuild my kernel with this driver as a module (which is actually the recommended method anyway):


[*] Intel devices
...
<M> Intel Wireless WiFi Next Gen AGN - Wireless-N/Advanced-N/Ultimate-N (iwlwifi)
<M> Intel Wireless WiFi DVM Firmware support
< > Intel Wireless WiFi MVM Firmware support
Debugging Options --->
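
For reference, the firmware-in-kernel route that I decided against would have looked roughly like this (a sketch using the stock kernel options for bundling firmware into the image, with the driver kept built-in):

CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE="iwlwifi-6000g2a-6.ucode"
CONFIG_EXTRA_FIRMWARE_DIR="/lib/firmware"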

If I had fully read the page instead of just skimming it, I could have saved myself a lot of time. Hopefully this post will help anyone getting the "Direct firmware load for iwlwifi-6000g2a-6.ucode failed with error -2" error message.

Cheers,
Zach

11 Apr 2016 9:23pm GMT

31 Mar 2016


Anthony Basile: Why macros like __GLIBC__ and __UCLIBC__ are bad.

I'll be honest, this is a short post because the aggregation on planet.gentoo.org is failing for my account! So, Jorge (jmbsvicetto) is debugging it and I need to push out another blog entry to trigger venus, the aggregation program. Since I don't like writing trivial stuff, I'm going to write something short, but hopefully important.

C Standard libraries, like glibc, uClibc, musl and the like, were born out of a world in which every UNIX vendor had their own set of useful C functions. Code portability put pressure on the various libcs to incorporate these functions from one another, first leading to a mess and then to standards like POSIX, XOPEN, SUSv4 and so on. Chapter 1 of Kerrisk's The Linux Programming Interface has a nice write-up on this history.

We still live in the shadows of that world today. If you look through the code base of uClibc you'll see lots of macros like __GLIBC__, __UCLIBC__, __USE_BSD, and __USE_GNU. These are used in #ifdef … #endif blocks which are meant to shield features unless you want a glibc- or uClibc-only feature.

musl has stubbornly and correctly refused to include a __MUSL__ macro. Consider the approach to portability taken by GNU autotools. Macros such as AC_CHECK_LIBS(), AC_CHECK_FUNC() or AC_CHECK_HEADERS() unambiguously target the feature in question without making use of __GLIBC__ or __UCLIBC__. Whereas the former approach globs functions together into sets, the latter simply asks: do you have this function or not?

Now consider how uClibc makes use of both __GLIBC__ and __UCLIBC__. If a function is provided by the former but not by the latter, then it expects a program to use

#if defined(__GLIBC__) && !defined(__UCLIBC__)

This is getting a bit ugly and syntactically ambiguous. Someone not familiar with this could easily misinterpret it, or reject it.
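
To make the contrast concrete, here is a small sketch. strchrnul() is a glibc extension, and HAVE_STRCHRNUL is the macro that an AC_CHECK_FUNCS([strchrnul]) line in configure.ac would define:

#define _GNU_SOURCE
#include <string.h>

/* Return a pointer to the end of the current ':'-separated field. */
static const char *field_end(const char *s)
{
#ifdef HAVE_STRCHRNUL   /* the feature test: do you have this function? */
        return strchrnul(s, ':');
#else                   /* portable fallback, no guessing about the libc */
        const char *p = strchr(s, ':');
        return p ? p : s + strlen(s);
#endif
}

The wrong-minded alternative would instead guard the strchrnul() call with #if defined(__GLIBC__) && !defined(__UCLIBC__), which silently breaks on musl precisely because musl refuses to identify itself.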

So I've hit bugs like these. I hit one in gdk-pixbuf and I was not able to convince upstream to consistently use __GLIBC__ and __UCLIBC__. Alternatively, I hit this in geocode-glib and geoclue, and they did accept it. I went with the wrong-minded approach because that's what was already there, and I didn't feel like sifting through their code base and revamping their build system. This isn't just laziness, it's historical weight.

So kudos to musl. And for all the faults of GNU autotools, at least its approach to portability is correct.

31 Mar 2016 11:15am GMT

28 Mar 2016


Anthony Basile: hardened-sources Role-based Access Control (RBAC): how to write mostly permissive policies.

RBAC is a security feature of the hardened-sources kernels. As its name suggests, it's a role-based access control system which allows you to define policies for restricting access to files, sockets and other system resources. Even root is restricted, so attacks that escalate privilege are not going to get far even if they do obtain root. In fact, you should be able to give out remote root access to anyone on a well-configured system running RBAC and still remain confident that you are not going to be owned! I wouldn't recommend it just in case, but it should be possible.

It is important to understand what RBAC will give you and what it will not. RBAC has to be part of a more comprehensive security plan and is not a single security solution. In particular, if one can compromise the kernel, then one can proceed to compromise the RBAC system itself and undermine whatever security it offers. Or put another way, protecting root is pretty much a moot point if an attacker is able to get ring 0 privileges. So, you need to start with an already hardened kernel, that is a kernel which is able to protect itself. In practice, this means configuring most of the GRKERNSEC_* and PAX_* features of a hardened-sources kernel. Of course, if you're planning on running RBAC, you need to have that option on too.

Once you have a system up and running with a properly configured kernel, the next step is to set up the policy file which lives at /etc/grsec/policy. This is where the fun begins because you need to ask yourself what kind of a system you're going to be running and decide on the policies you're going to implement. Most of the existing literature is about setting up a minimum privilege system for a server which runs only a few simple processes, something like a LAMP stack. I did this for years when I ran a moodle server for D'Youville College. For a minimum privilege system, you want to deny-by-default and only allow certain processes to have access to certain resources as explicitly stated in the policy file. RBAC is ideally suited for this. Recently, however, I was asked to set up a system where the opposite was the case, so this article is going to explore the situation where you want to allow-by-default; however, for completeness let me briefly cover deny-by-default first.

The easiest way to proceed is to get all your services running as they should and then turn on learning mode for about a week, or at least until you have one cycle of, say, log rotations and other cron-based jobs. Basically, your services should have attempted to access each resource at least once so the event gets logged. You then distill those logs into a policy file describing only what should be permitted and tweak as needed. You proceed as follows:

1. gradm -P  # Create a password to enable/disable the entire RBAC system
2. gradm -P admin  # Create a password to authenticate to the admin role
3. gradm -F -L /etc/grsec/learning.log # Turn on system wide learning
4. # Wait a week.  Don't do anything you don't want to learn.
5. gradm -F -L /etc/grsec/learning.log -O /etc/grsec/policy  # Generate the policy
6. gradm -E # Enable RBAC system wide
7. # Look for denials.
8. gradm -a admin  # Authenticate to admin to do extraordinary things, like tweak the policy file
9. gradm -R # reload the policy file
10. gradm -u # Drop those privileges to do ordinary things
11. gradm -D # Disable RBAC system wide if you have to

Easy right? This will get you pretty far, but you'll probably discover that some things you want to work are still being denied because those particular events never occurred during the learning. A typical example here is that you might have ssh'ed in from one IP, but now you're ssh-ing in from a different IP and you're getting denied. To tweak your policy, you first have to escape the restrictions placed on root by transitioning to the admin role. Then, using dmesg, you can see what was denied, for example:

[14898.986295] grsec: From 192.168.5.2: (root:U:/) denied access to hidden file / by /bin/ls[ls:4751] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:4327] uid/euid:0/0 gid/egid:0/0

This tells you that root, logged in via ssh from 192.168.5.2, tried to ls / but was denied. As we'll see below, this is a one line fix, but if there are a cluster of denials to /bin/ls, you may want to turn on learning on just that one subject for root. To do this you edit the policy file and look for subject /bin/ls under role root. You then add an 'l' to the subject line to enable learning for just that subject.

role root uG
…
# Role: root
subject /bin/ls ol {  # Note the 'l'

You restart RBAC using gradm -E -L /etc/grsec/partial-learning.log and obtain the new policy for just that subject by running gradm -L /etc/grsec/partial-learning.log -O /etc/grsec/partial-learning.policy. That single subject block can then be spliced into the full policy file to change the restrictions on /bin/ls when run by root.
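
Collected in one place, that partial-learning cycle is:

gradm -D                                      # disable RBAC if it is currently enabled
gradm -E -L /etc/grsec/partial-learning.log   # re-enable, learning on the 'l' subjects
# ... exercise /bin/ls as root ...
gradm -L /etc/grsec/partial-learning.log -O /etc/grsec/partial-learning.policy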

It's pretty obvious that RBAC is designed to deny by default. If a subject (an executable) is not explicitly granted access to some object (some system resource) when it's running in some role (as some user), then access is denied. But what if you want to create a policy which is mostly allow-by-default and then just add a few denials here and there? While RBAC is more suited to the opposite case, we can do something like this on a per-account basis.

Let's start with a fairly permissive policy file for root:

role admin sA
subject / rvka {
        /                       rwcdmlxi
}

role default
subject / {
        /                       h
        -CAP_ALL
        connect disabled
        bind    disabled
}

role root uG
role_transitions admin
role_allow_ip 0.0.0.0/0
subject /  {
        /                       r
        /boot                   h
#
        /bin                    rx
        /sbin                   rx
        /usr/bin                rx
        /usr/libexec            rx
        /usr/sbin               rx
        /usr/local/bin          rx
        /usr/local/sbin         rx
        /lib32                  rx
        /lib64                  rx
        /lib64/modules          h
        /usr/lib32              rx
        /usr/lib64              rx
        /usr/local/lib32        rx
        /usr/local/lib64        rx
#
        /dev                    hx
        /dev/log                r
        /dev/urandom            r
        /dev/null               rw
        /dev/tty                rw
        /dev/ptmx               rw
        /dev/pts                rw
        /dev/initctl            rw
#
        /etc/grsec              h
#
        /home                   rwcdl
        /root                   rcdl
#
        /proc/slabinfo          h
        /proc/modules           h
        /proc/kallsyms          h
#
        /run/lock               rwcdl
        /sys                    h
        /tmp                    rwcdl
        /var                    rwcdl
#
        +CAP_ALL
        -CAP_MKNOD
        -CAP_NET_ADMIN
        -CAP_NET_BIND_SERVICE
        -CAP_SETFCAP
        -CAP_SYS_ADMIN
        -CAP_SYS_BOOT
        -CAP_SYS_MODULE
        -CAP_SYS_RAWIO
        -CAP_SYS_TTY_CONFIG
        -CAP_SYSLOG
#
        bind 0.0.0.0/0:0-32767 stream dgram tcp udp igmp
        connect 0.0.0.0/0:0-65535 stream dgram tcp udp icmp igmp raw_sock raw_proto
        sock_allow_family all
}

The syntax is pretty intuitive. The only thing not illustrated here is that a role can, and usually does, have multiple subject blocks following it. Those subject blocks belong only to the role they are under, and not to another.

The notion of a role is critical to understanding RBAC. Roles are like UNIX users and groups but within the RBAC system. The first role above is the admin role. It is 'special' meaning that it doesn't correspond to any UNIX user or group, but is only defined within the RBAC system. A user will operate under some role but may transition to another role if the policy allows it. Transitioning to the admin role is reserved only for root above; but in general, any user can transition to any special role provided it is explicitly specified in the policy. No matter what role the user is in, he only has the UNIX privileges for his account. Those are not elevated by transitioning, but the restrictions applied to his account might change. Thus transitioning to a special role can allow a user to relax some restrictions for some special reason. This transitioning is done via gradm -a somerole and can be password protected using gradm -P somerole.

The second role above is the default role. When a user logs in, RBAC determines the role he will be in by first trying to match the user name to a role name. Failing that, it will try to match the group name to a role name and failing that it will assign the user the default role.

The third role above is the root role and it will be the main focus of our attention below.

The flags following the role name specify the role's behavior. The 's' and 'A' in the admin role line say, respectively, that it is a special role (ie, one not to be matched by a user or group name) and that it has extra powers that a normal role doesn't have (eg, it is not subject to ptrace restrictions). It's good to have the 'A' flag in there, but it's not essential for most uses of this role. It's really its subject block which makes it useful for administration. Of course, you can change the name if you want to practice a little bit of security by obfuscation. As long as you leave the rest alone, it'll still function the same way.

The root role has the 'u' and 'G' flags. The 'u' flag says that this role is to match a user by the same name, obviously root in this case. Alternatively, you can have the 'g' flag instead, which says to match a group by the same name. The 'G' flag gives this role permission to authenticate to the kernel, ie, to use gradm. Policy information is automatically added that allows gradm to access /dev/grsec, so you don't need to add those permissions yourself. Finally, the default role doesn't and shouldn't have any flags. If it's not a 'u' or 'g' or 's' role, then it's the default role.

Before we jump into the subject blocks, you'll notice a couple of lines after the root role. The first says 'role_transitions admin' and permits the root role to transition to the admin role. Any special roles you want this role to transition to can be listed on this line, space delimited. The second line says 'role_allow_ip 0.0.0.0/0'. So when root logs in remotely, it will be assigned the root role provided the login is from an IP address matching 0.0.0.0/0. In this example, this means any IP is allowed. But if you had something like 192.168.3.0/24, then only root logins from the 192.168.3.0 network would get user root assigned role root. Otherwise RBAC would fall back on the default role. If you don't have the line in there, get used to logging in on the console because you'll cut yourself off!

Now we can look at the subject blocks. These define the access controls restricting processes running in the role to which those subjects belong. The name following the 'subject' keyword is either a path to a directory containing executables or to an executable itself. When a process is started from an executable in that directory, or from the named executable itself, then the access controls defined in that subject block are enforced. Since all roles must have the '/' subject, all processes started in a given role will at least match this subject. You can think of this as the default if no other subject matches. However, additional subject blocks can be defined which further modify restrictions for particular processes. We'll see this towards the end of the article.

Let's start by looking at the '/' subject for the default role since this is the most restrictive set of access controls possible. The block following the subject line lists the objects that the subject can act on and what kind of access is allowed. Here we have '/ h' which says that every file in the file system starting from '/' downwards is hidden from the subject. This includes read/write/execute/create/delete/hard link access to regular files, directories, devices, sockets, pipes, etc. Since pretty much everything is forbidden, no process running in the default role can look at or touch the file system in any way. Don't forget that, since the only role that has a corresponding UNIX user or group is the root role, this means that every other account is simply locked out. However the file system isn't the only thing that needs protecting since it is possible to run, say, a malicious proxy which simply bounces evil network traffic without ever touching the filesystem. To control network access, there are the 'connect' and 'bind' lines that define what remote addresses/ports the subject can connect to as a client, or what local addresses/ports it can listen on as a server. Here 'disabled' means no connections or bindings are allowed. Finally, we can control what Linux capabilities the subject can assume, and -CAP_ALL means they are all forbidden.

Next, let's look at the '/' subject for the admin role. This, in contrast to the default role, is about as permissive as you can get. First thing we notice is the subject line has some additional flags 'rvka'. Here 'r' means that we relax ptrace restrictions for this subject, 'a' means we do not hide access to /dev/grsec, 'k' means we allow this subject to kill protected processes and 'v' means we allow this subject to view hidden processes. So 'k' and 'v' are interesting and have counterparts 'p' and 'h' respectively. If a subject is flagged as 'p' it means its processes are protected by RBAC and can only be killed by processes belonging to a subject flagged with 'k'. Similarly processes belonging to a subject marked 'h' can only be viewed by processes belonging to a subject marked 'v'. Nifty, eh? The only object line in this subject block is '/ rwcdmlxi'. This says that this subject can 'r'ead, 'w'rite, 'c'reate, 'd'elete, 'm'ark as setuid/setgid, hard 'l'ink to, e'x'ecute, and 'i'nherit the ACLs of the subject which contains the object. In other words, this subject can do pretty much anything to the file system.

Finally, let's look at the '/' subject for the root role. It is fairly permissive, but not quite as permissive as the previous subject. It is also more complicated, and many of the object lines are there because gradm does a sanity check on policy files to help make sure you don't open any security holes. Notice that here we have '+CAP_ALL' followed by a series of '-CAP_*'. Each of these was included because otherwise gradm would complain. For example, if 'CAP_SYS_ADMIN' is not removed, an attacker can mount filesystems to bypass your policies.

So I won't go through this entire subject block in detail, but let me highlight a few points. First consider these lines

        /                       r
        /boot                   h
        /etc/grsec              h
        /proc/slabinfo          h
        /proc/modules           h
        /proc/kallsyms          h
        /sys                    h

The first line gives 'r'ead access to the entire file system but this is too permissive and opens up security holes, so we negate that for particular files and directories by 'h'iding them. With these access controls, if the root user in the root role does ls /sys you get

# ls /sys
ls: cannot access /sys: No such file or directory

but if the root user transitions to the admin role using gradm -a admin, then you get

# ls /sys/
block  bus  class  dev  devices  firmware  fs  kernel  module

Next consider these lines:

        /bin                    rx
        /sbin                   rx
        ...
        /lib32                  rx
        /lib64                  rx
        /lib64/modules          h

Since the 'x' flag is inherited by all the files under those directories, this allows processes like your shell to execute, for example, /bin/ls or /lib64/ld-2.21.so. The 'r' flag further allows processes to read the contents of those files, so one could do hexdump /bin/ls or hexdump /lib64/ld-2.21.so. Dropping the 'r' flag on /bin would stop you from hexdumping the contents, but it would not prevent execution, nor would it stop you from listing the contents of /bin. If we wanted to make this subject a bit more secure, we could drop 'r' on /bin and not break our system. This, however, is not the case with the library directories. Dropping 'r' on them would break the system, since library files need to have readable contents to be loaded, as well as be executable.

Now consider these lines:

        /dev                    hx
        /dev/log                r
        /dev/urandom            r
        /dev/null               rw
        /dev/tty                rw
        /dev/ptmx               rw
        /dev/pts                rw
        /dev/initctl            rw

The 'h' flag will hide /dev and its contents, but the 'x' flag will still allow processes to enter into that directory and access /dev/log for reading, /dev/null for reading and writing, etc. The 'h' is required to hide the directory and its contents because, as we saw above, 'x' is sufficient to allow processes to list the contents of the directory. As written, the above policy yields the following result in the root role

# ls /dev
ls: cannot access /dev: No such file or directory
# ls /dev/tty0
ls: cannot access /dev/tty0: No such file or directory
# ls /dev/log
/dev/log

In the admin role, all those files are visible.

Let's end our study of this subject by looking at the 'bind', 'connect' and 'sock_allow_family' lines. Note that the addresses/ports include a list of allowed transport protocols from /etc/protocols. One gotcha here is make sure you include port 0 for icmp! The 'sock_allow_family' allows all socket families, including unix, inet, inet6 and netlink.

Now that we understand this policy, we can proceed to add isolated restrictions to our mostly permissive root role. Remember that the system is totally restricted for all UNIX users except root, so if you want to allow some ordinary user access, you can simply copy the entire role, including the subject blocks, and just rename 'role root' to 'role myusername' (see the sketch below). You'll probably want to remove the 'role_transitions' line since an ordinary user should not be able to transition to the admin role. Now, suppose for whatever reason you don't want this user to be able to list any files or directories. You can simply add a line to his '/' subject block which reads '/bin/ls h' and ls becomes completely unavailable for him! This particular example might not be that useful in practice, but you can use this technique, for example, if you want to restrict access to your compiler suite. Just 'h' all the directories and files that make up your suite and it becomes unavailable.
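
The copied role might then begin like this (abbreviated; apart from the added '/bin/ls h' line, the subject block is identical to the root role's):

role myusername u
subject /  {
        /                       r
        /bin/ls                 h
        ...
}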

A more complicated and useful example might be to restrict a user's listing of a directory to just his home. To do this, we'll have to add a new subject block for /bin/ls. If you're not sure where to start, you can always begin with an extremely restrictive subject block, tack it at the end of the subjects for the role you want to modify, and then progressively relax it until it works. Alternatively, you can do partial learning on this subject as described above. Let's proceed manually and add the following:

subject /bin/ls o {
        /                       h
        -CAP_ALL
        connect disabled
        bind    disabled
}

Note that this is identical to the extremely restrictive '/' subject for the default role except that the subject is '/bin/ls' not '/'. There is also a subject flag 'o' which tells RBAC to override the previous policy for /bin/ls. We have to override it because that policy was too permissive. Now, in one terminal execute gradm -R in the admin role, while in another terminal obtain a denial to ls /home/myusername. Checking our dmesgs we see that:

[33878.550658] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /lib64/ld-2.21.so by /bin/ls[bash:7861] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7164] uid/euid:0/0 gid/egid:0/0

Well, that makes sense. We've started afresh denying everything, but /bin/ls requires access to the dynamic linker/loader, so we'll restore read access to it by adding a line '/lib64/ld-2.21.so r'. Repeating our test, we get a seg fault! Obviously, we don't just need read access to ld.so, we also need execute privileges. We add 'x' and try again. This time the denial is

[34229.335873] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /etc/ld.so.cache by /bin/ls[ls:7917] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0
[34229.335923] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /lib64/libacl.so.1.1.0 by /bin/ls[ls:7917] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0

Of course! We need 'rx' for all the libraries that /bin/ls links against, as well as the linker cache file. So we add lines for libc, libattr, libacl and ld.so.cache. Our final denial is

[34481.933845] grsec: From 192.168.5.2: (root:U:/bin/ls) denied access to hidden file /home/myusername by /bin/ls[ls:7982] uid/euid:0/0 gid/egid:0/0, parent /bin/bash[bash:7909] uid/euid:0/0 gid/egid:0/0

All we need now is '/home/myusername r' and we're done! Our final subject block looks like this:

subject /bin/ls o {
        /                         h
        /home/myusername          r
        /etc/ld.so.cache          r
        /lib64/ld-2.21.so         rx
        /lib64/libc-2.21.so       rx
        /lib64/libacl.so.1.1.0    rx
        /lib64/libattr.so.1.1.0   rx
        -CAP_ALL
        connect disabled
        bind    disabled
}

Proceeding in this fashion, we can add isolated restrictions to our mostly permissive policy.

References:

The official documentation is The_RBAC_System. A good reference for the role, subject and object flags can be found in these Tables.

28 Mar 2016 12:10am GMT

23 Mar 2016


Matthew Thode: Of OpenStack and uwsgi

Why use uwsgi

Not all OpenStack services support uwsgi. However, in the Liberty timeframe it is supported as the primary way to run the Keystone API services and is the recommended way of running Horizon (if you use it). Going forward, other OpenStack services will be moving to support it as well; for instance, I know that Neutron is working on it or has it completed for the Mitaka release.

Basic Setup

Configs and permissions

When defaults are available I will only note what needs to change.

uwsgi configs

/etc/conf.d/uwsgi

UWSGI_EMPEROR_PATH="/etc/uwsgi.d/"
UWSGI_EMPEROR_GROUP=nginx
UWSGI_EXTRA_OPTIONS='--need-plugins python27'

/etc/uwsgi.d/keystone-admin.ini

[uwsgi]
master = true
plugins = python27
processes = 10
threads = 2
chmod-socket = 660

socket = /run/uwsgi/keystone_admin.socket
pidfile = /run/uwsgi/keystone_admin.pid
logger = file:/var/log/keystone/uwsgi-admin.log

name = keystone
uid = keystone
gid = nginx

chdir = /var/www/keystone/
wsgi-file = /var/www/keystone/admin

/etc/uwsgi.d/keystone-main.ini

[uwsgi]
master = true
plugins = python27
processes = 4
threads = 2
chmod-socket = 660

socket = /run/uwsgi/keystone_main.socket
pidfile = /run/uwsgi/keystone_main.pid
logger = file:/var/log/keystone/uwsgi-main.log

name = keystone
uid = keystone
gid = nginx

chdir = /var/www/keystone/
wsgi-file = /var/www/keystone/main

I have Horizon running in a virtual environment, so I enabled vacuum in this config.

/etc/uwsgi.d/horizon.ini

[uwsgi]
master = true  
plugins = python27
processes = 10  
threads = 2  
chmod-socket = 660
vacuum = true

socket = /run/uwsgi/horizon.sock  
pidfile = /run/uwsgi/horizon.pid  
logger = file:/var/log/horizon/horizon.log

name = horizon
uid = horizon
gid = nginx

chdir = /var/www/horizon/
wsgi-file = /var/www/horizon/horizon.wsgi

wsgi scripts

The directories are owned by the service they contain: keystone:keystone or horizon:horizon.

/var/www/keystone/admin perms are 0750 keystone:keystone

# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(name)

/var/www/keystone/main perms are 0750 keystone:keystone

# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import os

from keystone.server import wsgi as wsgi_server


name = os.path.basename(__file__)

# NOTE(ldbragst): 'application' is required in this context by WSGI spec.
# The following is a reference to Python Paste Deploy documentation
# http://pythonpaste.org/deploy/
application = wsgi_server.initialize_application(name)

Note that this has paths to where I have my horizon virtual environment.

/var/www/horizon/horizon.wsgi perms are 0750 horizon:horizon

#!/usr/bin/env python
import os
import sys


activate_this = '/home/horizon/horizon/.venv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

sys.path.insert(0, '/home/horizon/horizon')
os.environ['DJANGO_SETTINGS_MODULE'] = 'openstack_dashboard.settings'

import django.core.wsgi
application = django.core.wsgi.get_wsgi_application()
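
The nginx side is not covered in this post, but for reference, a server block talking to one of the sockets above might look roughly like this (a sketch; 5000 is Keystone's usual main API port):

server {
    listen 5000;
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/keystone_main.socket;
    }
}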

23 Mar 2016 5:00am GMT

22 Mar 2016


Jan Kundrát: Implementing OpenPGP and S/MIME Cryptography in Trojita

Are you interested in cryptography, either as a user or as a developer? Read on -- this blogpost talks about some of the UI choices we made, as well as about the technical challenges of working with the existing crypto libraries.

The next version of Trojitá, a fast e-mail client, will support working with encrypted and signed messages. Thanks to Stephan Platz for implementing this during the Google Summer of Code project. If you are impatient, just install the trojita-nightly package and check it out today.

Here's what a signed message looks like in a typical scenario:

[Screenshot: a random OpenPGP-signed e-mail]

Some other e-mail clients show a yellow semi-warning icon when showing a message with an unknown or unrecognized key. In my opinion, that isn't a great design choice. If I as an attacker wanted to get rid of the warning, I could simply send a faked but unsigned e-mail message instead. This message is signed by something, so we should probably not make this situation appear less secure than if the e-mail was not signed at all.

(Careful readers might start thinking about maintaining a persistent key association database based on the observed traffic patterns. We are aware of the upstream initiative within the GnuPG project, especially TOFU, the Trust On First Use trust model. It is pretty fresh code not available in major distributions yet, but it's definitely something to watch and evaluate in the future.)

Key management, assigning trust etc. is something which is outside the scope of an e-mail client like Trojitá. We might add some buttons for key retrieval and launching a key management application of your choice, such as Kleopatra, but we are definitely not in the business of "real" key management, cross-signatures, defining trust, etc. What we do instead is work with your system's configuration and show the results based on whether GnuPG thinks that you trust this signature. That's when we are happy to show a nice green padlock to you:

[Screenshot: mail with a trusted signature]

We also make a bunch of sanity checks when it comes to signatures. For example, it is important to verify that the sender of the e-mail which you are reading has an e-mail address which matches the identity of the key holder -- in other words, is the guy who sent the e-mail and the one who made the signature the same person?

If not, it would be possible for your co-worker (who you already trust) to write an e-mail message to you with a faked From header pretending to be your boss. The body of a message is signed by your colleague with his valid key, so if you forget to check the e-mail addresses, you are screwed -- and that's why Trojitá handles this for you:

[Screenshot: something fishy is going on!]

In some environments, S/MIME signatures using traditional X.509 certificates are more common than the OpenPGP (aka PGP, aka GPG). Trojitá supports them all just as easily. Here is what happens when we are curious and decide to drill down to details about the certificate chain:

[Screenshot: all the gory details about an X.509 trust chain]

Encrypted messages are of course supported, too:

[Screenshot: an encrypted message]

We had to start somewhere, so right now Trojitá supports only read-only operations such as signature verification and decryption of messages. It is not yet possible to sign and encrypt new messages; that's something which will be implemented in the near future (and patches are welcome for sure).

Technical details

Originally, we were planning to use the QCA2 library because it provides a stand-alone Qt wrapper over a pluggable set of cryptography backends. The API interface was very convenient for a Qt application such as Trojitá, with native support for Qt's signals/slots and asynchronous operation implemented in a background thread. However, it turned out that its support for GnuPG, a free-software implementation of the OpenPGP protocol, leaves much to be desired. It does not really support the concept of PGP's Web of Trust, and therefore it doesn't report back how trustworthy the sender is. This means that there wouldn't be any green padlock with QCA. The library was also really slow during certain operations -- including retrieval of a single key from a keystore. It just isn't acceptable to wait 16 seconds when verifying a signature, so we had to go looking for something else.

Compared to the QCA, the GpgME++ library lives on a lower level. Its Qt integration is limited to working with QByteArray classes as buffers for gpgme's operation. There is some support for integrating with Qt's event loop, but we were warned not to use it because it's apparently deprecated code which will be removed soon.

The gpgme library supports some level of asynchronous operation, but it is a bit limited. Ultimately, someone has to do the work and consume the CPU cycles for all the crypto operations and/or at least communication to the GPG Agent in the background. These operations can take a substantial amount of time, so we cannot do that in the GUI thread (unless we wanted to reuse that discouraged event loop integration). We could use the asynchronous operations along with a call to gpgme_wait in a single background thread, but that would require maintaining our own dedicated crypto thread and coming up with a way to dispatch the results of each operation to the original requester. That is certainly doable, but in the end, it was a bit more straightforward to look into the C++11's toolset, and reuse the std::async infrastructure for launching background tasks along with a std::future for synchronization. You can take a look at the resulting code in the src/Cryptography/GpgMe++.cpp. Who can dislike lines like task.wait_for(std::chrono::duration_values::zero()) == std::future_status::timeout? :)
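
As a generic illustration of that pattern (not Trojitá's actual code; VerifyResult and verifyMessage() are hypothetical stand-ins for the GpgME++ calls):

#include <chrono>
#include <future>

struct VerifyResult { bool validSignature; };

// Stand-in for the blocking gpgme work, run off the GUI thread.
static VerifyResult verifyMessage() { return VerifyResult{true}; }

int main()
{
    // Launch the crypto operation on a background thread.
    std::future<VerifyResult> task =
        std::async(std::launch::async, verifyMessage);

    // From the GUI thread, poll without blocking the event loop.
    while (task.wait_for(std::chrono::seconds(0)) == std::future_status::timeout) {
        // keep processing GUI events here
    }

    VerifyResult result = task.get();
    return result.validSignature ? 0 : 1;
}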

Finally, let me provide credit where credit is due. Stephan Platz worked on this feature during his GSoC term, and he implemented the core infrastructure around which the whole feature is built. That was the crucial point and his initial design has survived into the current implementation despite the fact that the crypto backend has changed and a lot of code was refactored.

Another big thank you goes to the GnuPG and GpgME developers who provide a nice library which works not just with OpenPGP, but also with the traditional X.509 (S/MIME) certificates. The same has to be said about the developers behind the GpgME++ library which is a C++ wrapper around GpgME with roots in the KDEPIM software stack, and also something which will one day probably move to GpgME proper. The KDE ties are still visible, and Andre Heinecke was kind enough to review our implementation for obvious screwups in how we use it. Thanks!

22 Mar 2016 6:02pm GMT

06 Mar 2016


Jason Donenfeld: Hasp HL Library

Hasp HL Library

git clone https://git.zx2c4.com/hasplib

The Hasp HL is a copy protection dongle that ships horrible closed-source drivers.

This is a very simple OSS library based on libusb for accessing the MemoHASP functions of the Hasp HL USB dongle. It can currently view the ID of a dongle, validate the password, read from memory locations, and write to memory locations.

This library allows use of the dongle without any drivers!

API

Include hasplib.h, and compile your application alongside hasplib.c and optionally hasplib-simple.c.

Main Functions

Get a list of all connected dongles:

size_t hasp_find_dongles(hasp_dongle ***dongles);

Login to that dongle using the password, and optionally view the memory size:

bool hasp_login(hasp_dongle *dongle, uint16_t password1, uint16_t password2, uint16_t *memory_size);

Instead of the first two steps, you can also retrieve the first connected dongle that fits your password:

hasp_dongle *hasp_find_login_first_dongle(uint16_t password1, uint16_t password2);

Read the ID of a dongle:

bool hasp_id(hasp_dongle *dongle, uint32_t *id);

Read from a memory location:

bool hasp_read(hasp_dongle *dongle, uint16_t location, uint16_t *value);

Write to a memory location:

bool hasp_write(hasp_dongle *dongle, uint16_t location, uint16_t value);

Free the list of dongles opened earlier:

void hasp_free_dongles(hasp_dongle **dongles);

Free a single dongle:

void hasp_free_dongle(hasp_dongle *dongle);
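
For illustration, a minimal client using these functions might look as follows (the password pair 0x1234/0x5678 is a placeholder, not a real password):

#include <stdint.h>
#include <stdio.h>
#include "hasplib.h"

int main(void)
{
        hasp_dongle *dongle = hasp_find_login_first_dongle(0x1234, 0x5678);
        if (!dongle) {
                fprintf(stderr, "no dongle found for this password pair\n");
                return 1;
        }

        uint32_t id;
        if (hasp_id(dongle, &id))
                printf("dongle id: %u\n", (unsigned)id);

        uint16_t value;
        if (hasp_read(dongle, 0, &value))
                printf("memory[0] = %u\n", (unsigned)value);

        hasp_free_dongle(dongle);
        return 0;
}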

Simple Functions

The simple API wraps the main API and provides access to a default dongle, which is the first connected dongle that responds to the given passwords. It handles dongle disconnects and reconnections.

Create a hasp_simple * object for a given password pair:

hasp_simple *hasp_simple_login(uint16_t password1, uint16_t password2);

Free this object:

void hasp_simple_free(hasp_simple *simple);

Read an ID, returning 0 if an error occurred:

uint32_t hasp_simple_id(hasp_simple *simple);

Read a memory location, returning 0 if an error occurred:

uint16_t hasp_simple_read(hasp_simple *simple, uint16_t location);

Write to a memory location, returning its success:

bool hasp_simple_write(hasp_simple *simple, uint16_t location, uint16_t value);

Licensing

This is released under the GPLv3. See COPYING for more information. If you need a less restrictive license, please contact me.

06 Mar 2016 1:10pm GMT

02 Mar 2016


Alexys Jacob: py3status v2.9

py3status v2.9 is out with a good bunch of new modules, exciting improvements and fixes!

Thanks

This release is made of their work, thank you contributors!

New modules

Fixes and enhancements

What's next?

Some major core enhancements and code cleanups are coming up thanks to @cornerman, @Horgix and @pydsigner. The next release will be faster than ever and consume even less CPU!

Meanwhile, this 2.9 release is available on pypi and in Gentoo portage, have fun!

02 Mar 2016 8:23am GMT

29 Feb 2016


Gentoo News: Gentoo accepted to GSoC 2016

Students are encouraged to start working now on their project proposals. You can peruse the list of ideas or come up with your own. In any case, it is highly recommended you talk to a mentor sooner rather than later. The official application period for student proposals starts on March 14th.

Do not hesitate to join us in the #gentoo-soc channel on freenode. We will be happy to answer your questions there.
More information on Gentoo's GSoC effort is also available on our Wiki.

29 Feb 2016 12:00am GMT

28 Feb 2016


Richard Freeman: Gentoo Ought to be About Choice

"Gentoo is about choice." We've said it so often that it seems like we just don't bother to say it any more. However, with some of the recent conflicts on the lists (which I've contributed to) and indeed across the FOSS community at large, I think this is a message that is worth repeating…

Ok, bear with me because I'm going to talk about systemd. This post isn't really about systemd, but it would probably not be nearly as important in its absence. So, we need to talk about why I'm bringing this up.

How we got here

Systemd has brought a wave of change in the Linux community, and most of the popular distros have decided to adopt it. This has created a bit of a vacuum for those who strongly prefer to avoid it, and many of these have adopted Gentoo (the only other large-ish option is Slackware), and indeed some have begun to contribute back. The resulting shift in demographics has caused tensions in the community, and I believe this has created a tendency for us to focus too much on what makes us different.

Where we are now

Every distro has a niche of some kind - a mission that gives it a purpose for existence. It is the thing that its community coalesces around. When a distro loses this sense of purpose, it will die or fork, whether by the forces of lost contributors or lost profits. This purpose can certainly evolve over time, but ultimately it is this purpose which holds everything together.

For many years in Gentoo our purpose has been about providing choices, and enabling the user. Sometimes we enable them to shoot their own feet, and we often enable them to break things in ways that our developers would prefer not to troubleshoot. We tend to view the act of suppressing choices as contrary to our values, even if we don't always have the manpower to support every choice that can possibly exist.

The result of this philosophy is what we all see around us. Gentoo is a distro that can be used to build the most popular desktop linux-based operating system (ChromeOS), and which reportedly is also used as the basis of servers that run NASDAQ[1]. It shouldn't be surprising that Gentoo works with no fewer than 7 device-manager implementations and 4 service managers.

Still, many in the Linux community struggle to understand us. They mistake our commitment to providing a choice for some kind of endorsement of that choice. Gentoo isn't about picking winners. We're not an anti-systemd distro, even if many who dislike systemd may be found among us and it is straightforward to install Gentoo without "systemd" appearing anywhere in the filesystem. We're not a pro-systemd distro, even if (IMHO) we offer one of the best and most undiluted systemd experiences around. We're a distro where developers and users with a diverse set of interests come together to contribute using a set of tools that makes it practical for each of us to reach in and pull out the system that we want to have.

Where we need to be

Ultimately, I think a healthy Gentoo is one which allows us all to express our preferences and exchange our knowledge, but where in the end we all get behind a shared goal of empowering our users to make the decisions. There will always be conflict when we need to pick a default, but we must view defaults as conveniences and not endorsements. Our defaults must be reasonably well-supported, but not litmus tests against which packages and maintainers are judged. And, in the end, we all benefit when we are exposed to those who disagree and are able to glean from them the insights that we might have otherwise missed on our own.

When we stop making Gentoo about a choice, and start making it about having a choice, we find our way.

1 - http://www.computerworld.com/article/2510334/financial-it/how-linux-mastered-wall-street.html



28 Feb 2016 2:07am GMT

26 Feb 2016


Bernard Cafarelli: Setting USE_EXPAND flags in package.use

This has apparently been supported in Portage for some time, but I only learned it recently from a gentoo-dev mail: you do not have to write down the expanded USE-flags in package.use anymore (or set them in make.conf)!

For example, if I wanted to set some APACHE2_MODULES and a custom APACHE2_MPM, the standard package.use entry would be something like:

www-servers/apache apache2_modules_proxy apache2_modules_proxy_http apache2_mpms_event ssl

Not as pretty/convenient as a 'APACHE2_MODULES="proxy proxy_http"' line in make.conf. Here is the best-of-both-worlds syntax (also supported in Paludis apparently):

www-servers/apache ssl APACHE2_MODULES: proxy proxy_http APACHE2_MPMS: event

Or if you use python 2.7 as your main python interpreter, set 3.4 for libreoffice-5.1 😉

app-office/libreoffice PYTHON_SINGLE_TARGET: python3_4

Have fun cleaning your package.use file

26 Feb 2016 5:32pm GMT

23 Feb 2016


Jason Donenfeld: ctmg: a Linux-native bash script Truecrypt replacement

ctmg - extremely simple encrypted container system

ctmg is an encrypted container manager for Linux using cryptsetup and various standard file system utilities. Containers have the extension .ct and are mounted at a directory of the same name, but without the extension. Very simple to understand, and very simple to implement; ctmg is a simple bash script.

Usage

Usage: ctmg [ new | delete | open | close | list ] [arguments...]
  ctmg new    container_path container_size[units_suffix]
  ctmg delete container_path
  ctmg open   container_path
  ctmg close  container_path
  ctmg list

Calling ctmg with no arguments will call list if there are any containers open, and otherwise show the usage screen. Calling ctmg with a filename argument will call open if it is not already open and otherwise will call close.

Examples

Create a 100MiB encrypted container called "example"

zx2c4@thinkpad ~ $ ctmg create example 100MiB
[#] truncate -s 100MiB /home/zx2c4/example.ct
[#] cryptsetup --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --batch-mode luksFormat /home/zx2c4/example.ct
Enter passphrase:
[#] chown 1000:1000 /home/zx2c4/example.ct
[#] cryptsetup luksOpen /home/zx2c4/example.ct ct_example
Enter passphrase for /home/zx2c4/example.ct:
[#] mkfs.ext4 -q -E root_owner=1000:1000 /dev/mapper/ct_example
[+] Created new encrypted container at /home/zx2c4/example.ct
[#] cryptsetup luksClose ct_example

Open a container, add a file, and then close it

zx2c4@thinkpad ~ $ ctmg open example
[#] cryptsetup luksOpen /home/zx2c4/example.ct ct_example
Enter passphrase for /home/zx2c4/example.ct: 
[#] mkdir -p /home/zx2c4/example
[#] mount /dev/mapper/ct_example /home/zx2c4/example
[+] Opened /home/zx2c4/example.ct at /home/zx2c4/example
zx2c4@thinkpad ~ $ echo "super secret" > example/mysecretfile.txt
zx2c4@thinkpad ~ $ ctmg close example
[#] umount /home/zx2c4/example
[#] cryptsetup luksClose ct_example
[#] rmdir /home/zx2c4/example
[+] Closed /home/zx2c4/example.ct
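
And since a bare filename argument toggles the container state, the same session could have been driven with just:

zx2c4@thinkpad ~ $ ctmg example    # closed, so this opens it
zx2c4@thinkpad ~ $ ctmg example    # open, so this closes it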

Installation

$ git clone https://git.zx2c4.com/ctmg
$ cd ctmg
$ sudo make install

Or, use the package from your distribution:

Gentoo

# emerge ctmg

23 Feb 2016 4:07pm GMT

Jason Donenfeld: Git Daemon Dummy: 301 Redirects for git://

Git Daemon Dummy: 301 Redirects for git://

With the wide deployment of HTTPS, the plaintext nature of git:// is becoming less and less desirable. In order to inform users of the git://-based URIs to switch to https://-based URIs, while still being able to shut down aging git-daemon infrastructure, this git-daemon-dummy is an extremely lightweight daemon that simply provides an informative error message to connecting git:// users, providing the new URI.

It drops all privileges, chroots, sets rlimits, and uses seccomp-bpf to limit the set of available syscalls. To remain high-performance, it makes use of epoll.

Example

zx2c4@thinkpad ~ $ git clone git://git.zx2c4.com/cgit
Cloning into 'cgit'...
fatal: remote error: 
******************************************************

  This git repository has moved! Please clone with:

      $ git clone https://git.zx2c4.com/cgit

******************************************************

Installation

$ git clone https://git.zx2c4.com/git-daemon-dummy
$ cd git-daemon-dummy
$ make
$ ./git-daemon-dummy

Usage

Usage: ./git-daemon-dummy [OPTION]...
  -d, --daemonize              run as a background daemon
  -f, --foreground             run in the foreground (default)
  -P FILE, --pid-file=FILE     write pid of listener process to FILE
  -p PORT, --port=PORT         listen on port PORT (default=9418)
  -h, --help                   display this message

23 Feb 2016 2:35am GMT