19 Apr 2014


Paul Cobbaut: Vagrant: Creating 10 vm's with 6 disks each

Hello lazyweb,

The Vagrantfile below works fine, but it can probably be written more simply. I've been struggling to create variables like "servers=10" and "disks=6" to automate the creation of 10 servers with 6 disks each.

Drop me a hint if you feel like creating those two loops (one possible loop-based version is sketched below the Vagrantfile).


paul@retinad:~/vagrant$ cat Vagrantfile
hosts = [ { name: 'server1', disk1: './server1disk1.vdi', disk2: 'server1disk2.vdi' },
          { name: 'server2', disk1: './server2disk1.vdi', disk2: 'server2disk2.vdi' },
          { name: 'server3', disk1: './server3disk1.vdi', disk2: 'server3disk2.vdi' }]

Vagrant.configure("2") do |config|

  config.vm.provider :virtualbox do |vb|
    vb.customize ["storagectl", :id, "--add", "sata", "--name", "SATA", "--portcount", 2, "--hostiocache", "on"]
  end

  hosts.each do |host|

    config.vm.define host[:name] do |node|
      node.vm.hostname = host[:name]
      node.vm.box = "chef/centos-6.5"
      node.vm.network :public_network
      node.vm.synced_folder "/srv/data", "/data"
      node.vm.provider :virtualbox do |vb|
        vb.name = host[:name]
        vb.customize ['createhd', '--filename', host[:disk1], '--size', 2 * 1024]
        vb.customize ['createhd', '--filename', host[:disk2], '--size', 2 * 1024]
        vb.customize ['storageattach', :id, '--storagectl', "SATA", '--port', 1, '--device', 0, '--type', 'hdd', '--medium', host[:disk1] ]
        vb.customize ['storageattach', :id, '--storagectl', "SATA", '--port', 2, '--device', 0, '--type', 'hdd', '--medium', host[:disk2] ]
      end
    end

  end

end
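
Here is one possible loop-based version, as an untested sketch: the box, disk size and SATA controller are kept from the Vagrantfile above, and the server and disk counts are pulled into two variables. One extra SATA port is reserved because ports are 0-indexed and the disks are attached starting from port 1.

servers          = 10
disks_per_server = 6

Vagrant.configure("2") do |config|

  config.vm.provider :virtualbox do |vb|
    # reserve one extra port so attaching disks from port 1 upwards stays valid
    vb.customize ["storagectl", :id, "--add", "sata", "--name", "SATA",
                  "--portcount", disks_per_server + 1, "--hostiocache", "on"]
  end

  (1..servers).each do |i|
    name = "server#{i}"
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.box = "chef/centos-6.5"
      node.vm.network :public_network
      node.vm.synced_folder "/srv/data", "/data"
      node.vm.provider :virtualbox do |vb|
        vb.name = name
        (1..disks_per_server).each do |d|
          disk = "./#{name}disk#{d}.vdi"
          # create each 2 GB disk and attach it to its own SATA port
          vb.customize ['createhd', '--filename', disk, '--size', 2 * 1024]
          vb.customize ['storageattach', :id, '--storagectl', 'SATA',
                        '--port', d, '--device', 0, '--type', 'hdd', '--medium', disk]
        end
      end
    end
  end

end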

19 Apr 2014 10:02am GMT

18 Apr 2014


Frederic Hornain: 2014 Red Hat Summit: Open Playground

;)

/f


18 Apr 2014 10:16am GMT

Mark Van den Borre: Reglementitis

Anyone who lets a tourist stay overnight risks a fine





What we micromanage ourselves, we micromanage better!

18 Apr 2014 7:49am GMT

17 Apr 2014


Wim Coekaerts: Oracle E-Business Suite R12 Pre-Install RPM available for Oracle Linux 5 and 6

One of the things we have been focusing on with Oracle Linux for quite some time now is making it easy to install and deploy Oracle products on top of it, without having to worry about which RPMs to install and what the basic OS configuration needs to be.

A minimal Oracle Linux install contains a really small set of RPMs, typically not enough for a product to install on, while a full/complete install contains far more packages than you need. While a full install is convenient, it also means the likelihood of having to install an erratum for a package is higher, and as such the cost of patching and updating/maintaining systems increases.

In an effort to make it as easy as possible, we have created a number of pre-install RPM packages which don't really contain actual programs; they're more or less dummy packages plus a few configuration scripts. They are built around the concept that you have a minimal OL installation (configured to point to a yum repository), and all the RPMs/packages which the specific Oracle product requires to install cleanly and pass the prerequisites are dependencies of the pre-install RPM.

When you install the pre-install RPM, yum will calculate the dependencies, figure out which additional RPMs are needed beyond what's installed, download them and install them. The configuration scripts in the RPM will also set up a number of sysctl options, create the default user, etc. After installation of this pre-install RPM, you can confidently start the Oracle product installer.

We have released pre-install RPMs in the past for the Oracle Database (11g, 12c, ...) and the Oracle Enterprise Manager 12c agent, and we have now also released a similar RPM for E-Business Suite R12.

This RPM is available on both ULN and public-yum in the addons channel.
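
As an illustration, installing such a pre-install RPM is a single command; the package name below is from memory, so check the addons channel listing for the exact name:

# yum resolves and installs all required dependency RPMs and runs the configuration scripts
yum install oracle-ebs-server-R12-preinstall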

17 Apr 2014 11:44pm GMT

Frank Goossens: Some HTML DOM parsing gotchas in PHP’s DOMDocument

Although I had used Simple HTML DOM parser for WP DoNotTrack, I've been looking into native PHP HTML DOM parsing as a possible replacement for regular expressions in Autoptimize, as proposed by Arturo. I won't go into the performance comparison results just yet, but here are some of the things I learned while experimenting with DOMDocument, which in turn might help innocent passers-by of this blog post.

// loadHTML from string, suppressing errors
$dom = new DOMDocument();
@$dom->loadHTML($html);

// get all script-nodes
$_scripts=$dom->getElementsByTagName("script");

// copy the result from the live DOMNodeList into an array
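// (removing nodes while iterating the live NodeList directly would skip elements)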
$scripts = array();
foreach ($_scripts as $script) {
   $scripts[]=$script;
}

// iterate over array and remove script-tags from DOM
foreach ($scripts as $script) {
   $script->parentNode->removeChild($script);
}

// write DOM back to the HTML-string
$html = $dom->saveHTML();

Now chop chop, back to my code to finish that performance comparison. Who knows what else we'll learn ;-)


17 Apr 2014 5:05pm GMT

16 Apr 2014


Wouter Verhelst: Call for help for DVswitch maintenance

I took over "maintaining" DVswitch from Ben Hutchings a few years ago, after Ben realized he no longer had the time to work on it properly.

After a number of years, I have to admit that I haven't done a very good job. Not because I didn't want to work on it, but mainly because I don't have enough time to keep fixing DVswitch against the numerous moving targets that it uses; the APIs of libav and liblivemedia change quickly enough that just making sure everything remains compilable and in working order is quite a job.

DVswitch is used by many people; DebConf, FOSDEM, and the CCC are just a few examples, but I know of at least three more.

Most of these (apart from DebConf and FOSDEM) maintain local patches which I've been wanting to merge into the upstream version of dvswitch. However, my time is limited, and over the past few years I've not been able to get dvswitch into a state where I confidently felt I could upload it into Debian unstable for a release. One step we took in order to get that closer was to remove the liblivemedia dependency (which implied removing the support for RTSP sources). Unfortunately, the resulting situation wasn't good enough yet, since libav had changed API enough that current versions of DVswitch compiled against current versions of libav will segfault if you try to do anything useful.

I must admit to myself that I don't have the time and/or skill set to maintain DVswitch at an acceptable level all by myself. So, this is a call for help:

If you're using DVswitch for your conference and want to continue doing so, please talk to us. The first things we'll need to do:

See you there?

16 Apr 2014 4:24pm GMT

15 Apr 2014


Luc Stroobant: Telenet IPv6 pfSense configuration

After years of messing around with commercial wifi routers that break every 2-3 years, I finally invested in a Soekris board for a much more powerful pfSense box in the basement. An additional interesting point of pfSense is that IPv6 is well supported.

What you need to enable to make this work with Telenet is not obvious at first sight, so here is a quick overview for those who want to find it in a single search. :)

On freshly installed pfSense setups the option "Allow IPv6" is enabled by default. If your setup has existed for a while, you still need to enable it under System: Advanced: Networking.

On the WAN interface, under "DHCP6 client configuration", set "DHCPv6 Prefix Delegation size" to /56, the size of the prefix you get from Telenet. The other options can stay disabled.
On the LAN interface, set "IPv6 Configuration Type" to "Track interface" and, a bit further down under "Track ipv6 interface", select the WAN interface. That's all... You should now get IPv6 addresses on your pfSense interfaces and on the clients behind the pfSense box.

By default all inbound traffic is blocked. If you want to let ping through, adjust the default rule for inbound IPv4 ICMP on the WAN interface so that it matches IPv4+IPv6.

NB: this has been tested and works with a plain modem, not a Telenet-managed home-gateway wifi-router thing. Experiences on whether it also works with that are always welcome in the comments.

15 Apr 2014 6:28pm GMT

Lionel Dricot: Lily & Lily in Ottignies


Amid the glitter and sequins of 1930s Hollywood, fading star Lily Da Costa fills glasses and tabloid columns more often than movie theatres and film sets. Sam, the good-natured impresario overwhelmed by all her whims, no longer knows which way to turn. Between a gigolo husband, an escaped convict and dishonest servants, in bursts Déborah, Lily's twin sister, unannounced and full of good intentions. But isn't the road to hell paved with good intentions?

Want to know what happens next? Then I invite you to come and see one of the performances of Lily & Lily by the Comédiens du Petit-Ry at the Saint-Pie X primary school in Ottignies-Louvain-la-Neuve:

Tickets cost €10 and reservations can be made at reservationscomry@gmail.com.

Besides the laughter, the slamming doors and the lovers under beds and in closets, Lily & Lily is also an occasion to celebrate the troupe's 30 years of existence and Laure Destercke's 25 years of participation; she will of course play Lily.


The troupe, in full rehearsal

On a more personal note, Lily & Lily is my first participation in the troupe. While reading the script, I was also surprised to discover that the play was staged in 1985 with Jacqueline Maillan and… Francis Lemaire, my uncle, who passed away a year ago already. It is therefore with a touch of emotion and a certain pride that I will take the stage thinking of him.

All of that adds up to plenty of opportunities to laugh and celebrate. So grab your diary, pick a date, share the events, invite your friends and, like Lily Da Costa, come knock back a drink with us during the intermission! With the Comédiens du Petit-Ry, the atmosphere is as much in the audience as on the stage!

Looking forward to seeing you in the audience one of these evenings…

Thank you for taking the time to read this text. This blog is paid-for, but you are free to choose the price. You can support the writing of these posts via Flattr, Patreon, IBAN transfers, Paypal or bitcoins. But the nicest way to thank me is simply to share this text around you, or to help me find new challenges in 2014.


15 Apr 2014 12:04pm GMT

14 Apr 2014


Xavier Mertens: xip.py: Executing Commands per IP Address

During a penetration test, I had to execute specific commands against some IP networks. Those networks were given in CIDR form (network/subnet). Being a lazy guy, I spent some time writing a small Python script to solve this problem. The idea is based on the "xargs" UNIX command, which is used to build complex command lines. From the xargs man page:

"xargs reads items from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the command (default is /bin/echo) one or more times with any initial-arguments followed by items read from standard input. Blank lines on the standard input are ignored."

I called the tool logically "xip.py" as it allows you to execute a provided command for each IP address from a subnet or a range. The syntax is simple:

$ ./xip.py -h
Usage: xip.py [options]

Options:
 --version             show program's version number and exit
 -h, --help            show this help message and exit
 -i IPADDRESSES, --ip-addresses=IPADDRESSES
                       IP Addresses subnets to expand
 -c COMMAND, --command=COMMAND
                       Command to execute for each IP ("{}" will be replaced by the IP)
 -o OUTPUT, --output=OUTPUT
                       Send commands output to a file
 -s, --split           Split outfile files per IP address
 -d, --debug           Debug output

The IP addresses can be passed in two formats: x.x.x.x/x or x.x.x.x-x. Multiple subnets can be delimited by commas, and subnets starting with a "-" will be excluded. Examples:

$ ./xip.py -i 10.0.0.0/29,10.10.0.0/29,-10.0.0.1-4 -c "echo {}"

This command will return:

10.0.0.0
10.0.0.5
10.0.0.6
10.0.0.7
10.10.0.0
10.10.0.1
10.10.0.2
10.10.0.3
10.10.0.4
10.10.0.5
10.10.0.6
10.10.0.7

Like with the "find" UNIX command, "{}" is replaced by the IP address (multiple "{}" pairs can be used). With the "-o <file>" option, the command output (stdout & stderr) is stored in the file. You can split the output across multiple files using the "-s" switch; in this case, each output file name ends with the corresponding IP address.

This is a quick and dirty tool which helped me a lot. I already have some ideas to improve it, if I have time… The script is available on my github repository.
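
For the curious, the core idea (expanding subnets and substituting "{}") fits in a few lines of Python. The sketch below is purely illustrative and is not the actual xip.py: it ignores the x.x.x.x-x range syntax, the exclusions and the output options.

#!/usr/bin/env python3
# Illustrative sketch of the idea behind xip.py (not the original script):
# expand CIDR subnets and run the command once per IP, "{}" replaced by the address.
import subprocess
from ipaddress import ip_network

def run_per_ip(subnets, command):
    for spec in subnets.split(","):
        for ip in ip_network(spec, strict=False):
            # substitute every "{}" with the current address, like find/xargs -I
            subprocess.call(command.replace("{}", str(ip)), shell=True)

if __name__ == "__main__":
    run_per_ip("10.0.0.0/29,10.10.0.0/29", "echo {}")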

14 Apr 2014 6:50pm GMT

Tom Laermans: Setting up a Postfix-based relay server with user authentication via Active Directory

In this post I will explain how to set up Postfix authentication against an AD server. This is similar to regular LDAP authentication. I am running a Samba 4.0 domain, but it should work just as well against a "real" Microsoft AD domain.

Packages required:

/etc/default/saslauthd (excerpts):

MECHANISMS="ldap"
OPTIONS="-c -m /var/spool/postfix/var/run/saslauthd"
NAME=Mailserver

This code sets the SASL auth daemon (saslauthd) up to use LDAP authentication (for which the configuration is read from /etc/saslauthd.conf as detailed below), and puts the saslauthd communication socket inside the Postfix chroot so we can reach it from Postfix.

/etc/saslauthd.conf (complete file):

ldap_servers: ldap://domaincontroller.example.com/
ldap_search_base: cn=Users,dc=domain,dc=example,dc=com
ldap_filter: (userPrincipalName=%u@domain.example.com)

ldap_bind_dn: cn=lookupuser,cn=Users,dc=domain,dc=example,dc=com
ldap_password: omnomnom

This file configures the actual LDAP connection. You need a working user/password combination for the domain to be able to connect to the domain controller and browse the tree. We're filtering on userPrincipalName; in this example @domain.example.com is appended to the username, as the userPrincipalName in AD is actually yourusername@yourwindowsdomain. I prefer to authenticate to Postfix without adding the Windows domain to the username, so we have to hardcode it in the LDAP query filter. You can add multiple servers after ldap_servers; they will be tried in order.

After configuring both saslauthd files, (re)start the saslauthd service.
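
On Debian the daemon is typically only started when START=yes is set in /etc/default/saslauthd; restarting it is then simply:

service saslauthd restart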

You can then test if the SASL authentication works already with the testsaslauthd command. Careful, you have to pass it the password on the command line in plain text - be sure to use a test password or clear your terminal and shell history! If this doesn't work, there's no reason to continue to Postfix yet, as working SASL authentication is key!
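
For example, with the socket path from the OPTIONS line above (user and password are placeholders):

# the password ends up in your shell history, so use a throwaway one
testsaslauthd -u someuser -p 'S3cret' -f /var/spool/postfix/var/run/saslauthd/mux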

/etc/postfix/main.cf (excerpts):

smtpd_tls_cert_file=/etc/ssl/certs/mycert.crt
smtpd_tls_key_file=/etc/ssl/private/mycert.key
smtpd_tls_CAfile = /etc/ssl/certs/myintermediate.crt
smtpd_use_tls=yes
smtpd_sasl_auth_enable = yes
broken_sasl_auth_clients = yes
smtpd_recipient_restrictions = permit_mynetworks,
  permit_sasl_authenticated,
  reject_unauth_destination

Use your commercial or self-signed certificate and key combination for the first 3 lines. I have a wildcard certificate that I use for most of my servers, which matches the hostname used to identify this relaying Postfix server.

/etc/postfix/master.cf (excerpts):

submission inet n       -       -       -       -       smtpd
  -o smtpd_enforce_tls=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject
smtps     inet  n       -       -       -       -       smtpd
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_client_restrictions=permit_sasl_authenticated,reject

These 2 blocks are already in master.cf on Debian, but they're commented out. The first block enables submission on port 587 via STARTTLS (fully encrypted after initial greeting dialogue). The second enables secure smtp on port 465, which is fully SSL encrypted.

/etc/postfix/sasl/smtpd.conf (complete file):

pwcheck_method: saslauthd
mech_list: plain login

With this, we set Postfix up to use saslauthd as its authentication mechanism. We can only support plain and login, because we cannot read the password (or a reusable hash) from Active Directory: the client has to send us the plaintext password, which saslauthd then verifies by logging in to the directory with it.

Finally, add Postfix to the "sasl" group, to be able to access the saslauthd communication socket.

# usermod -a -G sasl postfix

Restart Postfix, and sending mail through it should work, authenticated against Active Directory! Be sure to test with a wrong password as well, so that you don't accidentally create an open relay. Running a mail server on the internet comes with great responsibility, so make sure not to contribute to the spam problem - SMTP relay accounts get stolen on a regular basis as well, so monitor your queues for unusually high amounts of outgoing mail.
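
As a suggestion (swaks is not part of this setup, and you should double-check the flags against swaks --help), something along these lines exercises the submission port with authentication; repeat it with a wrong password and make sure the relay refuses the mail:

swaks --server mail.example.com --port 587 --tls \
      --auth LOGIN --auth-user someuser --auth-password 'S3cret' \
      --from someuser@example.com --to you@example.net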

Feel free to leave comments, questions or suggestions below!

14 Apr 2014 10:43am GMT

Joram Barrez: Review ‘Activiti 5.x Business Process Management’ by Zakir Laliwala and Irshad Mansuri

I've been contacted by the people of Packt Publishing to review their recent book release 'Activiti 5.x Business Process Management', written by Dr. Zakir Laliwala and Irshad Mansuri. For an open source project, books are a good thing. They indicate that a project is popular, and often people prefer books over gathering all […]

14 Apr 2014 8:39am GMT

13 Apr 2014


Mattias Geniar: Follow-up: use ondemand PHP-FPM masters using systemd

A few days ago, I published a blogpost called A better way to run PHP-FPM. It's gotten a fair amount of attention. It detailed the use of the "ondemand" process manager as well as using a separate PHP-FPM master per pool, for a unique APC cache.

The setup works fine, but has the downside that you'll have multiple masters running -- an obvious consequence. Kim Ausloos created a solution for this by using systemd's socket activation. This means PHP-FPM masters are only started when needed and no longer stay active on the system when obsolete.

This has a few benefits and possible downsides;

I'll do some more testing on this use case, as well as on the performance penalty (if any) of having to start a new master on the first request to the PHP-FPM socket. For this to work out, RHEL or CentOS 7 is needed in my case (we're a RHEL/CentOS shop), as systemd is required and is only available from RHEL/CentOS 7 onwards.

13 Apr 2014 11:26am GMT

Frank Goossens: Guest-blogged on nummervandedag.nl

There's a piece by me about Kate Bush on nummervandedag.nl. A fine music blog, by the way, and they use WP YouTube Lyte :-)


13 Apr 2014 7:17am GMT

11 Apr 2014


Wouter Verhelst: Review: John Scalzi: Redshirts

I'm not much of a reader anymore these days (I used to be when I was a young teenager), but I still do tend to like reading something every once in a while. When I do, I generally prefer books that can be read cover to cover in one go, because that allows me to immerse myself in the book so much more.

John Scalzi's book is... interesting. It talks about a bunch of junior officers on a starship of the "Dub U" (short for "Universal Union"), which flies off into the galaxy to Do Things. This invariably involves away missions, and on these away missions invariably people die. The title is pretty much a dead giveaway; but in case you didn't guess, it's mainly the junior officers who die.

What I particularly liked about this book is that after the story pretty much wraps up, Scalzi doesn't actually let it end there. First there's a bit of a tie-in that has the book end up talking about itself; after that, there are three epilogues in which the author considers what this story would do to some of its smaller characters.

All in all, a good read, and something I would not hesitate to recommend.

11 Apr 2014 3:25pm GMT

Xavier Mertens: Log Awareness Trainings?

More and more companies organize "security awareness" trainings for their team members. With the growing threats people face while using their computers or any connected device, it is definitely a good idea. The goal of such trainings is to make people open their eyes and change their attitude towards security.

If the goal of an awareness training is to change people's attitude, why not apply the same idea in other domains? Log files sound like a good example! Most log management solutions claim they can be extended to collect and digest almost any type of log file. With their standard configuration, they are able to process logfiles generated by most solutions on the information security market, but they can also "learn" unknown logfile formats. Maaaagic!

A small reminder for those who are new to this domain: the primary goal of a log management solution is to collect, parse and store events in a common format, to help with searching, alerting or reporting on events. The keyword here is "to parse". Let's take the following event generated by UFW on Ubuntu:

Apr 10 23:56:17 marge kernel: [8209773.464692] [UFW BLOCK] IN=eth0 OUT= \
  MAC=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx SRC=11.12.13.14 DST=88.191.132.217 \
  LEN=60 TOS=0x00 PREC=0x00 TTL=42 ID=36063 DF PROTO=TCP SPT=32345 DPT=143 \
  WINDOW=14600 RES=0x00 SYN URGP=0

We can extract some useful "fields" like the source IP address and port, the destination IP address and port, a timestamp, interfaces, protocols, etc. Let's come back to our unknown logfile format! The biggest issue is our total dependence on the way developers generate and store the events. If events are stored in a database or if fields are delimited by a common character, it's quite easy: we just have to set up a mapping between the source and our standard fields:

if ($event =~ /(\S+),(\S+),(\S+)/) {
  $source_address = $1;
  $source_port    = $2;
  $dest_address   = $3;
  # ...
}

Alas, most of the time, it's more complicated and we have to switch to complex regular expressions to extract juicy fields. And the nightmare begins… I had to integrate events generated by "Exotic Product 3.2.1" because "Its events are interesting for our compliance requirements". Challenge accepted!
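
For the UFW event shown above, such a regex could look roughly like this (a simplified sketch, not the exact expression I ended up with):

# pull the addresses and ports out of a UFW BLOCK line
if ($event =~ /SRC=(\S+)\s+DST=(\S+).*SPT=(\d+)\s+DPT=(\d+)/) {
  $source_address = $1;
  $dest_address   = $2;
  $source_port    = $3;
  $dest_port      = $4;
}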

Of course, chances are that your regex will fail after an upgrade from 3.2.1 to 3.2.2 because the developers decided to change some messages. This bad scenario is real! By saying this, I would like to attract the attention of developers. Guys, could you please not only write logs but write good logs? In the example above, I faced the following issues:

The primary goal of logfiles is to help sysadmins, network admins or security teams during investigation or debugging phases. When something occurs, the first place people will look is the logs! I'm not a developer, but I'm playing with logs almost every day. Here are some guidelines which seem important to me:

Enough said, if you are interested, have a look at the OWASP document "Logging Cheat Sheet" (here). This was my Friday tribune to all developers! Happy logging…

11 Apr 2014 9:14am GMT

Frederic Descamps: PLMCE: SKySQL/MariaDB: my Digital Caricature

Once again, MariaDB invited Doug Shannon from EventToons to their booth at the Percona Live MySQL Conference & Expo.

And once again he drew me.

This is the picture from last year:

and this is the one from this year:

The conclusion is simple: my beard has narrowed and my face has grown! ... is this a sign that I'm getting old? ;-)

BTW, thanks to the MariaDB team for this nice and funny gift!

11 Apr 2014 6:26am GMT