09 Nov 2011

feedPlanet filibeto

Alan Coopersmith: S11 X11: ye olde window system in today's new operating system

Today's the big release for Oracle Solaris 11, after 7 years of development. For me, the Solaris 11 release comes a little more than 11 years after I joined the X11 engineering team at what was then Sun, and finishes off some projects that were started all the way back then.

For instance, when I joined the X team, Sun was finishing off the removal of the old OpenWindows desktop, and we kept getting questions asking about the rest of the stuff being shipped in /usr/openwin, the directory that held both the OpenLook applications and the X Window System software. I wrote up an ARC case at the time to move the X software to /usr/X11, but there were various issues and higher priority work, so we didn't end up starting that move until near the end of the Solaris 10 development cycle several years later. Solaris 10 thus had a mix of the recently added Xorg server and related code delivered in /usr/X11, while most of the existing bits from Sun's proprietary fork of X11R6 were still in /usr/openwin.

During Solaris 11 development, we finished that move, and then jumped again, moving the programs directly into /usr/bin. This follows the general Solaris 11 strategy of using /usr/bin for most of the programs shipped with the OS, with other directories, such as /usr/gnu/bin, /usr/xpg4/bin, /usr/sunos/bin, and /usr/ucb, reserved for conflicting alternate implementations of the programs shipped in /usr/bin - no longer a way to segregate out various subsystems so the OS could better fit onto the 105MB hard disks that shipped with Sun workstations back when /usr/openwin was created. However, if for some reason you want to build your own set of X binaries, you can put them in /usr/X11R7 (as I do for testing builds of the upstream git master repos) and put that first in your $PATH, so nothing is really lost here.

The other major project that was started during Solaris 10 development and finished for Solaris 11 was replacing that old proprietary fork of X11R6, including the Xsun server, with the modernized, modularized, open source X11R7.* code base from the new X.Org, including the Xorg server. The final result, included in this Solaris 11 release, is based mostly on the X11R7.6 release, including recent additions such as the XCB API I blogged about last year, though we did include newer versions of modules that had upstream releases since the X11R7.6 katamari, such as Xorg server version 1.10.3.

That said, we do still apply some local patches, configuration options, and other changes, for things from just fitting into the Solaris man page style or adding support for Trusted Extensions labeled desktops. You can see all of those changes in our source repository, which is searchable and browsable via OpenGrok on src.opensolaris.org (or via hgweb on community mirrors such as openindiana.org) and available for anonymous hg cloning as well. That xnv-clone tree is now frozen, a permanent snapshot of the Solaris 11 sources, while we've created a new x-s11-update-clone tree for the Solaris 11 update releases now being developed to follow on from here.

Naturally, when your OS has 7 years between major release cycles, the hardware environment you run on changes greatly in the meantime, and as the layer that handles the graphics hardware, X11 has changed accordingly. Most of the SPARC graphics devices that were supported in Solaris 10 no longer are, because the platforms they ran in are no longer supported - we still ship a couple of supported SPARC drivers: the efb driver for the Sun XVR-50, XVR-100, and XVR-300 cards based on the ATI Radeon chipsets, and the astfb driver for the AST2100 remote Keyboard/Video/Mouse/Storage (rKVMS) chipset in the server ILOM devices. On the x86 side, the EOL of 32-bit platforms let us clear out a lot of the older x86 video device drivers for chipsets and cards you wouldn't find in x64 systems - of course, there are still many supported there, due to the wider variety of graphics hardware found in the x64 world, and there have even been some recent updates, such as the addition of Kernel Mode Setting (KMS) support for Intel graphics up through the Sandy Bridge generation.

For those who followed the development as it happened, either via watching our open source code releases or using one of the many development builds and interim releases such as the various Solaris Express trains, much of this is old news to you. For those who didn't, or who want a refresher on the details, you can see last year's summary in my X11 changes in the 2010.11 release blog post. Once again, the detailed change logs for the X11 packages are available, though unfortunately, all the links in them to the bug reports are now broken, so browsing the hg history log is probably more informative.

Since that update, which covered up to the build 151 released as 2010.11, we've continued development and polishing to get this Solaris 11 release finished up. We added a couple more components, including the previously mentioned xcb libraries, the FreeGLUT library, and the Xdmx Distributed Multihead X server. We cleaned up documentation, including the addition of some docs for the Xserver DTrace provider in /usr/share/doc/Xserver/. The packaging was improved, clearing up errors and optimizing the builds to reduce unnecessary updates. A few old and rarely used components were dropped, including the rstart program for starting up X clients remotely (ssh X forwarding replaces this in a more secure fashion) and the xrx plugin for embedding X applications in a web browser page (which hasn't been kept up to date with the rapidly evolving browser environment). Because Solaris 11 only supports 64-bit systems, and most of the upstream X code was already 64-bit clean, the X servers and most of the X applications are now shipped as 64-bit builds, though the libraries of course are delivered in both 32-bit and 64-bit versions for binary compatibility with applications of each flavor. The Solaris auditing system can now record each attempt by a client to connect to the Xorg server and whether or not it succeeded, for sites which need that level of detail.

In total, we recorded 1512 change request IDs during Solaris 11 development, from the time we forked the "Nevada" gate from the Solaris 10 release until the final code freeze for today's release - some were one-line bug fixes, some were man page updates, some were minor RFEs and some were major projects, but in the end, the result is both very different (and hopefully much better) than what we started with, and yet, still contains the core X11 code base with 24 years of backwards compatibility in the core protocols and APIs.

09 Nov 2011 10:10pm GMT

Henrik Johansson: Solaris 11 released

Solaris 11 ("SunOS Release 5.11 Version 11.0") is available for download, based on build snv_175b.

There are of course many changes since Solaris 10; most of them have been available in the later builds of OpenSolaris, but some are new and unique to the final release of Solaris 11.

Install images are available for download and work on all current SPARC machines, which are the T- and M-series. There are also images for x86-based machines, which can also be used in VirtualBox. Here is a quick reference for the brand new packaging system: IPS one liners.

I will post more detailed follow-up after I've had time to test it for more than a few hours.

Oracle Solaris 11 11/11 - What's new
Download Oracle Solaris 11
Future features of Solaris 11

09 Nov 2011 6:57pm GMT

Darren Moffat: Completely disabling root logins on Solaris 11

Since Solaris 8 it has been possible to make the root account a role. That means you can't log in directly as root (except in single user mode) but have to log in as an authorised user first and assume (via su) the root role. This still required the root account to have a valid and known password, as it is needed for the su step and for single user access.

With Solaris 11 it is possible to go one step further and completely disable all need for a root password even for access in single user mode.

There are two complementary new features that make this possible. The first is the ability to change which password is used when authenticating to a role. A new per-role property called roleauth was added: if it isn't present, the prior behaviour of using the role account's password is retained; if roleauth=user is set instead, then the password of the user assuming the role is used.

The second feature is one that existed in the Solaris 11 Express release and changed how the sulogin command works: prior releases all just asked for the root password. The sulogin program was changed to authenticate a specific user instead, so it now asks for a username and the password of that user. The user must be one authorised to enter single user mode by being granted the 'solaris.system.maintenance' authorisation - and obviously be one that can actually connect to the system console (which I recommend is protected by "other means", eg ILOM-level accounts or a central "terminal server").

The following sequence of commands takes root from being a normal root account (which, depending on how you install Solaris 11, it may be - or it might already be a role) and grants the user darrenm the ability to assume the root role and enter single user mode.

# usermod -K type=role root
# usermod -R +root -A +solaris.system.maintenance darrenm
# rolemod -K roleauth=user root
# passwd -N root

Note that some of the install methods for Solaris 11 will have created an initial user account that is granted the root role and has been given the "System Administrator" profile, in those cases only the last two steps are required as the equivalent of the first two will already have been done at install time for the initial non root user.

Note that we do not lock (-l) the root account but instead ensure it has no valid password (-N) this is because the root account does still have some cron jobs that we ideally want to run and if it was locked then the pam_unix_account.so.1 PAM module would prevent cron from running those jobs.

09 Nov 2011 6:38pm GMT

Darren Moffat: Password (PAM) caching for Solaris su - "a la sudo"

I talk to a lot of users about Solaris RBAC, but many of them prefer to use sudo for various reasons. One of the common usability features that users like is that they don't have to continually type their password. This is because sudo uses a "ticket" system for caching the authentication for a defined period (by default 5 minutes).

To bring this usability feature to Solaris 11 I wrote a new PAM module (pam_tty_tickets) that provides a similar style of caching for Solaris roles.

By default the tickets are stored in /system/volatile/tty_tickets (/var/run is a symlink to /system/volatile now).

When using su(1M), the user you currently are is set in PAM_USER, and PAM_AUSER is the user you are becoming (ie the username argument to su, or root if one is not specified). The PAM module implements the caching using tickets; the internal format of the tickets is the same as what sudo uses. The location can be changed to be compatible with sudo, so the same ticket can be used for su and sudo.

To enable pam_tty_tickets for su put the following into /etc/pam.conf (the module is in the pkg:/system/library package so it is always installed but not configured for use by default):

su      auth required           pam_unix_cred.so.1
su      auth sufficient         pam_tty_tickets.so.1
su      auth requisite          pam_authtok_get.so.1
su      auth required           pam_unix_auth.so.1

So what does it now look like:

braveheart:pts/3$ su -
root@braveheart:~# id -a
uid=0(root) gid=0(root) groups=0(root),1(other),2(bin),3(sys),4(adm),5(uucp),6(mail),7(tty),8(lp),9(nuucp),12(daemon)
root@braveheart:~# exit
braveheart:pts/3$ su -

If you want to enable it in the desktop for gksu, then you need to add a similar set of changes to /etc/pam.conf with the service name "embedded_su" and the same modules as are listed above. The default timeout matches the sudo default of 5 minutes; the timeout= module option allows specifying a different timeout.
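Following that pattern, the embedded_su stack would look like this (derived from the su stack above by changing only the service name):

```
embedded_su     auth required           pam_unix_cred.so.1
embedded_su     auth sufficient         pam_tty_tickets.so.1
embedded_su     auth requisite          pam_authtok_get.so.1
embedded_su     auth required           pam_unix_auth.so.1
```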

[ NOTE: The man page for pam_tty_tickets was mistakenly placed in section 1 for Solaris 11, it should have been in section 5. ]

09 Nov 2011 6:36pm GMT

Darren Moffat: User home directory encryption with ZFS

ZFS encryption has a very flexible key management capability, including the option to delegate key management to individual users. We can use this together with a PAM module I wrote to provide per user encrypted home directories. My laptop and workstation at Oracle are configured like this:

First let's set up console login for encrypted home directories:

    root@ltz:~# cat >> /etc/pam.conf<<_EOM
    login auth     required pam_zfs_key.so.1 create
    other password required pam_zfs_key.so.1
    _EOM

The first line ensures that when bob logs in on the console his home directory is created as an encrypted ZFS file system if it doesn't already exist; the second ensures that the passphrase for it stays in sync with his login password.

Now let's create a new user 'bob' who looks after his own encryption key for his home directory. Note that we do not specify '-m' to useradd, so that pam_zfs_key will create the home directory when the user logs in.

root@ltz:~# useradd bob
root@ltz:~# passwd bob
New Password: 
Re-enter new Password: 
passwd: password successfully changed for bob
root@ltz:~# passwd -f bob
passwd: password information changed for bob

We have now created the user bob with an expired password. Let's log in as bob and see what happens:

    ltz console login: bob
    Choose a new password.
    New Password: 
    Re-enter new Password: 
    login: password successfully changed for bob
    Creating home directory with encryption=on.
    Your login password will be used as the wrapping key.
    Last login: Tue Oct 18 12:55:59 on console
    Oracle Corporation      SunOS 5.11      11.0    November 2011
    -bash-4.1$ /usr/sbin/zfs get encryption,keysource rpool/export/home/bob
    NAME                   PROPERTY    VALUE              SOURCE
    rpool/export/home/bob  encryption  on                 local
    rpool/export/home/bob  keysource   passphrase,prompt  local

Note that bob had to first change the expired password. After we provided a new login password, a new ZFS file system for bob's home directory was created. The new login password that bob chose is also the passphrase for this ZFS encrypted home directory, which means that at no time did the administrator ever know the passphrase for bob's home directory. After the machine reboots, bob's home directory won't be mounted until bob logs in again. If we want bob's home directory to be unmounted and the key removed from the kernel when bob logs out (even if the system isn't rebooting), then we can add the 'force' option to the pam_zfs_key.so.1 module line in /etc/pam.conf.

If users login with GDM or ssh then there is a little more configuration needed in /etc/pam.conf to enable pam_zfs_key for those services as well.

root@ltz:~# cat >> /etc/pam.conf<<_EOM
gdm     auth requisite          pam_authtok_get.so.1
gdm     auth required           pam_unix_cred.so.1
gdm     auth required           pam_unix_auth.so.1
gdm     auth required           pam_zfs_key.so.1 create
_EOM

root@ltz:~# cat >> /etc/pam.conf<<_EOM
sshd-kbdint     auth requisite          pam_authtok_get.so.1
sshd-kbdint     auth required           pam_unix_cred.so.1
sshd-kbdint     auth required           pam_unix_auth.so.1
sshd-kbdint     auth required           pam_zfs_key.so.1 create
_EOM

Note that this only works when we log in to SSH with a password, not when we use pubkey authentication, because in that case the encryption passphrase for the home directory hasn't been supplied. However, pubkey and gssapi will work for later logins once the home directory is mounted, since the ZFS passphrase was supplied during that first ssh or gdm login.

09 Nov 2011 6:21pm GMT

Darren Moffat: Immutable Zones on Encrypted ZFS

Rather than just discussing the new Immutable Zones feature of Solaris 11, I'm going to show how it can be combined with ZFS file system encryption as part of a defense in depth deployment.

Let's assume that as part of our security threat model we need to protect data written to disk, and we also want to protect the system from malicious or accidental tampering with system binaries and configuration during runtime.

Deploying our application in a Solaris Zone allows us to provide both of those, even for a user that has gained root access inside the zone. We will use two new features of Solaris 11 to do this: firstly, ZFS encryption to provide protection of the data written to disk, and secondly, the new 'file-mac-profile' mandatory write access feature that gives us Immutable Zones.

Normally we let 'zoneadm install' create the ZFS file system for the zone for us, but it is also perfectly happy using a file system that already exists. We can use that to our advantage to enable encryption for the zone. So let's first set up our encrypted dataset and put a zone on it. Note that in this case the encryption keys are stored outside of the zone and aren't managed by or visible to the zone users (even root).

# pktool genkey keystore=file keytype=aes keylen=128 outkey=/zones/key
# zfs create -o encryption=on -o keysource=raw,file:///zones/key rpool/zones/ltz
# zonecfg -z ltz 'create ; set zonepath=/zones/ltz'
# zoneadm -z ltz install
/zones/ltz must not be group readable.
/zones/ltz must not be group executable.
/zones/ltz must not be world readable.
/zones/ltz must not be world executable.
changing zonepath permissions to 0700.
Progress being logged to /var/log/zones/zoneadm.20111018T123039Z.ltz.install
       Image: Preparing at /zones/ltz/root.

 Install Log: /system/volatile/install.4194/install_log
 AI Manifest: /tmp/manifest.xml.14a4hi
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: ltz
Installation: Starting ...

              Creating IPS image
              Installing packages from:
                      origin:  http://ipkg.us.oracle.com/solaris11/dev/
DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                              167/167 32062/32062  175.8/175.8

PHASE                                        ACTIONS
Install Phase                            44311/44311 

PHASE                                          ITEMS
Package State Update Phase                   167/167 
Image State Update Phase                         2/2 
Installation: Succeeded

        Note: Man pages can be obtained by installing pkg:/system/manual


        Done: Installation completed in 230.518 seconds.

  Next Steps: Boot the zone, then log into the zone console (zlogin -C)

              to complete the configuration process.

Log saved in non-global zone as /zones/ltz/root/var/log/zones/zoneadm.20111018T123039Z.ltz.install

Note from the first four lines that zoneadm ensured the zonepath had the correct secure permissions, but was otherwise perfectly happy with our pre-created encrypted ZFS dataset.

So at this point we just boot the zone, connect to the console, and finish off the system configuration (since I didn't supply a manifest, the system will be waiting to be told its name and network config). Once that is done we can log in and have a look at what local ZFS file systems we have:

# zlogin ltz
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                    423M   447G    33K  /rpool
rpool/ROOT               423M   447G    33K  legacy
rpool/ROOT/solaris       423M   447G   358M  /
rpool/ROOT/solaris/var  61.5M   447G  59.6M  /var
rpool/export              68K   447G    35K  /export
rpool/export/home         33K   447G    33K  /export/home

Notice that we have separate datasets for / and /var, as well as /export and /export/home. These will all be encrypted, because rpool inside this zone is really the ZFS dataset that is underneath /zones/ltz in the global zone. So let's look and check:

# zfs get encryption,keysource rpool/ROOT/solaris
NAME                PROPERTY    VALUE                  SOURCE
rpool/ROOT/solaris  encryption  on                     inherited from $globalzone
rpool/ROOT/solaris  keysource   raw,file:///zones/key  inherited from $globalzone

Notice that the source is that we inherited this from the globalzone. That has dealt with our on disk protection without needing the admin users inside the zone to do anything additional.

It is also worth now pointing out a third new feature: the ZFS dataset names inside the zone look just like they do in the global zone, eg rpool/ROOT/solaris. This is very different from what we had in Solaris 10, where the zone saw parts of the ZFS namespace that were only applicable in the global zone. This namespace virtualisation provides both additional security and makes p2v and v2v transitions of zones much easier. Not only have we virtualised the dataset names, but we have also hidden any global zone paths when they appear in the property source; we only know that this was set by a global zone admin.

Now let's move on to the Immutable Zones part. In the zone configuration, which is held in the global zone only, we specify which 'file-mac-profile' we want.

# zonecfg -z ltz 'set file-mac-profile=fixed-configuration'
# zoneadm -z ltz reboot

Now let's 'do some damage':

root@ltz:~# touch /etc/foo
touch: cannot create /etc/foo: Read-only file system
root@ltz:~# touch /var/tmp/foo
root@ltz:~# touch /tmp/foo
root@ltz:~# pkg install emacs

pkg install: Could not complete the operation on /var/pkg/lock: read-only filesystem.
root@ltz:~# rm /usr/bin/vi
rm: /usr/bin/vi not removed: Read-only file system

root@ltz:~# useradd alice

UX: useradd: ERROR: Cannot update system - login cannot be created.

So we can't create users, or remove or add binaries outside of /var/tmp and /tmp. Maybe we can disable SMF services permanently:

root@ltz:~# svcadm disable ssh
root@ltz:~# svcs ssh
STATE          STIME    FMRI
disabled       13:21:37 svc:/network/ssh:default

Finally, 'some damage' - but let's reboot...

root@ltz:~# svcs ssh
STATE          STIME    FMRI
online         13:23:19 svc:/network/ssh:default

The service restarted again on reboot - but we said disable it permanently. What happened here is that it got disabled in the running SMF instance, but because the changes couldn't be persisted back to the on-disk SMF database, its permanent state didn't change and it came back online after a reboot.

That's nice, but we still need to maintain the zone and ensure it gets updated with security fixes, so how do we write to it?

Only from the global zone is it possible to transition the zone into a 'read-write' state. We do this by passing '-w' or '-W' to the zoneadm boot command. Note that this argument is interpreted by zoneadmd in the global zone and is not interpreted inside the zone at all; there is thus no way for a privileged user inside the zone to request a read-write reboot by passing '-w' as an argument to reboot(1M).

# zoneadm -z ltz boot -w
# zlogin -C ltz
[NOTICE: Read-only zone rebooting read-write]

In this case we will get a login prompt, and we can log in and do everything to the zone we normally could - it is just as if 'file-mac-profile' hadn't been set. When packages are added or updated for a zone with a file-mac-profile, it automatically reboots read-write transiently (this can also be forced manually with '-W'); this is so that any package self-assembly, such as config file upgrades, can be done. We would see this on the console:

[NOTICE: This read-only system transiently booted read/write]
[NOTICE: Now that self assembly has been completed, the system is rebooting]
[NOTICE: Zone rebooting]

At this point the zone is back to being protected just like it was above.

Two very simple-to-use new features in Solaris 11 that can be used separately or together to give us protection of the zone's environment, both on disk and at runtime.

09 Nov 2011 6:17pm GMT

Darren Moffat: My 11 favourite Solaris 11 features

  1. ZFS on disk encryption: zfs create -o encryption=on [ With pam_zfs_key PAM module for per-user key management]
  2. Immutable Zones: zonecfg -z myzone set file-mac-profile=fixed-configuration
  3. New package system - with cryptographically signed packages [ pkg(5) ] and multiple signature support
  4. Root as a role by default & authentication with user password with authentication caching [ pam_tty_tickets ]
  5. Network virtualisation dladm(1M) & bandwidth control flowadm(1M)
  6. Automatic VNICs for Zones - one line zone creation: zonecfg -z myzone 'create ; set zonepath=/zones/myzone'
  7. IPfilter SMF integration - per service firewall rules
  8. New basic privileges: file_read/file_write/net_access
  9. Default root shell is bash (I'd personally prefer zsh but bash is good enough)
  10. 'man -k' works by default
  11. sudo with Solaris Audit support and priv_exec removal for NOEXEC

09 Nov 2011 6:14pm GMT

Tim Foster: The IPS System Repository

Original image by nori_n

I'm excited about today's launch of Solaris 11 - I've been contributing to Solaris for quite a while now, pretty much since 1996, but my involvement in S11 has been the most fun I've had in all releases so far.

I've talked before about some of the work I've done on IPS over the last two years - pkg history, pkgdepend (and here), pkglint and pkgsend and most recently, helping to put together the IPS Developer Guide.

Today, I'm going to talk about the system repository and how I helped.

How zones differ from earlier releases

Zones that use IPS are different from those in Solaris 10, in that they are always full-root: every zone contains its own local copy of each package; they don't inherit packaged content from the global zone as "sparse" zones did in Solaris 10.

This simplifies a lot of zone-related functionality: for the most part, administrators can treat a zone as if it were a full Solaris instance, albeit a very small one. By default new zones in S11 are tiny. However, packaging with zones is a little more complex, and the system aims to hide that complexity from users.

Some packages in the zone always need to be kept in sync with those packages in the global zone. For example, anything which delivers a kernel module and a userland application that interfaces with it must be kept in sync between the global zone and any non-global zones on the system.

In earlier OpenSolaris releases, after each global-zone update, each non-global zone had to be updated by hand, attaching and detaching each zone. During that detach/attach the ipkg brand scripts determined which packages were now in the global zone, and updated the non-global zone accordingly.

In addition, in OpenSolaris, the packaging system itself didn't have any way of ensuring that every publisher in the global zone was also available in the non-global zone, making updates difficult if switching publishers.

Zones in Solaris 11

In Solaris 11, zones are now first-class citizens of the packaging system. Each zone is installed as a linked image, connected to the parent image, which is the global zone.

During packaging operations in the global zone, IPS recurses into any non-global zones to ensure that packages which need to be kept in sync between the global and non-global zones are kept in sync.

For this to happen, it's important for the zone to have access to all of the IPS repositories that are available from the global zone.

This is problematic for a few reasons:

The System Repository

The system repository, and accompanying zones-proxy services was our solution to the list of problems above.

The SMF Services responsible are:

The first two services run in the global zone, the last one runs in the non-global zones.

With these services, the system repository shares publisher configuration to all non-global zones on the system, and also acts as a conduit to the publishers configured in the global zone. Inside the non-global zone, these proxied global-zone publishers are called system publishers.

When performing packaging operations inside a zone that accesses those publishers, Solaris proxies access through the system repository. While proxying, the system repository also caches any file content that was downloaded. If there are lots of zones all downloading the same packaged content, that will be managed efficiently.
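That behaviour is, in essence, a memoised fetch. A toy Python sketch (purely illustrative - the real system repository is an Apache caching proxy, and these function names are invented):

```python
def make_caching_fetch(fetch_upstream):
    """Wrap an upstream fetch function with a shared content cache,
    so many zones requesting the same file cause only one download."""
    cache = {}

    def fetch(url):
        if url not in cache:
            cache[url] = fetch_upstream(url)  # download once
        return cache[url]  # every later zone gets the cached copy

    return fetch
```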


If you don't care about how all this works behind the scenes, then you can stop reading now.

There are three parts to making all of the above work, apart from the initial linked image functionality that Ed worked on, which was fundamental to all of the system repository work.

IPS client/repository support

Brock managed the heavy lifting here. This work involved:

Zones proxy

The zones proxy client, when started in the non-global zone, creates a socket which listens on an inet port on localhost. It passes the file descriptor for this socket to the zones proxy daemon via a door call.

The zones proxy daemon then listens for connections on the file descriptor. When the zone proxy daemon receives a connection, it proxies the connection to the system repository.

This allows the zone to access the system repository without any additional networking configuration needed (which I think is pretty neat - nicely done Krister!)
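Once the file descriptor has been handed over, the daemon's job in each direction is a plain copy loop between the zone-side socket and the system repository. A minimal Python sketch of that forwarding step (illustrative only - the real zones proxy daemon is not implemented this way):

```python
import socket

def forward(src, dst, bufsize=4096):
    """Copy bytes from one connected socket to another until EOF,
    the core of what a connection proxy does in each direction."""
    total = 0
    while True:
        data = src.recv(bufsize)
        if not data:  # peer closed its sending side
            break
        dst.sendall(data)
        total += len(data)
    return total
```

A real proxy runs one such loop per direction (client to repository and back) until both sides close.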

System repository

The system repository itself consists of two components:

Brock initially prototyped some httpd.conf configurations, and I worked on the code to write them automatically, produce the response that the system repository would use to inform zones of the configured publishers, and also worked out how to proxy access to file-based publishers in the global zone, which was an interesting problem to solve.

When you start the system-repository service in the global zone, pkg.sysrepo(1) determines the enabled, configured publishers then creates a response file served to non-global zones that want to discover the publishers configured in the global zone. It then uses a Mako template from /etc/pkg/sysrepo/sysrepo_httpd.conf.mako to generate an Apache configuration file.
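The rendering step can be illustrated with Python's string.Template standing in for Mako; the directive names and layout below are invented for illustration, not the contents of the real sysrepo_httpd.conf.mako:

```python
from string import Template

# A toy stand-in for the real Mako template: one proxy line per publisher.
CONF_TEMPLATE = Template("Listen $port\n$proxy_lines\n")

def render_conf(port, publishers):
    """Render an httpd-style config exposing each (name, origin) publisher."""
    lines = "\n".join(
        "ProxyPass /%s/ %s" % (name, origin) for name, origin in publishers
    )
    return CONF_TEMPLATE.substitute(port=port, proxy_lines=lines)
```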

The configuration file describes a basic caching proxy, providing limited access to the URLs of each publisher, as well as allowing URL rewrites to serve any file-based repositories. It uses the SSL keys and certificates from the global zone, and allows proxied access to those from the non-global zone over http (remember, data served by the system repository between the global zone and a non-global zone goes over the zones proxy socket, so http is fine here: access from the proxy to the publisher still goes over https).

The system repository service then starts an Apache instance, and a daemon to keep the proxy cache down to its configured maximum size. More detail on the options available to tune the system repository is in the pkg.sysrepo(1) man page.


The practical upshot of all this is that all zones can access all publishers configured in the global zone, and if that configuration changes, the zones' publishers automatically change too. Of course, non-global zones can add their own publishers, but they aren't allowed to change the order of, or disable, any system publishers.

Here's what the pkg publisher output looks like in a non-global zone:

root@puroto:~# pkg publisher
PUBLISHER                             TYPE     STATUS   URI
solaris                  (non-sticky, syspub) origin   online   proxy://http://pkg.oracle.com/solaris11/release/
mypublisher              (syspub)     origin   online   http://localhost:1008/mypublisher/89227627f3c003d11b1e4c0b5356a965ef7c9712/
test                     (syspub)     origin   online   http://localhost:1008/test/eec48b7c8b107bb3ec9b9cf0f119eb3d90b5303e/

and here's the system repository running in the global zone:

$ ps -fu pkg5srv | grep httpd
 pkg5srv   206  2334   0 12:02:02 ?           0:00 /usr/apache2/2.2/bin/64/httpd.worker -f /system/volatile/pkg/sysrepo/sysrepo_ht
 pkg5srv   204  2334   0 12:02:02 ?           0:00 /usr/apache2/2.2/bin/64/httpd.worker -f /system/volatile/pkg/sysrepo/sysrepo_ht
 pkg5srv   205  2334   0 12:02:02 ?           0:00 /usr/apache2/2.2/bin/64/httpd.worker -f /system/volatile/pkg/sysrepo/sysrepo_ht
 pkg5srv   939  2334   0 12:46:32 ?           0:00 /usr/apache2/2.2/bin/64/httpd.worker -f /system/volatile/pkg/sysrepo/sysrepo_ht

Personally, I've found this capability to be incredibly useful. I work from home, and have a system with an internet-facing non-global zone, and a global zone accessing our corporate VPN. My non-global zone is able to securely access new packages when it needs to (and I get to test my own code at the same time!)

Performing a pkg update from the global zone ensures that all zones are kept in sync, and will update all zones automatically (though, as mentioned in the Zones administration guide, pkg update <list of packages> will simply update the global zone, ensuring that during that update only the packages that cross the kernel/userland boundary are updated in each zone.)

Working on zones and the system repository was a lot of fun - I hope you find it useful.

Filed under: IPS, OpenSolaris

09 Nov 2011 3:14pm GMT

Tim Foster: IPS Self-assembly – Part 1: overlays

Original image by 3liz4


I'm starting a small series of blog posts to talk about one of the important concepts in IPS - self-assembly. We cover this in the IPS Developer Guide but don't provide many examples as yet.

In the IPS Developer Guide, we introduced the concept of self-assembly as:

Any collection of installed software on a system should be able to build itself into a working configuration when that system is booted, by the time the packaging operation completes, or at software runtime.

Lots of software ships with default configuration in sample files, often installed in /etc. During packaging, these files are commonly marked as "user editable", with an attribute defining how those user edits should be treated in the case where the shipped example file gets updated in a new release of the package.

In IPS, those user editable files are marked with a preserve attribute, which is documented in the pkg(5) man page.

However, what happens if we want to allow another package to deliver new configuration instead of simply allowing user edits?

By default, IPS will report an error if two packages try to deliver the same file.

In these blog posts, we'll take a sample package, and show how it can be modified to allow us to deliver new add-on packages that deliver different configuration.

Before getting into a more complicated true self-assembly scenario (in the next post), we'll cover a very simple one first.

In this first post, we'll talk about the overlay attribute. Technically, this example doesn't actually cover self-assembly. Instead, it shows how IPS allows packages to re-deliver configuration files already delivered by another package.

First, let's introduce our example package.

Our example package

We'll use a package that already exists as our example: the Squid web proxy.

In our examples, we're going to deliver a new version of Squid that allows us to achieve our goal of being able to deliver add-on packages to supply configuration.

To be clear, I'm not suggesting all administrators ought to do this - by using their own private copy of a package shipped by Oracle, they face the burden of maintaining this version themselves: future upgrades from the solaris publisher will not automatically update their version. By default, publishers in IPS are sticky - so packages installed from one publisher may not be updated by a new version of that package from another publisher.

Publisher stickiness may be overridden, but then the administrator risks having their carefully crafted package updated by a version of the package from Oracle. In addition, the presence of a local version of the package may also prevent updates from occurring.

However, when I was looking for an example of the modifications that need to be made to a package which doesn't normally participate in self-assembly, Squid fits the bill nicely.

Let's look at the choices that were made when Squid was being packaged for Solaris, concentrating on how its configuration files are handled.

Using the following command, we can show the actions associated with the squid.conf files that are delivered in the package:

$ pkg contents -H -r -o action.raw -a path=etc/squid/squid.conf* squid | pkgfmt

Here is the output from the command:

file 7d8f133b331e7460fbbdca593bff31446f8a3bad path=etc/squid/squid.conf \
    owner=root group=webservd mode=0644 preserve=renamenew \
    chash=272ed7f686ce409a121f427a5b0bf75aed0e2095 \
    original_name=SUNWsquid:etc/squid/squid.conf pkg.csize=1414 pkg.size=3409
file 7d8f133b331e7460fbbdca593bff31446f8a3bad \
    path=etc/squid/squid.conf.default owner=root group=bin mode=0444 \
    chash=272ed7f686ce409a121f427a5b0bf75aed0e2095 pkg.csize=1414 pkg.size=3409
file 971681745b21a3d88481dbadeea6ce7f87b0070a \
    path=etc/squid/squid.conf.documented owner=root group=bin mode=0444 \
    chash=b9662e497184c97fff50b1c249a6e153c51432e1 pkg.csize=60605

We can see that the package delivers three files:

etc/squid/squid.conf
    This is the default configuration file that Squid uses. You can see that it has a preserve attribute, with a value set to renamenew: user edits to this file are allowed and will be preserved on upgrade, and any new version of the file (delivered by an updated Squid package) will be renamed.
etc/squid/squid.conf.default
    Squid also ships with a second copy of the configuration file (notice how the hashes are the same as the previous version) under a different name - presumably to use as a record of the original configuration.
etc/squid/squid.conf.documented
    Finally, we have another copy of the configuration file, this time with more comments included, to better explain the configuration.

Adding an overlay attribute

In IPS, two packages are allowed to deliver the same file if:

- the action delivering the original file specifies overlay=allow (and also carries a preserve attribute)
- the action delivering the replacement file specifies overlay=true

In both cases, all other file attributes (owner, mode, group) must match. The overlay attribute is covered in Chapter 3 of the IPS Developer Guide and is also documented in the pkg(5) man page.
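As a sketch, the pattern looks like this (the package pairing is illustrative; note that the owner, group and mode of the two actions match):

```
# base package: allows its squid.conf to be overlaid
file path=etc/squid/squid.conf owner=root group=webservd mode=0644 \
    preserve=renamenew overlay=allow

# add-on package: overlays the file delivered above
file path=etc/squid/squid.conf owner=root group=webservd mode=0644 \
    preserve=renameold overlay=true
```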

Since our sample package doesn't deliver its configuration file, etc/squid/squid.conf, with an overlay attribute, we'll need to modify the package.

First, we download the package in a raw form, suitable for republishing later, and show where pkgrecv(1) stores the manifest:

$ pkgrecv -s http://pkg.oracle.com/solaris/release --raw -d squid-proto squid@3.1.8,5.11-
Processing packages for publisher solaris ...
Retrieving and evaluating 1 package(s)...
PROCESS                                         ITEMS     GET (MB)    SEND (MB)
Completed                                         1/1    18.0/18.0      0.0/0.0

$ find squid-proto -name manifest

Next, we'll define a simple pkgmogrify(1) transform to add an overlay=allow attribute.

We'll also remove the solaris publisher from the FMRI, as we intend to republish this package to our own repository. (This transform is discussed in more detail in Chapter 14 of the IPS Developer Guide)

The transform looks like:

<transform set name=pkg.fmri -> edit value pkg://[^/]+/ pkg://mypublisher/>
<transform file path=etc/squid/squid.conf$ -> set overlay allow>

Here's how we run it:

$ pkgmogrify squid-overlay.mog \
    squid-proto/web%2Fproxy%2Fsquid/3.1.8%2C5.11- \
    > squid-overlay.mf

Finally we can republish our package:

$ pkgsend -s myrepository publish \
    -d squid-proto/web%2Fproxy%2Fsquid/3.1.8%2C5.11- \
    squid-overlay.mf
WARNING: Omitting signature action 'signature 2ce2688faa049abe9d5dceeeabc4b17e7b72e792

We get a warning when republishing it saying that we're dropping the signature action (I've trimmed the output here).

Package signing is always performed on a repository using pkgsign(1), never on a manifest: since the package's timestamp is updated on publication, any signature hardcoded in the manifest would be invalidated. Package signing is covered in more detail in Chapter 11 of the IPS Developer Guide.

This gets us part of the way towards our goal: we've now got a version of Squid that can allow other packages to deliver a new copy of etc/squid/squid.conf.

Notice that we've left the version alone on our copy of Squid, so it still complies with the same package version constraints that were on the original version of Squid that was shipped with Solaris.

Writing Configuration Packages

At this point, we can start writing packages to deliver new versions of our configuration file.

First let's install our modified squid package. We'll add our local repository to the system, and make sure we search for packages there before the solaris publisher, so that our packages are discovered first.

$ pfexec pkg set-publisher --search-before=solaris -p ./myrepository
Updated publisher(s): mypublisher
$ pfexec pkg install squid
           Packages to install:  1
       Create boot environment: No
Create backup boot environment: No
            Services to change:  1

DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  1/1   1519/1519      8.5/8.5

PHASE                                        ACTIONS
Install Phase                              1704/1704

PHASE                                          ITEMS
Package State Update Phase                       1/1
Image State Update Phase                         2/2

Next, we'll create our configuration package. Perhaps the only thing we want to change is the default port that Squid listens on. Let's write a new squid.conf file that uses port 8080 instead of 3128.

Our original squid configuration shows:

$ grep 3128 /etc/squid/squid.conf
# Squid normally listens to port 3128
http_port 3128

We'll write our new configuration:

$ mkdir -p squid-conf-proto/etc/squid
$ cat /etc/squid/squid.conf | sed -e 's/3128/8080/g' \
    > squid-conf-proto/etc/squid/squid.conf
$ grep 8080 squid-conf-proto/etc/squid/squid.conf
# Squid normally listens to port 8080
http_port 8080

Now, we'll create a package for the file, and make it depend on our Squid package. Since the Squid package already delivers the dir action needed for etc/squid, this package just delivers the file action for our new squid.conf.

$ cat > squid-conf.mf
set name=pkg.fmri value=config/web/proxy/squid-configuration@1.0
set name=pkg.summary value="My Company Inc. Default squid.conf settings"
file path=etc/squid/squid.conf owner=root group=webservd mode=0644 \
    overlay=true preserve=renameold
depend type=require fmri=web/proxy/squid@3.1.8

Notice that we have specified overlay=true to indicate that this action should overlay any existing file, and have specified preserve=renameold to indicate that we want the old file renamed if one exists.

$ pkgsend -s myrepository publish -d squid-conf-proto squid-conf.mf

We can now install this package to our system, and check to make sure our changes have appeared:

$ pfexec pkg install squid-configuration
           Packages to install:  1
       Create boot environment: No
Create backup boot environment: No

DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  1/1         1/1      0.0/0.0

PHASE                                        ACTIONS
Install Phase                                    4/4

PHASE                                          ITEMS
Package State Update Phase                       1/1
Image State Update Phase                         2/2

The following unexpected or editable files and directories were
salvaged while executing the requested package operation; they
have been moved to the displayed location in the image:

  etc/squid/squid.conf -> /var/pkg/lost+found/etc/squid/squid.conf-20111108T071810Z

$ grep 8080 /etc/squid/*
/etc/squid/squid.conf:# Squid normally listens to port 8080
/etc/squid/squid.conf:http_port 8080
$ pkg list squid squid-configuration
NAME (PUBLISHER)                                  VERSION                    IFO
config/web/proxy/squid-configuration              1.0                        i--
web/proxy/squid                                   3.1.8-    i--


This was a pretty simple case - we modified an existing package so that a single configuration package could deliver a change to one of its files.

This wasn't really self-assembly per se, since the configuration is still hard-coded, but it is a common use-case, and provides a good introduction to our next example.

However, what happens if we want to deliver a further change to this file, from another package? Trying the same approach again, creating a new package "pkg:/config/web/proxy/squid-configuration-redux" and then trying to install it, we see:

$ pkgsend -s myrepository publish -d squid-conf-proto squid-conf-redux.mf

$ pfexec pkg install squid-configuration-redux
Creating Plan |
pkg install: The following packages all deliver file actions to etc/squid/squid.conf:


These packages may not be installed together. Any non-conflicting set may
be, or the packages must be corrected before they can be installed.

So IPS allows only one such configuration package to be installed at a time. We'll uninstall our configuration package, revert the old squid.conf content, then install our new configuration package:

$ pfexec pkg uninstall squid-configuration
            Packages to remove:  1
       Create boot environment: No
Create backup boot environment: No

PHASE                                        ACTIONS
Removal Phase                                    3/3

PHASE                                          ITEMS
Package State Update Phase                       1/1
Package Cache Update Phase                       1/1
Image State Update Phase                         2/2

$ pfexec pkg revert /etc/squid/squid.conf
            Packages to update:  1
       Create boot environment: No
Create backup boot environment: No

DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  1/1         1/1      0.0/0.0

PHASE                                        ACTIONS
Update Phase                                     1/1

PHASE                                          ITEMS
Image State Update Phase                         2/2
$ pfexec pkg install squid-configuration-redux
           Packages to install:  1
       Create boot environment: No
Create backup boot environment: No

DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  1/1         1/1      0.0/0.0

PHASE                                        ACTIONS
Install Phase                                    4/4

PHASE                                          ITEMS
Package State Update Phase                       1/1
Image State Update Phase                         2/2

The following unexpected or editable files and directories were
salvaged while executing the requested package operation; they
have been moved to the displayed location in the image:

  etc/squid/squid.conf -> /var/pkg/lost+found/etc/squid/squid.conf-20111108T072930Z

We see that the new configuration file has been installed.

In the next post in this series, we'll provide a more complex example of self-assembly.

Filed under: IPS, OpenSolaris

09 Nov 2011 3:02pm GMT

Blog O' Matty: Configuring wget to use a proxy server

Periodically I need to download files on servers that aren't directly connected to the Internet. If the server has wget installed I will usually execute it passing it the URL of the resource I want to retrieve: $ wget prefetch.net/iso.dvd If the system resides behind a proxy server the http_proxy variable needs to be set [...]

09 Nov 2011 12:22pm GMT

Tim Foster: Self assembly – Part 2: multiple packages delivering configuration

Original image by bre pattis


In the previous post in this series, we showed how it was possible to take a single package and republish it, such that other packages could overwrite a default configuration file.

The example we used was the Squid web proxy, allowing configuration packages to overwrite /etc/squid/squid.conf with new contents.

There was a limitation using that approach: only one package could contribute to that configuration at a time, replacing the entire shipped configuration.

Recall that we define self-assembly in Chapter 1 of the IPS Developer Guide as:

Any collection of installed software on a system should be able to build itself into a working configuration when that system is booted, by the time the packaging operation completes, or at software runtime.

In this post, we'll cover a more advanced case than last time: true self-assembly, where the configuration can be delivered by multiple add-on packages, if necessary. In particular, we'll continue to talk about Squid, a package that isn't normally capable of self-assembly, and will show how we fix that.

How does self-assembly work?

The main premise of self-assembly is that an application's configuration must be built from a composed view of all fragments of the entire configuration present on the system. That can be done either by the application itself, in which case nothing else is required on the part of the application packager, or by an add-on service that assembles the entire configuration file from the delivered fragments.

When a new package delivers another fragment of the configuration, then the application must have its configuration rebuilt to include that fragment.

Similarly, when a fragment is removed from the system, again, the application must have its configuration rebuilt from the remaining fragments on the system.
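To make that composition step concrete, here's a minimal sketch in Python (the function and file names are invented for illustration - this isn't the code any Solaris service actually ships): it rebuilds a configuration file from a master template plus every fragment found in a conf.d-style directory, applied in sorted order so the result doesn't depend on when each fragment was installed or removed.

```python
import os

def compose_config(master, fragment_dir, target):
    """Rebuild 'target' from the 'master' template plus every fragment
    found in 'fragment_dir', applied in sorted (therefore predictable)
    order, so the result is independent of installation order."""
    with open(master) as f:
        config = f.read()
    for name in sorted(os.listdir(fragment_dir)):
        path = os.path.join(fragment_dir, name)
        if not os.path.isfile(path):
            continue  # ignore subdirectories
        with open(path) as f:
            config += "\n# --- fragment: %s ---\n" % name
            config += f.read()
    with open(target, "w") as f:
        f.write(config)
```

Running this from a service's start method gives you exactly the "rebuild on every add or remove" behaviour described above: deleting a fragment and re-running the function produces a configuration without that fragment's contents.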

A good example of self-assembly is in the Solaris package for pkg:/web/server/apache-22. Solaris ships a default httpd.conf file that has an Include directive referencing /etc/apache2/2.2/conf.d.

Packages can deliver a new file to that directory, and use a refresh_fmri actuator that causes the system to automatically refresh the Apache instance after a pkg install or pkg remove operation has completed, causing the webserver to rebuild its configuration.
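A package taking advantage of this might deliver an action like the following (the fragment name is invented for illustration; the actuator points at Apache's SMF service):

```
file path=etc/apache2/2.2/conf.d/myvhost.conf owner=root group=bin mode=0644 \
    refresh_fmri=svc:/network/http:apache22
```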

The reason behind self-assembly is to replace the postinstall, preinstall, preremove, postremove and class action scripts needed by other packaging systems. Install-time scripting was a common source of errors during application packaging, because the scripting had to work correctly in multiple scenarios.

For example, scripts had to run correctly on live systems as well as on alternate root images, during initial installation, upgrade and removal, and regardless of the order in which packages were added.
With IPS, we eliminated those forms of install-time scripting, concentrating on an atomic set of actions (discussed in Chapter 3 of the IPS Developer Guide) that performed common packaging tasks, and allowing for actuators (discussed in Chapter 9 of the IPS Developer Guide) to run during packaging operations.

Actuators enable self-assembly to work on live systems by restarting or refreshing the necessary SMF services. Since those same SMF services also run during boot, we don't need to do anything special when performing operations on alternate images: the next time the image is booted, our self-assembly is completed.

Making Squid self-assembly aware

As in the previous post, we will start by downloading and modifying our Squid package.

This time, we intend to remove the etc/squid/squid.conf file entirely - our self-assembly service will construct this file for us instead. Recall that Squid delivers some of its configuration files with the following actions:

file 7d8f133b331e7460fbbdca593bff31446f8a3bad path=etc/squid/squid.conf \
    owner=root group=webservd mode=0644 preserve=renamenew \
    chash=272ed7f686ce409a121f427a5b0bf75aed0e2095 \
    original_name=SUNWsquid:etc/squid/squid.conf pkg.csize=1414 pkg.size=3409
file 7d8f133b331e7460fbbdca593bff31446f8a3bad \
    path=etc/squid/squid.conf.default owner=root group=bin mode=0444 \
    chash=272ed7f686ce409a121f427a5b0bf75aed0e2095 pkg.csize=1414 pkg.size=3409
file 971681745b21a3d88481dbadeea6ce7f87b0070a \
    path=etc/squid/squid.conf.documented owner=root group=bin mode=0444 \
    chash=b9662e497184c97fff50b1c249a6e153c51432e1 pkg.csize=60605

Since squid.conf.default is already shipped and is identical to the squid.conf file that is also delivered, we can use it as the basis for our self-assembly of the squid.conf configuration file.

We download a copy of the package with the following command:

$ pkgrecv -s http://pkg.oracle.com/solaris/release --raw -d squid-proto squid@3.1.8,5.11-

which pulls the content into the squid-proto directory.

We'll use a series of pkgmogrify(1) transforms to edit the package contents, similar to the ones we used in the previous post. We will remove the file action that delivers squid.conf using a drop transform operation, and will also deliver a new directory, etc/squid/conf.d. Here is the transform file that accomplishes that:

<transform set name=pkg.fmri -> edit value pkg://[^/]+/ pkg://mypublisher/>
<transform file path=etc/squid/squid.conf$ -> drop>
dir path=etc/squid/conf.d owner=root group=bin mode=0755

We can create a new manifest using this transform using pkgmogrify(1):

$ pkgmogrify squid-assembly.mog \
    squid-proto/web%2Fproxy%2Fsquid/3.1.8%2C5.11- \
    > squid-assembly.mf

A self-assembly SMF service

In order for self-assembly to happen during packaging operations, we need to use an actuator discussed in Chapter 9 of the IPS Developer Guide.

The actuator is a special tag on any IPS action that points to an SMF service. The SMF service is made up of two components:

- an SMF manifest, describing the service and its dependencies
- an SMF method script, which does the actual work of assembling the configuration

This self-assembly SMF service is going to be responsible for building the contents of /etc/squid/squid.conf. We'll talk about each component in the following sections:

SMF manifest

This is what the SMF manifest of our self-assembly service looks like:

<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">

<service_bundle type='manifest' name='Squid:self-assembly'>

    <service version='1' type='service'
        name='config/network/http/squid-assembly'>

        <single_instance />

        <dependency name='fs-local' grouping='require_all'
            restart_on='none' type='service'>
            <service_fmri value='svc:/system/filesystem/local:default' />
        </dependency>

        <dependent name='squid-assembly_self-assembly-complete'
            grouping='optional_all' restart_on='none'>
            <service_fmri value='svc:/milestone/self-assembly-complete' />
        </dependent>

        <instance enabled='true' name='default'>

            <exec_method type='method' name='start'
                exec='/lib/svc/method/squid-self-assembly'
                timeout_seconds='60' />

            <exec_method type='method' name='stop' exec=':true'
                timeout_seconds='60' />

            <property_group name='startd' type='framework'>
                <propval name='duration' type='astring' value='transient' />
            </property_group>

        </instance>
    </service>
</service_bundle>

This defines a service instance that we intend to use whenever we deliver new configuration file fragments to the system.

For that to happen, any configuration file fragment added or removed must include a restart_fmri actuator.

For example, a package might deliver a configuration file fragment:

file path=etc/squid/conf.d/myconfig.conf owner=root group=bin mode=0644 \
    restart_fmri=svc:/config/network/http/squid-assembly:default

The other vital thing needed is an SMF dependency added to the SMF service delivered by the Squid package: we need Squid's own service to depend on our self-assembly service, so that the Squid application can only start once the self-assembly service has finished producing its configuration file.

First, we'll create a proto area for the files we're going to add to our Squid package, and copy the default SMF manifest:

$ mkdir -p squid-assembly-proto/lib/svc/manifest/network
$ cp /lib/svc/manifest/network/http-squid.xml squid-assembly-proto/lib/svc/manifest/network

Next, we edit the http-squid.xml SMF manifest, adding the following:

<!-- Wait for the Squid self-assembly service to complete -->
<dependency name='squid-assembly' grouping='require_all'
    restart_on='none' type='service'>
    <service_fmri value='svc:/config/network/http/squid-assembly:default' />
</dependency>

Now that we've done this, our next step is writing the method script for our self-assembly service.

The SMF method script

We need to write a script such that, when it is run, we end up with /etc/squid/squid.conf containing all changes defined by all configuration fragments installed on the system.

This step can be as simple or complex as you'd like it to be - essentially we're performing postinstall scripting here, but on our terms: we know exactly the environment the script runs in - that of a booted OS in which our package is installed (guaranteed by the depend actions that accompany the package).

Here is a sample script, written in Python (as short as I could make it, so there's very little error checking involved) which copies squid.conf.default to squid.conf, then applies a series of edits to it.

We'll save the script as /lib/svc/method/squid-self-assembly.

#!/usr/bin/python

import shutil
import os
import re
import logging

# define the paths we'll work with
CONF_DIR = "/etc/squid/conf.d"
CONF_FILE = "/etc/squid/squid.conf"
MASTER = "/etc/squid/squid.conf.default"

# verbose logging for now
logging.basicConfig(level=logging.DEBUG)

def apply_edits(fragment):
        """Takes edit operations in the path "fragment", and applies
        them to CONF_FILE in order. The syntax of our config file is
        intentionally basic. We support the following operations:

        # lines that start with a hash are comments
        add <line to add to the config file>
        remove <regular expression to remove>
        """

        squid_config = open(CONF_FILE).readlines()
        squid_config = "".join(squid_config)

        # read our list of operations
        operations = open(fragment, "r").readlines()
        operations = [line.rstrip() for line in operations]
        for op in operations:
                if op.startswith("add"):
                        addition = op[len("add") + 1:]
                        logging.debug("adding line %s" % addition)
                        squid_config += "\n" + addition
                elif op.startswith("remove"):
                        exp = op[len("remove") + 1:]
                        squid_config = re.sub(exp, "", squid_config)
                        logging.debug("removing expression %s" % exp)
                elif op.startswith("#"):
                        # ignore comments in the fragment
                        continue

        conf = open(CONF_FILE, "w")
        conf.write(squid_config + "\n")
        conf.close()

# first, remove any existing configuration
if os.path.exists(CONF_FILE):
        os.unlink(CONF_FILE)

# now copy the master template file in, on
# which all edits are based
shutil.copy(MASTER, CONF_FILE)
os.chmod(CONF_FILE, 0644)

fragments = []
# now iterate through the contents of /etc/squid/conf.d
# looking for configuration fragments, and apply the changes
# we find in a defined order.  We do not look in subdirectories.
for dirpath, dirnames, filenames in os.walk(CONF_DIR):
        fragments = sorted(filenames)
        break

for fragment in fragments:
        logging.debug("  --- applying edits from %s   ---" % fragment)
        apply_edits(os.path.join(CONF_DIR, fragment))

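Before wiring the script into SMF, it's worth convincing yourself of what the add/remove mini-language does in isolation. Here's a self-contained sketch of the same semantics operating on an in-memory string rather than on /etc/squid/squid.conf (the function name is invented; it's purely illustrative):

```python
import re

def apply_ops(config, operations):
    """Apply the add/remove fragment mini-language to 'config', a string
    holding the current configuration. 'operations' is an iterable of
    lines: lines starting with '#' are comments, 'add <line>' appends a
    line, and 'remove <regex>' deletes anything matching the expression."""
    for op in (line.rstrip() for line in operations):
        if not op or op.startswith("#"):
            continue  # blanks and comments are ignored
        if op.startswith("add "):
            config += "\n" + op[len("add "):]
        elif op.startswith("remove "):
            config = re.sub(op[len("remove "):], "", config)
    return config
```

Feeding it the change_http_port.conf operations from later in this post against a config containing "http_port 3128" yields a config containing "http_port 8080" and no trace of the old directive.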
Testing the self-assembly script

We can now test the self-assembly script. For the most part, this testing can be done outside the confines of the pkg(1) command - we simply need to ensure
that our self-assembly script runs properly.

First, we'll check that the squid.conf file isn't present, run the script, then verify that its contents are the same as squid.conf.default:

# ls /etc/squid/squid.conf
/etc/squid/squid.conf: No such file or directory
# /lib/svc/method/squid-self-assembly
# digest -a sha1 /etc/squid/squid.conf.default /etc/squid/squid.conf
(/etc/squid/squid.conf.default) = 7d8f133b331e7460fbbdca593bff31446f8a3bad
(/etc/squid/squid.conf) = 7d8f133b331e7460fbbdca593bff31446f8a3bad

Next, we'll try a simple configuration fragment:

# cat > /etc/squid/conf.d/change_http_port.conf
# The default configuration uses port 3128, our organisation uses 8080
# We'll remove that default, add a comment, and add a http_port directive
remove # Squid normally listens to port 3128
remove http_port 3128
add # Our organisation requires Squid to operate on port 8080
add http_port 8080

Then we'll test the self-assembly script again:

# /lib/svc/method/squid-self-assembly
DEBUG:root:  --- applying edits from change_http_port.conf   ---
DEBUG:root:removing expression # Squid normally listens to port 3128
DEBUG:root:removing expression http_port 3128
DEBUG:root:adding line # Our organisation requires Squid to operate on port 8080
DEBUG:root:adding line http_port 8080

We can verify that the changes have been made:

# grep "port 8080" /etc/squid/squid.conf
# Our organisation requires Squid to operate on port 8080
http_port 8080

Now, we'll add another configuration fragment:

# cat > /etc/squid/conf.d/connect_ports.conf
# We want to allow users to connect to gmail and irc
# over our proxy server.
add # We need to allow access to gmail and irc
add acl Connect_ports port 5222     # gmail chat
add acl Connect_ports port 6667     # irc chat
add http_access allow CONNECT Connect_ports

and see what happens when we run the self-assembly script:

# /lib/svc/method/squid-self-assembly
DEBUG:root:  --- applying edits from change_http_port.conf   ---
DEBUG:root:removing expression # Squid normally listens to port 3128
DEBUG:root:removing expression http_port 3128
DEBUG:root:adding line # Our organisation requires Squid to operate on port 8080
DEBUG:root:adding line http_port 8080
DEBUG:root:  --- applying edits from connect_ports.conf   ---
DEBUG:root:adding line # We need to allow access to gmail and irc
DEBUG:root:adding line acl Connect_ports port 5222     # gmail chat
DEBUG:root:adding line acl Connect_ports port 6667     # irc chat
DEBUG:root:adding line http_access allow CONNECT Connect_ports

Again, we can verify that the edits have been made correctly:

# grep "port 8080" /etc/squid/squid.conf
# Our organisation requires Squid to operate on port 8080
http_port 8080
# egrep gmail\|irc /etc/squid/squid.conf
# We need to allow access to gmail and irc
acl Connect_ports port 5222     # gmail chat
acl Connect_ports port 6667     # irc chat

And finally, we can see what happens if we remove one of our fragments:

# rm /etc/squid/conf.d/connect_ports.conf
# /lib/svc/method/squid-self-assembly
DEBUG:root:  --- applying edits from change_http_port.conf   ---
DEBUG:root:removing expression # Squid normally listens to port 3128
DEBUG:root:removing expression http_port 3128
DEBUG:root:adding line # Our organisation requires Squid to operate on port 8080
DEBUG:root:adding line http_port 8080
# grep "port 8080" /etc/squid/squid.conf
# Our organisation requires Squid to operate on port 8080
http_port 8080
# egrep gmail\|irc /etc/squid/squid.conf

As expected, the configuration file no longer contains the directives configured by connect_ports.conf, since that fragment was removed from the system, but it still contains the changes from change_http_port.conf.

Delivering the SMF service

The bulk of the hard work has been done now - to recap:

- we've modified the Squid package so that it no longer delivers etc/squid/squid.conf, and instead delivers an empty etc/squid/conf.d directory
- we've written an SMF manifest for a self-assembly service, and added a dependency on that service to Squid's own SMF manifest
- we've written and tested a method script that composes squid.conf from squid.conf.default plus the fragments found in /etc/squid/conf.d

All that remains is to ensure that the self-assembly service gets included in the Squid package.

For that, we'll add a few more lines to the pkgmogrify(1) transform that we talked about earlier, so that it looks like:

<transform set name=pkg.fmri -> edit value pkg://[^/]+/ pkg://mypublisher/>
<transform file path=etc/squid/squid.conf$ -> drop>
dir path=etc/squid/conf.d owner=root group=bin mode=0755
file path=lib/svc/method/squid-self-assembly group=bin mode=0555 owner=root
file path=lib/svc/manifest/network/http-squid-assembly.xml group=sys \
    mode=0444 owner=root restart_fmri=svc:/system/manifest-import:default

Now we can transform our original Squid package, and publish it to our repository:

$ pkgmogrify squid-assembly.mog \
    squid-proto/web%2Fproxy%2Fsquid/3.1.8%2C5.11- \
    > squid-assembly.mf
$ pkgsend -s myrepository publish -d squid-assembly-proto \
    -d squid-proto/web%2Fproxy%2Fsquid/3.1.8%2C5.11- \
    squid-assembly.mf
WARNING: Omitting signature action 'signature 2ce2688faa049abe9d5dceeeabc4b17e7b72e792

Installing that package, we discover a svc:/config/network/http/squid-assembly service, and verify that when we drop unpackaged files into /etc/squid/conf.d, and restart the self-assembly service, we see what we expect:

# more /var/svc/log/config-network-http-squid-assembly:default.log
[ Nov  8 12:19:50 Enabled. ]
[ Nov  8 12:19:50 Rereading configuration. ]
[ Nov  8 12:19:50 Executing start method ("/lib/svc/method/squid-self-assembly"). ]
[ Nov  8 12:19:50 Method "start" exited with status 0. ]
[ Nov  8 12:23:42 Stopping because service restarting. ]
[ Nov  8 12:23:42 Executing stop method (null). ]
[ Nov  8 12:23:42 Executing start method ("/lib/svc/method/squid-self-assembly"). ]
DEBUG:root:  --- applying edits from change_port.conf   ---
DEBUG:root:removing expression # Squid normally listens to port 3128
DEBUG:root:removing expression http_port 3128
DEBUG:root:adding line # Our organisation requires Squid to operate on port 8080
DEBUG:root:adding line http_port 8080
[ Nov  8 12:23:42 Method "start" exited with status 0. ]

We have verified that Squid is performing self-assembly perfectly.
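The DEBUG output above shows the assembly script dropping old directives and appending new ones for each fragment. A minimal sketch of that remove-then-add step, written here as a shell function (the fragment file layout and names are illustrative assumptions, not the exact format the article's script uses), might look like:

```shell
# apply_edits CONF REMOVE ADD -- hypothetical simplified sketch:
# drop every line of CONF that matches a pattern listed in REMOVE
# (one egrep-style pattern per line), then append the lines in ADD.
apply_edits() {
    conf=$1; remove=$2; add=$3
    grep -v -E -f "$remove" "$conf" > "$conf.new"
    cat "$add" >> "$conf.new"
    mv "$conf.new" "$conf"      # replace the config file in one step
}
```

With a remove file containing `port 3128` and an add file containing the new `http_port 8080` directive, this produces the same end state the DEBUG log shows.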

Delivering new configuration fragments

Now that we have a service that's capable of performing self-assembly, we need to know how to deliver configuration fragments in new packages.

This is simply a case of delivering config files to /etc/squid/conf.d, and applying the correct actuator tags to the manifest.

An example manifest that delivers this would be:

set name=pkg.fmri value=pkg:/config/web/proxy/squid-configuration@2.0
set name=pkg.summary value="Our organisation's Squid configurations"
file path=etc/squid/conf.d/change_http.conf owner=root group=bin mode=0644 \
    restart_fmri=svc:/config/network/http/squid-assembly:default

When we publish, then install this manifest, we see:

# pkg install squid-configuration@2
           Packages to install:  1
       Create boot environment: No
Create backup boot environment: No
            Services to change:  2

DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                  1/1         1/1      0.0/0.0

PHASE                                        ACTIONS
Install Phase                                    3/3

PHASE                                          ITEMS
Package State Update Phase                       1/1
Image State Update Phase                         2/2

We can quickly verify that the Squid configuration has changed:

$ curl localhost:8080 | grep squid/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2904  100  2904    0     0  1633k      0 --:--:-- --:--:-- --:--:-- 2835k
<p>Generated Tue, 08 Nov 2011 23:00:27 GMT by tcx2250-13 (squid/3.1.8)</p>

And we can back out the configuration by removing the package, and again check that the Squid configuration has changed:

# curl localhost:8080 | grep squid
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (7) couldn't connect to host
# curl localhost:3128 | grep squid/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3140  100  3140    0     0  1779k      0 --:--:-- --:--:-- --:--:-- 3066k
<p>Generated Tue, 08 Nov 2011 23:03:37 GMT by tcx2250-13 (squid/3.1.8)</p>

We won't go into details here, but clearly, multiple packages could deliver
configuration fragments at the same time, and they would all contribute to the
configuration of our service.
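The key requirement is only that the self-assembly service process /etc/squid/conf.d in a deterministic order, so the combined result does not depend on the order in which the fragment-delivering packages were installed. A simplified, append-only sketch of such a method (the article's actual script applies edits rather than appending, and the template filename here is an assumption):

```shell
# assemble_all CONFDIR -- hypothetical append-only sketch: start from a
# pristine template, then append every fragment in conf.d in sorted
# (shell glob) order, so the assembled file is the same regardless of
# package install order.
assemble_all() {
    confdir=$1
    cp "$confdir/squid.conf.default" "$confdir/squid.conf.new"
    for frag in "$confdir"/conf.d/*.conf; do
        [ -f "$frag" ] || continue       # no fragments delivered yet
        cat "$frag" >> "$confdir/squid.conf.new"
    done
    mv "$confdir/squid.conf.new" "$confdir/squid.conf"
}
```

Because the glob expands in sorted order, fragment packages can use numeric prefixes (10-port.conf, 20-cache.conf) to control precedence.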


This has been a pretty fast tour of the self-assembly idiom, but we hope it has been useful and shows how complex configuration operations can be performed with IPS.

There may be more work to do to make the Squid application fully self-assembly aware - we've only covered the main configuration file, and haven't looked at whether we also want to allow the other files in /etc/squid to participate in self-assembly. If we did want to do that, the same approach would apply to each of those files.

Of course, there are other ways in which a self-assembly service could perform edits - we could use SMF to deliver properties to the service, which are then read by a self-assembly script and placed into a configuration file, but perhaps that's an example for another day.
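As a rough sketch of that SMF-properties approach (the property group, property name, and FMRI here are assumptions for illustration; svcprop(1) itself is the standard way to read SMF properties from a script):

```shell
# Hypothetical sketch: generate a config fragment from an SMF property.
# SVCPROP is parameterised so the sketch can be exercised without a live
# SMF repository; on a real system it is just svcprop(1).
SVCPROP=${SVCPROP:-svcprop}

# emit_port_fragment FMRI OUTFILE -- read config/http_port from the
# given service instance and write it out as a squid directive.
emit_port_fragment() {
    fmri=$1; out=$2
    port=$($SVCPROP -p config/http_port "$fmri") || return 1
    printf 'http_port %s\n' "$port" > "$out"
}

# On a live system, assuming the property had first been set, e.g.:
#   svccfg -s svc:/network/http:squid setprop config/http_port = count: 8080
# the start method could then call:
#   emit_port_fragment svc:/network/http:squid /etc/squid/conf.d/smf_port.conf
```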

Filed under: IPS, OpenSolaris

09 Nov 2011 10:08am GMT

Gerry Haskins: Solaris 11 released, 2 days early

Today, we launch Solaris 11 in New York City.

Work on Solaris 11 started 7 years ago, as soon as Solaris 10 reached "code freeze".

About 6 years ago, in a Solaris P-Team (Product Team) meeting, someone raised the oft-asked question of when we planned to release Solaris 11.

A slightly exasperated Jeff Jackson said 11/11/11, half jokingly, half seriously. It made sense. It was in the right ballpark considering all the radical changes the architects wanted to make in Solaris 11. And what better date to launch Solaris 11?

175 bi-weekly builds and two release candidate respins later, we're releasing Solaris "Nevada" build snv_175b, officially known as Solaris 11. But 2 days early. Ooops! I must admit to having been tempted to file a "Stopper" bug to cause enough of a smoke screen to delay the release by two days. But early is good. So 11/9/2011 it is.

The Solaris 11 Tech Lead, David Comay, has posted some excuses - er, I mean "reasons" - on his blog as to why we're releasing 2 days early. See http://blogs.oracle.com/solaris for further information.

Having arrived in New York Tuesday afternoon, I went to the 9/11 memorial to pay my respects.

May I just say, well done New York! Well done America!

It's a truly excellent and moving memorial. The sound of the water falling and the patterns it makes as it falls into the abyss in the center of the very footprint where the twin towers stood is poignant symbolism. It's impossible not to be moved.

And the fact that all around the memorial is still a construction site, with all the sounds of rebuilding what was destroyed, is very apt indeed.

Evil will not triumph. Good will overcome.

It puts our humble efforts in stark perspective.

I hope you enjoy Solaris 11. It's our most radical Solaris release since SunOS 2.0. Virtualization built in. Cloud built in. Architected for maintainability. Scalability beyond your imagination (and mine!).

I'll be presenting an updated version of my Solaris 11 Customer Maintenance Lifecycle presentation at the DOAG (Deutsche Oracle Anwender Gruppe) Conference in Nuremberg, Germany, next week. I hope to meet some of you there.

I'll then post the presentation here on my blog.

Let the fun begin! Enjoy!

Best Wishes,


09 Nov 2011 2:55am GMT

Bart Smaalders: New York

I'm in New York this week, visiting Solaris customers and preparing for tomorrow's launch of Solaris 11.

As readers of my occasional blog may know, I've been working on IPS, the new packaging system used in Solaris 11. We've recently finished the first version of the developer's guide for IPS. For those folks interested in how to use IPS to deliver their own software, or who just want to better understand how Solaris uses IPS, we hope the developer's guide will be useful reading.

You can find the new guide here.

For those of you interested in attending the live webcast, or who happen to be in the area and want to join us at Gotham Hall in New York City, the registration link is here.

09 Nov 2011 2:00am GMT

08 Nov 2011

feedPlanet filibeto

Steve Tunstall: Mobile app for Oracle Support

So many of you use MOS, and like to track your service tickets, etc.

Did you know that there are mobile apps for both the iPhone and the Droid that allow you to interface with MOS on the go?

Check this out:


**Update: I have a Droid, and it seems the MOS link is only in the iPhone app, not the Droid app. At least, I sure can't seem to find it on mine. Disappointing news. I will let everyone know if I find it or when it becomes available on the Droid.

08 Nov 2011 6:46pm GMT

Henrik Johansson: Solaris 11 release and webcast

Solaris 11 will be released 2011/11/09 (2011/11/11 was not optimal for some reason).

Oracle will host a launch event in New York, and you can register to attend the live webcast.

Even though I have abandoned Solaris 11 for OpenIndiana for storage-related installations, Solaris 11 has its obvious place on bigger iron in the datacenter, or for any mission-critical workload that needs enterprise support. I would gladly have continued to use Solaris 11 for storage, but the change made by Oracle to ditch the community and move to closed source and stricter licensing prevents that.

This will make fantastic features such as Crossbow, IPS, native CIFS and COMSTAR available for use in production environments. Many enhancements have also been made to zones; for example, they can be NFS servers in Solaris 11.

Also, if you want to make the most of the new SPARC T4, Solaris 11 is the best choice, since not every change applicable to the T4 has been ported back to Solaris 10 8/11.

If you pay for support of Solaris 11, please demand that Oracle gives you access to the source; DTrace will lose its value otherwise, and I think Oracle needs to hear that.

Oracle Solaris 11 Launch webcast

08 Nov 2011 12:14pm GMT

07 Nov 2011

feedPlanet filibeto

Blog O' Matty: Four super cool utilities that are part of the psmisc package

There are a ton of packages available for the various Linux distributions. Some of these packages aren't as well known as others, though they contain some crazy awesome utilities. One package that fits into this category is psmisc. Psmisc contains several tools that can be used to print process statistics, look at file descriptor activity, see [...]

07 Nov 2011 5:14pm GMT