22 Dec 2014

feedPlanet Ubuntu

Joe Liau: Dumb is Dead: Snappy is the New Smart

Keyword: "bulletproof" (Source)

"bulletproof" (Source)

Once thought to be a "smart" device, the dumb telephone is now a thing of the past. But, if we look closely we might see a hero arise from the ashes:

Unnecessary
Balderdash
Unintelligible
Nostalgia
Touch-Heavy
Ulterior Movies

Ubuntu could truly save our phones, and it's up to us to make sure that it happens. It's fun to dream up ideas, and it's even more fun to contribute towards seeing those ideas in action. We can then start to see these dreams become a reality. We can make things better, and we may even see some truly smart devices. Or, is "smart" now a thing of the past as well?

I must say that "snappy" was never part of my personal lexicon, but I think that's why it will be effective. There's no baggage attached to the word. Just like we have seen a departure from "loco", we can start to move away from other concepts of the old world and begin to create something fresh. Ubuntu has always been about positive change rather than another flavor of the past.

Let's continue to make Ubuntu this way. Stay snappy, my friends.


22 Dec 2014 10:06pm GMT

The Fridge: Ubuntu Weekly Newsletter Issue 397

Welcome to the Ubuntu Weekly Newsletter. This is issue #397 for the week December 15 - 21, 2014, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

22 Dec 2014 9:08pm GMT

Ronnie Tucker: Dota 2 Runs Natively on Mir with the Same Performance as X11

Canonical has been working on the Mir display server for some time, although most of their efforts have gone towards the mobile platform. They are now looking to optimize it for desktop use, and nothing reflects the progress made better than a famous game running on Mir.

Mir is already working on the desktop, but users need to have the open source video drivers in order to make it work. Canonical has recently built a new flavor called Ubuntu Next which features Unity 8 and the Mir display server. The new desktop environment needs Mir, so it stands to reason that the updated DE will arrive for regular users when Mir is also ready. It's not there yet, but it's taking great strides.

Source:

http://news.softpedia.com/news/Dota-2-Runs-Natively-on-Mir-with-the-Same-Performance-as-X11-466662.shtml

Submitted by: Silviu Stahie

22 Dec 2014 3:39pm GMT

Colin King: Controlling data flow using sluice

Earlier this year I was instrumenting wifi power consumption and needed a way to produce and also consume data at a specific rate to pipe over netcat for my measurements. I had some older, crufty code around to do this, but it needed some polishing up, so I eventually got around to turning it into a more usable tool called "sluice". (A sluice gate controls the flow of water; the sluice tool controls the rate of data through a pipe.)

The sluice package is currently available in PPA:colin-king/white and is built for Ubuntu Trusty, Utopic and Vivid, but the git repository and tarballs are available too if one wants to build it from source.

The following starts a netcat 'server' to send largefile at a rate of 1MB a second using 1K buffers and reports the transfer stats to stderr with the -v verbose mode enabled:

cat largefile | sluice -r 1MB -i 1K -v | nc -l 127.0.0.1 1234 


Sluice also allows one to adjust the read/write buffer sizes dynamically to try to avoid buffer underflow or overflows while trying to match the specified transfer rate.

Sluice can be used as a data sink on the receiving side and also has an option to throw the data away if one just wants to stream data and test the data link rates, e.g., get data from somehost.com on port 1234 and throw it away at 2MB a second:

nc somehost.com 1234 | sluice -d -r 2MB -i 8K


And finally, sluice has a "tee" mode, where data is copied to stdout and also to a specified output file using the -t option, as sketched below.
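
A minimal sketch of that tee mode, based on the description above and assuming -t takes the output file name as its argument (the file name and rate here are just examples): largefile is streamed over netcat at 1MB a second while a local copy is also written:

cat largefile | sluice -r 1MB -t copy-of-largefile | nc -l 127.0.0.1 1234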

For more details, refer to the sluice project page.

22 Dec 2014 3:23pm GMT

Riccardo Padovani: Ubuntu Phone seen by my friends

A few days ago OMG! Ubuntu! announced the release date of the first Ubuntu Phone. That night I was in a pub with some friends, so I told them about the release and the price of the phone.

All the friends present that night study subjects at university that have nothing to do with tech. They only know about Ubuntu because I'm involved.

So they wanted to try the system, and I was more than happy to show them my phone. It's a Nexus 4 running RTM #12, and I use it as my everyday phone, so it has lots of apps and stuff on it.

All the feedback was more than positive, mainly on two topics:

Not everything was so good; there were also two pieces of negative feedback:

Anyway, I'm more than happy with the results: Ubuntu Phone seems ready for everyone, and this is a good point to start building from. We simply have to keep going this way, and we will change the world, bit by bit, mind by mind.

Here are my friends trying Ubuntu on the Nexus 4:

[Photos: friends trying Ubuntu on the Nexus 4]

Did you like the post? Consider making a donation.

Ciao,
R.

22 Dec 2014 12:55pm GMT

21 Dec 2014

feedPlanet Ubuntu

Costales: U-Day - 47. Launch event for the first Ubuntu phone

There are 47 days left before a new team enters the league of stars. And those days will surely fly by.

I think Michael Hall's position is spot on: reaching just 1% of the mobile phone market would be a milestone, and would comfortably sustain the whole of Canonical financially.

This will be the BQ phone with Ubuntu that goes on sale in February

These days, with dozens of manufacturers, what decides who wins the mobile phone league is not who has the best hardware; it will be won by whoever has the best software.
And a phone with Ubuntu will stand out for several unique traits that could make it a champion: its spur will be that freedom we seek so much, it will squeeze the hardware by avoiding running through a virtual machine, an app developed for the phone will run perfectly integrated on the desktop, and, while we're dreaming, perhaps the fantastic Ubuntu for Android project will rise from its ashes.


From the 6th of February there will be a before and an after for the Linux world... We are new and we are rookies, but this league has already seen true giants fall...
It's time to learn from the champions and carve our own path towards a goal that is difficult, but not impossible.
Let's take the field, team!!!

Together!


Keep reading on my blog about Ubuntu Phone.

21 Dec 2014 6:39pm GMT

Ronnie Tucker: Year End Core Apps Hack Days Announced for Ubuntu Touch

Canonical is looking to improve the core apps that are already available for Ubuntu Touch and is organizing a new Core Apps Hack Days event that should galvanize the efforts of more developers towards this platform.

Native apps are what Ubuntu Touch needs more than anything: the core team can only take care of the operating system, while the rest of the ecosystem has to come from third-party developers who carry the journey the rest of the way.

The guys and gals who build Ubuntu Touch do provide a number of apps, like the Gallery or the Browser, but they can't spread their efforts in all directions. This is where Core Apps Hack Days comes into play.

Source:

http://news.softpedia.com/news/Year-End-Core-Apps-Hack-Days-Announced-for-Ubuntu-Touch-466699.shtml

Submitted by: Silviu Stahie

21 Dec 2014 3:38pm GMT

Randall Ross: Why Smart Phones Aren't - Reason #6

You dutifully inform me that I have a voice message waiting. You carefully protect me from some unknown threat by forcing me to type in a voice mail password that I can never remember, *every* time. You eat my minutes to hear someone say "Call me back, blah, blah, blah."

"Smart" phone, why are you wasting my time?

You see, I never wanted your voice mail anti-feature. You assumed I did. I gave you my voice mail password more than a few times. Why did you not remember it? Why can you not just inform people that there are better ways to communicate?

"Smart" phone, you have not progressed since the '90's. I'm tempted to dump you.

In fact, I already have plans...

In my lifetime I will actually see a phone that is truly smart. When the Ubuntu Phone arrives, the world will have the means to finally fix this and other issues once and for all.

"Smart" phone, your days are numbered.

--

Image: Ribbit, CC BY-NC 2.0
https://www.flickr.com/photos/ribbitvoice/

21 Dec 2014 12:24am GMT

Riccardo Padovani: New blog

Hey folks, after a long time I'm back to write something about the changes I made to my blog, and how I plan to use it in the coming months.

From Wordpress to Jekyll

So, first thing: I moved away from Wordpress! I find Wordpress annoying and too big for what I need: a place where I can write things down, without anything else.

I stopped writing because I found it a pain to write a new post in Wordpress: the editor isn't great, and the admin interface is very slow.

Some months ago an Italian tech blogger, Alessio Biancalana, moved from Wordpress to Jekyll. I was curious, so I tried it, and it's awesome! The only thing you need is a text editor and BOOM, you're ready.

Also, you write posts in Markdown, and Markdown rocks! The site is hosted on GitHub Pages, so my workflow is definitely better: write a post, git add, git commit, git push, and GitHub takes care of the rest! A sketch of that workflow is below.
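
A minimal sketch of the workflow, assuming a user-pages repository published from the master branch (the post file name is just an example):

cd my-blog
# write the post in _posts/ with any text editor, then:
git add _posts/2014-12-21-new-blog.md
git commit -m "Add new post"
git push origin master    # GitHub Pages rebuilds and publishes the site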

Contents

OK, so now I have a new blog, and it's cool. But what about content? A blog without content is useless!

Well, I want to try to use the blog to post all the things I used to post on Google+. In recent months I did a lot of things (attended a Canonical sprint, wrote a scope for Ubuntu for Phones, improved bookmarks in the Ubuntu Browser, was invited to the launch of the first Ubuntu smartphone, and so on) but I wrote about them only on Google+, and that's a shame. Google+ is cool for a lot of things, but your content belongs to it, anyone without an account can't read it, and things fade away after a few days.

My plan is to write here all the posts I would usually write on G+, so they will also be on the Ubuntu Planet :-)

Comments

As you may have noticed, there isn't a comment section. Well, Jekyll is a static site generator, so all the solutions out there are based on Disqus or some other cloud service. I really don't like them, so at the moment I prefer to avoid implementing comments.

I found a plugin for Jekyll to manage them in a cool way; I'll try to implement it during the Christmas holidays.

Meanwhile, if you have any feedback, send me an email or leave a comment on G+ :-D

Donations

Another thing I added in the restyling of the website is a donation page. I'm a university student and I don't have any income, so please help me to have the free time to do what I love most: helping others by developing. The first donations will be used to buy a VPS to host this blog (see below).

Privacy

I have talked a lot about privacy in previous posts, yet now I've moved my content from my own hosting to GitHub. What?

This is because I like Jekyll very much, and to use it I need Ruby, and I don't have a VPS to install it on. So I prefer a blog I'm happy to write for, hosted on a GitHub server, rather than a blog without content hosted on my own server.

But if I collect some money through donations I'll buy a VPS, so I can also manage a mail server - at the moment I use a European provider, OVH, but I want full control over my mail.

So, that's all for now, and I hope this is the first of a long series of posts.

Enjoy your holidays!

21 Dec 2014 12:00am GMT

20 Dec 2014

feedPlanet Ubuntu

Costales: Dear Santa Claus...

I know, I know, you can't leave an awesome Ubuntu Phone yet... ;)

But... What about a nice accessory now?


Or maybe... could you come back this February? >:P

20 Dec 2014 10:56am GMT

19 Dec 2014

feedPlanet Ubuntu

Ubuntu Podcast from the UK LoCo: S07E38 – The Last One

We're back with the final episode of Season Seven: It's Episode Thirty-eight of the Ubuntu Podcast! Alan Pope, Laura Cowen, Tony Whitmore and Mark Johnson are all present with mince pies and brandy butter to bring you this episode.

Download OGG Download MP3 Play in Popup

In this week's show:

That's all for this season, but while we are off the air, please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

19 Dec 2014 5:00pm GMT

Oli Warner: Streaming your Kodi library over the Internet

Even if you're leaving your media centre behind while you travel this Christmas, you don't have to be without all your media. I'm going to show you that in just a few steps you can access all your TV and movies remotely on an Android device over a nice, secure SSH connection.

I recorded a short demo showing me playing a video from Kodi over HSDPA. It's boring to watch but it's proof this is possible.

Before we get too carried away, you need a few things:

If you have all that, let's get started.

Install a SSH server on your Kodi machine

So you've got a media centre running Ubuntu. If you haven't already, let's install the SSH server:

sudo apt-get install ssh

Before we make this accessible on the internet, we need to make the SSH server secure. Use key-based auth, disable password auth, use a non-standard port, install fail2ban, etc. Do not skip this; a quick sketch of these steps follows.
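
A minimal sketch of that hardening, assuming a non-standard port of 12345 (pick your own); the sshd_config directives shown are standard OpenSSH options:

sudo apt-get install fail2ban
# in /etc/ssh/sshd_config set, for example:
#   Port 12345
#   PasswordAuthentication no
#   PermitRootLogin no
sudo service ssh restart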

If SSH can be insecure, why are we using it at all? We could just forward ports through to Kodi... But I just wouldn't want to risk exposing it directly to the internet. OpenSSH has millions of active deployments. It's battle-tested in a way Kodi could only dream of. I trust it.

Expose SSH to the internet

Almost all of you will be behind a NAT router. These split a public IP into a subnetwork of private IPs. One side effect is computers on the outside can't directly address internal computers. We need to tell the router where to send our connection when we try to log in with SSH.

The process is subtly different for every router but if you don't know what you're doing, there's a guide for just about every router on portforward.com. We just want to forward the port you assigned to your SSH server when hardening it above.

Now you can connect to your public IP from outside the network on your SSH port. However, most consumer routers won't do port forwarding from inside the internal network. You'll need another connection to test it, or you could use an online port tester to probe your SSH port, as in the example below.
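
For example, from a machine outside your network, a quick probe with netcat will tell you whether the forward works (the IP and port here are placeholders for your public IP and chosen SSH port):

nc -zv 203.0.113.45 12345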

Using dynamic DNS to keep track of our network

If you have a static IP, skip this, but most home connections are assigned an IP address that changes frequently. This makes it hard to know where we're going to SSH to once we're outside the network. But we can use a "dynamic DNS" service to make sure there's a domain name that always points to our external IP address.

No-IP has a free service and a Linux client. There are many other services out there.

By the end of this step you should have a domain name (e.g. myaccount.no-ip.info) and something running regularly on the media centre that keeps this DNS record updated with our latest IP. One possible setup is sketched below.
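
A rough sketch, assuming No-IP's noip2 client (the package name and flags are from memory; check your provider's documentation for the exact details):

sudo apt-get install noip2
sudo noip2 -C    # interactive setup: account details and update interval
sudo noip2       # run the update daemon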

Install Android packages

We need a few things on our client phone:

ConnectBot is completely free while Yaste and MX Player both have free versions with unlockable features. You shouldn't need to pay any money to test this out though I do recommend paying for Yaste because it's that good.

Connect to SSH and set up our tunnels

We'll start in ConnectBot by generating a keypair. This is what will allow us to log into the SSH server. This guide has the full process but in short: generate a keypair, email yourself the public key and copy that into ~/.ssh/authorized_keys2 on the server (sketched below).
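
On the server side that last step looks roughly like this, assuming you saved the emailed public key as connectbot.pub in your home directory:

mkdir -p ~/.ssh
cat ~/connectbot.pub >> ~/.ssh/authorized_keys2
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys2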

Then we can create a new connection. On the ConnectBot home screen just punch in user@myaccount.no-ip.info:12345 (obviously replacing each bit with your actual username, domain and port respectively). Assuming that all works, disconnect, long press the new connection on the home screen and select "Edit port forwards". We want to add two ports:

Reconnect and leave it in the background. We'll connect to this now with Yaste.

Create a Yaste host using our tunnels

Open Yaste and open the Host Manager. Create a new host. It probably won't detect the tunnels so skip the wizard. When asked, use localhost as the IP and 8080 as the port. It will test the connection before it lets you add it.

Sync your library (long press the item on the sidebar), select the "Play locally" toggle and then you'll be able to stream things over the internet! It may buffer a little if you're on a slow connection but it should work. Alternatively, you can download files from the Kodi server using Yaste which might be a little more predictable on a spiky connection.

19 Dec 2014 4:55pm GMT

Dustin Kirkland: AWSnap! Snappy Ubuntu Now Available on AWS!


Awww snap!

That's right! Snappy Ubuntu images are now on AWS, for your EC2 computing pleasure.

Enjoy this screencast as we start a Snappy Ubuntu instance in AWS, and install the xkcd-webserver package.


And a transcript of the commands follows below.

kirkland@x230:/tmp⟫ cat cloud.cfg
#cloud-config
snappy:
  ssh_enabled: True
kirkland@x230:/tmp⟫ aws ec2 describe-images \
> --region us-east-1 \
> --image-ids ami-5c442634

{
"Images": [
{
"ImageType": "machine",
"Description": "ubuntu-core-devel-1418912739-141-amd64",
"Hypervisor": "xen",
"ImageLocation": "ucore-images/ubuntu-core-devel-1418912739-141-amd64.manifest.xml",
"SriovNetSupport": "simple",
"ImageId": "ami-5c442634",
"RootDeviceType": "instance-store",
"Architecture": "x86_64",
"BlockDeviceMappings": [],
"State": "available",
"VirtualizationType": "hvm",
"Name": "ubuntu-core-devel-1418912739-141-amd64",
"OwnerId": "649108100275",
"Public": false
}
]
}
kirkland@x230:/tmp⟫
kirkland@x230:/tmp⟫ # NOTE: This AMI will almost certainly have changed by the time you're watching this ;-)
kirkland@x230:/tmp⟫ clear
kirkland@x230:/tmp⟫ aws ec2 run-instances \
> --region us-east-1 \
> --image-id ami-5c442634 \
> --key-name id_rsa \
> --instance-type m3.medium \
> --user-data "$(cat cloud.cfg)"
{
"ReservationId": "r-c6811e28",
"Groups": [
{
"GroupName": "default",
"GroupId": "sg-d5d135bc"
}
],
"OwnerId": "357813986684",
"Instances": [
{
"KeyName": "id_rsa",
"PublicDnsName": null,
"ProductCodes": [],
"StateTransitionReason": null,
"LaunchTime": "2014-12-18T17:29:07.000Z",
"Monitoring": {
"State": "disabled"
},
"ClientToken": null,
"StateReason": {
"Message": "pending",
"Code": "pending"
},
"RootDeviceType": "instance-store",
"Architecture": "x86_64",
"PrivateDnsName": null,
"ImageId": "ami-5c442634",
"BlockDeviceMappings": [],
"Placement": {
"GroupName": null,
"AvailabilityZone": "us-east-1e",
"Tenancy": "default"
},
"AmiLaunchIndex": 0,
"VirtualizationType": "hvm",
"NetworkInterfaces": [],
"SecurityGroups": [
{
"GroupName": "default",
"GroupId": "sg-d5d135bc"
}
],
"State": {
"Name": "pending",
"Code": 0
},
"Hypervisor": "xen",
"InstanceId": "i-af43de51",
"InstanceType": "m3.medium",
"EbsOptimized": false
}
]
}
kirkland@x230:/tmp⟫
kirkland@x230:/tmp⟫ aws ec2 describe-instances --region us-east-1 | grep PublicIpAddress
"PublicIpAddress": "54.145.196.209",
kirkland@x230:/tmp⟫ ssh -i ~/.ssh/id_rsa ubuntu@54.145.196.209
ssh: connect to host 54.145.196.209 port 22: Connection refused
255 kirkland@x230:/tmp⟫ ssh -i ~/.ssh/id_rsa ubuntu@54.145.196.209
The authenticity of host '54.145.196.209 (54.145.196.209)' can't be established.
RSA key fingerprint is 91:91:6e:0a:54:a5:07:b9:79:30:5b:61:d4:a8:ce:6f.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '54.145.196.209' (RSA) to the list of known hosts.
Welcome to Ubuntu Vivid Vervet (development branch) (GNU/Linux 3.16.0-25-generic x86_64)

* Documentation: https://help.ubuntu.com/

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Welcome to the Ubuntu Core rolling development release.

* See https://ubuntu.com/snappy

It's a brave new world here in snappy Ubuntu Core! This machine
does not use apt-get or deb packages. Please see 'snappy --help'
for app installation and transactional updates.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@ip-10-153-149-47:~$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=1923976k,nr_inodes=480994,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=385432k,mode=755)
/dev/xvda1 on / type ext4 (ro,relatime,data=ordered)
/dev/xvda3 on /writable type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,mode=755)
tmpfs on /etc/fstab type tmpfs (rw,nosuid,noexec,relatime,mode=755)
/dev/xvda3 on /etc/systemd/system type ext4 (rw,relatime,discard,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset,clone_children)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
tmpfs on /etc/machine-id type tmpfs (ro,relatime,size=385432k,mode=755)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/xvda3 on /etc/hosts type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/sudoers.d type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /root type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/click/frameworks type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /usr/share/click/frameworks type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/systemd/snappy type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/systemd/click type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/initramfs-tools type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/writable type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/ssh type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/tmp type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/apparmor type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/cache/apparmor type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/apparmor.d/cache type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /etc/ufw type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/log type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/system-image type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /var/lib/sudo type tmpfs (rw,relatime,mode=700)
/dev/xvda3 on /var/lib/logrotate type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/dhcp type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/dbus type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/cloud type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /var/lib/apps type ext4 (rw,relatime,discard,data=ordered)
tmpfs on /mnt type tmpfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,relatime)
/dev/xvda3 on /apps type ext4 (rw,relatime,discard,data=ordered)
/dev/xvda3 on /home type ext4 (rw,relatime,discard,data=ordered)
/dev/xvdb on /mnt type ext3 (rw,relatime,data=ordered)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=385432k,mode=700,uid=1000,gid=1000)
ubuntu@ip-10-153-149-47:~$ mount | grep " / "
/dev/xvda1 on / type ext4 (ro,relatime,data=ordered)
ubuntu@ip-10-153-149-47:~$ sudo touch /foo
touch: cannot touch '/foo': Read-only file system
ubuntu@ip-10-153-149-47:~$ sudo apt-get update
Ubuntu Core does not use apt-get, see 'snappy --help'!
ubuntu@ip-10-153-149-47:~$ sudo snappy --help
Usage:snappy [-h] [-v]
{info,versions,search,update-versions,update,rollback,install,uninstall,tags,build,chroot,framework,fake-version,nap}
...

snappy command line interface

optional arguments:
-h, --help show this help message and exit
-v, --version Print this version string and exit

Commands:
{info,versions,search,update-versions,update,rollback,install,uninstall,tags,build,chroot,framework,fake-version,nap}
info
versions
search
update-versions
update
rollback undo last system-image update.
install
uninstall
tags
build
chroot
framework
fake-version ==SUPPRESS==
nap ==SUPPRESS==
ubuntu@ip-10-153-149-47:~$ sudo snappy info
release: ubuntu-core/devel
frameworks:
apps:
ubuntu@ip-10-153-149-47:~$ sudo snappy versions -a
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
ubuntu@ip-10-153-149-47:~$ sudo snappy search docker
Part Version Description
docker 1.3.2.007 The docker app deployment mechanism
ubuntu@ip-10-153-149-47:~$ sudo snappy install docker
docker 4 MB [=============================================================================================================] OK
Part Tag Installed Available Fingerprint Active
docker edge 1.3.2.007 - b1f2f85e77adab *
ubuntu@ip-10-153-149-47:~$ sudo snappy versions -a
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
docker edge 1.3.2.007 - b1f2f85e77adab *
ubuntu@ip-10-153-149-47:~$ sudo snappy search webserver
Part Version Description
go-example-webserver 1.0.1 Minimal Golang webserver for snappy
xkcd-webserver 0.3.1 Show random XKCD compic via a build-in webserver
ubuntu@ip-10-153-149-47:~$ sudo snappy install xkcd-webserver
xkcd-webserver 21 kB [=====================================================================================================] OK
Part Tag Installed Available Fingerprint Active
xkcd-webserver edge 0.3.1 - 3a9152b8bff494 *
ubuntu@ip-10-153-149-47:~$ exit
logout
Connection to 54.145.196.209 closed.
kirkland@x230:/tmp⟫ ec2-instances
i-af43de51 ec2-54-145-196-209.compute-1.amazonaws.com
kirkland@x230:/tmp⟫ ec2-terminate-instances i-af43de51
INSTANCE i-af43de51 running shutting-down
kirkland@x230:/tmp⟫


Cheers!
Dustin

19 Dec 2014 2:01pm GMT

The Fridge: Vivid Vervet Alpha 1 Released

"How much wood could a woodchuck chuck if a woodchuck could chuck wood?"
- Guybrush Threepwood, Monkey Island

The first alpha of the Vivid Vervet (to become 15.04) has now been released!

This alpha features images for Kubuntu, Lubuntu, Ubuntu GNOME, UbuntuKylin and Ubuntu Cloud.

Pre-releases of the Vivid Vervet are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Alpha 1 includes a number of software updates that are ready for wider testing. This is quite an early set of images, so you should expect some bugs.

While these Alpha 1 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Vivid Vervet. In particular, once newer daily images are available, system installation bugs identified in the Alpha 1 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 15.04 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Kubuntu

Kubuntu uses KDE software and now features the new Plasma 5 desktop.

The Alpha-1 images can be downloaded at: http://cdimage.ubuntu.com/kubuntu/releases/vivid/alpha-1/

More information on Kubuntu Alpha-1 can be found here: https://wiki.ubuntu.com/VividVervet/Alpha1/Kubuntu

Lubuntu

Lubuntu is a flavour of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Alpha 1 images can be downloaded at: http://cdimage.ubuntu.com/lubuntu/releases/vivid/alpha-1/

More information on Lubuntu Alpha-1 can be found here: https://wiki.ubuntu.com/VividVervet/Alpha1/Lubuntu

Ubuntu GNOME

Ubuntu GNOME is a flavour of Ubuntu featuring the GNOME desktop environment.

The Alpha-1 images can be downloaded at: http://cdimage.ubuntu.com/ubuntu-gnome/releases/vivid/alpha-1/

More information on Ubuntu GNOME Alpha-1 can be found here: https://wiki.ubuntu.com/VividVervet/Alpha1/UbuntuGNOME

UbuntuKylin

UbuntuKylin is a flavour of Ubuntu that is more suitable for Chinese users.

The Alpha-1 images can be downloaded at: http://cdimage.ubuntu.com/ubuntukylin/releases/vivid/alpha-1/

More information on UbuntuKylin Alpha-1 can be found here: https://wiki.ubuntu.com/VividVervet/Alpha1/UbuntuKylin

Ubuntu Cloud

Ubuntu Cloud images can be run on Amazon EC2, Openstack, SmartOS and many other clouds.

http://cloud-images.ubuntu.com/releases/vivid/alpha-1/

Regular daily images for Ubuntu can be found at: http://cdimage.ubuntu.com

If you're interested in following the changes as we further develop Vivid, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, alpha releases and other interesting events.

http://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce

A big thank you to the developers and testers for their efforts to pull together this Alpha release!

Jonathan Riddell, on behalf of the Ubuntu release team.

Originally posted to the ubuntu-devel-announce mailing list on Thu Dec 18 22:17:15 UTC 2014 by Jonathan Riddell

19 Dec 2014 5:35am GMT

Thomas Ward: NGINX: Mitigating the BREACH vulnerability

This post serves as a notice regarding the BREACH vulnerability and NGINX.

For Ubuntu, Debian, and the PPA users: If you are on 1.6.2-5 (or 1.7.8 from the PPAs), the default configuration has GZIP compression enabled, which means it does not mitigate BREACH on your sites by default. You need to look into whether you are actually impacted by BREACH, and if you are, consider mitigation steps.


What is it?

Unlike CRIME, which attacks TLS/SPDY compression and is mitigated by disabling SSL compression, BREACH attacks HTTP responses. These are compressed using common HTTP compression, which is much more widely used than TLS-level compression. This allows essentially the same attack demonstrated by Duong and Rizzo, but without relying on TLS-level compression (as they anticipated).

BREACH is a category of vulnerabilities and not a specific instance affecting a specific piece of software. To be vulnerable, a web application must:

  1. Be served from a server that uses HTTP-level compression
  2. Reflect user input in HTTP response bodies
  3. Reflect a secret (such as a CSRF token) in HTTP response bodies

Additionally, while not strictly a requirement, the attack is helped greatly by responses that remain mostly the same (modulo the attacker's guess). This is because the difference in size of the responses measured by the attacker can be quite small. Any noise in the side-channel makes the attack more difficult (though not impossible).

It is important to note that the attack is agnostic to the version of TLS/SSL, and does not require TLS-layer compression. Additionally, the attack works against any cipher suite. Against a stream cipher, the attack is simpler; the difference in sizes across response bodies is much more granular in this case. If a block cipher is used, additional work must be done to align the output to the cipher text blocks.

How practical is it?

The BREACH attack can be exploited with just a few thousand requests, and can be executed in under a minute. The number of requests required will depend on the secret size. The power of the attack comes from the fact that it allows guessing a secret one character at a time.

Am I affected?

If your HTTP response bodies meet all of the conditions listed above, you might be vulnerable.

Mitigations

NOTE: The BREACH Attack Information Site offers several tactics for mitigating the attack. Unfortunately, its authors are unaware of a clean, effective, practical solution to the problem. Some of these mitigations are more practical than others: a single change can cover entire apps, while other mitigations are page-specific.

The mitigations are ordered by effectiveness (not by their practicality - as this may differ from one application to another).

  1. Disabling HTTP compression
  2. Separating secrets from user input
  3. Randomizing secrets per request
  4. Masking secrets (effectively randomizing by XORing with a random secret per request)
  5. Protecting vulnerable pages with CSRF
  6. Length hiding (by adding random number of bytes to the responses)
  7. Rate-limiting the requests.

Whichever mitigation you choose, it is strongly recommended you also monitor your traffic to detect attempted attacks.


Mitigation Tactics and Practicality

Unfortunately, the practicality of the listed mitigation tactics varies widely. Practicality is determined by the application you are working with, and in a lot of cases it is not possible to just disable GZIP compression outright because of the size of what's being served.

This blog post will cover and describe in varying detail three mitigation methods: Disabling HTTP Compression, Randomizing secrets per request, and Length Hiding (using this site as a reference for the descriptions here).

Disabling HTTP Compression

This is the simplest and most effective mitigation tactic, but it is not always practical, as your application may actually require GZIP compression. If that is the case, you should not use this option. However, if your application and use case do not require GZIP compression, this is an easy fix.

To disable GZIP globally on your NGINX instances, in nginx.conf, add this code to the http block: gzip off;.

To disable GZIP for specific sites rather than globally, add the same directive to the server block in each site's configuration instead.

If you are using NGINX from the Ubuntu or Debian repositories, or the NGINX PPAs, you should check your /etc/nginx/nginx.conf file to see if it has gzip on;, and either comment this out or change it to gzip off;. A quick sketch of the change follows.
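
A minimal sketch of that change on an Ubuntu or Debian system (sed is just one way to flip the directive; editing the file by hand works equally well):

sudo sed -i 's/gzip on;/gzip off;/' /etc/nginx/nginx.conf
sudo nginx -t && sudo service nginx reload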

However, if disabling GZIP compression is not an option for your sites, then consider looking into other mitigation methods.

Randomizing secrets per request or masking secrets

Unfortunately, this is the least detailed option here. Secrets are handled at the application level, not the NGINX level. If you are able to modify your application, change it to randomize the secrets with each request, or to mask them. If this is not an option, consider another mitigation method.

Length hiding

Length hiding can be done by NGINX; however, it is not currently available in the NGINX packages in Ubuntu, Debian, or the PPAs.

It can be done on the application side, but it is easier to update an NGINX configuration than to modify and deploy an application when you need to enable or disable this in a production environment. A Length Hiding Filter Module has been made by Nulab; it adds randomly generated HTML comments to the end of an HTML response to hide the correct length and make it difficult for attackers to guess secret information.

An example of such a comment added by the module is as follows:

<!-- random-length HTML comment: E9pigGJlQjh2gyuwkn1TbRI974ct3ba5bFe7LngZKERr29banTtCizBftbMk0mgRq8N1ltPtxgY -->

NOTE: Until packaging that includes this module is available, you will need to compile NGINX from the source tarballs to use this method.

To enable this module, you will need to compile NGINX from source and add the module (a rough build sketch follows). Then, add the length_hiding directive to the http, server, or location blocks in your configuration with this line: length_hiding on;
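
A rough sketch of such a build, assuming the module sources have been fetched next to an NGINX source tree (the version and paths are placeholders; keep whatever configure flags your existing build uses and add the module on top):

cd nginx-1.7.8
./configure --with-http_ssl_module --add-module=../nginx-length-hiding-filter-module
make && sudo make install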


Special Packaging of NGINX PPA with Length Hiding Enabled

I am currently working on building NGINX stable and mainline with the Length Hiding module included in all variants of the package which have SSL enabled. This will eventually be available in separate PPAs for stable and mainline.

Until then, I strongly suggest that you look into whether you can operate without GZIP compression enabled, or look into one of the other methods of mitigating this issue.

19 Dec 2014 1:08am GMT

Ubuntu GNOME: Vivid Vervet Alpha 1 has arrived

Hi,

Ubuntu GNOME Team is glad to announce the availability of the first milestone (Alpha 1) for Ubuntu GNOME Vivid Vervet (15.04).

Kindly do take the time and read the release notes:
https://wiki.ubuntu.com/VividVervet/Alpha1/UbuntuGNOME

We would like to thank our great, helpful and very supportive testers, who responded to our urgent call for help in no time. Having high-quality testers on the team makes us more confident that this cycle will be extraordinary and, needless to say, is an endless motivation for us to do more and give more. Thank you so much again to all those who helped test the Alpha 1 images.

As always, if you need more information about testing, please see this page.

And don't hesitate to contact us if you have any questions, feedback, notes, suggestions, etc. - please see this page.

Thank you for choosing, testing and using Ubuntu GNOME!

19 Dec 2014 12:49am GMT