01 Nov 2014

Planet Ubuntu

Valorie Zimmerman: Season of KDE - Let's go!

We're now in the countdown. The deadline for applications is midnight 31 October UT. So give the ideas page one last look:

https://community.kde.org/SoK/Ideas/2014

We even added one last task just now, just for you devops folks. UPDATE: this task is taken.

Please talk to your prospective mentor and get their OK before signing up on https://season.kde.org/ . If you have already signed up and your mentor has signed off on your plans and timeline, get to work!

===========================================

UPDATE: Because of the glitches in the schedule, we are extending the student deadline a few days to match the mentor deadline for creating an account and logging in on https://season.kde.org/

Please don't delay. Make all necessary accounts, subscribe to KDE-Soc-Mentor or KDE-Soc list, and get the proposals posted and approved. Please ping us in #kde-soc if there are any problems we can help you with. Otherwise, get to work!


01 Nov 2014 2:14am GMT

31 Oct 2014


Chris J Arges: getting kernel crashdumps for hung machines

Debugging hung machines can be a bit tricky. Here I'll document methods to trigger a crashdump when these hangs occur.

What exactly does it mean when a machine 'hangs' or 'freezes up'? More information can be found in the kernel documentation [1], but overall there are a few types of hangs. A "soft lockup" is when the kernel loops in kernel mode for a duration without giving other tasks a chance to run. A "hard lockup" is when the kernel loops in kernel mode for a duration without letting other interrupts run. In addition, a "hung task" is when a userspace task has been blocked for a duration. Thankfully the kernel has options to panic on each of these conditions and thus produce a proper crashdump.

To set up crashdump on an Ubuntu machine, first install the crashdump package; more information can be found in [2].

sudo apt-get install linux-crashdump

Select NO unless you really would like to use kexec for your reboots.

Next, enable kdump, since it is disabled by default.

sudo sed -i 's/USE_KDUMP=0/USE_KDUMP=1/' /etc/default/kdump-tools


Reboot to ensure the kernel command-line options are properly set up:

sudo reboot


After reboot run the following:

sudo kdump-config show


If this command shows 'ready to dump', then we can test a crash to ensure kdump has enough memory and will dump properly. This command will crash your computer, so hopefully you are doing this on a test machine.

echo c | sudo tee /proc/sysrq-trigger


The machine will reboot and you'll find the crash dump in /var/crash.

All of this is already documented in [2]. Now we need to enable panics for hang and lockup conditions; we'll enable several cases at once.

Edit /etc/default/grub and change this line to the following:

GRUB_CMDLINE_LINUX="nmi_watchdog=panic hung_task_panic=1 softlockup_panic=1 unknown_nmi_panic"


Alternatively, you could enable these at runtime via /proc/sys/kernel or sysctl. For more information about these parameters, see the documentation in [3].
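For the sysctl route, a sketch of the equivalent settings could go in a drop-in file (sysctl names taken from the kernel documentation in [3]; the file name is just an example, and kernel.hardlockup_panic may not exist on older kernels):

```
# /etc/sysctl.d/99-lockup-panic.conf -- runtime equivalents of the
# boot parameters above; apply with `sudo sysctl --system`
kernel.softlockup_panic = 1
kernel.hung_task_panic = 1
kernel.unknown_nmi_panic = 1
# On kernels that support it, panic on hard lockups too:
# kernel.hardlockup_panic = 1
```

Unlike the GRUB change, these take effect without a reboot, but note there is no sysctl equivalent for every boot parameter.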

If you've made the command-line change, update grub and then reboot.

sudo update-grub && sudo reboot


Now your machine should crash when it locks up, and you'll get a nice crashdump to analyze. If you want to test such a setup I wrote a module [4] that induces a hang to see if this works properly.

Happy hacking.

  1. https://www.kernel.org/doc/Documentation/lockup-watchdogs.txt
  2. https://wiki.ubuntu.com/Kernel/CrashdumpRecipe
  3. https://www.kernel.org/doc/Documentation/kernel-parameters.txt
  4. https://github.com/arges/hanger



31 Oct 2014 8:53pm GMT

Ronnie Tucker: Full Circle Magazine #90 has arrived!

Full Circle
Issue #90
Full Circle, the independent magazine for the Ubuntu Linux community, is proud to announce the release of our ninetieth issue.

This month:
* Command & Conquer
* How-To : OpenConnect to Cisco, LibreOffice, and Broadcasting With WebcamStudio
* Graphics : Inkscape.
* Linux Labs: Compiling a Kernel Pt.3
* Review: MEGAsync
* Ubuntu Games: Prison Architect, and X-Plane Plugins
plus: News, Arduino, Q&A, and soooo much more.

Get it while it's hot!
http://fullcirclemagazine.org/issue-90
We now have several issues available for download on Google Play/Books. If you like Full Circle, please leave a review.
AND: We have a Pushbullet channel which we hope will make it easier to automatically receive FCM on launch day.

31 Oct 2014 8:10pm GMT

Canonical Design Team: Washington Devices Sprint

Last week was a week of firsts for me: my first trip to America, my first Sprint and my first chili-dog.

Introducing myself as the new (and only) Editorial and Web Publisher, I dove head first into the world of developers, designers and Community members. It was a very absorbing week, which afterwards felt more like a marathon than a sprint.

After being grilled by Customs, we finally arrived at Tyson's Corner, where 200 or so developers, designers and Community members had gathered for the Devices Sprint. It was a great opportunity for me to see how people from every corner of the world contribute to Ubuntu and share their passion for open source. I found it especially interesting to see how designers and developers, given their different mindsets, collaborate.

The highlight for me was talking to some of the Community guys; it was really interesting to hear why and how they contribute from all corners of the world.

From left to right: Riccardo, Andrew, Filippo and Victor.


The main ballroom.


Design Team dinner. From the left: TingTing, Andrew, John, Giorgio, Marcus, Olga, James, Florian, Bejan and Jouni.


I caught up with Olga and Giorgio to hear their thoughts and experiences from the Sprint:

So how did the Sprint go for you guys?

Olga: "It was very busy and productive in terms of having face time with development, which was the main reason we went, as we don't get to see them that often.

For myself personally, I now have a better understanding of what the issues are and what is needed, and also what can or cannot be done in certain ways. I was very pleased with the whole Sprint. There was a lot of running around between meetings, where I tried to use the time in-between to catch up with people. On the other hand, Development also approached the Design Team for guidance, opinions and a general catch-up/chat, which was great!"

Steph: "I agree, I found it especially productive in terms of getting the right people in the same room and working face-to-face, as it was a lot more productive than sharing a document or talking on IRC."

Giorgio: "Working remotely with the engineers works well for certain tasks, but the Design Team sometimes needs to achieve a higher bandwidth through other means of communication, so these sprints every 3 months are incredibly useful.

What a Sprint allows us to do is to put a face to the name and start to understand each other's needs, expectations and problems, as stuff gets lost in translation.

I agree with Olga, this Sprint was a massive opportunity to shift to a much higher level of collaboration with the engineers."

What was your best moment?

Giorgio: "My best moment was when the engineers' perception of the Design Team's efforts changed. My goal is to improve this collaboration process with each Sprint."

Did anything come up that you didn't expect?

Giorgio: "Gaming was an underground topic that came up during the Sprint. There was a nice workshop on Wednesday on it, which was really interesting."

Steph: "Andrew, a Community developer I interviewed, actually made two games one evening during the Sprint!"

Olga: "They love what they do, they're very passionate and care deeply."

Do you feel as a whole the Design Team gave off a good vibe?

Giorgio: "We got a good vibe, but it's still a work in progress, as we need to raise our game and become even better. This has been a long process, as the design of the Platform and Apps wasn't simply done overnight. However, we are now at a mature stage of the process where we can afford to engage with the Community more. We are all on this journey together.

Canonical has a very strong engineering nature, as it was founded and driven by engineers, and it has evolved because of this. As a result, over the last few years a design culture has begun to complement that. Now they expect a steer from the Design Team on a number of things, for example responsive design and convergence.

The Sprint was good, as we finally got more of a perception on what other parties expect from you. It's like a relationship, you suddenly have a moment of clarity and enlightenment, where you start to see that you actually need to do that, and that will make the relationship better."

Olga: "The other parties and the Development Team started to understand that initiating communication is not just the responsibility of the Design Team, but an engagement we all need to be involved in."

All in all it was a very productive week, as everyone worked hard to push for the first release of the BQ phone; together with some positive feedback and shout-outs for the Design Team :)

Unicorn hard at work.


There was a bit of time for some sightseeing too…

It would have been rude not to see what the capital had to offer, so on the weekend before the Sprint we checked out some of Washington's iconic scenery.

The Washington Monument.


We saw most of the important landmarks, like the White House, the Washington Monument and the Lincoln Memorial. Seeing them in the flesh was spectacular; however, I half expected a UFO to appear over the Monument like in 'Independence Day', and for Abraham Lincoln to suddenly get up off his chair like in the movie 'Night at the Museum' - unfortunately none of that happened.

The White House.


D.C. isn't as buzzing as London, but it definitely has a lot of character, as it embodies an array of thriving ethnic pockets that represent African, Asian and Latin American cultures, as well as a large Italian community. Washington is known for getting its sax on, so a few of the Design Team and I decided to check out the night scene and hit a local jazz club in Georgetown.

...And all the jazz.

(Twins Jazz Club)

On the Sunday, we decided to leave the hustle and bustle of the city and venture out to the beautiful Great Falls Park, only 10-15 minutes from the hotel. The park is located in northern Fairfax County, along the banks of the Potomac River, and is an integral part of the George Washington Memorial Parkway. Its creeks and rapids made for some great selfie opportunities…

Great Falls Park.


31 Oct 2014 2:21pm GMT

Oli Warner: Bulk renaming files in Ubuntu; the briefest of introductions to the rename command

I've seen more than a few Ask Ubuntu users struggling with how to batch rename their files. They get lost in Bash and find -exec loops and generally make a big mess of things before asking for help. But there is an easy method in Ubuntu that relatively few users know about: the rename command.

Replacing (and adding and removing) with s/.../.../
Zero-padding numbers so they sort correctly
Attaching a counter into the filename
Incrementing an existing number in a file
Changing a filename's case
Fixing extension based on actual content or MIME
Why doesn't my rename take a perlexpr?!

I was a couple of years into Ubuntu before I discovered the rename command, but now I wonder how I ever got along without it. I seem to use it at least once a week for myself and as much again helping other people. Let's just spend a second or two marvelling at the outward simplicity of the syntax and we'll crack on.

rename [-v] [-n] [-f] perlexpr [filenames]

Before we get too crazy, let's talk about those first two flags. -v will tell you what it's doing and -n will exit before it does anything. If you are in any doubt about your syntax, sling -vn on the end. It'll tell you what it would have done if you hadn't had the -n there. -f will give rename permission to overwrite files. Be careful.

Replacing (and adding and removing) with s/.../.../

This is probably the most common use of rename. You've got a bunch of files with the wrong junk in their filenames, or you want to change the formatting, or add a prefix, or replace certain characters... rename lets us do all of this through simple regular expressions.

I'm using -vn here so the changes aren't actually applied. Let's start by creating a few files:

$ touch dog{1..3}.dog
$ ls
dog1.dog  dog2.dog  dog3.dog

Replacing the first dog with cat:

$ rename 's/dog/cat/' * -vn
dog1.dog renamed as cat1.dog
dog2.dog renamed as cat2.dog
dog3.dog renamed as cat3.dog

Replacing the last dog with cat (note $ means "end of line" in this context, ^ means start):

$ rename 's/dog$/cat/' * -vn
dog1.dog renamed as dog1.cat
dog2.dog renamed as dog2.cat
dog3.dog renamed as dog3.cat

Replacing all instances of dog with the /g (global) flag:

$ rename 's/dog/cat/g' * -vn
dog1.dog renamed as cat1.cat
dog2.dog renamed as cat2.cat
dog3.dog renamed as cat3.cat

Removing a string is as simple as replacing it with nothing. Let's nuke the first dog:

$ rename 's/dog//' * -vn
dog1.dog renamed as 1.dog
dog2.dog renamed as 2.dog
dog3.dog renamed as 3.dog

Adding strings is a case of finding your insertion point and replacing it with your string. Here's how to add "PONIES-" to the start:

$ rename 's/^/PONIES-/' * -vn
dog1.dog renamed as PONIES-dog1.dog
dog2.dog renamed as PONIES-dog2.dog
dog3.dog renamed as PONIES-dog3.dog

This is all fairly simple, and your ability to use it in the wild will largely depend on your ability to manipulate regular expressions. I've been using them professionally for well over a decade, so this might be something I take for granted, but they aren't hard once you get past the syntax. Here's a fairly simple introduction to regex if you would like to learn more.
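For contrast, here's what one of these simple substitutions looks like without rename, done the long way in plain Bash (a hypothetical "replace spaces with underscores" job; the directory and filenames are just for illustration). This is exactly the kind of loop rename's one-liners replace:

```shell
# Replace spaces with underscores the long way, in plain Bash.
mkdir -p /tmp/rename-demo && cd /tmp/rename-demo
touch "my file 1.txt" "my file 2.txt"
for f in *" "*; do
    mv -- "$f" "${f// /_}"   # ${f// /_} replaces every space with _
done
ls
```

With rename the whole loop collapses to a single expression: rename 's/ /_/g' *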

Zero-padding numbers so they sort correctly

ls can be pretty shoddy at sorting numbers correctly. Here's a simple example:

$ touch {1..11}
$ ls
1  10  11  2  3  4  5  6  7  8  9

It sorts one character position at a time, left to right. This isn't too bad when we're only talking about tens, but it scales up and you end up with thousands coming before 9s. A good way to fix this (ls isn't the only application with this issue) is to zero-pad the beginnings of the numbers so all numbers are the same length and their digits are in corresponding positions. rename makes this super-easy because we can dip into Perl and use sprintf to reformat the number:

$ rename 's/\d+/sprintf("%02d", $&)/e' *
$ ls
01  02  03  04  05  06  07  08  09  10  11

The %02d there means we're printing at least 2 characters and padding it with zeroes if we need to. If you're dealing with thousands, increase that to 4, 5 or 6.

$ rename 's/\d+/sprintf("%05d", $&)/e' *
$ ls
00001  00002  00003  00004  00005  00006  00007  00008  00009  00010  00011

Similarly, you can parse a number and remove the zero padding with something like this:

$ rename 's/\d+/sprintf("%d", $&)/e' *
$ ls
1  10  11  2  3  4  5  6  7  8  9

Attaching a counter into the filename

Say we have three files and we want to add a counter onto the filename:

$ touch {a..c}
$ rename 's/$/our $i; sprintf("-%02d", 1+$i++)/e' * -vn
a renamed as a-01
b renamed as b-02
c renamed as c-03

It's the our $i that lets us persist variable state across multiple passes.

Incrementing an existing number in a file

Given three files with consecutive numbers, increment them. It's a simple enough expression but we have to be mindful that sometimes there are going to be conflicting filenames. Here's an example that moves all the files to temporary filenames and then strips them back to what they should be.

$ touch file{1..3}.ext
$ rename 's/\d+/sprintf("%d-tmp", $& + 1)/e' * -v
file1.ext renamed as file2-tmp.ext
file2.ext renamed as file3-tmp.ext
file3.ext renamed as file4-tmp.ext

$ rename 's/(\d+)-tmp/$1/' * -v
file2-tmp.ext renamed as file2.ext
file3-tmp.ext renamed as file3.ext
file4-tmp.ext renamed as file4.ext

Changing a filename's case

Until now we've been using substitutions and expressions, but there are other forms of Perl expression. In this case we can remap all lowercase characters to uppercase with a simple transliteration:

$ touch lowercase UPPERCASE MixedCase
$ rename 'y/a-z/A-Z/' * -vn
lowercase renamed as LOWERCASE
MixedCase renamed as MIXEDCASE

We could do that with a substitution expression like: s/[a-z]/uc($&)/ge

Fixing extension based on actual content or MIME

What if you are handed a bunch of files without extensions? Well you could loop through and use things like the file command, or you could just use Perl's File::MimeInfo::Magic library to parse the file and hand you an extension to tack on.

rename 's/.*/use File::MimeInfo qw(mimetype extensions); $&.".".extensions(mimetype($&))/e' *

This one is a bit of a monster but further highlights that anything you can do in Perl can be done with rename. You could read ID3 tags from music or process internal data to get filename fragments.

Why doesn't my rename take a perlexpr?!

Ubuntu's default rename is actually a link to a Perl script called prename. Some distributions instead ship the util-linux version of rename (called rename.ul in Ubuntu). You can work out which version you have using the following command:

$ dpkg -S $(readlink -f $(which rename))
perl: /usr/bin/prename

So unless you want to shunt things around, you'll have to install and call prename instead of rename.

31 Oct 2014 1:41pm GMT

David Tomaschik: Towards a Better Password Manager

The consensus in the security community is that passwords suck, but they're here to stay, at least for a while longer. Given breaches like Adobe, ..., it's becoming more and more evident that the biggest threat is not weak passwords, but password reuse. Of course, the solution to password reuse is to use a different password for every site that requires you to log in. The problem is that your average user has dozens of online accounts, and they probably can't remember dozens of passwords. So we build tools to help people remember passwords, mostly password managers, but do we build them well?

I don't think so. But before I look at the password managers that are out there, it's important to define the criteria that a good password manager would meet.

  1. Use well-understood encryption to protect the data. A good password manager should use cryptographic constructions that are well understood and reviewed. Ideally, it would build upon existing cryptographic libraries or full cryptosystems. This includes the KDF (Key-derivation function) as well as encryption of the data itself. Oh, and all of the data should be encrypted, not just the passwords.

  2. The source should be auditable. No binaries, no compressed/minified Javascript. If built in a compiled language, it should have source available with verifiable builds. If built in an interpreted language, the source should be unobfuscated and readable. Not everyone will audit their password manager, but it should be possible.

  3. The file format should be open. The data should be stored in an open, documented format, allowing for interoperability. Your passwords should not be tied to a particular manager, whether because the developer of that manager abandoned it, or because it's not supported on a particular platform, or because you like a blue background instead of grey.

  4. It should integrate with the browser. Yes, there are some concerns about exposing the password manager within the browser, but it's more important that this be highly usable. That includes making it easy to generate passwords, easy to fill passwords, and most importantly: harder to phish. In-browser password managers can compare the origin of the page you're on to the data stored, so users are less likely to enter their password in the wrong page. With a separate password manager, users generally copy/paste their passwords into a login page, which relies on the user to ensure they're putting their password into the right site.

  5. Sync, if offered, should be independent of encryption. Your encryption passphrase should not be used for sync. In fact, your encryption passphrase should never be sent to the provider: not at signup, not at login, not ever. Sync, unfortunately, only sounds simple: drop a file in Dropbox or Google Drive, right? What happens if the file gets updated while the password manager is open? How do changes get synced if two clients are open?
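As a concrete illustration of criterion 1, here is a minimal sketch (not from the post; the function name, iteration count, and passphrase are illustrative) of deriving a vault key from a master passphrase with a well-understood, well-reviewed KDF, using only Python's standard library:

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Derive a 32-byte key with PBKDF2-HMAC-SHA256, a well-reviewed KDF.
    The derived key -- never the passphrase itself -- feeds the cipher
    that encrypts *all* of the vault data, not just the passwords."""
    return hashlib.pbkdf2_hmac(
        "sha256", passphrase.encode("utf-8"), salt, iterations, dklen=32
    )

# The salt is random per vault and stored beside the ciphertext;
# the same passphrase and salt always reproduce the same key.
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
assert key == derive_key("correct horse battery staple", salt)
assert len(key) == 32
```

Per criterion 5, a key derived this way should never double as the sync credential; sync would need its own secret, derived separately or generated independently.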

These are just the five most important features as I see them, and not a comprehensive design document for password managers. I've yet to find a manager that meets all of these criteria, but I'm hoping we're moving in this direction.

31 Oct 2014 1:16am GMT

30 Oct 2014


Ubuntu Podcast from the UK LoCo: S07E31 – The One with the Dozen Lasagnas

Join Laura Cowen, Tony Whitmore, Mark Johnson and Alan Pope in Studio L for season seven, episode thirty-one of the Ubuntu Podcast!


In this week's show:-

We'll be back next week, when we'll be talking about the fanfare surrounding the latest Ubuntu release and looking over your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

30 Oct 2014 8:00pm GMT

Ubuntu App Developer Blog: It’s time for a Scope development competition!

With all of the new documentation coming to support the development of Unity Scopes, it's time for us to have another development showdown! Contestants will have five (5) weeks to develop a project from scratch and submit it to the Ubuntu Store. But this time all of the entries must be Scopes.

Be sure to update to the latest SDK packages to ensure that you have the correct template and tools. You should also create a new Click chroot to get the latest build and runtime packages.

Prizes

We've got some great prizes lined up for the winners of this competition.

Judging

Scope entries will be reviewed by a panel of judges from a variety of backgrounds and specialties, all of whom will evaluate the scope based on the following criteria:

The judges for this contest are:

Learn how to write Ubuntu Scopes

To get things started we've recently introduced a new Unity Scope project template into the Ubuntu SDK; you can use this to get a working foundation for your code right away. Then you can follow along with our new SoundCloud scope tutorial to learn how to tailor your code to a remote data source and give your scope a unique look and feel that highlights both the content and the source. To help you along the way, we'll be scheduling a series of online workshops covering how to use the Ubuntu SDK and the Scope APIs. In the last weeks of the contest we will also be hosting a hackathon on our IRC channel (#ubuntu-app-devel on Freenode) to answer any last questions and help you get your code ready. If you cannot join those, you can still find everything you need to know in our scope developer documentation.

How to participate

If you are not a programmer and want to share some ideas for cool scopes, be sure to add and vote for scopes on our reddit page. The contest is free to enter and open to everyone. The five-week period starts on Thursday 30th October and runs until Wednesday 3rd December 2014! Enter the Ubuntu Scope Showdown >

30 Oct 2014 6:36pm GMT

Ronnie Tucker: Packt offers library subscription with additional $150 worth of free content

As you may know, Packt Publishing supports Full Circle Magazine with review copies of books, so it's only fair that we help them by bringing this offer to your attention:


PacktLib provides full online access to over 2000 books and videos to give users the knowledge they need, when they need it. From innovative new solutions and effective learning services to cutting edge guides on emerging technologies, Packt's extensive library has got it covered. For a limited time only, Packt is offering 5 free eBook or Video downloads in the first month of a new annual subscription - up to $150 worth of extra content. That's in addition to one free download a month for the rest of the year.

This special PacktLib Plus offer marks the release of the new and improved reading and watching platform, packed with new features.

The deal expires on 4 November.

30 Oct 2014 6:33pm GMT

Alessio Treglia: Handling identities in distributed Linux cloud instances

I have many distributed Linux instances across several clouds, be they global, such as Amazon or Digital Ocean, or regional, such as TeutoStack or Enter.

Probably many of you face the same issue: having a consistent UNIX identity across multiple instances. While in an ideal world LDAP would be a perfect choice, leaving LDAP open to the wild Internet is not a great idea.

So, how do we solve this issue while staying secure? The trick is to use the new NSS module for SecurePass.

While SecurePass has traditionally been used in the operating system just for two-factor authentication, the new beta release is capable of holding "extended attributes", i.e. arbitrary information for each user profile.

We will use SecurePass to authenticate users and store their Unix account information with this new capability.

SecurePass and extended attributes

The next generation of SecurePass (currently in beta) can store arbitrary data for each profile. This is called "extended attributes" (or xattrs) and, as you can imagine, is organized as key/value pairs.

You will need the SecurePass tools to modify users' extended attributes. The new releases of Debian Jessie and Ubuntu Vivid Vervet have a package for it; just:

# apt-get install securepass-tools

ERRATA CORRIGE: securepass-tools hasn't been uploaded to Debian yet; Alessio is working hard to make the package available in time for Jessie, though.

For other distributions or previous releases, there's a Python package available via pip. Make sure that you have pycurl installed, and then:

# pip install securepass-tools

While the SecurePass tools allow a local configuration file, for this tutorial we highly recommend creating a global /etc/securepass.conf, so that it will also be usable by the NSS module. The configuration file looks like:

[default]
app_id = xxxxx
app_secret = xxxx
endpoint = https://beta.secure-pass.net/

Where app_id and app_secret are valid API keys to access the SecurePass beta.

Through the command line, we will be able to set UID, GID and all the required Unix attributes for each user:

# sp-user-xattrs user@domain.net set posixuid 1000

While posixuid is the bare minimum attribute to have a Unix login, the following attributes are valid:

Install and Configure NSS SecurePass

In a similar way to the tools, Debian Jessie and Ubuntu Vivid Vervet have a native package for SecurePass:

# apt-get install libnss-securepass

Previous releases of Debian and Ubuntu can still run the NSS module, as can CentOS and RHEL. Download the sources from:

https://github.com/garlsecurity/nss_securepass

Then:

./configure
make
make install (Debian/Ubuntu Only)

For CentOS/RHEL/Fedora you will need to copy files in the right place:

/usr/bin/install -c -o root -g root libnss_sp.so.2 /usr/lib64/libnss_sp.so.2
ln -sf libnss_sp.so.2 /usr/lib64/libnss_sp.so

The /etc/securepass.conf configuration file should be extended to hold defaults for NSS by creating an [nss] section as follows:

[nss]
realm = company.net
default_gid = 100
default_home = "/home"
default_shell = "/bin/bash"

This provides defaults for users whose attributes other than posixuid are not set. Next we need to configure the Name Service Switch (NSS) to use SecurePass. We will change /etc/nsswitch.conf by adding "sp" to the passwd entry as follows:

$ grep sp /etc/nsswitch.conf
 passwd:     files sp

Double check that NSS is picking up our new SecurePass configuration by querying the passwd entries as follows:

$ getent passwd user
 user:x:1000:100:My User:/home/user:/bin/bash
$ id user
 uid=1000(user)  gid=100(users) groups=100(users)

Using this setup by itself wouldn't allow users to log in to a system, because the password is missing. We will use SecurePass's authentication to access the remote machine.

Configure PAM for SecurePass

On Debian/Ubuntu, install the RADIUS PAM module with:

# apt-get install libpam-radius-auth

If you are using CentOS or RHEL, you need to have the EPEL repository configured. In order to activate EPEL, follow the instructions on http://fedoraproject.org/wiki/EPEL

Be aware that this has not been tested with SELinux enabled (set it to off or permissive).

On CentOS/RHEL, install the RADIUS PAM module with:

# yum -y install pam_radius

Note: at the time of writing, EPEL 7 is still in beta and does not contain the RADIUS PAM module. A request has been filed through Red Hat's Bugzilla to include this package in EPEL 7 as well.

Configure SecurePass with your RADIUS device. We only need to set the public IP address of the server, a fully qualified domain name (FQDN), and the secret password for the RADIUS authentication. If the server is behind NAT, specify the public IP address that is translated to it. After completion we get a small recap of the newly created device. For the sake of example, we use "secret" as our secret password.

Configure the RADIUS PAM module accordingly, i.e. open /etc/pam_radius.conf and add the following lines:

radius1.secure-pass.net secret 3
radius2.secure-pass.net secret 3

Of course the "secret" is the same one we set up in the SecurePass administration interface. Beyond this point we need to configure PAM to correctly manage authentication.

In CentOS, open the configuration file /etc/pam.d/password-auth-ac; in Debian/Ubuntu open /etc/pam.d/common-auth. Make sure that pam_radius_auth.so is in the list:

auth required   pam_env.so
auth sufficient pam_radius_auth.so try_first_pass
auth sufficient pam_unix.so nullok try_first_pass
auth requisite  pam_succeed_if.so uid >= 500 quiet
auth required   pam_deny.so

Conclusions

Handling many distributed Linux instances poses several challenges, from software updates to identity management and central logging. In a cloud scenario, traditional enterprise solutions are not always applicable, but new tools can become very handy.

To subscribe to the SecurePass beta for free, join SecurePass at http://www.secure-pass.net/open
and then send an e-mail to info@garl.ch requesting beta access.

30 Oct 2014 12:55pm GMT

29 Oct 2014


Rhonda D'Vine: Feminist Year

If someone had told me that I would attend three feminist events this year, I would have slowly nodded at them and responded with "yeah, sure...", not believing it. But sometimes things take their own turns.

It all started with the Debian Women Mini-Debconf in Barcelona. The organizers asked me how they should word the call for papers so that I would feel invited to give a speech, which felt very welcoming and nice. So we settled for "people who identify themselves as female". Due to private circumstances I didn't prepare well for my talk, but I hope it was still worth it. The next interesting part happened later, during the lightning talks: someone on IRC asked why male people were giving lightning talks, which were the only talks explicitly open to them. It also felt very, very nice, to be honest, that my talk wasn't questioned. Those are amongst the reasons why I wrote My place is here, my home is Debconf.

The second event I went to was the FemCamp Wien. It was the first barcamp I attended, so I didn't know what to expect organization-wise. Topic-wise it was about Queer Feminism. And it was the first event I went to which had a policy. Granted, there was one extremely silly part in it, which naturally ended up in a shitstorm on Twitter (which people from both sides managed very badly, which disappointed me). Denying that there is sexism against cis-males is just a bad idea, but the background of it was that this simply wasn't the topic of this event. The background of the policy was that barcamps, and events in general, usually aren't considered that safe a place for certain people, and this barcamp wanted to make it clear that people who usually shy away from such events for fear of harassment could feel at home there.
And what can I say, this absolutely was the right thing to do. I never felt more welcome and included at any event, including Debian events, sorry to say that so frankly. Making it clear through the policy that everyone is in the same boat, addressing each other respectfully, managed to do exactly that. The first session of the event, about dominant talk patterns and how to work around or against them, also made sure that the rest of the event gave shy people a chance to speak up and feel comfortable, too. And the range of the sessions that were held was simply great. This was the event where I came up with the pattern of judging the quality of an event by the sessions that I'm unable to attend. The thing that hurt me most in hindsight was that I couldn't attend the session about minorities within minorities. :/

Last but not least I attended AdaCamp Berlin. This was a small unconference/barcamp dedicated to increasing women's participation in open technology and culture, named after Ada Lovelace, who is considered the first programmer. It was a small event with only 50 slots for people who identify as women. So I was totally hyper when I received the mail that I was accepted. It was another event with a policy, and at first reading it looked strange. But given that there are people who are allergic to ingredients of scents, it made sense to raise awareness of that topic. And given that women face a fair amount of harassment in IT and at events, it also makes sense to remind people to behave. After all, it was a general policy for all AdaCamps, not for this specific one with only women.
I enjoyed the event. Totally. And that's not only because I was able to meet up with a dear friend who I hadn't talked to in years, literally. I enjoyed the environment, and the sessions that were going on. And quite similar to the FemCamp, it started off with a session that helped a lot for the rest of the event. This time it was about the Impostor Syndrome, which is extremely common among women in IT. And what can I say, I found myself in one of the slides, given that I had just tweeted the day before that I doubted I belonged there. Frankly spoken, it even crossed my mind that I was only accepted so that at least one trans person would be there. Which is pretty much what the impostor syndrome is all about, isn't it. But when I was there, it did feel right. And we had great sessions that I truly enjoyed. And I have to thank one lady once again for her great definition of feminism that she brought up during one session, which is roughly that feminism for her isn't about gender but about equality of all people regardless of their sex or gender definition. It's about dropping this whole binary thinking. I couldn't agree more.

All in all, I totally enjoyed these events, and I hope that I'll be able to attend more next year. From what I grasped, all three of them are thinking of doing it again; FemCamp Vienna already announced the date at the end of this year's event, so I am looking forward to meeting most of these fine ladies again, if fate permits. And keep in mind: there will always be critics and haters out there, but given that they wouldn't think of attending such an event in the first place anyway, don't get wound up about it. They just try to talk you down.

P.S.: Ah, I almost forgot one thing to mention, which also helps a lot to lower the barrier for people to attend: the catering during the day and for lunch at both FemCamp and AdaCamp (there was no organized catering at the Debian Women Mini-Debconf) was mostly vegan in the first place, without the participants even having to be asked. This removed the need for people to ask whether there could be food without meat and dairy products. Often enough people otherwise choose to leave the event or bring their own food instead of asking for it, so this is an extremely welcoming move, too. Way to go!

/personal | permanent link | Comments: 0 | Flattr this

29 Oct 2014 7:47pm GMT

Jonathan Riddell: Kubuntu Vivid in Bright Blue

KDE Project:

Kubuntu Vivid is the development name for what will be released in April next year as Kubuntu 15.04.

The exciting news is that, following some discussion and some wavering, we will be switching to Plasma 5 by default. It has shown itself to be a solid and reliable platform and it's time to show it off to the world.

There are some bits which are missing from Plasma 5 and we hope to fill those in over the next six months. Click on our Todo board above if you want to see what's in store and if you want to help out!

The other change that affects workflow is we're now using Debian git to store our packaging in a kubuntu branch so hopefully it'll be easier to share updates.

29 Oct 2014 7:11pm GMT

Daniel Holbach: Washington sprint

In the Community Q&A with Alan and Michael yesterday, I talked a bit about the sprint in Washington already, but I thought I'd write up a bit more about it again.

First of all: it was great to see a lot of old friends and new faces at the sprint. Especially with the two events (14.10 release and upcoming phone release) coming together, it was good to lock people up in various rooms and let them figure it out when nobody could run away easily. For me it was a great time to chat with lots of people and figure out if we're still on track and if our old assumptions still made sense. :-)

We were all locked up in a room as well…

What was pretty fantastic was the general vibe there. Everyone was crazy busy, but everybody seemed happy to see that their work of the last months and years is slowly coming together. There are still bugs to be fixed but we are close to getting the first Ubuntu phone ever out the door. Who would have thought that a couple of years ago?

It was great to catch up with people about our App Development story. There were a number of things we looked at during the sprint:

What I also liked a lot was being able to debug issues with the phone on the spot. I changed to the proposed channel, set the image to read-write and installed debug symbols, and voilà, grabbing the developer was never easier. My personal recommendation: make sure the problem happens around 12:00, stand in the hallway with your laptop attached to the phone and wait for the developer in charge to grab lunch. This way I could find out more about a couple of issues which are being fixed now.

It was also great to meet the non-Canonical folks at the sprint who worked on the Core Apps like crazy.

What I liked as well was our Berlin meet-up: we basically invited Berliners, ex-Berliners and honorary Berliners and went to a Mexican place. I wish I could meet those guys more often.

I also got my Ubuntu Pioneers T-shirt. Thanks a lot! I'll make sure to post a selfie (like everyone else :-)) soon.

Thanks a lot for a great sprint, now I'm looking forward to the upcoming Ubuntu Online Summit (12-14 Nov)! Make sure you register and add your sessions to the schedule!

29 Oct 2014 5:46pm GMT

Randall Ross: Make Software? Come to San Francisco and Check Out Ubuntu on Power!

Do you make software that solves real-world problems? Do you want your software to be instantly available to everyone that's building cloud solutions? Did you know that Ubuntu powers most of the cloud?

Some fun Ubuntu folks will be with their IBM and OpenPower friends just south of San Francisco, California next Wednesday (Nov. 5th, 2014) to talk about the future: Ubuntu on Power.

The event is free, but you'll have to register in advance.

Click the power button to get more information and to register!

Cheers,
Randall
Ubuntu Community Manager
Ubuntu on *Power*

--
Questions? randall AT ubuntu DOT com

29 Oct 2014 5:36pm GMT

Didier Roche: Eclipse and android adt support now in Ubuntu Developer Tools Center

Eclipse and Android ADT support now in Ubuntu Developer Tools Center

Now that the excellent Ubuntu 14.10 is released, it's time, as part of our Ubuntu Loves Developers effort, to focus on the Ubuntu Developer Tools Center and cut a new release, bringing numerous exciting new features and framework support!

0.1 Release main features

Eclipse support

Eclipse is now part of the Ubuntu Developer Tools Center thanks to the excellent work of Tin Tvrtković, who implemented the needed bits to bring this to our users! He worked on the test bed as well, to ensure we'll never break it unnoticed! That way, we'll always deliver the latest and best Eclipse story on Ubuntu.

To install it, just run:

$ udtc ide eclipse

and let the system set it up for you!

eclipse.png

Android Developer Tools support (with eclipse)

The first release introduced Android Studio (beta) support, which is the default in UDTC for the Android category. In addition to that, we now complete the support by bringing Eclipse ADT support with this release.

eclipse-adt.png

It can be simply installed with:

$ udtc android eclipse-adt

Accept the SDK license, as in the Android Studio case, and be done! Note that from now on, as suggested by a contributor, both Android Studio and Eclipse ADT add the Android tools like adb, fastboot and ddms to the user PATH.

Ubuntu is now a truly first-class citizen for Android application developers as their platform of choice!

Removing an installed platform

As per a feature request on the Ubuntu Developer Tools Center issue tracker, it's now really easy to remove any installed platform. Just enter the same command as for installing, and append --remove. For instance:

$ udtc android eclipse-adt --remove
Removing Eclipse ADT
Suppression done

Enabling local frameworks

As requested as well on the issue tracker, users can now provide their own local frameworks, either by setting UDTC_FRAMEWORKS=/path/to/directory and dropping the frameworks there, or by placing them in ~/.udtc/frameworks/.

In glorious detail, the loading order for duplicated categories and frameworks is the following:

  1. UDTC_FRAMEWORKS content
  2. ~/.udtc/frameworks/ content
  3. System ones.

Note that duplicate filenames aren't encouraged, but they are supported. This will also help with large test runs that exercise a basic framework for the install, reinstall, remove and other cases common to all BaseInstaller frameworks.
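As a sketch of the lookup order above (the framework file name is a hypothetical placeholder, and the final udtc invocation assumes UDTC is installed):

```shell
# Drop a custom local framework into the per-user directory, which is
# always scanned after UDTC_FRAMEWORKS but before the system frameworks.
mkdir -p "$HOME/.udtc/frameworks"
printf '# custom framework code\n' > "$HOME/.udtc/frameworks/myframework.py"

# Alternatively, point UDTC at an ad-hoc directory for a single run; this
# location takes precedence over both the per-user and system frameworks:
# UDTC_FRAMEWORKS=/path/to/frameworks udtc android eclipse-adt
```

This makes it easy to test a work-in-progress framework without touching the system installation.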

Other enhancements from the community

A lot of typo fixes have been included in this release thanks to the excellent and regular work of Igor Vuk! A big up to him :) I want to highlight as well the great contributions we got in terms of translation support. Thanks to everyone who helped provide or update de, en_AU, en_CA, en_GB, es, eu, fr, hr, it, pl, ru, te, zh_CN and zh_HK support in udtc! We are eager to see which language will enter this list next. Remember that the guide on how to contribute to the Ubuntu Developer Tools Center is available here.

Exciting! How can I get it?

The 0.1 release is now tagged and all tests are passing (this release brings 70 new tests). It's available directly in Vivid.

For 14.04 LTS and 14.10, use the ubuntu-developer-tools-center ppa where it's already available.

Contributions

As you have seen above, we really listen to our community and implement & debate anything that comes through. We're also starting to see great contributions that we accept and merge in. We are just waiting for yours!

If you want to discuss some ideas or give a hand, please refer to this blog post, which explains how to contribute and help influence our Ubuntu Loves Developers story! You can also reach us on IRC in #ubuntu-desktop on freenode. We'll likely have an open hangout during the upcoming Ubuntu Online Summit as well. More news in the coming days here. :)

29 Oct 2014 11:23am GMT

28 Oct 2014

feedPlanet Ubuntu

Ubuntu App Developer Blog: How to add settings to your scope

A scope can provide persistent settings for simple customizations, such as allowing the user to configure an email address or select a distance unit as metric or imperial.

In this tutorial, you will learn how to add settings to your scope and allow users to customize their experience.

Read…
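As a sketch of what the tutorial covers (the file layout follows the Unity scopes convention, but treat the setting names and values here as illustrative assumptions): a scope ships a `<scope>-settings.ini` file next to its main `.ini`, with one group per setting:

```ini
[location]
type = string
defaultValue = London
displayName = Location

[distanceUnit]
type = list
defaultValue = 1
displayName = Distance Unit
displayValues = Kilometers;Miles
```

The shell renders a settings page from this schema automatically, and the API exposes the current values to the scope at query time.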


28 Oct 2014 10:31pm GMT