13 Dec 2018


Jonathan Riddell: Achievement of the Week

This week I gave KDE Frameworks a web page after only 4 years of us trying to promote it as the best thing ever since tobogganing without one. I also updated the theme on the KDE Applications 18.12 announcement to this millennium and even made the images in it have a fancy popup effect using the latest in jQuery Bootstrap CSS. But my proudest contribution is making the screenshot for the new release of Konsole showing how it can now display all the cat emojis plus one for a poodle.

So far no comments asking why I named my computer thus.


13 Dec 2018 6:41pm GMT

Ubuntu Podcast from the UK LoCo: S11E40 – North Dallas Forty

This week we've been playing on the Nintendo Switch. We review our tech highlights from 2018 and go over our 2018 predictions, just to see how wrong we really were. We also have some Webby love and go over your feedback.

It's Season 11 Episode 40 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week's show:

That's all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there's a topic you'd like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org, tweet us, or comment on our Facebook page, our Google+ page, or our sub-Reddit.

13 Dec 2018 3:00pm GMT

Alan Pope: Fixing Broken Dropbox Sync Support

Like many people, I've been using Dropbox to share files with friends and family for years. It's a super convenient and easy way to get files synchronised between machines you own, and to work with others. This morning I was greeted with a lovely message on my Ubuntu desktop.

Dropbox says 'no'

It says "Can't sync Dropbox until you sign in and move it to a supported file system" with options to "See requirements", "Quit Dropbox" and "Sign in".

Dropbox have reduced the number of file systems they support. We've known this was coming for a while, but it's a pain if you don't use one of the supported filesystems.

Recently I re-installed my Ubuntu 18.04 laptop and chose XFS rather than the default ext4 partition type when installing. That's the reason the error is appearing for me.
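
If you're not sure which filesystem your home directory is on, this will tell you:

# Show the filesystem type for your home directory
df -T "$HOME"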

I do also use NextCloud and Syncthing for syncing files, but some of the people I work with only use Dropbox, and forcing them to change is tricky.

So I wanted a solution where I could continue to use Dropbox but not have to re-format the home partition on my laptop. The 'fix' is to create a file, format it ext4 and mount it where Dropbox expects your files to be. That's essentially it. Yay Linux. This may be useful to others, so I've detailed the steps below.

Note: I strongly recommend backing up your dropbox folder first, but I'm sure you already did that so let's assume you're good.

This is just a bunch of commands, which you could blindly paste en masse or, preferably, run one by one, checking each did what it should before moving on. It worked for me, but may not work for you. I am not to blame if this deletes your cat pictures. Before you begin, stop Dropbox completely. Close the client.

I've also put these in a GitHub gist.

# Location of the image which will contain the new ext4 partition
DROPBOXFILE="$HOME"/.dropbox.img

# Current location of my Dropbox folder
DROPBOXHOME="$HOME"/Dropbox

# Where we will copy the folder to. If you have little space, you could make this
# a folder on a USB drive
DROPBOXBACKUP="$HOME"/old_Dropbox

# What size is the Dropbox image file going to be. It makes sense to set this
# to whatever the capacity of your Dropbox account is, or a little more.
DROPBOXSIZE="20G"

# Create a 'sparse' file which will start out small and grow to the maximum
# size defined above. So we don't eat all that space immediately.
dd if=/dev/zero of="$DROPBOXFILE" bs=1 count=0 seek="$DROPBOXSIZE"
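
# Optionally verify the file really is sparse: the first column of 'ls -ls'
# shows actual disk usage, which should be near zero despite the large
# apparent size
ls -lsh "$DROPBOXFILE"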

# Format it ext4, because Dropbox Inc. says so
sudo mkfs.ext4 "$DROPBOXFILE"

# Move the current Dropbox folder to the backup location
mv "$DROPBOXHOME" "$DROPBOXBACKUP"

# Make a new Dropbox folder to replace the old one. This will be the mount point
# under which the sparse file will be mounted
mkdir "$DROPBOXHOME"

# Make sure the mount point can't be written to if for some reason the partition 
# doesn't get mounted. We don't want dropbox to see an empty folder and think 'yay, let's delete
# all his files because this folder is empty, that must be what they want'
sudo chattr +i "$DROPBOXHOME"

# Mount the sparse file at the dropbox mount point
sudo mount -o loop "$DROPBOXFILE" "$DROPBOXHOME"

# Copy the files from the existing dropbox folder to the new one, which will put them
# inside the sparse file. You should see the file grow as this runs.
sudo rsync -a "$DROPBOXBACKUP"/ "$DROPBOXHOME"/

# Create a line in our /etc/fstab so this gets mounted on every boot up
echo "$DROPBOXFILE" "$DROPBOXHOME" ext4 loop,defaults,rw,relatime,exec,user_xattr 0 0 | sudo tee -a /etc/fstab

# Let's unmount it so we can make sure the above line worked
sudo umount "$DROPBOXHOME"

# This will mount as per the fstab 
sudo mount -a

# Set ownership and permissions on the new folder so Dropbox has access
sudo chown $(id -un) "$DROPBOXHOME"
sudo chgrp $(id -gn) "$DROPBOXHOME"
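
If you want to double-check everything before letting Dropbox loose again, a couple of optional sanity checks:

# The image should now be mounted ext4 on the Dropbox folder
mount | grep "$DROPBOXHOME"

# And the folder should report the capacity of the image, not your home partition
df -h "$DROPBOXHOME"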

Now start Dropbox. All things being equal, the error message will go away, and you can carry on with your life, syncing files happily.

Hope that helps. Leave a comment here or over on the GitHub gist.

13 Dec 2018 11:15am GMT

12 Dec 2018


Colin King: Linux I/O Schedulers

The Linux kernel I/O schedulers attempt to balance the need to get the best possible I/O performance while also trying to ensure the I/O requests are "fairly" shared among the I/O consumers. There are several I/O schedulers in Linux; each tries to solve the I/O scheduling issues using different mechanisms/heuristics, and each has its own set of strengths and weaknesses.

For traditional spinning media it makes sense to try and order I/O operations so that they are close together to reduce read/write head movement and hence decrease latency. However, this reordering means that some I/O requests may get delayed, and the usual solution is to schedule these delayed requests after a specific time. Faster non-volatile memory devices can generally handle random I/O requests very easily and hence do not require reordering.

Balancing the fairness is also an interesting issue. A greedy I/O consumer should not block other I/O consumers and there are various heuristics used to determine the fair sharing of I/O. Generally, the more complex and "fairer" the solution the more compute is required, so selecting a very fair I/O scheduler with a fast I/O device and a slow CPU may not necessarily perform as well as a simpler I/O scheduler.

Finally, the types of I/O patterns on the I/O devices influence the I/O scheduler choice, for example, mixed random read/writes vs mainly sequential reads and occasional random writes.

Because of the mix of requirements, there is no such thing as a perfect all-round I/O scheduler. The defaults are chosen to be a good choice for the general user; however, this may not match everyone's needs. To clarify the choices, the Ubuntu Kernel Team has provided a Wiki page describing the choices and how to select and tune the various I/O schedulers. Caveat emptor applies: these are just guidelines and should be used as a starting point to finding the best I/O scheduler for your particular need.
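
For example, on most systems you can inspect and change the scheduler for a given device through sysfs. This generic illustration shows the idea; the device name and the set of available schedulers depend on your hardware and kernel, and the change only lasts until reboot:

# List the schedulers available for a disk; the active one is shown in brackets
cat /sys/block/sda/queue/scheduler

# Switch scheduler at runtime, e.g. to mq-deadline
echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler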

12 Dec 2018 9:55am GMT

11 Dec 2018


Jono Bacon: 10 Ways To Up Your Public Speaking Game

Public speaking is an art form. There are some amazing speakers, such as Lawrence Lessig, Dawn Wacek, Rory Sutherland, and many more. There are also some boring, rambling disasters that clog up meetups, conferences, and company events.

I don't claim to be an expert in public speaking, but I have had the opportunity to do a lot of it, including keynotes, presentation sessions, workshops, tutorials, and more. Over the years I have picked up some best practices and I thought I would share some of them here. I would love to hear your recommendations too, so pop them in the comments.

1. Produce Clean Slides

Great talks are a mixture of simple, effective slides and a dynamic, engaging speaker. If one part of this combination is overloading you with information, the other part gets ignored.

The primary focus should be you and your words. Your #1 goal is to weave together an interesting story that captivates your audience.

Your slides should simply provide a visual tool to help get your words across more effectively. Your slides are not the lead actress; they are the supporting actor.

Avoid extensive amounts of text and paragraphs. Focus on diagrams, pictures, and simple lists.

Good:

Bad:

Notice how I took my company logo off, just in case someone swipes it and thinks that I actually like to make slides like this. 🙂

Look at the slides of great speakers to get your creativity flowing.

2. Deliver Pragmatic Information

Keynotes are designed for the big ideas that set the stage for a conference. Regular talks are designed to get across key concepts that can help the audience expand their capabilities.

With both, give your audience information they can pragmatically use. How many times have you left a talk and thought, "Well, that was neat, but, er…how the hell do I start putting those concepts into action?"

You don't have to have all the answers, but you need to package up your ideas in a way that is easy to consume in the real world, not just on a stage.

Diagrams, lists, and step-by-step instructions work well. Make these higher level for the keynotes and more in-depth for the regular talks. Avoid abstract, generic ideas: they are unsatisfying and boring.

3. Build and Relieve Tension

Great movies and TV shows build a sense of tension (e.g. a character in a hostage situation) and the payoff is when that tension is relieved (e.g. the character gets rescued.)

Take a similar approach in your talks. Become vulnerable. Share times when you struggled, got things wrong, or made mistakes. Paint a picture of the low point and what was running through your mind.

Then, relieve the tension by sharing how you overcame it, bringing your audience along for the ride. This makes your presentation dynamic and interesting, and makes it clear that you are not perfect either, which helps build a closer connection with the audience. Speaking of which…

4. Loosen Up and Be Yourself

Far too many speakers deliver their presentations like they have a rod up their backside.

Formal presentations are boring. Presentations where the speaker feels comfortable in their own skin and is able to identify with the audience are much more interesting.

For example, I was delivering a presentation to a financial services firm a few months ago. I weaved into it stories about my family, my love of music, travel experiences, and other elements that made it more personal. After the session a number of audience members came over and shared how refreshing it was to see a more approachable presentation in a world that is typically so formal.

Your goal is to build a connection with your audience. To do this well they need to feel you are on the same level. Speak like them, share stories that relate to them, and they will give you their attention, which is all you can ask for.

5. Involve Your Audience (but not too much)

There is a natural barrier between you and your audience. We are wired to know that the social context of a presentation means the speaker does the talking and the audience does the listening. If someone violates this norm (such as by heckling), they are perceived as an asshole.

You need to break this barrier, but never cede control to your audience. If you lose control and make it the social norm for them to interrupt, your presentation will be riddled with audience noise.

Give them very specific ways to participate, such as:

6. Keep Your Ego in Check

We have all seen it. A speaker is welcomed to the stage and they constantly remind you about how great they are, the awards they have won, and how (allegedly) inspirational they are. In some cases this is blunt-force ego, in some cases it is a humblebrag. In both cases it sucks.

Be proud of your work and be great at it, but let the audience sing your praises, not you. Ego can have a damaging impact on your presentation and how you are perceived. This can drive a wedge between you and your audience.

7. Don't Rush, but Stay on Time

We live in a multi-cultural world in which we travel a lot. You are likely to have an audience from all over the world, speaking many different languages, and from a variety of backgrounds. Speaking at a million words a minute will make understanding you very difficult for some people.

Speak at a comfortable pace, and don't rush it. Now, some of you will be natural fast-talkers, and will need to practice this. Remember these?:

Well, we now all have them on our phones. Switch it on, practice, and ensure you always finish at least a few minutes before your allocated time. This will give you a buffer.

Running over your allocated time is a sure-fire way to annoy (a) the other speakers who may have to cut their time short, and (b) the event organizer who has to deal with overruns in the schedule. "But it only went over by a few minutes!" Sure, but when everyone does this, entire events get way behind schedule. Don't be that person.

8. Practice and get Honest Feedback

We only get better when we practice and can see our blind spots. Both are essential for getting good at public speaking.

Start simple. Speak at your local meetups, community events, and other gatherings. Practice, get comfortable, and then submit papers to conferences and other events. Keep practicing, and keep refining.

Critique is essential here. Ask close friends to sit in on your talks and ask them for blunt feedback afterwards. What went well? What didn't go well? Be explicit in inviting criticism and don't overreact when you get it. You want critical feedback…about your slides, your content, your pacing, your hand gestures…the lot. I have had some very blunt feedback over the years and it has always improved my presentations.

9. Never Depend on Conference WiFi

It rarely works well, simple as that.

Oh, and your mobile hotspot may not work either, as many conference centers seem to be built as borderline Faraday cages. Next…

10. Remember, it is just a Presentation

Some people get a little wacky when it comes to perfecting presentations and public speaking. I know some people who have spent weeks preparing and refining their talks, often getting into a tailspin about imperfections that need to be improved.

The most important thing to worry about is the content. Is it interesting? Is it valuable? Does it enrich your audience? People are not going to remember the minute details of how you said something, what your slides looked like, or whether you blinked too much. They will remember the content and ideas: focus on that.

Oh, and a bonus 11th: turn off animations. They are great in the hands of an artisan, but for most of us they look tacky and awful.

I am only scratching the surface here and I would love to hear your suggestions of public speaking tips and recommendations. Share them in the comments! Oh, and be sure to join as a member, which is entirely free.

The post 10 Ways To Up Your Public Speaking Game appeared first on Jono Bacon.

11 Dec 2018 4:00pm GMT

The Fridge: Ubuntu Weekly Newsletter Issue 556

Welcome to the Ubuntu Weekly Newsletter, Issue 556 for the weeks of November 25 - December 8, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

11 Dec 2018 12:45am GMT

10 Dec 2018


Jono Bacon: Speaking Engagements in Tel Aviv in December

I am excited to share that I will be heading to Tel Aviv later this month to speak at a few events. I wanted to share a few details here, and I hope to see you there!

DevOps Days Tel Aviv

DevOps Days Tel Aviv: Tuesday 18 and Wednesday 19 December 2018 at the Tel Aviv Convention Center, 101 Rokach Blvd, Tel Aviv, Israel.

I am delivering the opening keynote on Tuesday 18th December 2018 at 9am.

Get Tickets

Meetup: Building Technical Communities That Scale

Thu 20th Dec 2018 at 9am at RISE, 54 Ahad Ha'Am Street, Tel Aviv-Yafo, Tel Aviv District, Israel.

I will be delivering a talk and participating in a panel (which includes Fred Simon, Chief Architect of JFrog, Shimon Tolts, CTO of Datree, and Demi Ben Ari, VP R&D of Panorays.)

Get Tickets (Space is limited, so grab tickets ASAP)

I popped a video about this online earlier this week. Check it out:

I hope to see many of you there!

The post Speaking Engagements in Tel Aviv in December appeared first on Jono Bacon.

10 Dec 2018 7:00am GMT

09 Dec 2018


Benjamin Mako Hill: Awards and citations at computing conferences

I've heard a surprising "fact" repeated in the CHI and CSCW communities: that receiving a best paper award at a conference is uncorrelated with future citations. Although it's surprising and counterintuitive, it's a nice thing to think about when you don't get an award and a nice thing to say to others when you do. I've thought it and said it myself.

It also seems to be untrue. When I tried to check the "fact" recently, I found a body of evidence that suggests that computing papers that receive best paper awards are, in fact, cited more often than papers that do not.

The source of the original "fact" seems to be a CHI 2009 study by Christoph Bartneck and Jun Hu titled "Scientometric Analysis of the CHI Proceedings." Among many other things, the paper presents a null result for a test of a difference in the distribution of citations across best papers awardees, nominees, and a random sample of non-nominees.

Although the award analysis is only a small part of Bartneck and Hu's paper, at least two papers have subsequently brought more attention, more data, and more sophisticated analyses to the question. In 2015, the question was asked by Jacques Wainer, Michael Eckmann, and Anderson Rocha in their paper "Peer-Selected 'Best Papers'-Are They Really That 'Good'?"

Wainer et al. build two datasets: one of papers from 12 computer science conferences with citation data from Scopus, and another of papers from 17 different conferences with citation data from Google Scholar. Because of parametric concerns, Wainer et al. used a non-parametric rank-based technique to compare awardees to non-awardees. Wainer et al. summarize their results as follows:

The probability that a best paper will receive more citations than a non best paper is 0.72 (95% CI = 0.66, 0.77) for the Scopus data, and 0.78 (95% CI = 0.74, 0.81) for the Scholar data. There are no significant changes in the probabilities for different years. Also, 51% of the best papers are among the top 10% most cited papers in each conference/year, and 64% of them are among the top 20% most cited.

The question was also recently explored in a different way by Danielle H. Lee in her paper on "Predictive power of conference‐related factors on citation rates of conference papers" published in June 2018.

Lee looked at 43,000 papers from 81 conferences and built a regression model to predict citations. Taking into account a number of controls not considered in previous analyses, Lee finds that the marginal effect of receiving a best paper award on citations is positive, well-estimated, and large.

Why did Bartneck and Hu come to such different conclusions than later work?

Distribution of citations (received by 2009) of CHI papers published between 2004-2007 that were nominated for a best paper award (n=64), received one (n=12), or were part of a random sample of papers that did not (n=76).

My first thought was that perhaps CHI is different from the rest of computing. However, when I looked at the data from Bartneck and Hu's 2009 study (conveniently included as a figure in their original paper), I could see that they did find a higher mean among the award recipients compared to both nominees and non-nominees. The entire distribution of citations among award winners appears to be pushed upwards. Although Bartneck and Hu found an effect, they did not find a statistically significant effect.

Given the more recent work by Wainer et al. and Lee, I'd be willing to venture that the original null finding was a function of the fact that citation count is a very noisy measure, especially over a 2-5 year post-publication period, and that the Bartneck and Hu dataset was small, with only 12 awardees out of 152 papers total. This might have caused problems because the statistical test the authors used was an omnibus test for differences in a three-group sample that was imbalanced heavily toward the two groups (nominees and non-nominees) in which there appears to be little difference. My bet is that the paper's conclusion on awards is simply an example of how a null effect is not evidence of a non-effect, especially in an underpowered dataset.

Of course, none of this means that award-winning papers are better. Despite Wainer et al.'s claim that they are showing that award-winning papers are "good," none of the analyses presented can disentangle the signalling value of an award from differences in underlying paper quality. The packed rooms one routinely finds at best paper sessions at conferences suggest that at least some of the additional citations received by award winners might come from the extra exposure the awards themselves provide. In the future, perhaps people can say something along these lines instead of repeating the "fact" of the non-relationship.


09 Dec 2018 8:20pm GMT

07 Dec 2018


Omer Akram: Introducing PySide2 (Qt for Python) Snap Runtime

Lately at Crossbar.io, we have been using PySide2 for an internal project. Last week it reached a milestone, and I am now in the process of code cleanup and refactoring, as we had to rush quite a few things for that deadline. We also created a snap package for the project. Our previous approach was to ship the whole PySide2 runtime (170MB+) with the snap; it worked, but was a slow process, because each new snap build involved downloading PySide2 from PyPI and installing some deb dependencies.

So I decided to play with the content interface and cooked up a new snap that is now published to the Snap Store. This definitely resulted in an overall size reduction of the snap, and at the same time it opens up a lot of different opportunities for app development on the Linux desktop.

I created a 'Hello World' snap that is just 8KB in size, since it doesn't include any dependencies with it; they are provided by the pyside2 snap. I am currently working on a very simple "sound recorder" app using PySide2 and will publish it to the Snap Store.
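
For the consuming snap, the wiring is a content plug in its snapcraft.yaml, along these lines (a sketch rather than a verbatim copy; the plug name, content tag, and target path here are assumptions):

plugs:
  pyside2:
    interface: content
    content: pyside2
    target: $SNAP/pyside2
    default-provider: pyside2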

With the pyside2 snap installed, we can probably export a few environment variables to make the runtime available outside of the snap environment, for someone who is developing an app on their computer.
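
Something like this might do it, though the exact paths are guesses; inspect the layout under /snap/pyside2/current before relying on them:

# Hypothetical paths - adjust to the pyside2 snap's actual layout
export PYTHONPATH=/snap/pyside2/current/lib/python3.6/site-packages:$PYTHONPATH
export LD_LIBRARY_PATH=/snap/pyside2/current/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH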

07 Dec 2018 5:11pm GMT

06 Dec 2018


Jonathan Riddell: www.kde.org

It's not uncommon to come across some dusty corner of KDE which hasn't been touched in ages and has only half-implemented features. One of the joys of KDE is being able to plunge in and fix any such problem areas. But it's quite a surprise when a high-profile area of KDE ends up unmaintained. www.kde.org is one such area and it was getting embarrassing. In February 2016 we had a sprint where a new theme was rolled out on the main pages, making the website look fresh and act responsively on mobiles, but since then, for various failures of management, nothing has happened. So while the neon build servers were down for shuffling to a new machine, I looked into why Plasma release announcements were updated but not Frameworks or Applications announcements. I'd automated Plasma announcements a while ago, but it turns out the other announcements are still done manually, so I updated those and poked the people involved. Then of course I got stuck looking at all the other pages which hadn't been ported to the new theme. On review there were not actually too many of them; if you ignore the announcements, the website is not very large.

Many of the pages could be just forwarded to more recent equivalents such as getting the history page (last update in 2003) to point to timeline.kde.org or the presentation slides page (last update for KDE 4 release) to point to a more up to date wiki page.

Others are worth reviving, such as the KDE screenshots page, press contacts, and support page. The contents could still do with some pondering on what is useful, but while they exist we shouldn't pretend they don't, so I updated those and added back links to them.

While many of these pages are hard to find or not linked at all from www.kde.org they are still the top hits in Google when you search for "KDE presentation" or "kde history" or "kde support" so it is worth not looking like we are a dead project.

There were also obvious bugs that needed fixing, for example: the cookie-opt-out banner didn't let you opt out, the font didn't get loaded, and the favicon was inconsistent.

All of these are easy enough fixes, but the technical barrier is too high to get them done easily (you need special permission to have access to www.kde.org, reasonably enough) and the social barrier is far too high (you will get complaints when changing something high profile like this; far easier to just let it rot). I'm not sure how to solve this, but KDE should work out a way to allow project maintenance tasks like this to be more open.

Anyway, yay: www.kde.org now has the new theme everywhere (except old announcements) and the pages have up-to-date content.

There is a TODO item to track website improvements if you're interested in helping, although it misses the main one, which is the stalled port to WordPress; again, it's a place where someone just needs to plunge in and do the work. It's satisfying because it's a high-profile improvement, but alas it highlights some failings in a mature community project like ours.


06 Dec 2018 4:44pm GMT

Ubuntu Podcast from the UK LoCo: S11E39 – The Thirty-Nine Steps

This week we've been flashing devices and getting a new display. We discuss Huawei developing its own mobile OS, Steam Link coming to the Raspberry Pi, and Epic Games launching their own digital store, and we round up the community news.

It's Season 11 Episode 39 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week's show:

That's all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there's a topic you'd like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org, tweet us, or comment on our Facebook page, our Google+ page, or our sub-Reddit.

06 Dec 2018 3:00pm GMT

Podcast Ubuntu Portugal: S01E14 – Dos oito, aos oitenta

With our thoughts already on 2019, without forgetting the festive season, in this episode (which is back to coming out on Thursdays!!!) we talk about presents, home automation, and revivalism. You know the drill: listen, subscribe, and share!

Sponsors

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios: sound recording, production, editing, mixing and mastering); contact: thunderclawstudiosPT-arroba-gmail.com.

Attribution and licences

The cover image is by Nick Hobgood and is licensed CC BY-SA.

The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0), the full text of which can be read here. We are open to licensing to allow other types of use; contact us for validation and authorisation.

06 Dec 2018 1:13pm GMT

05 Dec 2018


Benjamin Mako Hill: Banana Peels

Photo comic of seeing a banana peel in the road while on a bike.

Although it's been decades since I last played, I still get flashbacks to Super Mario Kart and pangs of irrational fear every time I see a banana peel in the road.

05 Dec 2018 4:25am GMT

04 Dec 2018


Colin Watson: Deploying Swift

Sometimes I want to deploy Swift, the OpenStack object storage system.

Well, no, that's not true. I basically never actually want to deploy Swift as such. What I generally want to do is to debug some bit of production service deployment machinery that relies on Swift for getting build artifacts into the right place, or maybe the parts of the Launchpad librarian (our blob storage service) that use Swift. I could find an existing private or public cloud that offers the right API and test with that, but sometimes I need to test with particular versions, and in any case I have a terribly slow internet connection and shuffling large build artifacts back and forward over the relevant bit of wet string makes it painfully slow to test things.

For a while I've had an Ubuntu 12.04 VM lying around with an Icehouse-based Swift deployment that I put together by hand. It works, but I didn't keep good notes and have no real idea how to reproduce it, not that I really want to keep limping along with manually-constructed VMs for this kind of thing anyway; and I don't want to be dependent on obsolete releases forever. For the sorts of things I'm doing I need to make sure that authentication works broadly the same way as it does in a real production deployment, so I want to have Keystone too. At the same time, I definitely don't want to do anything close to a full OpenStack deployment of my own: it's much too big a sledgehammer for this particular nut, and I don't really have the hardware for it.

Here's my solution to this, which is compact enough that I can run it on my laptop, and while it isn't completely automatic it's close enough that I can spin it up for a test and discard it when I'm finished (so I haven't worried very much about producing something that runs efficiently). It relies on Juju and LXD. I've only tested it on Ubuntu 18.04, using Queens; for anything else you're on your own. In general, I probably can't help you if you run into trouble with the directions here: this is provided "as is", without warranty of any kind, and all that kind of thing.

First, install Juju and LXD if necessary, following the instructions provided by those projects, and also install the python-openstackclient package as you'll need it later. You'll want to set Juju up to use LXD, and you should probably make sure that the shells you're working in don't have http_proxy set as it's quite likely to confuse things unless you've arranged for your proxy to be able to cope with your local LXD containers. Then add a model:

juju add-model swift

At this point there's a bit of complexity that you normally don't have to worry about with Juju. The swift-storage charm wants to mount something to use for storage, which with the LXD provider in practice ends up being some kind of loopback mount. Unfortunately, being able to perform loopback mounts exposes too much kernel attack surface, so LXD doesn't allow unprivileged containers to do it. (Ideally the swift-storage charm would just let you use directory storage instead.) To make the containers we're about to create privileged enough for this to work, run:

lxc profile set juju-swift security.privileged true
lxc profile device add juju-swift loop-control unix-char \
    major=10 minor=237 path=/dev/loop-control
for i in $(seq 0 255); do
    lxc profile device add juju-swift loop$i unix-block \
        major=7 minor=$i path=/dev/loop$i
done

Now we can start deploying things! Save this to a file, e.g. swift.bundle:

series: bionic
description: "Swift in a box"
applications:
  mysql:
    charm: "cs:mysql-62"
    channel: candidate
    num_units: 1
    options:
      dataset-size: 512M
  keystone:
    charm: "cs:keystone"
    num_units: 1
  swift-storage:
    charm: "cs:swift-storage"
    num_units: 1
    options:
      block-device: "/etc/swift/storage.img|5G"
  swift-proxy:
    charm: "cs:swift-proxy"
    num_units: 1
    options:
      zone-assignment: auto
      replicas: 1
relations:
  - ["keystone:shared-db", "mysql:shared-db"]
  - ["swift-proxy:swift-storage", "swift-storage:swift-storage"]
  - ["swift-proxy:identity-service", "keystone:identity-service"]

And run:

juju deploy swift.bundle

This will take a while. You can run juju status to see how it's going in general terms, or juju debug-log for detailed logs from the individual containers as they're putting themselves together. When it's all done, it should look something like this:

Model  Controller  Cloud/Region     Version  SLA
swift  lxd         localhost        2.3.1    unsupported

App            Version  Status  Scale  Charm          Store       Rev  OS      Notes
keystone       13.0.1   active      1  keystone       jujucharms  290  ubuntu
mysql          5.7.24   active      1  mysql          jujucharms   62  ubuntu
swift-proxy    2.17.0   active      1  swift-proxy    jujucharms   75  ubuntu
swift-storage  2.17.0   active      1  swift-storage  jujucharms  250  ubuntu

Unit              Workload  Agent  Machine  Public address  Ports     Message
keystone/0*       active    idle   0        10.36.63.133    5000/tcp  Unit is ready
mysql/0*          active    idle   1        10.36.63.44     3306/tcp  Ready
swift-proxy/0*    active    idle   2        10.36.63.75     8080/tcp  Unit is ready
swift-storage/0*  active    idle   3        10.36.63.115              Unit is ready

Machine  State    DNS           Inst id        Series  AZ  Message
0        started  10.36.63.133  juju-d3e703-0  bionic      Running
1        started  10.36.63.44   juju-d3e703-1  bionic      Running
2        started  10.36.63.75   juju-d3e703-2  bionic      Running
3        started  10.36.63.115  juju-d3e703-3  bionic      Running

At this point you have what should be a working installation, but with only administrative privileges set up. Normally you want to create at least one normal user. To do this, start by creating a configuration file granting administrator privileges (this one comes verbatim from the openstack-base bundle):

_OS_PARAMS=$(env | awk 'BEGIN {FS="="} /^OS_/ {print $1;}' | paste -sd ' ')
for param in $_OS_PARAMS; do
    if [ "$param" = "OS_AUTH_PROTOCOL" ]; then continue; fi
    if [ "$param" = "OS_CACERT" ]; then continue; fi
    unset $param
done
unset _OS_PARAMS

_keystone_unit=$(juju status keystone --format yaml | \
    awk '/units:$/ {getline; gsub(/:$/, ""); print $1}')
_keystone_ip=$(juju run --unit ${_keystone_unit} 'unit-get private-address')
_password=$(juju run --unit ${_keystone_unit} 'leader-get admin_passwd')

export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${_keystone_ip}:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=${_password}
export OS_USER_DOMAIN_NAME=admin_domain
export OS_PROJECT_DOMAIN_NAME=admin_domain
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_IDENTITY_API_VERSION=3
# Swift needs this:
export OS_AUTH_VERSION=3
# Gnocchi needs this
export OS_AUTH_TYPE=password

Source this into a shell: for instance, if you saved this to ~/.swiftrc.juju-admin, then run:

. ~/.swiftrc.juju-admin

You should now be able to run openstack endpoint list and see a table for the various services exposed by your deployment. Then you can create a dummy project and a user with enough privileges to use Swift:

USERNAME=your-username
PASSWORD=your-password
openstack domain create SwiftDomain
openstack project create --domain SwiftDomain --description Swift \
    SwiftProject
openstack user create --domain SwiftDomain --project-domain SwiftDomain \
    --project SwiftProject --password "$PASSWORD" "$USERNAME"
openstack role add --project SwiftProject --user-domain SwiftDomain \
    --user "$USERNAME" Member

(This is intended for testing rather than for doing anything particularly sensitive. If you cared about keeping the password secret then you'd use the --password-prompt option to openstack user create instead of supplying the password on the command line.)

Now create a configuration file granting privileges for the user you just created. I felt like automating this to at least some degree:

touch ~/.swiftrc.juju
chmod 600 ~/.swiftrc.juju
sed '/^_password=/d;
     s/\( OS_PROJECT_DOMAIN_NAME=\).*/\1SwiftDomain/;
     s/\( OS_PROJECT_NAME=\).*/\1SwiftProject/;
     s/\( OS_USER_DOMAIN_NAME=\).*/\1SwiftDomain/;
     s/\( OS_USERNAME=\).*/\1'"$USERNAME"'/;
     s/\( OS_PASSWORD=\).*/\1'"$PASSWORD"'/' \
     <~/.swiftrc.juju-admin >~/.swiftrc.juju

Source this into a shell. For example:

. ~/.swiftrc.juju

You should now find that swift list works. Success! Now you can swift upload files, or just start testing whatever it was that you were actually trying to test in the first place.
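
For example, a quick smoke test might look like this (the container and file names are just placeholders):

swift post test-container
swift upload test-container artifact.tar.gz
swift list test-container
swift download test-container artifact.tar.gz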

This is not a setup I expect to leave running for a long time, so to tear it down again:

juju destroy-model swift

This will probably get stuck trying to remove the swift-storage unit, since nothing deals with detaching the loop device. If that happens, find the relevant device in losetup -a from another window and use losetup -d to detach it; juju destroy-model should then be able to proceed.

Credit to the Juju and LXD teams and to the maintainers of the various charms used here, as well as of course to the OpenStack folks: their work made it very much easier to put this together.

04 Dec 2018 1:37am GMT

03 Dec 2018


Daniel Pocock: Smart home: where to start?

My home automation plans have been progressing and I'd like to share some observations I've made about planning a project like this, especially for those with larger houses.

With so many products and technologies, it can be hard to know where to start. Some things have become straightforward, for example, Domoticz can soon be installed from a package on some distributions. Yet this simply leaves people contemplating what to do next.

The quickstart

For a small home, like an apartment, you can simply buy something like the Zigate, a single motion and temperature sensor, a couple of smart bulbs and expand from there.

For a large home, you can also get your feet wet with exactly the same approach in a single room. Once you are familiar with the products, use a more structured approach to plan a complete solution for every other space.

The Debian wiki has started gathering some notes on things that work easily on GNU/Linux systems like Debian as well as Fedora and others.

Prioritize

What is your first goal? For example, are you excited about having smart lights or are you more concerned with improving your heating system efficiency with zoned logic?

Trying to do everything at once may be overwhelming. Make each of these things into a separate sub-project or milestone.

Technology choices

There are many technology choices:

  • Zigbee, Z-Wave or another protocol? I'm starting out with a preference for Zigbee but may try some Z-Wave devices along the way.
  • E27 or B22 (Bayonet) light bulbs? People in the UK and former colonies may have B22 light sockets and lamps. For new deployments, you may want to standardize on E27. Amongst other things, E27 is used by all the Ikea lamp stands and if you want to be able to move your expensive new smart bulbs between different holders in your house at will, you may want to standardize on E27 for all of them and avoid buying any Bayonet / B22 products in future.
  • Wired or wireless? Whenever you take up floorboards, it is a good idea to add some new wiring. For example, CAT6 can carry both power and data for a diverse range of devices.
  • Battery or mains power? In an apartment with two rooms and less than five devices, batteries may be fine but in a house, you may end up with more than a hundred sensors, radiator valves, buttons, and switches and you may find yourself changing a battery in one of them every week. If you have lodgers or tenants and you are not there to change the batteries then this may cause further complications. Some of the sensors have a socket for an optional power supply, battery eliminators may also be an option.

Making an inventory

Creating a spreadsheet table is extremely useful.

This helps estimate the correct quantity of sensors, bulbs, radiator valves and switches and it also helps to budget. Simply print it out, leave it under the Christmas tree and hope Santa will do the rest for you.

Looking at my own house, these are the things I counted in a first pass:

Don't forget to include all those unusual spaces like walk-in pantries, a large cupboard under the stairs, cellar, en-suite or enclosed porch. Each deserves a row in the table.

Sensors help make good decisions

Whatever the aim of the project, sensors are likely to help obtain useful data about the space and this can help to choose and use other products more effectively.

Therefore, it is often a good idea to choose and deploy sensors through the home before choosing other products like radiator valves and smart bulbs.

The smartest place to put those smart sensors

When placing motion sensors, it is important to avoid putting them too close to doorways where they might detect motion in adjacent rooms or hallways. It is also a good idea to avoid putting the sensor too close to any light bulb: if the bulb attracts an insect, it will trigger the motion sensor repeatedly. Temperature sensors shouldn't be too close to heaters or potential draughts around doorways and windows.

There are a range of all-in-one sensors available, some have up to six features in one device smaller than an apple. In some rooms this is a convenient solution but in other rooms, it may be desirable to have separate motion and temperature sensors in different locations.

Consider the dining and sitting rooms in my own house, illustrated in the floorplan below. The sitting room is also a potential 6th bedroom or guest room with sofa bed, the downstairs shower room conveniently located across the hall. The dining room is joined to the sitting room by a sliding double door. When the sliding door is open, a 360 degree motion sensor in the ceiling of the sitting room may detect motion in the dining room and vice-versa. It appears that 180 degree motion sensors located at the points "1" and "2" in the floorplan may be a better solution.

These rooms have wall mounted radiators and fireplaces. To avoid any of these potential heat sources the temperature sensors should probably be in the middle of the room.

This photo shows the proposed location for the 180 degree motion sensor "2" on the wall above the double door:

Summary

To summarize, buy a Zigate and a small number of products to start experimenting with. Make an inventory of all the products potentially needed for your home. Try to mark sensor locations on a floorplan, thinking about the type of sensor (or multiple sensors) you need for each space.

03 Dec 2018 8:44am GMT

Eric Hammond: Guest Post: Notable AWS re:invent Sessions, by Jennine Townsend

A guest post authored by Jennine Townsend, expert sysadmin and AWS aficionado

There were so many sessions at re:Invent! Now that it's over, I want to watch some sessions on video, but which ones?

Of course I'll pick out those that are specific to my interests, but I also want to know the sessions that had good buzz, so I made a list that's kind of mashed together from sessions that I heard good things about on Twitter, with those that had lots of repeats and overflow sessions, figuring those must have been popular.

But I confess I left out some whole categories! There are no Alexa or DeepRacer sessions here (not that I'm not interested; they're just not part of my re:Invent followup), and I don't administer any Windows systems, so I left out most of those sessions too.

Some sessions have YouTube links; some don't yet (and may never), since lots of (types of) sessions aren't recorded. (But even there, if I search the topic and speakers, I bet I can often find an earlier talk.)

There's not much of a ranking: keynotes at the top, sessions I heard good things about in the middle, then sessions that had lots of repeats. It's only mildly specific to my interests, so I thought other people might find it helpful. It's also not really finished, but I wanted to get started watching sessions this weekend!

Keynotes

Peter DeSantis Monday Night Live

Terry Wise Global Partner Keynote

Andy Jassy keynote

Werner Vogels keynote

Popular: Buzz during AWS re:Invent

DEV322 What's New with the AWS CLI (Kyle Knapp, James Saryerwinnie)

SRV409 A Serverless Journey: AWS Lambda Under the Hood

CON362 Container Power Hour with Jess, Clare, and Abby

SRV325 Using DevOps, Microservices, and Serverless to Accelerate Innovation (David Richardson, Ken Exner, Deepak Singh)

SRV375 Lambda Layers and Runtime API (Danilo Poccia) - Chalk Talk

SRV338 Configuration Management and Service Discovery (mentions CloudMap) (Alex Casalboni, Ben Kehoe) - Chalk Talk

CON367 Introducing App Mesh (Kiran Meduri, Shubha Rao, James Straub)

SRV355 Best Practices for CI/CD with AWS Lambda and Amazon API Gateway (Chris Munns) (focuses on SAM, CodeStar, I believe) - Chalk Talk

DEV327 Advanced Infrastructure as Code Programming on AWS

SRV322 From Monolith to Modern Apps: Best Practices

Popular: Repeats During AWS re:Invent

CON301 Mastering Kubernetes on AWS

ARC202 Running Lean Architectures: How to Optimize for Cost Efficiency

DEV319 Continuous Integration Best Practices

AIM404 Build, Train, and Deploy ML Models Quickly and Easily with Amazon SageMaker

STG209 Amazon S3 Storage Management (Scott Hewitt) - Chalk Talk

ENT205 Executing a Large-Scale Migration to AWS (Joe Chung, Jonathan Allen, Mike Wittig)

DEV317 Advanced Continuous Delivery Best Practices

CON308 Building Microservices with Containers

ANT323 Build Your Own Log Analytics Solutions on AWS

ANT201 Big Data Analytics Architectural Patterns and Best Practices

DEV403 Automate Common Maintenance & Deployment Tasks Using AWS Systems Manager - Builders Session

DAT356 Which Database Should I Use? - Builders Session

DEV309 CI/CD for Serverless and Containerized Applications

ARC209 Architecture Patterns for Multi-Region Active-Active Applications

AIM401 Deep Learning Applications Using TensorFlow

SRV305 Inside AWS: Technology Choices for Modern Applications

SEC401 Mastering Identity at Every Layer of the Cake

SEC371 Incident Response in AWS - Builders Session

SEC322 Using AWS Lambda as a Security Team

NET404 Elastic Load Balancing: Deep Dive and Best Practices

DEV321 What's New with AWS CloudFormation

DAT205 Databases on AWS: The Right Tool for the Right Job

Original article and comments: https://alestic.com/2018/12/aws-reinvent-jennine/

03 Dec 2018 12:00am GMT