26 Jun 2019


Canonical Design Team: Ubuntu Server development summary – 26 June 2019

Hello Ubuntu Server

The purpose of this communication is to provide a status update and highlight interesting subjects from the Ubuntu Server team. If you would like to reach the server team, you can find us in the #ubuntu-server channel on Freenode. Alternatively, you can sign up for the Ubuntu Server Team mailing list or visit the Ubuntu Server discourse hub for more discussion.

Spotlight: Weekly Ubuntu Server team updates in Discourse

The Ubuntu Server team will now be sending weekly team updates to Discourse to give a clear view of the project, feature, and bug work we are working on each week. Come see what we are up to and participate in driving Ubuntu Server changes with us. Here is our June 24 status update. Come discuss any topics of interest with us.

cloud-init

curtin

Contact the Ubuntu Server team

Bug Work and Triage

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Proposed Uploads to the Supported Releases

Please consider testing the following by enabling proposed, checking packages for update regressions, and making sure to mark affected bugs verified as fixed.

Total: 10

Uploads Released to the Supported Releases

Total: 24

Uploads to the Development Release

Total: 25


26 Jun 2019 5:30pm GMT

25 Jun 2019


Jonathan Carter: PeerTube and LBRY

I have many problems with YouTube; who doesn't these days, right? I'm not going to go into all the nitty gritty of it in this post, but here's a video from an LBRY advocate that does a good job of summarizing some of the issues by using clips from YouTube creators:

(link to the video if the embedded video above doesn't display)

I have a channel on YouTube that I have lots of plans for. I started making videos last year and created 59 episodes of Debian Package of the Day. I'm proud that I got that far, because I tend to lose interest in things after I figure out how they work or how to do them. I suppose some people have assumed that my video channel is dead because I haven't uploaded recently, but I've just been really busy and, in recent weeks, also a bit tired as a result. Things should pick up again soon.

Mediadrop and PeerTube

I wanted to avoid a reliance on YouTube early on, and set up a MediaDrop instance on highvoltage.tv. MediaDrop ticks quite a few boxes, but there's a lot that's missing. On top of that, it doesn't seem to be actively developed anymore, so it will probably never get the features that I want.

Screenshot of my MediaDrop instance.

I've been planning to move over to PeerTube for a while and hope to complete that soon. PeerTube is a free software video hosting platform that resembles YouTube-style video sites. It's on the fediverse, and videos are shared over WebTorrent with other users who are viewing the same videos. After reviewing different video hosting platforms last year during DebCamp, I also came to the conclusion that PeerTube is the right platform to host DebConf and related Debian videos on. I intend to implement an instance for Debian shortly after I finish my own migration.

(link to PeerTube video if embedded video doesn't display)

Above is an introduction to PeerTube by its creators (it runs on PeerTube, so if you've never tried it out before, there's your chance!)

LBRY

LBRY App Screenshot

LBRY takes a drastically different approach to the video sharing problem. It's not yet as polished as PeerTube in terms of user experience, and it's a lot newer too, but it's interesting in its own right. It's also free software; it implements its own protocol that you access on lbry:// URIs, and it prioritizes its own native apps over access through a web browser. Videos are also shared on its peer-to-peer network. One big thing that it implements is its own blockchain, along with its own LBC currency (don't roll your eyes just yet, it's not some gimmick from 2016 ;) ). It's integrated with the app, so viewers can easily give a tip to a creator. I think that's better than YouTube's ad approach, because people can earn money by the value their video provides to the user, not by the amount of eyes they bring to the platform. It's also possible for creators to create paid-for content, although I haven't seen that on the platform yet.

If you try out LBRY using my referral code, I can get a whole 20 LBC (1 LBC is nearly USD $0.04, so I'll be rich soon!). They also have a sync system that can sync all your existing YouTube videos over to LBRY. I requested this yesterday and it's scheduled, so at some point my YouTube videos will show up on my @highvoltage channel on LBRY. Their roadmap also makes for some interesting reading.

I definitely intend to try out LBRY's features and its unique approach, although for now my plan is to use my upcoming PeerTube instance as my main platform. It's the most stable and viable long-term option at this point and covers all the important features that I care about.

25 Jun 2019 7:14pm GMT

Canonical Design Team: The future of mobile connectivity

An image displaying a range of devices connected to a mobile network.

Mobile operators face a range of challenges today, from saturation and competition to regulation, all of which are having a negative impact on revenues. The introduction of 5G offers new customer segments and services to offset this decline. However, unlike the introduction of 4G, which was dominated by consumer benefits, 5G is expected to be driven by enterprise use. According to IDC, enterprises will generate 60 percent of the world's data by 2025.

Rather than relying on costly proprietary hardware and operating models, mobile operators can use open source technologies to commoditise and democratise wireless network infrastructure. Major operators such as Vodafone, Telefonica and China Mobile have already adopted such practices.

Shifting to open source technology and taking a software defined approach enables mobile operators to differentiate based on the services they offer, rather than network coverage or subscription costs.

This whitepaper will explain how mobile operators can break the proprietary stranglehold and adopt an open approach including:

To view the whitepaper, sign up via the link below:

Get the whitepaper: https://ubuntu.com/engage/ubuntu-lime-telco?utm_source=blog&utm_medium=Blog&utm_campaign=FY19_IOT_UbuntuCore_Whitepaper_LimeSDR


25 Jun 2019 12:29pm GMT

24 Jun 2019


The Fridge: Ubuntu Weekly Newsletter Issue 584

Welcome to the Ubuntu Weekly Newsletter, Issue 584 for the week of June 16 - 22, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

24 Jun 2019 10:24pm GMT

Riccardo Padovani: Using AWS Textract in an automatic fashion with AWS Lambda

During the last AWS re:Invent, back in 2018, AWS announced a new OCR service to extract data from virtually any document. The service, called Textract, doesn't require any previous machine learning experience, and it is quite easy to use, as long as we have just a couple of small documents. But what if we have millions of PDFs with thousands of pages each? Or what if we want to analyze documents uploaded by users?

In that case, we need to invoke some asynchronous APIs, poll an endpoint to check when the job has finished, and then read the result, which is paginated, so we need multiple API calls. Wouldn't it be super cool to just drop files in an S3 bucket and, after some minutes, have their content in another S3 bucket?
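To get a feel for why this is tedious, here is a minimal sketch of the manual flow we want to avoid (the bucket and file names are made up, and error handling is omitted):

import time

import boto3

textract_client = boto3.client('textract')

# Start the asynchronous job for a document already sitting in S3...
job = textract_client.start_document_text_detection(
    DocumentLocation={'S3Object': {'Bucket': 'my-bucket', 'Name': 'doc.pdf'}})

# ...then poll until Textract is done. The result itself is paginated,
# so reading it requires yet another loop over NextToken (shown later).
while True:
    result = textract_client.get_document_text_detection(JobId=job['JobId'])
    if result['JobStatus'] in ('SUCCEEDED', 'FAILED'):
        break
    time.sleep(5)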

Let's see how to use AWS Lambda, SNS, and SQS to automate the whole process!

Overview of the process

This is the process we are aiming to build:

  1. Drop files to an S3 bucket;
  2. A trigger will invoke an AWS Lambda function, which will inform AWS Textract of the presence of a new document to analyze;
  3. AWS Textract will do its magic, and push the status of the job to an SNS topic, which will post it to an SQS queue;
  4. The SQS queue will invoke another Lambda function, which will read the status of the job and, if the analysis was successful, download the extracted text and save it to another S3 bucket (but we could replace this with a write to DynamoDB or other database systems);
  5. The Lambda function will also publish the state to CloudWatch, so we can trigger alarms when a read was unsuccessful.

Since a picture is worth a thousand words, let me show a graph of this process.

Textract structure

While I am writing this, Textract is available in only 4 regions: US East (Northern Virginia), US East (Ohio), US West (Oregon), and EU (Ireland). I therefore strongly suggest creating all the resources in just one region, for the sake of simplicity. In this tutorial, I will use eu-west-1.

S3 buckets

First of all, we need to create two buckets: one for our raw files, and one for the JSON files with the extracted text. We could theoretically use the same bucket, but with two buckets we can have better access control.

Since I love boring solutions, for this tutorial I will call the two buckets textract_raw_files and textract_json_files. If necessary, the official documentation explains how to create S3 buckets.
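This step can also be scripted; here is a minimal boto3 sketch, with the caveat that real S3 bucket names cannot contain underscores, so a deployable version would need hyphenated variants of the names used in this post:

import boto3

s3_client = boto3.client('s3', region_name='eu-west-1')

# S3 bucket names must be globally unique and cannot contain underscores,
# hence the hyphenated variants of the names used in this tutorial.
for name in ('textract-raw-files', 'textract-json-files'):
    s3_client.create_bucket(
        Bucket=name,
        CreateBucketConfiguration={'LocationConstraint': 'eu-west-1'})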

Invoke Textract

The first part of the architecture is informing Textract of every new file we upload to S3. We can leverage the S3 integration with Lambda: each time a new file is uploaded, our Lambda function is triggered, and it will invoke Textract.

The body of the function is quite straightforward:

from urllib.parse import unquote_plus

import boto3

s3_client = boto3.client('s3')
textract_client = boto3.client('textract')

SNS_TOPIC_ARN = 'arn:aws:sns:eu-west-1:123456789012:AmazonTextract'    # We need to create this
ROLE_ARN = 'arn:aws:iam::123456789012:role/TextractRole'   # This role is managed by AWS

def handler(event, _):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])

        print(f'Document detection for {bucket}/{key}')

        textract_client.start_document_text_detection(
            DocumentLocation={'S3Object': {'Bucket': bucket, 'Name': key}},
            NotificationChannel={'RoleArn': ROLE_ARN, 'SNSTopicArn': SNS_TOPIC_ARN})

You can find a copy of this code hosted over Gitlab.

As you can see, we receive a list of freshly uploaded files, and for each one of them, we ask Textract to do its magic. We also ask it to notify us over SNS when it has finished its work. We therefore need to create an SNS topic; how to do so is well explained in the official documentation.
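If you prefer to script this step, here is a short sketch; the topic name is chosen because, as far as I can tell, the AWS-managed Textract role is only allowed to publish to topics whose name starts with AmazonTextract:

import boto3

sns_client = boto3.client('sns', region_name='eu-west-1')

# Create the topic and print its ARN; this is the value to put into
# SNS_TOPIC_ARN in the Lambda function above.
topic = sns_client.create_topic(Name='AmazonTextract')
print(topic['TopicArn'])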

When we have finished, we should have something like this:

SNS topic

We copy the ARN of our freshly created topic and insert it in the script above in the variable SNS_TOPIC_ARN.

Now we need to actually create our Lambda function: once again the official documentation is our friend if we have never worked with AWS Lambda before.

Since the only requirement of the script is boto3, and it is included by default in Lambda, we don't need to create a custom package.

At least, this is usually the case :-) Unfortunately, while I am writing this post, boto3 on Lambda is at version boto3-1.9.42, while support for Textract landed only in boto3-1.9.138. We can check which version is currently on Lambda from this page, under Python Runtimes: if boto3 has been updated to a version >= 1.9.138, we don't have to do anything more than simply create the Lambda function. Otherwise, we have to include a newer version of boto3 in our Lambda function. But fear not! The official documentation explains how to create a deployment package.

We also need to link an IAM role to our Lambda function, which requires some additional permissions:

Of course, other than that, the function requires the standard permissions to be executed and to write to CloudWatch: AWS manages that for us.

We are almost there; we only need to create the trigger, and we can do that from the Lambda designer! From the designer we select S3 as the trigger, we set our textract_raw_files bucket, and we select All object create events as the Event type.
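The same trigger can be scripted instead of clicking through the designer; a sketch, where the function ARN is a placeholder and the function's resource policy is assumed to already allow S3 to invoke it:

import boto3

s3_client = boto3.client('s3')

# Equivalent of selecting "All object create events" in the Lambda designer.
# The Lambda ARN below is a placeholder for your own function.
s3_client.put_bucket_notification_configuration(
    Bucket='textract_raw_files',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': 'arn:aws:lambda:eu-west-1:123456789012:function:invoke-textract',
            'Events': ['s3:ObjectCreated:*'],
        }]
    })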

If we implemented everything correctly, we can now upload a PDF file to textract_raw_files, and in CloudWatch we should be able to see the log of the Lambda function, which should say something similar to Document detection for textract_raw_files/my_first_file.pdf.

Now we only need to read the extracted text; all the hard work has been done by AWS :-)

Read data from Textract

AWS Textract is kind enough to notify us when it has finished extracting data from the PDFs we provided: we create a Lambda function to intercept that notification, invoke AWS Textract, and save the result to S3.

The Lambda function also needs to support pagination of the results, so the code is a bit longer:

import json
import boto3

textract_client = boto3.client('textract')
s3_bucket = boto3.resource('s3').Bucket('textract_json_files')


def get_detected_text(job_id: str, keep_newlines: bool = False) -> str:
    """
    Giving job_id, return plain text extracted from input document.
    :param job_id: Textract DetectDocumentText job Id
    :param keep_newlines: if True, output will have same lines structure as the input document
    :return: plain text as extracted by Textract
    """
    max_results = 1000
    pagination_token = None
    finished = False
    text = ''

    while not finished:
        if pagination_token is None:
            response = textract_client.get_document_text_detection(JobId=job_id,
                                                                   MaxResults=max_results)
        else:
            response = textract_client.get_document_text_detection(JobId=job_id,
                                                                   MaxResults=max_results,
                                                                   NextToken=pagination_token)

        sep = ' ' if not keep_newlines else '\n'
        text += sep.join([x['Text'] for x in response['Blocks'] if x['BlockType'] == 'LINE'])

        if 'NextToken' in response:
            pagination_token = response['NextToken']
        else:
            finished = True

    return text


def handler(event, _):
    for record in event['Records']:
        message = json.loads(record['Sns']['Message'])
        job_id = message['JobId']
        status = message['Status']
        filename = message['DocumentLocation']['S3ObjectName']

        print(f'JobId {job_id} has finished with status {status} for file {filename}')

        if status != 'SUCCEEDED':
            # A failed job should not stop the remaining records from
            # being processed, so skip it instead of returning.
            continue

        text = get_detected_text(job_id)
        to_json = {'Document': filename, 'ExtractedText': text, 'TextractJobId': job_id}
        json_content = json.dumps(to_json).encode('UTF-8')
        output_file_name = filename.split('/')[-1].rsplit('.', 1)[0] + '.json'
        s3_bucket.Object(f'{output_file_name}').put(Body=bytes(json_content))



You can find a copy of this code hosted over Gitlab.

Again, this code has to be published as a Lambda function. As before, it shouldn't need any special configuration, but since it requires boto3 >= 1.9.138, we have to create a deployment package for as long as AWS doesn't update their Lambda runtime.

After we have uploaded the Lambda function, from the control panel we set SNS as the trigger, specifying the ARN of the SNS topic we created before - in our case, arn:aws:sns:eu-west-1:123456789012:AmazonTextract.

We also need to give the IAM role which executes the Lambda function new permissions, in addition to the ones it already has. In particular, we need:

This should be the final result:

Lambda Configuration

And that's all! Now we can simply drop any document in a supported format into the textract_raw_files bucket, and after some minutes we will find its content in the textract_json_files bucket! And the quality of the extraction is quite good.
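As a quick end-to-end smoke test, something like the following sketch can be used (the file name is an example, and the five-minute sleep is a crude stand-in for the time the pipeline needs):

import json
import time

import boto3

s3_client = boto3.client('s3')

# Drop a PDF into the input bucket...
s3_client.upload_file('my_first_file.pdf', 'textract_raw_files', 'my_first_file.pdf')

# ...give the pipeline some minutes to do its work...
time.sleep(300)

# ...and read back the extracted text from the output bucket.
obj = s3_client.get_object(Bucket='textract_json_files', Key='my_first_file.json')
result = json.loads(obj['Body'].read())
print(result['ExtractedText'][:200])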

Known limitations

Other than being available in just four regions, at least for the moment, AWS Textract has other known hard limitations:

It also has some soft limitations that make it unsuitable for mass ingestion:

So, if you need it for anything but testing, you should open a ticket to ask for higher limits, and maybe poke your point of contact at AWS to speed up the process.

That's all for today, I hope you found this article useful! For any comments, feedback, or criticism, write to me on Twitter (@rpadovani93) or drop an email at riccardo@rpadovani.com.

Regards,
R.

24 Jun 2019 7:00pm GMT

Full Circle Magazine: Full Circle Weekly News #136


OpenMandriva Lx 4.0 is here
https://betanews.com/2019/06/16/openmandriva-lx4-linux-amd/

KDE Plasma 5.16 Gets First Point Release
https://news.softpedia.com/news/kde-plasma-5-16-desktop-environment-gets-first-point-release-update-now-526455.shtml

Canonical Outs Important Security Update for All Ubuntu Releases
https://news.softpedia.com/news/canonical-outs-important-linux-kernel-security-update-for-all-ubuntu-releases-526440.shtml

Canonical Will Drop Support for 32-bit Architectures in Future Ubuntu Releases
https://news.softpedia.com/news/canonical-will-drop-support-for-32-bit-architectures-in-future-ubuntu-releases-526439.shtml

Canonical's Snap Store Adds 11 Distro Specific Installation Pages for Every Single App
https://www.forbes.com/sites/jasonevangelho/2019/06/14/canonicals-snap-store-adds-10-distro-specific-installation-pages-for-every-single-app/#2f9f4bf65448

Mozilla Patches Firefox Zero-Day Abused in the Wild
https://www.zdnet.com/article/mozilla-patches-firefox-zero-day-abused-in-the-wild/

Mozilla Patches Second Zero-Day Flaw This Week

https://thehackernews.com/2019/06/firefox-0day-vulnerability.html

Credits:
Ubuntu "Complete" sound: Canonical
Theme Music: From The Dust - Stardust

https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

24 Jun 2019 4:21pm GMT

23 Jun 2019


Simos Xenitellis: I am running Steam/Wine on Ubuntu 19.10 (no 32-bit on the host)

I like to take care of my desktop Linux, and I do so by not installing 32-bit libraries. If there are any old 32-bit applications, I prefer to install them in a LXD container, because in a LXD container you can install anything, and once you are done with it, you delete it and poof, it is gone forever!

In the following, I will show the actual commands to set up a LXD container on a system with an NVidia GPU, so that we can run graphical programs. Someone could take these and make some sort of easy-to-use GUI utility; note that you can write a GUI utility that uses the LXD API to interface with the system container, as in the sketch below.
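As a taste of that API, here is a minimal sketch using the pylxd Python bindings (my assumption, not something this post depends on); it does the same as the lxc launch and lxc exec commands used later:

from pylxd import Client

# Connect to the local LXD daemon over its Unix socket.
client = Client()

# Equivalent of: lxc launch ubuntu:18.04 steam
config = {
    'name': 'steam',
    'source': {
        'type': 'image',
        'mode': 'pull',
        'protocol': 'simplestreams',
        'server': 'https://cloud-images.ubuntu.com/releases',
        'alias': '18.04',
    },
}
container = client.containers.create(config, wait=True)
container.start(wait=True)

# Equivalent of: lxc exec steam -- uname -a
result = container.execute(['uname', '-a'])
print(result.stdout)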

Prerequisites

You are running Ubuntu 19.10.

You are using the snap package of LXD.

You have an NVidia GPU.

Setting up LXD (performed once)

Install LXD.

sudo snap install lxd

Set up LXD, accepting all defaults. Then add your non-root account to the lxd group, replacing myusername with your own username.

sudo lxd init
sudo usermod -a -G lxd myusername
newgrp lxd

You have set up LXD. Now you can create containers.

Creating the system container

Launch a system container. You can create as many as you wish. This one we will call steam, and we will put Steam in it.

 lxc launch ubuntu:18.04 steam

Create a GPU passthrough device for your GPU.

lxc config device add steam gt2060 gpu

Create a proxy device that shares the X11 Unix socket of the host with this container. The proxy device is called X0. The abstract Unix socket @/tmp/.X11-unix/X0 of the host is proxied into the container. The 1000/1000 are the UID and GID of your desktop user on the host.

lxc config device add steam X0 proxy listen=unix:@/tmp/.X11-unix/X0 connect=unix:@/tmp/.X11-unix/X0 bind=container security.uid=1000 security.gid=1000 

Get a shell into the system container.

lxc exec steam -- sudo --user ubuntu --login

Add the NVidia 430 driver to this Ubuntu 18.04 LTS container, using the PPA. The driver in the container has to match the driver on the host; this is an NVidia requirement.

sudo add-apt-repository ppa:graphics-drivers/ppa  

Install the NVidia libraries, both 32-bit and 64-bit. Also install utilities to test X11, OpenGL, and Vulkan.

sudo apt install -y libnvidia-gl-430 
sudo apt install -y libnvidia-gl-430:i386
sudo apt install -y x11-apps mesa-utils vulkan-utils  

Set the $DISPLAY, and add it to ~/.profile as well so that it persists.

export DISPLAY=:0
echo export DISPLAY=:0 >> ~/.profile

Enjoy by testing X11, OpenGL and Vulkan.

xclock 
glxinfo
vulkaninfo

xclock X11 application running in a LXD container

ubuntu@steam:~$ glxinfo
 name of display: :0
 display: :0  screen: 0
 direct rendering: Yes
 server glx vendor string: NVIDIA Corporation
 server glx version string: 1.4
 server glx extensions:
     GLX_ARB_context_flush_control, GLX_ARB_create_context, 
...
ubuntu@steam:~$ vulkaninfo 
===========
VULKANINFO
===========

Vulkan Instance Version: 1.1.101


Instance Extensions:
====================
Instance Extensions    count = 16
     VK_EXT_acquire_xlib_display         : extension revision  1
...

The system is now ready to install Steam, and also Wine!

Installing Steam

We grab the deb package of Steam and install it.

wget https://steamcdn-a.akamaihd.net/client/installer/steam.deb
sudo dpkg -i steam.deb
sudo apt install -f

Then, we run it.

steam

Here is some sample output.

ubuntu@steam:~$ steam
 Running Steam on ubuntu 18.04 64-bit
 STEAM_RUNTIME is enabled automatically
 Pins up-to-date!
 Installing breakpad exception handler for appid(steam)/version(0)
 Installing breakpad exception handler for appid(steam)/version(1.0)
 Installing breakpad exception handler for appid(steam)/version(1.0)
...

Installing Wine

Here is how you install Wine in the container.

sudo dpkg --add-architecture i386 
wget -nc https://dl.winehq.org/wine-builds/winehq.key
sudo apt-key add winehq.key
# Add the WineHQ repository for Ubuntu 18.04 (bionic); without it,
# apt cannot find the winehq-stable package.
sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ bionic main'
sudo apt update
sudo apt install --install-recommends winehq-stable

Conclusion

There are options to run legacy 32-bit software, and here we showed how to do that using LXD containers. We picked NVidia (closed-source drivers), which entails a bit of extra difficulty. You can create many system containers and put all sorts of legacy software in them. Your desktop (host) remains clean, and when you are done with a legacy app, you can easily remove the container and it is gone!

https://blog.simos.info/

23 Jun 2019 10:24pm GMT

22 Jun 2019


Costales: Podcast Ubuntu y otras hierbas S03E06: Huawei y Android; IoT ¿más intrusión en los hogares?

Paco Molinero, Fernando Lanero, and Marcos Costales debate the controversy between Huawei and the United States government. We also talk about the privacy and security problems of devices connected to the Internet of Things.

Ubuntu y otras hierbas
Listen to us on:

22 Jun 2019 1:58pm GMT

21 Jun 2019


Jonathan Riddell: Plasma Vision

The Plasma Vision was written a couple of years ago: a short text saying what Plasma is and hopes to create, and defining our approach to making a useful and productive work environment for your computer. Because of creative differences it was never promoted or used properly. But in my quest to make KDE look as up to date in its presence on the web as it does on the desktop, I've got the Plasma sprinters, who are meeting in Valencia this week, to agree to adding it to the KDE Plasma webpage.

21 Jun 2019 2:19pm GMT

20 Jun 2019


Ubuntu Podcast from the UK LoCo: S12E11 – 1942

This week we've been to FOSS Talk Live and created games in Bash. We have a little LXD love-in and discuss 32-bit Intel being dropped from Ubuntu 19.10. OggCamp tickets are on sale, and we round up some tech news.

It's Season 12 Episode 11 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week's show:

That's all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there's a topic you'd like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

20 Jun 2019 3:00pm GMT

18 Jun 2019


Elizabeth K. Joseph: Building a PPA for s390x

About 20 years ago a few clever, nerdy folks got together and ported Linux to the mainframe (s390x architecture). The reasons included "because it's there" and others you'd expect from technology enthusiasts, but if you read far enough, you'll learn that they also saw a business case, one which has been realized today. You can read more about that history over on Linas Vepstas' Linux on the IBM ESA/390 Mainframe Architecture.

Today the s390x architecture is not only officially supported by Ubuntu, Red Hat Enterprise Linux (RHEL), and SUSE Linux Enterprise Server (SLES), but there's an entire series of IBM Z mainframes devoted to only running Linux: that's LinuxONE. At the end of April I joined IBM to lend my Linux expertise to working on these machines and to spreading the word about them to my fellow infrastructure architects and developers.

As it is its own architecture (not the x86 that we're accustomed to), compiled code needs to be re-compiled in order to run on the s390x platform. In the case of Ubuntu, the work has already been done to get a large chunk of the Ubuntu repository ported, so you can now run thousands of Linux applications on a LinuxONE machine. To do this effectively, there's a team at Canonical responsible for this port, and they have access to an IBM Z server to do the compiling.

But the most interesting thing to you and me? They also lend the power of this machine to support community members, by allowing them to build PPAs as well!

By default, Launchpad builds PPAs for i386 and amd64, but if you select "Change details" of your PPA, you're presented with a list of other architectures you can target.

Last week I decided to give this a spin with a super simple package: A "Hello World" program written in Go. To be honest, the hardest part of this whole process is creating the Debian package, but you have to do that regardless of what kind of PPA you're creating and there's copious amounts of documentation on how to do that. Thankfully there's dh-make-golang to help the process along for Go packages, and within no time I had a source package to upload to Launchpad.

From there it was as easy as clicking the "IBM System z (s390x)" box under "Change details" and the builds were underway, along with build logs. Within a few minutes all three packages were built for my PPA!

Now, mine was the most simple Go application possible, so when coupled with the build success, I was pretty confident that it would work. Still, I hopped on my s390x Ubuntu VM and tested it.

It worked! But aren't I lucky: as an IBM employee, I have access to s390x Linux VMs.

I'll let you in on a little secret: IBM has a series of mainframe-driven security products in the cloud: IBM Cloud Hyper Protect Services. One of these services is Hyper Protect Virtual Servers, which are currently Experimental and you can apply for access. Once granted access, you can launch an Ubuntu 18.04 VM for free to test your application, or do whatever other development or isolation testing you'd like on a VM for a limited time.

If this isn't available to you, there's also the LinuxONE Community Cloud. It's also a free VM that can be used for development, but as of today the only distributions you can automatically provision are RHEL or SLES. You won't be able to test your deb package on these, but you can test your application directly on one of these platforms to be sure the code itself works on Linux on s390x before creating the PPA.

And if you're involved with an open source project that's more serious about a long-term, Ubuntu-based development platform on s390x, drop me an email at lyz@ibm.com so we can have a chat!

18 Jun 2019 2:59pm GMT

Santiago Zarate: Permission denied for hugepages in QEMU without libvirt

So, say you're running qemu, and decided to use hugepages. Nice, isn't it? It helps with performance and stuff. However, a wild wall appears!

 QEMU: qemu-system-aarch64: can't open backing store /dev/hugepages/ for guest RAM: Permission denied

This basically means that you're using the amazing -mem-path /dev/hugepages, and that QEMU running as an unprivileged user can't write there… This is how it looked for me:

sudo -u _openqa-worker qemu-system-aarch64 -device virtio-gpu-pci -m 4094 -machine virt,gic-version=host -cpu host \ 
  -mem-prealloc -mem-path /dev/hugepages -serial mon:stdio  -enable-kvm -no-shutdown -vnc :102,share=force-shared \ 
  -cdrom openSUSE-Tumbleweed-DVD-aarch64-Snapshot20190607-Media.iso \ 
  -pflash flash0.img -pflash flash1.img -drive if=none,file=opensuse-Tumbleweed-aarch64-20190607-gnome-x11@aarch64.qcow2,id=hd0 \ 
  -device virtio-blk-device,drive=hd0

The machine tries to start, but ultimately I get that dreadful message. You can simply chmod the directory or use a udev rule and get away with it; it's quick and does the job. There are also a few options to solve this using libvirt. However, if you're not using hugeadm to manage those pools and instead let the operating system take care of it, you can look to /usr/lib/systemd/system/dev-hugepages.mount. Since trying to add a udev rule failed for a colleague of mine, I decided to use the systemd approach, ending up with the following:


[Unit]
Description=Systemd service to fix hugepages + qemu ram problems.
After=dev-hugepages.mount

[Service]
Type=simple
ExecStart=/usr/bin/chmod o+w /dev/hugepages/

[Install]
WantedBy=multi-user.target

18 Jun 2019 12:00am GMT

17 Jun 2019


The Fridge: Ubuntu Weekly Newsletter Issue 583

Welcome to the Ubuntu Weekly Newsletter, Issue 583 for the week of June 9 - 15, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

17 Jun 2019 10:21pm GMT

Full Circle Magazine: Full Circle Weekly News #135


Linux Command Line Editors Vulnerable to High Severity Bug
https://threatpost.com/linux-command-line-editors-high-severity-bug/145569/

KDE 5.16 Is Now Available for Kubuntu
https://news.softpedia.com/news/kde-plasma-5-16-desktop-is-now-available-for-kubuntu-and-ubuntu-19-04-users-526369.shtml

Debian 10 Buster-based Endless OS 3.6.0 Linux Distribution Now Available
https://betanews.com/2019/06/12/debian-10-buster-endless-os-linux/

Introducing Matrix 1.0 and the Matrix.org Foundation
https://www.pro-linux.de/news/1/27145/matrix-10-und-die-matrixorg-foundation-vorgestellt.html

System 76's Supercharged Gazelle Laptop is Finally Available
https://betanews.com/2019/06/13/system76-linux-gazelle-laptop/

Lenovo Thinkpad P Laptops Are Available with Ubuntu
https://www.omgubuntu.co.uk/2019/06/lenovo-thinkpad-p-series-ubuntu-preinstalled

Atari VCS Linux-powered Gaming Console Is Now Available for Pre-order
https://news.softpedia.com/news/atari-vcs-linux-powered-gaming-console-is-now-available-for-pre-order-for-249-526387.shtml

Credits:
Ubuntu "Complete" sound: Canonical
Theme Music: From The Dust - Stardust

https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

17 Jun 2019 3:46pm GMT

Simos Xenitellis: How to run LXD containers in WSL2

Microsoft announced in May that the new version of the Windows Subsystem for Linux (WSL 2) will be running on a Linux kernel, itself running alongside the Windows kernel in Windows.

In June, the first version of WSL2 was made available, provided you enroll your Windows 10 installation in the Windows Insider program and select to receive the bleeding-edge updates (Fast ring).

In this post we are going to see how to get LXD running in WSL2. In a nutshell, LXD does not work out of the box yet, but it is versatile enough that we can make it work even though the default Linux kernel in Windows is not fully suitable yet.

Prerequisites

You need to have Windows 10, then join the Windows Insider program (Fast ring).

Then, follow the instructions on installing the components for WSL2 and switching your containers to WSL2 (if you have been using WSL1 already).

Install the Ubuntu container image from the Windows Store.

At the end, when you run wsl in CMD.exe or in Powershell, you should get a Bash prompt.

The problems

Here we list the issues that prevent LXD from running out of the box. Skip to the next section to get LXD going.

In WSL2, there is a modified Linux 4.19 kernel running in Windows, inside Hyper-V. It looks like this is a cut-down/optimized version of Hyper-V that is good enough for the needs of Linux.

The Linux kernel in WSL2 has a specific configuration, and some of the things that LXD needs are missing. Specifically, here is the output of lxc-checkconfig.

ubuntu@DESKTOP-WSL2:~$ lxc-checkconfig
 --- Namespaces ---
 Namespaces: enabled
 Utsname namespace: enabled
 Ipc namespace: enabled
 Pid namespace: enabled
 User namespace: enabled
 Network namespace: enabled

--- Control groups ---
 Cgroups: enabled

Cgroup v1 mount points:
 /sys/fs/cgroup/cpuset
 /sys/fs/cgroup/cpu
 /sys/fs/cgroup/cpuacct
 /sys/fs/cgroup/blkio
 /sys/fs/cgroup/memory
 /sys/fs/cgroup/devices
 /sys/fs/cgroup/freezer
 /sys/fs/cgroup/net_cls
 /sys/fs/cgroup/perf_event
 /sys/fs/cgroup/hugetlb
 /sys/fs/cgroup/pids
 /sys/fs/cgroup/rdma

Cgroup v2 mount points:

 Cgroup v1 systemd controller: missing
 Cgroup v1 clone_children flag: enabled
 Cgroup device: enabled
 Cgroup sched: enabled
 Cgroup cpu account: enabled
 Cgroup memory controller: enabled
 Cgroup cpuset: enabled

--- Misc ---
 Veth pair device: enabled, not loaded
 Macvlan: enabled, not loaded
 Vlan: missing
 Bridges: enabled, not loaded
 Advanced netfilter: enabled, not loaded
 CONFIG_NF_NAT_IPV4: enabled, not loaded
 CONFIG_NF_NAT_IPV6: enabled, not loaded
 CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
 CONFIG_IP6_NF_TARGET_MASQUERADE: missing
 CONFIG_NETFILTER_XT_TARGET_CHECKSUM: missing
 CONFIG_NETFILTER_XT_MATCH_COMMENT: missing
 FUSE (for use with lxcfs): enabled, not loaded

--- Checkpoint/Restore ---
 checkpoint restore: enabled
 CONFIG_FHANDLE: enabled
 CONFIG_EVENTFD: enabled
 CONFIG_EPOLL: enabled
 CONFIG_UNIX_DIAG: enabled
 CONFIG_INET_DIAG: enabled
 CONFIG_PACKET_DIAG: enabled
 CONFIG_NETLINK_DIAG: enabled
 File capabilities:

Note : Before booting a new kernel, you can check its configuration
 usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

ubuntu@DESKTOP-WSL2:~$           

The missing systemd-related mount point is OK, in the sense that systemd does not currently work in WSL anyway (either WSL1 or WSL2). At some point it will get fixed in WSL2; there are pending issues about this on GitHub. Talking of systemd: we cannot yet use the snap package of LXD, because snapd depends on systemd, and no snapd means no snap package of LXD.

The missing netfilter kernel modules mean that we cannot use managed LXD network interfaces (the ones with the default name lxdbr0). If you try to create a managed network interface, you will get the following error.

Error: Failed to create network 'lxdbr0': Failed to run: iptables -w -t filter -I INPUT -i lxdbr0 -p udp --dport 67 -j ACCEPT -m comment --comment generated for LXD network lxdbr0: iptables: No chain/target/match by that name.

For completeness, here is the LXD log. Notably, AppArmor support is missing from the Linux kernel, and there is no CGroup network class controller.

ubuntu@DESKTOP-WSL2:~$ cat /var/log/lxd/lxd.log
 t=2019-06-17T10:17:10+0100 lvl=info msg="LXD 3.0.3 is starting in normal mode" path=/var/lib/lxd
 t=2019-06-17T10:17:10+0100 lvl=info msg="Kernel uid/gid map:"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - u 0 0 4294967295"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - g 0 0 4294967295"
 t=2019-06-17T10:17:10+0100 lvl=info msg="Configured LXD uid/gid map:"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - u 0 100000 65536"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - g 0 100000 65536"
 t=2019-06-17T10:17:10+0100 lvl=warn msg="AppArmor support has been disabled because of lack of kernel support"
 t=2019-06-17T10:17:10+0100 lvl=warn msg="Couldn't find the CGroup network class controller, network limits will be ignored."
 t=2019-06-17T10:17:10+0100 lvl=info msg="Kernel features:"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - netnsid-based network retrieval: no"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - unprivileged file capabilities: yes"
 t=2019-06-17T10:17:10+0100 lvl=info msg="Initializing local database"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Starting /dev/lxd handler:"
 t=2019-06-17T10:17:14+0100 lvl=info msg=" - binding devlxd socket" socket=/var/lib/lxd/devlxd/sock
 t=2019-06-17T10:17:14+0100 lvl=info msg="REST API daemon:"
 t=2019-06-17T10:17:14+0100 lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket
 t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing global database"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing storage pools"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing networks"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Pruning leftover image files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done pruning leftover image files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Loading daemon configuration"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Pruning expired images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done pruning expired images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Expiring log files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done expiring log files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Updating images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done updating images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Updating instance types"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done updating instance types"
 ubuntu@DESKTOP-WSL2:~$                     

Having said all that, let's get LXD working.

Configuring LXD on WSL2

Let's get a shell into WSL2.

C:\> wsl
ubuntu@DESKTOP-WSL2:~$

The apt package of LXD is already available in the Ubuntu 18.04.2 image found in the Windows Store. However, the LXD service is not running by default, and we will need to start it.

ubuntu@DESKTOP-WSL2:~$ sudo service lxd start

Now we can run sudo lxd init to configure LXD. We accept the defaults (btrfs storage driver, 50GB default storage). For networking, however, we avoid creating the local network bridge, and instead we configure LXD to use an existing bridge. The existing-bridge configuration uses macvlan, which avoids the error above, although macvlan does not actually work yet in WSL2 anyway.

ubuntu@DESKTOP-WSL2:~$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=50GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: eth0
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
 config: {}
 networks: []
 storage_pools:
 - config:
     size: 50GB
   description: ""
   name: default
   driver: btrfs
 profiles:
 - config: {}
   description: ""
   devices:
   eth0:
     name: eth0
     nictype: macvlan
     parent: eth0
     type: nic
   root:
     path: /
     pool: default
     type: disk
   name: default
 cluster: null 

ubuntu@DESKTOP-WSL2:~$

For some reason, LXD does not manage to mount sys for the containers, therefore we need to perform this ourselves.

ubuntu@DESKTOP-WSL2:~$ sudo mkdir /usr/lib/x86_64-linux-gnu/lxc/sys
ubuntu@DESKTOP-WSL2:~$ sudo mount sysfs -t sysfs /usr/lib/x86_64-linux-gnu/lxc/sys

The containers will not have direct Internet connectivity, therefore we need to use a Web proxy. In our case, it suffices to use privoxy. Let's install it. privoxy uses port 8118 by default, which means that if the containers can somehow get access to port 8118 on the host, they get access to the Internet!

ubuntu@DESKTOP-WSL2:~$ sudo apt update
...
ubuntu@DESKTOP-WSL2:~$ sudo apt install -y privoxy
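Before wiring up the containers, we can quickly check from the host that privoxy answers on port 8118; a small Python sketch (the target URL is arbitrary):

import urllib.request

# Route a request through privoxy on the host to confirm it is up.
proxy = urllib.request.ProxyHandler({'http': 'http://127.0.0.1:8118/',
                                     'https': 'http://127.0.0.1:8118/'})
opener = urllib.request.build_opener(proxy)
print(opener.open('http://ubuntu.com/', timeout=10).status)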

Now, we are good to go! In the following we create a container with a Web server, and view it using Internet Explorer. Yes, IE has two uses: 1. to download Firefox, and 2. to view the Web server in the LXD container as evidence that all this is real.

Setting up a Web server in a LXD container in WSL2

Let's create our first container, running Ubuntu 18.04.2. It does not get an IP address from the network because macvlan is not working; the container has no Internet connectivity!

ubuntu@DESKTOP-WSL2:~$ lxc launch ubuntu:18.04 mycontainer
Creating mycontainer
Starting mycontainer

ubuntu@DESKTOP-WSL2:~$ lxc list
+-------------+---------+------+------+------------+-----------+
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-------------+---------+------+------+------------+-----------+
| mycontainer | RUNNING |      |      | PERSISTENT | 0         |
+-------------+---------+------+------+------------+-----------+

ubuntu@DESKTOP-WSL2:~$

The container has no Internet connectivity, so we need to give it access to port 8118 on the host. But how can we do that, if the container does not even have network connectivity with the host? We can do this using a LXD proxy device. Run the following on the host. The command creates a proxy device called myproxy8118 that proxies TCP port 8118 between the host and the container (the binding happens in the container because the port already exists on the host).

ubuntu@DESKTOP-WSL2:~$ lxc config device add mycontainer myproxy8118 proxy listen=tcp:127.0.0.1:8118 connect=tcp:127.0.0.1:8118 bind=container
Device myproxy8118 added to mycontainer

ubuntu@DESKTOP-WSL2:~$

Now, get a shell in the container and configure the proxy!

ubuntu@DESKTOP-WSL2:~$ lxc exec mycontainer bash
root@mycontainer:~# export http_proxy=http://localhost:8118/
root@mycontainer:~# export https_proxy=http://localhost:8118/

It's time to install and start nginx!

root@mycontainer:~# apt update
...
root@mycontainer:~# apt install -y nginx
...
root@mycontainer:~# service nginx start

nginx is installed. For a finer touch, let's edit the default HTML file of the Web server a bit, so that it is evident that the Web server runs in the container. Add some text you think suitable, using the command

root@mycontainer:~# nano /var/www/html/index.nginx-debian.html

Up to now, there is a Web server running in the container. This container is not accessible by the host, and obviously not by Windows either. So, how can we view the website from Windows? By creating an additional proxy device. The command creates a proxy device called myproxy80 that proxies TCP port 80 between the host and the container (the binding happens on the host because the port already exists in the container).

root@mycontainer:~# logout
ubuntu@DESKTOP-WSL2:~$ lxc config device add mycontainer myproxy80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 bind=host

Finally, find the IP address of your WSL2 Ubuntu host (hint: use ifconfig) and connect to that IP using your Web browser.

Conclusion

We managed to install LXD in WSL2 and got a container to start. Then, we installed a Web server in the container and viewed the page from Windows.

I hope future versions of WSL2 will be friendlier to LXD. In terms of networking, more work is needed to make it work out of the box. In terms of storage, btrfs is supported (over a loop file) and it is fine.

https://blog.simos.info/

17 Jun 2019 2:22pm GMT

Stephen Michael Kellat: So That Happened...

I previously made a call for folks to check in on a net so I could count heads. It probably was not the most opportune timing but it was what I had available. You can listen to the full net at https://archives.anonradio.net/201906170000_sdfarc.mp3 and you'll find my after-net call to all Ubuntu Hams at roughly 44 minutes and 50 seconds into the recording.

This was a first attempt. The folks at SDF were perfectly fine with me making the attempt. The net topic for the night was "special projects" we happened to be undertaking.

Now you might wonder what I might be doing in terms of special projects. That bit is special. Sunspots are a bit non-existent at the moment so I have been fiddling around with listening for distant stations on the AM broadcast band which starts in the United States at 530 kHz and ends at 1710 kHz. From my spots in Ashtabula I end up hearing some fairly distant stations ranging from KYW 1060 in Philadelphia to WCBS 880 in New York City to WPRR 1680 in Ada, Michigan. When I am out driving Interstate Route 90 in the mornings during the winter I have had the opportunity to hear stations such as WSM 650 broadcasting from the vicinity of the Grand Old Opry in Nashville, Tennessee. One time I got lucky and heard WSB 750 out of Atlanta while driving when conditions were right.

These were miraculous feats of physics. WolframAlpha would tell you that the distance between Ashtabula and Atlanta is about 593 miles/955 kilometers. In the computing realm we work very hard to replicate the deceptively simple. A double-sideband non-suppressed carrier amplitude modulated radio signal is one of the simplest voice transmissions that can be made. The receiving equipment for such is often just as simple. For all the infrastructure it would take to route a live stream over a distance somewhat further than that between Derry and London proper, far less would be needed for the one-way analog signal.

Although there is Digital Audio Broadcasting across Europe, we really still do not have it adopted across much of the United States. A primary problem is that it works best in areas with higher population density than we have in the USA. So far we have various trade names for IBOC (that is to say, in-band on-channel) subcarriers giving us hybrid signals. Digital-only IBOC has been tested at WWFD in Maryland, and there was a proposal to the Federal Communications Commission to make a permanent rules change to make this possible. It appears in the American experience, though, that the push is more towards Internet-connected products like iHeartRadio and Spotify than towards the legacy media outlets that have public service obligations as well as emergency alerting obligations.

I am someone who considers the Internet fairly fragile, as evidenced most recently by the retailer Target suffering a business disaster when it was unable to accept payments due to communications failures. I am not against technology advances, though. Keeping connections to the technological ways of old, as well as sometimes having cash in the wallet and knowing how to write a check, seem to be skills that are still useful in our world today.

Creative Commons License
So That Happened... by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

17 Jun 2019 2:33am GMT