28 Feb 2021

Planet Ubuntu

Martin-Éric Racine: Shipping Debian with GNOME X.XX.0 is an extremely bad idea

Since the freeze is slowly creeping in, now is the time to revisit my pet peeve with Debian's release process: publishing a new Debian release as soon as GNOME publishes a new X.XX.0 version. This is an extremely bad idea: X.XX.0 releases tend to lack polish, their translations are not up to date, and several silly bugs that hamper the user experience (what the Ubuntu guys call "paper cuts") remain. Those issues tend to be fixed in the GNOME X.XX.1, X.XX.2, etc. bugfix releases that follow. However, Debian has a policy of not pushing non-security updates onto a stable distribution. In this particular case, there are only two valid alternatives: either release Bullseye with GNOME 3.38.X, or change the Debian policy to allow pushing 3.40.X bugfix releases via bullseye-updates.

28 Feb 2021 2:46pm GMT

Alan Pope: Mix and Match Rocketbooks

Last month I 'discovered' Rocketbooks. Well, now I'm in deep! I've picked up a bunch of coloured pens, a large folio cover for the full size Rocketbook, and now, I've grabbed some more! Wait! Surely the point of Rocketbooks is that they're re-usable, so you don't need to buy many of them. Yeah, true. However, as they're not super expensive, I can actually leave one at my work desk upstairs for business-related things, and another downstairs in the kitchen.

28 Feb 2021 12:00pm GMT

27 Feb 2021

Planet Ubuntu

Simos Xenitellis: How to run a Windows virtual machine on LXD on Linux

LXD is a hypervisor to run both system containers (a la LXC) and virtual machines (a la QEMU) on Linux distributions. System containers are lightweight because they rely solely on the Linux kernel for their virtualization features, and they support Linux guests only. Virtual machines, however, can run other operating systems. In this post, we see how to run Windows in an LXD virtual machine.

The benefit of running Windows through LXD is that you use the familiar LXD workflow, and it takes away some of the complexity of the other ways of running a VM (like virt-manager).

The content of this tutorial came from https://discuss.linuxcontainers.org/t/running-virtual-machines-with-lxd-4-0/7519. Look towards the end of the thread, where Stéphane Graber describes how to simplify the process compared to the instructions at the top of that thread.

The prerequisite is that you have LXD configured and running.
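
If LXD is not set up yet, a minimal setup might look like the following. This is just a sketch that assumes the snap package and accepts the default storage and network settings; run lxd init interactively if you want to customise them.

sudo snap install lxd    # skip if LXD is already installed
sudo lxd init --auto     # accept sensible defaults
lxc list                 # should print an (empty) instance table without errors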

In the following, we

  1. Download a Windows 10 ISO from Microsoft.
  2. Prepare the ISO using distrobuilder (we do this once per ISO).
  3. Start the virtual machine from that prepared ISO and go through the installation.

Download a Windows 10 ISO

You can download a Windows 10 ISO from Microsoft, through the following URL.

https://www.microsoft.com/en-us/software-download/windows10ISO

You will be prompted to select the Windows 10 edition, then a language for the ISO, and finally whether to download the 32-bit or 64-bit version. Select your preferred language and choose the 64-bit version.
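
Optionally, you can sanity-check the download against the SHA-256 hash that Microsoft publishes for the ISO (when shown on the download page); the filename below is just the example used in this post.

sha256sum Win10_1809Oct_English_x64.iso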

Once the ISO file has been downloaded, move to the next section.

Prepare the ISO using distrobuilder

Install the distrobuilder package. It is available as a snap package, using the classic type of confinement.

sudo snap install distrobuilder --classic

Then, run distrobuilder over the Windows 10 ISO as follows. Obviously, the filename of the ISO file might be different in your case. Adapt and overcome.

$ sudo distrobuilder repack-windows Win10_1809Oct_English_x64.iso Win10_1809Oct_English_x64-distrobuilder.iso 
2021/02/13 23:15:57 Mounting Windows ISO ...
2021/02/13 23:15:57 Downloading drivers ISO ...
2021/02/13 23:24:19 Mounting driver ISO ...
2021/02/13 23:24:20 Injecting drivers into boot.wim (index 2)...
2021/02/13 23:24:24 Injecting drivers into install.wim (index 1)...
2021/02/13 23:25:01 Injecting drivers into install.wim (index 2)...
2021/02/13 23:25:09 Injecting drivers into install.wim (index 3)...
2021/02/13 23:25:17 Injecting drivers into install.wim (index 4)...
2021/02/13 23:25:25 Injecting drivers into install.wim (index 5)...
2021/02/13 23:25:33 Injecting drivers into install.wim (index 6)...
2021/02/13 23:25:41 Injecting drivers into install.wim (index 7)...
2021/02/13 23:25:49 Injecting drivers into install.wim (index 8)...
2021/02/13 23:25:57 Injecting drivers into install.wim (index 9)...
2021/02/13 23:26:05 Injecting drivers into install.wim (index 10)...
2021/02/13 23:26:13 Injecting drivers into install.wim (index 11)...
2021/02/13 23:26:21 Generating new ISO ...
$ 

In my case, it generated a new ISO named Win10_1809Oct_English_x64-distrobuilder.iso. Now that we have the prepared ISO, we can start the installation.

Start the Windows 10 installation

Run the following commands to initialize the VM, to configure (that is, increase) the allocated disk space, and finally to attach the full path of your prepared ISO file. Note that the installation of Windows 10 takes about 10GB (before updates), so a 30GB disk gives you about 20GB of free space.

lxc init win10 --empty --vm -c security.secureboot=false
lxc config device override win10 root size=30GiB
lxc config device add win10 iso disk source=/home/myusername/Downloads/Win10_1809Oct_English_x64-distrobuilder.iso boot.priority=10

Now the VM win10 has been configured and is ready to be started. The following command starts the virtual machine and opens up a VGA console so that we can go through the graphical installation of Windows.

lxc start win10 --console=vga

Windows 10 is booting up in an LXD virtual machine. The LXD logo is shown above.

Note that the installation of Windows 10 does a few restarts before you end up on the desktop. Those will appear as errors in your terminal that look like the following.

(remote-viewer:13539): dbind-WARNING **: 15:51:45.744: Couldn't register with accessibility bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

(remote-viewer:13539): GLib-GIO-CRITICAL **: 15:51:45.765: g_dbus_proxy_new_sync: assertion 'G_IS_DBUS_CONNECTION (connection)' failed

You just need to launch the console again, as follows. To start the VM we ran lxc start, but to re-attach to the already running VM (since it restarted on its own), we run lxc console.

lxc console win10 --type=vga

It will take a couple of lxc console re-attachments before you get to the Windows desktop.

Windows 10 booted up in an LXD virtual machine.

Notes

  1. When installing Windows, you will be prompted to enter personal information. If this is an expendable virtual machine, you can get away with creating a new outlook.com account and adding some telephone number (it does not verify the telephone number). You will be prompted to set a PIN as well, and you will be asked for that PIN whenever you use this installation.
  2. To shut down the VM, click Shut down from the Windows desktop. To start it again at a later time, run the following (as we did before).
    lxc start win10 --console=vga
  3. When the VM is stopped, you may remove the Windows 10 ISO with the following command; iso is the name of the ISO device we set up earlier.
    lxc config device remove win10 iso
  4. This VM setup does not have sound. It might be possible to add it with a QEMU option, but there are no virtio drivers yet for a virtualized audio card. It is not an issue, though, if you go for audio passthrough over Remote Desktop, using a tool like Remmina (see the sketch after this list). For this to work, you need to enable Remote Desktop in Windows, and you should have installed Windows Professional (instead of Windows Home).
  5. By default, the VM gets a private IP address from the lxdbr0 network range. At first I could not ping that IP address from the host or from a container and assumed it was a VM issue, although the VM clearly had networking. Edit: it is a Windows Firewall issue. You can enable an inbound IPv4 ICMP rule; we chose to enable an existing rule rather than compose one from scratch (a command-line alternative is sketched after this list).
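
For note 4, a rough sketch of the audio passthrough route: once Remote Desktop is enabled inside Windows, you can connect from the host with Remmina (set the Sound option to local) or with FreeRDP's command-line client. The address and username below are placeholders.

lxc list win10                                 # note the VM's IPv4 address
xfreerdp /v:10.x.x.x /u:MyWindowsUser /sound   # connect over RDP and redirect audio to the host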
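
For note 5, if you prefer the command line over the Windows Firewall GUI, a rule like the following (run inside the Windows VM from an elevated Command Prompt) should allow inbound IPv4 ping; treat it as a sketch rather than the exact rule enabled here.

netsh advfirewall firewall add rule name="Allow ICMPv4 echo" protocol=icmpv4:8,any dir=in action=allow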

Summary

We have seen how to run Windows 10 in an LXD virtual machine. At the moment, these are the easiest instructions. Once more features are added, I'll update this page.

blog.simos.info/

27 Feb 2021 7:18pm GMT

Alan Pope: Snapcraft Clinic Successes

On Thursday I mentioned we were restarting the Snapcraft Clinic. Basically, we stand up a regular video call with engineers from the snap and snapcraft teams and us from Snap Advocacy. Developers of applications and publishers of snaps are invited to join to troubleshoot. Nothing especially secret or private was discussed, but as we don't record or stream the calls, and I don't have direct permission to mention the applications or people involved, I'll keep this a little vague.

27 Feb 2021 12:00pm GMT

26 Feb 2021

Planet Ubuntu

Full Circle Magazine: Full Circle Magazine #166

This month:
* Command & Conquer : LMMS
* How-To : Python, Podcast Production, and Make a Budget
* Graphics : Inkscape
* Linux Loopback
* Everyday Ubuntu : RetroComputing TRS-80
* Review : Ubuntu Web 20.04.1 & Unetbootin
* Ubuntu Games : Cyber Shadow
plus: News, My Opinion, The Daily Waddle, Q&A, and more.

Get it while it's hot: https://fullcirclemagazine.org/issue-166/

26 Feb 2021 2:52pm GMT

Ubuntu Blog: What is virtualisation? The basics

Virtualisation plays a huge role in almost all of today's fastest-growing software-based industries. It is the foundation for most cloud computing, the go-to methodology for cross-platform development, and has made its way all the way to 'the edge' and the Internet of Things (IoT). This article is the first in a series where we explain what virtualisation is and how it works. Here, we start with the broad strokes; anything that goes beyond the scope of a 101 article will be covered in subsequent blog posts. Let's get into it.


What is virtualisation?

Virtualisation technology creates virtualised hardware environments. It uses software to create an 'abstraction layer' on top of hardware, dividing a single computer's resources, such as processors, memory and storage, between multiple virtual computers. The result can be virtual machines (VMs) or containers. Both allow you to create isolated, secure environments for testing, debugging, legacy software, and any other need that does not require all of the resources of the physical hardware.

Today, virtualisation is standard practice in enterprise IT architectures, in software development and at the edge. You can virtualise numerous parts of a computer's 'stack' for a myriad of reasons. You can virtualise:

Each of these scenarios enables providers to serve users with individual VMs, and means users need only the exact computational resources necessary for a given workload. This could be anything from virtualising single machines to more complex setups like full virtual data centre environments.

What is a virtual machine?

A virtual machine is a software-defined computer that runs workloads and applications on virtualised hardware. Each VM runs its own operating system (the guest OS) and behaves like an independent computer utilising a portion of the underlying computer's resources (the host). VMs allow users to run numerous different operating systems on one machine, each with potentially different applications and libraries inside. There are numerous tools and methodologies for managing VMs in different places; the first layer of management comes from either a 'hypervisor' or 'application virtualisation'.


What is a hypervisor?

A hypervisor is a layer of software that sits between VMs and hardware to manage resource allocation and general VM-to-hardware communication, and to make sure VMs don't interfere with each other. There are two types of hypervisors: type 1 ('bare metal') hypervisors, which run directly on the host's hardware, and type 2 ('hosted') hypervisors, which run as an application on top of a host operating system.

Each operating system (macOS, Windows, Linux, and so on) uses different hypervisors for different things. macOS ships with HyperKit, Windows with Hyper-V and Linux with KVM as their built-in 'type 1' hypervisors, but there are lots of organisations that offer type 1 and type 2 solutions. For example, VirtualBox is a type 2 hypervisor that is popular on both Windows and macOS, while VMware specialises in all different kinds of virtualisation (server, desktop, networking and storage), with different hypervisor offerings for each. The details of how hypervisors work are beyond the scope of this article.
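
As a quick practical aside (an addition, not from the original article): on a Linux host you can check whether the CPU exposes hardware virtualisation and whether the KVM modules are loaded with something like the following.

egrep -c '(vmx|svm)' /proc/cpuinfo   # a non-zero count means VT-x/AMD-V is available
lsmod | grep kvm                     # kvm_intel or kvm_amd listed means KVM is ready to use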


What is application virtualisation?

Application-based virtualisation uses an application (such as Parallels RAS) to effectively stream applications to a virtual environment on another server or host system. Instead of logging into a host computer, users gain access to the application virtually. This separates applications from the operating system and allows the user to run almost any application on other hardware: users don't have to worry about local storage, and multiple applications can be run this way while barely touching the host system.

What is virtual networking?

A key part of virtualisation is allowing virtual machines to talk to the rest of the world. VMs need to be able to talk to other VMs, internally to the host, and externally to things outside of the virtual environment. This is done with a virtual network between the virtual machine(s) and the host OS: a line of communication between the VMs and the hardware in the physical environment. There is a lot more to it than that, but the details are beyond the scope of this particular article.

There are many ways to implement a virtual network; two of the most common are "bridged networking" and "network address translation" (NAT). Using NAT, virtual machines are represented on external networks using the IP address of the host system. Virtual machines in the virtual environment are therefore not visible to the outside, which is why virtual machines behind NAT are considered protected. When a connection is made between an address inside and one outside of the virtual environment, the NAT system forwards the connection to the correct VM.

Bridged networking connects the VMs directly onto the physical network that the host is using. The DHCP server on that network can then assign each VM its own IP address, and each VM is visible on the network. Once connected, the VM is accessible over the network and can access other machines on the network as if it were a physical machine.
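
To make the distinction concrete, here is a minimal LXD-based sketch (our tooling choice, not the article's); natbr0 is a new managed bridge and br0 is assumed to be an existing bridge on the host.

lxc network create natbr0                          # managed bridge: LXD adds a private subnet, DHCP and NAT
lxc launch ubuntu:20.04 vm1 --vm --network natbr0  # vm1 is reachable from the host but hidden from the LAN
lxc config device add vm1 eth1 nic nictype=bridged parent=br0   # bridged: a second NIC placed directly on the LAN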

What are containers?

Containers are standardised units of software that bundle code and all its dependencies into one modular package. While each VM brings its own OS, containers can share the OS of the host machine or bring their own inside separate containers. As a result, they are more lightweight, you can deploy a lot more of them at once, and they are lower maintenance, with everything you need in one place. We typically recommend three types of containers for different use cases: Linux (LXC) system containers, Docker containers, and snaps, each covered below.


Linux containers

Linux containers focus on being system containers: containers that create an environment as close to a VM as possible, without the overhead of running a separate kernel and virtualising the hardware. These are considered more robust because they are closer to being a full machine with all its services in place, and so are used in a lot of traditional operations. Linux containers come from the Linux Containers project (LXC), an open source container platform providing a userspace interface with the tools, templates, libraries and bindings needed to create and manage containers.
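
For example, using LXD as the front end to LXC, launching a system container looks like the following; the container boots a full Ubuntu userspace on the host's kernel.

lxc launch ubuntu:20.04 mysystem   # create and start a system container
lxc exec mysystem -- bash          # get a shell inside it, much like logging into a small VM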


Docker containers

Docker containers are the most popular kind of container among developers for cross-platform deployments in data centres or serverless environments. Docker containers use Docker Engine and numerous other container technologies, including LXC, to create developer-friendly environments that are reproducible regardless of the underlying infrastructure. They are standalone executable packages that include everything needed to run an application: code, runtime, system tools, libraries and settings.
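
A minimal example of that process-container workflow, assuming Docker is already installed:

docker run --rm -it ubuntu:20.04 bash   # start an isolated shell with its own filesystem, removed on exit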


Snaps

Snaps are containerised software packages that focus on being singular application containers. Where LXC can be seen as a machine container and Docker as a process container, snaps can be seen as application containers. Snaps package code and dependencies in a similar way to containers to keep the application content isolated and immutable. They have a writable area that is separated from the rest of the system, are visible to the host via application-defined interfaces, and behave more like traditional Debian apt packages.

Snaps are designed for when you want to deploy to a single machine. Applications are built and packaged as snaps using a tool called snapcraft, which incorporates different container technologies to create a secure and easy-to-update way to package applications for workstations or for fleets of IoT devices. There are a few ways to develop snaps: developers can even configure a snap to run unconfined while they put it together and containerise everything later when pushing to production. Read more about the different ways snaps can be configured in another article.
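
As a small illustration of that workflow (a sketch under our own assumptions, not from the article; the .snap filename is hypothetical):

sudo snap install snapcraft --classic   # the build tool itself ships as a classic snap
snapcraft init                          # scaffolds snap/snapcraft.yaml describing the application
snapcraft                               # builds the .snap package
sudo snap install ./my-app_0.1_amd64.snap --devmode   # install it unconfined for local testing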


Virtual machines vs Containers

Whether you should use a VM or a container depends on your use case. They're both great technologies for separate reasons, not necessarily competitors. Virtual machines allow users to run multiple OSes on the same hardware, and containers allow users to deploy multiple applications on the same OS, on a single machine.

Pros and cons of VMs

The benefits of using a VM include, but are not limited to:

And of course there are several caveats that include, but are also not limited to:

Pros and cons of containers

The benefits of containers include but are not limited to:

And of course there are several caveats that include, but are also not limited to:

Conclusion

Virtualisation can exist anywhere computation is important. It is used to isolate whatever is being done from the host computer and to utilise specific resources more efficiently. There are two major kinds of virtualisation: virtual machines and containers. Each has its pros and cons, and they can be used independently or together, but both aim to provide flexibility and efficiency in deploying and managing applications. In our next article we will talk about some of the topics touched on here in more detail.

26 Feb 2021 11:13am GMT

25 Feb 2021

Planet Ubuntu

Podcast Ubuntu Portugal: Ep 131 – Superstição

We talked about web browsers, as snaps or otherwise, as well as cloud upgrades and chat servers. We also gave some love and attention to a few tips and conversations coming straight from the community, with youtube-dl being the topic of the moment.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, o Senhor Podcast.

The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

25 Feb 2021 10:45pm GMT

Sean Davis: Xubuntu 21.04 Progress Update

Xubuntu 21.04 Progress Update

Today, February 25, 2021 marks Feature Freeze for the Ubuntu 21.04 release schedule. At this point, we stop introducing new features and packages and start to focus on testing and bug fixes. Feature Freeze also marks Debian Import Freeze, which means that packages we have in common with Debian will no longer automatically sync to Xubuntu for the rest of the cycle.

This makes it a great time to update you on the goings-on in Xubuntu 21.04. So far, we have a pretty impressive list of changes, both technical and user-facing.

Xfce 4.16

The highlight of this release, of course, is Xfce 4.16. Having been released in December 2020, Xfce 4.16 includes a wide variety of new features and improvements. Most visibly, Xfce has a new color palette and refreshed icons, based loosely on Adwaita. To see the new icons in action, switch to the Adwaita icon theme in the Appearance settings.

Xfce 4.16's new visual identity: a consistent set of icons based on a shared palette and design principles.

For a complete overview of the changes in Xfce 4.16, please check out the feature tour and changelog.

Ayatana Indicators

We've switched to the Ayatana indicator stack with the Xfce Indicator Plugin and LightDM GTK+ Greeter. Whereas the previous Application Indicator stack exists primarily in Ubuntu, Ayatana Indicators are cross-platform and available on Debian and elsewhere. This change may affect your indicator usage, as not all existing Application Indicators have been ported to Ayatana.

New Package Additions

Xubuntu 21.04 has added Hexchat (#12) and Synaptic to the desktop seed. adwaita-icon-theme-full is now included to make the Adwaita icon theme fully functional; previously, a large number of its icons were missing. Finally, mlocate has been replaced with plocate, which should result in even faster lookups with Catfish.


Hexchat and Synaptic have been welcomed into the Xubuntu packageset.

Xubuntu Documentation

It's been years since we last updated the included Xubuntu Documentation, and the latest packaged version doesn't even include the rewrite we completed last cycle. For now, here's what's been updated since 18.04.

Help Needed

For the latest changes to the Xubuntu Documentation, we're looking for help! DocBook is not the most straightforward format, and we have a lot of changes in Google Drive that need to make their way to the docs-refresh branch on the Xubuntu GitHub. If you'd like to help out, please join us on Freenode at #xubuntu-devel.

Settings Changes

General

Panel

By replacing the separators, there's no longer a mouse gap between plugins, and the clock is no longer smashed against the side of the screen.

Desktop

File Manager

Thunar with the new Pathbar layout and stable window icon.

Menu & Settings Manager

Xfce Settings Manager with the new Sound entry and dropdown search.

Keyboard Shortcuts

And More

With two months to go, there's still a lot of work to be done and plenty of changes coming from mainline Ubuntu as well. If you'd like to join in on the fun, check out the Get Involved section of the Xubuntu website.

25 Feb 2021 12:31pm GMT

Ubuntu Blog: What is MEC? The telco edge


MEC, as ETSI defines it, stands for Multi-access Edge Computing and is sometimes referred to as mobile edge computing. MEC is a solution that gives content providers and software developers cloud-computing capabilities close to the end users. This micro cloud, deployed at the edge of mobile operators' networks, has ultra-low latency and high bandwidth, which enables new types of applications and business use cases. On top of that, an application running on MEC can have real-time access to a subset of radio network information that can improve the overall experience.

MEC Use Cases


MEC opens up a completely new ecosystem in which a mobile network operator becomes a cloud provider, just like the big hyperscalers. Its unique capabilities, enabled by access to telecom-specific data points and a location close to the user, give mobile network operators (MNOs) a huge advantage. From a workload perspective, we can distinguish four main groups of use cases.

Services based on user location

Services based on user location utilise the location capabilities of the mobile network, provided by functions like the LMF (Location Management Function). Location capabilities become more precise with every standard release, and 5G networks aim for sub-metre accuracy at sub-100 ms intervals. This allows an application to track location even for fast-moving objects like drones or connected vehicles. Simpler use cases exist too: for example, if you want to build a user engagement app for a football stadium, you can now coordinate your team's fans for more immersive events. And if you need to control the movement of a swarm of drones, you can just deploy your command and control (C&C) server at the edge.

IoT services

IoT services are another big group. The number of connected devices grows exponentially every year, and they produce unimaginable amounts of data, yet it makes no sense to transfer each and every data point to the public cloud. To save bandwidth, ML models running at the edge can aggregate the data and perform simple calculations to help with decision making. The same goes for IoT software management, security updates and device fleet control. All of these use cases make much more economic sense if they are deployed on small clouds at the edge of the network. If that is something that interests you, you can find more details on making a secure and manageable IoT device with Ubuntu Core here.

CDN and data caching

CDN and data caching use the edge to store content as close as possible to the requesting client machine, thereby reducing latency and improving page load times, video quality and gaming experience. The main benefit of a cloud over a legacy CDN is that you can analyse the traffic locally and make better decisions about which content to store, and in what type of memory. A great open source way to manage your edge storage and serve it in an S3-compatible way is Ceph.

GPU intensive services

GPU-intensive services such as AI/ML, AR/VR and video analytics all rely on computing power being available to the mobile user at low latency. Any device can benefit from a powerful GPU located on an edge micro cloud, making access to ML algorithms, APIs for video analytics, or augmented-reality-based services much easier. These capabilities are also revolutionising the mobile gaming industry, giving gamers a more immersive experience and latency low enough to compete at a professional level. Canonical partnered with NVIDIA to create a dedicated GPU-accelerated edge server.

MEC infrastructure requirements

In order to have MEC you need some infrastructure at the edge, including compute, storage, networking and accelerators. In telecom cases, technologies like DPDK, SR-IOV or NUMA awareness are very important, as they allow us to achieve the required performance of the whole solution. One bare metal machine is not enough, and two servers are just a single one and a spare. With three servers, which would be a minimum for an edge site, we have a cloud, albeit a small one. At Canonical these are called micro clouds: a new class of compute for the edge, made of resilient, self-healing and opinionated technologies reusing proven cloud primitives.

To choose a proper micro cloud setup, you need to know the workload that will be running on it. As the business need is to expose edge sites to a huge market of software developers, it needs to be something they know and like; that's why you don't deploy OpenStack on the edge site. What you need is a Kubernetes cluster. The best-case scenario for your operations team is to have the same tools to manage, deploy and upgrade an edge site as the tools they have in the main data centre.

Typical edge site design

Canonical provides all the elements necessary to build an edge stack, top to bottom. There is MAAS to manage bare metal hardware, LXD clustering to provide an abstraction layer of virtualisation, Ceph for distributed storage, and MicroK8s to provide a Kubernetes cluster. All of these projects are modular and open source; you can set them up yourself, or reach out to discuss more supported options.
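
On a single test machine, the upper layers of that stack can be tried out with a few snap commands. This is just a sketch for experimentation (it assumes snapd is available), not a production edge deployment.

sudo snap install lxd && sudo lxd init --auto    # virtualisation layer with default settings
sudo snap install microk8s --classic             # single-node Kubernetes
sudo microk8s status --wait-ready                # wait for the cluster to come up
sudo microk8s enable dns storage                 # basic add-ons for workloads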


Obviously, the edge is not a single site. Managing many micro clouds efficiently is a crucial task facing mobile operators. I would suggest using an orchestration solution that communicates directly with your MAAS instances, without any intermediate "big MAAS" as a middle man. The only thing you need is a service that returns each site's name, location and network address. To simplify edge management even further, all open source components managed by Canonical use semantic channels. You might be familiar with the concept already, as you use it when you install software on your local desktop using:

sudo snap install vlc --channel=3.0/stable/fix-playback

A channel is a combination of <track>/<risk>/<branch>, plus epochs, which together form a communication protocol between software developers and users. You will find the same concept used in charms, snaps and LTS Docker images.
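
You can inspect the channel map of any snap with snap info; the channels section lists each track/risk combination and the revision it currently points to.

snap info vlc    # look at the "channels:" section, e.g. latest/stable, latest/candidate, 3.0/stable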

You now know what MEC is and what its underlying infrastructure looks like. I encourage you to try it out yourself and share your story with us on Twitter. This is a great skill to have, as all major analyst firms now agree that edge micro clouds will take over from the public cloud and will be the next big environment for which all of us will write software in the future.

25 Feb 2021 7:32am GMT

23 Feb 2021

Planet Ubuntu

David Tomaschik: Is Reusing an Old Mac Mini Worth It?

I was cleaning up some old electronics (I'm a bit of a pack rat) and came across a Mac Mini I've owned since 2009. I was curious whether it still worked and whether it could get useful work done. This turned out to be more than a 5 minute experiment, so I thought I'd write it up here as it was just an interesting little test.

The Hardware

The particular model I have is known as "Macmini2,1" or "MB139*/A" or "Mid 2007", with the following specs:

The Software

The last version of Mac OS that was supported is Mac OS X 10.7 "Lion", which has been unsupported since 2014. Since I'm a Linux guy anyway, I figured I'd see about installing Linux on this. Unfortunately, according to the Debian wiki, this device won't boot from USB, and I don't have any blank optical media to burn to. This was the first point where I nearly decided this wasn't worth my time, but I decided to push on.

Linux is pretty good about booting on any hardware, even if it's not the hardware you installed on, as kernel module drivers are loaded based on present hardware. I decided to try installing to a disk and then swapping disks and seeing if the Mac Mini would boot. The EFI on the Mac Mini supports BIOS emulation, and that seemed the more likely to work out of the box.

I plugged a spare SSD into my SATA dock and then used a virtual machine with a raw disk to install Debian testing on the SSD. I then used the excellent iFixit teardown and my iFixit toolkit to open the Mac Mini and swap out the drive. I point to the teardown because opening a Mac Mini is neither obvious nor trivial.

Booting

I plugged in the Mac Mini along with a network cable and powered it on, hoping to see it just appear on the network. I gave it adequate time to boot and did a port scan to find it - and got nothing. Thinking it might have been a first boot issue, I rebooted the Mac Mini, waited even longer, and checked again - and once again, couldn't find it. I checked the logs on my DHCP server, and there was nothing relevant there. This is the second point at which I considered quitting on this.

I decided to see what error I might have been getting, or at least how far it would get in booting, so I dug out a DVI cable and hooked it up to a monitor. Powering it on again, I got 30 seconds of grey screen from the EFI (due to the BIOS boot delay mentioned in the Debian wiki page), and then - Debian booted normally.

Okay, maybe networking was just broken. I did another port scan of my lab network - and there it was. Somehow it had just started working. I felt so confused at this point. I began to wonder if connecting a monitor had been the fix somehow. A few Google searches later, I had confirmed my suspicion - this Mac Mini model (and several others) will not boot unless it detects an attached monitor. There's a workaround involving a resistor between two of the analog pins (or a commercial DVI emulator), but for the moment, I just kept the monitor attached.

At this point, I had the Mac Mini running Debian Testing and everything seemed to be more or less working. But would it be worth it in terms of computing power and electrical power?

Benchmarking & Comparison

I decided to run just a handful of CPU benchmarks. I wasn't looking to tweak this system to find the maximal performance, just to get an idea of where it stands as a system.

The first run was a 7-zip benchmark. The Mac Mini managed about 3700 MB/s for compression. (Average across all dictionary sizes.) My laptop with a Core i5-5200U did 6345MB/s, and my Ryzen 7 3700X in my desktop managed a whopping 57,250MB/s!

With OpenSSL, I checked both SHA-512 and AES-128-CBC mode. For SHA-512 computations, the Mac Mini managed about 200 MB/s, my laptop 470 MB/s, and my desktop 903 MB/s. For AES-128-CBC, the Mac Mini managed 89 MB/s, my laptop 594 MB/s, and my desktop a whopping 1.6 GB/s! This result is obviously heavily skewed by the AES-NI instructions present on my laptop and desktop, but not the Mac Mini. (These are all single-thread results.)

Finally, I ran the POV-Ray 3.7 benchmark. The Mac Mini took 952s, my laptop 452s, and my desktop just 54s.
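
For reference, the invocations for these benchmarks are roughly the following (approximations, not necessarily the exact commands I ran):

7z b                              # 7-Zip's built-in benchmark
openssl speed sha512              # SHA-512 throughput
openssl speed -evp aes-128-cbc    # AES-128-CBC, using AES-NI where the CPU has it
povray -benchmark                 # POV-Ray 3.7's standard benchmark scene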

I began to wonder how all these results compared to something like a Raspberry Pi, so I pulled out a Pi 3B+ and a Pi 4B and ran the same benchmarks again.

Device               7-Zip        SHA-512    AES-128     POV-Ray 3.7
Mac Mini w/T7200     3713 MB/s    193 MB/s   89 MB/s     952s
Laptop (i5-5200U)    6345 MB/s    470 MB/s   593 MB/s    452s
Desktop (R7-3700X)   57250 MB/s   903 MB/s   1591 MB/s   54s
Raspberry Pi 3B+     1962 MB/s    31 MB/s    47 MB/s     1897s
Raspberry Pi 4B      3582 MB/s    204 MB/s   91 MB/s     597s

As can be seen, in most of the tests the Mac Mini with its Core 2 Duo trades blows with the Raspberry Pi 4B, and it gets handily beaten in the POV-Ray 3.7 test. Below is a chart of normalized test results: the slowest device is set to 1.0 (always the Pi 3B+), and the other values show how many times faster each system is.

Normalized Relative Performance

During all of these tests, I had the Mac Mini plugged into a Kill-A-Watt Meter to measure the power consumption. Idling, it's around 20 watts. Under one of these load tests, it reaches about 45-49 watts. Given that the Raspberry Pi 4B only uses around 5W under full load, the Pi 4B absolutely destroys this Mac Mini in performance-per-watt. (Note, again, this is an old Mac Mini - it's no surprise that it's not an even comparison.)

Conclusion

Given the lack of expandability, the mediocre baseline performance, and the very poor performance per watt, I can't see using this for much, if anything. Running it 24/7 for a home server doesn't offer much over a Raspberry Pi 4B, and the I/O is only slightly better. At this point, it's probably headed for the electronics recycling center.

23 Feb 2021 8:00am GMT

22 Feb 2021

Planet Ubuntu

The Fridge: Ubuntu Weekly Newsletter Issue 671

Welcome to the Ubuntu Weekly Newsletter, Issue 671 for the week of February 14 - 20, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

22 Feb 2021 10:00pm GMT

21 Feb 2021

Planet Ubuntu

Dmitry Shachnev: ReText turns 10 years

Exactly ten years ago, in February 2011, the first commit was made in the ReText git repository. It was just a single 364-line Python file back then (now the project has more than 6000 lines of Python code).

Since 2011, the editor migrated from SourceForge to GitHub, gained a lot of new features, and - most importantly - now there is an active community around it, which includes both long-time contributors and newcomers who create their first issues or pull requests. I don't always have enough time to reply to issues or implement new features myself, but the community members help me with this.

Earlier this month, I made a new release (7.2), which adds a side panel with directory tree (contributed by Xavier Gouchet), option to fully highlight wrapped lines (contributed by nihillum), ability to search in the preview mode and much more - see the release page on GitHub.

Side panel in ReText

Also, a new version of the PyMarkups module was released, which contains all the code for processing various markup languages. It now supports markdown-extensions.yaml files, which allow specifying complex extension options, and adds initial support for MathJax 3.

Also check out the release notes for 7.1, which was not announced on this blog.

Future plans include making at least one more release this year and adding support for Qt 6. Qt 5 support will last for at least one more year.

21 Feb 2021 6:30pm GMT

20 Feb 2021

Planet Ubuntu

Stephen Michael Kellat: Late February 2021 Miscellany

In no particular order:

20 Feb 2021 7:47pm GMT

18 Feb 2021

Planet Ubuntu

Podcast Ubuntu Portugal: Ep 130 – FOSDEM 2021

Revisiting Hacktoberfest, we took another look at Interruptor's project Onde é que pára a Cultura and its latest developments. We also reflected on FOSDEM 2021, held this year in a completely different format from usual, but as interesting and relevant as ever.

You know the drill: listen, subscribe and share!

Support

You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, o Senhor Podcast.

The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

18 Feb 2021 10:45pm GMT

Julian Andres Klode: APT 2.2 released

APT 2.2.0 marks the freeze of the 2.1 development series and the start of the 2.2 stable series.

Let's have a look at what changed compared to 2.0. Many of you who run Debian testing or unstable, or Ubuntu groovy or hirsute, will already have seen most of those changes.

New features

Other behavioral changes

Performance improvements

Bug fixes

Security fixes

(all of which have been backported to all stable series, back all the way to 1.0.9.8.* series in jessie eLTS)

Incompatibilities

Deprecations

18 Feb 2021 8:09pm GMT

15 Feb 2021

Planet Ubuntu

The Fridge: Ubuntu Weekly Newsletter Issue 670

Welcome to the Ubuntu Weekly Newsletter, Issue 670 for the week of February 7 - 13, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

15 Feb 2021 9:34pm GMT