15 Jun 2021

feedPlanet Ubuntu

Ubuntu Blog: How to manage a 24×7 private cloud with one engineer

Why do we need digital transformation? Really!

In the last several years, we have witnessed the creation of many technologies, starting with the cloud and going further to machine learning, artificial intelligence, IoT, big data, robotics, automation and much more. The more the tech evolves, the more organisations strive to adopt these technologies in pursuit of digital transformation, disrupting industries along their journey, all for the benefit of better serving their consumers.

With every technology having its own requirements, costs and benefits, the one aspect common to any technology you decide to invest in is this: it is all about achieving a business goal that will help your organisation better position itself in the market. You might be lucky enough to be leading your field, or looking to better serve your customers, or simply trying to keep up with tough competition. Whatever your motive is, the aim will always be to realise a business goal out of your investment.

What are the overlooked costs?

As much as organisations benefit from digital transformation, everything comes at a cost. By cost here, I do not mean the financial cost of acquiring the technology, but rather the friction that opposes its successful implementation and smooth operation. This leads to the most important overlooked question when considering a digital transformation: Who cleans the windows after raising a shiny artifact? The massive acceleration in evolving technology has made sustainable operations even more challenging to achieve, and has added post-deployment technical debt that organisations risk becoming immersed in, instead of keeping their focus on activities that drive impact.

"When we pick up one end of a stick, we pick up the other end", Stephen Covey said in his famous bestseller The 7 Habits of Highly Effective People. The truth is that the rule applies to every aspect of life, but is it debatable in technology adoption? Is it possible to achieve a successful digital transformation journey without having to pay its costs? Is it possible to remove the friction opposing your journey, and avoid increasing the complexity of your operations, growing your team exponentially to keep the lights on while spending months on finding the right talent?

How to pick up only one end of the stick, your goal?

We all want to pick up the gain and avoid the cost whenever possible; it's human nature! Think of your car insurance for a moment: what makes you want to pay the insurance company for their service (apart from it being mandatory)? They simply pick up the bill, and may even provide you with a temporary ride until your car is ready to hit the streets again. They might even collect the car and hand it back at your doorstep after it's fixed. The important thing is that you don't worry much about what's happening behind the scenes, as you have more important things in life to take care of than spending time, effort and money getting your car fixed. You bought the car to get you around conveniently; fixing it was never one of your goals, just a consequence of owning it. You simply let the insurance company take care of this, and hold the other end of the stick for you.

Building a private cloud can be complicated, but operating a private cloud efficiently is definitely challenging. Take OpenStack as an example: it is the preferred choice for organisations building private and hybrid/multi-clouds, and the most widely used open source cloud software. Its post-deployment operations are known to be complex, as they require expertise across the different layers of the stack, in addition to regular firmware upgrades that need proper planning, backup and fallback strategies to be carried out safely. What if you also need to deploy Kubernetes on top of OpenStack, so you can focus on developing your cloud-native applications? Luckily, there's a similar "car insurance" story when building your private cloud with OpenStack or deploying Kubernetes, but let's first wrap up what you're looking for at that stage.


Managed IT services can provide you with the opportunity to offload many commodity activities and benefit from the knowledge and experience brought by the managed service provider's team of cloud experts, and help you maintain your focus on driving your business.

Let's Talk Numbers!

It's time to think about efficiency, and the financial cost of outsourcing to a managed service provider compared to hiring a dedicated team. So first, let us break down the operational costs of a self-managed cloud:

What if you want to start your cloud small, and scale as the business grows? Would you want to add this massive overhead to your expansion plans rather than allocate it to core activities that will support your successful journey?

Now Really, How can I Manage a 24×7 Private Cloud with one Engineer?

Let's get this straight, it is not possible to operate a private cloud with only one engineer. This is due to the architecture of private cloud solutions having many integrated components, each requiring a unique technical skillset and expertise. Although this is true, you can still operate your private cloud at the cost of one engineer by partnering with the right managed service provider (MSP).

An MSP can help you significantly minimise costs and accelerate the adoption of the cloud while maintaining your focus on what you do best: driving your business. This is possible because an MSP can leverage the same pool of specialists to serve different customers. This way, multiple organisations benefit from having their 24×7 operations run by experienced professionals at a much lower cost compared to hiring the same engineers exclusively.

If we consider Canonical's offering for cost comparison, a managed OpenStack service is provided at $5,475 per host annually. The minimum number of nodes to build an OpenStack cloud is 12, making the total annual cost of operating your cloud less than $66,000 ($65,700), which is $6,000 less than the minimum annual income of one full-time cloud operations engineer. This allows your team of IT specialists to focus more on innovation and strategy rather than keeping the lights on.
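As a quick back-of-the-envelope check of those figures, using only the per-host price and node count quoted above:

per_host_per_year=5475   # USD per host per year, per the figure above
min_nodes=12             # minimum OpenStack cloud size, per the figure above
echo "$(( per_host_per_year * min_nodes )) USD per year"   # prints 65700, i.e. just under $66,000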

Want a fully managed private cloud?

Kubernetes or OpenStack? On public clouds, on your premises or hosted? Tell us what your preference is!

If you want to learn more, watch the "Hosted private cloud infrastructure: A Cost Analysis" webinar.

15 Jun 2021 4:51pm GMT

14 Jun 2021

feedPlanet Ubuntu

The Fridge: Ubuntu Weekly Newsletter Issue 687

Welcome to the Ubuntu Weekly Newsletter, Issue 687 for the week of June 6 - 12, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

14 Jun 2021 10:30pm GMT

Ubuntu Blog: Introducing Ubuntu Pro for Google Cloud

June 14th, 2021: Canonical and Google Cloud today announce Ubuntu Pro on Google Cloud, a new Ubuntu offering available to all Google Cloud users. Ubuntu Pro on Google Cloud allows instant access to security patching covering thousands of open source applications for up to 10 years and critical compliance features essential to running workloads in regulated environments.

Google Cloud has long partnered with Canonical to offer innovative developer solutions, from desktop to Kubernetes and AI/ML. In the spirit of this collaboration, Google Cloud and Canonical have created a more secure, hardened, and cost-effective devops environment: Ubuntu Pro on Google Cloud, for all enterprises to accelerate their cloud adoption.

"Enterprise customers are increasingly adopting Google Cloud to run their core business-critical and customer-facing applications," said June Yang, VP and GM, Compute, Google Cloud. "The availability of Ubuntu Pro on Google Cloud will offer our enterprise customers the additional security and compliance services needed for their mission-critical workloads."

Ubuntu Pro on Google Cloud is a premium version of Ubuntu focused on enterprise and production use. It provides developers and administrators with a secured devops environment, addressing security, one of the fundamental pillars of any IT system. It is based on standard Ubuntu components, but comes with a set of additional services activated out of the box, including:

Ubuntu Pro for Google Cloud at Work

Gojek has evolved from a ride-hailing company to a technology company offering a suite of more than 20 services across payment, e-commerce, and transportation. Through their applications, they're now serving millions of users across Southeast Asia.

"We needed more time to comprehensively test and migrate our Ubuntu 16.04 LTS workloads to Ubuntu 20.04 LTS, which would mean stretching beyond the standard maintenance timelines for Ubuntu 16.04 LTS. With Ubuntu Pro on Google Cloud, we now can postpone this, and in moving our 16.04 workloads to Ubuntu Pro, we benefit from its live kernel patching and improved security coverage for our key open source components," said Kartik Gupta, Engineering Manager for CI/CD & FinOps at Gojek

To give customers better visibility of costs and savings, Ubuntu Pro for Google Cloud embeds a transparent and innovative approach to pricing. For instance, Ubuntu Pro will be 3-4.5% of your average computing cost, meaning the more computing resources you consume, the smaller the percentage you pay for Ubuntu Pro. Customers can purchase Ubuntu Pro directly through the GCP Console or Google Cloud Marketplace for a streamlined procurement process, enabling quicker access to these commercial features offered by Canonical.
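As a purely illustrative calculation (the monthly compute spend below is a made-up figure; the 3-4.5% range is from the announcement above):

compute_spend=10000   # hypothetical USD/month of Google Cloud compute
awk -v s="$compute_spend" 'BEGIN { printf "Ubuntu Pro premium: $%.0f-$%.0f per month\n", s*0.03, s*0.045 }'
# prints: Ubuntu Pro premium: $300-$450 per month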

"Since 2014, Canonical has been providing Ubuntu for Google Cloud customers. We continuously expand security coverage, great operational efficiency, and native compatibility with Google Cloud features," said Alex Gallagher, VP of Cloud GTM at Canonical. "I'm excited to witness the collaboration between Canonical and Google Cloud to make Ubuntu Pro available. Ubuntu Pro on Google Cloud sets a new standard for security of operating systems and facilitates your migration to Google Cloud."

Getting started

Getting started with Ubuntu Pro on Google Cloud is simple. You can now purchase these premium images directly from Google Cloud by selecting Ubuntu Pro as the operating system straight from the Google Cloud Console.

To learn more about Ubuntu Pro on Google Cloud, please visit the documentation page and read the announcement from Google.

14 Jun 2021 4:30pm GMT

Alan Pope: Adrift

Over the weekend I participated in FOSS Talk Live. Before The Event this would have been an in-person shindig at a pub in London. A bunch of (mostly) UK-based podcasters get together and record live versions of their shows in front of a "studio audience". It's mostly an opportunity for a bunch of us middle-aged farts who speak into microphones to get together, have a few beers and chat. Due to The Event, this year it was a virtual affair, done online via YouTube.

14 Jun 2021 11:00am GMT

13 Jun 2021

feedPlanet Ubuntu

Daniel Holbach: GitOps Days 2021 Tracklist

Earlier this week it was time for GitOps Days again. The third time now and the event has grown quite a bit since we started. Born out of the desire to bring GitOps practitioners together during pandemic times initially, this time we had a proper CFP and the outcome was just great: lots of participation from a very diverse crowd of experts - we had panels, case studies, technical deep dives, comparisons of different solutions and more.

13 Jun 2021 6:06am GMT

11 Jun 2021

feedPlanet Ubuntu

Stéphane Graber: Inexpensive highly available LXD cluster: 6 months later

Over the past few posts, I covered the hardware I picked up to set up a small LXD cluster and get it all set up at a co-location site near home. I've then gone silent for about 6 months, not because anything went wrong but just because of not quite finding the time to come back and complete this story!

So let's pick things up where I left them with the last post and cover the last few bits of the network setup and then go over what happened over the past 6 months.

Routing in a HA environment

You may recall that the 3 servers are each connected to a top-of-rack switch (bonded dual-gigabit) as well as to each other (bonded dual-10-gigabit). The netplan config in the previous post would allow each of the servers to talk to the others directly and establish a few VLANs on the link to the top of the rack switch.

Those are for:

Simply put, the servers have their main global address and default gateway on INFRA-HOSTS, the BMCs and switch have their management addresses in INFRA-BMC, INFRA-UPLINK is consumed by OVN and WAN-HIVE is how I access the internet.

In my setup, I then run three containers, one on each server, each of which gets direct access to all those VLANs and acts as a router using FRR. FRR is configured to establish BGP sessions with both of my provider's core routers, getting routing to the internet and announcing my IPv4 and IPv6 subnets that way.

LXD output showing the 3 FRR routers
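For readers who haven't used FRR, the BGP half of one of those router containers looks roughly like the sketch below; the ASNs, neighbour addresses and prefix are placeholders rather than the author's actual configuration (and bgpd must be enabled in /etc/frr/daemons):

cat <<'EOF' | sudo tee -a /etc/frr/frr.conf
router bgp 64512
 neighbor 198.51.100.1 remote-as 64500
 neighbor 198.51.100.2 remote-as 64500
 !
 address-family ipv4 unicast
  network 203.0.113.0/24
 exit-address-family
EOF
sudo systemctl restart frr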

On the internal side of things, I'm using VRRP to provide a virtual router internally. Typically this means that frr01 is the default gateway for all egress traffic while ingress traffic is somewhat spread across all 3 thanks to them having the same BGP weight (so my provider's routers distribute the connections across all active peers).

With that in place, so long as one of the FRR instances is running, connectivity is maintained. This makes doing maintenance quite easy as there is effectively no SPOF.

Enter LXD networks with OVN

Now for where things get a bit trickier. As I'm using OVN to provide virtual networks inside of LXD, each of those networks will typically need some amount of public addressing. For IPv6, I don't do NAT so each of my networks get a public /64 subnet. For IPv4, I have a limited number of those, so I just assign them one by one (/32) directly to specific instances.

Whenever such a network is created, it will grab an IPv4 and IPv6 address from the subnet configured on INFRA-UPLINK. That part is all good and the OVN gateway becomes immediately reachable.
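In LXD terms, that uplink relationship is created along these lines; the network names, parent interface and address ranges below are placeholders, not the author's values:

# The uplink network provides the pool that OVN gateways draw their addresses from.
lxc network create UPLINK --type=physical parent=eno1 \
    ipv4.gateway=198.51.100.1/24 \
    ipv4.ovn.ranges=198.51.100.10-198.51.100.50

# Each OVN network created against that uplink grabs one address for its gateway.
lxc network create my-ovn --type=ovn network=UPLINK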

The issue is with the public IPv6 subnet used by each network and with any additional addresses (IPv4 or IPv6) which are routed directly to its instances. For that to work, I need my routers to send the traffic headed for those subnets to the correct OVN gateway.

But how do you do that? Well, there are pretty much three options here:

Naturally I went with the last one. At the time, there was no way to do that through LXD, so I made my own by writing lxd-bgp. This is a pretty simple piece of software which uses the LXD API to inspect its networks, determine all OVN networks tied to a particular uplink network (INFRA-UPLINK in my case) and then inspect all instances running on that network.

It then sends announcements both for the subnets backing each OVN network as well as for specific routes/addresses that are routed on top of that to specific instances running on the local system.

The result is that when an instance with a static IPv4 and IPv6 starts, the lxd-bgp instance running on that particular system will send an announcement for those addresses and traffic will start flowing.

Now deploy the same service on 3 servers, put them into 3 different LXD networks and set the exact same static IPv4 and IPv6 addresses on them, and you now have a working anycast service. When one of the containers or its host goes down for some reason, that route announcement goes away and the traffic now heads to the remaining instances. That does a good job of simple load-balancing and provides pretty solid service availability!

LXD output of my 3 DNS servers (backing ns1.stgraber.org) and using anycast
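A rough sketch of that anycast pattern, assuming the OVN NIC option ipv4.routes.external and placeholder instance, member and network names (the BGP announcements themselves come from lxd-bgp):

for i in 1 2 3; do
    lxc init ubuntu:20.04 "dns0${i}" --target "server0${i}"
    lxc config device add "dns0${i}" eth0 nic \
        network="ovn-net0${i}" \
        ipv4.routes.external=203.0.113.53/32   # the same /32 routed to all three
    lxc start "dns0${i}"
done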

The past 6 months

Now that we've covered the network setup I'm running, let's spend a bit of time going over what happened over the past 6 months!

The servers and switch installed in the cabinet

In short, well, not a whole lot. Things have pretty much just been working. The servers were installed in the datacenter on the 21st of December. I've then been busy migrating services from my old server at OVH over to the new cluster, finalizing that migration at the end of April.

I've gotten into the habit of doing a full reboot of the entire cluster every week and developed a bit of tooling for this called lxd-evacuate. This makes it easy to relocate any instance which isn't already highly available, emptying a specific machine and then letting me reboot it. By and large this has been working great and it's always nice to have confidence that should something happen, you know all the machines will boot up properly!
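lxd-evacuate itself isn't shown here, but the manual pattern it automates looks roughly like this, with placeholder instance and member names (newer LXD releases have since gained a built-in lxc cluster evacuate):

# Empty server01 of everything running there, after which it is safe to reboot it.
lxc list --format csv -c n,L | awk -F, '$2 == "server01" {print $1}' |
while read -r instance; do
    lxc stop "$instance"
    lxc move "$instance" --target server02
    lxc start "$instance"
done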

These days, I'm running 63 instances across 9 projects and a dozen networks. I spent a bit of time building up a Grafana dashboard which tracks and alerts on my network consumption (WAN port, uplink to servers and mesh), monitors the health of my servers (fan speeds, temperature, …), tracks Ceph consumption and performance, monitors the CPU, RAM and load of each of the servers and also tracks performance of my top services (NSD, unbound and HAProxy).

LXD also rolled out support for network ACLs somewhat recently, allowing for proper stateful firewalling directly through LXD and implemented in OVN. It took some time to setup all those ACLs for all instances and networks but that's now all done and makes me feel a whole lot better about service security!

What's next

On the LXD front, I'm excited about a few things we're doing over the next few months which will make environments like mine just that much nicer:

This will let me deprecate some of those side projects I had to start as part of this work, will reduce the amount of manual labor involved in setting up all the DNS records and will give me much better insight on what's consuming resources on the cluster.

I'm also in the process of securing my own ASN and address space through ARIN, mostly because that seemed like a fun thing to do and will give me a tiny bit more flexibility too (not to mention let me consolidate a whole bunch of subnets). So soon enough, I expect to have to deal with quite a bit of re-addressing, but I'm sure it will be a fun and interesting experience!

11 Jun 2021 10:07pm GMT

Kubuntu General News: Plasma 5.22 available for Hirsute Hippo 21.04 in backports PPA

We are pleased to announce that Plasma 5.22.0 is now available in our backports PPA for Kubuntu 21.04 Hirsute Hippo.

The release announcement detailing the new features and improvements in Plasma 5.22 can be found here.

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt full-upgrade

IMPORTANT

Please note that more bugfix releases are scheduled by KDE for Plasma 5.22, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more rounds of stabilisation/bugfixes 'baked in' may find it advisable to stay with Plasma 5.21 as included in the original 21.04 (Hirsute) release.

The Kubuntu Backports PPA for 21.04 also currently contains newer versions of KDE Frameworks, Applications, and other KDE software. The PPA will also continue to receive updates of KDE packages other than Plasma.

Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], IRC [3], and/or file a bug against our PPA packages [4].

1. KDE bugtracker: https://bugs.kde.org
2. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
3. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.libera.chat
4. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

11 Jun 2021 7:19pm GMT

Colin Watson: SSH quoting

A while back there was a thread on one of our company mailing lists about SSH quoting, and I posted a long answer to it. Since then a few people have asked me questions that caused me to reach for it, so I thought it might be helpful if I were to anonymize the original question and post my answer here.

The question was why a sequence of commands involving ssh and fiddly quoting produced the output they did. The first example was this:

$ ssh user@machine.local bash -lc "cd /tmp;pwd"
/home/user

Oh hi, my dubious life choices have been such that this is my specialist subject!

This is because SSH command-line parsing is not quite what you expect.

First, recall that your local shell will apply its usual parsing, and the actual OS-level execution of ssh will be like this:

[0]: ssh
[1]: user@machine.local
[2]: bash
[3]: -lc
[4]: cd /tmp;pwd

Now, the SSH wire protocol only takes a single string as the command, with the expectation that it should be passed to a shell by the remote end. The OpenSSH client deals with this by taking all its arguments after things like options and the target, which in this case are:

[0]: bash
[1]: -lc
[2]: cd /tmp;pwd

It then joins them with a single space:

bash -lc cd /tmp;pwd

This is passed as a string to the server, which then passes that entire string to a shell for evaluation, so as if you'd typed this directly on the server:

sh -c 'bash -lc cd /tmp;pwd'

The shell then parses this as two commands:

bash -lc cd /tmp
pwd

The directory change thus happens in a subshell (actually it doesn't quite even do that, because bash -lc cd /tmp in fact ends up just calling cd because of the way bash -c parses multiple arguments), and then that subshell exits, then pwd is called in the outer shell which still has the original working directory.

The second example was this:

$ ssh user@machine.local bash -lc "pwd;cd /tmp;pwd"
/home/user
/tmp

Following the logic above, this ends up as if you'd run this on the server:

sh -c 'bash -lc pwd; cd /tmp; pwd'

The third example was this:

$ ssh user@machine.local bash -lc "cd /tmp;cd /tmp;pwd"
/tmp

And this is as if you'd run:

sh -c 'bash -lc cd /tmp; cd /tmp; pwd'

Now, I wouldn't have implemented the SSH client this way, because I agree that it's confusing. But /usr/bin/ssh is used as a transport for other things so much that changing its behaviour now would be enormously disruptive, so it's probably impossible to fix. (I have occasionally agitated on openssh-unix-dev@ for at least documenting this better, but haven't made much headway yet; I need to get round to preparing a documentation patch.) Once you know about it you can use the proper quoting, though. In this case that would simply be:

ssh user@machine.local 'cd /tmp;pwd'

Or if you do need to specifically invoke bash -l there for some reason (I'm assuming that the original example was reduced from something more complicated), then you can minimise your confusion by passing the whole thing as a single string in the form you want the remote sh -c to see, in a way that ensures that the quotes are preserved and sent to the server rather than being removed by your local shell:

ssh user@machine.local 'bash -lc "cd /tmp;pwd"'

Shell parsing is hard.
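One footnote, not from the original thread: if the remote command is being assembled in a script, you can let the local bash quote each word for you with printf %q, so the single string the server hands to the remote shell splits back into the arguments you intended:

remote_cmd=$(printf '%q ' bash -lc 'cd /tmp;pwd')
ssh user@machine.local "$remote_cmd"
# the remote shell splits this back into bash, -lc and 'cd /tmp;pwd', and prints /tmp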

11 Jun 2021 10:22am GMT

10 Jun 2021

feedPlanet Ubuntu

Podcast Ubuntu Portugal: Ep 146 – Caixote

In this episode we talked about the community and the return of in-person meetups after the pandemic, news from the international Ubuntu community and the revitalising of relations between Canonical and the various LoCos spread around the world, and we also gave a round-up of Ubuntu news.

You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, o Senhor Podcast.

The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae and is licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

10 Jun 2021 9:45pm GMT

Ubuntu Podcast from the UK LoCo: S14E14 – Letter Copy Magic

This week we got a portable touch screen monitor. We discuss our favourite Linux apps, bring you a command line lurve and go over all your wonderful feedback.

It's Season 14 Episode 14 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week's show:

rpg ~/Scripts

  spider[2][xxxx]@~/Scripts

    hero[1][xxx-] -11hp
  spider[2][xxxx]  dodged!
    hero[1][x---] -13hp
  spider[2][xxxx]  dodged!
    hero[1][----] -12hp

    hero[1][----][----]@~/Scripts 💀

That's all for this week! If there's a topic you'd like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

10 Jun 2021 2:00pm GMT

09 Jun 2021

feedPlanet Ubuntu

Stephen Michael Kellat: Remembering Planning

The month has started off with some big surprises for me. I wound up paying the equivalent of roughly 34 Beta Edition PinePhones, or roughly 72 Raspberry Pi 400 units, to get my home's central heating and cooling system replaced. It has been a few days of disruption since the unit failed, which, combined with the rather hot weather, has made my home not quite fit for habitation.

Things like that help me appreciate events like the Fastly outage on Tuesday morning. A glitch at that content delivery network provider disrupted quite a number of sites. While it was a brief event that happened while I was asleep, it was apparently jarring to many people.

Both happenings point out that resilience is a journey rather than a concrete endpoint. How easily can you bounce back from the unexpected? If you operate an online service do you even have a plan for when something goes horribly wrong?

Fortunately when the central air unit at home ceased functioning we were able to stay with family while I tracked down a contractor to do an assessment which then turned into a replacement job. Fastly had a contingency plan that it executed to keep the incident down to less than an hour. Whether you are running a massive service or just a small shared server for friends you need to have some notion of what you intend to do when disaster strikes.

Tags: Contingencies

09 Jun 2021 4:42am GMT

07 Jun 2021

feedPlanet Ubuntu

The Fridge: Ubuntu Weekly Newsletter Issue 686

Welcome to the Ubuntu Weekly Newsletter, Issue 686 for the week of May 30 - June 5, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

07 Jun 2021 10:49pm GMT

05 Jun 2021

feedPlanet Ubuntu

David Tomaschik: GPU Accelerated Password Cracking in the Cloud: Speed and Cost-Effectiveness

Note: Though this testing was done on Google Cloud and I work at Google, this work and blog post represent my personal work and do not represent the views of my employer.

As a red teamer and security researcher, I occasionally find the need to crack some hashed passwords. It used to be that John the Ripper was the go-to tool for the job. With the advent of GPGPU technologies like CUDA and OpenCL, hashcat quickly eclipsed John for pure speed. Unfortunately, graphics cards are a bit hard to come by in 2021. I decided to take a look at the options for running hashcat on Google Cloud.

There are several steps involved in getting hashcat running with CUDA, and because I often only need to run the instance for a short period of time, I put together a script to spin up hashcat on a Google Cloud VM. It can either run the benchmark or spin up an instance with arbitrary flags. It starts the instance but does not stop it upon completion, so if you want to give it a try, make sure you shut down the instance when you're done with it. (It leaves the hashcat job running in a tmux session for you to examine.)
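The script linked above handles this for you, but the core of creating a GPU-equipped VM is a single gcloud call along these lines; the instance name, zone, machine type, GPU model and disk size are placeholder choices, and the NVIDIA driver, CUDA and hashcat still need to be installed on the instance afterwards:

gcloud compute instances create hashcat-rig \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-size=100GB

# The instance keeps billing until you delete it:
gcloud compute instances delete hashcat-rig --zone=us-central1-a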

At the moment, there are 6 available GPU accelerators on Google Cloud, spanning the range of architectures from Kepler to Ampere (see pricing here):

  • NVIDIA Tesla K80 (Kepler)
  • NVIDIA Tesla P100 (Pascal)
  • NVIDIA Tesla P4 (Pascal)
  • NVIDIA Tesla V100 (Volta)
  • NVIDIA Tesla T4 (Turing)
  • NVIDIA Tesla A100 (Ampere)

Performance Results

I chose a handful of common hashes as representative samples across the different architectures. These include MD5, SHA1, NTLM, sha512crypt, and WPA-PBKDF2. These represent some of the most common password cracking situations encountered by penetration testers. Unsurprisingly, overall performance is most directly related to the number of CUDA cores, followed by speed and architecture.

Relative Performance Graph

Speeds in the graph are normalized to the slowest model in each test (the K80 in all cases).

Note that the Ampere-based A100 is 11-15 times as fast as the slowest K80. (On some of the benchmarks, it can reach 55 times as fast, but these are less common.) There's a wide range of hardware here, and depending on availability and GPU type, you can attach from 1 to 16 GPUs to a single instance, and hashcat can spread the load across all of the attached GPUs.

Full results of all of the tests, using the slowest hardware as a baseline for percentages:

Algorithm nvidia-tesla-k80 nvidia-tesla-p100 nvidia-tesla-p4 nvidia-tesla-v100 nvidia-tesla-t4 nvidia-tesla-a100
0 - MD5 4.3 GH/s 100.0% 27.1 GH/s 622.2% 16.6 GH/s 382.4% 55.8 GH/s 1283.7% 18.8 GH/s 432.9% 67.8 GH/s 1559.2%
100 - SHA1 1.9 GH/s 100.0% 9.7 GH/s 497.9% 5.6 GH/s 286.6% 17.5 GH/s 905.4% 6.6 GH/s 342.8% 21.7 GH/s 1119.1%
1400 - SHA2-256 845.7 MH/s 100.0% 3.3 GH/s 389.5% 2.0 GH/s 238.6% 7.7 GH/s 904.8% 2.8 GH/s 334.8% 9.4 GH/s 1116.7%
1700 - SHA2-512 230.3 MH/s 100.0% 1.1 GH/s 463.0% 672.5 MH/s 292.0% 2.4 GH/s 1039.9% 789.9 MH/s 343.0% 3.1 GH/s 1353.0%
22000 - WPA-PBKDF2-PMKID+EAPOL (Iterations: 4095) 80.7 kH/s 100.0% 471.4 kH/s 584.2% 292.9 kH/s 363.0% 883.5 kH/s 1094.9% 318.3 kH/s 394.5% 1.1 MH/s 1354.3%
1000 - NTLM 7.8 GH/s 100.0% 49.9 GH/s 643.7% 29.9 GH/s 385.2% 101.6 GH/s 1310.6% 33.3 GH/s 429.7% 115.3 GH/s 1487.3%
3000 - LM 3.8 GH/s 100.0% 25.0 GH/s 661.9% 13.1 GH/s 347.8% 41.5 GH/s 1098.4% 19.4 GH/s 514.2% 65.1 GH/s 1722.0%
5500 - NetNTLMv1 / NetNTLMv1+ESS 5.0 GH/s 100.0% 26.6 GH/s 533.0% 16.1 GH/s 322.6% 54.9 GH/s 1100.9% 19.7 GH/s 395.6% 70.6 GH/s 1415.7%
5600 - NetNTLMv2 322.1 MH/s 100.0% 1.8 GH/s 567.5% 1.1 GH/s 349.9% 3.8 GH/s 1179.7% 1.4 GH/s 439.4% 5.0 GH/s 1538.1%
1500 - descrypt, DES (Unix), Traditional DES 161.7 MH/s 100.0% 1.1 GH/s 681.5% 515.3 MH/s 318.7% 1.7 GH/s 1033.9% 815.9 MH/s 504.6% 2.6 GH/s 1606.8%
500 - md5crypt, MD5 (Unix), Cisco-IOS $1$ (MD5) (Iterations: 1000) 2.5 MH/s 100.0% 10.4 MH/s 416.4% 6.3 MH/s 251.1% 24.7 MH/s 989.4% 8.7 MH/s 347.6% 31.5 MH/s 1260.6%
3200 - bcrypt $2*$, Blowfish (Unix) (Iterations: 32) 2.5 kH/s 100.0% 22.9 kH/s 922.9% 13.4 kH/s 540.7% 78.4 kH/s 3155.9% 26.7 kH/s 1073.8% 135.4 kH/s 5450.9%
1800 - sha512crypt $6$, SHA512 (Unix) (Iterations: 5000) 37.9 kH/s 100.0% 174.6 kH/s 460.6% 91.6 kH/s 241.8% 369.6 kH/s 975.0% 103.5 kH/s 273.0% 535.4 kH/s 1412.4%
7500 - Kerberos 5, etype 23, AS-REQ Pre-Auth 43.1 MH/s 100.0% 383.9 MH/s 889.8% 186.7 MH/s 432.7% 1.0 GH/s 2427.2% 295.0 MH/s 683.8% 1.8 GH/s 4281.9%
13100 - Kerberos 5, etype 23, TGS-REP 32.3 MH/s 100.0% 348.8 MH/s 1080.2% 185.3 MH/s 573.9% 1.0 GH/s 3123.0% 291.7 MH/s 903.4% 1.8 GH/s 5563.8%
15300 - DPAPI masterkey file v1 (Iterations: 23999) 15.6 kH/s 100.0% 80.8 kH/s 519.0% 50.2 kH/s 322.3% 150.9 kH/s 968.9% 55.6 kH/s 356.7% 187.2 kH/s 1202.0%
15900 - DPAPI masterkey file v2 (Iterations: 12899) 8.1 kH/s 100.0% 36.7 kH/s 451.0% 22.1 kH/s 271.9% 79.9 kH/s 981.4% 31.3 kH/s 385.0% 109.2 kH/s 1341.5%
7100 - macOS v10.8+ (PBKDF2-SHA512) (Iterations: 1023) 104.1 kH/s 100.0% 442.6 kH/s 425.2% 272.5 kH/s 261.8% 994.6 kH/s 955.4% 392.5 kH/s 377.0% 1.4 MH/s 1304.0%
11600 - 7-Zip (Iterations: 16384) 91.9 kH/s 100.0% 380.5 kH/s 413.8% 217.0 kH/s 236.0% 757.8 kH/s 824.2% 266.6 kH/s 290.0% 1.1 MH/s 1218.6%
12500 - RAR3-hp (Iterations: 262144) 12.1 kH/s 100.0% 64.2 kH/s 528.8% 20.3 kH/s 167.6% 102.2 kH/s 842.3% 28.1 kH/s 231.7% 155.4 kH/s 1280.8%
13000 - RAR5 (Iterations: 32799) 10.2 kH/s 100.0% 39.6 kH/s 389.3% 24.5 kH/s 240.6% 93.2 kH/s 916.6% 30.2 kH/s 297.0% 118.7 kH/s 1167.8%
6211 - TrueCrypt RIPEMD160 + XTS 512 bit (Iterations: 1999) 66.8 kH/s 100.0% 292.4 kH/s 437.6% 177.3 kH/s 265.3% 669.9 kH/s 1002.5% 232.1 kH/s 347.3% 822.4 kH/s 1230.8%
13400 - KeePass 1 (AES/Twofish) and KeePass 2 (AES) (Iterations: 24569) 10.9 kH/s 100.0% 67.0 kH/s 617.1% 19.0 kH/s 174.8% 111.2 kH/s 1024.8% 27.3 kH/s 251.2% 139.0 kH/s 1281.0%
6800 - LastPass + LastPass sniffed (Iterations: 499) 651.9 kH/s 100.0% 2.5 MH/s 390.4% 1.5 MH/s 232.2% 6.0 MH/s 914.8% 2.0 MH/s 304.7% 7.6 MH/s 1160.0%
11300 - Bitcoin/Litecoin wallet.dat (Iterations: 200459) 1.3 kH/s 100.0% 5.0 kH/s 389.9% 3.1 kH/s 241.5% 11.4 kH/s 892.3% 4.1 kH/s 325.3% 14.4 kH/s 1129.2%

Value Results

Believe it or not, speed doesn't tell the whole story, unless you're able to bill the cost directly to your customer - in that case, go straight for that 16-A100 instance. :)

You're probably more interested in value, however - that is, hashes per dollar. This is computed from the speed and the price per hour. For each card, I computed the median relative performance across all of the hashes in the default hashcat benchmark, divided that performance by the price per hour, and then normalized these values again.

Relative Value

Relative value is the mean speed per cost, in terms of the K80.

Card Performance Price Value
nvidia-tesla-k80 100.0 $0.45 1.00
nvidia-tesla-p100 519.0 $1.46 1.60
nvidia-tesla-p4 286.6 $0.60 2.15
nvidia-tesla-v100 1002.5 $2.48 1.82
nvidia-tesla-t4 356.7 $0.35 4.59
nvidia-tesla-a100 1341.5 $2.93 2.06
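To make the value column concrete, here is the same arithmetic for two of the cards, using the prices and median relative performance from the table above:

# value = (median relative performance / price per hour), normalised to the K80
awk 'BEGIN {
    base = 100.0 / 0.45                                           # K80 baseline
    printf "nvidia-tesla-t4    %.2f\n", (356.7  / 0.35) / base    # -> 4.59
    printf "nvidia-tesla-a100  %.2f\n", (1341.5 / 2.93) / base    # -> 2.06
}'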

Though the NVIDIA T4 is nowhere near the fastest, it is the most efficient in terms of cost, primarily due to its very low $0.35/hr pricing. (At the time of writing.) If you have a particular hash to focus on, you may want to consider doing the math for that hash type, but the relative performances seem to have the same trend. It's actually a great value.

So maybe the next time you're on an engagement and need to crack hashes, you'll be able to figure out if the cloud is right for you.

05 Jun 2021 7:00am GMT

03 Jun 2021

feedPlanet Ubuntu

Podcast Ubuntu Portugal: Ep 145 – Amália

An episode recorded under particularly demanding vocal conditions for one of the hosts, and one that turned out to be quite demanding for the other as well, but as Freddie Mercury said: The Show Must Go On!

You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, o Senhor Podcast.

The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae and is licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

03 Jun 2021 9:45pm GMT

Ubuntu Podcast from the UK LoCo: S14E13 – Wants Photo Booth

This week we've been fixing phones and relearning trigonometry. We round up the news and events from the Ubuntu community and discuss news from the wider tech scene.

It's Season 14 Episode 13 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week's show:

That's all for this week! If there's a topic you'd like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

03 Jun 2021 2:00pm GMT

01 Jun 2021

feedPlanet Ubuntu

Bryan Quigley: Why hasn't snap or flatpak won yet?

Where win means becomes the universal way to get apps on Linux.

In short, I don't think either current iteration will. But why?

I started writing this a while ago, but Disabling snap Autorefresh reminded me to finish it. I also do not mean this as a "hit piece" against my former employer.

Here is a quick status of where we are:

Use case Snaps Flatpak
Desktop app ☑️ ☑️
Service/Server app ☑️ 🚫
Embedded ☑️ 🚫
Command Line apps ☑️ 🚫
Full independence option 🚫 ☑️
Build a complete desktop 🚫 ☑️
Controlling updates 🚫 ☑️

Desktop apps

Both Flatpaks and Snaps are pretty good at desktop apps. They share some bits and have some differences. Flatpak might have a slight edge because it's focused only on Desktop apps, but for the most part it's a wash.

Service/Server / Embedded / Command Line apps

Flatpak doesn't target these at all. Full stop.

Snap wins these without competition from Flatpak but this does show a security difference. sudo snap install xyz will just install it - it won't ask you if you think it's a service, desktop app or some combination (or prompt you for permissions like Flatpak does).

For embedded use with Ubuntu Core, strict confinement is required, which is a plus. (Which, yes, means "something less than strict" confinement everywhere else.)

Aside: As Fedora Silverblue and Endless OS both only let you install Flatpaks, they also come with the container-based Toolbox to make it possible to run other apps.

Full independence option / Build a complete desktop

Snaps

You can not go and (re)build your own distro and use upstream snapd.

Snaps generally run from one LTS "core" behind what you might expect from your Ubuntu desktop version. For example: core18 is installed by default on Ubuntu 21.04. The embedded Ubuntu Core option is the only one that uses just one version of Ubuntu core code.

Flatpak

With Flatpak you can choose to use one of many public bases like the Freedesktop platform or Gnome platform. You can also build your own Platform like Fedora Silverblue does. All of the default flatpaks that Silverblue comes with are derived from the "regular" Fedora of the same version. You can of course add other sources too. Example: The Gnome Calculator from Silverblue is built from the Fedora RPMs and depends on the org.fedoraproject.Platform built from that same version of Fedora.

Aside: I should note that to do that you need OSTree to make the Platforms.

Controlling updates

Flatpak itself does not do any updates automatically. It relies on your software application (Gnome Software) to do it. It also has the ability for apps to check for their own updates and ask to update themselves.
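For illustration (not from the post), the Flatpak update flow is entirely pull-based; "flathub" here is just the usual remote name:

flatpak remote-ls --updates flathub   # list pending updates
flatpak update                        # apply them whenever you choose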

Snaps are more complicated, but why? Let's look at the Ubuntu IoT and device services that Canonical sells:

Dedicated app store ...complete control of application versions, updates and controlled rollouts for $15,000 per year.

Enterprise app store ...control snap updates and upgrades. Ensure that all device traffic goes through an audited communications channel and determine the precise versions of snaps used inside the business.

Control of the update process is one of the ways Canonical is trying to make money. I don't believe anyone has ever told me explicitly that this is why snap updates work this way; it just makes sense given the business considerations.
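By contrast, the knobs an ordinary snapd user gets are about scheduling and deferring refreshes rather than switching them off; a couple of examples (the window and hold period are placeholders):

snap refresh --list                                     # see what would refresh
sudo snap set system refresh.timer="fri,23:00-01:00"    # constrain the refresh window
sudo snap set system refresh.hold="$(date --iso-8601=seconds --date='+30 days')"   # defer, for a bounded period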

So who is going to "win"?

One of them might go away, but neither is set to become the universal way to get apps on Linux, at least not today.

It could change starting with something like:

  • Flatpak (or something like it) evolves to support command line or other apps.
  • A snap based Ubuntu desktop takes off and becomes the default Ubuntu.

Neither step on its own would get it all the way there, but each would be needed to prove what the technology can do. In both cases, the underlying confinement technology is being improved for all.

Comments

Maybe I missed something? Feel free to make a PR to add comments!

01 Jun 2021 8:00pm GMT