22 Jun 2018

Benjamin Mako Hill: I’m a maker, baby

What does the "maker movement" think of the song "Maker" by Fink?

Is it an accidental anthem or just unfortunate evidence of the semantic ambiguity around an overloaded term?

22 Jun 2018 11:34pm GMT

21 Jun 2018

Ubuntu Podcast from the UK LoCo: S11E15 – Fifteen Minutes - Ubuntu Podcast

This week we get the Hades Canyon NUC fully working and play Pillars of Eternity II. We discuss the falling value of Bitcoin, backdoored Docker images, and Microsoft getting into hot water over their work with US Immigration and Customs Enforcement. Plus we round up the community news.

It's Season 11 Episode 15 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week's show:

That's all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there's a topic you'd like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

21 Jun 2018 3:01pm GMT

20 Jun 2018

Jonathan Carter: Plans for DebCamp18

Dates

I'm going to DebCamp18! I should arrive at NCTU around noon on Saturday, 2018-07-21.

My Agenda

20 Jun 2018 8:32am GMT

19 Jun 2018

Benjamin Mako Hill: How markets coopted free software’s most powerful weapon (LibrePlanet 2018 Keynote)

Several months ago, I gave the closing keynote address at LibrePlanet 2018. The talk was about the thing that scares me most about the future of free culture, free software, and peer production.

A video of the talk is online on YouTube and available as a WebM video file (both links should skip the first 3m 19s of thanks and introductions).

Here's a summary of the talk:

App stores and the so-called "sharing economy" are two examples of business models that rely on techniques for the mass aggregation of distributed participation over the Internet and that simply didn't exist a decade ago. In my talk, I argue that the firms pioneering these new models have learned and adapted processes from commons-based peer production projects like free software, Wikipedia, and CouchSurfing.

The result is an important shift: A decade ago, the kind of mass collaboration that made Wikipedia, GNU/Linux, or CouchSurfing possible was the exclusive domain of people producing freely and openly in commons. Not only is this no longer true, but new proprietary, firm-controlled, and money-based models are increasingly replacing, displacing, outcompeting, and potentially reducing what's available in the commons. For example, the number of people joining CouchSurfing to host others seems to have been in decline since Airbnb began its own meteoric growth.

In the talk, I discuss how this happened and what I think it means for folks who are committed to working in commons. I also talk a little bit about what the free culture and free software communities should do now that mass collaboration, these communities' most powerful weapon, is being used against them.

I'm very much interested in feedback, provided any way you want to reach me: in person, over email, in comments on my blog, on Mastodon, on Twitter, etc.


Work on the research that is reflected and described in this talk was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). Some of the initial ideas behind this talk were developed while working on this paper (official link) which was led by Maximilian Klein and contributed to by Jinhao Zhao, Jiajun Ni, Isaac Johnson, and Haiyi Zhu.

19 Jun 2018 6:03pm GMT

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 202 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased to 190 hours per month thanks to a few new sponsors who joined to benefit from Wheezy's Extended LTS support.

We are currently in a transition phase. Wheezy is no longer supported by the LTS team and the LTS team will soon take over security support of Debian 8 Jessie from Debian's regular security team.

Thanks to our sponsors

New sponsors are in bold.

19 Jun 2018 8:27am GMT

David Tomaschik: Pros vs Joes CTF: The Evolution of Blue Teams

Pros v Joes CTF is a CTF that holds a special place in my heart. Over the years, I've moved from playing in the 1st CTF as a day-of pickup player (signing up at the conference) to a Blue Team Pro, to core CTF staff. It's been an exciting journey, and Red Teaming there is about the only role I haven't held. (Which is somewhat ironic given that my day job is a red team lead.) As Blue teams have just formed, and I'm not currently attached to any single team, I wanted to share my thoughts on the evolution of Blue teaming in this unique CTF. In many ways, this will resemble the Blue Team player's guide I wrote about 3 years ago, but will be based on the evolution of the game and of the industry itself. That post remains relevant, and I encourage you to read it as well.

Basics

Let's start with a refresher on the basics as they exist today. It's a two-day game, with teams being completely "blue" (defensive) on the first day, and moving to a "purple" stance (defending their own network, and able to attack each other as well) on the second day. During the first day, there's a dedicated red team providing the offensive incentive to the blue teams, as well as a grey team representing the users/customers of the blue team services.

Each blue team consists of eight players and two pros. The role of the pros is increasingly mentorship and less "hands on keyboard", fitting with the Pros v Joes mission of providing education & mentorship.

Scoring

Scoring was originally based entirely on Health & Welfare checks (i.e., service up and responding) and flags that could be captured from the hosts. Originally, there were "integrity" flags (submitted by blue) and offense flags (submitted by red).

As of 2017, scoring included health & welfare (service uptime), beacons (red cell contacting the scoreboard from the server to prove that it is compromised), flags (in theory anyway), and an in-game marketplace that could have both positive and negative effects. 2018 scoring details have not yet been released, but check the 2018 rules when published.

The Environment

The environment changes every year, but it's a highly heterogeneous network with all of the typical services you would find in a corporate network. At a minimum, you're likely to see:

The operating systems will vary, and will include older and newer OSs of both Windows and Linux varieties. There has also always been a firewall under the control of each team, segregating that team's network from the rest of the game environment. These have included both Cisco ASA and pfSense firewalls.

Each player connects to the game environment using OpenVPN based on configurations and credentials provided by Dichotomy.
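
Once you have your configuration file and credentials, connecting is a single command. As a sketch (pvj-player.ovpn is a placeholder for whatever file name you are actually given):

$ sudo openvpn --config pvj-player.ovpn

OpenVPN stays in the foreground by default, so give it its own terminal while you play.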

Preparation

There has been an increasing amount of preparation involved in each of the years I have participated in PvJ. This preparation has essentially come in two core forms:

  1. Learning about the principles of hardening systems and networks.
  2. Preparing scripts, tools, and toolkits for use during the game.

Fundamentals

It turns out that much of the fundamental knowledge necessary for securing a network is just basic system administration. Understanding how a system works and how systems interact with each other provides much of the basics of information security.

On both Windows and Linux, it is useful to understand:

Understanding basic networking is also useful, including:

Knowing some kind of scripting language as well can be very useful, especially if your team prepares some scripts in advance for common operations. Languages that I've found useful include:

Player Toolkit

Obviously, if you're playing in a CTF, you'll need a computer. Many of the tools you'll want to use are either designed for Linux or are more commonly used on Linux, so almost everyone will want to have some sort of a Linux environment available. I suggest that you use whatever operating system you are most comfortable with as your "bare metal" operating system, so if that's Windows, you'll want to run a Linux virtual machine.

If you use a MacBook (which seems to be the most common choice at a lot of security conferences), you may want both a Windows VM and a Linux VM, as the Windows Server administration tools (should you choose to use them) only run on Windows clients. It's also been reported that Tunnelblick is the best option for an OpenVPN client on macOS.

As to choice of Linux distribution, if you don't have any personal preference, I would suggest using Kali Linux. It's not that Kali has anything you can't get on other distributions, but it's well-known in the security industry, well documented, and based on Debian Linux, which makes it well-supported and a close cousin of Ubuntu Linux that many have worked with before.

There are some tools that are absolutely necessary and you should familiarize yourself with them in advance:

Other tools you'll probably want to get some experience with:

Useful Resources

Game Strategy

Every team has their own general strategy to the game, but there are a few things I've found that seem to make gameplay go more smoothly for the team:

Dos & Don'ts

Making the Most of It

Like so many things in life, the PvJ CTF is a case where you get out of it what you put into it. If you think you can learn it all by osmosis, just by being on the same team without putting in the effort, it's unlikely to work out. PvJ gives you an enthusiastic team, mentors willing to help, and a top-notch environment to try things out that you might not have the resources for in your own environment.

To all the players: Good luck, learn new things, and have fun!

19 Jun 2018 7:00am GMT

18 Jun 2018

The Fridge: Ubuntu Weekly Newsletter Issue 532

Welcome to the Ubuntu Weekly Newsletter, Issue 532 for the week of June 10 - 16, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

18 Jun 2018 10:00pm GMT

15 Jun 2018

Daniel Pocock: The questions you really want FSFE to answer

As the last man standing as a fellowship representative in FSFE, I propose to give a report at the community meeting at RMLL.

I'm keen to get feedback from the wider community as well, including former fellows, volunteers and anybody else who has come into contact with FSFE.

It is important for me to understand the topics you want me to cover as so many things have happened in free software and in FSFE in recent times.

Some of the things people already asked me about:

  • the status of the fellowship and the membership status of fellows
  • use of non-free software and cloud services in FSFE, deviating from the philosophy that people associate with the FSF / FSFE family
  • measuring both the impact and cost of campaigns, to see if we get value for money (a high level view of expenditure is here)

What are the issues you would like me to address? Please feel free to email me privately or publicly. If I don't have answers immediately, I will seek to get them for you as I prepare my report. Without your support and feedback, I don't have a mandate to pursue these issues on your behalf, so if you have any concerns, please reply.

Your fellowship representative

15 Jun 2018 7:28am GMT

14 Jun 2018

Kees Cook: security things in Linux v4.17

Previously: v4.16.

Linux kernel v4.17 was released last week, and here are some of the security things I think are interesting:

Jailhouse hypervisor

Jan Kiszka landed Jailhouse hypervisor support, which uses static partitioning (i.e. no resource over-committing), where the root "cell" spawns new jails by shrinking its own CPU/memory/etc resources and hands them over to the new jail. There's a nice write-up of the hypervisor on LWN from 2014.

Sparc ADI

Khalid Aziz landed the userspace support for Sparc Application Data Integrity (ADI or SSM: Silicon Secured Memory), which is the hardware memory coloring (tagging) feature in Sparc M7. I'd love to see this extended into the kernel itself, as it would kill linear overflows between allocations, since the base pointer being used is tagged to belong to only a certain allocation (sized to a multiple of cache lines). Any attempt to increment beyond, into memory with a different tag, raises an exception. Enrico Perla has some great write-ups on using ADI in allocators and a comparison of ADI to Intel's MPX.

new kernel stacks cleared on fork

It was possible that old memory contents would live in a new process's kernel stack. While normally not visible, "uninitialized" memory read flaws or read overflows could expose these contents (especially stuff "deeper" in the stack that may never get overwritten for the life of the process). To avoid this, I made sure that new stacks were always zeroed. Oddly, this "priming" of the cache appeared to actually improve performance, though it was mostly in the noise.

MAP_FIXED_NOREPLACE

As part of further defense in depth against attacks like Stack Clash, Michal Hocko created MAP_FIXED_NOREPLACE. The regular MAP_FIXED has a subtle behavior not normally noticed (but used by some, so it couldn't just be fixed): it will replace any overlapping portion of a pre-existing mapping. This means the kernel would silently overlap the stack into mmap or text regions, since MAP_FIXED was being used to build a new process's memory layout. Instead, MAP_FIXED_NOREPLACE has all the features of MAP_FIXED without the replacement behavior: it will fail if a pre-existing mapping overlaps with the newly requested one. The ELF loader has been switched to use MAP_FIXED_NOREPLACE, and it's available to userspace too, for similar use-cases.
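
To make the new behavior concrete, here is a minimal sketch of my own (not from the kernel changelog): on a v4.17+ kernel, the second mmap() below fails with EEXIST instead of silently replacing the first mapping.

$ cat > noreplace-demo.c <<'EOF'
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_FIXED_NOREPLACE
#define MAP_FIXED_NOREPLACE 0x100000  /* value from the v4.17 uapi headers */
#endif

int main(void)
{
    /* Create one anonymous page wherever the kernel likes. */
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* Request the same address again. MAP_FIXED would silently
     * replace the existing mapping; MAP_FIXED_NOREPLACE fails.
     * (Pre-4.17 kernels ignore the unknown flag and may return a
     * different address, so portable code should also check q == p.) */
    void *q = mmap(p, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0);
    if (q == MAP_FAILED)
        printf("second mmap failed: %s\n", strerror(errno));
    return 0;
}
EOF
$ gcc noreplace-demo.c -o noreplace-demo && ./noreplace-demo
second mmap failed: File exists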

pin stack limit during exec

I used a big hammer and pinned the RLIMIT_STACK values during exec. There were multiple methods to change the limit (through at least setrlimit() and prlimit()), and there were multiple places the limit got used to make decisions, so it seemed best to just pin the values for the life of the exec so no games could get played with them. Too much assumed the value wasn't changing, so better to make that assumption actually true. Hopefully this is the last of the fixes for these bad interactions between stack limits and memory layouts during exec (which have all been defensive measures against flaws like Stack Clash).
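
For context, RLIMIT_STACK is the limit being pinned here. You can inspect it from userspace with the shell built-in, or with util-linux's prlimit tool, which wraps the same prlimit(2) syscall mentioned above (a sketch; the values shown are common defaults and will differ per system):

$ ulimit -s                   # soft RLIMIT_STACK for this shell, in KiB
8192
$ prlimit --stack --pid $$    # same limit, queried via prlimit(2)
RESOURCE DESCRIPTION          SOFT      HARD UNITS
STACK    max stack size    8388608 unlimited bytes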

Variable Length Array removals start

Following some discussion over Alexander Popov's ongoing port of the stackleak GCC plugin, Linus declared that Variable Length Arrays (VLAs) should be eliminated from the kernel entirely. This is great because it kills several stack exhaustion attacks, including weird stuff like stepping over guard pages with giant stack allocations. However, with several hundred uses in the kernel, this wasn't going to be an easy job. Thankfully, a whole bunch of people stepped up to help out: Gustavo A. R. Silva, Himanshu Jha, Joern Engel, Kyle Spiers, Laura Abbott, Lorenzo Bianconi, Nikolay Borisov, Salvatore Mesoraca, Stephen Kitt, Takashi Iwai, Tobin C. Harding, and Tycho Andersen. With Linus Torvalds and Martin Uecker, I also helped rewrite the max() macro to eliminate false positives seen by the -Wvla compiler option. Overall, about 1/3rd of the VLA instances were solved for v4.17, with many more coming for v4.18. I'm hoping we'll have entirely eliminated VLAs by the time v4.19 ships.
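
For readers who haven't run into one, here is a tiny standalone sketch of my own (not from the kernel tree) showing what a VLA is and how the -Wvla option mentioned above flags it; the exact warning text and location vary by GCC version:

$ cat > vla-demo.c <<'EOF'
/* The stack grows by a runtime-controlled amount here: this is
 * the pattern being removed from the kernel. */
void f(unsigned int n)
{
    char buf[n];    /* a variable length array */
    for (unsigned int i = 0; i < n; i++)
        buf[i] = 0;
}
EOF
$ gcc -Wvla -c vla-demo.c
vla-demo.c: In function 'f':
vla-demo.c:5:5: warning: ISO C90 forbids variable length array 'buf' [-Wvla]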

That's it for now! Please let me know if you think I missed anything. Stay tuned for v4.18; the merge window is open. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

14 Jun 2018 11:23pm GMT

Simos Xenitellis: How to use LXD container hostnames on the host in Ubuntu 18.04

If you have two LXD containers, mycontainer1 and mycontainer2, then from inside one container you can reach the other using the handy *.lxd hostnames, like this,

$ lxc exec mycontainer1 -- sudo --user ubuntu --login
ubuntu@mycontainer1:~$ ping mycontainer2.lxd
PING mycontainer2.lxd(mycontainer2.lxd (fd42:cba6:557e:1a5a:24e:3eff:fce2:8d3)) 56 data bytes
64 bytes from mycontainer2.lxd (fd42:cba6:557e:1a5a:24e:3eff:fce2:8d3): icmp_seq=1 ttl=64 time=0.125 ms
^C
--- mycontainer2.lxd ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
ubuntu@mycontainer1:~$

Those hostnames are provided automatically by LXD when you use a default private bridge like lxdbr0. They are provided by the dnsmasq service that LXD starts for you, and it's a service that binds specifically on that lxdbr0 network interface.
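
You can confirm this from the host by looking for dnsmasq's DNS listener on port 53 (a sketch; the address will be your lxdbr0 address, 10.10.10.1 in the example below, and the PID and file descriptor will differ on your system):

$ sudo ss -ulnp | grep dnsmasq
UNCONN 0 0 10.10.10.1:53 0.0.0.0:* users:(("dnsmasq",pid=1234,fd=8))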

LXD does not make changes to the networking of the host; therefore, you cannot use those hostnames from your host,

ubuntu@mycontainer1:~$ exit
$ ping mycontainer2.lxd
ping: unknown host mycontainer2.lxd
Exit 2

In this post we are going to see how to set up the host on Ubuntu 18.04 (or any Linux distribution that uses systemd-resolved) so that the host can access the container hostnames.

The default systemd-resolved configuration for the lxdbr0 bridge on the host is

$ systemd-resolve --status lxdbr0
Link 2 (lxdbr0)
      Current Scopes: none
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no

The goal is to add the appropriate DNS server entries to that configuration.

First, let's get the IP address of LXD's dnsmasq server for the network interface lxdbr0.

$ ip addr show dev lxdbr0
2: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fe:2b:da:d9:49:4a brd ff:ff:ff:ff:ff:ff
inet 10.10.10.1/24 scope global lxdbr0
valid_lft forever preferred_lft forever
inet6 fd42:6a89:42d0:60b::1/64 scope global 
valid_lft forever preferred_lft forever
inet6 fe80::10cf:51ff:fe05:5383/64 scope link 
valid_lft forever preferred_lft forever

The IP address of the lxdbr0 interface in this case is 10.10.10.1 and that is the IP of LXD's DNS server.

Now we can configure the host to consult LXD's DNS server.

Temporary network configuration

Run the following command to temporarily configure the interface and add the DNS service details.

$ sudo systemd-resolve --interface lxdbr0 --set-dns 10.10.10.1 --set-domain lxd

In this command,

  1. we specify the network interface lxdbr0
  2. we set the DNS server to the IP address of lxdbr0, the interface that dnsmasq is listening on.
  3. we set the domain to lxd, as the hostnames are of the form mycontainer.lxd.

Now, the configuration looks like

$ systemd-resolve --status lxdbr0
Link 2 (lxdbr0)
      Current Scopes: DNS
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers: 10.10.10.1
          DNS Domain: lxd

You can now verify that you can, for example, get the IP address of the container by name:

$ host mycontainer1.lxd
mycontainer1.lxd has address 10.10.10.88
mycontainer1.lxd has IPv6 address fd42:8196:99f3:52ad:216:3eff:fe0f:bacb
$

Note: The first time that you try to resolve such a hostname, it will take a few seconds for systemd-resolved to complete the resolution. You will get the result shown above, but the command will not return immediately. The reason is that systemd-resolved is also waiting for a resolution from your host's default DNS server, and you are waiting for that resolution to time out. Subsequent attempts will be served from the cache and return immediately.

You can also revert these settings with the following command,

$ systemd-resolve --interface lxdbr0 --revert
$ systemd-resolve --status lxdbr0
Link 3 (lxdbr0)
      Current Scopes: none
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
$

In general, this is a temporary network configuration and nothing has been saved to a file. When we reboot the computer, the configuration is gone.

Permanent network configuration

We are going to set up systemd to run the temporary network configuration automatically whenever LXD starts. That is, as soon as lxdbr0 is up, our additional script will run and configure the per-link network settings.

First, create the following auxiliary script files.

$ cat /usr/local/bin/lxdhostdns_start.sh 
#!/bin/sh

LXDINTERFACE=lxdbr0
LXDDOMAIN=lxd
# Extract the IPv4 address of lxdbr0 (the address dnsmasq listens on)
LXDDNSIP=`ip addr show ${LXDINTERFACE} | grep -Po 'inet \K[\d.]+'`

/usr/bin/systemd-resolve --interface ${LXDINTERFACE} \
                         --set-dns ${LXDDNSIP} \
                         --set-domain ${LXDDOMAIN}

$ cat /usr/local/bin/lxdhostdns_stop.sh 
#!/bin/sh

LXDINTERFACE=lxdbr0

/usr/bin/systemd-resolve --interface ${LXDINTERFACE} --revert

Second, make them executable.

$ sudo chmod +x /usr/local/bin/lxdhostdns_start.sh /usr/local/bin/lxdhostdns_stop.sh

Third, create the following systemd service file.

$ sudo cat /lib/systemd/system/lxd-host-dns.service 
[Unit]
Description=LXD host DNS service
After=lxd-containers.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/lxdhostdns_start.sh
RemainAfterExit=true
ExecStop=/usr/local/bin/lxdhostdns_stop.sh
StandardOutput=journal

[Install]
WantedBy=multi-user.target

This file describes a oneshot service that runs after lxd-containers.service: it executes the start script when the service starts and, because of RemainAfterExit, the stop script when the service is stopped.

Fourth, reload systemd and enable the new service. The service is enabled so that it will start automatically when we reboot.

$ sudo systemctl daemon-reload
$ sudo systemctl enable lxd-host-dns.service
Created symlink /etc/systemd/system/multi-user.target.wants/lxd-host-dns.service → /lib/systemd/system/lxd-host-dns.service.
$

Note: This should work better than the old instructions (next section). Those old instructions would fail if the lxdbr0 network interface was not up. Still, I am not completely happy with this new section: it appears that when you explicitly start or stop the new service, the action may not run. To be tested.

(old section, not working) Permanent network configuration

In systemd, we can add per-network-interface configuration by adding a file in /etc/systemd/network/.

It should be a file with the extension .network, and the appropriate content.

Add the following file

$ cat /etc/systemd/network/lxd.network 
[Match]
Name=lxdbr0

[Network]
DNS=10.100.100.1
Domains=lxd

We chose the name lxd.network for the filename. As long as it has the .network extension, we are fine.

The [Match] section matches the name of the network interface, which is lxdbr0. The rest will only apply if the network interface is indeed lxdbr0.

The [Network] section has the specific network settings. We set the DNS to the IP of the LXD DNS server. And the Domains to the domain suffix of the hostnames. The lxd in Domains is the suffix that is configured in LXD's DNS server.

Now, let's restart the host and check the network configuration.

$ systemd-resolve --status
...
Link 2 (lxdbr0)
      Current Scopes: DNS
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers: 10.100.100.1
                      fe80::a405:eade:4376:3817
          DNS Domain: lxd

Everything looks fine. By doing the configuration this way, systemd-resolved also automatically picked up the IPv6 address.

Conclusion

We have seen how to set up the host of an LXD installation so that processes on the host are able to resolve the hostnames of the containers. This applies to Ubuntu 18.04 or any distribution that uses systemd-resolved for its DNS client needs.

If you use Ubuntu 16.04, a different approach involving the dnsmasq-base configuration is required. There are instructions for this on the Internet; ask if you cannot find them.

Simos Xenitellis
https://blog.simos.info/

14 Jun 2018 6:32pm GMT

Ubuntu Podcast from the UK LoCo: S11E14.5 – Fourteen and a Half Pound Budgie - Ubuntu Podcast

This show was recorded in front of a live studio audience at FOSS Talk Live on Saturday 9th June 2018! We take you on a 40-year journey through our time trumpet, contribute to some open source projects for the first time, and discuss the outcomes.

It's Season 11 Episode 14.5 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this live show:

That's all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there's a topic you'd like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

14 Jun 2018 2:00pm GMT

Stephen Michael Kellat: Active Searching

I generally am not trying to shoot for terse blog posts. That being said, my position at work is getting increasingly untenable, since we're physically unable to accomplish our mission goals before funding runs out at 11:59:59 PM Eastern Time on September 30th. Conflicting imperatives were set, and frankly we're starting to hit the point where neither is getting accomplished regardless of how many warm bodies we throw at the problem. It isn't good either when my co-workers with any military experience are sounding out KBR, Academi, and Perspecta.

I'm actively seeking new opportunities. In lieu of a fancy resume in LaTeX, I put forward the relevant details at https://www.linkedin.com/in/stephenkellat/. I can handle LaTeX, though, as seen by the example here that has some copyright-restricted content stripped from it: http://erielookingproductions.info/saybrook-example.pdf.

Ideas for things I could do:

If your project/work/organization/endeavor/skunkworks is looking for a new team player I may prove a worthwhile addition. You more than likely could pay me more than my current employer does.

14 Jun 2018 2:00am GMT

13 Jun 2018

Timo Aaltonen: Status of Ubuntu Mesa backports

It's been quite a while since the last post about Mesa backports, so here's a quick update on where we are now.

Ubuntu 18.04 was released with Mesa 18.0.0, which was built against libglvnd. This complicates backporting Mesa to 16.04, because the packaging has changed due to libglvnd and would break LTS->LTS upgrades without certain package updates.

So we first need to make sure 18.04 gets Mesa 18.0.5 (the last of the series, so no version bumps are expected until the backport from 18.10), along with an updated libglvnd that bumps the Breaks/Replaces on old package versions. This ensures that the xenial -> bionic upgrade will go smoothly once 18.0.5 is backported to xenial, which will in fact be in -proposed soon.

What this also means is that the only release getting new Mesa backports via the x-updates PPA from now on is 18.04. And I've pushed Mesa 18.1.1 there today, enjoy!
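
If you want to pick that up on an 18.04 machine, enabling the PPA and upgrading should be all it takes (a sketch using the usual commands; ppa:ubuntu-x-swat/x-updates is the x-updates PPA location at the time of writing):

$ sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
$ sudo apt update
$ sudo apt full-upgrade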

13 Jun 2018 1:08pm GMT

12 Jun 2018

Stuart Langridge: Little community conferences

This last weekend I was at FOSS Talk Live 2018. It was fun. And it led me into various thoughts of how I'd like there to be more of this sort of fun in and around the tech community, and how my feelings on success have changed a bit …

12 Jun 2018 9:07am GMT

11 Jun 2018

The Fridge: Ubuntu Weekly Newsletter Issue 531

Welcome to the Ubuntu Weekly Newsletter, Issue 531 for the week of June 3 - 9, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

11 Jun 2018 9:56pm GMT

Jono Bacon: Closed Source and Ethics: Good, Bad, Or Ugly?

Recently the news broke that Microsoft are acquiring GitHub. Effusive opinions flowed from all directions: some saw the acquisition as a sensible fit for Microsoft to better support developers, and some saw it as a tyrant getting their grubby fingers on open source's ecosystem.

I am thrilled for Microsoft and GitHub for many reasons, and there will be a bright future ahead because of it, but I have been thinking more about the reaction some of the critics have had to this, and why.

I find it fascinating that there still seems to be a deep-seated discomfort in some about Microsoft and their involvement in open source. I understand that this is for historical reasons, and many moons ago Microsoft were definitely on the offensive against open source. I too was critical of Microsoft and their approach back in those days. I may have even said 'M$' instead of 'MS' (ugh.)

Things have changed though. Satya Nadella, their CEO, has had a profound impact on the company: they are a significant investor and participant in open source across a multitude of open source projects, they hire many open source developers, run their own open source projects (e.g. VSCode), and actively sponsor and support many open source conferences, events, and initiatives. I know many people who work at Microsoft and they love the company and their work there. These are not microserfs: they are people like you and me.

Things have changed, and I have literally never drunk Kool-Aid, this or any other type. Are they perfect? No, but they don't claim to be. Is the Microsoft of today a radically different company from the Microsoft of the late nineties? No doubt.

Still though, this cynicism exists in some. Some see them as a trojan horse and ask whether we can really trust them.

A little while ago I had a discussion with someone who was grumbling about Microsoft. After poking around his opinion, what shook out was that his real issue was not with Microsoft's open source work (he was supportive of this), but with the fact that they still produce proprietary software and use software patents in divisions such as Windows and Office.

Put bluntly, he believed Microsoft are ethically unfit as a company because of these reasons, and these reasons were significant enough to diminish their open source work almost entirely.

Ethics?

Now, I am always fascinated when people use the word "ethics" in a debate. Often it smacks of holier-than-thou hyperbole as opposed to an objective assessment of what is actually right and wrong. Also, it seems that when some bring up "ethics" the discussion takes a nosedive and those involved become increasingly uninterested in other opinions (as I am sure we will see beautifully illustrated in the comments on this post 😉 )

In this case though, I think ethics explains a lot about the variance of views on this and why we should seek to understand those who differ with us. Let me explain why.

Many of the critics of proprietary software are people who believe that it is ethically unsound. They believe that the production and release of proprietary software is a fundamentally pernicious act; that it is harmful to society and the individuals within it.

I have spent my entire career, the last 20 years, working in the open source world. I have run a number of open source communities, some large, some small. I am a live-and-let-live kind of guy, and I have financially supported organizations I don't 100% agree with but who I think do interesting work. This includes the Free Software Foundation, Software Freedom Conservancy, and the EFF; I have a close relationship with the Linux Foundation, and I have worked with a number of companies on all sides of the field. Without wishing to sound like an egotistical clod, I believe I have earned my open source stripes.

Here's the thing though, and some of you won't like this: I don't believe proprietary software is unethical. Far from it.

Clearly murder, rape, human trafficking, child abuse, and other despicable acts are unethical, but I also consider dishonesty, knowingly lying, taking advantage of people, and similar indiscretions to be unethical. I am not an expert in ethics and I don't claim to be a perfectly ethical person, but by my reasoning unethical acts involve a power imbalance forced on people without their consent.

Within my ethical code, software doesn't get a look-in. Not even close.

I don't see proprietary software as a power imbalance. Sure, there are very dominant companies with proprietary platforms that people need to use (such as at your employer), and there are companies who have monopolies and tremendous power imbalances in the market. My ethical objection there though is with the market, not with the production of closed source software.

Now, before some of you combust, let me be clear on this: I am deeply passionate about open source and free software, and I do believe that proprietary software is sub-optimal in many situations. Heck, at least 60% of my clients are companies I am working with to build and deliver open source workflows.

In many situations, open source provides a much better model for collaboration, growth, security, community development, and other elements. Open source provides an incredible environment for people to shine: our broader open source ecosystem is littered with examples of under-represented groups doing great work and building fantastic careers and reputations. Open source and free software is one of the most profound technological revolutions, and it will generate great value and goodwill for many years to come.

Here lies the rub though: when I look at a company that ships proprietary products, I don't see an unethical company, I see a company that has chosen a different model. I don't believe the people working there are evil, that they are doing harm, and that they have mendacious intent. Is their model of building software sub-optimal? Probably, but it needs further judgement: open source clearly works in some areas (e.g. infrastructure software), but has struggled to catch on commercially in other areas (e.g. consumer software).

Put simply, open source does not guarantee success and proprietary software does not guarantee evil.

Be Productive

Throughout the course of my career I have always tried to understand other people's views and build relationships even if we see things differently.

As an example, earlier I mentioned I have financially supported the Free Software Foundation and Software Freedom Conservancy. Over the years I have had my disagreements with both RMS and Bradley Kuhn, largely based on this different perspective to the ethics of software, but I respect that they come from a different position. I don't believe they are "wrong" in their views. I believe the position they come from is different to mine. Let a thousand roses bloom: produce an ecosystem in which everyone can play a role and the best ideas will generally play out.

What is critical to me is taking a decent approach to this.

We don't get anywhere by labelling those who work at or run companies with proprietary products as evil and as part of a shadowy cabal. We also don't get anywhere by labelling those who do consider free software to be a part of their ethical code as "libtards" or something similarly derogatory. We need to learn more about other people's views rather than purely focusing on out-arguing people. Sure, have fun with other people's views, poke fun at them, but it should all be within the spirit of productive discourse.

Either way, no matter where you draw your line, or whatever your view is on the politique du jour, open source, community development, and open innovation is changing the world. We are succeeding, but we can do even greater work if we build bridges, not firebomb them. Be nice, people.

11 Jun 2018 9:39pm GMT