25 Apr 2017
Over the past year, a change has emerged in the design team here at Canonical: we've started designing our websites and apps in public GitHub repositories, and therefore sharing the entire design process with the world.
One of the main things we wanted to improve was the design sign-off process, while giving developers clearer visibility of which design was the final one among numerous iterations and inconsistently labelled files and folders.
Here is the process we developed and have been using on multiple projects.
Design work items are initiated by creating a GitHub issue on the design repository relating to the project. Each project consists of two repositories: one for the code base and another for designs. The work item issue contains a short descriptive title followed by a detailed description of the problem or feature.
Code block styling from https://github.com/ubuntudesign/vanilla-design/issues/12
Once the designer has created one or more designs to present, they upload them to the issue with a description. Each image is titled with a version number to make it easy to reference in subsequent comments.
Whenever the designer updates the GitHub issue everyone who is watching the project receives an email update. It is important for anyone interested or with a stake in the project to watch the design repositories that are relevant to them.
The designer can continue to iterate on the task safe in the knowledge that everyone can see the designs in their own time and provide feedback if needed. The feedback that comes in at this stage is welcomed, as early feedback is usually better than late.
As iterations of the design are created, the designer simply adds them to the existing issue with a comment describing the changes they made and any feedback from review meetings.
Table with actions design from MAAS project
When the design is finalised a pull request is created and linked to the GitHub issue, by adding "Fixes #111" (where #111 is the number of the original issue) to the pull request description. The pull request contains the final design in a folder structure that makes sense for the project.
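As a concrete sketch of that final step, the commands below show roughly what it looks like with plain git. The folder structure, file names and branch name are invented for illustration; only the "Fixes #111" convention comes from the process itself.

```shell
# Hypothetical design repository: the folder names and asset file are
# placeholders; "Fixes #111" is the GitHub keyword that links the
# pull request to the design issue so merging closes it.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "designer@example.com"
git config user.name "Example Designer"

# Put the final design into a folder structure that suits the project.
mkdir -p code/patterns/code-block
touch code/patterns/code-block/code-block-v4.png   # stand-in for the exported asset

git checkout -qb final-code-block-design
git add code/patterns/code-block
git commit -qm "Add final code block design (Fixes #111)"
git log --oneline
```

On GitHub the "Fixes #111" line would normally go in the pull request description rather than the commit message; either location closes the linked issue when the pull request is merged.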
Just like with code, the pull request is then approved by another designer or the person with the final say. This may seem like an extra step, but it allows another person to look through the issue and make sure the design meets the design goal. On smaller teams, this pull request can be approved by a stakeholder or developer.
Once the pull request is approved it can be merged. This will close and archive the issue and add the final design to the code section of the design repository.
If all designers and developers of a project subscribe to the design repository, they will be included in the iterative design process with plenty of email reminders. This increases the visibility of designs in progress to stakeholders, developers and other designers, allowing for wider feedback at all stages of the design process.
Another benefit of this process is having a full history of decisions made and the evolution of a design all contained within a single page.
If your project is open source, this process automatically makes your designs available to your community or anyone who is interested in the product. This means that anyone who wants to contribute to the project has access to the same information and assets as the team members.
The code section of the design repository becomes the home for all signed off designs. If a developer is ever unsure as to what something should look like, they can reference the relevant folder in the design repository and be confident that it is the latest design.
Canonical is largely a company of remote workers, and conversations are not always documented, which means only some people are aware of the decisions made and the discussions behind them. This design process has helped with that issue, as designs and discussions all live in a single place, with clearly laid out emails for every change sent to anyone who may be interested.
This process has helped our team improve velocity and transparency. Is this something you've considered or have done in your own projects? Let us know in the comments, we'd love to hear of any way we can improve the process.
25 Apr 2017 5:27pm GMT
The Free Software Foundation of Europe has just completed the process of electing a new fellowship representative to the General Assembly (GA) and I was surprised to find that out of seven very deserving candidates, members of the fellowship have selected me to represent them on the GA.
I'd like to thank all those who voted, the other candidates and Erik Albers for his efforts to administer this annual process.
Please consider becoming an FSFE fellow or donor
The FSFE runs on the support of both volunteers and financial donors, including organizations and individual members of the fellowship program. The fellowship program is not about money alone; it is an opportunity to become more aware of and involved in the debate about technology's impact on society, for better or worse. Developers, users and any other supporters of the organization's mission are welcome to join, here is the form. You don't need to be a fellow or pay any money to be an active part of the free software community, and FSFE events generally don't exclude non-members. Nonetheless, becoming a fellow gives you a stronger voice in processes such as this annual election.
Attending OSCAL'17, Tirana
During the election period, I promised to keep on doing the things I already do: volunteering, public speaking, mentoring, blogging and developing innovative new code. During May I hope to attend several events, including OSCAL'17 in Tirana, Albania on 13-14 May. I'll be running a workshop there on the Debian Hams blend and Software Defined Radio. Please come along and encourage other people you know in the region to consider attending.
What is your view on the Fellowship and FSFE structure?
Several candidates made comments about the Fellowship program and the way individual members and volunteers are involved in FSFE governance. This is not a new topic, and debate about it is very welcome; I would be particularly interested to hear any concerns or ideas for improvement that people may contribute. One of the best places to share these ideas is the FSFE's discussion list.
In any case, the fellowship representative cannot single-handedly overhaul the organization. I hope to be a constructive part of the team, and that whenever my term comes to an end, the organization and the free software community in general will be stronger and happier in some way.
25 Apr 2017 12:57pm GMT
Welcome to the Ubuntu Weekly Newsletter. This is issue #505 for the weeks April 10 - 23, 2017, and the full version is available here.
In this issue we cover:
- Ubuntu 17.04 (Zesty Zapus) released
- A new vantage point
- Ubuntu Membership Board call for nominations
- Welcome New Members and Developers
- Ubuntu Stats
- LoCo Events
- Alan Pope: My Ubuntu 16.04 GNOME Setup
- Kubuntu Team: KDE PIM update for Zesty available for testers
- Ubuntu Cloud News
- Canonical News
- In The Blogosphere
- Featured Audio and Video
- Weekly Ubuntu Development Team Meetings
- Upcoming Meetings and Events
- Updates and Security for 12.04, 14.04, 16.04, and 16.10
- And much more!
This issue of the Ubuntu Weekly Newsletter is brought to you by:
- Simon Quigley
- Chris Guiver
- Jim Connett
- And many others
Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.
25 Apr 2017 2:31am GMT
24 Apr 2017
Fragmentation is the nature of the beast in the IoT space: a variety of non-interoperable protocols, devices and vendors are the natural result of years of evolution, especially in the industrial space, where traditional standardisation processes and proprietary implementations have been the norm. The slow pace of their progress makes them a liability for the burgeoning future of IoT. For these reasons, many organisations are taking action to change the legacy IoT mode of operations in the quest for accelerated innovation and improved efficiencies.
To aid this progress, today, the Linux Foundation has announced a new open source software project called the EdgeX Foundry. The aim is to create an open framework and unify the marketplace to build an ecosystem of companies offering plug and play components on IoT edge solutions. The Linux Foundation has gathered over 50 companies to be the founding members of this project and Canonical is proud to be one of these.
Here at Canonical, we have been pushing for open source approaches to IoT fragmentation. Last year's introduction of snaps is one example of this - the creation of a universal Linux packaging format to make it easy for developers to manage the distribution of their applications across devices, distros and releases. They are also safer to run and faster to install. Looking forward, we want to see snaps as the default format across the board to work on any distribution or device from IoT to desktops and beyond.
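To give a flavour of the format, here is a minimal, hypothetical snapcraft.yaml; the snap name, part source and command are invented for illustration and not tied to any real Canonical package:

```yaml
name: hello-iot            # hypothetical snap name
version: '1.0'
summary: Minimal example snap
description: |
  A toy example showing the shape of a snapcraft.yaml. The resulting
  snap installs and runs unchanged across snap-enabled distros.
grade: stable
confinement: strict        # snaps run confined, part of why they are safer to run

apps:
  hello-iot:
    command: bin/hello     # assumes the part below provides bin/hello

parts:
  hello:
    plugin: dump           # copies the source directory into the snap
    source: ./src          # hypothetical local directory containing bin/hello
```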
Just like snaps, the EdgeX framework is designed to run on any operating system or hardware. It can quickly and easily deliver interoperability between connected devices, applications and services across a wide range of use cases. Fellow founding member, Dell, is seeding EdgeX Foundry with its FUSE source code base consisting of more than a dozen microservices and over 125,000 lines of code.
Adopting an open source edge software platform benefits the entire IoT ecosystem, incorporating the system integrators, hardware manufacturers, independent software vendors and the end customers themselves who are deploying IoT edge solutions. The project is also collaborating with other relevant open source projects and industry alliances to further ensure consistency and interoperability across IoT. These include the Cloud Foundry Foundation, EnOcean Alliance and ULE Alliance.
The EdgeX platform will be on display at the Hannover Messe in Germany from April 24th-28th 2017. Head to the Dell Technologies booth in Hall 8, Stand C24 to see the main demo.
24 Apr 2017 2:09pm GMT
- City Network joins the Ubuntu Certified Public Cloud (CPC) programme
- First major CPC Partner in the Nordics
City Network, a leading European provider of OpenStack infrastructure-as-a-service (IaaS), today joined the Ubuntu Certified Public Cloud programme. Through its public cloud service 'City Cloud', companies across the globe can purchase server and storage capacity as needed, paying only for the capacity they use and leveraging the flexibility and scalability of the OpenStack platform.
With dedicated and OpenStack-based City Cloud nodes in the US, Europe and Asia, City Network recently launched in Dubai. As such, they are now the first official Ubuntu Certified Public Cloud in the Middle East offering a pure OpenStack-based platform running on Ubuntu OpenStack. Dubai has recently become the co-location and data center location of choice for the Middle East, as cloud, IoT and digitization see massive uptake and market need from the public sector, enterprises and SMEs in the region.
City Network provides public, private and hybrid cloud solutions based on OpenStack from 27 data centers around the world. Through its industry specific IaaS, City Network can ensure that their customers can comply with demands originating from specific laws and regulations concerning auditing, reputability, data handling and data security such as Basel and Solvency.
Ubuntu lovers on City Cloud, from Stockholm to Dubai to Tokyo, will now be able to use official Ubuntu images, always stable and with the latest OpenStack release included, to run VMs and servers on their favourite cloud provider. Users of other distros on City Cloud can also now move to Ubuntu, the no. 1 cloud OS, and opt in to the Ubuntu Advantage support offering, which helps leading organisations around the world to manage their Ubuntu deployments.
"The disruptions of traditional business models and the speed in digital innovations are key drivers for the great demand in open and flexible IaaS across the globe. Therefore, I am very pleased that we are now entering the Ubuntu Certified Public Cloud program, adding yet another opportunity for our customers to run their IT-infrastructure on an open, scalable and flexible platform," said Johan Christenson, CEO and founder of City Network.
"Canonical is passionate about bringing the best Ubuntu user experience to users of every public cloud, but is especially pleased to have an OpenStack provider such as City Cloud offering Ubuntu, the world's most widely used guest Linux," said Udi Nachmany, Head of Public Cloud, Canonical. "City Cloud is known for its focus on compliance, and will now bring their customers additional choice for their public infrastructure, with an official, secure, and supportable Ubuntu experience."
Ubuntu Advantage offers enterprise-grade SLAs for business-critical workloads, access to our Landscape systems management tool, the Canonical Livepatch Service for security vulnerabilities, and much more, all available from buy.ubuntu.com.
To start using Ubuntu on the City Cloud Infrastructure please visit https://www.citycloud.com
24 Apr 2017 9:04am GMT
23 Apr 2017
As you may know, Ubuntu Membership is a recognition of significant and sustained contribution to Ubuntu and the Ubuntu community. To this end, the Community Council recruits from our current member community for the valuable role of reviewing and evaluating the contributions of potential members to bring them on board or assist with having them achieve this goal.
We have seven board members whose terms are expiring, which means we need to do some restaffing of the Membership Board.
We have the following requirements for nominees:
- be an Ubuntu member (preferably for some time)
- be confident that you can evaluate contributions to various parts of our community
- be committed to attending the membership meetings
- broad insight into the Ubuntu community at large is a plus
Additionally, those sitting on membership boards should have a proven track record of activity in the community. They have shown themselves over time to be able to work well with others and display the positive aspects of the Ubuntu Code of Conduct. They should be people who can discern character and evaluate contribution quality without emotion while engaging in an interview/discussion that communicates interest, a welcoming atmosphere, and which is marked by humanity, gentleness, and kindness. Even when they must deny applications, they should do so in such a way that applicants walk away with a sense of hopefulness and a desire to return with a more complete application rather than feeling discouraged or hurt.
To nominate yourself or somebody else (please confirm they wish to accept the nomination and state that you have done so), please send a mail to the membership boards mailing list (ubuntu-membership-boards at lists.ubuntu.com). You will want to include some information about the nominee, a Launchpad profile link, and which time slot (20:00 or 22:00) the nominee will be able to participate in.
We will be accepting nominations through Friday May 26th at 12:00 UTC. At that time all nominations will be forwarded to the Community Council who will make the final decision and announcement.
Thanks in advance to you all, and for the dedication everybody has put into their roles as board members.
Originally posted to the ubuntu-news-team mailing list on Sun Apr 23 20:20:38 UTC 2017 by Michael Hall
23 Apr 2017 8:30pm GMT
One of the best things about making software collaboratively is the translations. Sure, I could make a UML diagramming tool or whatever all on my own, but it's better if I let lots of other people help out. One of the best crowd-sourcing features of open community development is that you get translated into many popular and obscure languages, which it would cost a fortune to pay some company to do.
When KDE was monolithic, it shipped translation files in separate kde-l10n tars, so users would only have to install the tar for their language and not waste disk space on all the other languages. This didn't work great, because it's faffy for people to work out that they need to install it, and it doesn't help with all the other software on their system. In Ubuntu we did something similar: we extracted all the translations and put them into translation packages. Doing it at the distro level makes more sense than at the collection-of-things-that-KDE-ships level, but it still has problems when you install updated software. So KDE has been moving to shipping the translations along with the individual application or library, which makes sense, and it's not like the disk space from the unused languages is excessive.
So when KDE neon came along, we had translations for KDE Frameworks and KDE Plasma straight away, because those are included in the tars. But KDE Applications still made separate kde-l10n tars, and we quietly ignored them in the hope something better would come along, which, pleasingly, it now has. KDE Applications 17.04 now ships translations in the tars for everything that uses Frameworks 5 (i.e. the stuff we care about in neon), so KDE neon User Editions now include translations for KDE Applications too. Not only that, but Harald has worked his genius and turned the releaseme tool into a library, so KDE neon's builder can use it to extract the same translation files into the Developer Edition packages; translators can then easily try out the Git master versions of apps to see which translations look missing or broken. There's even an x-test language which makes xxTextxx strings, so app developers can use it to check whether any strings are untranslated in their applications.
The old kde-l10n packages in the Ubuntu archive would have some file clashes with the in-tar translations which would often break installs in non-English languages (I got complaints about this but not too many which makes me wonder if KDE neon attracts the sort of person who just uses their computer in English). So I've built dummy empty kde-l10n packages so you can now install these without clashing files.
Still plenty to do: docs aren't in the Developer Edition builds, and System Settings needs some code to provide a UI for installing locales and languages of the base system, which currently needs to be done by hand if it wasn't done at install time (apt install language-pack-es). But at last another important part of KDE's software is handled directly by KDE rather than hoping a third party will do the right thing, and trying it out is pleasingly trivial.
23 Apr 2017 1:00pm GMT
22 Apr 2017
I gave a talk in the main keynote room about our educational programme, in which I explained our mission and how we intend to achieve it.
Even if you saw my talk at OSDC 2011, I recommend that you watch this one. It is much improved and contains new and updated material. The YouTube version is above, but a higher quality version is available for download from Linux Australia.
The references for this talk are on our development wiki.
Here's a better version of the video I played near the beginning of my talk:
I should start by pointing out that OLPC is by no means a niche or minor project. XO laptops are in the hands of 8000 children in Australia, across 130 remote communities. Around the world, over 2.5 million children, across nearly 50 countries, have an XO.
Investment in our Children's Future
The key point of my talk is that OLPC Australia have a comprehensive education programme that highly values teacher empowerment and community engagement.
The investment to provide a connected learning device to every one of the 300 000 children in remote Australia is less than 0.1% of the annual education and connectivity budgets.
For low socio-economic status schools, the cost is only $80 AUD per child. Sponsorships, primarily from corporates, allow us to subsidise most of the expense (you too can donate to make a difference). Also keep in mind that this is a total cost of ownership, covering the essentials like teacher training, support and spare parts, as well as the XO and charging rack.
While our principal focus is on remote, low socio-economic status schools, our programme is available to any school in Australia. Yes, that means schools in the cities as well. The investment for non-subsidised schools to join the same programme is only $380 AUD per child.
Comprehensive Education Programme
We have a responsibility to invest in our children's education - it is not just another market. As a not-for-profit, we have the freedom and the desire to make this happen. We have no interest in vendor lock-in; building sustainability is an essential part of our mission. We have no incentive to build a dependency on us, and every incentive to ensure that schools and communities can help themselves and each other.
We only provide XOs to teachers who have been sufficiently enabled. Their training prepares them to constructively use XOs in their lessons, and is formally recognised as part of their professional development. Beyond the minimum 15-hour XO-certified course, a teacher may choose to undergo a further 5-10 hours to earn XO-expert status. This prepares them to be able to train other teachers, using OLPC Australia resources. Again, we are reducing dependency on us.
Training is conducted online after the teacher signs up to our programme and receives their XO. This scales well, letting us effectively train many teachers spread across the country. Participants in our programme are encouraged to join our online community to share resources and assist one another.
We also want to recognise and encourage children who have shown enthusiasm and aptitude, with our XO-champion and XO-mechanic certifications. Not only does this promote sustainability in the school and give invaluable skills to the child, it reinforces our core principle of Child Ownership. Teacher aides, parents, elders and other non-teacher adults have the XO-basics (formerly known as XO-local) course designed for them. We want the child's learning experience to extend to the home environment and beyond, and not be constrained by the walls of the classroom.
There's a reason why I'm wearing a t-shirt that says "No, I won't fix your computer." We're on a mission to develop a programme that is self-sustaining. We've set high goals for ourselves, and we are determined to meet them. We won't get there overnight, but we're well on our way. Sustainability is about respect. We are taking the time to show them the ropes, helping them to own it, and developing our technology to make it easy. We fundamentally disagree with the attitude that ordinary people are not capable enough to take control of their own futures. Vendor lock-in is completely contradictory to our mission. Our schools are not just consumers; they are producers too.
As explained by Jonathan Nalder (a highly recommended read!), there are two primary notions guiding our programme. The first is that the nominal $80 investment per child is just enough for a school to take the programme seriously and make them a stakeholder, greatly improving the chances for success. The second is that this is a schools-centric programme, driven from grassroots demand rather than being a regime imposed from above. Schools that participate genuinely want the programme to succeed.
Technology as an Enabler
Enabling this educational programme is the clever development and use of technology. That's where I (as Engineering Manager at OLPC Australia) come in. For technology to be truly intrinsic to education, there must be no specialist expertise required. Teachers aren't IT professionals, and nor should they be expected to be. In short, we are using computers to teach, not teaching computers.
The key principles of the Engineering Department are:
- Technology is an integral and seamless part of the learning experience - the pen and paper of the 21st century.
- To eliminate dependence on technical expertise, through the development and deployment of sustainable technologies.
- Empowering children to be content producers and collaborators, not just content consumers.
- Open platform to allow learning from mistakes… and easy recovery.
OLPC have done a marvellous job in their design of the XO laptop, giving us a fantastic platform to build upon. I think that our engineering projects in Australia have been quite innovative in helping to cover the 'last mile' to the school. One thing I'm especially proud of is our insistence on openness. We turn traditional systems administration practice on its head to completely empower the end-user. Technology that is deployed in corporate or educational settings is typically locked down to make administration and support easier. This takes control completely away from the end-user. They are severely limited in what they can do, and if something doesn't work as they expect, they are totally at the mercy of the admins to fix it.
In an educational setting this is disastrous - it severely limits what our children can learn. We learn most from our mistakes, so let's provide an environment in which children are able to safely make mistakes and recover from them. The software is quite resistant to failure, both at the technical level (being based on Fedora Linux) and at the user interface level (Sugar). If all goes wrong, reinstalling the operating system and restoring a journal (Sugar user files) backup is a trivial endeavour. The XO hardware is also renowned for its ruggedness and repairability. Less well-known are the amazing diagnostics tools, providing quick and easy indication that a component should be repaired or replaced. We provide a completely unlocked environment, with full access to the root user and the firmware. Some may call that dangerous, but I call that empowerment. If a child starts hacking on an XO, we want to hire that kid!
My talk features the case study of Doomadgee State School, in far-north Queensland. Doomadgee have very enthusiastically taken on board the OLPC Australia programme. Every one of the 350 children aged 4-14 have been issued with an XO, as part of a comprehensive professional development and support programme. Since commencing in late 2010, the percentage of Year 3 pupils at or above national minimum standards in numeracy has leapt from 31% in 2010 to 95% in 2011. Other scores have also increased. Think what you may about NAPLAN, but nevertheless that is a staggering improvement.
Most important of all, One Laptop per Child Australia quite simply delivers results: the 5,000 students already engaged are showing impressive improvements in learning, closing the gap generally and lifting access and participation rates in particular.
We are also engaged in longitudinal research, working closely with respected researchers to have a comprehensive evaluation of our programme. We will release more information on this as the evaluation process matures.
Join our mission
Schools can register their interest in our programme on our Education site.
Our Prospectus provides a high-level overview.
For a detailed analysis, see our Policy Document.
If you would like to get involved in our technical development, visit our development site.
Many thanks to Tracy Richardson (Education Manager) for some of the information and graphics used in this article.
22 Apr 2017 12:28pm GMT
Adam Holt and I were interviewed last night by the Australian Council for Computers in Education Learning Network about our not-for-profit work to improve educational opportunities for children in the developing world.
Australia poses some of its own challenges. As a country that is 90% urbanised, the remaining 10% are scattered across vast distances. The circumstances of these communities often share both developed and developing world characteristics. We developed the One Education programme to accommodate this.
These lessons have been developed further into Unleash Kids, an initiative that we are currently working on to support the community of volunteers worldwide and take the movement to the next level.
22 Apr 2017 12:14pm GMT
21 Apr 2017
A fair amount of things have happened since I last blogged about something other than music. First of all, we did actually hold a Debian Diversity meeting. It was quite nice, though fewer people were around than hoped for, and I attribute that to some extent to the trolls and haters who defaced the titanpad page for the agenda and destroyed the doodle entry for settling on a date for the meeting. They even tried to troll my blog with comments, and while I have approved controversial responses in the past, those went over the line of being acceptable and didn't carry any relevant content.
One response that I didn't approve but kept in my mailbox is even giving me strength to carry on. There is one sentence in it that speaks to me:
"Think you can stop us? You can't, you stupid b*tch. You have ruined the Debian community for us." The rest of the message is of no further relevance, but even though I can't take credit for being responsible for that, I'm glad to be perceived as part of ruining the Debian community for intolerant and hateful people.
A lot of other things have happened since, too, mostly locally here in Vienna: several queer empowering groups have formed around me; some of them existed already, some formed with my help. We now have several great regular meetings for non-binary people, for queer polyamorous people (about which we gave an interview), a queer playfight (I might explain that concept another time), a polyamory discussion group, two bi-/pansexual groups and a queer-feminist choir, and there will be a European Lesbian* Conference in October, where I'm helping with the organization …
… and on June 21st I'll finally receive the keys to my flat in Que[e]rbau Seestadt. I'm sooo looking forward to it. It will be part of the Let me come Home experience that I'm currently in. Another part of that experience is that I've started officially changing my name (and gender marker). I had my first appointment at the corresponding bureau, and I hope that it won't take too long, because I have to get my papers in time to book my flight to Montreal, and at some point in the process my current passport won't contain correct data anymore. So for the people who have it in their signing policy to see government IDs, this might be your chance to finally sign my key.
I plan to do a diversity BoF at debconf where we can speak more directly on where we want to head with the project. I hope I'll find the time to do an IRC meeting beforehand. I'm just uncertain how to coordinate that one to make it accessible for interested parties while keeping the destructive trolls out. I'm open for ideas here.
21 Apr 2017 8:01am GMT
Since we missed by a whisker getting updated PIM (kontact, kmail, akregator, kgpg etc.) into Zesty for release day, and we believe it is important that our users have access to this significant update, packages are now available for testers in the Kubuntu backports landing PPA.
While we believe these packages should be relatively issue-free, please bear in mind that they have not been tested as comprehensively as those in the main Ubuntu archive.
Testers should be prepared to troubleshoot and, hopefully, report issues that may occur. Please provide feedback on our mailing list, IRC, or optionally via social media.
After a period of testing and verification, we hope to move this update to the main backports ppa.
You should have some command line knowledge before testing.
Reading about how to use ppa-purge is also advisable.
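For reference, a typical rollback with ppa-purge looks like the following; it downgrades everything installed from the PPA back to the versions in the Ubuntu archive (requires sudo, so run with care):

```shell
# Install ppa-purge, then revert all packages that came from the
# backports-landing PPA to the versions in the main Ubuntu archive.
sudo apt-get install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/backports-landing
```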
How to test KDE PIM 16.12.3 for Zesty:
Testing packages are currently in the Kubuntu Backports Landing PPA.
sudo add-apt-repository ppa:kubuntu-ppa/backports-landing
sudo apt-get update
sudo apt-get dist-upgrade
1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
21 Apr 2017 1:31am GMT
20 Apr 2017
We spend some time discussing one rather important topic in the news and that's the announcement of Ubuntu's re-focus from mobile and convergence to the cloud and Internet of Things.
In this week's show:
- We discuss what we've been up to recently:
- We discuss the news:
- Growing Ubuntu for cloud and IoT, rather than phone and convergence.
- The first meeting between the Ubuntu GNOME team, the Ubuntu Desktop team and interested community members took place.
- We discuss the community news:
- This week's cover image is taken from Wikimedia.
That's all for this week! If there's a topic you'd like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to firstname.lastname@example.org or Tweet us or comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.
20 Apr 2017 2:00pm GMT
Over the past 6 months I've been running static analysis on linux-next with CoverityScan on a regular basis (to find new issues and fix some of them) as well as keeping a record of the defect count.
Since the beginning of September over 2000 defects have been eliminated by a host of upstream developers, and the steady downward trend of outstanding issues is good to see. A proportion of the outstanding defects are false positives, or issues where the code is being overly defensive, for example, bounds checking for conditions that can never happen. Considering there are millions of lines of code, the defect rate is about average for such a large project.
I plan to keep the static analysis running long term and I'll try and post stats every 6 months or so to see how things are progressing.
20 Apr 2017 12:47pm GMT
18 Apr 2017
For a long while my personal blog has been running WordPress. Every so often I've looked at other options but never really been motivated to change it, because everything worked, and it was not too much effort to manage.
Then I got 'hacked'.
I host my blog on a Bitfolk VPS. I had no idea my server had been compromised until I got a notification on Boxing Day from the lovely Bitfolk people. They informed me that there was a deluge of spam originating from my machine, so it was likely compromised. Their standard procedure is to shutdown the network connection, which they did.
At this point I had access to a console to diagnose and debug what had happened. My VPS had multiple copies of WordPress installed, for various different sites. It looks like I had an old theme or plugin on one of them, which the attackers used to splat their evil doings on my VPS filesystem.
It being the Christmas holidays, I didn't really want to spend the family time doing lots of forensics or system admin. I had full backups of the machine, so I requested that Bitfolk just nuke the machine from orbit and I'd start fresh.
Bitfolk have a really handy self-service provisioning tool for just these eventualities. All I needed to do was ssh to the console provided and follow the instructions on the wiki, after the network connection was re-enabled, of course.
However, during the use of the self-serve installer we uncovered a bug and a billing inconsistency. Andy at Bitfolk spent some time on Boxing Day to fix both the bug and the billing glitch, and by midnight that night I'd had a bank-transfer refund! He also debugged some DNS issues for me too. That's some above-and-beyond level of service right there!
Once I'd got a clean Ubuntu 16.04 install done, I had a not-so-long think about what I wanted to do for hosting my blog going forward. I went for Nikola - a static website generator. I'd been looking at Nikola on and off since talking about it over a beer with Martin in Heidelberg.
As I'd considered this before, I was already a little prepared. Nikola supports importing data from an existing WordPress install. I'd already exported out my WordPress posts some weeks ago, so importing that dump into Nikola was easy, even though my server was offline.
The things that sold me on Nikola were pretty straightforward.
Being static HTML files on my server, I didn't have to worry about php files being compromised, so I could take off my sysadmin hat for a bit, as I wouldn't have to do WordPress maintenance all the time.
Nikola allows me to edit offline easily too. So I can just open my text editor of choice and start bashing away some markdown (other formats are supported). Here you can see what it looks like when I'm writing a blog post in today's favourite editor, Atom. With the markdown preview on the right, I can easily see what my post is going to look like as I type. I imagine I could do this with WordPress too, sure.
Once posts are written I can easily preview the entire site locally before I publish. So I get two opportunities to spot errors, once in Atom while editing and previewing, and again when serving the content locally. It works well for me!
Nikola is configured easily by editing conf.py. In there you'll find documentation in the form of many comments to supplement the online Nikola Handbook. I set a few things like the theme, Disqus comments account name, and configuration of the Bitfolk VPS remote server where I'm going to host it. With ssh keys all set up, I configured Nikola to deploy using rsync over ssh.
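As an illustration, a conf.py fragment covering those settings might look like this; the option names are real Nikola settings, but every value below is a placeholder, not my actual configuration:

```python
# Illustrative conf.py fragment -- option names are real Nikola settings,
# but all values below are placeholders.
BLOG_TITLE = "My Blog"
THEME = "bootstrap3"
# Disqus comments: the shortname here is a placeholder account name.
COMMENT_SYSTEM = "disqus"
COMMENT_SYSTEM_ID = "example-shortname"
# Deploy the built site over rsync/ssh (host and path are placeholders).
DEPLOY_COMMANDS = {
    'default': [
        "rsync -rav --delete output/ user@example.net:/var/www/blog/",
    ]
}
```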
When I want to write a new blog post, here's what I do.
cd popey.com/site
nikola new_post -t "Switching from WordPress to Nikola" -f markdown
I then edit the post at my leisure locally in Atom, with the Markdown preview enabled.
Once I'm happy with the post I'll build the site:
nikola build
I can then have Nikola serve the pages up on my laptop with:
nikola serve
This starts a webserver on port 8000 on my local machine, so I can check the content in various browsers, and on mobile devices should I want to.
Obviously I can loop through those few steps over and again, to get my post right. Finally, once I'm ready to publish, I just issue:
nikola deploy
This sends the content to the remote host over rsync/ssh and it's live!
Nikola is great! The documentation is comprehensive, and the maintainers are active. I made a mistake in my config and immediately got a comment from the upstream author to let me know what to do to fix it!
I'm only using the bare bones features of Nikola, but it works perfectly for me. Easy to post & maintain and simple to deploy and debug.
Have you migrated away from WordPress? What did you use? Let me know in the comments below.
18 Apr 2017 12:00pm GMT
I thought I was being smart. By not buying through AVADirect I wasn't going to be using an insecure site to purchase my new computer.
For the curious, I ended up purchasing through eBay (A rating) and Newegg (A rating) a new Ryzen (very nice chip!) based machine that I assembled myself. The computer is working mostly OK, but has some stability issues. A BIOS update comes out on the MSI website promising some stability fixes, so I decide to apply it.
The page that links to the download is HTTPS, but the actual download itself is not.
I flash the BIOS and now appear to have a brick.
Given the poor security, and now wanting a motherboard with a more reliable BIOS (currently I need to send the board back at my expense for an RMA), I looked at other Micro ATX motherboards, starting with a Gigabyte board, which has even fewer pages using any HTTPS, and the ones that do are even worse:
Unfortunately a survey of motherboard vendors indicates MSI failing with Fs might put them in second place. Most just have everything in the clear, including passwords. ASUS clearly leads the pack, but no one protects the actual firmware/drivers you download from them.
| Vendor | Main Website | Support Site | RMA Process | Forum | Download Site | Actual Download |
| --- | --- | --- | --- | --- | --- | --- |
| AsRock | Plain text | Plain text | Plain text | Plain text | | |
| Gigabyte (login site is F) | Plain text | Plain text | Plain text | Plain text | Plain text | Plain text |
| EVGA | Plain text default/A- | Plain text | Plain text | A | Plain text | Plain text |
| ASUS | A- | A- | B | Plain text default/A | A- | Plain text |
| BIOSTAR | Plain text | Plain text | Plain text | n/a? | Plain text | Plain text |
A quick glance indicates that vendors that make full systems use more security (ASUS and MSI being examples of system builders).
We rely on the security of these vendors for most self-built PCs. We should demand HTTPS by default across the board. It's 2017 and a BIOS file is 8MB, cost hasn't been a factor for years.
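As a quick spot check, one could grep a saved copy of a vendor download page for plain-HTTP links; the HTML below is a made-up stand-in, not a real vendor page:

```shell
# Stand-in for a saved vendor support page (a real audit would fetch the page).
cat > support-page.html <<'EOF'
<a href="https://example.com/manual.pdf">Manual</a>
<a href="http://example.com/bios-update.zip">BIOS update</a>
EOF
# List plain-HTTP links; these downloads can be tampered with in transit.
# (The pattern will not match inside https:// URLs.)
grep -o 'http://[^"]*' support-page.html
```

Only the BIOS download is reported here, since the `http://` pattern cannot match within an `https://` URL.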
18 Apr 2017 12:50am GMT
17 Apr 2017
March was a busy month, so this monthly report is a little late. I worked two weekends, and I was planning my Easter holiday, so there wasn't a lot of spare time.
- Updated Dominate to the latest version and uploaded to experimental (due to the Debian Stretch release freeze).
- Uploaded the latest version of abcmidi (also to experimental).
- Pinged the bugs for reverse dependencies of pygoocanvas and goocanvas with a view to getting them removed from the archive during the Buster cycle.
- Asked for help on the Ubuntu Studio developers and users mailing lists to test the coming Ubuntu Studio 17.04 release ISO, because I would be away on holiday for most of it.
- Worked on ubuntustudio-controls, reverting it back to an earlier revision that Len said was working fine. Unfortunately, when I built and installed it from my ppa, it crashed. Eventually found my mistake with the bzr reversion, fixed it and prepared an upload ready for sponsorship. Submitted a Freeze Exception bug in the hope that the Release Team would accept it even though we had missed the Final Beta.
- Put a new power supply in an old computer that was kaput, and got it working again. Set up Ubuntu Server 16.04 on it so that I could get a bit more experience with running a server. It won't last very long, because it is a 32 bit machine, and Ubuntu will probably drop support for that architecture eventually. I used two small spare drives to set up RAID 1 & LVM (so that I can add more space to it later). I set up some Samba shares, so that my wife will be able to get at them from her Windows machine. For music streaming, I set up Emby Server. It would be great to see this packaged for Debian. I uploaded all of my photos and music for Emby to serve around the home (and remotely as well). Set up Obnam to back up the server to an external USB stick (temporarily until I set up something remote). Set up Let's Encrypt with the wonderful Certbot program.
- Did the Release Notes for Ubuntu Studio 17.04 Final Beta. As I was in Brussels for two days, I was not able to do any ISO testing myself.
- Measured up the new model railway layout and documented it in xtrkcad.
- Started learning Ansible some more by setting up ssh on all my machines so that I could access them with Ansible and manipulate them using a playbook.
- Went to the Open Source Days conference just down the road in Copenhagen. Saw some good presentations. Of interest for my previous work in the Debian GIS Team was a presentation from the Danish Municipalities on how they run projects using Open Source. I noted their use of Proj.4 and OSGeo. I was also pleased to see a presentation from Ximin Luo on Reproducible Builds, and introduced myself briefly after his talk (during the break).
- Started looking at creating a Django website to store and publish my One Name Study sources (indexes). Started by creating a library to list some of my recently read Journals. I will eventually need to import all the others I have listed in a csv spreadsheet that was originally exported from the commercial (Windows only) Custodian software.
Plan status from last month & update for next month
For the Debian Stretch release:
- Keep an eye on the Release Critical bugs list, and see if I can help fix any. - In Progress
- Package all the latest upstream versions of my Debian packages, and upload them to Experimental to keep them out of the way of the Stretch release. - In Progress
- Begin working again on all the new stuff I want packaged in Debian.
- Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. - Started
- Start testing & bug triaging Ubuntu Studio packages. - In progress
- Test Len's work on ubuntustudio-controls. - Done
- Do the Ubuntu Studio Zesty 17.04 Final Beta release. - Done
- Sort out the Blueprints for the coming Ubuntu Studio 17.10 release cycle.
- Give JMRI a good try out and look at what it would take to package it. - In progress
- Also look at OpenPLC for simulating the relay logic of real railway interlockings (i.e. a little bit of the day job at home involving free software - fun!). - In progress
17 Apr 2017 2:35pm GMT