03 Sep 2015
Find-a-Task is the Ubuntu community's job board for volunteers.
Introduced in January 2015, Find-a-Task shows fellow volunteers the variety of tasks and roles available.
The goal of Find-a-Task is for a volunteer, after exploring the Ubuntu Project, to land on a team or project's wiki page. They are interested, ready to join, and ready to start learning the skills and tools.
However, it only works if *you* use it, too.
Take a quick look, and see the variety of volunteer roles available. We have listings for many different skills and interests, including many non-technical tasks.
Is your team listed?
Hey teams, are you using Find-a-Task to recruit volunteers?
- Are your team roles listed?
- Are they accurate?
- Is your landing page welcoming and useful to a new volunteer?
When it's time to update your postings on the job board, simply jump into Freenode IRC: #ubuntu-community-team.
Gurus: Are you pointing Padawans toward it?
Find-a-Task is a great place to send new enthusiasts. No signup, no login, no questions. It's a great way to survey the roles available in the big, wide Ubuntuverse, and get new enthusiasts involved in a team.
It's also handy for experienced enthusiasts looking for a new challenge, of course.
- If you're active in the various forums, refer new enthusiasts to Find-a-Task.
- Add it to your signature.
- If you know a Find-a-Task success story, please share.
Ideas to increase usage of Find-a-Task are welcome.
Ideas on how to improve the tool itself are also welcome.
Please share your suggestions to improve Find-a-Task on the ubuntu-community-team mailing list.
03 Sep 2015 1:50am GMT
02 Sep 2015
This is the Jonathan Riddell™ IP Policy. It applies to all Jonathan's intellectual property in Ubuntu archives. Jonathan is one of the top 5 uploaders, usually the top 1 uploader, to Ubuntu compiling hundreds of packages in the Ubuntu archive. Further Jonathan reviews new and updated packages in the archive. Further Jonathan selects compiler defaults and settings for KDE and Qt and other packages in the Ubuntu archive. Further Jonathan builds and runs tests for Ubuntu packages in the archives. Further Jonathan Riddell™ is a trademark of Jonathan Riddell™ in Scotland, Catalunya and other countries; a trademark which is included in all packages edited by Jonathan Riddell™. Further Jonathan is the author of numerous works in the Ubuntu archive. Further Jonathan is the main contributor to the selection of software in Kubuntu. Therefore Jonathan has IP in the Ubuntu archive possibly including but not limited to copyright, patents, trademarks, sales marks, geographical indicators, database rights, compilation copyright, designs, personality rights and plant breeders rights. To deal with, distribute, modify, look at or smell Jonathan's IP you must comply with this policy.
Policy: give Jonathan a hug before using his IP.
If you want a licence for Jonathan's IP besides this one you must contact Jonathan first and agree one in writing.
Nothing in this policy shall be taken to override or conflict with free software licences already put on relevant works.
02 Sep 2015 4:54pm GMT
I'm receiving more requests for upload accounts to the Deb-o-Matic servers lately (yay!), but that means the resources need to be monitored and shared between the build daemons to prevent server lockups.
My servers are running systemd, so I decided to give systemd.resource-control a try. My goal was to assign lower CPU shares to the build processes (debomatic itself, sbuild, and all the related tools), in order to avoid blocking other important system services from being spawned when necessary.
I created a new slice, and set a lower CPU share weight:
$ cat /etc/systemd/system/debomatic.slice
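The contents of that file aren't reproduced here; a minimal sketch of what such a slice file could look like (the CPUShares=512 value is an assumption, mirroring the value applied to user.slice later in the post):

```ini
# /etc/systemd/system/debomatic.slice
[Unit]
Description=Slice for Deb-o-Matic build daemons

[Slice]
# The default CPUShares is 1024; 512 halves the relative CPU weight,
# so other services win when the CPU is contended.
CPUShares=512
```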
Then, I assigned the slice to the service unit file controlling the debomatic daemons by adding the Slice=debomatic.slice option under the [Service] section.
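For illustration, the relevant part of the service unit would then look like this (the unit file name is assumed):

```ini
# /etc/systemd/system/debomatic.service (excerpt)
[Service]
# Place all processes of this service into the debomatic slice
Slice=debomatic.slice
```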
That was not enough, though, as some processes were assigned to the user slice instead, which groups all the processes spawned by users:
This is probably because schroot spawns a login shell, and systemd considers it to belong to a different process group. So, I had to launch the command systemctl set-property user.slice CPUShares=512, so that all processes belonging to user.slice receive the same share as the debomatic ones. I consider this a workaround, and I'm open to suggestions on how to solve this issue properly :)
I'll try to explore more options in the coming days, so I can improve my knowledge of systemd a little bit more :)
02 Sep 2015 4:31pm GMT
Here's a summary of what the Launchpad team got up to in August.
- Webhook support for Git repositories is almost finished, and only needs a bit more web UI work (#1474071)
- The summary of merge proposal pages now includes a link to the merged revision, if any (#892259)
- Viewing individual comments on Git-based merge proposals no longer OOPSes (#1485907)
Our internal stakeholders in Canonical recently asked us to work on improving the ability to filter Launchpad mail using Gmail. The core of this was the "Include filtering information in email footers" setting that we added recently, but we knew there was some more to do. Launchpad's mail notification code includes some of the oldest and least consistent code in our tree, and so improving this has entailed paying off quite a bit of technical debt along the way.
- Bug notifications and package upload notifications now honour the "Include filtering information in email footers" setting (#1474071)
- Bug notifications now log an OOPS rather than crashing if the SMTP server rejects an individual message (#314420, #916939)
- Recipe build notifications now include an X-Launchpad-Archive header (#776160)
- Question notification rationales are now more consistent, including team annotations for subscribers (#968578)
- Package upload notifications now include X-Launchpad-Message-Rationale and X-Launchpad-Notification-Type headers, and have more specific footers (#117155, #127917)
Package build infrastructure
- Launchpad now supports building source packages that use Debian's new build profiles syntax, currently only with no profiles activated
- Launchpad can now build snap packages (#1476405), with some limitations; this is currently only available to a group of alpha testers, so let us know if you're interested
- Builders can now access Launchpad's Git hosting (HTTPS only) in the same way that they can access its Bazaar hosting
- All amd64/i386 builds now take place in ScalingStack, and the corresponding bare-metal builders have been detached pending decommissioning; some of the newer of those machines will be used to further expand ScalingStack capacity
- We have a new ScalingStack region including POWER8-based ppc64el builders, which is currently undergoing production testing; this will replace the existing POWER7-based builders in a few weeks, and also provide virtualised build capacity for ppc64el PPAs
- We've fixed a race condition that sometimes caused a user's first PPA to be published unsigned for a while (#374395)
- The project release file upload limit is now 1 GiB rather than 200 MiB (#1479441)
- We spent some more time supporting translations for the overlay PPA used for current Ubuntu phone images, copying a number of existing translations into place from before the point when they were redirected automatically
- Your user index page now has a "Change password" link (#1471961)
- Bug attachments are no longer incorrectly hidden when displaying only some bug comments (#1105543)
02 Sep 2015 1:04pm GMT
01 Sep 2015
Release Metrics and Incoming Bugs
Release metrics and incoming bug data can be reviewed at the following link:
The current CVE status can be reviewed at the following link:
Status: Stable, Security, and Bugfix Kernel Updates - Precise/Trusty/lts-utopic/Vivid
Status for the main kernels, until today:
- Precise - Verification & Testing
- Trusty - Verification & Testing
- lts-Utopic - Verification & Testing
- Vivid - Verification & Testing
Current opened tracking bugs details:
For SRUs, SRU report is a good source of information:
cycle: 16-Aug through 05-Sep
14-Aug Last day for kernel commits for this cycle
15-Aug - 22-Aug Kernel prep week.
23-Aug - 29-Aug Bug verification & Regression testing.
30-Aug - 05-Sep Regression testing & Release to -updates.
Status: Wily Development Kernel
We have rebased and uploaded Wily master-next branch to 4.2 final from upstream.
Important upcoming dates:
Thurs Sep 24 - Final Beta (~3 weeks away)
Thurs Oct 8 - Kernel Freeze (~5 weeks away)
Thurs Oct 15 - Final Freeze (~6 weeks away)
Thurs Oct 22 - 15.10 Release (~7 weeks away)
Open Discussion or Questions? Raise your hand to be recognized
No open discussion.
01 Sep 2015 5:25pm GMT
My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community, because it can give ideas to newcomers and it's one of the best ways to find volunteers to work with me on projects that matter to me.
This month I have been paid to work 6.5 hours on Debian LTS. In that time I did the following:
- Prepared and released DLA-301-1 fixing 2 CVE in python-django.
- Did one week of "LTS Frontdesk" with CVE triaging. I pushed 11 commits to the security tracker.
Apart from that, I also gave a talk about Debian LTS at DebConf 15 in Heidelberg and also coordinated a work session to discuss our plans for Wheezy. Have a look at the video recordings:
I attended DebConf 15 with great pleasure after having missed DebConf 14 last year. While I did not do lots of work there, I participated in many discussions and I certainly came back with a renewed motivation to work on Debian. That's always good.
For the concrete work I did during DebConf, I can only claim two schroot uploads to fix the lack of support for the new "overlay" filesystem that replaces "aufs" in the official Debian kernel, and some Distro Tracker work (fixing an issue that some people had when they were logged in via Debian's SSO).
While the numerous discussions I had during DebConf can't be qualified as "work", they certainly contribute to building up work plans for the future:
As a Kali developer, I attended multiple sessions related to derivatives (notably the Debian Derivatives Panel).
I was also interested in the "Debian in the corporate IT" BoF led by Michael Meskes (Credativ's CEO). He pointed out a number of problems that corporate users might have when they first consider using Debian, and we will try to do something about them. Expect further news and discussions on the topic.
Martin Kraff, Luca Filipozzi, and I had a discussion with the Debian Project Leader (Neil) about how to revive/transform Debian's Partner program. Nothing is fleshed out yet, but at least the process initiated by the former DPL (Lucas) is moving forward again.
Other Debian work
Sponsorship. I sponsored an NMU of pep8 by Daniel Stender as it was a requirement for prospector… which I also sponsored since all the required dependencies are now available in Debian. \o/
GPG issues with smartcard. Back from DebConf, when I wanted to sign some keys, I stumbled again upon the problem which makes it impossible for me to use my two smartcards one after the other without first deleting the stubs for the private key. It's not a new issue, but I decided that it was time to report it upstream, so I did: #2079 on bugs.gnupg.org. Some research helped me to find a way to work around the problem. Later in the month, after a dist-upgrade and a reboot, I was no longer able to use my smartcard as an SSH authentication key… again, it was already reported but there was no clear analysis, so I tried to do my own and added the results of my investigation in #795368. It looks like the culprit is pinentry-gnome3 not working when started by the gpg-agent, which is started before the D-Bus session. A simple fix is to restart the gpg-agent in the session… but I have no idea yet what the proper fix should be (letting systemd manage the graphical user session and start gpg-agent would be my first answer, but that doesn't solve the issue for users of other init systems, so it's not satisfying).
Distro Tracker. I merged two patches from Orestis Ioannou fixing some bugs tagged newcomer. There are more such bugs (I even filed two: #797096 and #797223), go grab them and do a first contribution to Distro Tracker like Orestis just did! I also merged a change from Christophe Siraut who presented Distro Tracker at DebConf.
I implemented in Distro Tracker the new authentication based on SSL client certificates that was recently announced by Enrico Zini. It works nicely, and this authentication scheme is far easier to support. Good job, Enrico!
tracker.debian.org broke during DebConf: it stopped being updated with new data. I tracked this down to a problem in the archive (see #796892). Apparently Ansgar Burchardt changed the set of compression tools used on some jessie repositories, replacing bz2 with xz. He dropped the old Packages.bz2 but missed some Sources.bz2 files, which were thus stale… and APT reported "Hashsum mismatch" on the uncompressed content.
Misc. I pushed some small improvements to my Salt formulas: schroot-formula and sbuild-formula. They will now auto-detect which overlay filesystem is available with the current kernel (previously "aufs" was hardcoded).
See you next month for a new summary of my activities.
01 Sep 2015 11:49am GMT
Last Thursday, the Unity 3D team announced experimental builds of the Unity editor for Linux.
This was quite exciting news, especially for me as a personal Unity 3D user. It was the perfect opportunity to implement install support in Ubuntu Make, and this is now available for download! The "experimental" label comes from the fact that it's experimental upstream as well: there is only one version out (and so no download section from which we can always fetch the latest) and no checksum support. We talked about it on upstream's IRC channel and will work with them on this in the future.
Of course, everything is, as usual, backed by tests to ensure we spot any issue.
Speaking of tests, this release also fixes Arduino download support, which broke due to upstream versioning scheme changes. This is where our heavy investment in tests really shines, as we could spot it before getting any bug reports!
Various more technical "under the hood" changes went in as well, to make contributors' lives easier. We have recently received even more excellent contributions (to be honest, it's starting to be hard for me to keep up with the load!); more on that next week, with nice goodies cooking up.
Our issue tracker is full of ideas and opportunities, and pull requests remain open for any issues or suggestions! If you want to be the next featured contributor and lend a hand, you can refer to this post with useful links!
01 Sep 2015 9:30am GMT
The design team members are constantly sharing interesting, fun, and weird links with each other, so we thought it might be a nice idea to share a selection of those links with everyone.
Here are the links that have been passed around during last month:
- Scaling Agile At Spotify
- Design Documentaries
- A Designer's Guide to Wearables
- PizzaTime: The Internet of (Fun) Things
- How Ashley Madison Onboards New Users
- UI & UX explained
- Desk Inspire
- Salesforce design system
- The Hamburger Menu Doesn't Work
- Content generator - Sketch plugin
01 Sep 2015 7:59am GMT
Welcome to the Ubuntu Weekly Newsletter. This is issue #432 for the week August 24 - 30, 2015, and the full version is available here.
In this issue we cover:
- Wily Werewolf Beta 1 Released
- Ubuntu Free Culture Showcase submissions are now open!
- Ubuntu Stats
- Visiting FrOSCon…
- LoCo Events
- Aaron Honeycutt: My contributions to KDE and Kubuntu since Akademy
- Lubuntu Blog: Happy 24th birthday, Linux!
- Jonathan Riddell: Ubuntu Archive Still Free Software
- Rohan Garg: Legalese is vague: Always consult a lawyer
- Jono Bacon: Ubuntu, Canonical, and IP
- Ubuntu Cloud News
- Ubuntu Touch OTA-6 released (arale, mako, flo and generic)
- Canonical News
- Ubuntu Linux continues to rule the cloud
- Canonical Kills Desktop Ubuntu Software Center, Focuses on Mobile Apps
- Full Circle Issue #100
- Other Articles of Interest
- Featured Audio and Video
- Weekly Ubuntu Development Team Meetings
- Upcoming Meetings and Events
- Updates and Security for 12.04, 14.04 and 15.04
- And much more!
This issue of the Ubuntu Weekly Newsletter is brought to you by:
- Paul White
- Elizabeth K. Joseph
- Chris Guiver
- And many others
Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.
01 Sep 2015 3:45am GMT
31 Aug 2015
Assume we have a stream of events coming in one at a time, and we need to count the frequency of the different types of events in the stream.
In other words: we are receiving fruits one at a time in no given order, and at any given time we need to be able to answer how many of a specific fruit we have received.
The most naive implementation is a dictionary mapping each event to its count; it is the most accurate approach and is suitable for streams with a limited number of event types.
Let us assume a unique item consists of 15 bytes and has a dedicated uint32 (4 bytes) counter assigned to it.
At 10 million unique items we end up using roughly 190 MB (19 bytes × 10 million), which is a bit much, but on the plus side it's as accurate as it gets.
But what if we don't have the 190 MB? Or what if we have to keep track of several streams?
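The naive exact counter is simple enough to sketch in a few lines of Go (illustrative only; the 15-byte keys and uint32 counters described above map directly onto a map entry):

```go
package main

import "fmt"

// Counter is the naive exact approach: one map entry per unique event,
// with a dedicated uint32 counter for each. Accurate, but memory grows
// linearly with the number of unique items.
type Counter map[string]uint32

func (c Counter) Add(event string)          { c[event]++ }
func (c Counter) Count(event string) uint32 { return c[event] }

func main() {
	c := Counter{}
	for _, fruit := range []string{"apple", "pear", "apple"} {
		c.Add(fruit)
	}
	fmt.Println(c.Count("apple")) // prints 2: exact count
}
```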
Maybe saving to a DB? Well, when querying the DB upon request, we would run something along the lines of:
SELECT count(event) FROM events WHERE event = ?
The more items we add, the more resource-intensive the query becomes.
Thankfully, solutions come in the form of probabilistic data structures (sketches).
I won't get into details, but to solve this problem I semi-evaluated the following data structures:
- Count-Min sketch (CMS) 
- Count-Min-Log sketch (CML) 
- Probabilistic Multiplicity Counting sketch (PMC) 
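To make the first of these concrete, here is a minimal Count-Min sketch in Go. This is an illustration only, not the benchmarked code: the width, depth, and FNV-based per-row hashing are arbitrary choices here.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// CMS holds d rows of w counters. Every event increments one counter per
// row; the estimate is the minimum over the rows, so CMS can overestimate
// (hash collisions) but never underestimate.
type CMS struct {
	w, d   int
	counts [][]uint32
}

func NewCMS(w, d int) *CMS {
	c := &CMS{w: w, d: d, counts: make([][]uint32, d)}
	for i := range c.counts {
		c.counts[i] = make([]uint32, w)
	}
	return c
}

// index derives a per-row bucket by seeding FNV-1a with the row number.
func (c *CMS) index(row int, event string) int {
	h := fnv.New64a()
	h.Write([]byte{byte(row)})
	h.Write([]byte(event))
	return int(h.Sum64() % uint64(c.w))
}

func (c *CMS) Add(event string) {
	for i := 0; i < c.d; i++ {
		c.counts[i][c.index(i, event)]++
	}
}

func (c *CMS) Count(event string) uint32 {
	min := c.counts[0][c.index(0, event)]
	for i := 1; i < c.d; i++ {
		if v := c.counts[i][c.index(i, event)]; v < min {
			min = v
		}
	}
	return min
}

func main() {
	s := NewCMS(1024, 4)
	for i := 0; i < 100; i++ {
		s.Add("apple")
	}
	s.Add("pear")
	fmt.Println(s.Count("apple")) // never less than the true count of 100
}
```

Note how memory is fixed up front (w × d counters) regardless of how many unique items arrive; that is the trade the sketches below are making against accuracy.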
For each sketch I added flows linearly, with a linearly increasing number of events: the first flow got 1 event inserted, the second flow got 2 events, all the way up to the 10,000th flow with 10,000 events inserted.
flow 1: 1 event
flow 2: 2 events
...
flow 10000: 10000 events
All three data structures were configured to have a size of 217KB (exactly 1739712 bits).
A couple dozen runs yielded the following results (based on my unoptimized code, especially for PMC and CML):
CMS: 07s for 50005000 insertions (fill rate: 31%)
CML: 42s for 50005000 insertions (fill rate: 9%)
PMC: 18s for 50005000 insertions (fill rate: 54%)
CMS with ɛ: 0.0001, δ: 0.99 (code)
Observe the biased estimation of CMS. CMS will never underestimate. In our case, looking at the top border of the diagram, we can see that there was a lot of overestimation.
CML with ɛ: 0.000025, δ: 0.99 (16-bit counters) (code)
Just like CMS, CML is also biased and will never underestimate. However, unlike with CMS, the top border of the diagram is less noisy. Yet accuracy seems to decrease for the high-count flows.
PMC with (256x32) virtual matrices (code)
Unlike the previous two sketches, this sketch is unbiased, so underestimations exist. Also, the estimated flow count increases with the actual flow count (linearly bigger errors). The drawback here is that PMC fills up very quickly, which means at some point everything will be overestimated. It is recommended to know beforehand what the maximum number of different flows will be.
Bringing it all together
So, what do you think? If you are familiar with these algorithms or can propose a different benchmarking scenario, please comment; I might be able to work on that on a weekend. The code was all written in Go, so feel free to suggest optimizations or fix any bugs you find (links above the respective plots).
31 Aug 2015 10:25pm GMT
The Kubuntu team is proud to announce the reference images for Plasma Mobile.
Plasma Mobile was announced today at KDE's Akademy conference.
Our images can be installed on a Nexus 5 phone.
More information on Plasma Mobile's website.
31 Aug 2015 10:09pm GMT
Now that we've open sourced the code for Ubuntu One filesync, I thought I'd highlight some of the interesting challenges we had while building and scaling the service to several million users.
The teams that built the service were roughly split into two: the foundations team, who was responsible for the lowest levels of the service (storage and retrieval of files, data model, client and server protocol for syncing) and the web team, focused on user-visible services (website to manage files, photos, music streaming, contacts and Android/iOS equivalent clients).
I joined the web team early on and stayed with it until we shut the service down, so that's where a lot of my stories will be focused.
Today I'm going to focus on the challenge we faced when launching the Photos and Music streaming services. Given that by the time we launched them we had a few years of experience serving files at scale, our challenge turned out to be presenting and manipulating the metadata quickly for each user, and being able to show the data in appealing ways (showing music by artist and genre, and searching, for example). Photos was a similar story: people tended to have many thousands of photos and songs, and we needed to extract metadata, parse it, store it, and then be able to present it back to users quickly in different ways. Easy, right? It is, until a certain scale.
Our architecture for storing metadata at the time was about 8 PostgreSQL master databases across which we sharded metadata (essentially, your metadata lived on a different DB server depending on your user id), plus at least one read-only slave per shard. These were really beefy servers with a truckload of CPUs, more than 128GB of RAM and very fast disks (when reading this, remember this was 2009-2013; hardware specs seem tiny as time goes by!). However, no matter how big these DB servers got, given how busy they were and how much metadata was stored (for years we didn't delete any metadata, so for every change to every file we duplicated the metadata), after a certain time we couldn't get a simple listing of a user's photos or songs (essentially, some of their files filtered by mimetype) in a reasonable time frame (less than 5 seconds). As the service grew we added caches, indexes, optimized queries and code paths, but we quickly hit a performance wall that left us no choice but a much-feared major architectural change. I say much feared because major architectural changes carry a lot of risk for running services that have low tolerance for outages or data loss: whenever you change something that's already running in a significant way, you're basically throwing out most of your previous optimizations. On top of that, as users we expect things to be fast; we take it for granted. A 5-person team spending 6 months to make things as you expect them isn't really something you can brag about in the middle of a race with many other companies to capture a growing market.
In the time since we had started the project, NoSQL had taken off and matured enough to be a viable alternative to SQL, and it seemed to fit many of our use cases much better (webscale!). After some research and prototyping, we decided to generate pre-computed views of each user's data in a NoSQL DB (Cassandra), and to do that by extending our existing architecture instead of revamping it completely. Given that our code was pretty well built into proper layers of responsibility, we hooked up to the lowest layer of our code (database transactions) an async process that would send messages to a queue whenever new data was written or modified. This meant essentially duplicating the metadata we stored for each user, but trading storage for computing is usually a good trade-off to make, both in cost and performance. So now we had a firehose queue of every change that went on in the system, and we could build a separate piece of infrastructure whose only focus would be to provide per-user metadata *fast* for any type of file, so we could build interesting and flexible user interfaces for people to consume their own content. The stated internal goals were: 1) fast responses (under 1 second), 2) less than 10 seconds between user action and UI update, and 3) complete isolation from the existing infrastructure.
Here's a rough diagram of how the information flowed through the system:
It's a little bit scary when you look at it like that, but in essence it was pretty simple: write each relevant change that happened in the system to a temporary table in PG, in the same transaction in which it's written to the permanent table. That way you get transactional guarantees that you won't lose any data on that layer for free, and you use PG's built-in cache, which keeps recently added records cheaply accessible.
Then we built a bunch of workers that looked through those rows, parsed them, and sent them to a persistent queue in RabbitMQ; once a worker got confirmation a row was queued, it would delete it from the temporary PG table.
Following that, we took advantage of Rabbit's queue exchange features to build different types of workers that process the data differently depending on what it was (music was stored differently than photos, for example).
Once we completed all of this, accessing someone's photos was a quick and predictable read operation that would give us all their data back in an easy-to-parse format that would fit in memory. Eventually we moved all the metadata accessed from the website and REST APIs to these new pre-computed views and the result was a significant reduction in load on the main DB servers, while now getting predictable sub-second request times for all types of metadata in a horizontally scalable system (just add more workers and cassandra nodes).
All in all, it took about 6 months end-to-end, which included a prototype phase that used memcache as a key/value store.
You can see the code that wrote and read from the temporary PG table if you branch the code and look under: src/backends/txlog/
The worker code, as well as the web UI, is still not available, but will be in the future once we finish cleaning it up. I decided to write this up and publish it now because I believe the value is more in the architecture than in the code itself.
31 Aug 2015 9:17pm GMT
We want to show off the beauty of Plasma 5, as well as allow easy access for Kubuntu users to the latest news, downloads, documentation, and other resources.
We want your help! Whether you code/program or not.
Web development, packaging, bug triage, documentation, promotion and social media are all areas where we can use your talents and skill, as well as offering help to new or troubled users.
Sometimes their problems are easily solved, sometimes they have encountered real bugs, which we can help them file.
Please use our new site to find what you need, and tell us if you find something which needs improvement.
31 Aug 2015 3:45pm GMT
30 Aug 2015
Disclaimer: I am not a member of the Mycroft team, but I think this is neat and an important example of open innovation that needs support.
Mycroft is an Open Source, Open Hardware, Open APIs product that you talk to and it provides information and services. It is a wonderful example of open innovation at work.
They are running a kickstarter campaign that is pretty close to the goal, but it needs further backers to nail it.
I recorded a short video about why I think this is important. You can watch it here.
I encourage you to go and back the campaign. This kind of open innovation across technology, software, hardware, and APIs is how we make the world a better and more hackable place.
30 Aug 2015 9:42pm GMT
29 Aug 2015
I'd like to give a big thank you to the Ubuntu Community, who paid for my entrance ticket and let me take part in CCCamp 2015. The Chaos Communication Camp is an international meeting of hackers that takes place every four years and is organized by the Chaos Computer Club (CCC).
My experience was amazing from the people who I talked to and the people I met.
There were quite a few talks, so I'll highlight some of them:
How to make your software build reproducibly
Lunar, a Debian Developer, explained why it's important to make package builds reproducible from source code. The main issue in the chain of trust at the moment is that we can read the source code of the packages we install, but we trust the third-party servers where the packages have been built. To make things secure, we need to build packages in a deterministic way, so everyone can check whether a package was built from its source without modification.
While the problem seems easy to solve, it isn't. Debian has worked on it for two years and still hasn't completed the fix.
How to organize a CTF
CTFs - Capture the Flag contests - are competitions where the task is to keep a server running multiple services, while simultaneously trying to get access to the other teams' servers. Each successful penetration gains points, as does keeping your own services up and functional during the course of the game.
At the camp itself there was a CTF contest, and it's incredible how some people could hack into a system and find vulnerabilities, using very complicated ways to bypass the security systems.
Towards Universal Access to All Knowledge: Internet Archive
Archive.org is a well-known service: their goal is to back up the whole world! Brewster Kahle explained how they're working to take in as much data as possible, and why it's important.
TLS interception considered harmful
With the more widespread use of encrypted HTTPS connections many software vendors intercept these connections by installing a certificate into the user's browser. This is widely done by Antivirus applications, parental filter software or ad injection software. This can go horribly wrong, as the examples of Superfish and Privdog have shown. But even if implemented properly these solutions almost always decrease the security of HTTPS.
The talk explained how bad some of these software companies are, reducing your security for their own gain.
Let's Encrypt is a new free and automated certificate authority, launching in summer 2015.
In these dark times security and privacy are more important than ever. EFF, Mozilla, Cisco, Akamai, IdenTrust, and a team at the University of Michigan are working to make the web a safer place, adopting HTTPS everywhere. I really hope this project will have a large adoption.
As usual, the best part of events like this is meeting new people, listening to different stories and acquiring common knowledge. I'm not going to report on them here: many of them value their privacy (at the camp there were 'No photos' signs everywhere). I had an awesome time talking to people and learning new things from them.
A big thanks to my travel friends, Ruio and Bardo, for their knowledge and even more for their company in that long trip.
Also, a thanks to all the Italian Embassy - good guys, with a lot of free grappa. Awesome!
The event is organized by volunteers, so a big thanks goes to all of them; they were able to provide energy and Internet for all 5000-plus of us.
I want to say thanks again to the Ubuntu Community for sponsoring me, and to all guys I met.
The next CCCamp is in 4 years, I hope I'll be able to join again, 'cause it's a very strong experience.
29 Aug 2015 11:17pm GMT
The Intel SuspendResume project aims to help identify delays in suspend and resume. After seeing it demonstrated by Len Brown (Intel) at this year's Linux Plumbers conference, I gave it a quick spin and was delighted to see how easy it is to use.
The project has some excellent "getting started" documentation describing how to configure a system and run the suspend resume analysis script which should be read before diving in too deep.
For the impatient, one can try it out using the following:
git clone https://github.com/01org/suspendresume.git
...and manually resume once after the machine has completed a successful suspend.
This will create a directory containing dumps of the kernel log and ftrace output, as well as an HTML page that one can load into your favourite web browser to view the results. One can zoom in and out of the web page to drill down and see where the delays occur; an example from the SuspendResume project page is shown below:
Example web page (from https://01.org/suspendresume)
It is a useful project, kudos to Intel for producing it. I thoroughly recommend using it to identify the delays in suspend/resume.
29 Aug 2015 5:45pm GMT