03 Sep 2015

feedPlanet Ubuntu

Ian Weisser: You should be using Find-a-Task

Find-a-Task is the Ubuntu community's job board for volunteers.

Introduced in January 2015, Find-a-Task shows fellow volunteers the variety of tasks and roles available.

The goal of Find-a-Task is for a volunteer, after exploring the Ubuntu Project, to land on a team or project's wiki page. They are interested, ready to join, and ready to start learning the skills and tools.

However, it only works if *you* use it, too.


Try it.


Take a quick look, and see the variety of volunteer roles available. We have listings for many different skills and interests, including many non-technical tasks.


Is your team listed?


Hey teams, are you using Find-a-Task to recruit volunteers?


When it's time to update your postings on the job board, simply jump into Freenode IRC: #ubuntu-community-team.


Gurus: Are you pointing Padawans toward it?


Find-a-Task is a great place to send new enthusiasts. No signup, no login, no questions. It's a great way to survey the roles available in the big, wide, Ubuntuverse, and get new enthusiasts involved in a team.

It's also handy for experienced enthusiasts looking for a new challenge, of course.


Improving Find-a-Task


Ideas to increase usage of Find-a-Task are welcome, as are ideas on how to improve the tool itself.
Please share your suggestions on the ubuntu-community-team mailing list.

03 Sep 2015 1:50am GMT

02 Sep 2015

feedPlanet Ubuntu

Jonathan Riddell: Jonathan Riddell™ IP Policy

This is the Jonathan Riddell™ IP Policy. It applies to all Jonathan's intellectual property in Ubuntu archives. Jonathan is one of the top 5 uploaders, usually the top uploader, to Ubuntu, compiling hundreds of packages in the Ubuntu archive. Further Jonathan reviews new and updated packages in the archive. Further Jonathan selects compiler defaults and settings for KDE and Qt and other packages in the Ubuntu archive. Further Jonathan builds and runs tests for Ubuntu packages in the archives. Further Jonathan Riddell™ is a trademark of Jonathan Riddell™ in Scotland, Catalunya and other countries; a trademark which is included in all packages edited by Jonathan Riddell™. Further Jonathan is the author of numerous works in the Ubuntu archive. Further Jonathan is the main contributor to the selection of software in Kubuntu. Therefore Jonathan has IP in the Ubuntu archive possibly including but not limited to copyright, patents, trademarks, sales marks, geographical indicators, database rights, compilation copyright, designs, personality rights and plant breeders' rights. To deal with, distribute, modify, look at or smell Jonathan's IP you must comply with this policy.

Policy: give Jonathan a hug before using his IP.

If you want a licence for Jonathan's IP besides this one you must contact Jonathan first and agree one in writing.

Nothing in this policy shall be taken to override or conflict with free software licences already put on relevant works.


02 Sep 2015 4:54pm GMT

Luca Falavigna: Resource control with systemd

I'm receiving more requests for upload accounts to the Deb-o-Matic servers lately (yay!), but that means the resources need to be monitored and shared between the build daemons to prevent server lockups.

My servers are running systemd, so I decided to give systemd.resource-control a try. My goal was to assign lower CPU shares to the build processes (debomatic itself, sbuild, and all the related tools), in order to avoid blocking other important system services from being spawned when necessary.

I created a new slice, and set a lower CPU share weight:
$ cat /etc/systemd/system/debomatic.slice
[Slice]
CPUAccounting=true
CPUShares=512
$

Then, I assigned the slice to the service unit file controlling the debomatic daemons by adding the Slice=debomatic.slice option in the [Service] section.
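For reference, the unit would then look something like this (the file name and ExecStart line below are purely illustrative, not the actual Deb-o-Matic unit):

$ cat /etc/systemd/system/debomatic.service
[Unit]
Description=Deb-o-Matic build daemon

[Service]
# ExecStart is a placeholder; use whatever command starts your debomatic instance
ExecStart=/usr/bin/debomatic
Slice=debomatic.slice

[Install]
WantedBy=multi-user.target
$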

That was not enough, though, as some processes were assigned to the user slice instead, which groups all the processes spawned by users:
[systemd-cgls output]

This is probably because schroot spawns a login shell, and systemd considers it to belong to a different process group. So, I had to launch the command systemctl set-property user.slice CPUShares=512, so that all processes belonging to user.slice receive the same share as the debomatic ones. I consider this a workaround; I'm open to suggestions on how to properly solve this issue :)
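If you prefer keeping that setting in a file rather than applying it with systemctl set-property, a drop-in for the user slice should achieve the same result (the path and file name here are just an example):

$ cat /etc/systemd/system/user.slice.d/50-cpushares.conf
[Slice]
CPUAccounting=true
CPUShares=512
$

Run systemctl daemon-reload afterwards to apply it.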

I'll try to explore more options in the coming days, so I can improve my knowledge of systemd a little bit more :)


02 Sep 2015 4:31pm GMT

Launchpad News: Launchpad news, August 2015

Here's a summary of what the Launchpad team got up to in August.

Code

Mail notifications

Our internal stakeholders in Canonical recently asked us to work on improving the ability to filter Launchpad mail using Gmail. The core of this was the "Include filtering information in email footers" setting that we added recently, but we knew there was some more to do. Launchpad's mail notification code includes some of the oldest and least consistent code in our tree, and so improving this has entailed paying off quite a bit of technical debt along the way.

Package build infrastructure

Miscellaneous

02 Sep 2015 1:04pm GMT

01 Sep 2015

feedPlanet Ubuntu

Ubuntu Kernel Team: Kernel Team Meeting Minutes – September 01, 2015

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150901 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:


Status: CVE's

The current CVE status can be reviewed at the following link:


Status: Stable, Security, and Bugfix Kernel Updates - Precise/Trusty/lts-utopic/Vivid

Status for the main kernels, until today:


Status: Wily Development Kernel

We have rebased and uploaded Wily master-next branch to 4.2 final from upstream.
--
Important upcoming dates:


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

01 Sep 2015 5:25pm GMT

Raphaël Hertzog: My Free Software Activities in August 2015

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it's one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I have been paid to work 6.5 hours on Debian LTS. In that time I did the following:

Apart from that, I gave a talk about Debian LTS at DebConf 15 in Heidelberg and coordinated a work session to discuss our plans for Wheezy. Have a look at the video recordings:

DebConf 15

I attended DebConf 15 with great pleasure after having missed DebConf 14 last year. While I did not do lots of work there, I participated in many discussions and I certainly came back with a renewed motivation to work on Debian. That's always good. :-)

For the concrete work I did during DebConf, I can only claim two schroot uploads to fix the lack of support of the new "overlay" filesystem that replaces "aufs" in the official Debian kernel, and some Distro Tracker work (fixing an issue that some people had when they were logged in via Debian's SSO).

While the numerous discussions I had during DebConf can't be qualified as "work", they certainly contribute to building up work plans for the future:

As a Kali developer, I attended multiple sessions related to derivatives (notably the Debian Derivatives Panel).

I was also interested in the "Debian in the corporate IT" BoF led by Michael Meskes (Credativ's CEO). He pointed out a number of problems that corporate users might have when they first consider using Debian, and we will try to do something about this. Expect further news and discussions on the topic.

Martin Kraff, Luca Filipozzi, and I had a discussion with the Debian Project Leader (Neil) about how to revive/transform Debian's Partner program. Nothing is fleshed out yet, but at least the process initiated by the former DPL (Lucas) is again moving forward.

Other Debian work

Sponsorship. I sponsored an NMU of pep8 by Daniel Stender as it was a requirement for prospector… which I also sponsored since all the required dependencies are now available in Debian. \o/

Packaging. I NMUed libxml2 2.9.2+really2.9.1+dfsg1-0.1, fixing 3 security issues and an RC bug that was breaking publican. Since there had been no upstream fix for more than 8 months, I went back to the former version 2.9.1. It's in line with the new requirement of the release managers… a package in unstable should migrate to testing reasonably quickly; it's not acceptable to keep it unfixed for months. With this annoying bug fixed, I could again upload a new upstream release of publican, so I prepared and uploaded 4.3.2-1. It was my first source-only upload. This release was more work than I expected and I filed no fewer than 3 bugs upstream (new bash-completion install path, request to provide sources of a minified javascript file, drop a .po file for an invalid language code).

GPG issues with smartcard. Back from DebConf, when I wanted to sign some keys, I stumbled again upon the problem which makes it impossible for me to use my two smartcards one after the other without first deleting the stubs for the private key. It's not a new issue, but I decided that it was time to report it upstream, so I did: #2079 on bugs.gnupg.org. Some research helped me to find a way to work around the problem. Later in the month, after a dist-upgrade and a reboot, I was no longer able to use my smartcard as an SSH authentication key… again it was already reported but there was no clear analysis, so I did my own and added the results of my investigation in #795368. It looks like the culprit is pinentry-gnome3 not working when started by the gpg-agent, which is started before the DBUS session. A simple fix is to restart the gpg-agent in the session… but I have no idea yet what the proper fix should be (letting systemd manage the graphical user session and start gpg-agent would be my first answer, but that doesn't solve the issue for users of other init systems, so it's not satisfying).

Distro Tracker. I merged two patches from Orestis Ioannou fixing some bugs tagged newcomer. There are more such bugs (I even filed two: #797096 and #797223), go grab them and do a first contribution to Distro Tracker like Orestis just did! I also merged a change from Christophe Siraut who presented Distro Tracker at DebConf.

I implemented in Distro Tracker the new authentication based on SSL client certificates that was recently announced by Enrico Zini. It's working nicely, and this authentication scheme is far easier to support. Good job, Enrico!

tracker.debian.org broke during DebConf: it stopped being updated with new data. I tracked this down to a problem in the archive (see #796892). Apparently Ansgar Burchardt changed the set of compression tools used on some jessie repositories, replacing bz2 by xz. He dropped the old Packages.bz2 but missed some Sources.bz2 files, which were thus stale… and APT reported "Hashsum mismatch" on the uncompressed content.

Misc. I pushed some small improvements to my Salt formulas: schroot-formula and sbuild-formula. They will now auto-detect which overlay filesystem is available with the current kernel (previously "aufs" was hardcoded).

Thanks

See you next month for a new summary of my activities.


01 Sep 2015 11:49am GMT

Didier Roche: Ubuntu Make 15.09 featuring experimental Unity 3D editor support

Last Thursday, the Unity 3D team announced an experimental build of the Unity editor for Linux.

This was quite exciting news, especially for me as a personal Unity 3D user. It was the perfect opportunity to implement install support in Ubuntu Make, and this is now available for download! The "experimental" label comes from the fact that it's experimental upstream as well: there is only one version out (and so no download section; we'll always fetch the latest) and no checksum support. We talked about it on upstream's IRC channel and will work with them on this in the future.

Unity3D editor on Ubuntu!

Of course, everything is, as usual, backed up with tests to ensure we spot any issue.

Speaking of tests, this release also fixes Arduino download support, which broke due to upstream versioning scheme changes. This is where our heavy investment in tests really shines, as we could spot the breakage before getting any bug reports!

Various more technical "under the hood" changes went in as well, to make contributors' lives easier. We recently got even more excellent contributions (to be honest, it's starting to be hard for me to keep up with the load!); more on that next week, with some nice goodies cooking up.

The full release details are available here. As usual, you can get this latest version directly through its PPA for the 14.04 LTS, 15.04 and wily Ubuntu releases.

Our issue tracker is full of ideas and opportunities, and pull requests remain open for any issues or suggestions! If you want to be the next featured contributor and give a hand, you can refer to this post with useful links!

01 Sep 2015 9:30am GMT

Canonical Design Team: August’s reading list

The design team members are constantly sharing interesting, fun and weird links with each other, so we thought it might be a nice idea to share a selection of those links with everyone.

Here are the links that have been passed around during last month:

Thanks to Robin, Luca, Elvira, Anthony, Jamie, Joe and me, for the links this month!

01 Sep 2015 7:59am GMT

The Fridge: Ubuntu Weekly Newsletter Issue 432

Welcome to the Ubuntu Weekly Newsletter. This is issue #432 for the week August 24 - 30, 2015, and the full version is available here.

In this issue we cover:

The issue of The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

01 Sep 2015 3:45am GMT

31 Aug 2015

feedPlanet Ubuntu

Seif Lotfy: Counting flows (Semi-evaluation of CMS, CML and PMC)

Assume we have a stream of events coming in one at a time, and we need to count the frequency of the different types of events in the stream.

In other words: We are receiving fruits one at a time in no given order, and at any given time we need to be able to answer how many of a specific fruit did we receive.

The most naive implementation is a dictionary in the form of <event, counter>, and it is the most accurate and suitable for streams with a limited number of event types.

Let us assume a unique item consists of 15 bytes and has a dedicated uint32 (4 bytes) counter assigned to it.

At 10 million unique items we end up using about 190 MB, which is a bit much, but on the plus side it's as accurate as it gets.
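For streams with a limited number of distinct events, the naive counter really is just a map. Here is a minimal sketch in Go (the post's benchmark code is in Go too, but this snippet is my own illustration, not the author's code):

package main

import "fmt"

// exactCounter is the naive, exact approach: one uint32 counter per distinct event.
type exactCounter map[string]uint32

func (c exactCounter) Add(event string)          { c[event]++ }
func (c exactCounter) Count(event string) uint32 { return c[event] }

func main() {
    c := exactCounter{}
    for _, fruit := range []string{"apple", "banana", "apple", "cherry", "apple"} {
        c.Add(fruit)
    }
    fmt.Println(c.Count("apple")) // 3
    fmt.Println(c.Count("kiwi"))  // 0 (missing keys simply read as zero)
}

The memory cost is exactly the trade-off described above: roughly the key size plus 4 bytes per distinct event, plus the map's own overhead.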

But what if we don't have the 190 MB? Or what if we have to keep track of several streams?

Maybe saving to a DB? Well, when querying the DB upon request, we would run something along the lines of:

SELECT COUNT(*) FROM events WHERE event = ?  -- table name "events" assumed

The more items we add, the more resource intensive the query becomes.

Thankfully, solutions come in the form of probabilistic data structures (sketches).

I won't get into details, but to solve this problem I semi-evaluated the following data structures: Count-Min Sketch (CMS), Count-Min-Log sketch (CML) and Probabilistic Multiplicity Counting (PMC).

Test details:

For each sketch I added flows linearly, with a linearly increasing number of events per flow. So the first flow got 1 event inserted, the second flow got 2 events inserted, and so on, all the way up to the 10,000th flow with 10,000 events inserted.

flow 1: 1 event  
flow 2: 2 events  
...
flow 10000: 10000 events  

All three data structures were configured to have a size of 217KB (exactly 1739712 bits).

A couple dozen runs yielded the following results (based on my unoptimized code, especially for PMC and CML):

CMS: 07s for 50005000 insertion (fill rate: 31%)  
CML: 42s for 50005000 insertion (fill rate: 09%)  
PMC: 18s for 50005000 insertion (fill rate: 54%)  

CMS with ɛ: 0.0001, δ: 0.99 (code)

[CMS estimation plot]

Observe the biased estimation of CMS: CMS will never underestimate. In our case, looking at the top border of the diagram, we can see that there was a lot of overestimation.
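To see where the overestimation comes from, here is a minimal Count-Min Sketch in Go (my own illustrative sketch, not the benchmarked code linked above): every event increments one counter per row, so hash collisions can only inflate a count, and taking the minimum across rows never drops below the true value.

package main

import (
    "fmt"
    "hash/fnv"
)

type cms struct {
    depth, width uint32
    counters     [][]uint32
}

func newCMS(depth, width uint32) *cms {
    c := &cms{depth: depth, width: width, counters: make([][]uint32, depth)}
    for i := range c.counters {
        c.counters[i] = make([]uint32, width)
    }
    return c
}

// one cheap hash per row: FNV-1a salted with the row index
func (c *cms) bucket(row uint32, event string) uint32 {
    h := fnv.New32a()
    h.Write([]byte{byte(row)})
    h.Write([]byte(event))
    return h.Sum32() % c.width
}

func (c *cms) Add(event string) {
    for row := uint32(0); row < c.depth; row++ {
        c.counters[row][c.bucket(row, event)]++
    }
}

// Estimate takes the minimum over all rows: collisions only ever add,
// so the result never underestimates the true count.
func (c *cms) Estimate(event string) uint32 {
    best := uint32(0)
    for row := uint32(0); row < c.depth; row++ {
        v := c.counters[row][c.bucket(row, event)]
        if row == 0 || v < best {
            best = v
        }
    }
    return best
}

func main() {
    sketch := newCMS(4, 27183) // depth and width chosen for illustration only
    for i := 0; i < 1000; i++ {
        sketch.Add("apple")
    }
    sketch.Add("banana")
    fmt.Println(sketch.Estimate("apple"), sketch.Estimate("banana"))
}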

CML with ɛ: 0.000025, δ: 0.99 (16-bit counters) (code)

[CML estimation plot]

Just like CMS, CML is biased and will never underestimate. However, unlike CMS, the top border of the diagram is less noisy. Yet accuracy seems to decrease for the high-count flows.

PMC with (256x32) virtual matrices (code)

[PMC estimation plot]

Unlike the previous two sketches, this one is unbiased, so underestimations exist. Also, the estimation error grows with the actual flow count (linearly bigger errors). The drawback here is that PMC fills up very quickly, which means at some point it will just overestimate everything. It is recommended to know beforehand what the maximum number of different flows will be.

Bringing it all together

[combined plot of all three sketches]

So what do you think? If you are familiar with these algorithms or can propose a different benchmarking scenario, please comment; I might be able to work on that on a weekend. The code was all written in Go, so feel free to suggest optimizations or fix any bugs you find (links above the respective plots).

31 Aug 2015 10:25pm GMT

Kubuntu: Kubuntu Team Launches Plasma Mobile Reference Images

[PROMO] Plasma Evolving

The Kubuntu team is proud to announce the reference images for Plasma Mobile.

Plasma Mobile was announced today at KDE's Akademy conference.

Our images can be installed on a Nexus 5 phone.

More information on Plasma Mobile's website.

31 Aug 2015 10:09pm GMT

Martin Albisetti: Developing and scaling Ubuntu One filesync, part 1

Now that we've open sourced the code for Ubuntu One filesync, I thought I'd highlight some of the interesting challenges we had while building and scaling the service to several million users.

The teams that built the service were roughly split into two: the foundations team, which was responsible for the lowest levels of the service (storage and retrieval of files, data model, client and server protocol for syncing), and the web team, focused on user-visible services (the website to manage files, photos, music streaming, contacts and the Android/iOS equivalent clients).
I joined the web team early on and stayed with it until we shut the service down, so that's where a lot of my stories will focus.

Today I'm going to focus on the challenge we faced when launching the Photos and Music streaming services. Given that by the time we launched them we had a few years of experience serving files at scale, our challenge turned out to be presenting and manipulating the metadata quickly for each user, and being able to show the data in appealing ways (showing music by artist or genre, searching, for example). Photos was a similar story: people tended to have many thousands of photos and songs, and we needed to extract metadata, parse it, store it and then be able to present it back to users quickly in different ways. Easy, right? It is, until a certain scale :)
Our architecture for storing metadata at the time was about 8 PostgreSQL master databases across which we sharded metadata (essentially your metadata lived on a different DB server depending on your user id), plus at least one read-only slave per shard. These were really beefy servers with a truckload of CPUs, more than 128GB of RAM and very fast disks (when reading this, remember this was 2009-2013; hardware specs seem tiny as time goes by!).

However, no matter how big these DB servers got, given how busy they were and how much metadata was stored (for years we didn't delete any metadata, so for every change to every file we duplicated the metadata), after a certain time we couldn't get a simple listing of a user's photos or songs (essentially, some of their files filtered by mimetype) in a reasonable time-frame (less than 5 seconds). As it grew we added caches, indexes, optimized queries and code paths, but we quickly hit a performance wall that left us no choice but a much-feared major architectural change.

I say much feared because major architectural changes come with a lot of risk to running services that have low tolerance for outages or data loss; whenever you change something that's already running in a significant way, you're basically throwing out most of your previous optimizations. On top of that, as users we expect things to be fast and we take it for granted. A 5-person team spending 6 months to make things as you expect them isn't really something you can brag about in the middle of a race with many other companies to capture a growing market.
In the time since we had started the project, NoSQL had taken off and matured enough to be a viable alternative to SQL, and it seemed to fit many of our use cases much better (webscale!). After some research and prototyping, we decided to generate pre-computed views of each user's data in a NoSQL DB (Cassandra), and we decided to do that by extending our existing architecture instead of revamping it completely.

Given our code was pretty well built into proper layers of responsibility, we hooked up to the lowest layer of our code, database transactions, an async process that would send messages to a queue whenever new data was written or modified. This meant essentially duplicating the metadata we stored for each user, but trading storage for computing is usually a good trade-off to make, both in cost and performance.

So now we had a firehose queue of every change that went on in the system, and we could build a separate piece of infrastructure whose focus would only be to provide per-user metadata *fast* for any type of file, so we could build interesting and flexible user interfaces for people to consume their own content. The stated internal goals were: 1) fast responses (under 1 second), 2) less than 10 seconds between user action and UI update, and 3) complete isolation from existing infrastructure.
Here's a rough diagram of how the information flowed through the system:

U1 Diagram

It's a little bit scary when looking at it like that, but in essence it was pretty simple: write each relevant change that happened in the system to a temporary table in PG in the same transaction that it's written to the permanent table. That way you get, for free, transactional guarantees that you won't lose any data on that layer, and you use PG's built-in cache, which keeps recently added records cheaply accessible.
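As a rough sketch of that write path (my own illustration, not the project's actual code under src/backends/txlog/; the table and column names are made up), the pattern looks like this:

package txlog

import (
    "database/sql"

    _ "github.com/lib/pq" // assumed PostgreSQL driver; any database/sql driver works
)

// RecordChange writes the permanent metadata row and the corresponding
// transaction-log row in the same transaction, so either both are committed
// or neither is. Table and column names are illustrative only.
func RecordChange(db *sql.DB, ownerID int64, nodeID, mimetype string) error {
    tx, err := db.Begin()
    if err != nil {
        return err
    }
    defer tx.Rollback() // no-op once Commit has succeeded

    // The "real" write to the permanent table.
    if _, err := tx.Exec(
        `UPDATE node SET mimetype = $1 WHERE owner_id = $2 AND id = $3`,
        mimetype, ownerID, nodeID); err != nil {
        return err
    }

    // The same change appended to the temporary txlog table; workers later
    // read these rows, publish them to RabbitMQ and delete them.
    if _, err := tx.Exec(
        `INSERT INTO txlog (owner_id, node_id, mimetype) VALUES ($1, $2, $3)`,
        ownerID, nodeID, mimetype); err != nil {
        return err
    }

    return tx.Commit()
}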
Then we built a bunch of workers that looked through those rows, parsed them, sent them to a persistent queue in RabbitMQ and, once they got confirmation the message was queued, deleted the row from the temporary PG table.
Following that, we took advantage of Rabbit's exchange and routing features to build different types of workers that process the data differently depending on what it was (music was stored differently than photos, for example).
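For illustration, here is roughly what that exchange-based routing looks like with a commonly used Go AMQP client (again my own sketch; the library choice, exchange name, queue names and routing keys are all assumptions, not what Ubuntu One actually used):

package routing

import "github.com/streadway/amqp" // assumed AMQP client for RabbitMQ

// Setup declares a topic exchange and binds one queue per content type, so
// music and photo changes can be processed by different workers.
// All names and routing keys here are illustrative.
func Setup(conn *amqp.Connection) error {
    ch, err := conn.Channel()
    if err != nil {
        return err
    }

    if err := ch.ExchangeDeclare("metadata-changes", "topic", true, false, false, false, nil); err != nil {
        return err
    }

    for _, kind := range []string{"music", "photos"} {
        if _, err := ch.QueueDeclare(kind+"-workers", true, false, false, false, nil); err != nil {
            return err
        }
        // e.g. a change published with routing key "change.music.updated"
        // ends up only in the music-workers queue.
        if err := ch.QueueBind(kind+"-workers", "change."+kind+".*", "metadata-changes", false, nil); err != nil {
            return err
        }
    }
    return nil
}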
Once we completed all of this, accessing someone's photos was a quick and predictable read operation that would give us all their data back in an easy-to-parse format that would fit in memory. Eventually we moved all the metadata accessed from the website and REST APIs to these new pre-computed views and the result was a significant reduction in load on the main DB servers, while now getting predictable sub-second request times for all types of metadata in a horizontally scalable system (just add more workers and cassandra nodes).

All in all, it took about 6 months end-to-end, which included a prototype phase that used memcache as a key/value store.

You can see the code that wrote to and read from the temporary PG table if you branch the code and look under src/backends/txlog/.
The worker code, as well as the web UI, is still not available, but it will be in the future once we finish cleaning it up. I decided to write this up and publish it now because I believe the value is more in the architecture than in the code itself :)

31 Aug 2015 9:17pm GMT

Kubuntu: Kubuntu Site Revamped

With the move to Plasma 5, updating the Kubuntu website seemed timely. Many people have contributed, including Ovidiu-Florin Bogdan, Aaron Honeycutt, Marcin Sągol and many others.

We want to show off the beauty of Plasma 5, as well as allow easy access for Kubuntu users to the latest news, downloads, documentation, and other resources.

We want your help! Whether you code/program or not.

Web development, packaging, bug triage, documentation, promotion and social media are all areas where we can use your talents and skills, as is offering help to new or troubled users.

For instance, people regularly report problems on Facebook, Reddit, Google+, Twitter, now Telegram, and of course #kubuntu on Freenode IRC, rather than filing bugs.

Sometimes their problems are easily solved, sometimes they have encountered real bugs, which we can help them file.

Please use our new site to find what you need, and tell us if you find something which needs improvement.

31 Aug 2015 3:45pm GMT

30 Aug 2015

feedPlanet Ubuntu

Jono Bacon: Go and back the Mycroft Kickstarter campaign

Disclaimer: I am not a member of the Mycroft team, but I think this is neat and an important example of open innovation that needs support.

Mycroft is an Open Source, Open Hardware, Open APIs product that you talk to and it provides information and services. It is a wonderful example of open innovation at work.

They are running a Kickstarter campaign that is pretty close to its goal, but it needs more backers to nail it.


I recorded a short video about why I think this is important. You can watch it here.

I encourage you to go and back the campaign. This kind of open innovation across technology, software, hardware, and APIs is how we make the world a better and more hackable place.

30 Aug 2015 9:42pm GMT

29 Aug 2015

feedPlanet Ubuntu

Riccardo Padovani: CCCamp 2015

I'd like to give a big thank you to the Ubuntu Community, which paid for my entrance ticket and let me take part in CCCamp 2015. The Chaos Communication Camp is an international meeting of hackers that takes place every four years and is organized by the Chaos Computer Club (CCC).

cccamp

My experience was amazing, thanks to the people I talked to and the people I met.

Talks

There were quite a few talks, so I'll highlight some of them:

How to make your software build reproducibly

Lunar, a Debian Developer, explained why it's important to make the build of packages from source code reproducible. The main issue in the chain of trust at the moment is that we can read the source code of the packages we install, but we trust the third-party servers where the packages have been built. To make things secure we need to build packages in a deterministic way, so everyone can check that a package was built from the source without modification.

While the problem seems easy to solve, it isn't. Debian has worked on it for two years, and still hasn't completely fixed the issue.

How to organize a CTF

CTFs - Capture The Flag contests - are competitions where the task is to maintain a server running multiple services while simultaneously trying to get access to the other teams' servers. Each successful penetration gains points, as does keeping your own services up and functional during the course of the game.

At the camp itself there was a CTF contest, and it's incredible how some people could hack into a system and find vulnerabilities, using very complicated ways to bypass security systems.

Towards Universal Access to All Knowledge: Internet Archive

Archive.org is a well-known service: their goal is to back up the whole world! Brewster Kahle explained how they're working to archive as much data as possible, and why it's important.

TLS interception considered harmful

With the more widespread use of encrypted HTTPS connections many software vendors intercept these connections by installing a certificate into the user's browser. This is widely done by Antivirus applications, parental filter software or ad injection software. This can go horribly wrong, as the examples of Superfish and Privdog have shown. But even if implemented properly these solutions almost always decrease the security of HTTPS.

The talk explained how badly some of these software companies behave, reducing your security for their own gain.

Let's encrypt

Let's Encrypt is a new free and automated certificate authority, launching in summer 2015.

In these dark times, security and privacy are more important than ever. EFF, Mozilla, Cisco, Akamai, IdenTrust, and a team at the University of Michigan are working to make the web a safer place by adopting HTTPS everywhere. I really hope this project will see wide adoption.

People

be excellent

As usual, the best part of events like this is meeting new people, listening to different stories and acquiring shared knowledge. I'm not going to report on them here: a lot of them value their privacy (at the camp there were 'No photos' signs everywhere). I had an awesome time talking to people and learning new things from them.

A big thanks to my travel friends, Ruio and Bardo, for their knowledge and even more for their company in that long trip.

Also, thanks to everyone at the Italian Embassy - good guys, with a lot of free grappa. Awesome!

italian embassy

CCC Angels

The event is organized by volunteers, so a big thanks goes to all of them; they were able to provide power and Internet for all 5000-plus of us.

I want to say thanks again to the Ubuntu Community for sponsoring me, and to all the people I met.

The next CCCamp is in 4 years; I hope I'll be able to join again, 'cause it's a really powerful experience.

If you like my work and want to support me, just send me a Thank you! by email or offer me a beer:-)

Ciao,
R.

29 Aug 2015 11:17pm GMT

Colin King: Identifying Suspend/Resume delays

The Intel SuspendResume project aims to help identify delays in suspend and resume. After seeing it demonstrated by Len Brown (Intel) at this year's Linux Plumbers Conference, I gave it a quick spin and was delighted to see how easy it is to use.

The project has some excellent "getting started" documentation describing how to configure a system and run the suspend resume analysis script which should be read before diving in too deep.

For the impatient, one can try it out using the following:

git clone https://github.com/01org/suspendresume.git
cd suspendresume
sudo ./analyze_suspend.py


..and manually resume once after the machine has completed a successful suspend.

This will create a directory containing dumps of the kernel log and ftrace output, as well as an HTML page that one can load into a favourite web browser to view the results. One can zoom in and out of the web page to drill down and see where the delays are occurring; an example from the SuspendResume project page is shown below:

example webpage (from https://01.org/suspendresume)


It is a useful project, kudos to Intel for producing it. I thoroughly recommend using it to identify the delays in suspend/resume.

29 Aug 2015 5:45pm GMT