28 Jun 2017

Planet Mozilla

Support.Mozilla.Org: Important Platform Update

Hello, SUMO Mozillians!

We have an important update regarding our site to share with you, so grab something cold/hot to drink (depending on your climate), sit down, and give us your attention for the next few minutes.

As you know, we have been hard at work for quite some time now migrating the site over to a new platform. You were a part of the process from day one (since we knew we needed to find a replacement for Kitsune) and we would like to once more thank you for your participation throughout that challenging and demanding period. Many of you have given us feedback or lent a hand with testing, checking, cleaning up, and generally supporting our small team before, during, and after the migration.

Over time, due to technical difficulties beyond our team's direct control, we decided to 'roll back' to Kitsune to better support the upcoming releases of Firefox and related Mozilla products.

The date for 'rolling forward' to Lithium was to be decided based on the outcome of leadership negotiations over contract terms, and on the resolution of technical issues (such as redirects, content display, and localization flows) by teams from both sides working together.

In the meantime, we have been using Kitsune to serve content to users and provide forum support.

We would like to inform you that a decision has been made on Mozilla's side to keep using Kitsune for the foreseeable future. Our team will investigate alternative options to improve and update Mozilla's support for our users and ways to empower your contributions in that area.

What are the reasons behind this decision?

  1. Technical challenges in shaping Lithium's platform to meet all of Mozilla's user support needs.
  2. The contributor community's feedback and requirements for contributing comfortably.
  3. The upcoming major releases for Firefox (and related products) requiring a smooth and uninterrupted user experience while accessing support resources.

What are the immediate implications of this decision?

  1. Mozilla will not be proceeding with a full 'roll forward' of SUMO to Lithium at this time. All open Lithium-related Bugzilla requests will be re-evaluated and may be closed as part of our next sprint (after the San Francisco All Hands).
  2. SUMO is going to remain on Kitsune for both support forum and knowledge base needs for now. Social support will continue on Respond.
  3. The SUMO team is going to kick off a reevaluation process for Kitsune's technical status and requirements with the help of Mozilla's IT team. This will include evaluating options of using Kitsune in combination with other tools/platforms to provide support for our users and contribution opportunities for Mozillians.

If you have questions about this update or want to discuss it, please use our community forums.

We are, as always, relying on your time and effort in successfully supporting millions of Mozilla's software users and fans around the world. Thank you for your ongoing participation in making the open web better!

Sincerely yours,

The SUMO team

P.S. Watch the video from the first day of the SFO All Hands if you want to see us discuss the above (and more).

28 Jun 2017 1:47pm GMT

Chris Lord: Goodbye Mozilla

Today is effectively my last day at Mozilla, before I start at Impossible on Monday. I've been here for 6 years and a bit, and it's been quite an experience. I think it's worth reflecting on, so here we go. Fair warning: if you have no interest in me or Mozilla, this is going to make pretty boring reading.

I started on June 6th 2011, several months before the (then new, since moved) London office opened. Although my skills lay (lie?) in user interface implementation, I was hired mainly for my graphics and systems knowledge. Mozilla was in the region of 500 or so employees then I think, and it was an interesting time. I'd been working on the code-base for several years prior at Intel, on a headless backend that we used to build a Clutter-based browser for Moblin netbooks. I wasn't completely unfamiliar with the code-base, but it still took a long time to get to grips with. We're talking several million lines of code with several years of legacy, in a language I still consider myself to be pretty novice at (C++).

I started on the mobile platform team, and I would consider this to be my most enjoyable time at the company. The mobile platform team was a multi-discipline team that did general low-level platform work for the mobile (Android and Meego) browser. When we started, the browser was based on XUL and was multi-process. Mobile was often the breeding ground for new technologies that would later go on to desktop. It wasn't long before we started developing a new browser based on a native Android UI, removing XUL and relegating Gecko to page rendering. At the time this felt like a disappointing move. The reason the XUL-based browser wasn't quite satisfactory was mainly due to performance issues, and as a platform guy, I wanted to see those issues fixed, rather than worked around. In retrospect, this was absolutely the right decision and led to what I'd still consider to be one of Android's best browsers.

Despite performance issues being one of the major driving forces for making this move, we did a lot of platform work at the time too. As well as being multi-process, the XUL browser had a compositor system for rendering the page, but this wasn't easily portable. We ended up rewriting this, first almost entirely in Java (which was interesting), then with the rendering part of the compositor in native code. The input handling remained in Java for several years (pretty much until FirefoxOS, where we rewrote that part in native code, then later, switched Android over).

Most of my work during this period was based around improving performance (both perceived and real) and fluidity of the browser. Benoit Girard had written an excellent tiled rendering framework that I polished and got working with mobile. On top of that, I worked on progressive rendering and low precision rendering, which combined are probably the largest body of original work I've contributed to the Mozilla code-base. Neither of them are really active in the code-base at the moment, which shows how good a job I didn't do maintaining them, I suppose.

Although most of my work was graphics-focused on the platform team, I also got to do some layout work. I worked on some over-invalidation issues before Matt Woodrow's DLBI work landed (which nullified that, but I think that work existed in at least one release). I also worked a lot on fixed position elements staying fixed to the correct positions during scrolling and zooming, another piece of work I was quite proud of (and probably my second-biggest contribution). There was also the opportunity for some UI work, when it intersected with platform. I implemented Firefox for Android's dynamic toolbar, and made sure it interacted well with fixed position elements (some of this work has unfortunately been undone with the move from the partially Java-based input manager to the native one). During this period, I was also regularly attending and presenting at FOSDEM.

I would consider my time on the mobile platform team a pretty happy and productive time. Unfortunately for me, those of us with graphics specialities on the mobile platform team were taken off that team and put on the graphics team. I think this was the start of a steady decline in my engagement with the company. At the time this move was made, Mozilla was apparently trying to consolidate teams around products, and this was the exact opposite happening. The move was never really explained to me and I know I wasn't the only one that wasn't happy about it. The graphics team was very different to the mobile platform team and I didn't feel I fit in as well. It felt more boisterous and less democratic than the mobile platform team, and as someone that generally shies away from arguments and just wants to get work done, it was hard not to feel sidelined slightly. I was also quite disappointed that people didn't seem particularly familiar with the graphics work I had already been doing and that I was tasked, at least initially, with working on some very different (and very boring) desktop Linux work, rather than my speciality of mobile.

I think my time on the graphics team was pretty unproductive, with the exception of the work I did on b2g, improving tiled rendering and getting graphics memory-mapped tiles working. This was particularly hard as the interface was basically undocumented, and its implementation details could vary wildly depending on the graphics driver. Though I made a huge contribution to this work, you won't see me credited in the tree unfortunately. I'm still a little bit sore about that. It wasn't long after this that I requested to move to the FirefoxOS systems front-end team. I'd been doing some work there already and I'd long wanted to go back to doing UI. It felt like I either needed a dramatic change or I needed to leave. I'm glad I didn't leave at this point.

Working on FirefoxOS was a blast. We had lots of new, very talented people, a clear and worthwhile mission, and a new code-base to work with. I worked mainly on the home-screen, first with performance improvements, then with added features (app-grouping being the major one), then with a hugely controversial and probably mismanaged rewrite (mismanaged on my part, not my manager's - my manager was excellent). The rewrite was good and fixed many of the performance problems of what it was replacing, but unfortunately also removed features, at least initially. Turns out people really liked the app-grouping feature.

I really enjoyed my time working on FirefoxOS, and getting a nice clean break from platform work, but it was always bitter-sweet. Everyone working on the project was very enthusiastic to see it through and do a good job, but it never felt like upper management's focus was in the correct place. We spent far too much time kowtowing to the desires of phone carriers and trying to copy Android and not nearly enough time on basic features and polish. Up until around v2.0 and maybe even 2.2, the experience of using FirefoxOS was very rough. Unfortunately, as soon as it started to show some promise and as soon as we had freedom from carriers to actually do what we set out to do in the first place, the project was cancelled, in favour of the whole Connected Devices IoT debacle.

If there was anything that killed morale for me more than my unfortunate time on the graphics team, and more than having FirefoxOS prematurely cancelled, it would have to be the Connected Devices experience. I appreciate it as an opportunity to work on random semi-interesting things for a year or so, and to get some entrepreneurship training, but the mismanagement of that whole situation was pretty epic. To take a group of hundreds of UI-focused engineers and tell them that, with very little help, they should organise themselves into small teams and create IoT products still strikes me as an idea so crazy that it definitely won't work. Certainly not the way we did it anyway. The idea, I think, was that we'd be running several internal start-ups and we'd hopefully get some marketable products out of it. What business a not-for-profit company, based primarily on doing open-source, web-based engineering, has making physical, commercial products is questionable, but it failed long before that could be considered.

The process involved coming up with an idea, presenting it and getting approval to run with it. You would then repeat this approval process at various stages during development. It was, however, very hard to get approval for enough resources (both time and people) to finesse an idea long enough to make it obviously a good or bad idea. That aside, I found it very demoralising to not have the opportunity to write code that people could use. I did manage it a few times, in spite of what was happening, but none of this work I would consider myself particularly proud of. Lots of very talented people left during this period, and then at the end of it, everyone else was laid off. Not a good time.

Luckily for me and the team I was on, we were moved under the umbrella of Emerging Technologies before the lay-offs happened, and this also allowed us to refocus away from trying to make an under-featured and pointless shopping-list assistant and back onto the underlying speech-recognition technology. This brings us almost to present day now.

The DeepSpeech speech recognition project is an extremely worthwhile project, with a clear mission, great promise and interesting underlying technology. So why would I leave? Well, I've practically ended up on this team by a series of accidents and random happenstance. It's been very interesting so far, I've learnt a lot and I think I've made a reasonable contribution to the code-base. I also rewrote python_speech_features in C for a pretty large performance boost, which I'm pretty pleased with. But at the end of the day, it doesn't feel like this team will miss me. I too often spend my time finding work to do, and to be honest, I'm just not interested enough in the subject matter to make that work long-term. Most of my time on this project has been spent pushing to open it up and make it more transparent to people outside of the company. I've added model exporting, better default behaviour, a client library, a native client, Python bindings (+ example client) and most recently, Node.js bindings (+ example client). We're starting to get noticed and starting to get external contributions, but I worry that we still aren't transparent enough and still aren't truly treating this as the open-source project it is and should be. I hope the team can push further towards this direction without me. I think it'll be one to watch.

Next week, I start working at a new job doing a new thing. It's odd to say goodbye to Mozilla after 6 years. It's not easy, but many of my peers and colleagues have already made the jump, so it feels like the right time. One of the big reasons I'm moving, and moving to Impossible specifically, is that I want to get back to doing impressive work again. This is the largest regret I have about my time at Mozilla. I used to blog regularly when I worked at OpenedHand and Intel, because I was excited about the work we were doing and I thought it was impressive. This wasn't just youthful exuberance (he says, realising how ridiculous that sounds at 32), I still consider much of the work we did to be impressive, even now. I want to be doing things like that again, and it feels like Impossible is a great opportunity to make that happen. Wish me luck!

28 Jun 2017 11:16am GMT

27 Jun 2017


Daniel Pocock: How did the world ever work without Facebook?

Almost every day, somebody tells me there is no way they can survive without some social media like Facebook or Twitter. Otherwise mature adults are fearful that without these dubious services they would have no human contact ever again, they would die of hunger, and the sky would come crashing down too.

It is particularly disturbing for me to hear this attitude from community activists and campaigners. These are people who aspire to change the world, but can you really change the system using the tools the system gives you?

Revolutionaries like Gandhi and the Bolsheviks don't have a lot in common, but both of them changed the world and both of them did so by going against the system. Gandhi, of course, relied on non-violence, while the Bolsheviks continued to rely on violence long after taking power. Neither of them needed social media, but both are likely to be remembered far longer than any viral video clip you have seen recently.

With US border guards asking visitors for their Facebook profiles and Mark Zuckerberg being a regular participant at secretive Bilderberg meetings, it should be clear that Facebook and conventional social media is not on your side, it's on theirs.

Kettling has never been easier

When street protests erupt in major cities such as London, the police build fences around the protesters, cutting them off from the rest of the world. They become an island in the middle of the city, like a construction site or broken down bus that everybody else goes around. The police then set about arresting one person at a time, taking their name and photograph and then slowly letting them leave in different directions. This strategy is called kettling.

Facebook helps kettle activists in their armchairs. The police state can gather far more data about them, while their impact is even more muted than if they ventured out of their home.

You are more likely to win the lottery than make a viral campaign

Every week there is news about some social media campaign that has gone viral. Every day, marketing professionals, professional campaigners and motivated activists sit at their computer spending hours trying to replicate this phenomenon.

Do the math: how many of these campaigns can really be viral success stories? Society can only absorb a small number of these campaigns at any one time. For most of the people trying to ignite such campaigns, their time and energy is wasted, much like money spent buying lottery tickets and with odds that are just as bad.
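To make "do the math" concrete, here is a back-of-the-envelope sketch; every figure in it is an invented assumption purely for illustration, not real data:

```python
# Back-of-the-envelope arithmetic. All figures are invented assumptions.
attempts_per_week = 1_000_000   # hypothetical: people trying to launch a viral campaign
viral_slots_per_week = 10       # hypothetical: campaigns society can absorb at once

# Your odds are roughly slots / attempts - lottery territory.
odds = attempts_per_week // viral_slots_per_week
print(f"roughly 1 in {odds:,}")  # roughly 1 in 100,000
```

Plug in whatever numbers you find plausible; the ratio of attempts to available attention stays brutally lopsided either way.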

It is far better to focus on the quality of your work in other ways than to waste any time on social media. If you do something that is truly extraordinary, then other people will pick it up and share it for you and that is how a viral campaign really begins. The time and effort you put into trying to force something to become viral is wasting the energy and concentration you need to make something that is worthy of really being viral.

An earthquake and an escaped lion never needed to announce themselves on social media to become an instant hit. If your news isn't extraordinary enough for random people to spontaneously post, share and tweet it in the first place, how can it ever go far?

The news media deliberately over-rates social media

News media outlets, including TV, radio and print, gain a significant benefit crowd-sourcing live information, free of charge, from the public on social media. It is only logical that they will cheer on social media sites and give them regular attention. Have you noticed that whenever Facebook's publicity department makes an announcement, the media are quick to publish it ahead of more significant stories about social or economic issues that impact our lives? Why do you think the media puts Facebook up on a podium like this, ahead of all other industries, if the media aren't getting something out of it too?

The tail doesn't wag the dog

One particular example is the news media's fascination with Donald Trump's Twitter account. Some people have gone as far as suggesting that this billionaire could have simply parked his jet and spent the whole of 2016 at one of his golf courses sending tweets and he would have won the presidency anyway. Suggesting that Trump's campaign revolved entirely around Twitter is like suggesting the tail wags the dog.

The reality is different: Trump has been a prominent public figure for decades, both in the business and entertainment world. During his presidential campaign, he had at least 220 major campaign rallies attended by over 1.2 million people in the real world. Without this real-world organization and history, the Twitter account would have been largely ignored like the majority of Twitter accounts.

On the left of politics, the media have been just as quick to suggest that Bernie Sanders and Jeremy Corbyn have been supported by the "Facebook generation". This label is superficial and deceiving. The reality, again, is a grassroots movement that has attracted young people to attend local campaign meetings in pubs up and down the country. Getting people to get out and be active is key. Social media is incidental to their campaign, not indispensable.

Real-world meetings, big or small, are immensely more powerful than a social media presence. Consider the Trump example again: if 100,000 people receive one of his tweets, how many even notice it in the non-stop stream of information we are bombarded with today? On the other hand, if 100,000 people bellow out a racist slogan at one of his rallies, is there any doubt whether each and every one of those people is engaged with the campaign at that moment? If you could choose between 100 extra Twitter followers or 10 extra activists attending a meeting every month, which would you prefer?

Do we need this new definition of a Friend?

Facebook is redefining what it means to be a friend.

Is somebody who takes pictures of you and insists on sharing them with hundreds of people, tagging your face for the benefit of biometric profiling systems, really a friend?

If you want to find out what a real friend is and who your real friends really are, there is no better way to do so than blowing away your Facebook and Twitter accounts and waiting to see who contacts you personally about meeting up in the real world.

If you look at a profile on Facebook or Twitter, one of the most prominent features is the number of friends or followers they have. Research suggests that humans can realistically cope with no more than about 150 stable relationships. Facebook, however, has turned Friending people into something like a computer game.

This research is also given far more attention than it deserves, though: the number of really meaningful friendships that one person can maintain is far smaller. Think about how many birthdays and spouses' names you can remember; that may be the number of real friendships you can manage well. In his book Busy, Tony Crabbe suggests between 10 and 20 friendships are in this category, and that you should spend all your time with these people rather than letting your time be spread thinly across superficial Facebook "friends".

This same logic can be extrapolated to activism and marketing in its many forms: is it better for a campaigner or publicist to have fifty journalists following him on Twitter (where tweets are often lost in the blink of an eye) or three journalists who he meets for drinks from time to time?

Facebook alternatives: the ultimate trap?

Numerous free, open source projects have tried to offer an equivalent to Facebook and Twitter. GNU social, Diaspora and identi.ca are some of the more well known examples.

Trying to persuade people to move from Facebook to one of these platforms rarely works. In most cases, Metcalfe's law suggests the size of Facebook will suck them back in like the gravity of a black hole.
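Metcalfe's law values a network in proportion to the number of possible pairwise connections between its users, n(n-1)/2, so a network's pull grows quadratically with its size. A quick sketch (the user counts below are illustrative placeholders, not actual figures for any platform):

```python
def possible_connections(n: int) -> int:
    """Metcalfe's law: possible pairwise connections among n users, n*(n-1)/2."""
    return n * (n - 1) // 2

# Illustrative sizes: a niche federated network vs. a dominant incumbent.
small = possible_connections(100_000)
large = possible_connections(2_000_000_000)
print(f"{large // small:,}x more possible connections")
```

A network 20,000 times larger offers roughly 400 million times as many possible connections, which is the "gravity" that pulls people back.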

To help people really beat these monstrosities, the most effective strategy is to help them live without social media, whether it is proprietary or not. The best way to convince them may be to give it up yourself and let them see how much you enjoy life without it.

Share your thoughts

The FSFE community has recently been debating the use of proprietary software and services. Please feel free to join the list and reply on the thread.

27 Jun 2017 7:29pm GMT

QMO: Firefox 55 Beta 4 Testday Results

Hello Mozillians!

As you may already know, last Friday - June 23rd - we held a new Testday event, for Firefox 55.0b4.

Thank you all for helping us make Mozilla a better place - Tiziana Sellitto, Gabriela (gaby2300) and Avinash Sharma.

From the India team: Surentharan.R.A, Fahima Zulfath, Vinothini.K, Rohit R, Sriram B, Baranitharan, terryjohn, P Avinash Sharma, AbiramiSD.


Results:

- Several test cases executed for the Screenshots and Simplify Page features.
- 6 bugs verified: 1357964, 1370746, 1367767, 1355324, 1365638, 1361986.
- 1 new bug filed: 1376184.

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

27 Jun 2017 8:51am GMT

Emily Dunham: Opinion: Levels of Safety Online


The Mozilla All-Hands this week gave me the opportunity to explore an exhibit about the "Mozilla Worldview" that Mitchell Baker has been working on. The exhibit sparked some interesting and sometimes heated discussion (as a direct result of my decision to express unpopular-sounding opinions), and helped me refine my opinions on what it means for someone to be "safe" on the internet.

Spoiler: I think that there are many different levels of safety that someone can have online, and the most desirable ones are also the most difficult to attain.

Obligatory disclaimer: These are my opinions. You're welcome to think I'm wrong. I'd be happy to accept pull requests to this post adding tools for attaining each level of safety, but if you're convinced I'm wrong, the best place to say that would be your own blog. Feel free to drop me a link if you do write up something like that, as I'd enjoy reading it!

Safety to Consume Desired Information

I believe that the fundamental layer of safety that someone can have online is to be able to safely consume information. Even at this basic level, a lot of things can go wrong. To safely consume information, people need internet access. This might mean free public WiFi, or a cell phone data plan. Safety here means that the user won't come to harm solely as a result of what they choose to learn. "Desired information" means that the person gets a chance to find the "best" answer to their question that's available.

How could someone come to harm as a result of choosing to learn something? If you've ever joked about a particular search getting you "put on a watch list", I'm sure you can guess. I happen to hold the opinion that knowledge is an amoral tool, and it's the actions that people take for which they should be held accountable - if you believe that there exist facts that are inherently unethical to know, we'll necessarily differ on the importance of this safety.

How might someone fail to get the information they desired? Imagine someone searching for the best open source social networking tools on a "free" internet connection that's provided and monitored by a social networking giant. Do you think the articles that turn up in their search results would be comparable to what they'd get on a connection provided by a less biased organization?

Why "desired information", and not "truth"? My reason here is selfish. I enjoy learning about different viewpoints held by groups who each insist that the other is completely wrong. If somebody tried to moderate what information is "true" and what's "false", I would probably only be allowed to access the propaganda of at most one of those groups.

Sadly, if your ISP is monitoring your internet connection or tampering with the content you're trying to view, there's not a whole lot that you can do about it. The usual solution is to relocate - either physically, or feign relocation by using an onion router or proxy. By building better tools, legislation, and localization, it's plausible that we could extend this safety to almost everyone in the world within our lifetimes.

Safety to Produce Information Anonymously

I think the next layer of internet safety that people need is the ability to produce information anonymously. The caveat here is that, of course, nobody else is obligated to choose to host your content for you. The safety of hosting providers, especially coupled with their ability to take financial payment while maintaining user anonymity, is a whole other can of worms.

Why does producing information anonymously come before producing information with attribution? Consider the types of danger that accompany producing content online. Attackers often choose their victims based on characteristics that the victims have in the physical world. Attempted attacks often cause harm because the attacker could identify the victim's physical location or social identity. While the best solution would of course be to prevent the attackers from behaving harmfully at all, a less ambitious but more attainable project is to simply prevent them from being able to find targets for their aggression. Imagine an attacker determined to harm all people in a certain group, on an internet where nobody discloses whether or not they're a member of that group: The attacker is forced to go for nobody or everybody, neither of which is as effective as an individually targeted attack. And that's just for verbal or digital assaults - it is extremely difficult to threaten or enact physical harm upon someone whose location you do not know.

Systems that support anonymity and arbitrary account creation open themselves to attempted abuse, but they also provide people with extremely powerful tools to avoid being abused. There are of course tradeoffs - it takes a certain amount of mental overhead, and might feel duplicitous, to use separate accounts for discussing your unfashionable political views and planning the local block party - but there's no denying how much less harm it is possible to come to when behaving anonymously than when advertising your physical identity and location.

How do you produce information anonymously? First, you access the internet in a way that won't make it easy to trace your activity to your house. This could mean booting from a LiveCD and accessing a public internet connection, or booting from a LiveCD and using a proxy or onion router to connect to the sites you wish to access in order to mask your IP address. A LiveCD is safer than using your day-to-day computer profile because browsers store information from sites you visit, and some information about your operating system is sometimes visible to sites you visit. Using a brand-new copy of your operating system, which forgets everything when you shut down, is an easy way to avoid revealing those identifying pieces of information.

Proofread anything that you want to post anonymously to make sure it doesn't contain any details about where you live, or facts that only someone with your experiences would know.

How do you put information online anonymously? Once you have a connection that's hard to trace to your real-world self, it's pretty simple to set up free accounts on mail and web hosting sites under some placeholder name.

Be aware that the vocabulary you use and the way you structure your sentences can sometimes be identifying, as well. A good way to strip all of the uniqueness from your writing voice is to run a piece of writing through http://hemingwayapp.com/ and fix everything that it calls an error. After that, use a thesaurus to add some words you don't usually use anywhere else. Alternately, you could run it through a couple different translation tools to make it sound less like you wrote it.
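The point that vocabulary and sentence structure can identify you is easy to demonstrate with even crude stylometry. Here is a minimal sketch; the two features it measures are deliberately simplistic (real stylometric analysis uses far richer ones), and the sample text is invented:

```python
import re
from collections import Counter

def style_fingerprint(text):
    """Two crude stylometric features: average sentence length (in words)
    and the most frequent words. Real analysis goes much deeper."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    avg_sentence_len = len(words) / max(len(sentences), 1)
    return avg_sentence_len, Counter(words).most_common(3)

sample = "I reckon this is fine. I reckon we should proceed. Naturally, I reckon so."
avg, common = style_fingerprint(sample)
print(avg, common)  # a pet word like 'reckon' stands out immediately
```

If a pet word or a characteristic sentence rhythm shows up in both your named and unnamed writing, that alone can link the two.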

How do you share something you wrote anonymously with your friends? Here's the hard part: You don't. If you're not careful, the way that you distribute a piece of information that you wrote anonymously can make it clear that it came from you. Anonymously posted information generally has to be shared publicly or to an entire forum, because to pick and choose exactly which individuals get to see a piece of content reveals a lot about the identity of the person posting it.

Doing these things can enable you to produce a piece of information on the internet that would be a real nuisance to trace back to you in real life. It's not impossible, of course - there are sneaky tricks like comparing the times when you use a proxy to the times when material shows up online - but someone would only attempt such tricks if they already had a high level of technical knowledge and a grudge against you in particular.

Long story short, in most places with internet access, it is possible but inconvenient to exercise your safety to produce information anonymously. By building better online tools and hosting options, we can extend this safety to more people who have internet access.

Safety to Produce Information Pseudonymously

An important thing to note about producing information anonymously is that if you step up and take credit for another piece of information you posted, you're less anonymous. Add another attribution, and you're easier still to track. It's most anonymous to produce every piece of information under a different throwaway identity, and least anonymous to produce everything under a single identity even if it's made up.

Producing information pseudonymously is when you use a fake name and biography, but otherwise go about the internet as the same person from day to day. The technical mechanics of producing a single pseudonymous post are identical to what I described for acting "anonymously", but I differentiate pseudonymity from anonymity in that the former is continuous - you can form friendships with other humans under a pseudonym.

The major hazard to a pseudonymous online presence is that if you aggregate enough details about your physical life under a single account, someone reading all those details might use them to figure out who you are offline. This is addressed by private forums and boards, which limit the number of possible attackers who can see your posts, as well as by being careful of what information you disclose. Beware, however, that any software vulnerability in a private forum may mean its contents suddenly become public.

In my opinion, pseudonymous identity is an excellent compromise between the social benefits of always being the same person, and physical safety from hypothetical attackers. I find that behaving pseudonymously rather than anonymously helps me build friendships with people whom I'm not sure at first whether to trust, while maintaining a sense of accountability for my reputation that's absent in strictly anonymous communication. But hey, I'm biased - you probably don't know my full name or home address from my web presence, so I'm on the pseudonymity spectrum too.

Safety to Produce Information with Accurate Attribution

The "safety" to produce information with attribution is extremely complex, and the one on which I believe most social justice advocates tend to focus. It is as it sounds: Will someone come to harm if they choose to post their opinions and location under their real name?

For some people, this is the easiest safety to acquire: If you're in a group that's not subject to hate crimes in your area, and your content is only consumed by people who agree with you or feel neutrally toward your views, you have this freedom by default.

For others, this safety is almost impossible to obtain. If the combination of your appearance and the views you're discussing would get you hurt if you said it in public, extreme social change would be required before you had even a chance at being comparably safe online.

I hold the opinion that solving the general case of linking created content to real-world identities is not a computer problem. It's a social problem, requiring a world in which no person offended by something on the internet and aware of where its creator lives is physically able to take action against the content's creator. So it'd be great, but we are not there yet, and the only fictional worlds I've encountered in which this safety can be said to exist are impossibly unrealistic, totalitarian dystopias, or both.

In Summary

In other words, I view misuse of the internet as a pattern of the form "Creator posts content -> attacker views content -> attacker identifies creator -> attacker harms creator". This chain can break, with varying degrees of difficulty, at several points:

First, this chain of outcomes won't begin if the creator doesn't post the content at all. This is the easiest solution, and I point out the "safety to consume desired content" because even someone who never posts online can derive major benefits from the information available on the internet. It's easy, but it's not good enough: Producing as well as consuming content is part of what sets the internet apart from TV or books.

The next essential link in the chain is the attacker identifying the content's creator. If someone has no way to contact you physically or digitally, all they can do is shout nasty things to the world online, and you're free to either ignore them or shout right back. Having nasty things shouted about your work isn't optimal, but it is difficult to feel that your physical or social wellbeing is jeopardized by someone when they have no idea who you are. This is why I believe that the safety to produce information anonymously is so important: It uses software to change the outcome even in circumstances where the attacker's behavior cannot be modified. Perfect psuedonymity also breaks this link, but any software mishap or accidental over-sharing can invalidate it instantly. The link is broken with fewer potential points of failure by creating content anonymously.

The third solution is what I alluded to when discussing the safety of psuedonymity: Prevent the attacker from viewing the content. This is what private, interest-specific forums accomplish reasonably well. There are hazards here, especially if a forum's contents become public unintentionally, or if a dedicated attacker masquerades as a member of the very group they wish to harm. So it helps, and can be improved technologically through proper security practices by forum administrators, and socially via appropriate moderation. It's better, from the perspective that assuming the same online identity each day allows creators to build social bonds with one another, but it's still not optimal.

The fourth and ideal solution is to break the cycle right at the very end, by preventing the attacker from harming the content creator. This seems to be where most advocates argue we should jump straight in, because it's really perfect - it requires no change or compromise from content creators, and total change from those who might be out to harm them. It's the only solution in which people of all appearances and beliefs and locations are equally safe online. However, it's also the most difficult place to break the cycle, and a place at which any error of implementation would create the potential for incalculable abuse.

I've listed these safeties in order of how feasible I believe they are to implement with today's social systems and technologies. I think it's possible to recognize the 4th safety as the top of the heap without using that as an excuse to neglect the benefits that can come from bringing more of the world its 3 lesser but far more attainable cousins.

27 Jun 2017 7:00am GMT

This Week In Rust: This Week in Rust 188

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Friends of the Forest

Our community likes to recognize people who have made outstanding contributions to the Rust Project, its ecosystem, and its community. These people are 'friends of the forest'. The community team has been lax in making nominations for this on a regular basis, but we hope to get back on track!

Today's featured friend of the forest is Mark Simulacrum. As of Friday, June 23, Mark has made sure that all 2,634 open issues on the rust-lang/rust repo have a label! Thank you, Mark, for this heroic effort!

Crate of the Week

This week's crate is strum, a crate that allows you to derive stringify and parse operations for your enums. Thanks to lucab for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

94 pull requests were merged in the last week.

New Contributors

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

Issues in final comment period:

An interesting issue:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Regarding the C++ discussion, when I started programming the only viable oss version control system was cvs. It was horrible, but better than nothing. Then subversion was created and it was like a breath of fresh air, because it did the same thing well. Then alternatives exploded and among them git emerged as this amazing, amazing game-changer because it changed the whole approach to version control, enabling amazing things.

To me, Rust is that git-like game-changer of systems programming languages because it changes the whole approach, enabling amazing things.

- Nathan Stocks on TRPLF.

Thanks to Aleksey Kladov for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

27 Jun 2017 4:00am GMT

26 Jun 2017

feedPlanet Mozilla

Chris McDonald: Message Broker: Maybe Invented Here

NIH (Not Invented Here) is a common initialism that refers to excluding solutions that the project itself did not create. For example, some game studios NIH their own game engines, where others license an existing engine. There are advantages to both and neither is always correct. For me, I have a tendency to NIH things when working in Rust. This post is to help me understand why.

When I started with Rust, I built an example bot for one of The AI Games challenges. The focus of that project was parsing the protocol and presenting it in a usable way for folks to get started with and extend into their bot. I built my parser from scratch, at first focusing on getting it working, then spending time looking at other parsers to see how they go faster and what their interfaces in Rust look like. I updated the implementation to account for this research, and I learned a lot about both Rust itself and implementing low level parsers.

I did something similar for another couple of projects, spending a lot of time implementing things that there were libraries to do, or at least assist with. Something I've started to notice over time: I'm using libraries for the parts I used to NIH, especially the more serious I am about completing the project.

In my broker I'm doing little parsing of my own. I resisted using PlainTalk for my text protocol because I didn't want to write the parser in a few languages. I'm using JSON since most languages have an implementation already, even if it isn't the best to type. My only libraries so far are for encoding and decoding the allowed formats automatically. This means my socket and thread handling has been all custom or built into Rust itself.

I definitely get joy out of working at those layers, which is an easy explanation for NIHing those parts while working on a project in general. I'm also learning a lot about the design as I implement my own. But I find myself at a crossroads: continue to NIH this layer and spend a week getting a workable socket, thread, and job handling story, or entangle the fate of my project with the Rust community more. To explain, let's talk about some pros and cons of NIH versus using others' projects.

NIHing something means you can build something custom to your situation. It often takes more up front time than using an external solution. Your project will need to bring in or build up the expertise to handle that solution. The more central to the heart of your project, the more you should NIH. If the heart of your project is learning, then it could make sense to NIH as much as possible.

Using something external means doing research into the many solutions that could fit, narrowing down to a final few or one to try, then learning how to use that solution and adapting it to your project. Often the solution is not a perfect fit, but the savings on time and required expertise can make up for it. There is an accepted risk of the external project having different goals or being discontinued.

This morning I found myself staring down the problem of reading from my sockets in the server. Wanting to be efficient with resources, I didn't want to rely only on interval polling. I started by looking in the Rust standard library for a solution. The recommendations are to create a thread per connection, use interval polling, or use external libraries. Thread per connection won't work for me with my goals: the resource cost of switching between a lot of threads overshadows the cost of the work you are trying to perform. I had already ruled out interval polling. A less recommended path is wrapping the lower level mechanisms yourself.

So, I started looking into more and less complete solutions to these problems. When using less complete solutions, you can glue a few together. Creating a normalized interface on top of them that your project can use. The more complete solutions will do that normalization for you. Often at a cost of not closely matching your needs. This brings me to what I mean by entangling my project's fate with the Rust community more.

Tokio is a project the Rust community has started to center around for building services, protocol parsers, and other tools. It is designed to handle a large part of the asynchronous layer for its users. I heard about it at RustConf 2016 and read about it in This Week In Rust, but my understanding stayed high level and I hadn't had any serious Rust projects to apply it to. I began looking into it as a solution for my broker and was delighted: their breakdown of this problem is similar to how I have been designing my broker already, the largest difference being their inclusion of futures.

The architecture match with Tokio, as well as the community's energy, makes it a good choice for me. I'll need to learn more about their framework and how to use it well as I go. But, I'm confident I'll be able to refactor my broker to run on top of it in a day or so. Then I can get the rest of the minimal story for this message broker done this week. Once I have it doing the basics with at least the Rust driver, I'll open source it.

26 Jun 2017 11:30pm GMT

The Mozilla Blog: Thoughts on the Latest Development in the U.S. Administration Travel Ban case

This morning, the U.S. Supreme Court decided to hear the lawfulness of the U.S. Administration's revised Travel Ban. We've opposed this Executive Order from the beginning as it undermines immigration law and impedes the travel necessary for people who build, maintain, and protect the Internet to come together.

Today's new development means that until the legal case is resolved the travel ban cannot be enforced against people from the six predominantly Muslim countries who have legitimate ties or relationships to family or business in the U.S. This includes company employees and those visiting close family members.

However, the Supreme Court departed from lower court opinions by allowing the ban to be enforced against visa applicants with no connection to the U.S. We hope that the Government will apply this standard in a manner so that qualified visa applicants who demonstrate valid reasons for travel to the U.S. are not discriminated against, and that these decisions are reliably made to avoid the chaos that travelers, families, and business experienced earlier this year.

Ultimately, we would like the Court to hold that blanket bans targeted at people of particular religions or nationalities are unlawful under the U.S. Constitution and harmfully impact families, businesses, and the global community. We will continue to follow this case and advocate for the free flow of information and ideas across borders, of which travel is a key part.

The post Thoughts on the Latest Development in the U.S. Administration Travel Ban case appeared first on The Mozilla Blog.

26 Jun 2017 7:55pm GMT

Hacks.Mozilla.Org: Opus audio codec version 1.2 released

The Opus audio codec just got another major upgrade with the release of version 1.2 (see demo). Opus is a totally open, royalty-free, audio codec that can be used for all audio applications, from music streaming and storage to high-quality video-conferencing and VoIP. Its standardization by the Internet Engineering Task Force (IETF) in 2012 (RFC 6716) was a major victory for open standards. Opus is the default codec for WebRTC and is now included in all major web browsers.

This new release brings many speech and music quality improvements, especially at low bitrates. The result is that Opus can now push stereo music bitrates down to 32 kb/s and encode full-band speech down to 14 kb/s. All that is achieved while remaining fully compatible with RFC 6716. The new release also includes optimizations, new options, as well as many bug fixes. This demo shows a few of the upgrades that users and implementers will care about the most, including audio samples. For those who haven't used Opus yet, now's a good time to give it a try.

26 Jun 2017 1:43pm GMT

Mozilla Marketing Engineering & Ops Blog: HTTP/2 on MDN

We enabled HTTP/2 on MDN's CDN.

We didn't do anything to optimize for HTTP/2, we just enabled it.

We're seeing performance improvements.

You don't have to get ready before you start using HTTP/2

While doing research to see if turning it on without doing any optimizations was a good idea I read things like:

"It also means that all of those HTTP1 performance techniques are harmful. They will make a HTTP2 website slower, not faster - don't use them." - HTTP2 for front-end web developers


"However, many of the things you think of as being best practices can be detrimental to performance under HTTP/2." - Getting Ready For HTTP2: A Guide For Web Designers And Developers

Which suggest that enabling HTTP/2 on a site optimized for HTTP/1.1 could result in a slower site.

A better way to interpret those quotes is:

If you optimize for HTTP/1.1 and turn on HTTP/2 your site will not be as fast as it could be - but it might still be faster!

On MDN we concatenate a lot of our files but we don't concatenate all of them. For example, our article pages have 9 different files coming from our CDN. I thought we could benefit from a bit of HTTP/2's multiplexing and header compression. And we did. You can see the DNS lookup time drop off in this waterfall from Pingdom:

Waterfall showing over 0.3s of DNS look up for each request.

Waterfall showing DNS lookup for only first asset requested.

Some numbers

Overall, our tests don't show a huge improvement in page load speed but there are small improvements for everyone, and a real improvement for users located far away from our servers. (Hi Australia and China!)

Service           Location   Browser  HTTP/1.1  HTTP/2  Change
Pingdom           Dallas     Chrome   1.54s     1.34s   0.20s
Pingdom           Melbourne  Chrome   2.94s     2.80s   0.14s
WebPageTest       London     IE11     2.39s     2.37s   0.02s
WebPageTest       Australia  Firefox  5.61s     5.17s   0.44s
Google Analytics  All        Chrome   3.74s     3.04s   0.70s
Google Analytics  All        Firefox  3.99s     3.71s   0.28s
Google Analytics  Australia  All      3.01s     1.69s   1.32s
Google Analytics  China      All      8.10s     6.69s   1.41s

I tried to segment our users in Google Analytics to make sure we did not have a negative impact on users relying on HTTP/1.1 and… I couldn't find enough users to draw any conclusions. MDN is lucky like that. (It's possible the IE11 test in the table above is on Windows 7 and does not support HTTP/2, but WebPageTest doesn't identify the OS.) In theory, older browsers should not be affected because the protocol falls back to HTTP/1.1.

There was a lot of variation in the page speed data I examined. I recommend running your before and after benchmark tests multiple times on multiple days so you can take an average. Try to wait a week before drawing conclusions from your analytics data as well.

In a perfect world you don't increase the amount of code on your site or invalidate anyone's caches in the sample time period, but we don't develop in a perfect world.

Read more on HTTP/2


Get our pages into data centres around the world.

This involves changing our hosting services, not a small task, and changing our pages to serve the same content to all logged out users.

Decrease asset size by removing support for older browsers.

If you think working on MDN is a great job because we have very modern browser support requirements, remember we're also working on a 10-year-old code base.

Thanks for using MDN!

26 Jun 2017 12:00am GMT

23 Jun 2017

feedPlanet Mozilla

Gervase Markham: Root Store Policy 2.5 Published

Version 2.5 of Mozilla's Root Store Policy has now been published. This document incorporates by reference the Common CCADB Policy 1.0.1.

With this update, we have mostly worked through the backlog of modernization proposals, and I'd call this a policy fit for a transparent, openly-run root program in 2017. That doesn't mean that there's not more that could be done, but we've come a long way from policy 2.2, which we were using until six months ago, and which hadn't been substantively updated since 2012.

We also hope that, very soon, more root store operators will join the CCADB, which will reduce costs and administrative burdens on all sides, and hopefully allow root programs to be more responsive to changing circumstances and requests for inclusion or change.

23 Jun 2017 4:00pm GMT

Alessio Placitelli: Getting Firefox data faster: the shutdown pingsender

The data our Firefox users share with us is the key to identify and fix performance issues that lead to a poor browsing experience. Collecting it is not enough if we don't manage to receive the data in an acceptable time-frame. My esteemed colleague Chris already wrote about this a couple of times: data latency …

23 Jun 2017 3:54pm GMT

Hacks.Mozilla.Org: An inside look at Quantum DOM Scheduling

Use of multi-tab browsing is becoming heavier than ever as people spend more time on services like Facebook, Twitter, YouTube, Netflix, and Google Docs, making them a part of their daily life and work on the Internet.

Quantum DOM: Scheduling is a significant piece of Project Quantum, which focuses on making Firefox more responsive, especially when lots of tabs are open. In this article, we'll describe problems we identified in multi-tab browsing, the solutions we figured out, the current status of Quantum DOM, and opportunities for contribution to the project.

Problem 1: Task prioritization in different categories

Since multiprocess Firefox (e10s) was first enabled in Firefox 48, web content tabs now run in separate content processes in order to reduce overcrowding of OS resources in a given process. However, after further research, we found that the task queue of the main thread in the content process was still crowded with tasks in multiple categories. The tasks in the content process can come from a number of possible sources: through IPC (interprocess communication) from the main process (e.g. for input events, network data, and vsync), directly from web pages (e.g. from setTimeout, requestIdleCallback, or postMessage), or internally in the content process (e.g. for garbage collection or telemetry tasks). For better responsiveness, we've learned to prioritize tasks for user inputs and vsync above tasks for requestIdleCallback and garbage collection.

Problem 2: Lack of task prioritization between tabs

Inside Firefox, tasks running in foreground and background tabs are executed in first-come-first-served order, in a single task queue. It is quite reasonable to prioritize the foreground tasks over the background ones, in order to increase the responsiveness of the user experience for Firefox users.

Goals & solutions

Let's take a look at how we approached these two scheduling challenges, breaking them into a series of actions leading to achievable goals:

Task categorization

To resolve our first problem, we divide the task queue of the main thread in the content processes into 3 prioritized queues: High (User Input and Refresh Driver), Normal (DOM Event, Networking, TimerCallback, WorkerMessage), and Low (Garbage Collection, IdleCallback). Note: The order of tasks of the same priority is kept unchanged.

Task grouping

Before describing the solution to our second problem, let's define a TabGroup as a set of open tabs that are associated via window.opener and window.parent. In the HTML standard, this is called a unit of related browsing contexts. Tasks are isolated and cannot affect each other if they belong to different TabGroups. Task grouping ensures that tasks from the same TabGroup are run in order while allowing us to interrupt tasks from background TabGroups in order to run tasks from a foreground TabGroup.

In Firefox internals, each window/document contains a reference to the TabGroup object it belongs to, which provides a set of useful dispatch APIs. These APIs make it easier for Firefox developers to associate a task with a particular TabGroup.

How tasks are grouped inside Firefox

Here are several examples to show how we group tasks in various categories inside Firefox:

  1. Inside the implementation of window.postMessage(), an asynchronous task called PostMessageEvent will be dispatched to the task queue of the main thread:
void nsGlobalWindow::PostMessageMozOuter(...) {
  RefPtr<PostMessageEvent> event = new PostMessageEvent(...);
  ...
}

With the new association of DOM windows to their TabGroups and the new dispatching API provided in TabGroup, we can now associate this task with the appropriate TabGroup and specify the TaskCategory:

void nsGlobalWindow::PostMessageMozOuter(...) {
  RefPtr<PostMessageEvent> event = new PostMessageEvent(...);
  // nsGlobalWindow::Dispatch() helps to find the TabGroup of this window for dispatching.
  Dispatch("PostMessageEvent", TaskCategory::Other, event);
}
  2. In addition to the tasks that can be associated with a TabGroup, there are several kinds of tasks inside the content process such as telemetry data collection and resource management via garbage collection, which have no relationship to any web content. Here is how garbage collection starts:
void GCTimerFired() {
  // A timer callback to start the process of Garbage Collection.
  ...
}

void nsJSContext::PokeGC(...) {
  // The callback of GCTimerFired will be invoked asynchronously by enqueuing a task
  // into the task queue of the main thread to run GCTimerFired() after timeout.
  sGCTimer->InitWithFuncCallback(GCTimerFired, ...);
}

To group tasks that have no TabGroup dependencies, a special group called SystemGroup is introduced. Then, the PokeGC() method can be revised as shown here:

void nsJSContext::PokeGC(...) {
  // Target the timer at the SystemGroup's event target for TaskCategory::GC,
  // so GCTimerFired() runs as a SystemGroup task.
  sGCTimer->SetTarget(SystemGroup::EventTargetFor(TaskCategory::GC));
  sGCTimer->InitWithFuncCallback(GCTimerFired, ...);
}

We have now grouped this GCTimerFired task to the SystemGroup with TaskCategory::GC specified. This allows the scheduler to interrupt the task to run tasks for any foreground tab.

  3. In some cases, the same task can be requested either by specific web content or by an internal Firefox script with system privileges in the content process. We'll have to decide if the SystemGroup makes sense for a request when it is not tied to any window/document. For example, in the implementation of DNSService in the content process, an optional TabGroup-versioned event target can be provided for dispatching the result callback after the DNS query is resolved. If the optional event target is not provided, the SystemGroup event target in TaskCategory::Network is chosen. We make the assumption that the request is fired from an internal script or an internal service which has no relationship to any window/document.
nsresult ChildDNSService::AsyncResolveExtendedNative(
  const nsACString &hostname,
  nsIDNSListener *listener,
  nsIEventTarget *target_,
  nsICancelable **result)
{
  nsCOMPtr<nsIEventTarget> target = target_;
  if (!target) {
    target = SystemGroup::EventTargetFor(TaskCategory::Network);
  }

  RefPtr<DNSRequestChild> childReq =
    new DNSRequestChild(hostname, listener, target);
  ...
  return NS_OK;
}

TabGroup categories

Once the task grouping is done inside the scheduler, we assign a cooperative thread per tab group from a pool to consume the tasks inside a TabGroup. Each cooperative thread is pre-emptable by the scheduler via JS interrupt at any safe point. The main thread is then virtualized via these cooperative threads.

In this new cooperative-thread approach, we ensure that only one thread at a time can run a task. This allocates more CPU time to the foreground TabGroup and also ensures internal data correctness in Firefox, which includes many services, managers, and data designed intentionally as singleton objects.

Obstacles to task grouping and scheduling

It's clear that the performance of Quantum DOM scheduling is highly dependent on the work of task grouping. Ideally, we'd expect each task to be associated with only one TabGroup. In reality, however, some tasks are designed to serve multiple TabGroups and require refactoring in advance to support grouping, and not all tasks can be grouped before the scheduler is ready to be enabled. Hence, to enable the scheduler aggressively before all tasks are grouped, preemption is temporarily disabled whenever an ungrouped task arrives, because we never know which TabGroup an ungrouped task belongs to.

Current status of task grouping

We'd like to send thanks to the many engineers from various sub-modules including DOM, Graphic, ImageLib, Media, Layout, Network, Security, etc., who've helped clear these ungrouped (unlabeled) tasks according to the frequency shown in telemetry results.

The table below shows telemetry records of tasks running in the content process, providing a better picture of what Firefox is actually doing:

The good news is that over 80% of tasks (weighted by frequency) have been cleared recently. However, there is still a fair number of anonymous tasks to be cleared. Additional telemetry will help check the mean time between two ungrouped tasks arriving on the main thread. The larger the mean time, the more performance gain we'll see from the Quantum DOM Scheduler.

Contribute to Quantum DOM development

As mentioned above, the more tasks are grouped (labeled), the more benefit we gain from the scheduler. If you are interested in contributing to Quantum-DOM, here are some ways you can help:

If you get started fixing bugs and run into issues or questions, you can usually find the Quantum DOM team in Mozilla's #content IRC channel.

23 Jun 2017 2:56pm GMT

Carsten Book: Sheriff Survey Results

First, a super big thanks for taking part in this year's Sheriff Survey - it helps us a lot!
Here are the results.
1. Overall "satisfaction" - we asked how people rate their interaction with us, from 1 (bad) to 10 (best).
So far from all results:
3.1% = 5
3.1% = 7
12.5% = 8
43.8% = 9
37.5% = 10
2. What can we do better as Sheriffs?
We got a lot of feedback that it's not easy to find out who is on "sheriffduty". We will take steps (like adding a |sheriffduty tag to IRC nicknames), and we also have https://bugzilla.mozilla.org/show_bug.cgi?id=1144589 with the target of showing that name on Treeherder.
We also try to make sure to set needinfo requests on backouts.
In any case, backouts are never meant to be personal, and it's part of our job to try our best to keep our trees open for developers. We also try to provide as much information as possible in the bug about why we backed out a change.
3. Things we can improve in general (not just sheriffs) ?
An interesting idea in the feedback we got was about automation. We will follow up on it, and I have already filed https://bugzilla.mozilla.org/show_bug.cgi?id=1375520 for the idea of having a "Backout Button" in Treeherder in case no sheriff is around - more bugs from ideas to improve general workflows will follow.
Again, thanks for taking part in the survey, and if you have questions/feedback/concerns/ideas you can of course contact me / the team at any time!
- Tomcat

23 Jun 2017 12:39pm GMT

Doug Belshaw: "And she turned round to me and said..."

Star Trek - turning around

I'd always assumed that my grandmother's use of the sentence starter in this post's title came from her time working in factories. I imagined it being a reference to someone turning around on the production line to say something bitchy or snarky. It turns out, however, that the phrase actually relates to performing a volte-face. In other words, it's a criticism of someone changing their opinion in a way that others find hypocritical.

This kind of social judgement plays an important normative role in our society. It's a delicate balance: too much of it and we feel restricted by cultural norms; not enough, and we have no common touchstones, experiences, and expectations.

I raise this because I feel we're knee-deep in developments in an area that can broadly be considered 'notification literacy'. There's an element of technical understanding involved here, but on a social level it could be construed as walking the line between hypocrisy and protecting one's own interests.

Let's take the example of Facebook Messenger:

Facebook Messenger

The Sending… / Sent / Delivered / Read icons serve as ambient indicators that can add value to the interaction. However, that value is only added, I'd suggest, if the people involved in the conversation know how the indicators work, and are happy to 'play by the rules of the game'. In other words, they're in an active, consensual conversation without an unbalanced power dynamic or strained relationship.

I choose not to use Facebook products so can't check directly, but I wouldn't be surprised if there's no option to turn off this double-tick feature. As a result, users are left in a quandary: do they open a message to see it in full (and thereby show that they've seen it), or do they just ignore it (and hope that the person goes away)? I've certainly overheard several conversations about how much of a difficult position this can be for users. Technology solves as well as causes social problems.

A more nuanced approach is demonstrated by Twitter's introduction of the double-tick feature to their direct messaging (DM). In this case, users have the option to turn off these 'read receipts'.

Twitter DM settings

As I have this option unchecked, people who DM me on Twitter can't see whether or not I've read their message. This is important, as I have 'opened up' my DMs, meaning anyone on Twitter can message me. Sometimes, I legitimately ignore people's messages after reading them in full. And because I have read receipts ('double ticks') turned off, they're none the wiser.

Interestingly, some platforms have gone even further than this. Path Messenger, for example, has experimented with allowing users to share more ambient statuses:

Path Messenger

This additional ambient information can be shared at the discretion of the user. It can be very useful in situations where you know the person you're interacting with well. In fact, as Path is designed to be used with your closest contacts, this is entirely appropriate.

I think we're still in a transition period with social networks and norms around them. These, as with all digital literacies, are context-dependent, so what's acceptable in one community may be very different to what's acceptable in another. It's going to be interesting to see how these design patterns evolve over time, and how people develop social norms to deal with them.

Comments? Questions? Write your own blog post referencing this one, or email me: hello@dynamicskillset.com

23 Jun 2017 10:54am GMT

Mozilla Reps Community: New Council Members – Spring 2017

We are very happy to announce that our new council members are already onboarded and working on their focus areas.

We are also extremely happy with the participation in these elections: for the first time we had a record number of 12 nominees, and 215 Reps (75% of the body) voted.

Welcome Ankit, Daniele, Elio, Faye, and Flore, we are very excited to have you onboard.

Here are the areas that each of the new council members will work on:

Of course, they will also all work with the returning council members on the program's strategy and implementation, taking the Reps Program forward.

I would also like to thank and send #mozlove to Adriano, Ioana, Rara and Faisal for all their hard work during their term as Reps Council members. Your work has been impactful and appreciated, and we can't thank you enough.

The Mozilla Reps Council is the governing body of the Mozilla Reps Program. It provides the general vision of the program and oversees day-to-day operations globally. Currently, 7 volunteers and 2 paid staff sit on the council. Find out more on the Reps wiki.

Don't forget to congratulate the new Council members on the Discourse topic!

23 Jun 2017 9:39am GMT