23 Apr 2026

Planet Mozilla

Jonathan Almeida: Gmail filters based on X-Phabricator-Stamps header

I want Phabricator emails to have a Gmail label so I know which patches I reviewed that later received follow-up comments from other folks.

This is useful when I've reviewed a patch and need to respond in a timely manner to discussions in comment threads I've created.

It's difficult to do this today with something similar to Bugzilla Gmail filters, because Phabricator emails have fewer identifiers that Gmail's simpler filter parameters can match on.

Today I learnt that there is an X-Phabricator-Stamps header in those Phabricator emails that lets you identify yourself as the reviewer on a patch. Using that information, I wrote the Google Apps Script below to run every minute while avoiding processing the same email twice.

A couple of variables were added to the top, and some console.log calls are sprinkled around for my own debugging.

Code
var REVIEWER = "jonalmeida";
var LABEL_NAME = "Phabricator/Comments";
var BODY_MATCH = "commented on this revision.";
var SENDER = "phabricator@mozilla.com";

/**
 * Run once manually to install the per-minute trigger.
 */
function install() {
  uninstall();
  ScriptApp.newTrigger('processInbox')
    .timeBased()
    .everyMinutes(1)
    .create();
}

/**
 * Run once manually to remove the trigger.
 */
function uninstall() {
  ScriptApp.getProjectTriggers().forEach(function(t) {
    ScriptApp.deleteTrigger(t);
  });
  PropertiesService.getScriptProperties().deleteProperty('lastRun');
}

/**
 * Every run, we try to avoid processing the same email twice because
 * there is no API trigger to run a script on every new email received.
 */
function processInbox() {
  var props = PropertiesService.getScriptProperties();
  var lastRun = parseInt(props.getProperty('lastRun') || '0', 10);
  var now = Math.floor(Date.now() / 1000);

  // On first run, look back 2 minutes
  if (lastRun === 0) {
    lastRun = now - 120;
  }

  var label = GmailApp.getUserLabelByName(LABEL_NAME);
  if (!label) {
    label = GmailApp.createLabel(LABEL_NAME);
  }

  console.log("last run: " + lastRun);
  var threads = GmailApp.search("from:" + SENDER + " after:" + lastRun);
  console.log("threads to process: " + threads.length);
  for (var i = 0; i < threads.length; i++) {
    var thread = threads[i];
    var messages = thread.getMessages();
    console.log("messages to process: " + messages.length);
    for (var j = 0; j < messages.length; j++) {
      if (hasReviewerStamp(messages[j])) {
        thread.addLabel(label);
        console.log(thread.getFirstMessageSubject());
        break;
      }
    }
  }

  props.setProperty('lastRun', String(now));
}

function hasReviewerStamp(message) {
  var raw = message.getRawContent();
  var match = raw.match(/^X-Phabricator-Stamps:\s*(.+)$/m);
  if (!match) {
    return false;
  }

  var stamps = match[1].trim().split(/\s+/);
  return (stamps.indexOf("reviewer(@" + REVIEWER + ")") > -1) && raw.indexOf(BODY_MATCH) > -1;
}

/**
 * For debugging - see the list of labels you can search which
 * differs from what is used in the Gmail UI filter.
 */
function listAllLabels() {
  console.log("All labels");
  var labels = GmailApp.getUserLabels();
  for (var i = 0; i < labels.length; i++) {
    console.log(labels[i].getName());
  }
}

23 Apr 2026 12:00am GMT

22 Apr 2026


Mozilla Performance Blog: Telemetry Alerting: How It Works

We recently released the telemetry alerting beta, and announced it in the blog post here! This blog post will dive into the details of how it works across Treeherder and MozDetect. At a high level, MozDetect handles change point detection for telemetry probes, while Treeherder stores the detections and produces the emails/bugs for them.

MozDetect

All of the existing, and any future, change point detection techniques used for telemetry alerting are built in MozDetect. Having these live outside of Treeherder lowers the barrier to entry for adding new features and testing existing ones, without having to set up everything needed for alerting in Treeherder. It's built as a Python module that is run through uv, which makes it very easy for anyone to run the code thanks to uv's excellent Python version and dependency management. How to work with the code in this repository is outlined here, along with how to add your own techniques to it (note that access to mozdata through gcloud is required for this).

Detectors are split into two parts: (i) a detector that performs a comparison between two groups, and (ii) a detector that performs detection on a time series (using the detector from (i)). Our default detection technique, called cdf_squared, lives here. The timeseries_detector_name is the name used to access the detector from the telemetry probe side through the change_detection_technique field. The only method that absolutely needs to be implemented is detect_changes, and it must return a list of Detection objects. These detection objects contain all the information necessary for producing an alert. There is also an optional_detection_info field that can hold extras, such as attachments to be added to Bugzilla bugs, and additional_data that can hold JSON data for storage in the DB. The cumulative distribution function (CDF) squared technique uses these to store the CDF before and after the detection, along with a graph of the two as an attachment for the Bugzilla bug.
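As a rough illustration of that shape, here is a minimal detector sketch in Python. This is not MozDetect's actual code: aside from detect_changes, Detection, optional_detection_info, and additional_data, which the post names, every field, class name, and the toy threshold technique are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """Hypothetical stand-in for MozDetect's Detection object."""
    location: int        # index in the time series where the change was found (assumed field)
    direction: int       # +1 for an upward shift, -1 for downward (assumed encoding)
    optional_detection_info: dict = field(default_factory=dict)  # e.g. bug attachments
    additional_data: dict = field(default_factory=dict)          # JSON data for the DB

class ThresholdDetector:
    """Toy time-series detector: flags any day-over-day jump above a threshold."""
    # name a probe would reference via change_detection_technique
    timeseries_detector_name = "toy_threshold"

    def __init__(self, threshold=10.0):
        self.threshold = threshold

    def detect_changes(self, series):
        detections = []
        for i in range(1, len(series)):
            delta = series[i] - series[i - 1]
            if abs(delta) > self.threshold:
                detections.append(Detection(
                    location=i,
                    direction=1 if delta > 0 else -1,
                    additional_data={"delta": delta},
                ))
        return detections
```

For example, `ThresholdDetector().detect_changes([1, 2, 3, 50, 51])` yields a single detection at index 3 with an upward direction.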

Example of a CDF graph that is provided in bugs.

CDF Squared Detection Technique

The CDF squared technique detects changes in time-series histogram data by comparing CDFs between consecutive windows. It takes two CDFs, each representing the distribution of measurements over a time window, and computes the sum of squared differences between the two CDFs at each bin. The sign of the summed linear difference is then used to assign a direction to the squared difference score so that the output encodes whether the distribution moved to higher values (right shift) or lower values (left shift).
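A small worked example of that base comparison, in Python. The post doesn't spell out cdf_squared's exact normalization or sign convention, so the convention below (positive score means a right shift) is an assumption:

```python
def cdf(hist):
    """Cumulative distribution function of a histogram, normalized to [0, 1]."""
    total = sum(hist)
    acc, out = 0.0, []
    for count in hist:
        acc += count
        out.append(acc / total)
    return out

def cdf_squared_score(cdf_before, cdf_after):
    """Signed sum of squared per-bin CDF differences.

    Sign convention (assumed): when the distribution shifts toward higher
    values, the after-CDF sits below the before-CDF at each bin, so the
    summed linear difference (before - after) is positive.
    """
    sq = sum((a - b) ** 2 for a, b in zip(cdf_after, cdf_before))
    linear = sum(b - a for a, b in zip(cdf_after, cdf_before))
    return sq if linear >= 0 else -sq
```

With a histogram that moves from `[8, 2, 0]` to `[0, 2, 8]`, the CDFs are `[0.8, 1.0, 1.0]` and `[0.0, 0.2, 1.0]`, giving a score of `0.8² + 0.8² = 1.28` with a positive (right-shift) sign.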

For time-series detection, this base comparison is applied in a rolling fashion across the full history of data. Each day's 7-day smoothed CDF is compared against the next one, producing a continuous signal of squared CDF differences over time. A Butterworth low-pass filter is then applied to that signal to remove high-frequency noise while preserving genuine trend changes. Finally, scipy's find_peaks function is used to locate statistically significant peaks and valleys in the filtered signal using a dynamic alert threshold based on the historical data. Information is extracted from those areas and then used to build the detection information needed for the alert generation process.
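The rolling pipeline can be sketched as below, with two plainly-named substitutions: a simple moving average stands in for the Butterworth low-pass filter, and a naive local-extremum scan stands in for scipy's find_peaks. The 7-day CDF smoothing and the dynamic alert threshold are omitted; the fixed threshold here is a made-up constant.

```python
def _cdf(hist):
    total, acc, out = sum(hist), 0.0, []
    for count in hist:
        acc += count
        out.append(acc / total)
    return out

def _score(before, after):
    # signed sum of squared per-bin CDF differences
    # (sign convention assumed: positive = right shift)
    sq = sum((a - b) ** 2 for a, b in zip(after, before))
    return sq if sum(b - a for a, b in zip(after, before)) >= 0 else -sq

def detect_peaks(daily_hists, window=3, threshold=0.3):
    """Rolling detection over daily histograms (simplified sketch)."""
    cdfs = [_cdf(h) for h in daily_hists]
    # consecutive comparisons produce a continuous score signal over time
    signal = [_score(cdfs[i], cdfs[i + 1]) for i in range(len(cdfs) - 1)]
    # moving-average smoothing (stand-in for the Butterworth low-pass filter)
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        smoothed.append(sum(signal[lo:hi]) / (hi - lo))
    # local-extremum scan (stand-in for scipy.signal.find_peaks)
    peaks = []
    for i in range(1, len(smoothed) - 1):
        if (abs(smoothed[i]) > threshold
                and abs(smoothed[i]) >= abs(smoothed[i - 1])
                and abs(smoothed[i]) >= abs(smoothed[i + 1])):
            peaks.append(i)
    return peaks, smoothed
```

Feeding in ten days of one histogram followed by ten days of a shifted one produces a single spike in the score signal at the transition, which survives the smoothing and is picked up by the peak scan.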

Alerting

Our alerting tooling lives in the Treeherder codebase. It's run through our PerfSheriff Bot (called Sherlock) and runs once per day. When a detection is produced from MozDetect, a telemetry alert is added to the database and then the TelemetryAlertManager is called to handle it. The manager's tasks are split into 6 ordered phases:

  1. Update alerts with changes from Bugzilla. This step ensures that any changes that happen in the bugs filed are mirrored into our database. Currently, we only track resolution changes here.
  2. Comment on existing bugs. This step is for updating existing bugs with information from new alerts. This step is not currently being used. In the future, this could be used to inform probe owners that a probe which doesn't produce bugs has produced an alert in the same time range.
  3. File new bugs for alerts. This step handles filing bugs for any new alerts on probes set up for producing bugs.
  4. Modify existing bugs with new alerts. This step handles any modifications needed to existing bugs based on the new bugs that were created. Currently, the "See Also" field is modified for existing bugs to include the new bugs.
  5. Produce emails for new alerts. This step handles producing emails for any alerts set up to produce emails.
  6. Housekeeping. This step retries any failures from the steps above, whether from the current run or past runs. Currently, it's used to retry bug modifications and email sending when we encounter a failure there. This excludes retrying bug filing, since in that case we delete the alert and retry it the next time the alert is generated.
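The ordering and the retry behaviour of the phases above can be sketched roughly as follows. The class name, method names, and failure handling are illustrative assumptions, not Treeherder's actual implementation:

```python
class TelemetryAlertManagerSketch:
    """Illustrative sketch of the six ordered phases (assumed structure)."""

    def __init__(self):
        self.retry_queue = []  # failed emails/modifications, retried in housekeeping
        self.log = []

    def run(self, alerts):
        self.log.append("1:sync-bugzilla-resolutions")
        self.log.append("2:comment-existing-bugs")   # currently unused upstream
        self.log.append("3:file-new-bugs")
        self.log.append("4:modify-existing-bugs")    # e.g. "See Also" updates
        for alert in alerts:
            if alert.get("email"):
                try:
                    self._send_email(alert)
                except RuntimeError:
                    # queued for housekeeping; a failed bug *filing* would
                    # instead be deleted and regenerated on the next run
                    self.retry_queue.append(alert)
        self.log.append("5:send-emails")
        self._housekeeping()
        self.log.append("6:housekeeping")
        return self.log

    def _send_email(self, alert):
        if alert.get("flaky"):
            alert["flaky"] = False  # simulated transient failure: succeeds on retry
            raise RuntimeError("transient email failure")

    def _housekeeping(self):
        pending, self.retry_queue = self.retry_queue, []
        for alert in pending:
            self._send_email(alert)  # retry; real code tracks repeated failures
```

Running it over an alert whose first email send fails shows the phases executing in order, with the failed send drained from the retry queue during housekeeping.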

After the housekeeping step, the manager is done for the day and runs again on the next day to handle any updates and new alerts. Contrary to how alerting works for performance tests in CI, this process is fully automated and requires no human input at any point.

Setting up telemetry probes for alerting happens on the mozilla-central side, in the probe's schema, using the new monitor field in the metadata section (example for email alerts, example for bug alerts). The telemetry alerting documentation has information about how to do this. We then use an index.json file from the telemetry dictionary to gather all the probes that should be alerting. The information there is supplemented with more granular information later in the pipeline, such as the time unit used for the probe, so that we can better format the Bugzilla bug table.

Once a telemetry probe is set up for alerting and is found by our system, the owners (those listed in the email notification fields) will begin either receiving emails or having bugs filed for them. These can also be viewed by everyone on this dashboard.

Example of an alert being viewed in the dashboard.

Acknowledgements

Getting the project to this point involved work from people across multiple teams here at Mozilla. Special thanks to Eduardo Filho for his support on the telemetry probe side, to Bas Schouten for his guidance and work on the CDF Squared detection technique, and to Andrej Glavic and Beatrice Acasandrei for their help in reviewing the Treeherder-related changes.

If you hit any issues with the telemetry alerting system, or have any suggestions, feel free to file a bug in the Testing :: Performance component, or reach out to us in either #perf-help on Slack or #perftest on Matrix.

22 Apr 2026 12:40am GMT

21 Apr 2026


Mozilla Data YouTube Channel: Data Incident Process

Mike Droettboom talks about Data @ Mozilla's process for handling incidents.

21 Apr 2026 11:46pm GMT