28 Mar 2024

Planet Mozilla

Mozilla Privacy Blog: How the U.S. Government is leading by example on artificial intelligence

For years, the U.S. government has seen the challenges and opportunities of leveraging AI to advance its mission. Federal agencies have tried to use facial recognition to identify suspects and taxpayers, raising serious concerns about bias and privacy. Some agencies have tried to use AI to identify veterans at higher risk of suicide, where incorrect predictions in either direction can harm veterans' health and well-being. On the flip side, federal agencies are already harnessing AI in promising ways - from making it easier to forecast the weather, to predicting failures of air navigation equipment, to simply automating paperwork. If harnessed well, AI promises to improve the many federal services that Americans rely upon every day.

That's why we're thrilled that, today, the White House established a strong policy to empower federal agencies to responsibly harness the power of AI for public benefit. The policy carefully identifies riskier uses of AI and sets up strong guardrails to ensure those applications are responsible. And, the policy simultaneously creates leadership and incentives for agencies to fully leverage the potential of AI.

The policy is rooted in a simple observation: not all applications of AI are equally risky or equally beneficial. For example, it's far less risky to use AI for digitizing paper documents than to use AI for determining who receives asylum. The former doesn't need more scrutiny beyond existing rules, but the latter introduces risks to human rights and should be held to a much higher bar.

Diagram explaining how this policy mitigates AI risks.

Hence, the policy takes a risk-based approach to prioritize resources for AI accountability. This approach largely ignores AI applications that are low risk or appropriately managed by other policies, and focuses on AI applications that could meaningfully impact people's safety or rights. For example, AI used in electrical grids or autonomous vehicles must undergo an impact assessment, real-world testing, independent evaluation, and ongoing monitoring, with appropriate public notice and human override. And AI used to filter resumes or approve loans must include the aforementioned safety protections, and must also mitigate bias, incorporate public input, undergo ongoing monitoring, and provide reasonable opt-outs. These protections are based on common sense: AI that's integral to domains like critical infrastructure, public safety, and government benefits should be tested, monitored, and subject to human override. The specifics of these protections are aligned with years of rigorous research and incorporate public comment, so that the interventions are more likely to be both effective and feasible.

The policy applies a similar approach to AI innovation. It calls for agencies to create AI strategies with a focus on prioritizing top AI use cases, reducing barriers to AI adoption, setting goals around AI maturity, and building the capacity needed to harness AI in the long run. This, paired with actions in the AI Executive Order that surge AI talent to high-priority locations across the federal government, sets agencies up to better deploy AI where it can be most impactful.

These rules are also coupled with oversight and transparency. Agencies are required to appoint senior Chief AI Officers to oversee both the accountability and innovation mandates in the policy, and agencies must publish their plans to comply with these rules and stop using AI that doesn't comply. In general, federal agencies must also report their AI applications in annual AI use case inventories and provide additional information about how they are managing risks from safety- and rights-impacting AI. The Office of Management and Budget (OMB) will oversee compliance, and that office is required to have sufficient visibility into any exemptions agencies seek from the AI risk mitigation practices outlined in the policy.

These practices are poised to be highly impactful. Federal law enforcement agencies - including immigration and border enforcement - will now have many of their uses of facial recognition and predictive analytics subject to strong risk mitigation practices. Millions of people work for the U.S. government, and these federal workers will now have the protections outlined in this policy if their employers try to surveil and manage their movements and behaviors via AI. And, when federal agencies try to use AI to identify fraud in programs such as food stamps and financial aid, those agencies will now have to make sure that the AI actually works and doesn't discriminate.

These rules also apply regardless of whether a federal agency builds the AI themselves or purchases it from a vendor. That will have a large market-shaping impact, as the U.S. government is the largest purchaser of goods and services in the world, and agencies will now be incentivized to only purchase AI services that comply with the policy. The policy further directs agencies to share their AI code, models, and data - promoting open-source approaches that are vital for the AI ecosystem broadly. Additionally, when procuring AI services, the policy recommends that agencies promote market competition and interoperability among AI vendors, and avoid self-preferential treatment and vendor lock-in. This all helps advance good government, making sure taxpayer dollars are spent on safe and effective AI solutions, not on risky and over-hyped snake oil from contractors.

Now, federal agencies will work to comply with this policy in the coming months. They will also develop follow-up guidance to support the implementation of this policy, advance better procurement of AI, and govern the use of AI in national security applications. The hard work is not over; there are still outstanding questions to tackle as part of this future work, such as how to embed open source requirements more explicitly in the AI procurement process, which would help reduce agencies' dependencies on specific AI vendors.

Amidst a flurry of government activity on AI, it's worth stepping back and reflecting: today is a big day for AI policy. The U.S. government is leading by example with its own rules for AI, and Mozilla stands ready to help make the implementation of this policy a success.


28 Mar 2024 9:24am GMT

The Rust Programming Language Blog: Announcing Rust 1.77.1

The Rust team has published a new point release of Rust, 1.77.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.77.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.77.1

Cargo enabled stripping of debuginfo in release builds by default in Rust 1.77.0. However, due to a pre-existing issue, debuginfo stripping does not behave in the expected way on Windows with the MSVC toolchain.

Rust 1.77.1 therefore disables the new Cargo behavior on Windows for targets that use MSVC. There are no changes for other targets. We plan to eventually re-enable debuginfo stripping in release mode in a later Rust release.
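Projects that still want stripped release binaries can opt back in explicitly through the standard Cargo profile setting (stable since Rust 1.59). A minimal Cargo.toml sketch is below; note that on MSVC targets an explicit opt-in remains subject to the same pre-existing issue described above:

[profile.release]
# Explicitly strip debug information from release binaries,
# restoring the 1.77.0 default regardless of target.
strip = "debuginfo"

Conversely, setting strip = "none" pins the pre-1.77.0 behavior on all targets, independent of whatever default a given Rust release applies.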

Contributors to 1.77.1

Many people came together to create Rust 1.77.1. We couldn't have done it without all of you. Thanks!

28 Mar 2024 12:00am GMT

27 Mar 2024

Planet Mozilla

The Mozilla Blog: Readouts from the Columbia Convening on Openness and AI

On February 29, Mozilla and the Columbia Institute of Global Politics brought together over 40 leading scholars and practitioners working on openness and AI. These individuals - spanning prominent open source AI startups and companies, non-profit AI labs, and civil society organizations - focused on exploring what "open" should mean in the AI era. We previously wrote about the convening, why it was important, and who we brought together.

Today, we are publishing two readouts from the convening.

The first is a technical memorandum that outlines three different approaches to openness in AI, and highlights different components and spectrums of openness. It includes an extensive appendix that outlines key components in the AI stack, and describes how more openness in each component can help advance system and societal goals. Finally, it outlines open questions that would be worthy of future exploration, digging deeper into the specifics of openness and AI. This memorandum will be helpful for technical leaders and practitioners who are shaping the future of AI, so that they can better incorporate principles of openness to make their own AI systems more effective for their goals and more beneficial for society.

The second is a policy memorandum that outlines how and why policymakers should support openness in AI. It outlines the societal benefits from openness in AI, provides a higher-level overview of how different parts of the AI stack contribute to different opportunities and risks, and lays out a series of recommendations about how policymakers can advance openness in AI. This memorandum will be helpful for policymakers, especially those who are grappling with the details of policy interventions related to openness in AI.

In the coming weeks, we will also be publishing a longer document that goes into greater detail about the dimensions of openness in AI. This will help advance our broader work with partners and allies to tackle complex and important topics around openness, competition, and accountability in AI. We will continue to keep mozilla.org/research/cc updated with materials stemming from the Columbia Convening on Openness and AI.


27 Mar 2024 9:50pm GMT