
On March 30, 2026, Governor Gavin Newsom signed Executive Order N‑5‑26, tightening safety, privacy and procurement rules for artificial intelligence systems that California uses or buys. The order immediately directs state technology and procurement teams to draft new vendor certifications, pilot programs and transparency measures meant to head off misuse. Californians can expect the state both to expand limited generative‑AI tools for public services and to set tougher tests for companies that want to do business with Sacramento.
What the Order Requires
Under the order, state agencies have 120 days to propose new contracting certifications that would let vendors "attest to and explain their policies and safeguards" for AI systems. Those certifications are expected to spell out how companies prevent exploitation such as the spread of child sexual abuse material, how they reduce harmful bias and how they protect civil rights. As outlined in Executive Order N‑5‑26, the state’s chief information security officer must also review federal supply‑chain risk designations and recommend measures so state agencies can continue to procure from companies where appropriate.
The order further directs the Government Operations Agency and other departments to publish data‑minimization tools, to pilot generative‑AI applications for public services and to expand trainings for state employees on how these systems should and should not be used.
Watermarks and Transparency
The order specifically tells the Department of Technology to issue best‑practice guidance on watermarking state‑produced synthetic media. Agencies are instructed to "appropriately watermark" AI‑generated or significantly manipulated images and video so they are identifiable. The goal is to make it harder for bad actors to pass off synthetic content as authentic when it involves state communications or public‑facing services.
According to the order, that guidance will be aligned with existing California statutes that already govern deepfakes and digital‑likeness protections, so agencies are not freelancing new standards on their own.
Why California Is Acting Now
California has already been moving aggressively on AI policy. In 2025, the state passed the Transparency in Frontier Artificial Intelligence Act (SB 53), which set disclosure and safety duties for the largest AI developers, and regulators in Sacramento have been building the enforcement machinery ever since. As Brookings explains, SB 53 made California one of the first jurisdictions to require front‑end safety frameworks and incident reporting from frontier model builders.
This new executive order functions as the procurement arm of that broader strategy, using the state’s buying power to shape market behavior while the legislature and agencies continue to finalize detailed AI rules.
How This Fits the National Picture
The order lands in the middle of a national argument over whether federal policy should preempt state AI rules, and after recent federal actions that, in the view of California officials, have loosened procurement standards. The New York Times reported that Newsom cast the move as a contrast to Washington and called on state officials to begin watermarking AI‑generated or manipulated videos that the state creates.
That tug‑of‑war between state and federal approaches is likely to show up in procurement language, court challenges and intergovernmental talks over the coming months, as everyone tests where the lines of authority actually sit.
Legal Implications
The order is framed as an administrative directive and explicitly states it "is not intended to, and does not, create any rights or benefits, substantive or procedural, enforceable at law or in equity." At the same time, it empowers agencies to recommend reforms to contractor responsibility provisions, including potential suspension and ineligibility authorities for vendors that unlawfully undermine privacy or civil liberties.
Any concrete changes to which companies the state can hire, or to standard contract terms, would flow through those follow‑on administrative reforms and procurement rules rather than from this order alone. In practical terms, the real impact will depend on the specific certifications, contract clauses and departmental policies that agencies put on the table in the coming months.
What to Watch Next
Departments named in the order have a 120‑day window to deliver recommendations, which sets a late‑July deadline for detailed proposals on certifications, watermarks and procurement language. For government‑facing AI vendors, that likely means gearing up for new disclosure and attestation requirements and tighter scrutiny on bias mitigation, child‑safety measures and data‑minimization practices.
Californians who care about how the state uses AI may see opportunities for public comment as the guidance and contract language are drafted and circulated, and as agencies translate this high‑level directive into the fine print that will govern billions of dollars in state contracts.
