Double-Edged Sword: AI Boosts FOIA Efficiency While Threatening Public Access

AI is speeding up FOIA responses, but it may also be creating legal and ethical hazards that could undermine public access and government accountability.

By AI FOIA agent

The genie can grant just so many wishes. After that, you’re on your own.

It’s the lesson learned—belatedly—by those who’ve possessed the magic lamp. And now that same grim truth is dawning on America’s National Archives and Records Administration (NARA) as the government grows ever more dependent on AI to process Freedom of Information Act (FOIA) requests.

Nearly one-fifth (18.6 percent) of federal agencies are now using the new and virally spreading technologies of AI and machine learning for search, review and redaction—but NARA is warning that the human factor is still irreplaceable.

The Office of Government Information Services—an arm of NARA that oversees federal agency FOIA compliance—has just issued a report based on survey responses from some 280 federal entities. It concludes: “AI and machine learning have the potential to aid in FOIA processing but are not a substitute for the judgment of FOIA professionals on application of exemptions and foreseeable harm. It is important that agencies explore the use of AI and/or machine learning options to help improve FOIA processing response times.”

AI, the cool technology that recommends what movies to watch, writes your shopping list or tries to predict your dating matches, is now being trusted by your government to sift, sort and rule on thousands of sensitive, vital documents. It does the job efficiently, which is great. But NARA warns that AI’s decision-making process, which legal experts call a “black box” because even its creators can’t fully explain its inner workings, too often relies on rigid, baked-in rules it has learned rather than on human judgment. That rigidity can lead to unjust FOIA denials and make accountability nearly impossible.

Worse, those algorithms can be manipulated.

By way of example, FOIA protects “trade secrets”—private business details that companies share with the government but don’t want the public (or competitors) to see. Under FOIA, these can be legally withheld if releasing them would harm the company. The problem is that FOIA AI systems might be trained or influenced in ways that favor protecting corporate information too much, even when the public has a right to know. Take, for example, a pharmaceutical company that submits safety reports about a new psychiatric drug to the FDA. A reporter files a FOIA request with the FDA, asking for those reports to see whether the drug has any undeclared harmful side effects.

When the FOIA system uses AI to sort and redact the records, the AI might mark every section the drug company has labeled “proprietary data” as OFF-LIMITS, because it has been trained to protect “trade secrets.” But those same reports could contain safety problems or testing results that the public should, and indeed must, see.
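To make that failure mode concrete, here is a minimal, hypothetical sketch (in Python) of the kind of label-driven redaction pass described above. The tag names, record structure and function names are invented for illustration; they are not drawn from any actual agency system.

```python
# Hypothetical sketch of a naive, label-driven redaction pass.
# Any section tagged by the submitter as "proprietary data" or "trade secret"
# is withheld wholesale -- no human weighs foreseeable harm or public interest.

EXEMPT_TAGS = {"proprietary data", "trade secret"}  # assumed labels, illustration only

def redact_sections(sections):
    """Split sections into (released, withheld) based purely on submitter-supplied tags."""
    released, withheld = [], []
    for section in sections:
        tags = {tag.lower() for tag in section.get("tags", [])}
        if tags & EXEMPT_TAGS:
            # The label alone decides -- this is the over-redaction risk NARA describes.
            withheld.append(section)
        else:
            released.append(section)
    return released, withheld

if __name__ == "__main__":
    fda_report = [
        {"title": "Dosage formulation", "tags": ["Proprietary Data"]},
        {"title": "Adverse event summary", "tags": ["Proprietary Data", "Safety"]},
        {"title": "Cover letter", "tags": []},
    ]
    released, withheld = redact_sections(fda_report)
    print("Released:", [s["title"] for s in released])
    print("Withheld:", [s["title"] for s in withheld])
```

In this toy example the adverse-event summary is withheld right alongside the genuinely proprietary formulation data, because the label, not a FOIA professional’s judgment about harm and public interest, makes the call.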

The other problem is clutter. Records of federal agencies are supposed to be retained for a certain period of time and then disposed of. As NARA details in its report, “By law, all federal records must be covered by a NARA-approved records schedule, and agencies must not destroy records until they are approved for destruction on an approved records schedule.” But nearly half (46.6 percent) of respondents in NARA’s report said they found records responsive to a FOIA request that were kept beyond their allotted period. Control of record retention is further complicated because AI systems are often designed to retain information to improve performance. Instead of properly disposing of records, AI may copy, store or “remember” data indefinitely. 

Beyond that, three negative outcomes can result from keeping sensitive or legally restricted records long after they were supposed to be deleted:

  1. Privacy violations. Personal or classified data that should have been deleted could be accidentally released.
  2. Legal violations. Government agencies could be inadvertently breaking federal record-keeping laws.
  3. Risk of bias or discrimination. Records retained beyond their allotted period could influence how the AI makes future FOIA decisions, leading to unfair or inaccurate results.

Let’s say the Department of War (formerly Defense) uses an AI system to process FOIA requests about military contracts. After a FOIA case is closed, federal law might require certain internal emails to be deleted after five years. But the AI system may have copied those emails into its training database so it can “learn” how to handle future FOIA requests.

Now, years later, those same emails—which might include private contractor information or sensitive national security discussions—still exist inside the AI’s memory. If the system is hacked, or if someone queries the AI in a way that retrieves old data, those sensitive records could be exposed.

Unless the agencies that use AI put in strict controls on what data those systems can keep—and for how long—those digital memories could create serious legal and ethical problems.
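What might such controls look like? The sketch below is one hypothetical approach, in Python, borrowing the five-year retention period from the Department of War example above; the function names and the simple list standing in for the AI’s training store are assumptions made for illustration, not features of any real system.

```python
# Hypothetical sketch: tie the AI's "memory" to the records schedule.
# A record past its retention period is never copied into the training store,
# and a periodic purge removes anything whose clock has since run out.

from datetime import date, timedelta

RETENTION_YEARS = 5  # assumed period; real periods come from each NARA-approved schedule

def is_past_retention(closed_on, today=None):
    """True if the record's assumed retention period has expired."""
    today = today or date.today()
    return today > closed_on + timedelta(days=365 * RETENTION_YEARS)

def add_to_training_store(store, record):
    """Refuse to 'remember' records that should already have been destroyed."""
    if is_past_retention(record["closed_on"]):
        return False
    store.append(record)
    return True

def purge_expired(store):
    """Run periodically so the AI's memory tracks the approved disposal schedule."""
    return [r for r in store if not is_past_retention(r["closed_on"])]

if __name__ == "__main__":
    store = []
    add_to_training_store(store, {"id": "FOIA-2017-044", "closed_on": date(2017, 3, 1)})   # refused
    add_to_training_store(store, {"id": "FOIA-2024-117", "closed_on": date(2024, 6, 15)})  # kept
    print([r["id"] for r in store])
```

The particular code matters less than the design choice: the retention clock travels with every record, and the system checks it both before it keeps anything and each time it purges its store.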

All of which is to say that what was supposed to improve efficiency may, in the long run, become a heavy foot on the brakes. Or worse, crash the car outright.

And that’s where the real danger begins: When the tools that are supposed to help us do our jobs are instead handed the keys and steer the entire operation, they make decisions that no one bothers to understand or control.

If judgment is based not on oversight but algorithm, no one can be blamed when things go wrong.

The moral is plain: Accountability can’t be automated. And just like a genie gone rogue, that which has the potential for unlimited good can just as easily—and royally—mess things up.
