Another AI nightmare that shows the importance of third-party security assessments

07/12/2025

AI is nowadays everybody's buddy. For staffers it's a way to quickly get results and tasks done; for organizations it's a surefire way to boost shareholder value or pique the interest of investors. As long as, of course, you stay away from "free" tools without guardrails and do not upload sensitive information into, for instance, ChatGPT. And of course be mindful of the risks of agentic AI browsers, on which we already published an article in October. Seems a win for everyone, right? Not so fast!

The case of a real treasure trove of information left accessible

On December 3rd, researcher Alex Schapiro published an article outlining the immense risk organizations run by relying on third-party tooling.

In his article, Schapiro revealed that the legal-tech platform Filevine suffered a dramatic security flaw: through a misconfigured API, he was able to access more than 100,000 confidential files — including legal documents, client information, internal logs and potentially court-protected materials — with zero authentication.

Filevine is a cloud-based legal technology company that offers a comprehensive "legal work" platform for law firms, government agencies and corporate legal departments. Its software combines case and matter management, secure document and contract handling, client communication tools, time-tracking, billing and payment processing, all in one centralized workspace.

More recently, Filevine has embedded artificial-intelligence capabilities: users can leverage an AI legal assistant to chat with their case files, generate summaries, automatically draft documents, extract insights from vast collections of legal data, and automate repetitive workflows.

The issue came to light after Schapiro probed a seemingly innocuous subdomain of Filevine. Through subdomain enumeration and reverse-engineering of minified JavaScript, he discovered an open "recommend" endpoint on AWS that responded with a fully scoped admin token for a law firm's entire file repository (akin to giving someone full rights to a shared drive).

Fortunately, Schapiro practiced responsible disclosure: he notified Filevine on October 27, 2025; the company responded on November 4, committed to remediate; and by November 20 a patch was deployed — with no indication that other Filevine clients were affected. The incident serves as a sharp warning for legal professionals (and any industry handling sensitive data) about the risks of rapidly adopting AI-powered tools without rigorous security vetting.

So what can be learned from this real-life example?

First and foremost, do not assume that your SaaS or software provider has everything under control. The stark reality is that there is fierce competition to reap the potential revenue of AI tooling, and first-to-market goals put immense pressure on organizations to deliver services quickly. It is important to request assurance, including the right to conduct security audits and penetration testing, especially in situations where very sensitive information is handled, as in the case Alex Schapiro found.

Three golden rules to avoid this kind of risk


1 Ensure admin-level API endpoints are never exposed without authentication and authorization

Every endpoint — especially those triggering powerful actions or returning sensitive data — must enforce proper identity and access controls. Even a "demo" or "test" subdomain should be treated as production-grade until explicitly sandboxed. As a company procuring services from third-party SaaS providers, ensure this is indeed well sealed off.
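To make the rule concrete, here is a minimal, framework-agnostic sketch of the check every sensitive endpoint should perform before returning anything. The token values and the in-memory token map are invented for the example (real systems delegate this to an identity provider); the point is the two distinct gates — authentication first, authorization second:

```python
# Sketch: an admin-level request handler that refuses to return data
# unless the caller is both authenticated AND authorized.
# Tokens and roles below are placeholders for illustration only.

import hmac

# Stand-in for an identity provider's token store (illustrative only).
VALID_TOKENS = {
    "token-admin-123": {"role": "admin"},
    "token-user-456": {"role": "user"},
}

def handle_admin_request(headers: dict) -> tuple[int, str]:
    """Return an (HTTP status, body) pair for a request to an admin endpoint."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return 401, "missing credentials"        # authentication gate
    token = auth.removeprefix("Bearer ")
    # Constant-time comparison avoids leaking token contents via timing.
    principal = next(
        (p for t, p in VALID_TOKENS.items() if hmac.compare_digest(t, token)),
        None,
    )
    if principal is None:
        return 401, "invalid token"              # authentication gate
    if principal["role"] != "admin":
        return 403, "forbidden"                  # authorization gate
    return 200, "sensitive admin data"
```

Note the order: an unauthenticated caller never learns whether a resource exists, and an authenticated non-admin is stopped at a separate authorization check — precisely the two controls the exposed "recommend" endpoint lacked.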

2 Continuously audit and test all external interfaces (including hidden subdomains)

Use subdomain enumeration, fuzzing, and reverse-engineering tools as part of the security review lifecycle — especially for third-party SaaS or AI services. Attackers don't always target the "main" site; the weakest link is often a forgotten subdomain or staging environment. This includes demanding mandatory encryption of all data, be it at rest or in transit.
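The subdomain-enumeration step mentioned above can be sketched in a few lines: try common labels against a domain and report which ones resolve. The wordlist and domain below are placeholders — real reviews use far larger lists and dedicated tooling — and such probing should only ever be run against systems you are authorized to test:

```python
# Sketch: naive subdomain enumeration via DNS resolution.
# Only run against domains you are authorized to assess.

import socket

# Tiny illustrative wordlist; real tooling uses thousands of labels.
COMMON_LABELS = ["www", "api", "staging", "demo", "test", "dev"]

def enumerate_subdomains(domain: str, labels=COMMON_LABELS) -> list[str]:
    """Return the candidate subdomains of `domain` that resolve to an IP."""
    found = []
    for label in labels:
        host = f"{label}.{domain}"
        try:
            socket.gethostbyname(host)
        except socket.gaierror:
            continue  # label does not resolve; move on
        found.append(host)
    return found
```

A hit on a label like "staging" or "demo" is exactly the kind of forgotten surface the rule warns about, and it should then be checked against rule 1: does it enforce authentication and authorization like production?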

3 Insist on transparency and responsible disclosure when adopting third-party platforms

Organizations should require vendors to have a clear vulnerability-reporting process, timely patching, and willingness to engage with security researchers — rather than hiding flaws under the guise of an "NDA" or a "secret bug bounty".

These are golden rules that any audit should verify. Unfortunately, this is all too often still not the case. Luckily, in the EU, under the recently adopted NIS2 directive, third-party risk management is specifically identified as part of the supply chain principle. But instead of hoping for a lenient auditor at your next security audit, ask them to include these controls, or consider changing auditors if these controls are not audited. After all, a security certification is nothing but a piece of paper if you are breached and your data is exposed. The auditor will not lose sleep; you will.

Important note: please be aware that this was an isolated case and not all of Filevine's customers were impacted.