The Counsel Brief
The front lines of innovation are no longer just in boardrooms or labs—they’re in courtrooms and congressional hearing rooms.
This week, the U.S. Copyright Office waded into the AI copyright wars, the SEC floated long-awaited crypto rules, and an Australian report revealed AI’s troubling blind spots when it comes to hiring. Whether you’re building, lawyering, or just watching this space, these developments hint at how fast the tech-legal landscape is evolving—and how much is still unsettled.
This Week in Tech x Law
U.S. Copyright Office Drops AI Training Bombshell
The Office released its long-awaited report on copyright issues in generative AI. Spoiler: training AI on copyrighted works, especially at commercial scale, is unlikely to qualify as fair use.
While genuinely transformative uses may still survive under the fair use doctrine, the report leans toward licensing for training data. That’s a massive signal to OpenAI, Meta, and others scraping the internet like it’s a free buffet.
SEC Floats New Crypto Rules (Finally)
SEC Chair Paul Atkins says new regulations are coming to clarify when crypto tokens are securities—and how to handle them.
The agency is signaling a shift from blanket hostility to something more surgical. Think: registered tokens, cleaner custody rules, and maybe a safer path for startups to launch without immediately triggering enforcement.
Biased Bots and Broken Resumes
A recent Australian study revealed that AI-powered hiring tools disproportionately misinterpret candidates with disabilities or non-native English accents.
This is what happens when training data is pulled from the same LinkedIn-polished, U.S.-centric sources over and over. It’s not just bad UX—it’s discrimination by design.
The Counsel’s Take: Training on Thin Ice
The Copyright Office’s report is the clearest signal yet that the U.S. may diverge from Silicon Valley’s “train first, license never” mindset. The core issue? AI models are trained on the creative output of millions—journalists, musicians, authors, YouTubers—most of whom never gave consent, never got paid, and often don’t even know it’s happening.
This matters. Not just because of legal exposure (though that’s coming), but because if the courts side with creators, the business model underpinning many AI startups could fracture overnight.
There’s also a deeper question of values: Should we allow AI to thrive off the intellectual property of others without meaningful safeguards? And if we do—who gets to say what’s “transformative”? A judge? A company? An algorithm?
We’re heading toward a legal reckoning. And frankly, it’s overdue.
🧾 Quote of the Week
“You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products… I just don’t understand how that can be fair use.” — Judge Vince Chhabria, on Meta’s use of copyrighted works in AI training
⚙️ Tool or Term of the Week
Legal Concept: Transformative Use (Fair Use Doctrine)
A use is transformative if it adds new expression, meaning, or message to the original work rather than merely repackaging it. Courts weigh transformativeness under the first of the four statutory fair use factors (the purpose and character of the use), alongside the effect on the market for the original. This one word could determine the fate of multi-billion-dollar AI companies.