The advent of generative-AI tools has brought challenging questions of accountability to the forefront, especially when those tools generate content that may infringe on someone's copyright. Determining liability, whether it falls on the user who prompted the tool or on the company that developed it, is complex. As the technology outpaces existing legal frameworks, courts have been cautious in navigating these uncharted areas of law and liability.
In Andersen v. Stability AI Ltd., a group of artists brought a class action against the developers of an image generator. They alleged that the tool infringed on their copyrights by producing images substantially similar to their own copyrighted works. Specifically, the artists claimed that every image the tool outputs is infringing because the tool was trained on copies of their work.
To support their claims, the artists pointed to precedent involving direct copying. In that earlier case, music companies alleged that performers played their music without obtaining the necessary licenses or paying royalties. Proving infringement there was simply a matter of showing the music was being played, because the performers were copying it wholesale for their shows. The AI-generated outputs in Andersen, however, are not exact replicas but rather a chaotic mixture of everything used to train the tool, which makes direct copying or substantial similarity far more difficult to prove.
In Andersen, the Northern District of California found it implausible that every output image could be substantially similar enough to infringe on the artists' copyrights. While the court dismissed their claims, it granted them leave to amend their complaint to (1) clarify how their works were copied in the training process and (2) identify infringing output images. In their amended complaint, the artists offered additional evidence to support their direct infringement claims and added a new theory: that the developers should be liable for enabling others to infringe by distributing a tool that can reproduce copyrighted images. That argument invokes precedent from the era when file-sharing platforms were becoming notorious for the unauthorized distribution of copyrighted music.
The cases of that era established that developers of file-sharing services could be held accountable for intentionally encouraging and facilitating the exchange of infringing music files among users. As in the music-performance cases, proving that file-sharing services infringed was simple, because each shared file was an exact copy of the original. The question now is whether this reasoning will extend to the outputs of generative-AI tools.
There may be a parallel to draw with liability for software vendors. In a Connecticut case, the vendor of tenant-screening software was sued for violating the Fair Housing Act based on alleged racial discrimination. The vendor offered a platform that housing providers used to perform criminal background checks. Although the District of Connecticut did not directly rule on the vendor's liability, because the vendor itself was not subject to the Fair Housing Act, the court emphasized a vendor's duty not to sell products that could enable customers to unknowingly violate the law.
Likewise, courts may begin to stress that AI-tool developers have a duty not to distribute tools that can infringe copyrights by replicating protected images. Lawsuits over the outputs of generative-AI tools continue to test the boundaries of copyright law as they make their way through the judicial system. The outcomes of these legal battles could set crucial precedent and shape the future of AI-generated content and its regulation.
Note: David Lindgren drafted this post while a Summer Associate in Fox Rothschild's Minneapolis office.