Artificial Intelligence, commonly referred to as AI, is increasingly being used to perform tasks once done only by humans. For example, some companies are pioneering automated journalism, allowing smart software to analyze data, match relevant phrases in a story template, and assemble a narrative ready for publication. AI is also being used in songwriting and to create works of art. These advances generate numerous legal questions. Of course, to the extent these works create value, the questions of who owns them and whether they are protectable under current legal frameworks are important. Regardless of whether these works are protectable, however, the finished product created by AI could still read on other, human-created works, raising infringement concerns.
Assuming AI does write a story or create a work of art that reads on the work of another, who is liable? Is there an infringer at all? Is the infringer the developer who created the AI program that made the infringing work? What about the owner of the AI program? Would it matter if the AI program required significant training before it made the infringing work? Is the infringer then the trainer of the AI?
One could argue that there should be no liability until the work is used or published. But what if the AI uses or publishes the work without human involvement? Alternatively, must a human who wishes to use or publish a work created by AI review all of the data the AI program analyzed in creating that work to ensure, e.g., that the machine did not copy another work? Given that most AI programs constantly learn and synthesize new data, the amount of information analyzed for each subsequent work would likely grow exponentially, making such a task nigh impossible.
As we have previously discussed, the USPTO is struggling with these questions, and there may be no easy answers. New legal theories of infringement and liability are bound to spring forth. For now, it may be best to use AI work-product with caution.