
Two of my books were used to train AI. Am I angry? Yes, I am!

  • Writer: Connie Lacy
  • 23 minutes ago
  • 2 min read


Author Connie Lacy had two books used to train an AI chatbot without her consent - "The Time Capsule" and "VisionSight: a Novel."


I recently learned that at least two of my books were pirated and used by Anthropic PBC to train the large language models behind its Claude chatbot. Last year, three authors filed a class action lawsuit against Anthropic for copyright infringement. This fall, Anthropic agreed to pay $1.5 billion to settle the lawsuit. Yes, $1.5 BILLION! Experts say that if the company had not settled, it might very well have been put out of business paying multiple billions of dollars.

 

My novels The Time Capsule and VisionSight: a Novel are on the list of 500,000 titles Anthropic used illegally, which means I’ll be paid a small portion of the settlement. The books were reportedly copied without authors’ knowledge by Library Genesis (LibGen) and Pirate Library Mirror (PiLiMi). Anthropic claimed it used the books under the rules governing fair use. The judge disagreed, saying books the company obtained legally could be considered fair use but not books that were copied illegally, like mine.

 

Anthropic has also been sued for allegedly doing the same thing with song lyrics. Billie Eilish and Katy Perry are among the many musicians taking legal action against Anthropic.

 

Class action lawsuits against other AI companies are being filed by artists in various fields, including visual artists.

 

Regulating tech companies whose business model is built on the theft of other people’s art will be challenging. But allowing those companies to get away with it is unacceptable.

 

Claude, developed by Anthropic, is a competitor of ChatGPT. Interestingly, Anthropic was founded by a brother and sister who had worked at OpenAI, the company that created ChatGPT. They left OpenAI to found Anthropic over concerns about ethics and safety. The company says it trains its AI in accordance with principles including freedom, opposition to inhumane treatment and privacy.

 

Authors can learn more, including whether their books were used by Anthropic and how to file a claim, by visiting the Authors Guild news page:

 

Of course, it’s not just artists and authors being hurt by advancements in AI. Average citizens have been targeted with AI deepfakes. Denmark is responding by drafting a law, expected to pass next year, that would grant people a copyright to their own likeness and voice.




 
 