February 21, 2025

Court filings show Meta staffers discussed using copyrighted content for AI training

For years, Meta employees have internally discussed using copyrighted works obtained through legally questionable means to train the company’s AI models, according to court documents unsealed on Thursday.

The documents were submitted by plaintiffs in the case Kadrey v. Meta, one of many AI copyright disputes slowly winding through the U.S. court system. The defendant, Meta, claims that training models on IP-protected works, particularly books, is “fair use.” The plaintiffs, who include authors Sarah Silverman and Ta-Nehisi Coates, disagree.

Previous materials submitted in the suit alleged that Meta CEO Mark Zuckerberg gave Meta’s AI team the OK to train on copyrighted content and that Meta halted AI training data licensing talks with book publishers. But the new filings, most of which show portions of internal work chats between Meta staffers, paint the clearest picture yet of how Meta may have come to use copyrighted data to train its models, including models in the company’s Llama family.

In one chat, Meta employees, including Melanie Kambadur, a senior manager for Meta’s Llama model research team, discussed training models on works they knew may be legally fraught.

“[M]y opinion would be (in the line of ‘ask forgiveness, not for permission’): we try to acquire the books and escalate it to execs so they make the call,” wrote Xavier Martinet, a Meta research engineer, in a chat dated February 2023, according to the filings. “[T]his is why they set up this gen ai org for [sic]: so we can be less risk averse.”

Martinet floated the idea of buying e-books at retail prices to build a training set rather than cutting licensing deals with individual book publishers. After another staffer pointed out that using unauthorized, copyrighted materials might be grounds for a legal challenge, Martinet doubled down, arguing that “a gazillion” startups were probably already using pirated books for training.

“I mean, worst case: we found out it is finally ok, while a gazillion start up [sic] just pirated tons of books on bittorrent,” Martinet wrote, according to the filings. “[M]y 2 cents again: trying to have deals with publishers directly takes a long time …”

In the same chat, Kambadur, who noted Meta was in talks with document hosting platform Scribd “and others” for licenses, cautioned that while using “publicly available data” for model training would require approvals, Meta’s lawyers were being “less conservative” than they had been in the past with such approvals.

“Yeah we definitely need to get licenses or approvals on publicly available data still,” Kambadur said, according to the filings. “[D]ifference now is we have more money, more lawyers, more bizdev help, ability to fast track/escalate for speed, and lawyers are being a bit less conservative on approvals.”

Talks of Libgen

In another work chat relayed in the filings, Kambadur discussed possibly using Libgen, a “links aggregator” that provides access to copyrighted works from publishers, as an alternative to data sources that Meta might license.

Libgen has been sued a number of times, ordered to shut down, and fined tens of millions of dollars for copyright infringement. One of Kambadur’s colleagues responded with a screenshot of a Google Search result for Libgen containing the snippet “No, Libgen is not legal.”

Some decision-makers within Meta appear to have been under the impression that failing to use Libgen for model training could seriously hurt Meta’s competitiveness in the AI race, according to the filings.

In an email addressed to Meta AI VP Joelle Pineau, Sony Theakanath, director of product management at Meta, called Libgen “essential to meet SOTA numbers across all categories,” referring to matching the best state-of-the-art (SOTA) AI models across benchmark categories.

Theakanath also outlined “mitigations” in the email intended to help reduce Meta’s legal exposure, including removing data from Libgen “clearly marked as pirated/stolen” and also simply not publicly citing usage. “We would not disclose use of Libgen datasets used to train,” as Theakanath put it.

In practice, these mitigations entailed combing through Libgen files for words like “stolen” or “pirated,” according to the filings.

In a work chat, Kambadur mentioned that Meta’s AI team also tuned models to “avoid IP risky prompts” — that is, configured the models to refuse to answer questions like “reproduce the first three pages of ‘Harry Potter and the Sorcerer’s Stone’” or “tell me which e-books you were trained on.”

The filings contain other revelations, implying that Meta may have scraped Reddit data for some type of model training, possibly by mimicking the behavior of a third-party app called Pushshift. Notably, Reddit said in April 2023 that it planned to begin charging AI companies to access data for model training.

In one chat dated March 2024, Chaya Nayak, director of product management at Meta’s generative AI org, said that Meta leadership was considering “overriding” past decisions on training sets, including a decision not to use Quora content or licensed books and scientific articles, to ensure the company’s models had sufficient training data.

Nayak implied that Meta’s first-party training datasets — Facebook and Instagram posts, text transcribed from videos on Meta platforms, and certain Meta for Business messages — simply weren’t enough. “[W]e need more data,” she wrote.

The plaintiffs in Kadrey v. Meta have amended their complaint several times since the case was filed in the U.S. District Court for the Northern District of California, San Francisco Division, in 2023. The latest version alleges, among other claims, that Meta cross-referenced certain pirated books with copyrighted books available for license to determine whether it made sense to pursue a licensing agreement with a publisher.

In a sign of how high Meta considers the legal stakes to be, the company has added two Supreme Court litigators from the law firm Paul Weiss to its defense team on the case.

Meta didn’t immediately respond to a request for comment.

Keep reading the article on Tech Crunch


Despite recent layoffs, Meta is expanding in India

Meta made headlines last month for announcing plans to cut 5% of its employees, controversially deeming them “low performers.” But the job cuts aren’t holding Meta back from expanding in certain geographic areas.

Meta is setting up a new site in the country’s tech hub of Bengaluru (formerly known as Bangalore), multiple Meta employees posted on LinkedIn this month. 

Meta is currently hiring for 41 positions there, according to its careers webpage, most of which were posted over the last month. The positions are split between software and machine learning engineering jobs and roles focused on designing chips for Meta’s data centers.

Meta Bengaluru is looking for an “experienced Engineering Director to build and lead our engineering team in India,” one of Meta’s job ads, which was posted three weeks ago on LinkedIn, reads. 

The engineering director in Bengaluru will be responsible for designing a strategy to hire and build founding engineering teams, plus help create “a vision for engineer teams in India,” the job ad states.

The new center is part of Meta’s Enterprise Engineering team, according to a Meta employee’s LinkedIn post. That team focuses on custom internal Meta tools, rather than on Meta’s best-known products like Facebook and Instagram. 

While Meta has several existing offices in India, including Bengaluru, Hyderabad, Gurgaon, New Delhi, and Mumbai, most have fewer job openings, and those are mainly for non-engineering roles. Only one of the 12 available positions at the other locations is engineering-related, Meta’s careers page shows.

A Meta spokesperson in India told TechCrunch that the company is recruiting for a “small number of engineering positions in Bengaluru.”

“We regularly update our location strategies to support our long-term investments,” the spokesperson said.

The 41 positions in Bengaluru are a small proportion of Meta’s global job postings, which currently total over 1,700. But they represent a shift for Meta, which has not traditionally used India as an engineering hub — those positions have historically been based in North America and Europe.

In one example from 2022, an Indian software engineer made headlines after he was laid off by Meta just two days after relocating to Canada for the job.

Meta CEO Mark Zuckerberg has said that Meta intends to backfill the jobs it cut during its most recent round of layoffs.

Keep reading the article on Tech Crunch


February 20, 2025

Meta starts accepting sign-ups for Community Notes on Facebook, Instagram, and Threads

Meta announced on Thursday that it’s now accepting sign-ups for its Community Notes program on Facebook, Instagram, and Threads. The announcement follows the company’s news last month that it’s ending its third-party fact-checking program and moving instead to a Community Notes model similar to the one at X.

In a blog post, Meta explains that Community Notes will be a way for users across its platforms to decide when posts are misleading, and allow them to add more context to the posts.

Starting today, people can sign up to be among the first contributors to the program. To sign up, users must be based in the United States and be over 18 years of age. Plus, users must have an account that’s more than six months old and in good standing, along with a verified phone number or enrollment in two-factor authentication.
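The sign-up requirements above amount to a simple eligibility check. Here is a minimal sketch of those criteria in Python; the function and field names are hypothetical, not Meta's actual API:

```python
from datetime import date, timedelta

def eligible(country, age, account_created, verified_phone, has_2fa):
    """Hypothetical check of the stated Community Notes sign-up criteria:
    US-based, 18 or older, account older than six months, and either a
    verified phone number or two-factor authentication enabled."""
    return (
        country == "US"
        and age >= 18
        and date.today() - account_created > timedelta(days=182)  # ~6 months
        and (verified_phone or has_2fa)
    )
```

(Note that "in good standing" is an additional criterion Meta would evaluate internally; it is omitted here.)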

Meta says contributors will be able to write and submit a Community Note to posts that they think are misleading or confusing. Just like on X, Notes can include things like background information, a tip, or other details that users might find useful.

Notes will have a 500-character limit and must include a link.

“For a Community Note to be published on a post, users who normally disagree, based on how they’ve rated Notes in the past, will have to agree that a Note is helpful,” Meta explains. “Notes will not be added to content when there is no agreement or when people agree a Note is not helpful.”
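The agreement mechanism Meta describes resembles the "bridging" approach X open-sourced for its Community Notes: a note is published only when raters who usually disagree both find it helpful. A minimal sketch of that idea follows; the cluster labels and threshold are illustrative assumptions, not Meta's actual algorithm:

```python
def note_status(ratings, threshold=0.7):
    """Hypothetical bridging-style agreement check.

    ratings: list of (cluster, helpful) pairs, where cluster is 'A' or 'B'
    (two groups of raters who historically disagree) and helpful is a bool.
    """
    by_cluster = {"A": [], "B": []}
    for cluster, helpful in ratings:
        by_cluster[cluster].append(helpful)
    # Require ratings from both sides before deciding anything.
    if not by_cluster["A"] or not by_cluster["B"]:
        return "needs more ratings"
    # Publish only when BOTH clusters independently find the note helpful.
    if all(sum(v) / len(v) >= threshold for v in by_cluster.values()):
        return "published"
    return "not published"

# Helpful to both sides -> published; one-sided support is not enough.
print(note_status([("A", True), ("A", True), ("B", True)]))   # published
print(note_status([("A", True), ("A", True), ("B", False)]))  # not published
```

In production, systems like X's infer the "clusters" from each rater's past rating history rather than from explicit labels, which is what makes cross-viewpoint agreement a meaningful signal.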

Meta says Community Notes will be written and rated by contributors, not by the tech giant itself. All Notes must adhere to Meta’s Community Standards.

“We intend to be transparent about how different viewpoints inform the Notes displayed in our apps, and are working on the right way to share this information,” Meta says.

The company plans to introduce Community Notes in the United States over the next couple of months. Meta hasn’t shared when it plans to bring the feature to additional countries.

Meta’s decision to drop fact-checking for Community Notes has been seen as the company repositioning itself for the Trump presidency, as it takes an approach that’s in favor of unrestricted speech online. When Meta announced the change, Mark Zuckerberg said in a video that fact-checkers were “too politically biased” and had destroyed “more trust than they’ve created.”

Keep reading the article on Tech Crunch

