Blue Diamond Web Services


September 18, 2024

LinkedIn scraped user data for training before updating its terms of service

LinkedIn may have trained AI models on user data without updating its terms.

LinkedIn users in the US — but not the EU, EEA, or Switzerland, likely due to those regions’ data privacy rules — have an opt-out toggle in their settings screen disclosing that LinkedIn scrapes personal data to train “content creation AI models.” The toggle isn’t new. But, as first reported by 404 Media, LinkedIn initially didn’t refresh its privacy policy to reflect the data use.

The terms of service have since been updated, but ordinarily that happens well before a platform starts using member data for a new purpose, giving users a chance to adjust their accounts or leave if they object to the changes. Not this time, it seems.

So what models is LinkedIn training? Its own, the company says in a Q&A, including models for writing suggestions and post recommendations. But LinkedIn also says that generative AI models on its platform may be trained by “another provider,” like its corporate parent Microsoft.

“As with most features on LinkedIn, when you engage with our platform we collect and use (or process) data about your use of the platform, including personal data,” the Q&A reads. “This could include your use of the generative AI (AI models used to create content) or other AI features, your posts and articles, how frequently you use LinkedIn, your language preference, and any feedback you may have provided to our teams. We use this data, consistent with our privacy policy, to improve or develop the LinkedIn services.”

LinkedIn previously told TechCrunch that it uses “privacy enhancing techniques, including redacting and removing information, to limit the personal information contained in datasets used for generative AI training.”

To opt out of LinkedIn’s data scraping, head to the “Data Privacy” section of the LinkedIn settings menu on desktop, click “Data for Generative AI improvement,” and then toggle off the “Use my data for training content creation AI models” option. You can also attempt to opt out more comprehensively via a separate objection form LinkedIn provides, but the company notes that any opt-out won’t affect training that has already taken place.

The nonprofit Open Rights Group (ORG) has called on the Information Commissioner’s Office (ICO), the U.K.’s independent regulator for data protection rights, to investigate LinkedIn and other social networks that train on user data by default. Earlier this week, Meta announced that it was resuming plans to scrape user data for AI training after working with the ICO to make the opt-out process simpler.

“LinkedIn is the latest social media company found to be processing our data without asking for consent,” Mariano delli Santi, ORG’s legal and policy officer, said in a statement. “The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI. Opt-in consent isn’t only legally mandated, but a common-sense requirement.”

Ireland’s Data Protection Commission (DPC), the supervisory authority responsible for monitoring compliance with the GDPR, the EU’s overarching privacy framework, told TechCrunch that LinkedIn informed it last week that clarifications to its global privacy policy would be issued today.

“LinkedIn advised us that the policy would include an opt-out setting for its members who did not want their data used for training content generating AI models,” a spokesperson for the DPC said. “This opt-out is not available to EU/EEA members as LinkedIn is not currently using EU/EEA member data to train or fine tune these models.”

TechCrunch has reached out to LinkedIn for comment. We’ll update this piece if we hear back.

The demand for more data to train generative AI models has led a growing number of platforms to repurpose or otherwise reuse their vast troves of user-generated content. Some have even moved to monetize this content — Tumblr owner Automattic, Photobucket, Reddit, and Stack Overflow are among the networks licensing data to AI model developers.

Not all of them have made it easy to opt out. When Stack Overflow announced that it would begin licensing content, several users deleted their posts in protest — only to see those posts restored and their accounts suspended.

Keep reading the article on TechCrunch


Edera is building a better Kubernetes and AI security solution from the ground up

Edera, a startup looking to simplify and improve how Kubernetes containers and AI workloads are secured by offering a new hypervisor, today announced that it has raised a $5 million seed funding round led by 645 Ventures and Eniac Ventures.

Kubernetes is now 10 years old, but Edera founders Ariadne Conill (distinguished engineer), Emily Long (CEO), and Alex Zenla (CTO) argue that securing multi-tenant workloads remains an unsolved problem.

Long was previously the COO at Chainguard and Anchore, and has an extensive background in operations and culture, while Conill was the creator of security-focused Linux distribution Wolfi and is a maintainer for Alpine Linux. Until starting Edera, Conill also worked at Chainguard, where she met Long.

Zenla, meanwhile, was an engineer at companies like Radix and Google and has long been an open-source maintainer and contributor. Having worked on IoT at Google and, for even longer, on open-source projects like Dart and Chromium, Zenla saw firsthand how difficult it was to do hardware virtualization on the edge.


“Hardware virtualization is often not available, both because the chips that run inside that hardware don’t have hardware virtualization at all, and because they might be disabled,” she said. “What I realized is there was no solution for this at the moment. There is no way to run an isolated container that didn’t sacrifice performance or require hardware virtualization. So I knew I had to look into this problem because I get frustrated when my stuff’s insecure.”

Zenla ended up going back to Xen, the open-source hypervisor project that, in many ways, enabled the cloud computing revolution. Xen does not require hardware virtualization, in part because that kind of hardware support didn’t yet exist when Xen first launched in 2003.

“What I’ve realized is that old technologies kind of get misunderstood or put to the wayside when the new thing comes along,” she said. “No one seems to look at that and go, ‘Hmm, what were the good ideas there? Or what are the challenges that we have today and if those good ideas can help with that?’ I think a lot of innovation comes from looking at the past and merging that with the current and new, and so I started developing the concept when I realized that I could run Xen on the hardware device for the edge.”

To do that, Zenla essentially rewrote Xen in Rust, but at the time, her focus was on edge devices. It was only after talking to Conill and Long that she realized that she had maybe thought too small and that she could adapt the project to help secure all of their cloud-native infrastructure, not just on the edge. By now, this vision has shifted to also include protecting AI workloads that run on GPUs.
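Since the article doesn’t spell out what a hypervisor that skips hardware virtualization looks like, here is a minimal, purely conceptual Rust sketch of the paravirtualization idea Xen popularized: the guest touches privileged resources only through an explicit, narrow hypervisor interface rather than relying on hardware traps. Every name below (Hypervisor, ToyHypervisor, run_guest) is a hypothetical illustration, not Xen’s or Edera’s actual code.

// Conceptual sketch of a paravirtualization-style contract. The guest is
// compiled against an explicit "hypercall" interface instead of executing
// privileged instructions that hardware would have to trap.

/// The narrow interface a cooperating guest calls into.
trait Hypervisor {
    /// Ask the hypervisor to write a line to the host console.
    fn console_write(&self, msg: &str);
    /// Ask the hypervisor to allocate `pages` pages; returns a start address.
    fn alloc_pages(&mut self, pages: usize) -> Option<usize>;
}

/// A toy host-side implementation standing in for a real hypervisor.
struct ToyHypervisor {
    next_page: usize,
    total_pages: usize,
}

impl Hypervisor for ToyHypervisor {
    fn console_write(&self, msg: &str) {
        println!("[hypervisor console] {msg}");
    }

    fn alloc_pages(&mut self, pages: usize) -> Option<usize> {
        // Refuse requests that exceed the guest's memory budget.
        if self.next_page + pages > self.total_pages {
            return None;
        }
        let start = self.next_page;
        self.next_page += pages;
        Some(start * 4096) // pretend 4 KiB pages
    }
}

/// A guest workload that only reaches privileged resources via the interface,
/// so isolating it does not depend on hardware virtualization support.
fn run_guest(hv: &mut dyn Hypervisor) {
    hv.console_write("guest booting");
    match hv.alloc_pages(4) {
        Some(addr) => hv.console_write(&format!("got 4 pages at {addr:#x}")),
        None => hv.console_write("allocation denied"),
    }
}

fn main() {
    let mut hv = ToyHypervisor { next_page: 0, total_pages: 16 };
    run_guest(&mut hv);
}

A real hypervisor enforces a far richer interface at a much lower level, but the shape of the contract is the same: the guest asks, the hypervisor decides.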

“The original design goals for Kubernetes were for ‘soft’ multi-tenancy where there was a level of trust between users of a cluster. But as Kubernetes has found its way into more domains, the need for stronger security protections has become apparent,” said Joe Beda, an angel investor in Edera and co-creator of Kubernetes. “Edera fills this gap by using virtualization to both reduce risks and, ultimately, reduce costs. It allows Kubernetes to go places it has never gone before!”

We’ve seen previous efforts to better protect containers, including the Kata Containers project. The Edera founders, however, argue that these solutions are essentially bolted onto existing projects, while Edera’s low-level hypervisor was built with security in mind from the ground up.

“People try to solve this problem by adding ridiculous amounts of layers,” Zenla said. “You see that with tool layering in general. It seems like every major enterprise has like 30 different Kubernetes tools and Kubernetes security tools. We hear from people that they just spend all day looking at logs and our idea is: what if we just fixed it?”

For the AI use cases, simply being able to virtualize — and hence share — a GPU is already a win for the industry, but the team is also working on adding support for confidential computing to its solution. The company is working with a set of design partners to test this technology, and with today’s announcement, it is also opening up its Kubernetes project to a wider audience.

As for the funding round, Long told me that the team, with its three female co-founders, “felt a certain amount of intimidation. Ultimately, we really found that there are a lot of VCs who share a common passion for both, obviously, the technology that we’re in, wanting to see computing change, and then also see a more diverse team do that.” The real struggle, she said, was to get people to understand the difference between typical Kubernetes security solutions that exist today — which focus more on observability, monitoring, and alerting, she argued — and what Edera was building.

In addition to 645 Ventures and Eniac Ventures, FPV Ventures, Generationship, Precursor Ventures, and Rosecliff Ventures also participated in this round. Angel investors include Joe Beda, Filippo Valsorda, Mandy Andress, Jeff Behl, and Kleiner Perkins scout Nikitha Suryadevara.

Keep reading the article on TechCrunch


September 17, 2024

JPMorgan could take over Goldman’s Apple Card business

JPMorgan Chase is in talks to take over the Apple Card business from Goldman Sachs, the Wall Street Journal reports.

Goldman has issued credit for the Apple Card since its launch in 2019, but the Wall Street and Silicon Valley giants have been trying to unwind their partnership since last year. In 2023, Goldman decided to abandon its push into consumer banking, including the $17 billion Apple Card program, because it was seen as a distraction from its core business.

Goldman and Apple reportedly approached multiple lenders about becoming the credit card’s new backer, including American Express. Some of those deals were held up due to the Apple Card’s high loss rate, which may cause Goldman to sell the business to JPMorgan for less than its face value.

Keep reading the article on TechCrunch

