Blue Diamond Web Services

Your Best Hosting Service Provider!

September 19, 2024

FTC Says Social Media Platforms Engage in ‘Vast Surveillance’ of Users

Social media platforms are engaging in “vast surveillance” of people online and failing to protect children, according to a new report from the U.S. Federal Trade Commission. And if you thought Big Tech was serious about calling for FTC Chair Lina Khan to be fired before, just wait until this report properly trickles through Silicon Valley today.

The FTC issued a warning letter back in late 2020 to nine social media and video streaming services, alleging their operations were “dangerously opaque” and that their data collection techniques and algorithms were “shrouded in secrecy.” The companies—Amazon, Facebook, YouTube, X, Snap, ByteDance, Discord, Reddit, and WhatsApp—were told the FTC would be investigating their practices, and Thursday’s report is the result of those efforts.

The report notes that the amount of data collected by large tech companies is enormous, even using the words “simply staggering” to describe how users and non-users alike can be tracked in myriad ways. And the data collected directly by the platforms is then combined with data from third-party brokers to compile an even more detailed picture of any given person, according to the FTC.

“They track what we do on and off their platforms, often combining their own information with enormous data sets purchased through the largely unregulated consumer data market. And large firms are increasingly relying on hidden pixels and similar technologies—embedded on other websites—to track our behavior down to each click,” the FTC report reads.

“In fact, the Companies collected so much data that in response to the Commission’s questions, they often could not even identify all the data points they collected or all of the third parties they shared that data with,” the report continues.

The report also warns that AI is complicating the picture even more, with companies feeding data into their artificial intelligence training without consistent approaches to monitoring or testing standards.

The report lists things the FTC would like policymakers to do, emphasizing that “self-regulation is not the answer,” while also laying out changes the big tech companies are supposed to make. On the policymaker side, the FTC says Congress should pass comprehensive federal privacy legislation to limit surveillance and give consumers rights over their data. The FTC also advocates for new privacy legislation that it says will “fill in the gap in privacy protections” that exist in the Children’s Online Privacy Protection Act of 1998, abbreviated as COPPA.

As for the companies, the FTC wants to see these platforms limit data collection and implement “concrete and enforceable data minimization and retention policies.” The FTC also calls on the companies to limit the sharing of data with third parties and to delete consumer data when it’s not needed anymore. The new report also calls on companies to “not collect sensitive information through privacy-invasive ad tracking technologies,” which include pixel trackers, and to give better protections to teens.

But, again, this report is likely to only increase the calls for Khan to be fired, which have grown louder in the business community in recent months.

“The report lays out how social media and video streaming companies harvest an enormous amount of Americans’ personal data and monetize it to the tune of billions of dollars a year,” Lina Khan said in a statement published online.

“While lucrative for the companies, these surveillance practices can endanger people’s privacy, threaten their freedoms, and expose them to a host of harms, from identity theft to stalking. Several firms’ failure to adequately protect kids and teens online is especially troubling. The Report’s findings are timely, particularly as state and federal policymakers consider legislation to protect people from abusive data practices.”

Gizmodo reached out to all nine of the tech companies mentioned by name in the new report but only Discord and Google responded immediately while Meta, which owns Facebook and WhatsApp, declined to comment.

Google gave Gizmodo a very short statement about the 129-page report, only focusing on rather narrow issues like reselling data and ad personalization for kids.

“Google has the strictest privacy policies in our industry—we never sell people’s personal information and we don’t use sensitive information to serve ads,” Google spokesperson José Castañeda said over email. “We prohibit ad personalization for users under 18 and we don’t personalize ads to anyone watching ‘made for kids content’ on YouTube.”

Discord sent a more robust statement, arguing that its business is very different from the other eight companies mentioned in the report.

“The FTC report’s intent and focus on consumers is an important step. However, the report lumps very different models into one bucket and paints a broad brush, which might confuse consumers and portray some platforms, like Discord, inaccurately,” said Kate Sheerin, Head of US/Canada Public Policy for Discord.

“The report itself says ‘the business model varies little across these nine companies.’ Discord’s business model is very different—we are a real-time communications platform with strong user privacy controls and no feeds for endless scrolling. At the time of the study, Discord did not run a formal digital advertising service, which is a central pillar of the report. We look forward to sharing more about Discord and how we protect our users.”

We’ll update this post if we hear back from any of the other companies referenced in the FTC report.


September 18, 2024

This Week in AI: Why OpenAI’s o1 changes the AI regulation game

Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

It’s been just a few days since OpenAI revealed its latest flagship generative model, o1, to the world. Marketed as a “reasoning” model, o1 essentially takes longer to “think” about questions before answering them, breaking down problems and checking its own answers.

There are a great many things o1 can’t do well — and OpenAI itself admits this. But on some tasks, like physics and math, o1 excels despite not necessarily having more parameters than OpenAI’s previous top-performing model, GPT-4o. (In AI and machine learning, “parameters,” usually in the billions, roughly correspond to a model’s problem-solving skills.)

And this has implications for AI regulation.

California’s proposed bill SB 1047, for example, imposes safety requirements on AI models that either cost over $100 million to develop or were trained using compute power beyond a certain threshold. Models like o1, however, demonstrate that scaling up training compute isn’t the only way to improve a model’s performance.

In a post on X, Nvidia research manager Jim Fan posited that future AI systems may rely on small, easier-to-train “reasoning cores” as opposed to the training-intensive architectures (e.g., Meta’s Llama 405B) that’ve been the trend lately. Recent academic studies, he notes, have shown that small models like o1 can greatly outperform large models given more time to noodle on questions.

So was it short-sighted for policymakers to tie AI regulatory measures to compute? Yes, says Sara Hooker, head of AI startup Cohere’s research lab, in an interview with TechCrunch:

[o1] kind of points out how incomplete a viewpoint this is, using model size as a proxy for risk. It doesn’t take into account everything you can do with inference or running a model. For me, it’s a combination of bad science combined with policies that put the emphasis on not the current risks that we see in the world now, but on future risks.

Now, does that mean legislators should rip AI bills up from their foundations and start over? No. Many were written to be easily amendable, under the assumption that AI would evolve far beyond their enactment. California’s bill, for instance, would give the state’s Government Operations Agency the authority to redefine the compute thresholds that trigger the law’s safety requirements.

The admittedly tricky part will be figuring out which metric could be a better proxy for risk than training compute. Like so many other aspects of AI regulation, it’s something to ponder as bills around the U.S. — and world — march toward passage.

News


First reactions to o1: Max got initial impressions from AI researchers, startup founders, and VCs on o1 — and tested the model himself.

Altman departs safety committee: OpenAI CEO Sam Altman stepped down from the startup’s committee responsible for reviewing the safety of models such as o1, likely in response to concerns that he wouldn’t act impartially.

Slack turns into an agent hub: At its parent company Salesforce’s annual Dreamforce conference, Slack announced new features, including AI-generated meeting summaries and integrations with tools for image generation and AI-driven web searches.

Google begins flagging AI images: Google says that it plans to roll out changes to Google Search to make clearer which images in results were AI generated — or edited by AI tools.

Mistral launches a free tier: French AI startup Mistral launched a new free tier to let developers fine-tune and build test apps with the startup’s AI models.

Snap launches a video generator: At its annual Snap Partner Summit on Tuesday, Snapchat announced that it’s introducing a new AI video-generation tool for creators. The tool will allow select creators to generate AI videos from text prompts and, soon, from image prompts. 

Intel inks major chip deal: Intel says it will co-develop an AI chip with AWS using Intel’s 18A chip fabrication process. The companies described the deal as a “multi-year, multi-billion-dollar framework” that could potentially involve additional chip designs.

Oprah’s AI special: Oprah Winfrey aired a special on AI with guests such as OpenAI’s Sam Altman, Microsoft’s Bill Gates, tech influencer Marques Brownlee, and current FBI director Christopher Wray.

Research paper of the week

We know that AI can be persuasive, but can it pull someone out of a deep conspiracy rabbit hole? Well, not all by itself. But a new model from Costello et al. at MIT and Cornell can make a dent in beliefs about untrue conspiracies that persists for at least a couple of months.

In the experiment, they had people who believed in conspiracy-related statements (e.g., “9/11 was an inside job”) talk with a chatbot that gently, patiently, and endlessly offered counterevidence to their arguments. Two months later, the participants reported a roughly 20% reduction in the associated belief, at least as far as these things can be measured.

Those deep into reptilian and deep state conspiracies are unlikely to consult or believe an AI like this, but the approach could be more effective if it were used at a critical juncture, like a person’s first foray into these theories. For instance, if a teenager searches for “Can jet fuel melt steel beams?” they may experience a learning moment instead of a tragic one.

Model of the week

It’s not a model, but it has to do with models: Researchers at Microsoft this week published an AI benchmark called Eureka aimed at (in their words) “scaling up [model] evaluations … in an open and transparent manner.”

AI benchmarks are a dime a dozen. So what makes Eureka different? Well, the researchers say that, for Eureka — which is actually a collection of existing benchmarks — they chose tasks that remain challenging for “even the most capable models.” Specifically, Eureka tests for capabilities often overlooked in AI benchmarks, like visual-spatial navigation skills.

To show just how difficult Eureka can be for models, the researchers tested systems, including Anthropic’s Claude, OpenAI’s GPT-4o, and Meta’s Llama, on the benchmark. No single model scored well across all of Eureka’s tests, which the researchers say underscores the importance of “continued innovation” and “targeted improvements” to models.

Grab bag

In a win for professional actors, California passed two laws, AB 2602 and AB 1836, restricting the use of AI digital replicas.

The legislation, which was backed by SAG-AFTRA, the performers’ union, requires that companies relying on a performer’s digital replica (e.g., cloned voice or image) give a “reasonably specific” description of the replica’s intended use and negotiate with the performer’s legal counsel or labor union. It also requires that entertainment employers gain the consent of a deceased performer’s estate before using a digital replica of that person.

As the Hollywood Reporter notes in its coverage, the bills codify concepts that SAG-AFTRA fought for in its 118-day strike last year with studios and major streaming platforms. California is the second state after Tennessee to impose restrictions on the use of digital actor likenesses; SAG-AFTRA also sponsored the Tennessee effort.

Keep reading the article on TechCrunch

