
February 22, 2025

The fallout from HP’s Humane acquisition 

Welcome back to Week in Review. This week we’re looking at the internal chaos surrounding HP’s $116 million acquisition of AI Pin maker Humane; Mira Murati’s new AI venture coming out of stealth; Duolingo killing its iconic owl mascot with a Cybertruck; and more! Let’s get into it.

Humane’s AI pin is dead. The hardware startup announced that most of its assets have been acquired by HP for $116 million, less than half of the $240 million it raised in VC funding. The startup will immediately discontinue sales of its $499 AI Pins, and after February 28, the wearable will no longer connect to Humane’s servers. After that, the devices won’t be capable of calling, messaging, AI queries/responses, or cloud access. Customers who bought an AI Pin in the last 90 days are eligible for a refund, but anyone who bought a device before then is not.

Hours after the HP acquisition was announced, several Humane employees received job offers from HP with pay increases between 30% and 70%, plus HP stock and bonus plans, according to internal documents seen by TechCrunch and two sources who requested anonymity. Meanwhile, other Humane employees — especially those who worked closer to the AI Pin devices — were notified they were out of a job.

Apple’s long-awaited iPhone SE refresh has been revealed, three years after the last major update to the budget-minded smartphone. The 16e is part of an exclusive group of handsets capable of running Apple Intelligence thanks to the addition of an A18 processor. The iPhone 16e also ditches the Touch ID home button in favor of Face ID and swaps out the Lightning port for USB-C. The iPhone 16e starts at $599 and will begin shipping February 28.


This is TechCrunch’s Week in Review, where we recap the week’s biggest news. Want this delivered as a newsletter to your inbox every Saturday? Sign up here.


News

Image: Duolingo owl (Image Credits: Duolingo)

RIP, Duo: Duolingo “killed” its iconic owl mascot with a Cybertruck, and the marketing stunt is going surprisingly well. The company launched a campaign to save Duo — and encourage users to do more lessons — as the company says it’s “Duo or die.” Read more

OpenAI “uncensors” ChatGPT: OpenAI no longer wants ChatGPT to take an editorial stance, even if some users find it “morally wrong or offensive.” That means ChatGPT will now offer multiple perspectives on controversial subjects in an effort to be neutral. Read more

Uber vs. DoorDash: Uber is suing DoorDash, accusing its delivery rival of stifling competition by intimidating restaurant owners into exclusive deals. Uber alleges that DoorDash bullied restaurants into working exclusively with it. Read more

Mira Murati’s next move: Former OpenAI CTO Mira Murati’s new AI startup, Thinking Machines Lab, has come out of stealth. The startup, which includes OpenAI co-founder John Schulman and former OpenAI chief research officer Barret Zoph, will focus on building collaborative “multimodal” systems. Read more

Introducing Grok 3: Elon Musk’s xAI released its latest flagship AI model, Grok 3, and unveiled new capabilities for the Grok iOS and web apps. Musk claims that the new family of models is a “maximally truth-seeking AI” that is sometimes “at odds with what is politically correct.” Read more

Hackers on Steam: Valve removed a video game from Steam that was essentially designed to spread malware. Security researchers found that whoever planted it modified an existing video game in an attempt to trick gamers into installing an info-stealer called Vidar. Read more

Another DEI U-turn: Mark Zuckerberg and Priscilla Chan’s charity will end internal DEI programs and stop providing “social advocacy funding” for racial equity and immigration reforms. The switch comes just weeks after the organization assured staff it would continue to support DEI efforts. Read more

Amazon shuts down its Android app store: Amazon will discontinue its app store for Android in August in an effort to put more focus on the company’s own devices. The company told developers that they will no longer be able to submit new apps to the store. Read more

Mark Zuckerberg’s rebrand didn’t pay off: A study by the Pew Research Center found that Americans’ views of Elon Musk and Mark Zuckerberg are more negative than positive. About 54% of U.S. adults say they have an unfavorable view of Musk while a whopping 67% feel negatively toward Zuckerberg. Read more

Noise-canceling headphones could hurt your brain: A new BBC report considers whether noise-canceling tech might be rewiring the brains of people who use it to tune out pesky background noise — and could lead to the brain forgetting how to filter sounds itself. Read more 

Analysis

Image: An illustration of Elon Musk standing in front of the U.S. Capitol, surrounded by the faces of those in his inner circle, including DOGE members (Image Credits: Sean O’Kane / TechCrunch)

An exhaustive look at the DOGE universe: The dozens of individuals who work under, or advise, Elon Musk and DOGE are a real-life illustration of Musk’s weblike reach in the tech industry. TechCrunch has unveiled the major players in the DOGE universe, from Musk’s inner circle to senior figures, worker bees, and aides — some of whom are advising and recruiting for DOGE. We highlight both the connections between them and how they entered Musk’s orbit. Read more

Keep reading the article on Tech Crunch


February 21, 2025

Meta, X approved ads containing violent anti-Muslim, antisemitic hate speech ahead of German election, study finds

Social media giants Meta and X approved ads targeting users in Germany with violent anti-Muslim and antisemitic hate speech in the run-up to the country’s federal elections, according to new research from Eko, a corporate responsibility nonprofit campaign group.

The group’s researchers tested whether the two platforms’ ad review systems would approve or reject submissions for ads containing hateful and violent messaging targeting minorities ahead of an election where immigration has taken center stage in mainstream political discourse — including ads containing anti-Muslim slurs; calls for immigrants to be imprisoned in concentration camps or to be gassed; and AI-generated imagery of mosques and synagogues being burnt.

Most of the test ads were approved within hours of being submitted for review in mid-February. Germany’s federal elections are set to take place on Sunday, February 23.

Hate speech ads scheduled

Eko said X approved all 10 of the hate speech ads its researchers submitted just days before the federal election is due to take place, while Meta approved half (five ads) for running on Facebook (and potentially also Instagram) — though it rejected the other five.

The reason Meta provided for the five rejections indicated the platform believed there could be risks of political or social sensitivity which might influence voting.

However, the five ads that Meta approved included violent hate speech likening Muslim refugees to a “virus,” “vermin,” or “rodents,” branding Muslim immigrants as “rapists,” and calling for them to be sterilized, burnt, or gassed. Meta also approved an ad calling for synagogues to be torched to “stop the globalist Jewish rat agenda.”

As a side note, Eko says none of the AI-generated imagery it used to illustrate the hate speech ads was labeled as artificially generated, yet Meta still approved half of the 10 ads despite having a policy that requires advertisers to disclose the use of AI imagery in ads about social issues, elections, or politics.

X, meanwhile, approved all five of these hateful ads — and a further five that contained similarly violent hate speech targeting Muslims and Jews.

These additional approved ads included messaging attacking “rodent” immigrants that the ad copy claimed are “flooding” the country “to steal our democracy,” and an antisemitic slur which suggested that Jews are lying about climate change in order to destroy European industry and accrue economic power.

The latter ad was combined with AI-generated imagery depicting a group of shadowy men sitting around a table surrounded by stacks of gold bars, with a Star of David on the wall above them — with the visuals also leaning heavily into antisemitic tropes.

Another ad X approved contained a direct attack on the SPD, the center-left party that currently leads Germany’s coalition government, with a bogus claim that the party wants to take in 60 million Muslim refugees from the Middle East, before going on to try to whip up a violent response. X also scheduled an ad suggesting that “leftists” want “open borders” and calling for the extermination of Muslim “rapists.”

Elon Musk, the owner of X, has used the social media platform, where he has close to 220 million followers, to personally intervene in the German election. In a post in December, he called on German voters to back the far-right AfD party to “save Germany.” He has also hosted a livestream on X with the AfD’s leader, Alice Weidel.

Eko’s researchers disabled all test ads before any that had been approved were scheduled to run to ensure no users of the platform were exposed to the violent hate speech.

Eko says the tests highlight glaring flaws in the ad platforms’ approach to content moderation. Indeed, in the case of X, it’s not clear whether the platform is doing any moderation of ads at all, given that all 10 violent hate speech ads were quickly approved for display.

The findings also suggest that the ad platforms could be earning revenue as a result of distributing violent hate speech.

EU’s Digital Services Act in the frame

Eko’s tests suggest that neither platform is properly enforcing the bans on hate speech that both claim to apply to ad content in their own policies. Furthermore, in the case of Meta, Eko reached the same conclusion after conducting a similar test in 2023, ahead of the EU’s new online governance rules coming into force, suggesting the regime has had no effect on how the company operates.

“Our findings suggest that Meta’s AI-driven ad moderation systems remain fundamentally broken, despite the Digital Services Act (DSA) now being in full effect,” an Eko spokesperson told TechCrunch.

“Rather than strengthening its ad review process or hate speech policies, Meta appears to be backtracking across the board,” they added, pointing to the company’s recent announcement about rolling back moderation and fact-checking policies as a sign of “active regression” that they suggested puts it on a direct collision course with DSA rules on systemic risks.

Eko has submitted its latest findings to the European Commission, which oversees enforcement of key aspects of the DSA on the pair of social media giants. It also said it shared the results with both companies, but neither responded.

The EU has open DSA investigations into Meta and X, which include concerns about election security and illegal content, but the Commission has yet to conclude these proceedings. Back in April, though, it said it suspected Meta of inadequate moderation of political ads.

A preliminary decision on a portion of its DSA investigation on X, which was announced in July, included suspicions that the platform is failing to live up to the regulation’s ad transparency rules. However, the full investigation, which kicked off in December 2023, also concerns illegal content risks, and the EU has yet to arrive at any findings on the bulk of the probe well over a year later.

Confirmed breaches of the DSA can attract penalties of up to 6% of global annual turnover, while systemic non-compliance could even lead to regional access to violating platforms being blocked temporarily.

But, for now, the EU is still taking its time to make up its mind on the Meta and X probes so — pending final decisions — any DSA sanctions remain up in the air.

Meanwhile, it’s now just a matter of hours before German voters go to the polls, and a growing body of civil society research suggests that the EU’s flagship online governance regulation has failed to shield the democratic process of the bloc’s largest economy from a range of tech-fueled threats.

Earlier this week, Global Witness released the results of tests of X and TikTok’s algorithmic “For You” feeds in Germany, which suggest the platforms are biased in favor of promoting AfD content versus content from other political parties. Civil society researchers have also accused X of blocking data access to prevent them from studying election security risks in the run-up to the German poll — access the DSA is supposed to enable.

“The European Commission has taken important steps by opening DSA investigations into both Meta and X, now we need to see the Commission take strong action to address the concerns raised as part of these investigations,” Eko’s spokesperson also told us.

“Our findings, alongside mounting evidence from other civil society groups, show that Big Tech will not clean up its platforms voluntarily. Meta and X continue to allow illegal hate speech, incitement to violence, and election disinformation to spread at scale, despite their legal obligations under the DSA,” the spokesperson added. (We have withheld the spokesperson’s name to prevent harassment.)

“Regulators must take strong action — both in enforcing the DSA but also for example implementing pre-election mitigation measures. This could include turning off profiling-based recommender systems immediately before elections, and implementing other appropriate ‘break-glass’ measures to prevent algorithmic amplification of borderline content, such as hateful content in the run-up to elections.”

The campaign group also warns that the EU is now facing pressure from the Trump administration to soften its approach to regulating Big Tech. “In the current political climate, there’s a real danger that the Commission doesn’t fully enforce these new laws as a concession to the U.S.,” they suggest.

Keep reading the article on Tech Crunch


Despite recent layoffs, Meta is expanding in India

Meta made headlines last month for announcing plans to cut 5% of its employees, controversially deeming them “low performers.” But the job cuts aren’t holding Meta back from expanding in certain geographic areas.

Meta is setting up a new site in India’s tech hub of Bengaluru (formerly known as Bangalore), multiple Meta employees posted on LinkedIn this month.

Meta is currently hiring for 41 positions there, most of them posted over the last month, according to its careers webpage. The positions are split between software and machine learning engineering jobs and roles focused on designing chips for Meta’s data centers.

Meta Bengaluru is looking for an “experienced Engineering Director to build and lead our engineering team in India,” one of Meta’s job ads, which was posted three weeks ago on LinkedIn, reads. 

The engineering director in Bengaluru will be responsible for designing a strategy to hire and build founding engineering teams, plus help create “a vision for engineer teams in India,” the job ad states.

The new center is part of Meta’s Enterprise Engineering team, according to a Meta employee’s LinkedIn post. That team focuses on custom internal Meta tools, rather than on Meta’s best-known products like Facebook and Instagram. 

While Meta has several existing offices in India, including Bengaluru, Hyderabad, Gurgaon, New Delhi, and Mumbai, most have fewer job openings and they’re mainly for non-engineering roles. Only 1 of the 12 available positions at the other locations is engineering-related, Meta’s careers page shows.

A Meta spokesperson in India told TechCrunch that it is recruiting for a “small number of engineering positions in Bengaluru.”

“We regularly update our location strategies to support our long-term investments,” the spokesperson said.

The 41 positions in Bengaluru are a small fraction of Meta’s global job postings, which currently total over 1,700. But they represent a shift for Meta, which has not traditionally used India as an engineering hub; those positions have historically been based in North America and Europe.

In one example from 2022, an Indian software engineer made headlines after he was laid off by Meta just two days after relocating to Canada for the job.

Meta CEO Mark Zuckerberg has said that Meta intends to backfill the jobs it cut during its most recent round of layoffs.

Keep reading the article on Tech Crunch


February 20, 2025

Mark Zuckerberg’s makeover didn’t make people like him, study shows

A study by the Pew Research Center found that Americans’ views of Elon Musk and Mark Zuckerberg skew more negative than positive.

While Zuckerberg has sparked chatter in Silicon Valley with his sudden interest in high fashion, the Meta CEO is less popular than President Trump’s right-hand man, Elon Musk, the report found. While about 54% of U.S. adults say they have an unfavorable view of Musk, 67% feel negatively toward Zuckerberg.

The two tech executives have come under increased scrutiny since the start of President Trump’s second term; both sat alongside the president at his inauguration and made donations to his inauguration fund. While Zuckerberg has upended long-standing Meta content moderation policies to limit fact-checking and action against hate speech, Musk has played a key role in Trump’s camp thus far.

Throughout the first month of Trump’s presidency, Musk has directly involved himself in U.S. government operations, using his political connections to gut government departments like USAID, which provides humanitarian aid around the world. All the while, Musk’s DOGE has overstated the impact of its budget cuts by billions of dollars.

Given Musk’s affiliation with Trump, it follows that along party lines, 85% of respondents who are Democrats or who lean Democratic held unfavorable views of the Tesla CEO. Meanwhile, 73% of Republican or Republican-aligned respondents felt favorably toward Musk.

But Zuckerberg, the Facebook founder, is more universally disliked, though he draws more ire from the left-leaning demographic. While 60% of Republican and Republican-leaning respondents hold an unfavorable view of Zuckerberg, 76% of their Democratic counterparts share that sentiment.

So, while Zuck may be playing the part of the cool guy, Americans haven’t been fooled by his gold chains or musical ambitions, it seems.

Pew’s study involved a panel of 5,086 randomly selected U.S. adults. The survey was conducted from January 27, 2025, through February 2, 2025, so these responses reflect people’s recent opinions.

Keep reading the article on Tech Crunch


Meta starts accepting sign-ups for Community Notes on Facebook, Instagram, and Threads

Meta announced on Thursday that it’s now accepting sign-ups for its Community Notes program on Facebook, Instagram, and Threads. The announcement follows Meta’s news last month that it’s ending its third-party fact-checking program and moving to a Community Notes model similar to the one at X.

In a blog post, Meta explains that Community Notes will be a way for users across its platforms to decide when posts are misleading, and allow them to add more context to the posts.

Starting today, people can sign up to be among the first contributors to the program. To sign up, users must be based in the United States and be over 18 years of age. Plus, users must have an account that’s more than six months old and in good standing, along with a verified phone number or enrollment in two-factor authentication.

Meta says contributors will be able to write and submit a Community Note to posts that they think are misleading or confusing. Just like on X, Notes can include things like background information, a tip, or other details that users might find useful.

Notes will have a 500-character limit and are required to include a link.

“For a Community Note to be published on a post, users who normally disagree, based on how they’ve rated Notes in the past, will have to agree that a Note is helpful,” Meta explains. “Notes will not be added to content when there is no agreement or when people agree a Note is not helpful.”
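For readers curious what that “bridging” requirement could look like in practice, here is a minimal, hypothetical sketch. It is not Meta’s (or X’s) actual implementation; the cluster assignments, thresholds, and function names are illustrative assumptions only. The idea is that a note is published only when raters who normally disagree with each other each find it helpful.

```python
# Hypothetical sketch of bridging-based agreement for Community Notes.
# NOT Meta's real algorithm: clusters, thresholds, and names are assumptions.

from collections import defaultdict

def note_is_publishable(ratings, rater_cluster, threshold=0.6, min_raters=5):
    """ratings: list of (rater_id, helpful: bool) for a single note.
    rater_cluster: dict mapping rater_id -> cluster label, e.g. derived
    from how each rater has scored past notes. The note is published only
    if every cluster independently rates it helpful above the threshold."""
    votes = defaultdict(list)
    for rater_id, helpful in ratings:
        votes[rater_cluster[rater_id]].append(helpful)

    # Require raters from at least two clusters (people who normally
    # disagree) and enough total ratings for the signal to mean anything.
    if len(votes) < 2 or sum(len(v) for v in votes.values()) < min_raters:
        return False

    return all(sum(v) / len(v) >= threshold for v in votes.values())

# Example: raters from two opposing clusters both mostly found the note helpful.
clusters = {"u1": "A", "u2": "A", "u3": "B", "u4": "B", "u5": "B"}
sample = [("u1", True), ("u2", True), ("u3", True), ("u4", True), ("u5", False)]
print(note_is_publishable(sample, clusters))  # True under these assumptions
```

If either cluster's raters found the note unhelpful, the function would return False, mirroring Meta's statement that notes are not added "when there is no agreement."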

Meta says Community Notes will be written and rated by contributors, not by the tech giant itself. All Notes must adhere to Meta’s Community Standards.

“We intend to be transparent about how different viewpoints inform the Notes displayed in our apps, and are working on the right way to share this information,” Meta says.

The company plans to introduce Community Notes in the United States over the next couple of months. Meta hasn’t shared when it plans to bring the feature to additional countries.

Meta’s decision to drop fact-checking for Community Notes has been seen as the company repositioning itself for the Trump presidency, as it takes an approach that’s in favor of unrestricted speech online. When Meta announced the change, Mark Zuckerberg said in a video that fact-checkers were “too politically biased” and had destroyed “more trust than they’ve created.”

Keep reading the article on Tech Crunch


Trump’s FTC is looking into censorship on tech platforms

The Federal Trade Commission announced on Thursday that it will launch a public inquiry into “censorship by tech platforms,” soliciting comments from people who feel they have been demonetized, banned, or otherwise censored due to their speech or affiliations.

“Tech firms should not be bullying their users,” said FTC Chairman Andrew Ferguson in a statement. “This inquiry will help the FTC better understand how these firms may have violated the law by silencing and intimidating Americans for speaking their minds.”

The FTC’s request for public comment does not specify what laws the FTC believes platforms could be violating.

However, the regulator alleges that these policies — which can sometimes cause online creators to lose access to their accounts with no appeals process — could be deemed anti-competitive.

Creators have long bemoaned their opaque relationship with big tech platforms. Startups have even emerged to provide creators with insurance to protect against account hacks, which can lead to losses of income. But the FTC’s invocation of content creators could be a distraction, as this announcement comes at a time when social media executives like Mark Zuckerberg and Elon Musk are loosening restrictions on hate speech and calling into question the relationship between content moderation and the First Amendment.

Cathy Gellis, a lawyer with expertise in technology and free speech, told TechCrunch that this inquiry seems to misinterpret the purview of the First Amendment.

While the First Amendment restricts the government from interfering in individuals’ speech, it does not limit private actors, like most online tech platforms.

“In most cases, internet platforms are private actors, which have their own First Amendment rights to moderate their sites as they would choose,” Gellis said. “If anything it is this inquiry by the FTC, which itself is a government actor, that threatens to violate the First Amendment, by seeking to interfere with the editorial discretion that internet platforms are entitled to have.”

The oft-cited Section 230 of the Communications Decency Act protects online platforms from being held liable for illegal content posted by individuals. In recent years, the Supreme Court has heard cases challenging the legislation, which was written in 1996, before social media existed as it does today. Yet the court has upheld Section 230 after multiple legal challenges.

Though Zuckerberg and Musk have appealed to the First Amendment as they loosen content moderation and fact-checking policies, Snap CEO Evan Spiegel says his peers are misunderstanding the First Amendment.

“A lot of platforms are basically saying, you know, we support the First Amendment, so anyone on our platform should be able to say anything, but that’s sort of misconstruing what the First Amendment does,” Spiegel said in a recent interview with YouTubers Colin and Samir. “Actually, the platform can choose whatever content guidelines or policies it wants under the First Amendment. And so I think there’s been a little bit of misdirection mostly, probably because folks don’t want to moderate content, because when they do, engagement goes down.”

On Wednesday, President Trump signed an executive order that makes independent regulators, like the SEC and FTC, accountable to the White House, which could impact this inquiry. But experts remain skeptical about the constitutionality of Trump’s decree.

Keep reading the article on Tech Crunch

