On Thursday, Amnesty International published a new report detailing attempted hacks against two Serbian journalists, allegedly carried out with NSO Group’s spyware Pegasus.
The two journalists, who work for the Serbia-based Balkan Investigative Reporting Network (BIRN), received suspicious text messages containing a link — essentially a phishing attack, according to the nonprofit. In one case, Amnesty said its researchers were able to open the link in a safe environment and confirm that it led to a domain they had previously identified as part of NSO Group’s infrastructure.
“Amnesty International has spent years tracking NSO Group Pegasus spyware and how it has been used to target activists and journalists,” Donncha Ó Cearbhaill, the head of Amnesty’s Security Lab, told TechCrunch. “This technical research has allowed Amnesty to identify malicious websites used to deliver the Pegasus spyware, including the specific Pegasus domain used in this campaign.”
To his point, security researchers like Ó Cearbhaill, who have kept tabs on NSO’s activities for years, have become so adept at spotting signs of the company’s spyware that sometimes all it takes is a quick look at a domain involved in an attack.
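That kind of check can be sketched in a few lines. Below is a minimal, hypothetical illustration of matching a suspicious link against previously identified spyware infrastructure; the domains, indicator set, and matching logic are made-up assumptions for illustration, not Amnesty’s actual tooling or indicator lists.

```python
# Hypothetical sketch: triage a suspicious link by checking its domain
# against previously published spyware infrastructure indicators.
# All domains below are made up for illustration.
from urllib.parse import urlparse

def matches_known_infrastructure(link: str, known_domains: set[str]) -> bool:
    """Return True if the link's host, or any parent domain, is a known indicator."""
    host = (urlparse(link).hostname or "").lower()
    parts = host.split(".")
    # Build every suffix of the hostname, so mail.bad.example also
    # matches an indicator for bad.example.
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(candidates & known_domains)

iocs = {"bad-infrastructure.example"}  # illustrative indicator set
print(matches_known_infrastructure("https://mail.bad-infrastructure.example/x", iocs))  # True
```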
In other words, NSO Group and its customers are losing their battle to stay in the shadows.
“NSO has a basic problem: They are not as good at hiding as their customers think,” John Scott-Railton, a senior researcher at The Citizen Lab, a human rights organization that has investigated spyware abuses since 2012, told TechCrunch.
There is hard evidence proving what Ó Cearbhaill and Scott-Railton believe.
In 2016, Citizen Lab published the first-ever technical report documenting an attack carried out with Pegasus, in that case against a United Arab Emirates dissident. Since then, in less than 10 years, researchers have identified at least 130 people around the world who were targeted or hacked with NSO Group’s spyware, according to a running tally by security researcher Runa Sandvik.
The sheer number of victims and targets can be explained in part by the Pegasus Project, a collaborative journalistic investigation into the abuse of NSO Group’s spyware, based on a leaked list of more than 50,000 phone numbers allegedly entered into an NSO Group targeting system.
But there have also been dozens of victims identified by Amnesty, Citizen Lab, and Access Now, another nonprofit that helps protect civil society from spyware attacks, which did not rely on that leaked list of phone numbers.
Do you have more information about NSO Group, or other spyware companies? From a non-work device and network, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram and Keybase @lorenzofb, or email. You also can contact TechCrunch via SecureDrop.
An NSO Group spokesperson did not respond to a request for comment, which included questions about Pegasus’ invisibility, or lack thereof, and whether NSO Group’s customers are concerned about it.
Apart from nonprofits, NSO Group’s spyware keeps getting caught by Apple, which has been sending notifications to victims of spyware all over the world, often prompting the people who received those notifications to get help from Access Now, Amnesty, and Citizen Lab. These discoveries led to more technical reports documenting spyware attacks carried out with Pegasus, as well as spyware made by other companies.
Perhaps NSO Group’s problem rests in the fact that it sells to countries that use its spyware indiscriminately, including against reporters and other members of civil society.
“The OPSEC mistake that NSO Group is making here is continuing to sell to countries that are going to keep targeting journalists and end up exposing themselves,” Ó Cearbhaill said, using the technical term for operational security.
Keep reading the article on TechCrunch
Mozilla has fixed a security bug in its Firefox for Windows browser that was “being exploited in the wild.”
In a brief update, Mozilla said it updated the browser to Firefox version 136.0.4 after identifying and fixing the new bug, tracked as CVE-2025-2857, which presents a “similar pattern” to a bug that Google patched in its Chrome browser earlier this week.
Anyone exploiting the bug could escape Firefox’s sandbox, which limits the browser’s access to other apps and data on the user’s computer.
The bug also affects other browsers built on the same codebase as Firefox for Windows, such as the Tor Browser, which received a corresponding patch updating it to version 14.0.7.
Kaspersky researcher Boris Larin, who first discovered the Chrome zero-day, confirmed in a post that the root cause of the Chrome bug also affects Firefox. Kaspersky previously linked the use of the exploits to attacks on journalists, employees of educational institutions, and government organizations in Russia.
Keep reading the article on TechCrunch
NHS vendor Advanced will pay just over £3 million ($3.8 million) in fines for not implementing basic security measures before it suffered a ransomware attack in 2022, the U.K.’s data protection regulator has confirmed.
It’s half the fine that the Information Commissioner’s Office had initially sought in August 2024, when the data watchdog said it was going to fine Advanced more than £6 million for its security failings.
The ICO said Wednesday that Advanced “broke data protection law” by not fully rolling out multi-factor authentication prior to its breach, which allowed hackers to break in with stolen credentials and steal the personal information of tens of thousands of people across the United Kingdom.
The LockBit ransomware attack on Advanced caused widespread outages across the NHS, knocking out patient data systems that Advanced maintains on behalf of the health service.
In a statement, Advanced confirmed that it had settled the matter. Advanced declined to name a spokesperson when asked by TechCrunch.
Keep reading the article on TechCrunch
A complaint about poverty in rural China. A news report about a corrupt Communist Party member. A cry for help about corrupt cops shaking down entrepreneurs.
These are just a few of the 133,000 examples fed into a sophisticated large language model that’s designed to automatically flag any piece of content considered sensitive by the Chinese government.
A leaked database seen by TechCrunch reveals China has developed an AI system that supercharges its already formidable censorship machine, extending far beyond traditional taboos like the Tiananmen Square massacre.
The system appears primarily geared toward censoring Chinese citizens online but could be used for other purposes, like improving Chinese AI models’ already extensive censorship.
Xiao Qiang, a researcher at UC Berkeley who studies Chinese censorship and who also examined the dataset, told TechCrunch that it was “clear evidence” that the Chinese government or its affiliates want to use LLMs to improve repression.
“Unlike traditional censorship mechanisms, which rely on human labor for keyword-based filtering and manual review, an LLM trained on such instructions would significantly improve the efficiency and granularity of state-led information control,” Xiao told TechCrunch.
This adds to growing evidence that authoritarian regimes are quickly adopting the latest AI tech. In February, for example, OpenAI said it caught multiple Chinese entities using LLMs to track anti-government posts and smear Chinese dissidents.
The Chinese Embassy in Washington, D.C., told TechCrunch in a statement that it opposes “groundless attacks and slanders against China” and that China attaches great importance to developing ethical AI.
The dataset was discovered by security researcher NetAskari, who shared a sample with TechCrunch after finding it stored in an unsecured Elasticsearch database hosted on a Baidu server.
This doesn’t indicate any involvement from either company — all kinds of organizations store their data with these providers.
There’s no indication of who, exactly, built the dataset, but records show that the data is recent, with its latest entries dating from December 2024.
In language eerily reminiscent of how people prompt ChatGPT, the system’s creator tasks an unnamed LLM with figuring out whether a piece of content has anything to do with sensitive topics related to politics, social life, and the military. Such content is deemed “highest priority” and must be immediately flagged.
Top-priority topics include pollution and food safety scandals, financial fraud, and labor disputes, which are hot-button issues in China that sometimes lead to public protests — for example, the Shifang anti-pollution protests of 2012.
Any form of “political satire” is explicitly targeted. For example, if someone uses historical analogies to make a point about “current political figures,” that must be flagged instantly, and so must anything related to “Taiwan politics.” Military matters are extensively targeted, including reports of military movements, exercises, and weaponry.
A snippet of the dataset (not reproduced here) references prompt tokens and LLMs, confirming that the system uses an AI model to do its bidding.
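Based on that description, a prompt-driven flagger could be wired up roughly as follows. This is a minimal hypothetical sketch: the prompt wording, the labels, and the llm_complete stub are assumptions drawn from the article’s characterization, not code or text from the leaked dataset.

```python
# Hypothetical sketch of an LLM-based content flagger like the one
# described above. Nothing here is taken from the leaked system.

FLAG_PROMPT = """You are a content reviewer. Decide whether the text below
touches on sensitive topics related to politics, social life, or the
military. Answer with exactly one label: "highest priority" or "normal".

Text:
{content}
"""

def llm_complete(prompt: str) -> str:
    # Stand-in for a call to whatever model backs the real system;
    # returns a canned answer so the sketch runs end to end.
    return "highest priority"

def flag_content(content: str) -> str:
    """Return the model's priority label for one piece of content."""
    return llm_complete(FLAG_PROMPT.format(content=content)).strip()

print(flag_content("A post complaining about local officials."))  # highest priority
```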
From this huge collection of 133,000 examples that the LLM must evaluate for censorship, TechCrunch gathered 10 representative pieces of content.
Topics likely to stir up social unrest are a recurring theme. One snippet, for example, is a post by a business owner complaining about corrupt local police officers shaking down entrepreneurs, a rising issue in China as its economy struggles.
Another piece of content laments rural poverty in China, describing run-down towns that only have elderly people and children left in them. There’s also a news report about the Chinese Communist Party (CCP) expelling a local official for severe corruption and believing in “superstitions” instead of Marxism.
There’s extensive material related to Taiwan and military matters, such as commentary about Taiwan’s military capabilities and details about a new Chinese jet fighter. The Chinese word for Taiwan (台湾) alone is mentioned over 15,000 times in the data, a search by TechCrunch shows.
Subtle dissent appears to be targeted, too. One snippet included in the database is an anecdote about the fleeting nature of power that invokes the popular Chinese idiom “when the tree falls, the monkeys scatter.”
Power transitions are an especially touchy topic in China thanks to its authoritarian political system.
The dataset doesn’t include any information about its creators. But it does say that it’s intended for “public opinion work,” which offers a strong clue that it’s meant to serve Chinese government goals, one expert told TechCrunch.
Michael Caster, the Asia program manager of rights organization Article 19, explained that “public opinion work” is overseen by a powerful Chinese government regulator, the Cyberspace Administration of China (CAC), and typically refers to censorship and propaganda efforts.
The end goal is ensuring Chinese government narratives are protected online, while any alternative views are purged. Chinese President Xi Jinping has himself described the internet as the “frontline” of the CCP’s “public opinion work.”
The dataset examined by TechCrunch is the latest evidence that authoritarian governments are seeking to leverage AI for repressive purposes.
OpenAI released a report last month revealing that an unidentified actor, likely operating from China, used generative AI to monitor social media conversations — particularly those advocating for human rights protests against China — and forward them to the Chinese government.
If you know more about how AI is used in state oppression, you can contact Charles Rollet securely on Signal at charlesrollet.12. You also can contact TechCrunch via SecureDrop.
OpenAI also found the technology being used to generate comments highly critical of a prominent Chinese dissident, Cai Xia.
Traditionally, China’s censorship methods have relied on more basic algorithms that automatically block content mentioning blacklisted terms, like “Tiananmen massacre” or “Xi Jinping,” as many users discovered when trying DeepSeek for the first time.
But newer AI tech, like LLMs, can make censorship more efficient by finding even subtle criticism at a vast scale. Some AI systems can also keep improving as they gobble up more and more data.
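For contrast, the older blocklist approach fits in a few lines, which also shows its limits. This sketch uses a tiny illustrative blocklist built from the terms mentioned above; it is not a real blocklist.

```python
# Minimal sketch of keyword-based censorship: block any text containing
# a blacklisted term verbatim. The two-term blocklist is illustrative.

BLOCKLIST = {"Tiananmen massacre", "Xi Jinping"}

def keyword_censor(text: str) -> bool:
    """Return True if the text contains any blacklisted term."""
    return any(term in text for term in BLOCKLIST)

print(keyword_censor("An essay on the Tiananmen massacre"))  # True: direct mention
# A historical analogy sails through, which is exactly the gap an LLM
# classifier is suited to close:
print(keyword_censor("When the tree falls, the monkeys scatter."))  # False
```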
“I think it’s crucial to highlight how AI-driven censorship is evolving, making state control over public discourse even more sophisticated, especially at a time when Chinese AI models such as DeepSeek are making headwaves,” Xiao, the Berkeley researcher, told TechCrunch.
Keep reading the article on TechCrunch