Blue Diamond Web Services


April 25, 2025

An OpenAI researcher who worked on GPT-4.5 had their green card denied

Kai Chen, a Canadian AI researcher working at OpenAI who’s lived in the U.S. for 12 years, was denied a green card, according to Noam Brown, a leading research scientist at the company. In a post on X, Brown said that Chen learned of the decision Friday and must soon leave the country.

“It’s deeply concerning that one of the best AI researchers I’ve worked with […] was denied a U.S. green card,” wrote Brown. “A Canadian who’s lived and contributed here for 12 years now has to leave. We’re risking America’s AI leadership when we turn away talent like this.”

Another OpenAI employee, Dylan Hunn, said in a post that Chen was “crucial” for GPT-4.5, one of OpenAI’s flagship AI models.

Green cards can be denied for all sorts of reasons, and the decision won’t cost Chen her job. In a follow-up post, Brown said that Chen plans to work remotely from an Airbnb in Vancouver “until [the] mess hopefully gets sorted out.” But it’s the latest example of foreign talent facing high barriers to living, working, and studying in the U.S. under the Trump administration.

OpenAI didn’t immediately respond to a request for comment. However, in a post on X in July 2023, CEO Sam Altman called for changes to make it easier for “high-skill” immigrants to move to and work in the U.S.

Over the past few months, more than 1,700 international students in the U.S., including AI researchers who’ve lived in the country for a number of years, have had their visa statuses challenged as part of an aggressive crackdown. While the government has accused some of these students of supporting Palestinian militant groups or engaging in “antisemitic” activities, others have been targeted for minor legal infractions, like speeding tickets or other traffic violations.

Meanwhile, the Trump administration has turned a skeptical eye toward many green card applicants, reportedly suspending processing of requests for legal permanent residency submitted by immigrants granted refugee or asylum status. It has also taken a hardline approach to green card holders it perceives as “national security” threats, detaining and threatening several with deportation.

AI labs like OpenAI rely heavily on foreign research talent. According to Shaun Ralston, an OpenAI contractor providing support for the company’s API customers, OpenAI filed more than 80 applications for H-1B visas last year alone and has sponsored more than 100 visas since 2022.

H-1B visas, favored by the tech industry, allow U.S. companies to temporarily employ foreign workers in “specialty occupations” that require at least a bachelor’s degree or the equivalent. Recently, immigration officials have begun issuing “requests for evidence” for H-1Bs and other employment-based immigration petitions, asking for home addresses and biometrics, a change some experts worry may lead to an uptick in denied applications.

Immigrants have played a major role in contributing to the growth of the U.S. AI industry.

According to a study from Georgetown’s Center for Security and Emerging Technology, 66% of the 50 “most promising” U.S.-based AI startups on Forbes’ 2019 “AI 50” list had an immigrant founder. A 2023 analysis by the National Foundation for American Policy found that 70% of full-time graduate students in fields related to AI are international students.

Ashish Vaswani, who moved to the U.S. to study computer science in the early 2000s, is one of the co-creators of the transformer, the seminal AI model architecture that underpins chatbots like ChatGPT. One of the co-founders of OpenAI, Wojciech Zaremba, earned his doctorate in AI from NYU on a student visa.

The U.S.’s immigration policies, cutbacks in grant funding, and hostility to certain sciences have many researchers contemplating moving out of the country. Responding to a Nature poll of over 1,600 scientists, 75% said that they were considering leaving for jobs abroad.

Keep reading the article on TechCrunch


April 24, 2025

Public comments to White House on AI policy touch on copyright, tariffs

Individuals, industry groups, and local governments submitted over 10,000 comments to the White House about its work-in-progress national AI policy, also known as the AI Action Plan. The White House Office of Science and Technology Policy (OSTP) on Thursday published the text of the submissions in a PDF spanning 18,480 pages.

The comments, which touch on topics ranging from copyright to the environmental harms of AI data centers, come as President Donald Trump and allies rejigger the U.S. government’s AI priorities.

In January, President Trump repealed former President Joe Biden’s AI Executive Order, which had instructed the National Institute of Standards and Technology to author guidance that helps companies identify — and correct for — flaws in models, including biases. Critics allied with Trump argued that the order’s reporting requirements were onerous and effectively forced companies to disclose their trade secrets.

Shortly after revoking the AI Executive Order, Trump signed an order directing federal agencies to promote the development of AI “free from ideological bias” that promotes “human flourishing, economic competitiveness, and national security.” Importantly, Trump’s order made no mention of combating AI discrimination, which was a key tenet of Biden’s initiative.

Comments submitted to the White House make clear what’s at stake in the AI race.

A number of commenters asserted that AI is exploitative: trained on the works of creatives who aren’t compensated for their involuntary contributions. They petitioned the Trump administration to strengthen copyright regulation. On the opposing side, commenters such as VC firm Andreessen Horowitz accused rightsholders of putting up roadblocks to AI development.

Several AI companies, including Google and OpenAI, have also pushed for friendlier rules around AI training in earlier comments on the AI Action Plan.

Petitions from organizations including Americans for Prosperity, The Future of Life Institute, and the American Academy of Nursing emphasized the importance of investments in research at a time when the federal government is slashing scientific grants. AI experts have criticized the Trump administration’s recent cuts to scientific grant-making, and in particular, reductions championed by billionaire Elon Musk’s Department of Government Efficiency.

Some commenters on the AI Action Plan took aim at the Trump administration’s far-ranging tariffs on foreign goods, suggesting that they may harm domestic AI efforts. The Data Center Coalition, a trade association representing the data center sector, said tariffs on infrastructure components “will limit and slow” U.S. AI investments. Elsewhere, the Information Technology Industry Council, an advocacy group whose members include Amazon, Intel, and Microsoft, urged “smart” tariffs that “protect domestic industries without escalating trade wars that harm consumers.”

Only a handful of comments mentioned “AI censorship,” a topic top of mind for many of Trump’s close confidants. Elon Musk and crypto and AI “czar” David Sacks have alleged that popular chatbots censor conservative viewpoints, with Sacks singling out ChatGPT in particular as untruthful about politically sensitive subjects.

In truth, bias in AI is an intractable technical problem. Musk’s AI company, xAI, has itself struggled to create a chatbot that doesn’t endorse some political views over others.

President Trump has ramped up efforts to assemble an AI policy team in recent months.

In March, the Senate confirmed Trump’s pick for director of the OSTP, Michael Kratsios, who focused on AI policy in the OSTP during Trump’s first term. Toward the end of last year, Trump named former VC Sriram Krishnan as the White House’s senior policy advisor for AI.

Keep reading the article on TechCrunch


Parents who lost children to online harms protest outside of Meta’s NYC office

Meta may have managed to kill a bipartisan bill to protect children online, but parents of children who have suffered from online harm are still putting pressure on social media companies to step up.

On Thursday, 45 families who lost children to online harms — from sextortion to cyberbullying — held a vigil outside one of Meta’s Manhattan offices to honor the memory of their kids and demand action and accountability from the company. 

Many dressed in white, holding roses, signs that read “Meta profits, kids pay the price,” and framed photos of their dead children — a scene that starkly contrasted with the otherwise sunny spring day in New York City. 

While each family’s story is different, the thread that holds them together is that “they’ve all been ignored by the tech companies when they tried to reach out to them and alert them to what happened to their kid,” Sarah Gardner, CEO of the child safety advocacy group Heat Initiative, one of the organizers of the event, told TechCrunch.

One mother, Perla Mendoza, said her son died of fentanyl poisoning after taking drugs that he purchased from a dealer on Snapchat. She is one of many parents with similar stories who have filed suit against Snap, alleging the company did little to prevent illegal drug sales on the platform before or after her son’s death. She found her son’s dealer posting images advertising hundreds of pills and reported it to Snap, but she says it took the company eight months to flag his account.

“His drug dealer was selling on Facebook, too,” Mendoza told TechCrunch. “It’s all connected. He was doing the same thing on all those apps, [including] Instagram. He had multiple accounts.”  

The vigil follows recent testimony from whistleblower Sarah Wynn-Williams, who revealed how Meta targeted 13- to 17-year-olds with ads when they were feeling down or depressed. It also comes four years after The Wall Street Journal published The Facebook Files, which showed the company knew that Instagram was toxic for teen girls’ mental health despite downplaying the issue in public.

Parents of children lost to online harms left an open letter to Meta CEO Mark Zuckerberg outside Meta’s office in NYC, April 24, 2025. Image Credits: Rebecca Bellan

Thursday’s event organizers, which also included advocacy groups ParentsTogether Action and Design It for Us, delivered an open letter addressed to Zuckerberg with more than 10,000 signatures. The letter demands that Meta stop promoting dangerous content to kids (including sexualizing content, racism, hate speech, content promoting disordered eating, and more); prevent sexual predators and other bad actors from using Meta platforms to reach kids; and provide transparent, fast resolutions to kids’ reports of problematic content or interactions. 

Gardner placed the letter on a pile of rose bouquets that were placed outside Meta’s office on Wanamaker Place as protesters chanted, “Build a future where children are respected.”

Over the past year, Meta has implemented new safeguards for children and teens across Facebook and Instagram, including working with law enforcement and other tech platforms to prevent child exploitation. Meta recently introduced Teen Accounts to Instagram, Facebook, and Messenger, which limit who can contact a teen on the apps and restrict the type of content the account holder can view. More recently, Instagram began using AI to find teens lying about their age to bypass safeguards.

“We know parents are concerned about their teens having unsafe or inappropriate experiences online,” Sophie Vogel, a Meta spokesperson, told TechCrunch. “It’s why we significantly changed the Instagram experience for teens with Teen Accounts, which were designed to address parents’ top concerns. Teen Accounts have built-in protections that limit who can contact teens and the content they see, and 94% of parents say these are helpful. We’ve also developed safety features to help prevent abuse, like warning teens when they’re chatting to someone in another country, and recently worked with Childhelp to launch a first-of-its-kind online safety curriculum, helping middle schoolers recognize potential online harm and know where to go for help.”

Gardner says Meta’s actions don’t do enough to plug the gaps in safety.

For example, Gardner said, despite Meta’s stricter private messaging policies for teens, adults can still approach kids who are not in their network through post comments and ask them to approve their friend request. 

“We’ve had researchers go on and sign on as a 12- or 13-year-old, and within a few minutes, they’re getting really extremist, violent, or sexualized content,” Gardner said. “So it’s clearly not working, and it’s not nearly enough.”

Gardner also noted that Meta’s recent changes to its fact-checking and content moderation policy in favor of community notes are a signal that the company is “letting go of more responsibility, not leaning in.”

Meta and its army of lobbyists also led the opposition to the Kids Online Safety Act, which failed to make it through Congress at the end of 2024. The bill had been widely expected to pass in the House of Representatives after sailing through a Senate vote, and would have imposed rules on social media platforms to curb the addiction and mental health harms that critics argue the sites cause.

“I think what [Mark Zuckerberg] needs to see, and what the point of today is, is to show that parents are really upset about this, and not just the ones who’ve lost their own kids, but other Americans who are waking up to this reality and thinking, ‘I don’t want Mark Zuckerberg making decisions about my child’s online safety,’” Gardner said. 

Keep reading the article on TechCrunch