Blue Diamond Web Services

Your Best Hosting Service Provider!

June 1, 2025

Elon Musk tries to stick to spaceships

Elon Musk’s interview with CBS Sunday Morning seemed to get off to an awkward start, as reporter David Pogue asked the SpaceX CEO about his thoughts on his ally Donald Trump’s policies, including growing restrictions on international students.

“I think we want to stick to the subject of the day, which is, like, spaceships, as opposed to, you know, presidential policy,” Musk said.

Pogue looked surprised, replying, “Oh, okay, I was told, ‘Anything’s good.’”

“No,” Musk said, while looking into the distance. “Well … no.”

He did, however, comment on the controversy around his Department of Government Efficiency, which has been making aggressive cuts across federal agencies, and which Musk complained had become “the whipping boy for everything.”

“If there was some cut, real or imagined, everyone would blame DOGE,” he said.

Musk also suggested that he’s “a little stuck in a bind” when it comes to the Trump administration, where “I don’t want to speak out against the administration, but I also don’t want to take responsibility for everything the administration’s doing.”

Pogue’s interview was conducted before SpaceX’s Starship test flight on Tuesday, which saw the ship successfully launch but lose control on reentry. Asked whether there’s anything linking his various companies — in addition to SpaceX, there’s Tesla (which faces ongoing anti-Musk protests), xAI and X (formerly Twitter), Neuralink, and The Boring Company — Musk replied, “I guess you could think of the businesses as things that improve the probable trajectory of civilization.”

At the time, Musk was supposedly pulling back from his government work but said he would remain involved for a “day or two” per week. He told Pogue, “DOGE is going to continue, just as a way of life. And I will have some participation in that, but as I’ve said publicly, my focus has to be on the companies at this point.”

Pogue noted that after their conversation, a clip of Musk’s comments criticizing the Trump-backed budget bill drove a news cycle of its own — and soon after, Musk said he was ending his time as a special government employee. Trump, however, subsequently said Musk is “not really leaving.”

Keep reading the article on Tech Crunch


Sam Altman biographer Keach Hagey explains why the OpenAI CEO was ‘born for this moment’

In “The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future,” Wall Street Journal reporter Keach Hagey examines our AI-obsessed moment through one of its key figures — Sam Altman, co-founder and CEO of OpenAI.

Hagey begins with Altman’s Midwest childhood, then takes readers through his career at startup Loopt, accelerator Y Combinator, and now at OpenAI. She also sheds new light on the dramatic few days when Altman was fired, then quickly reinstated, as OpenAI’s CEO.

Looking back at what OpenAI employees now call “the Blip,” Hagey said the failed attempt to oust Altman revealed that OpenAI’s complex structure — with a for-profit company controlled by a nonprofit board — is “not stable.” And with OpenAI largely backing down from plans to let the for-profit side take control, Hagey predicted that this “fundamentally unstable arrangement” will “continue to give investors pause.”

Does that mean OpenAI could struggle to raise the funds it needs to keep going? Hagey replied that it could “absolutely” be an issue.

“My research into Sam suggests that he might well be up to that challenge,” she said. “But success is not guaranteed.”

In addition, Hagey’s biography (also available as an audiobook on Spotify) examines Altman’s politics, which she described as “pretty traditionally progressive” — making it a bit surprising that he’s struck massive infrastructure deals with the backing of the Trump administration.

“But this is one area where, in some ways, I feel like Sam Altman has been born for this moment, because he is a deal maker and Trump is a deal maker,” Hagey said. “Trump respects nothing so much as a big deal with a big price tag on it, and that is what Sam Altman is really great at.”

In an interview with TechCrunch, Hagey also discussed Altman’s response to the book, his trustworthiness, and the AI “hype universe.”

This interview has been edited for length and clarity. 

You open the book by acknowledging some of the reservations that Sam Altman had about the project —  this idea that we tend to focus too much on individuals rather than organizations or broad movements, and also that it’s way too early to assess the impact of OpenAI. Did you share those concerns?

Well, I don’t really share them, because this was a biography. This project was to look at a person, not an organization. And I also think that Sam Altman has set himself up in a way where it does matter what kind of moral choices he has made and what his moral formation has been, because the broad project of AI is really a moral project. That is the basis of OpenAI’s existence. So I think these are fair questions to ask about a person, not just an organization.

As far as whether it’s too soon, I mean, sure, it’s definitely [early to] assess the entire impact of AI. But it’s been an extraordinary story for OpenAI — just so far, it’s already changed the stock market, it has changed the entire narrative of business. I’m a business journalist. We do nothing but talk about AI, all day long, every day. So in that way, I don’t think it’s too early.

And despite those reservations, Altman did cooperate with you. Can you say more about what your relationship with him was like during the process of researching the book?

Well, he was definitely not happy when he was informed about the book’s existence. And there was a long period of negotiation, frankly. In the beginning, I figured I was going to write this book without his help — what we call, in the business, a write-around profile. I’ve done plenty of those over my career, and I figured this would just be one more.

Over time, as I made more and more calls, he opened up a little bit. And [eventually,] he was generous to sit down with me several times for long interviews and share his thoughts with me.

Has he responded to the finished book at all?

No. He did tweet about the project, about his decision to participate with it, but he was very clear that he was never going to read it. It’s the same way that I don’t like to watch my TV appearances or podcasts that I’m on.

In the book, he’s described as this emblematic Silicon Valley figure. What do you think are the key characteristics that make him representative of the Valley and the tech industry?

In the beginning, I think it was that he was young. The Valley really glorifies youth, and he was 19 years old when he started his first startup. You see him going into these meetings with people twice his age, doing deals with telecom operators for his first startup, and no one could get over that this kid was so smart.

The other is that he is a once-in-a-generation fundraising talent, and that’s really about being a storyteller. I don’t think it’s an accident that you have essentially a salesman and a fundraiser at the top of the most important AI company today.

That ties into one of the questions that runs through the book — this question about Altman’s trustworthiness. Can you say more about the concerns people seem to have about that? To what extent is he a trustworthy figure? 

Well, he’s a salesman, so he’s really excellent at getting in a room and convincing people that he can see the future and that he has something in common with them. He gets people to share his vision, which is a rare talent.

There are people who’ve watched that happen a bunch of times, who think, “Okay, what he says does not always map to reality,” and have, over time, lost trust in him. This happened both at his first startup and very famously at OpenAI, as well as at Y Combinator. So it is a pattern, but I think it’s a typical critique of people who have the salesman skill set.

So it’s not necessarily that he’s particularly untrustworthy, but it’s part-and-parcel of being a salesman leading these important companies.

I mean, there also are management issues that are detailed in the book, where he is not great at dealing with conflict, so he’ll basically tell people what they want to hear. That causes a lot of Sturm und Drang in the management ranks, and it’s a pattern. Something like that happened at Loopt, where the executives asked the board to replace him as CEO. And you saw it happen at OpenAI as well.

You’ve touched on Altman’s firing, which was also covered in a book excerpt that was published in the Wall Street Journal. One of the striking things to me, looking back at it, was just how complicated everything was — all the different factions within the company, all the people who seemed pro-Altman one day and then anti-Altman the next. When you pull back from the details, what do you think is the bigger significance of that incident?

The very big picture is that the nonprofit governance structure is not stable. You can’t really take investment from the likes of Microsoft and a bunch of other investors and then give them absolutely no say whatsoever in the governance of the company.

That’s what they have tried to do, but I think what we saw in that firing is how power actually works in the world. When you have stakeholders, even if there’s a piece of paper that says they have no rights, they still have power. And when it became clear that everyone in the company was going to go to Microsoft if they didn’t reinstate Sam Altman, they reinstated Sam Altman.

In the book, you take the story up to maybe the end of 2024. There have been all these developments since then, which you’ve continued to report on, including this announcement that actually, they’re not fully converting to a for-profit. How do you think that’s going to affect OpenAI going forward? 

It’s going to make it harder for them to raise money, because they basically had to do an about-face. I know that the new structure going forward of the public benefit corporation is not exactly the same as the current structure of the for-profit — it is a little bit more investor friendly, it does clarify some of those things.

But overall, what you have is a nonprofit board that controls a for-profit company, and that fundamentally unstable arrangement is what led to the so-called Blip. And I think it would continue to give investors pause, going forward, if they are going to have so little control over their investment.

Obviously, OpenAI is still such a capital-intensive business. If they have challenges raising more money, is that an existential question for the company?

It absolutely could be. My research into Sam suggests that he might well be up to that challenge. But success is not guaranteed.

Like you said, there’s a dual perspective in the book that’s partly about who Sam is, and partly about what that says about where AI is going from here. How did that research into his particular story shape the way you now look at these broader debates about AI and society?

I went down a rabbit hole in the beginning of the book, [looking] into Sam’s father, Jerry Altman, in part because I thought it was striking how he’d been written out of basically every other thing that had ever been written about Sam Altman. What I found in this research was a very idealistic man who was, from youth, very interested in these public-private partnerships and the power of the government to set policy. He ended up having an impact on the way that affordable housing is still financed to this day.

And when I traced Sam’s development, I saw that he has long believed that the government should really be the one that is funding and guiding AI research. In the early days of OpenAI, they went and tried to get the government to invest, as he’s publicly said, and it didn’t work out. But he looks back to these great mid-20th century labs like Xerox PARC and Bell Labs, which are private, but there was a ton of government money running through and supporting that ecosystem. And he says, “That’s the right way to do it.”

Now I am watching daily as it seems like the United States is summoning the forces of state capitalism to get behind Sam Altman’s project to build these data centers, both in the United States and, as announced just last week, in Abu Dhabi. This is a vision he has had for a very, very long time.

My sense of the vision, as he presented it earlier, was one where, on the one hand, the government is funding these things and building this infrastructure, and on the other hand, the government is also regulating and guiding AI development for safety purposes. And it now seems like the path being pursued is one where they’re backing away from the safety side and doubling down on the government investment side.

Absolutely. Isn’t it fascinating? 

You talk about Sam as a political figure, as someone who’s had political ambitions at different times, but also somebody who has what are in many ways traditionally liberal political views while being friends with folks like — at least early on — Elon Musk and Peter Thiel. And he’s done a very good job of navigating the Trump administration. What do you think his politics are right now?

I’m not sure his actual politics have changed; they are pretty traditionally progressive. Not completely — he’s been critical of things like cancel culture, but in general, he thinks the government is there to take tax revenue and solve problems.

His success in the Trump administration has been fascinating because he has been able to find their one area of overlap, which is the desire to build a lot of data centers, and just double down on that and not talk about any other stuff. But this is one area where, in some ways, I feel like Sam Altman has been born for this moment, because he is a deal maker and Trump is a deal maker. Trump respects nothing so much as a big deal with a big price tag on it, and that is what Sam Altman is really great at.

You open and close the book not just with Sam’s father, but with his family as a whole. What else is worth highlighting in terms of how his upbringing and family shapes who he is now?

Well, you see both the idealism from his father and also the incredible ambition from his mother, who was a doctor, and had four kids and worked as a dermatologist. I think both of these things work together to shape him. They also had a more troubled marriage than I realized going into the book. So I do think that there’s some anxiety there that Sam himself is very upfront about, that he was a pretty anxious person for much of his life, until he did some meditation and had some experiences.

And there’s his current family — he just had a baby and got married not too long ago. As a young gay man, growing up in the Midwest, he had to overcome some challenges, and I think those challenges both forged him in high school as a brave person who could stand up and take on a room as a public speaker, but also shaped his optimistic view of the world. Because, on that issue, I paint the scene of his wedding: That’s an unimaginable thing from the early ‘90s, or from the ‘80s when he was born. He’s watched society develop and progress in very tangible ways, and I do think that that has helped solidify his faith in progress.

Something that I’ve found writing about AI is that the different visions being presented by people in the field can be so diametrically opposed. You have these wildly utopian visions, but also these warnings that AI could end the world. It gets so hyperbolic that it feels like people are not living in the same reality. Was that a challenge for you in writing the book?

Well, I see those two visions — which feel very far apart — actually being part of the same vision, which is that AI is super important, and it’s going to completely transform everything. No one ever talks about the true opposite of that, which is, “Maybe this is going to be a cool enterprise tool, another way to waste time on the internet, and not quite change everything as much as everyone thinks.” So I see the doomers and the boomers feeding off each other and being part of the same sort of hype universe.

As a journalist and as a biographer, you don’t necessarily come down on one side or the other — but actually, can you say where you come down on that?

Well, I will say that I find myself using it a lot more recently, because it’s gotten a lot better. In the early stages, when I was researching the book, I was definitely a lot more skeptical of its transformative economic power. I’m less skeptical now, because I just use it a lot more.

Keep reading the article on Tech Crunch


May 31, 2025

NAACP calls on Memphis officials to halt operations at xAI’s ‘dirty data center’

The NAACP is calling on local officials to halt operations at Colossus, the “supercomputer” facility operated by Elon Musk’s xAI in South Memphis.

As reported by NBC News, leaders from the civil rights group sent a letter Thursday to the Shelby County Health Department and Memphis Light Gas and Water criticizing the organizations’ “lackadaisical approach to the operation of this dirty data center” and calling on them to “issue an emergency order for xAI to stop operations completely” — or, failing that, to at least cite the company and stop it from allegedly violating clean air laws.

The letter expressed particular concerns around the gas turbines that xAI runs to power Colossus. The company has applied for a permit to continue operating 15 gas turbines at the facility, although the NAACP said authorities have “allowed xAI to operate at least 35 gas turbines without any permitting” over the past year. City officials have previously said xAI did not need permits for the turbines’ first year of use.

These turbines reportedly emit hazardous air pollutants, including formaldehyde, at levels exceeding EPA limits. The NAACP’s letter also pointed to the turbines’ nitrogen-oxide emissions.

Noting that the Colossus facility is located near South Memphis’ Boxtown neighborhood, which the letter described as a “historically Black community,” the NAACP said the location perpetuates “the trend of industries adding pollution to communities who do not cause the problem.”

“Instead of [the Shelby County Health Department] working to reduce health issues known in the area including that cancer risks are already four times the national average, it has allowed xAI to operate above the law,” the NAACP added.

The NAACP’s letter is addressed to Shelby County Health Department Director Michelle Taylor, as well as Memphis Light Gas and Water’s commissioners; Taylor is leaving her role in Shelby County to become the commissioner of the Baltimore City Health Department.

TechCrunch has reached out to the NAACP and xAI for comment. A spokesperson for Memphis Light Gas and Water told NBC News that it had not yet received the NAACP letter.

Keep reading the article on Tech Crunch


Meta plans to automate many of its product risk assessments

An AI-powered system could soon take responsibility for evaluating the potential harms and privacy risks of up to 90% of updates made to Meta apps like Instagram and WhatsApp, according to internal documents reportedly viewed by NPR.

NPR says a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission requires the company to conduct privacy reviews of its products, evaluating the risks of any potential updates. Until now, those reviews have been largely conducted by human evaluators.

Under the new system, Meta reportedly said product teams will be asked to fill out a questionnaire about their work and will then usually receive an “instant decision” with AI-identified risks, along with requirements that an update or feature must meet before it launches.
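
The reporting describes this workflow only at a high level, so the following is a minimal, purely hypothetical sketch of what a questionnaire-driven triage step could look like; every name, field, and threshold below is invented for illustration and does not reflect Meta’s actual tooling.

# Hypothetical sketch only: field names and thresholds are invented for
# illustration and do not reflect Meta's internal review tooling.

from dataclasses import dataclass
from typing import List

@dataclass
class ReviewDecision:
    automated: bool              # True if the "instant decision" path applies
    identified_risks: List[str]  # risks flagged for the product team
    requirements: List[str]      # conditions the update must meet before launch

def triage_update(questionnaire: dict) -> ReviewDecision:
    """Route a product-update questionnaire to automated or human review."""
    risks = []
    if questionnaire.get("collects_new_user_data"):
        risks.append("new data collection")
    if questionnaire.get("affects_minors"):
        risks.append("impact on minors")

    # Per the reporting, only low-risk decisions are automated; anything
    # novel or complex is still escalated to human reviewers.
    if questionnaire.get("novel_feature") or len(risks) > 1:
        return ReviewDecision(False, risks, ["full human privacy review"])
    return ReviewDecision(True, risks, [f"mitigate: {r}" for r in risks])

# Example: a small, low-risk change gets an instant automated decision.
print(triage_update({"collects_new_user_data": True, "novel_feature": False}))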

This AI-centric approach would allow Meta to update its products more quickly, but one former executive told NPR it also creates “higher risks,” as “negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”

In a statement, Meta seemed to confirm that it’s changing its review system, but it insisted that only “low-risk decisions” will be automated,  while “human expertise” will still be used to examine “novel and complex issues.”

Keep reading the article on Tech Crunch


May 30, 2025

DOGE left United States Institute of Peace office with water damage, rats, and roaches

The chief executive of the United States Institute of Peace (USIP) says Elon Musk’s Department of Government Efficiency left the nonprofit’s Washington, D.C., headquarters in disarray, full of water damage, rats, and roaches, according to a new sworn statement first reported by Court Watch.

The statement from the executive, George Moose, comes just a few days after a federal judge ruled that DOGE’s takeover of the nonprofit was illegal. And this week, Musk has claimed he is stepping away from DOGE, although he and President Trump have said he will continue to advise the administration.

DOGE started its takeover of USIP in mid-March after a standoff that saw the nonprofit call the police on Musk’s government workers. Moose said at the time that DOGE staff had “broken into” the USIP headquarters in Washington, despite the fact that the nonprofit is not part of the executive branch and isn’t subject to the White House’s whims.

“It was very clear that there was a desire on the part of the administration to dismantle a lot of what we call foreign assistance, and we are part of that family,” Moose said at the time, referencing the Trump administration’s and DOGE’s dismantling of the United States Agency for International Development.

At a May 21 press conference where he discussed the judge’s ruling, Moose initially said the nonprofit’s headquarters appeared to be in decent shape. But one day later, according to the statement, members of Moose’s staff spent a day surveying the building and documenting the problems they found.

Moose wrote in his statement that, ahead of the judge’s ruling, the headquarters had been “essentially abandoned for many weeks” before USIP regained control. He said that DOGE had failed to “maintain and secure the building,” pointing to “evidence of rats and roaches.”

“Vermin were not a problem prior to March 17, 2025, when USIP was actively using and maintaining the building,” Moose wrote.

Staff also reported to Moose that the building’s vehicle barriers were poorly maintained and that they spotted water leaks and “missing ceiling tiles in multiple places in the building (which I have been told suggest likely water damage).”

Moose said USIP has now “engaged a private security firm to guard the building and premises” and “taken over responsibility for the building’s maintenance.”

Keep reading the article on Tech Crunch


Trump administration to claw back $3.7B in clean energy and manufacturing awards

The Department of Energy announced today that it would be clawing back $3.7 billion worth of awards made under the Biden administration for clean energy and manufacturing. Large corporations and growing startups were caught up in the decision.

Energy Secretary Chris Wright said the moves were “due diligence” on the part of the Trump administration. His statement did not cite specific reasons why the projects were canceled, but it pointed to a memorandum he issued on May 15, which suggests the department may attempt to use its audit powers to rescind the awards.

In total, 24 projects are affected by the move, including ones being developed by oil and gas giant Exxon Mobil, food manufacturer Kraft Heinz, industrial heat startup Skyven, cement and alumina startup Brimstone, and cement startup Sublime Systems.

Here’s a sampling of some of the awards in jeopardy:

Sublime told TechCrunch that it was caught off guard.

“Sublime was surprised and disappointed to receive the news about the termination of our Industrial Demonstrations Program award, given the clear progress we’ve made in scaling our American-invented technology, partnering with some of the Western World’s largest cement producers, and generating a bankable customer base,” spokesperson Rob Kreis said via email. The startup is evaluating its options to continue scaling up its operations.

Brimstone is hopeful that things could be resolved with the DOE.

“Given our project’s strong alignment with President Trump’s priority to increase U.S. production of critical minerals, we believe this was a misunderstanding. Brimstone’s Rock Refinery represents the only economically viable way to produce the critical mineral alumina in the U.S. from U.S.-mined rocks,” Brimstone spokesperson Liza Darwin told TechCrunch via email. 

“As the first U.S.-based alumina plant in a generation, our project — which would also make Portland cement — would clear a ‘mine-to-metal’ path for U.S. aluminum production, fortifying the U.S. critical mineral supply chain and creating thousands of jobs,” she added.

Keep reading the article on Tech Crunch

