Samsung is closing in on an investment in AI search startup Perplexity that would also see the Korean tech giant incorporating the AI company’s tech into its devices, Bloomberg reported, citing anonymous sources.
As one of the biggest investors in the AI company’s new fundraising round, Samsung is discussing having Perplexity’s app and assistant pre-installed on its phones and integrating its search features within the default browser, the report said.
The companies are also discussing having Perplexity’s AI tech power some features of Samsung’s own assistant, Bixby, Bloomberg reported, adding that the partnership could be unveiled this year.
Samsung and Perplexity did not immediately return requests for comment.
Perplexity is in the process of raising a $500 million round at a $14 billion valuation, Bloomberg reported in May.
Samsung is not alone in looking to integrate Perplexity features. Bloomberg previously reported that Apple has thought about adding Perplexity as a search engine option to Safari, and in April, Motorola announced a partnership with Perplexity to power AI features.
Keep reading the article on TechCrunch
Generative AI comes in many forms. Increasingly, though, it’s marketed the same way: with human names and personas that make it feel less like code and more like a co-worker. A growing number of startups are anthropomorphizing AI to build trust fast — and soften its threat to human jobs. It’s dehumanizing, and it’s accelerating.
I get why this framing took off. In today’s upside-down economy, where every hire feels like a risk, enterprise startups — many emerging from the famed accelerator Y Combinator — are pitching AI not as software but as staff. They’re selling replacements. AI assistants. AI coders. AI employees. The language is deliberately designed to appeal to overwhelmed hiring managers.
Some don’t even bother with subtlety. Atlog, for instance, recently introduced an “AI employee for furniture stores” that handles everything from payments to marketing. One good manager, it gloats, can now run 20 stores at once. The implication: you don’t need to hire more people — just let the system scale for you. (What happens to the 19 managers it replaces is left unsaid.)
Consumer-facing startups are leaning into similar tactics. Anthropic named its platform “Claude” because a warm, trustworthy-sounding name makes a faceless, disembodied neural net feel like a companion. It’s a tactic straight out of the fintech playbook, where apps like Dave, Albert, and Charlie masked their transactional motives with approachable names. When handling money, it feels better to trust a “friend.”
The same logic has crept into AI. Would you rather share sensitive data with a machine learning model or your bestie Claude, who remembers you, greets you warmly, and almost never threatens you? (To OpenAI’s credit, it still tells you you’re chatting with a “generative pre-trained transformer.”)
But we’re reaching a tipping point. I’m genuinely excited about generative AI. Still, every new “AI employee” has begun to feel more dehumanizing. Every new “Devin” makes me wonder when the actual Devins of the world will push back on being abstracted into job-displacing bots.
Generative AI is no longer just a curiosity. Its reach is expanding, even if the impacts remain unclear. In mid-May, 1.9 million unemployed Americans were receiving continued jobless benefits — the highest since 2021. Many of those were laid-off tech workers. The signals are piling up.
Some of us still remember 2001: A Space Odyssey. HAL, the onboard computer, begins as a calm, helpful assistant before turning completely homicidal and cutting off the crew’s life support. It’s science fiction, but it hit a nerve for a reason.
Last week, Anthropic CEO Dario Amodei predicted that AI could eliminate half of entry-level white-collar jobs in the next one to five years, pushing unemployment as high as 20%. “Most [of these workers are] unaware that this is about to happen,” he told Axios. “It sounds crazy, and people just don’t believe it.”
You could argue that’s not comparable to cutting off someone’s oxygen, but the metaphor isn’t that far off. Automating more people out of paychecks will have consequences, and when the layoffs increase, the branding of AI as a “colleague” is going to look less clever and more callous.
The shift toward generative AI is happening regardless of how it’s packaged. But companies have a choice in how they describe these tools. IBM never called its mainframes “digital co-workers.” PCs weren’t “software assistants”; they were workstations and productivity tools.
Language still matters. Tools should empower. But more and more companies are marketing something else entirely, and that feels like a mistake.
We don’t need more AI “employees.” We need software that extends the potential of actual humans, making them more productive, creative, and competitive. So please stop talking about fake workers. Just show us the tools that help great managers run complex businesses. That’s all anyone is really asking for.
Keep reading the article on TechCrunch
Elad Gil started betting on AI before most of the world took notice. By the time investors began grasping the implications of ChatGPT, Gil had already written seed checks to startups like Perplexity, Character.AI, and Harvey. Now, as the early winners of the AI wave become clearer, the renowned “solo” VC is increasingly focused on a fresh opportunity: using AI to reinvent traditional businesses and scale them through roll-ups.
The idea is to identify opportunities to buy mature, people-intensive businesses like law firms and other professional services firms, help them scale through AI, then use the improved margins to acquire other such businesses and repeat the process. He has been at it for three years.
“It just seems so obvious,” said Gil over a Zoom call earlier this week. “This type of generative AI is very good at understanding language, manipulating language, manipulating text, producing text. And that’s audio, that’s video, that includes coding, sales outreach, and different back-office processes.”
If you can “effectively transform some of those repetitive tasks into software,” he said, “you can increase the margins dramatically and create very different types of businesses.” The math is particularly compelling if one owns the business outright, he added.
“If you own the asset, you can [transform it] much more rapidly than if you’re just selling software as a vendor,” Gil said. “And because you take the gross margin of a company from, say, 10% to 40%, that’s a huge lift. Suddenly you can buy other companies at a higher price than anyone else because you have that increased cash flow per business; you have enormous leverage on the business on a relative basis, so you can do roll-ups in ways that others can’t.”
So far, Gil has backed two companies pursuing this strategy. According to The Information, one is a one-year-old company called Enam Co., focused on worker productivity, which has been valued at more than $300 million by its backers, including Andreessen Horowitz and OpenAI’s Startup Fund.
Though Gil says he can’t discuss specifics of the private deals, he suggests the approach represents something new. “There used to be these technology-enabled roll-ups 10 years ago, and most of them kind of ended up being not really that much of a user of technology,” he says. “It was kind of like a thin veneer painted on to increase the valuation of the company. I think in the case of AI, you can actually radically change the cost structure of these things.”
Whether the approach proves as lucrative as some of his other bets remains to be seen. Gil has famously backed a host of big brands that have produced riches for their backers, including Airbnb and Coinbase, both of which are now publicly traded, and privately held Stripe, whose valuation has bounced around but reportedly settled in the range of $91.5 billion earlier this year, when its earlier backers bought up more of its shares.
Part of the challenge with roll-ups is finding the right team composition — ideally including a strong technologist along with someone who is “very strong in PE” — and “those things don’t go hand-in-hand,” Gil noted. He said he’s met “maybe two dozen of these teams” so far and mostly looked past them, not because they “weren’t amazing” but because “they still need to sort some things out.”
Gil, who has deep relationships with firms across Silicon Valley, may also find himself competing with them more aggressively on roll-ups as more outfits like Khosla Ventures weigh whether or not they should also be pursuing such deals.
One senses that, either way, Gil is not in it for the money at this point, if he ever was. He says his ability to spot trends earlier than most comes instead from the heart. “I love technology, and I love progress, and I love just engaging — both with people who are working on important, interesting things, but also the technology itself.”
When GPT-3 launched, for example, Gil was already experimenting with its predecessor, he said. “When GPT-3 came out, it was such a big leap from GPT-2 that you could just extrapolate out the technology curve. You’re like, ‘Oh my gosh, if this keeps going and scaling’ — all the scaling laws were kind of evident — ‘then this is going to be transformative.’”
That hands-on approach continues today with the small team Gil has assembled, including “people with very deep engineering backgrounds” who “periodically play around with all the AI front-end companies. One person on my team just writes a bunch of scripts and we run them, and we look at performance, and we look at tooling, and it’s super hands-on.”
It’s because of that constant tinkering that, after years of uncertainty in the AI market, Gil sees clear winners emerging. “I used to say, even six months ago, that the more I know about AI, the less I know, because the markets were so dynamic; the technologies were so dynamic,” he said. “And I feel like in the last couple months — maybe the last two quarters — a subset of markets have really crystallized.”
In legal, “we kind of know who the one or two main winners are probably going to be. That’s true in health care. That’s true in customer success and support,” said Gil, who clearly thinks these include his own portfolio companies, which he cited in our conversation.
Among these bets is Harvey, which develops large language models for law firms and in-house legal teams and is reportedly in talks to raise new funding at a $5 billion valuation; Abridge, a healthcare AI company that aims to improve doctors’ clinical documentation workflows (and whose $250 million Series D round was co-led by Gil back in February); and Sierra AI, co-founded by famed operator Bret Taylor, which helps companies implement AI agents for customer service. (The company was valued in the billions of dollars right out of the gate.)
Still, Gil is careful not to declare the game over. “I don’t mean to paint the picture that the game is over or that things are done. I think it’s more that there were two dozen companies that all seemed kind of interesting, and maybe now there’s three or four of them [per vertical]. The map of the likely winners is solidified.”
In the meantime, it’s clear in conversation that this moment represents more than just another investment cycle to him. “I just think it’s a really fun period of time, because so much change is happening, and so there’s just a ton to do,” he said.
Being at the intersection of two transformations — not just betting on the future of AI but on the future of how AI will reshape everything else — is “very exciting,” he added.
We’ll have more from our conversation with Gil — which also touched on guardrails, gatekeeping, and how companies can most adeptly integrate the technologies that will make or break their business — in the newest episode of the StrictlyVC Download podcast, which comes out on Tuesday.
Keep reading the article on TechCrunch
In “The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future,” Wall Street Journal reporter Keach Hagey examines our AI-obsessed moment through one of its key figures — Sam Altman, co-founder and CEO of OpenAI.
Hagey begins with Altman’s Midwest childhood, then takes readers through his career at startup Loopt, accelerator Y Combinator, and now at OpenAI. She also sheds new light on the dramatic few days when Altman was fired, then quickly reinstated, as OpenAI’s CEO.
Looking back at what OpenAI employees now call “the Blip,” Hagey said the failed attempt to oust Altman revealed that OpenAI’s complex structure — with a for-profit company controlled by a nonprofit board — is “not stable.” And with OpenAI largely backing down from plans to let the for-profit side take control, Hagey predicted that this “fundamentally unstable arrangement” will “continue to give investors pause.”
Does that mean OpenAI could struggle to raise the funds it needs to keep going? Hagey replied that it could “absolutely” be an issue.
“My research into Sam suggests that he might well be up to that challenge,” she said. “But success is not guaranteed.”
In addition, Hagey’s biography (also available as an audiobook on Spotify) examines Altman’s politics, which she described as “pretty traditionally progressive” — making it a bit surprising that he’s struck massive infrastructure deals with the backing of the Trump administration.
“But this is one area where, in some ways, I feel like Sam Altman has been born for this moment, because he is a deal maker and Trump is a deal maker,” Hagey said. “Trump respects nothing so much as a big deal with a big price tag on it, and that is what Sam Altman is really great at.”
In an interview with TechCrunch, Hagey also discussed Altman’s response to the book, his trustworthiness, and the AI “hype universe.”
This interview has been edited for length and clarity.
You open the book by acknowledging some of the reservations that Sam Altman had about the project — this idea that we tend to focus too much on individuals rather than organizations or broad movements, and also that it’s way too early to assess the impact of OpenAI. Did you share those concerns?
Well, I don’t really share them, because this was a biography. This project was to look at a person, not an organization. And I also think that Sam Altman has set himself up in a way where it does matter what kind of moral choices he has made and what his moral formation has been, because the broad project of AI is really a moral project. That is the basis of OpenAI’s existence. So I think these are fair questions to ask about a person, not just an organization.
As far as whether it’s too soon, I mean, sure, it’s definitely [early to] assess the entire impact of AI. But it’s been an extraordinary story for OpenAI — just so far, it’s already changed the stock market, it has changed the entire narrative of business. I’m a business journalist. We do nothing but talk about AI, all day long, every day. So in that way, I don’t think it’s too early.
And despite those reservations, Altman did cooperate with you. Can you say more about what your relationship with him was like during the process of researching the book?
Well, he was definitely not happy when he was informed about the book’s existence. And there was a long period of negotiation, frankly. In the beginning, I figured I was going to write this book without his help — what we call, in the business, a write-around profile. I’ve done plenty of those over my career, and I figured this would just be one more.
Over time, as I made more and more calls, he opened up a little bit. And [eventually,] he was generous to sit down with me several times for long interviews and share his thoughts with me.
Has he responded to the finished book at all?
No. He did tweet about the project, about his decision to participate with it, but he was very clear that he was never going to read it. It’s the same way that I don’t like to watch my TV appearances or podcasts that I’m on.
In the book, he’s described as this emblematic Silicon Valley figure. What do you think are the key characteristics that make him representative of the Valley and the tech industry?
In the beginning, I think it was that he was young. The Valley really glorifies youth, and he was 19 years old when he started his first startup. You see him going into these meetings with people twice his age, doing deals with telecom operators for his first startup, and no one could get over that this kid was so smart.
The other is that he is a once-in-a-generation fundraising talent, and that’s really about being a storyteller. I don’t think it’s an accident that you have essentially a salesman and a fundraiser at the top of the most important AI company today.
That ties into one of the questions that runs through the book — this question about Altman’s trustworthiness. Can you say more about the concerns people seem to have about that? To what extent is he a trustworthy figure?
Well, he’s a salesman, so he’s really excellent at getting in a room and convincing people that he can see the future and that he has something in common with them. He gets people to share his vision, which is a rare talent.
There are people who’ve watched that happen a bunch of times, who think, “Okay, what he says does not always map to reality,” and have, over time, lost trust in him. This happened both at his first startup and very famously at OpenAI, as well as at Y Combinator. So it is a pattern, but I think it’s a typical critique of people who have the salesman skill set.
So it’s not necessarily that he’s particularly untrustworthy, but it’s part-and-parcel of being a salesman leading these important companies.
I mean, there also are management issues that are detailed in the book, where he is not great at dealing with conflict, so he’ll basically tell people what they want to hear. That causes a lot of sturm-und-drang in the management ranks, and it’s a pattern. Something like that happened at Loopt, where the executives asked the board to replace him as CEO. And you saw it happen at OpenAI as well.
You’ve touched on Altman’s firing, which was also covered in a book excerpt that was published in the Wall Street Journal. One of the striking things to me, looking back at it, was just how complicated everything was — all the different factions within the company, all the people who seemed pro-Altman one day and then anti-Altman the next. When you pull back from the details, what do you think is the bigger significance of that incident?
The very big picture is that the nonprofit governance structure is not stable. You can’t really take investment from the likes of Microsoft and a bunch of other investors and then give them absolutely no say whatsoever in the governance of the company.
That’s what they have tried to do, but I think what we saw in that firing is how power actually works in the world. When you have stakeholders, even if there’s a piece of paper that says they have no rights, they still have power. And when it became clear that everyone in the company was going to go to Microsoft if they didn’t reinstate Sam Altman, they reinstated Sam Altman.
In the book, you take the story up to maybe the end of 2024. There have been all these developments since then, which you’ve continued to report on, including this announcement that actually, they’re not fully converting to a for-profit. How do you think that’s going to affect OpenAI going forward?
It’s going to make it harder for them to raise money, because they basically had to do an about-face. I know that the new structure going forward of the public benefit corporation is not exactly the same as the current structure of the for-profit — it is a little bit more investor friendly, it does clarify some of those things.
But overall, what you have is a nonprofit board that controls a for-profit company, and that fundamentally unstable arrangement is what led to the so-called Blip. And I think [it] would continue to give investors pause, going forward, if they are going to have so little control over their investment.
Obviously, OpenAI is still such a capital intensive business. If they have challenges raising more money, is that an existential question for the company?
It absolutely could be. My research into Sam suggests that he might well be up to that challenge. But success is not guaranteed.
Like you said, there’s a dual perspective in the book that’s partly about who Sam is, and partly about what that says about where AI is going from here. How did that research into his particular story shape the way you now look at these broader debates about AI and society?
I went down a rabbit hole in the beginning of the book, [looking] into Sam’s father, Jerry Altman, in part because I thought it was striking how he’d been written out of basically every other thing that had ever been written about Sam Altman. What I found in this research was a very idealistic man who was, from youth, very interested in these public-private partnerships and the power of the government to set policy. He ended up having an impact on the way that affordable housing is still financed to this day.
And when I traced Sam’s development, I saw that he has long believed that the government should really be the one that is funding and guiding AI research. In the early days of OpenAI, they went and tried to get the government to invest, as he’s publicly said, and it didn’t work out. But he looks back to these great mid-20th century labs like Xerox PARC and Bell Labs, which are private, but there was a ton of government money running through and supporting that ecosystem. And he says, “That’s the right way to do it.”
Now I am watching daily as it seems like the United States is summoning the forces of state capitalism to get behind Sam Altman’s project to build these data centers, both in the United States and now there was just one last week announced in Abu Dhabi. This is a vision he has had for a very, very long time.
My sense of the vision, as he presented it earlier, was one where, on the one hand, the government is funding these things and building this infrastructure, and on the other hand, the government is also regulating and guiding AI development for safety purposes. And it now seems like the path being pursued is one where they’re backing away from the safety side and doubling down on the government investment side.
Absolutely. Isn’t it fascinating?
You talk about Sam as a political figure, as someone who’s had political ambitions at different times, but also somebody who has what are in many ways traditionally liberal political views while being friends with folks like — at least early on — Elon Musk and Peter Thiel. And he’s done a very good job of navigating the Trump administration. What do you think his politics are right now?
I’m not sure his actual politics have changed, they are pretty traditionally progressive politics. Not completely — he’s been critical about things like cancel culture, but in general, he thinks the government is there to take tax revenue and solve problems.
His success in the Trump administration has been fascinating because he has been able to find their one area of overlap, which is the desire to build a lot of data centers, and just double down on that and not talk about any other stuff. But this is one area where, in some ways, I feel like Sam Altman has been born for this moment, because he is a deal maker and Trump is a deal maker. Trump respects nothing so much as a big deal with a big price tag on it, and that is what Sam Altman is really great at.
You open and close the book not just with Sam’s father, but with his family as a whole. What else is worth highlighting in terms of how his upbringing and family shapes who he is now?
Well, you see both the idealism from his father and also the incredible ambition from his mother, who was a doctor, and had four kids and worked as a dermatologist. I think both of these things work together to shape him. They also had a more troubled marriage than I realized going into the book. So I do think that there’s some anxiety there that Sam himself is very upfront about, that he was a pretty anxious person for much of his life, until he did some meditation and had some experiences.
And there’s his current family — he just had a baby and got married not too long ago. As a young gay man, growing up in the Midwest, he had to overcome some challenges, and I think those challenges both forged him in high school as a brave person who could stand up and take on a room as a public speaker, but also shaped his optimistic view of the world. Because, on that issue, I paint the scene of his wedding: That’s an unimaginable thing from the early ‘90s, or from the ‘80s when he was born. He’s watched society develop and progress in very tangible ways, and I do think that that has helped solidify his faith in progress.
Something that I’ve found writing about AI is that the different visions being presented by people in the field can be so diametrically opposed. You have these wildly utopian visions, but also these warnings that AI could end the world. It gets so hyperbolic that it feels like people are not living in the same reality. Was that a challenge for you in writing the book?
Well, I see those two visions — which feel very far apart — actually being part of the same vision, which is that AI is super important, and it’s going to completely transform everything. No one ever talks about the true opposite of that, which is, “Maybe this is going to be a cool enterprise tool, another way to waste time on the internet, and not quite change everything as much as everyone thinks.” So I see the doomers and the boomers feeding off each other and being part of the same sort of hype universe.
As a journalist and as a biographer, you don’t necessarily come down on one side or the other — but actually, can you say where you come down on that?
Well, I will say that I find myself using it a lot more recently, because it’s gotten a lot better. In the early stages, when I was researching the book, I was definitely a lot more skeptical of its transformative economic power. I’m less skeptical now, because I just use it a lot more.
Keep reading the article on TechCrunch
TechCrunch Sessions: AI hits UC Berkeley’s Zellerbach Hall on June 5 — and today’s your shot at AI trivia glory and two tickets for the price of one.
Answer a few brain-busting questions on artificial intelligence, and if you ace it, you might just find a special promo code waiting in your inbox.
Every day brings new questions — so don’t get discouraged if you don’t know today’s answers. But don’t wait too long. The last day of Countdown AI Trivia is June 4. Don’t miss your chance to win big and be part of the AI action this Thursday.
Whether you know which AI model kicked off the large language model revolution or what year OpenAI launched ChatGPT, this is your time to shine.
Step 1: Answer the AI trivia questions on this form
Step 2: Watch your inbox for the special code if you win
Step 3: Use the code to claim your 2-for-1 ticket deal
Show off your AI knowledge in this quick trivia round.
Keep reading the article on TechCrunch
Artificial intelligence has no shortage of visionaries—but the ones who matter are executing. In 4 days, TechCrunch Sessions: AI brings those builders, researchers, funders, and enthusiasts under one roof at UC Berkeley’s Zellerbach Hall.
This isn’t a parade of AI hype or a string of over-edited keynotes. It’s a single day designed for clarity, candor, and real connection.
It’s also your last chance to save. Ticket prices rise soon — but right now, you can save over $300 on your pass and get 50% off a second, so your partner, co-founder, or friend can dive in with you.
Maybe it’s a fireside chat with Jared Kaplan of Anthropic on frontier models. Maybe it’s a breakout session on enterprise deployment with leaders from SAP. Or maybe it’s a deep-dive conversation sparked through the Braindate app — our smarter tool for face-to-face matching based on shared interests. You never know where the game-changing idea will come from. You just need to be in the room.
So You Think You Can Pitch puts AI startups in front of investors for live, unscripted feedback. It’s fast-paced, transparent, and sharp—exactly what early founders need to understand how real funding decisions happen.
We’ve kept the pricing generous, but the clock is ticking. Save over $300 on your TC Sessions: AI pass and get a second one at 50% off. Group discounts apply too. On June 5, prices go full fare—and with them, your shot at big savings disappears. Lock in your low-rate tickets here.
Interested in a deeper discount? Participate in our AI trivia for a chance to purchase a ticket at $200 and receive a second ticket for free.
Keep reading the article on TechCrunch