Google will rename the Gulf of Mexico and Alaska’s Denali mountain in Google Maps once a federal mapping database reflects changes ordered by the Trump Administration, the company announced Monday.
Google is complying with an executive order issued last week by President Donald Trump that changed the names of several American landmarks. The executive order was followed up by a statement from the U.S. Department of the Interior, which said the name changes were now official and America’s geographic naming bodies were working “expeditiously” to fulfill Trump’s order.
“We have a longstanding practice of applying name changes when they have been updated in official government sources,” Google wrote in a post on X.
The Gulf of Mexico’s name in Google Maps will soon be the “Gulf of America” for U.S. users, an entirely new name created by the Trump administration.
Denali, which is North America’s highest peak, will soon appear in Google Maps for all users worldwide under its previous name, Mount McKinley. The Alaskan mountain was named Mount McKinley in 1917 to honor America’s 25th president, William McKinley; the peak was renamed Denali during the Obama administration in 2015. The Trump administration is changing it back, despite protests from some Alaskan senators.
However, not everyone in the world using Google Maps will see the Gulf of America.
Geographic names that are contested between different countries show up in Google Maps under each country’s official name for its own citizens, according to a social media post from Google. That means the Gulf of Mexico’s name will not change for users in Mexico, whose president, Claudia Sheinbaum, mocked Trump for suggesting a new name for the body of water. In other countries, Google Maps will display both the American and Mexican names for the gulf side by side.
Google says it will make these changes when the official American naming database, the Geographic Names Information System (GNIS), is updated. Though the Interior Department announced on Friday that the Gulf of America and Mount McKinley name changes were official, GNIS still listed the names as Gulf of Mexico and Denali as of Monday evening.
Keep reading the article on TechCrunch
Perplexity AI has submitted a revised proposal to merge with TikTok, in an arrangement that would give the U.S. government up to 50 percent ownership of the new entity.
The Associated Press first reported on the new proposal. A source with knowledge of the bid confirmed to TechCrunch that the AP’s reporting is accurate.
The AI search engine had previously proposed creating a new company by combining Perplexity, TikTok US, and additional equity investors. Under the new bid, the government would receive its stake after an initial public offering of at least $300 billion, while TikTok’s current Chinese owner ByteDance could also retain ownership, according to the AP.
Perplexity reportedly revised its bid based on feedback from President Donald Trump’s administration.
TikTok briefly went down last weekend due to a law forcing ByteDance to sell the app or see it banned in the United States. It sprang back to life after Trump said he would sign an executive order extending the sale deadline. He also said he’d like to see the U.S. receive “50% ownership,” although it wasn’t clear whether he meant the government or U.S. investors.
Another report this week suggested that the White House was negotiating a deal that would see Oracle (which already provides the infrastructure for TikTok’s U.S. traffic) take over; when asked, Trump said he’s spoken to “many people about TikTok” but “not with Oracle.”
Keep reading the article on TechCrunch
The Trump administration is negotiating a deal that would see Oracle take over TikTok alongside new U.S. investors, according to a report from NPR.
Lawmakers passed a bill last year forcing Chinese parent company ByteDance to either sell TikTok or see it banned in the U.S. The app briefly went dark before the law took effect on January 20 — until incoming President Donald Trump said he would issue an executive order delaying the ban.
At the time, Trump also outlined his “initial thought” on a deal to save TikTok — creating “a joint venture between the current owners and/or new owners whereby the U.S. gets a 50% ownership.”
NPR’s reporting suggests that a deal is now shaping up where Oracle would take control of TikTok’s global operations while ByteDance retains a minority stake.
Trump tried to force TikTok to sell during his first term, with Oracle emerging as a potential buyer. While that didn’t happen, TikTok later said it shifted all its U.S. traffic to Oracle servers. And at a press conference on Tuesday, Trump said he’d be open to either X owner Elon Musk or Oracle chairman Larry Ellison buying the app.
Meanwhile, some of the senators who supported the ban-or-sell bill have expressed confusion about Trump’s plans and said the law requires ByteDance to fully divest.
Keep reading the article on TechCrunch
2024 was a busy year for lawmakers (and lobbyists) concerned about AI — most notably in California, where Gavin Newsom signed 18 new AI laws while also vetoing high-profile AI legislation.
And 2025 could see just as much activity, especially on the state level, according to Mark Weatherford. Weatherford has, in his words, seen the “sausage making of policy and legislation” at both the state and federal levels; he’s served as Chief Information Security Officer for the states of California and Colorado, as well as Deputy Under Secretary for Cybersecurity under President Barack Obama.
Weatherford said that in recent years, he’s held different job titles, but his role usually boils down to figuring out “how do we raise the level of conversation around security and around privacy so that we can help influence how policy is made.” Last fall, he joined synthetic data company Gretel as its vice president of policy and standards.
So I was excited to talk to him about what he thinks comes next in AI regulation and why he thinks states are likely to lead the way.
This interview has been edited for length and clarity.
That goal of raising the level of conversation will probably resonate with many folks in the tech industry, who have maybe watched congressional hearings about social media or related topics in the past and clutched their heads, seeing what some elected officials know and don’t know. How optimistic are you that lawmakers can get the context they need in order to make informed decisions around regulation?
Well, I’m very confident they can get there. What I’m less confident about is the timeline to get there. You know, AI is changing daily. It’s mindblowing to me that issues we were talking about just a month ago have already evolved into something else. So I am confident that the government will get there, but they need people to help guide them, staff them, educate them.
Earlier this week, the U.S. House of Representatives had a task force they started about a year ago, a task force on artificial intelligence, and they released their report — well, it took them a year to do this. It’s a 230-page report; I’m wading through it right now. [Weatherford and I first spoke in December.]
[When it comes to] the sausage making of policy and legislation, you’ve got two different very partisan organizations, and they’re trying to come together and create something that makes everybody happy, which means everything gets watered down just a little bit. It just takes a long time, and now, as we move into a new administration, everything’s up in the air on how much attention certain things are going to get or not.
It sounds like your viewpoint is that we may see more regulatory action on the state level in 2025 than on the federal level. Is that right?
I absolutely believe that. I mean, in California, I think Governor [Gavin] Newsom, just within the last couple months, signed 12 pieces of legislation that had something to do with AI. [Again, it’s 18 by TechCrunch’s count.] He vetoed the big bill on AI, which was going to really require AI companies to invest a lot more in testing and really slow things down.
In fact, I gave a talk in Sacramento yesterday to the California Cybersecurity Education Summit, and I talked a little bit about the legislation that’s happening across the entire US, all of the states, and it’s like something like over 400 different pieces of legislation at the state level have been introduced just in the past 12 months. So there’s a lot going on there.
And I think one of the big concerns, it’s a big concern in technology in general, and in cybersecurity, but we’re seeing it on the artificial intelligence side right now, is that there’s a harmonization requirement. Harmonization is the word that [the Department of Homeland Security] and Harry Coker at the [Biden] White House have been using to [refer to]: How do we harmonize all of these rules and regulations around these different things so that we don’t have this [situation] of everybody doing their own thing, which drives companies crazy. Because then they have to figure out, how do they comply with all these different laws and regulations in different states?
I do think there’s going to be a lot more activity on the state side, and hopefully we can harmonize these a little bit so there’s not this very diverse set of regulations that companies have to comply with.
I hadn’t heard that term, but that was going to be my next question: I imagine most people would agree that harmonization is a good goal, but are there mechanisms by which that’s happening? What incentive do the states have to actually make sure their laws and regulations are in line with each other?
Honestly, there’s not a lot of incentive to harmonize regulations, except that I can see the same kind of language popping up in different states — which to me, indicates that they’re all looking at what each other’s doing.
But from a purely, like, “Let’s take a strategic plan approach to this amongst all the states,” that’s not going to happen, I don’t have any high hopes for it happening.
Do you think other states might sort of follow California’s lead in terms of the general approach?
A lot of people don’t like to hear this, but California does kind of push the envelope [in tech legislation] that helps people to come along, because they do all the heavy lifting, they do a lot of the work to do the research that goes into some of that legislation.
The 12 bills that Governor Newsom just passed were across the map, everything from pornography to using data to train websites to all different kinds of things. They have been pretty comprehensive about leaning forward there.
Although my understanding is that they passed more targeted, specific measures, and then the bigger regulation that got most of the attention was ultimately vetoed by Governor Newsom.
I could see both sides of it. There’s the privacy component that was driving the bill initially, but then you have to consider the cost of doing these things, and the requirements that it levies on artificial intelligence companies to be innovative. So there’s a balance there.
I would fully expect [in 2025] that California is going to pass something a little bit more strict than what they did [in 2024].
And your sense is that on the federal level, there’s certainly interest, like the House report that you mentioned, but it’s not necessarily going to be as big a priority or that we’re going to see major legislation [in 2025]?
Well, I don’t know. It depends on how much emphasis the [new] Congress brings in. I think we’re going to see. I mean, you read what I read, and what I read is that there’s going to be an emphasis on less regulation. But technology in many respects, certainly around privacy and cybersecurity, it’s kind of a bipartisan issue, it’s good for everybody.
I’m not a huge fan of regulation; there’s a lot of duplication and a lot of wasted resources that happen with so much different legislation. But at the same time, when the safety and security of society is at stake, as it is with AI, I think there’s definitely a place for more regulation.
You mentioned it being a bipartisan issue. My sense is that when there is a split, it’s not always predictable — it isn’t just all the Republican votes versus all the Democratic votes.
That’s a great point. Geography matters, whether we like to admit it or not, and that’s why places like California are really leaning forward in some of their legislation compared to some other states.
Obviously, this is an area that Gretel works in, but it seems like you believe, or the company believes, that as there’s more regulation, it pushes the industry in the direction of more synthetic data.
Maybe. One of the reasons I’m here is, I believe synthetic data is the future of AI. Without data, there’s no AI, and quality of data is becoming more of an issue as the pool of data either gets used up or shrinks. There’s going to be more and more of a need for high quality synthetic data that ensures privacy and eliminates bias and takes care of all of those kinds of nontechnical, soft issues. We believe that synthetic data is the answer to that. In fact, I’m 100% convinced of it.
This is less directly about policy, though I think it has sort of policy implications, but I would love to hear more about what brought you around to that point of view. I think there’s other folks who recognize the problems you’re talking about, but think of synthetic data potentially amplifying whatever biases or problems were in the original data, as opposed to solving the problem.
Sure, that’s the technical part of the conversation. Our customers feel like we have solved that. There is this concept of the flywheel of data generation: if you generate bad data, it gets worse and worse and worse, but you can build controls into this flywheel that validate the data is not getting worse, that it’s staying equal or getting better each time the flywheel comes around. That’s the problem Gretel has solved.
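To make the flywheel idea concrete, here is a minimal, purely illustrative sketch of a generation loop that only accepts a new batch of synthetic data when a quality check shows it holding steady or improving. This is not Gretel’s actual implementation; the generate_synthetic and quality_score functions are hypothetical placeholders for a real generator and a real quality/bias metric.

```python
import random

def generate_synthetic(seed_data):
    # Hypothetical stand-in for a synthetic data generator trained on seed_data.
    return [x + random.gauss(0, 0.1) for x in seed_data]

def quality_score(data):
    # Hypothetical stand-in for a quality metric (higher is better).
    return 1.0 / (1.0 + abs(sum(data) / len(data)))

def flywheel(seed_data, rounds=5):
    data, best = seed_data, quality_score(seed_data)
    for _ in range(rounds):
        candidate = generate_synthetic(data)
        score = quality_score(candidate)
        # The control: only feed the new batch back in if quality did not degrade.
        if score >= best:
            data, best = candidate, score
    return data

print(flywheel([0.5, -0.2, 0.1]))
```

The point of the gate in the loop is the "control" Weatherford describes: each turn of the flywheel is validated before it becomes the input to the next turn, so bad data cannot compound.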
Many Trump-aligned figures in Silicon Valley have been warning about AI “censorship” — the various weights and guardrails that companies put around the content created by generative AI. Do you think that’s likely to be regulated? Should it be?
Regarding concerns about AI censorship, the government has a number of administrative levers they can pull, and when there is a perceived risk to society, it’s almost certain they will take action.
However, finding that sweet spot between reasonable content moderation and restrictive censorship will be a challenge. The incoming administration has been pretty clear that “less regulation is better” will be the modus operandi, so whether through formal legislation or executive order, or less formal means such as [National Institute of Standards and Technology] guidelines and frameworks or joint statements via interagency coordination, we should expect some guidance.
I want to get back to this question of what good AI regulation might look like. There’s this big spread in terms of how people talk about AI, like it’s either going to save the world or going to destroy the world, it’s the most amazing technology, or it’s wildly overhyped. There’s so many divergent opinions about the technology’s potential and its risks. How can a single piece or even multiple pieces of AI regulation encompass that?
I think we have to be very careful about managing the sprawl of AI. We have already seen some of the really negative aspects with deepfakes; it’s concerning to see young kids now in high school and even younger who are generating deepfakes that are getting them in trouble with the law. So I think there’s a place for legislation that controls how people can use artificial intelligence in ways that don’t violate what may be an existing law: we create a new law that reinforces current law, but just brings the AI component into it.
I think we — those of us that have been in the technology space — all have to remember, a lot of this stuff that we just consider second nature to us, when I talk to my family members and some of my friends that are not in technology, they literally don’t have a clue what I’m talking about most of the time. We don’t want people to feel like big government is over-regulating, but it’s important to talk about these things in language that non-technologists can understand.
But on the other hand, you probably can tell it just from talking to me, I am giddy about the future of AI. I see so much goodness coming. I do think we’re going to have a couple of bumpy years as people get more in tune with it and understand it more, and legislation is going to have a place there, to both let people understand what AI means to them and put some guardrails up around AI.
Keep reading the article on TechCrunch
Legendary musician Paul McCartney is warning against proposed changes to UK copyright law that would allow tech companies to freely train their models on online content unless the copyright holders actively opt out.
In excerpts of an interview with the BBC, McCartney said the government needs to do more to protect musicians and other artists.
“We’re the people, you’re the government!” he said. “You’re supposed to protect us. That’s your job. So if you’re putting through a bill, make sure you protect the creative thinkers, the creative artists, or you’re not going to have them.”
McCartney isn’t necessarily opposed to the use of AI in creating music — indeed, he took advantage of the technology last year to clean up an old John Lennon demo and create what McCartney called “the last Beatles record.” However, he suggested that AI (or at least AI with a loose approach to copyright) poses an economic threat to artists.
“You get young guys, girls, coming up, and they write a beautiful song, and they don’t own it, and they don’t have anything to do with it, and anyone who wants can just rip it off,” McCartney said.
Adding that “the money’s going somewhere,” he said the financial rewards for creating a hit song should go to the artist, not just “some tech giant somewhere.”
Keep reading the article on TechCrunch
Significantly more companies spent money lobbying on AI issues at the U.S. federal level last year than in 2023, amid regulatory uncertainty.
According to data compiled by OpenSecrets, 648 companies spent on AI lobbying in 2024 versus 458 in 2023, a roughly 41% year-over-year increase.
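For readers who want to check the math, here is a minimal sketch of how that year-over-year change works out, using only the OpenSecrets figures quoted above:

```python
# Year-over-year change in the number of companies lobbying on AI (OpenSecrets figures).
companies_2023 = 458
companies_2024 = 648

increase = (companies_2024 - companies_2023) / companies_2023
print(f"{increase:.1%}")  # 41.5% -- 648 is roughly 141% *of* 458, i.e. about a 41% increase
```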
Companies like Microsoft supported legislation such as the CREATE AI Act, which would support the benchmarking of AI systems developed in the U.S. Others, including OpenAI, put their weight behind the Advancement and Reliability Act, which would set up a dedicated government center for AI research.
Most AI labs — that is, companies dedicated almost exclusively to commercializing various kinds of AI tech — spent more backing legislative agenda items in 2024 than in 2023, the data shows.
OpenAI upped its lobbying expenditures to $1.76 million last year from $260,000 in 2023. Anthropic, OpenAI’s close rival, more than doubled its spend from $280,000 in 2023 to $720,000 last year, and enterprise-focused startup Cohere boosted its spending to $230,000 in 2024 from just $70,000 in 2023.
Both OpenAI and Anthropic made hires over the last year to coordinate their policymaker outreach. Anthropic brought on its first in-house lobbyist, Department of Justice alum Rachel Appleton, and OpenAI hired political veteran Chris Lehane as its new VP of policy.
All told, OpenAI, Anthropic, and Cohere set aside $2.71 million combined for their 2024 federal lobbying initiatives. That’s a tiny figure compared to what the larger tech industry put toward lobbying in the same timeframe ($61.5 million), but more than four times the total that the three AI labs spent in 2023 ($610,000).
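As a rough check, the combined totals above can be reproduced from the per-lab figures already cited in this article; the sketch below is purely illustrative:

```python
# Combined federal lobbying spend for the three AI labs named above.
labs_2024 = {"OpenAI": 1_760_000, "Anthropic": 720_000, "Cohere": 230_000}
labs_2023 = {"OpenAI": 260_000, "Anthropic": 280_000, "Cohere": 70_000}

total_2024 = sum(labs_2024.values())  # 2,710,000 -> the $2.71 million figure
total_2023 = sum(labs_2023.values())  # 610,000   -> the $610,000 figure
print(total_2024, total_2023, round(total_2024 / total_2023, 1))  # ratio ~4.4x
```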
TechCrunch reached out to OpenAI, Anthropic, and Cohere for comment but did not hear back as of press time.
Last year was a tumultuous one in domestic AI policymaking. In the first half alone, Congressional lawmakers considered more than 90 AI-related pieces of legislation, according to the Brennan Center. At the state level, over 700 laws were proposed.
Congress made little headway, prompting state lawmakers to forge ahead. Tennessee became the first state to protect voice artists from unauthorized AI cloning. Colorado adopted a tiered, risk-based approach to AI policy. And California governor Gavin Newsom signed dozens of AI-related safety bills, a few of which require AI companies to disclose details about their training.
However, no state officials were successful in enacting AI regulation as comprehensive as international frameworks like the EU’s AI Act.
After a protracted battle with special interests, Governor Newsom vetoed bill SB 1047, which would have imposed wide-ranging safety and transparency requirements on AI developers. Texas’ TRAIGA (Texas Responsible AI Governance Act) bill, which is even broader in scope, may suffer the same fate once it makes its way through the statehouse.
It’s unclear whether the federal government can make more progress on AI legislation this year versus last, or even whether there’s a strong appetite for codification. President Donald Trump has signaled his intention to largely deregulate the industry, clearing what he perceives to be roadblocks to U.S. dominance in AI.
During his first day in office, Trump revoked an executive order by former president Joe Biden that sought to reduce risks AI might pose to consumers, workers, and national security. On Thursday, Trump signed an EO instructing federal agencies to suspend certain Biden-era AI policies and programs, potentially including export rules on AI models.
In November, Anthropic called for “targeted” federal AI regulation within the next 18 months, warning that the window for “proactive risk prevention is closing fast.” For its part, OpenAI in a recent policy doc called on the U.S. government to take more substantive action on AI and infrastructure to support the technology’s development.
Keep reading the article on TechCrunch