The Uncertain Path Forward

The rapid advancements in AI—along with the legal, ethical, and competitive battles they spark—highlight the unpredictable landscape of the tech industry. From Elon Musk’s latest trademark dispute over “Grok” to Databricks’ efforts to improve AI through reinforcement learning, and Nvidia’s bold bet on synthetic data, the future of artificial intelligence remains a mix of innovation, controversy, and uncertainty.
While AI continues to push boundaries, fundamental questions remain: Who owns the rights to names, data, and even the very intelligence that these systems generate? How will synthetic data reshape industries, and at what cost? As companies race to refine and deploy AI models, the industry will need to navigate regulatory scrutiny, ethical dilemmas, and the growing tensions between open-source ideals and corporate control.
For now, AI’s trajectory is clear—bigger models, more data, and deeper integration into every facet of business and society. But as history has shown, with great innovation come even greater challenges. The coming years will reveal whether these advancements lead to a more equitable and efficient future or a battleground of competing interests and unresolved disputes.
Startup Founder Claims Elon Musk Is Stealing the Name ‘Grok’

Elon Musk said he borrowed the name from a 1960s science fiction novel, but another AI startup applied to trademark it before xAI launched its chatbot.
Elon Musk during a cabinet meeting at the White House on Monday, March 24, 2025. Photograph: Samuel Corum/Getty Images
Elon Musk’s xAI is facing a potential trademark dispute over the name of its chatbot, Grok. The company’s trademark application with the US Patent and Trademark Office has been suspended after the agency argued the name could be confused with that of two other companies, AI chipmaker Groq and software provider Grokstream. Now, a third tech startup called Bizly is claiming it owns the rights to “Grok.”
This isn’t the first time Musk has chosen a name for one of his products that other companies say they trademarked first. Last month, Musk’s social media platform settled a lawsuit brought by a marketing firm that claimed it owns exclusive rights to the name X.
Bizly and xAI appear to have arrived at the name Grok independently. Bizly founder Ron Shah says he came up with it during a brainstorming session with a colleague who used the word as a verb. (The phrase “to grok” is frequently used in tech circles to mean “to understand.”) “I was like, that’s exactly the name,” Shah tells WIRED. “We got excited, high-fived, it was the name!”
Musk has said he named his chatbot after a term used in the 1961 science fiction novel Stranger in a Strange Land, according to The Times of India. Author Robert A. Heinlein imagined “grok” as a word in a Martian lexicon that also meant “to understand.”
Shah says he applied to trademark the name Grok in 2021. Two years later, he was in the midst of launching an AI-powered app for asynchronous meetings called Grok when Musk announced his chatbot with the same name. “It was a day I’ll never forget,” Shah says. “I woke up and looked at my phone, and there were so many messages from friends saying ‘did you get acquired by Elon? Congrats!’ It was a complete shock to me.”
Shah insists xAI infringed on his trademark. But under US law, trademark regulations are primarily designed to protect consumers rather than companies, says Josh Gerben, founder of Gerben IP, a law firm focused exclusively on trademarks. “The goal is to not have confusion as to who is behind a product or service,” he says.
For example, Musk’s former partner Grimes also trademarked the name Grok for a plushie AI-powered kids toy, but that application is very different from a software tool, reducing the likelihood of consumers getting them mixed up. “The details matter,” Gerben says. “What does the original Grok do, and what does this new one do? Are they operating in the same channel of trade?”
In Bizly’s case, the answers to those questions are fairly murky.
Databricks Has a Trick That Lets AI Models Improve Themselves

Databricks, a company that helps big businesses build custom artificial intelligence models, has developed a machine-learning trick that can boost the performance of an AI model without the need for clean labeled data.
Jonathan Frankle, chief AI scientist at Databricks, spent the past year talking to customers about the key challenges they face in getting AI to work reliably.
The problem, Frankle says, is dirty data.
“Everybody has some data, and has an idea of what they want to do,” Frankle says. But the lack of clean data makes it challenging to fine-tune a model to perform a specific task. “Nobody shows up with nice, clean fine-tuning data that you can stick into a prompt or an [application programming interface]” for a model.
Databricks’ model could allow companies to eventually deploy their own agents to perform tasks, without data quality standing in the way.
The technique offers a rare look at some of the key tricks that engineers are now using to improve the abilities of advanced AI models, especially when good data is hard to come by. The method leverages ideas that have helped produce advanced reasoning models by combining reinforcement learning, a way for AI models to improve through practice, with “synthetic,” or AI-generated, training data.
The latest models from OpenAI, Google, and DeepSeek all rely heavily on reinforcement learning as well as synthetic training data. WIRED revealed that Nvidia plans to acquire Gretel, a company that specializes in synthetic data. “We’re all navigating this space,” Frankle says.
The Databricks method exploits the fact that, given enough tries, even a weak model can score well on a given task or benchmark. Researchers call this method of boosting a model’s performance “best-of-N.” Databricks trained a model to predict which best-of-N result human testers would prefer, based on examples. The Databricks reward model, or DBRM, can then be used to improve the performance of other models without the need for further labeled data.
DBRM is then used to select the best outputs from a given model. This creates synthetic training data for further fine-tuning the model so that it produces a better output the first time. Databricks calls its new approach Test-time Adaptive Optimization or TAO. “This method we’re talking about uses some relatively lightweight reinforcement learning to basically bake the benefits of best-of-N into the model itself,” Frankle says.
He adds that the research done by Databricks shows that the TAO method improves as it is scaled up to larger, more capable models. Reinforcement learning and synthetic data are already widely used, but combining them in order to improve language models is a relatively new and technically challenging technique.
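To make the mechanism concrete, here is a minimal, self-contained Python sketch of the general best-of-N idea described above, with toy stand-ins for the base model and the reward model; it illustrates the technique, not Databricks’ actual TAO or DBRM code.

import random

def toy_model(prompt: str) -> str:
    # Stand-in for sampling one candidate response from a base language model.
    return f"{prompt} -> draft #{random.randint(1, 1000)}"

def toy_reward_model(prompt: str, response: str) -> float:
    # Stand-in for a DBRM-style scorer that predicts which output humans would prefer.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Sample n candidates and keep the one the reward model scores highest.
    candidates = [toy_model(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: toy_reward_model(prompt, c))

def build_synthetic_dataset(prompts: list[str], n: int = 8) -> list[tuple[str, str]]:
    # The selected outputs become (prompt, best response) pairs of synthetic training data.
    return [(p, best_of_n(p, n)) for p in prompts]

print(build_synthetic_dataset(["Summarize the Q3 revenue report", "Flag anomalies in these claims"]))

In TAO, a lightweight reinforcement-learning step then fine-tunes the model on pairs like these, so that it produces the preferred answer on its first attempt rather than needing N tries.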
Databricks is unusually open about how it develops AI, because it wants to show customers that it has the skills needed to create powerful custom models for them. The company previously revealed to WIRED how it developed DBRX, a cutting-edge open source large language model (LLM) from scratch.
Without well-labeled, carefully curated data, it is challenging to fine-tune an LLM to do specific tasks more effectively, such as analyzing financial reports or health records to find patterns or identify problems. Many companies now hope to use LLMs to automate tasks with so-called agents.
An agent used in finance might, for example, analyze a company’s key performance indicators, then generate a report and automatically send it to different analysts. One used in health insurance might help guide customers toward information about a relevant drug or condition.
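As a rough, hypothetical illustration of the kind of finance agent described above, the Python sketch below wires together a data lookup, a report generator, and a delivery step; every function name and value is a placeholder rather than any real product’s API, and in practice the drafting step would be a call to a fine-tuned LLM.

def fetch_kpis(company: str) -> dict:
    # Placeholder for querying a data warehouse or financial reporting system.
    return {"revenue growth": 0.12, "customer churn": 0.04}

def draft_report(company: str, kpis: dict) -> str:
    # Placeholder for an LLM call; here it simply formats the metrics.
    lines = [f"{company} key performance indicators:"]
    lines += [f"- {name}: {value:.0%}" for name, value in kpis.items()]
    return "\n".join(lines)

def send_to_analysts(report: str, analysts: list[str]) -> None:
    # Placeholder for an email or messaging integration.
    for address in analysts:
        print(f"Sending to {address}:\n{report}\n")

company = "Acme Corp"
send_to_analysts(draft_report(company, fetch_kpis(company)), ["analyst1@example.com", "analyst2@example.com"])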
Nvidia Bets Big on Synthetic Data

Nvidia has acquired synthetic data startup Gretel to bolster the AI training data used by the chip maker’s customers and developers.
Nvidia CEO Jensen Huang addresses participants at the keynote of CES 2025 in Las Vegas, Nevada. Photograph: Artur Widak/Getty Images
Nvidia has acquired synthetic data firm Gretel for nine figures, according to two people with direct knowledge of the deal.
The acquisition price exceeds Gretel’s most recent valuation of $320 million, the sources say, though the exact terms of the purchase remain unknown. Gretel and its team of approximately 80 employees will be folded into Nvidia, where its technology will be deployed as part of the chip giant’s growing suite of cloud-based, generative AI services for developers.
The acquisition comes as Nvidia has been rolling out synthetic data generation tools, so that developers can train their own AI models and fine-tune them for specific apps. In theory, synthetic data could create a near-infinite supply of AI training data and help solve the data scarcity problem that has been looming over the AI industry since ChatGPT went mainstream in 2022—although experts say using synthetic data in generative AI comes with its own risks.
Gretel was founded in 2019 by Alex Watson, John Myers, and Ali Golshan, who also serves as CEO. The startup offers a synthetic data platform and a suite of APIs to developers who want to build generative AI models, but don’t have access to enough training data or have privacy concerns around using real people’s data. Gretel doesn’t build and license its own frontier AI models, but fine-tunes existing open source models to add differential privacy and safety features, then packages those together to sell them. The company raised more than $67 million in venture capital funding prior to the acquisition, according to Pitchbook.
A spokesperson for Gretel declined to comment.
Unlike human-generated or real-world data, synthetic data is computer-generated and designed to mimic real-world data. Proponents say this makes the data generation required to build AI models more scalable, less labor intensive, and more accessible to smaller or less-resourced AI developers. Privacy protection is another key selling point of synthetic data, making it an appealing option for health care providers, banks, and government agencies.
Nvidia has already been offering synthetic data tools for developers for years. In 2022 it launched Omniverse Replicator, which gives developers the ability to generate custom, physically accurate, synthetic 3D data to train neural networks. Last June, Nvidia began rolling out a family of open AI models that generate synthetic training data for developers to use in building or fine-tuning LLMs. Called Nemotron-4 340B, these models can be used by developers to drum up synthetic data for their own LLMs across “health care, finance, manufacturing, retail, and every other industry.”
During his keynote presentation at Nvidia’s annual developer conference this Tuesday, Nvidia cofounder and chief executive Jensen Huang spoke about the challenges the industry faces in rapidly scaling AI in a cost-effective way.
“There are three problems that we focus on,” he said. “One, how do you solve the data problem? How and where do you create the data necessary to train the AI? Two, what’s the model architecture? And then three, what are the scaling laws?” Huang went on to describe how the company is now using synthetic data generation in its robotics platforms.
Synthetic data can be used in at least a couple different ways, says Ana-Maria Cretu, a postdoctoral researcher at the École Polytechnique Fédérale de Lausanne in Switzerland, who studies synthetic data privacy. It can take the form of tabular data, like demographic or medical data, which can solve a data scarcity issue or create a more diverse dataset.
Cretu gives an example: If a hospital wants to build an AI model to track a certain type of cancer, but is working with a small data set from 1,000 patients, synthetic data can be used to fill out the data set, eliminate biases, and anonymize data from real humans.
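To make the tabular case concrete, here is a minimal Python sketch of the general idea rather than Gretel’s or Nvidia’s actual tooling, and far simpler than production systems, which add safeguards such as differential privacy: fit simple per-column statistics on a small real table, then sample new rows that mimic it without mapping back to any individual patient.

import random
import statistics

# Tiny stand-in for a small real clinical table.
real_patients = [
    {"age": 54, "tumor_size_mm": 12.0},
    {"age": 61, "tumor_size_mm": 18.5},
    {"age": 47, "tumor_size_mm": 9.3},
]

def fit_columns(rows: list[dict]) -> dict:
    # Estimate a mean and standard deviation for each numeric column.
    return {
        key: (statistics.mean([r[key] for r in rows]), statistics.pstdev([r[key] for r in rows]))
        for key in rows[0]
    }

def sample_synthetic(columns: dict, n: int) -> list[dict]:
    # Draw new rows from the fitted distributions to fill out the dataset.
    return [
        {key: round(random.gauss(mu, sigma), 1) for key, (mu, sigma) in columns.items()}
        for _ in range(n)
    ]

print(sample_synthetic(fit_columns(real_patients), 5))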
