Meet Aleph Alpha, Europe’s Answer to OpenAI

The European Union is desperate for its own artificial intelligence giant. German startup Aleph Alpha might be its best hope.
Founder & CEO of Aleph Alpha, Jonas Andrulis. Photograph: Andreas Rentz/Getty Images

Europe wants its own OpenAI. The bloc’s politicians are sick of regulating American tech giants from afar. They want Europe to build its own generative AI, which is why so many people are rooting for Jonas Andrulis, an easy-going German with a carefully pruned goatee.

Ask people within Europe’s tech bubble which AI companies they’re excited about and the names that come up most are Mistral, a French startup that has raised $100 million without releasing any products, and the company Andrulis founded, Aleph Alpha, which sells generative AI as a service to companies and governments and already has thousands of paying customers.

Skeptics in the industry question whether the company can really compete in the same league as Google and OpenAI, whose ChatGPT launched the current boom in generative AI. But many in the European Union are hoping that Aleph Alpha can counteract American dominance in what some believe will be an era-defining technology. The bloc has a long history of tussles over privacy and data security with US tech giants. Some Europeans feel the election of Donald Trump demonstrated how much their values have diverged from those of their counterparts in Washington DC. Others just don’t want to be passive observers with such an enormous economic opportunity at stake.

While Andrulis stresses that his company is not a “nationalist project”—there are plenty of Americans working at Aleph Alpha—he appears comfortable being at Europe’s vanguard. “I personally care a lot about helping Europe make a contribution beyond the cookie banner,” he says.

Now 41, Andrulis spent three years working on AI at Apple before leaving in 2019 to explore the technology’s potential outside the constraints of a big corporation. He set up Aleph Alpha in Heidelberg, a city in southwestern Germany. The company set to work building large language models, a type of AI that identifies patterns in human language in order to generate its own text or analyze huge numbers of documents. Two years later, Aleph Alpha raised $27 million, an amount that’s expected to be dwarfed by a new funding round Andrulis hints could be announced in the coming weeks.

Right now, the company’s clients—which range from banks to government agencies—are using Aleph Alpha’s LLM to write new financial reports, concisely summarize hundreds of pages, and build chatbots that are experts in how a certain company works. “I think a good rule of thumb is whatever you could teach an intern, our technology can do,” Andrulis says. The challenge, he says, is making the AI customizable so businesses using it feel in control and have a say in how it works. “If you’re a large international bank and you want to have a chatbot that is very insulting and sarcastic, I think you should have every right.”

But Andrulis considers LLMs just a stepping stone. “What we are building is artificial general intelligence,” he says. AGI, as it’s known, is widely seen as the ultimate aim of generative AI companies—an artificial, humanlike intelligence that can be applied to a wide range of tasks.

The interest Aleph Alpha has received so far—the company claims 10,000 customers across both business and government—shows it is able to compete, or at least coexist, with the emerging giants of the field, says Jörg Bienert, who is CEO of the German AI Association, an industry group. “This demand definitely shows it really makes sense to develop and provide these types of models in Germany,” he says. “Especially when it comes to governmental institutions that clearly want to have a solution that is developed and hosted in Europe.”

Last year, Aleph Alpha opened its first data center in Berlin so it could better cater to clients in highly regulated sectors, such as government and security agencies, that want to ensure their sensitive data is hosted in Germany. The concern about sending private data overseas is just one reason it’s important to develop European AI, says Bienert. Another, he says, is ensuring that European languages are not excluded from AI developments.

Aleph Alpha’s model can already communicate in German, French, Spanish, Italian, and English, and its training data includes the vast repository of multilingual public documents published by the European Parliament. But it’s not only the languages the company’s AI speaks that emphasize its European origins. Its models are also built to show how they reach their conclusions, an emphasis on transparent decision-making that is part of an effort to combat the problem of AI systems “hallucinating,” or confidently sharing information that is wrong.

Andrulis jumps at the chance to demonstrate how Aleph Alpha’s AI explains its decisions. When he asks Aleph Alpha’s model to describe the protagonist of H. P. Lovecraft’s short story “The Terrible Old Man,” the AI replies: “The terrible old man is described as exceedingly feeble, physically and mentally.”

Andrulis shows me how he can click on each of the words in that sentence to trace what informed the AI’s decision to say what it said. If Andrulis clicks on the word “mentally,” the AI refers him to the bit of text in the short story that informed that decision. This feature also works with images, he says. When the AI describes an image of the sun setting over Heidelberg, he can click on the word “sunset” and the AI again shows its workings—drawing a square around the part of the image where the horizon fades into layers of reds and yellows.
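The underlying idea can be sketched in miniature. The snippet below is not Aleph Alpha’s actual technique, which the company has not detailed here; it is an illustrative, occlusion-style attribution demo built on the open source GPT-2 model from the Hugging Face transformers library. It removes one source passage at a time and measures how much the probability of a chosen output word drops; the passage whose removal hurts the most is the one that most supports that word. The passages, question, and target word are invented stand-ins.

    # Illustrative only: a toy, occlusion-style attribution sketch, not Aleph Alpha's
    # actual technique. GPT-2 (via Hugging Face transformers) stands in for the model;
    # the passages, question, and target word below are invented for the demo.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def answer_logprob(context: str, answer: str) -> float:
        """Log-probability the model assigns to `answer` immediately after `context`."""
        ctx_ids = tokenizer(context, return_tensors="pt").input_ids
        ans_ids = tokenizer(answer, return_tensors="pt").input_ids
        input_ids = torch.cat([ctx_ids, ans_ids], dim=1)
        with torch.no_grad():
            logits = model(input_ids).logits
        # Each answer token is scored by the logits at the position just before it.
        logprobs = torch.log_softmax(logits[0, ctx_ids.shape[1] - 1 : -1], dim=-1)
        return logprobs.gather(1, ans_ids[0].unsqueeze(1)).sum().item()

    # Toy source passages a summary might be grounded in.
    passages = [
        "The old man was said to be exceedingly feeble in body and mind.",
        "He lived alone in a very ancient house on Water Street.",
        "Sailors once knew him as the captain of a clipper ship.",
    ]
    question = "\n\nQuestion: How is the old man described?\nAnswer: He is"
    target_word = " feeble"

    baseline = answer_logprob(" ".join(passages) + question, target_word)

    # Drop one passage at a time; the biggest fall in the target word's
    # log-probability marks the passage that most supports that word.
    for i, passage in enumerate(passages):
        reduced = " ".join(passages[:i] + passages[i + 1 :]) + question
        drop = baseline - answer_logprob(reduced, target_word)
        print(f"passage {i}: attribution score {drop:+.3f}")

A production system would presumably work at a much finer granularity, and across modalities, as the Heidelberg demo does with images, but the general principle of scoring how strongly each piece of input supports each piece of output is similar.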

Even to AI experts, this feels new. “They have started experimenting with trustworthy AI features, such as explainability, that I haven’t seen before,” says Nicolas Moës, director of European AI governance at the Future Society think tank.

Moës believes these kinds of features could become more widespread once the EU passes its AI Act, sweeping legislation that is expected to include transparency requirements. Trade bodies, including the German AI association, complain that overly broad and onerous rules could slow Europe’s efforts to create a homegrown AI giant, forcing startups to focus on complying with the new rules instead of on innovation. But Moës argues the opposite, saying stricter rules could help European AI companies build better products and create a kind of standard of quality, echoing the success of other tightly regulated European industries. “The reason why German cars are seen as better is because there is a whole testing process,” he says.

Yet despite Aleph Alpha’s work on explainability, there are still doubts about whether the company’s underlying technology is advanced enough to carry Europe’s hopes of building an AI giant.

“Anyone who has interacted with a wide range of language models notices that this is not the best model out there,” says Moës.

Aleph Alpha does not score better than its American competitors on the standardized benchmarks companies use to demonstrate the performance of new AI models, according to Matthias Plappert, who spent four years as a researcher at OpenAI and now works as an AI consultant in Berlin. “People want this to be a success because there is a desire to have a European champion,” he says. “But I do think there’s been an overstatement of how good that company is with respect to the competition.”

But many Europeans remain adamant that they need a viable contender, and not simply for economic reasons. The EU AI industry argues that European companies are likely to be more sensitive to issues such as privacy and discrimination than their counterparts in the US.

“There’s no guarantee that what US [companies] will build will be a good representation of our values,” says Andrulis. That vague term—“European values”—comes up again and again when you ask Europeans why they can’t resign themselves to using American-made AI. Asked what the phrase means to him, the Aleph Alpha chief references the furor surrounding Facebook’s 2017 takedown of an image showing Michelangelo’s famous marble sculpture of David (Facebook told WIRED that its policies now allow paintings and sculptures that depict nudity). “The fact that we [could not] post Michelangelo’s David on Facebook due to nudity, this would not be European values,” he says.

However, he says, it’s not his job to decide how European values should be translated into AI. “My role is to build technology that is excellent and that’s transparent and that’s controllable.”