Star Trek’s holodeck is one of the most alluring sci-fi technologies: You give a few verbal instructions to a computer, and boom, you’re on a street in 1940s San Francisco, or wherever you want to be. We may never have holograms you can touch, but the part where a computer can generate any 3D scene it’s asked for is being worked on right now by a small studio in London.
At the Game Developers Conference in San Francisco on Wednesday, Anything World CEO Gordon Midwood asked me what I wanted to see. I said I wanted to see a donkey, and a few seconds later, a donkey was walking around on the screen in front of us. Sure, it sort of walked like a horse, and yeah, all it did was mosey around a field, but those are just details. The software delivered on its basic promise: I asked for a donkey and a donkey appeared.
For the next demonstration, Midwood took his hands away from the keyboard. “Let’s make an underwater world and add 100 sharks and a dolphin,” he said into a microphone. A few seconds later, I was looking at a dolphin who’d shown up to the wrong party: 100 swimming sharks.
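Anything World hasn't published how its voice interface works, but the first step of any system like it is turning a sentence into a structured scene request. Here's a deliberately crude sketch of that idea; every name and heuristic in it is invented for illustration, not taken from Anything World's actual software:

```python
import re

def parse_command(command: str) -> list[tuple[int, str]]:
    """Toy sketch: pull (count, model-name) pairs out of a natural-language
    scene request. A real system would use proper NLP; this regex only
    handles phrases like "100 sharks" or "a dolphin"."""
    number_words = {"a": 1, "an": 1, "one": 1}
    pairs = []
    for count, noun in re.findall(r"\b(\d+|a|an|one)\s+(\w+)", command.lower()):
        n = int(count) if count.isdigit() else number_words[count]
        pairs.append((n, noun.rstrip("s")))  # crude singularization
    return pairs

print(parse_command("add 100 sharks and a dolphin"))
# [(100, 'shark'), (1, 'dolphin')]
```

Each pair would then become a lookup against a model library and a spawn request in the engine; the hard part, of course, is everything this sketch skips.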
Developers who want to use Anything World as a game development or prototyping tool will incorporate it into an engine like Unity, but as Midwood demonstrated, it can also produce scenes, objects, and creatures on the fly. It was the coolest thing I saw on the GDC show floor, and others have already noticed its potential. Roblox is exploring a deal with the company, and Ubisoft is already using the software for prototyping, as well as for a collaborative project called Rabbids Playground.
How it works
With so much blockchain stuff haunting GDC, the sight of an older tech buzzword was comforting: Anything World uses machine learning algorithms developed in part during a University of London research project that lasted more than a year. In brief, the team has built automated methods for teaching a system to analyze 3D models from sources like Sketchfab and to classify, segment, arrange, and animate them (or not) in ways that make sense to human beings. Right now it can pull from over 500,000 models.
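The actual classifiers are proprietary, but the classify-then-animate idea can be sketched with a toy stand-in. Everything below is assumption for illustration's sake (the categories, the rig names, the heuristics), not Anything World's real pipeline:

```python
# Hypothetical category -> animation rig mapping, invented for illustration.
ANIMATION_RIGS = {
    "quadruped": "walk_cycle_4leg",
    "fish": "swim_cycle",
    "static": None,  # e.g. tables, rocks: no animation applied
}

def classify(tags: set[str], leg_like_parts: int) -> str:
    """Toy stand-in for the ML classifier described above: guess a
    category from a model's tags and simple geometry features."""
    if {"fish", "shark", "dolphin"} & tags:
        return "fish"
    # Note how a heuristic like "four leg-like parts" is exactly the
    # kind of rule that could mistake a four-legged table for an animal.
    if leg_like_parts == 4 or {"donkey", "horse", "dog"} & tags:
        return "quadruped"
    return "static"

category = classify({"donkey", "animal"}, leg_like_parts=4)
print(category, ANIMATION_RIGS[category])
# quadruped walk_cycle_4leg
```

The real system learns these distinctions from data rather than hand-written rules, which is why it can generalize to half a million models, and also why it occasionally gets things memorably wrong.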
Of course, sometimes Anything World gets things wrong: The software once thought a table was a quadruped, and another time it believed the top of a pineapple was the legs of a spider, which was “scary,” says Midwood.
It’s early days (at least compared to Star Trek: The Next Generation, which takes place in the 2360s), but even at this rather crude stage it’s fun to see how an automated learning system pairs the 3D models it’s been given with what it ‘knows’ about animal locomotion—I felt oddly proud of my trotting donkey, as if I were somehow responsible for giving it life just by requesting it.
For non-developers, Midwood thinks Anything World has potential in super-accessible game creation tools, or just as a fun and useful thing to have at hand. For instance, you could use it to create green screen sets on the fly while streaming, or actually treat it like a holodeck computer, putting on a VR headset and requesting a scene to relax in.
Meta (the company formerly known as Facebook) demonstrated something similar last month, though without animated creatures. In response, Anything World released a parody demo. Interpreting what people want at the level of natural language is perhaps one of the end goals for all software, so it’s no surprise that there’s competition in the ‘make 3D things appear by asking for them’ sector. Anything World’s technology looks stronger than Meta’s right now, though. It’s a fairly small company, too, with six machine learning experts and nine other technical roles working on the tool.
In the future, Anything World plans to release versions with higher-fidelity models and animations—it’s got an Unreal Engine version coming, and plans to make use of Epic’s Quixel models—as well as a consumer application of its own. Right now, it’s available to use with Unity.
Anything World is a long way from a Star Trek computer’s understanding of the physical world—I doubt it knows anything at all about 1940s San Francisco—but just because donkeys may walk a little like horses right now doesn’t mean they will tomorrow. Midwood won’t promise me a holodeck just yet, but he’s confident that the system’s ability to label and animate 3D models will only get more granular and nuanced.