Meta’s Make-A-Video AI achieves a new, nightmarish state of the art

Meta’s researchers have made a significant leap in the AI art generation field with Make-A-Video, the creatively named new technique for — you guessed it — making a video out of nothing but a text prompt. The results are impressive and varied, and all, without exception, slightly creepy.

We’ve seen text-to-video models before — it’s a natural extension of text-to-image models like DALL-E, which output stills from prompts. But while the conceptual jump from still image to moving one is small for a human mind, it’s far from trivial to implement in a machine learning model.

Make-A-Video doesn’t actually change the game that much on the back end — as the researchers note in the paper describing it, “a model that has only seen text describing images is surprisingly effective at generating short videos.”

The AI uses the existing and effective diffusion technique for creating images, which essentially works backwards from pure visual static, “denoising” it step by step into an image that matches the target prompt. What’s added here is that the model was also given unsupervised training (that is to say, it examined the data itself with no strong guidance from humans) on a bunch of unlabeled video content.
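
If you’re curious what that denoising loop looks like in practice, here’s a minimal, textbook-style sketch of a DDPM-type sampler. To be clear, this is a generic illustration, not Meta’s code: `model` stands in for a trained noise-prediction network and `text_emb` for the output of a text encoder.

```python
import torch

@torch.no_grad()
def sample(model, text_emb, steps=1000, shape=(1, 3, 64, 64)):
    """Toy DDPM-style sampler: start from pure noise and iteratively
    denoise toward an image that matches the text prompt."""
    betas = torch.linspace(1e-4, 0.02, steps)        # fixed noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)                           # pure visual static
    for t in reversed(range(steps)):
        # the network predicts the noise present in x at step t,
        # conditioned on the embedding of the text prompt
        eps = model(x, torch.tensor([t]), text_emb)
        # strip out the predicted noise (the standard DDPM mean update)
        x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            # re-inject a little noise so the process stays stochastic
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x                                         # the finished image
```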

What it knows from the first is how to make a realistic image; what it knows from the second is what sequential frames of a video look like. Amazingly, it is able to put these together very effectively with no particular training on how they should be combined.
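
One common recipe for combining the two, sketched below purely as an illustration under my own assumptions rather than as Meta’s actual architecture, is to run the image-trained spatial layers on every frame independently, then interleave temporal layers that mix information across frames:

```python
import torch
import torch.nn as nn

class FactorizedSpatioTemporal(nn.Module):
    """Illustrative pattern only: a spatial layer learned from still
    images, followed by a temporal layer that lets each pixel location
    look across frames. Input shape: (batch, frames, channels, h, w)."""

    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)   # from image training
        self.temporal = nn.Conv1d(channels, channels, 3, padding=1)  # from video training

    def forward(self, x):
        b, f, c, h, w = x.shape
        # apply the image-trained layer to every frame independently
        y = self.spatial(x.reshape(b * f, c, h, w)).reshape(b, f, c, h, w)
        # then mix each spatial location's values across the frame axis
        y = y.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, f)
        y = self.temporal(y)
        return y.reshape(b, h, w, c, f).permute(0, 4, 3, 1, 2)

# e.g. two 16-frame clips of 8-channel 32x32 feature maps; shape is preserved
clip = torch.randn(2, 16, 8, 32, 32)
out = FactorizedSpatioTemporal(channels=8)(clip)     # -> (2, 16, 8, 32, 32)
```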

“In all aspects, spatial and temporal resolution, faithfulness to text, and quality, Make-A-Video sets the new state-of-the-art in text-to-video generation, as determined by both qualitative and quantitative measures,” write the researchers.

It’s hard not to agree. Previous text-to-video systems took a different approach, and the results were unimpressive but promising. Now Make-A-Video blows them out of the water, achieving fidelity in line with images from perhaps 18 months ago, from the original DALL-E and other past-generation systems.

But it must be said: there’s definitely still something off about them. Not that we should expect photorealism or perfectly natural motion, but the results all have a sort of… well, there’s no other word for it: they’re a bit nightmarish, aren’t they?

Image Credits: Meta

There’s just some awful quality to them that is both dreamlike and terrible. The quality of the motion is strange, as if it’s a stop-motion movie. The corruption and artifacts give each piece a furry, surreal feel, like the objects are leaking. People blend into one another; the model has no understanding of objects’ boundaries, of where something should end or what it should be touching.

Image Credits: Meta

I don’t say all this as some kind of AI snob who only wants the best high-definition realistic imagery. I just think it’s fascinating that however realistic these videos are in one sense, they’re all so bizarre and off-putting in others. That they can be generated quickly and arbitrarily is incredible — and it will only get better. But even the best image generators still have that surreal quality that’s hard to put your finger on.

Make-A-Video also allows for transforming still images and other videos into variants or extensions thereof, much like how image generators can also be prompted with images themselves. The results are slightly less disturbing.

This really is a huge step up from what existed before, and the team is to be congratulated. It’s not available to the public just yet, but you can sign up here to get on the list for whatever form of access they decide on later.
