A picture is a moment frozen in time, but new research from the University of Washington could turn that static image into a moving memory.
Researchers at the university have trained a neural network to estimate movement in a photograph using thousands of videos of materials with fluid motion, such as waterfalls, oceans and rivers.
The result is looping videos of waterfalls, rippling puddles and flowing rivers that bring previously still images to life, with the hope that the technique could one day be applied to people.
"What's special about our method is that it doesn't require any user input or extra information," said Aleksander Hołyński, a doctoral student at the Paul G. Allen School of Computer Science & Engineering.
"All you need is a picture. And it produces as output a high-resolution, seamlessly looping video that quite often looks like a real video."
"One of the biggest challenges facing the team was the requirement to, effectively, predict the future," said Hołyński.
"And in the real world, there are nearly infinite possibilities of what might happen next."
During training, the network was shown only a single frame of a video and asked to guess what the motion would look like. That prediction was then compared with the actual video, letting the network learn which visual clues indicate how things are likely to move.
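As a rough illustration of that training signal, the network's predicted per-pixel motion can be scored against the motion observed in the real video; the function below is a minimal sketch of such a comparison (the researchers' actual network and loss are more sophisticated, and the names here are hypothetical):

```python
import numpy as np

def motion_prediction_loss(predicted_motion, true_motion):
    """Mean absolute (L1) difference between two per-pixel motion fields.

    Both inputs are arrays of shape (height, width, 2) holding (dy, dx)
    displacements. Minimising this value pushes the network to predict
    motion that matches what actually happened in the video.
    """
    return np.mean(np.abs(predicted_motion - true_motion))
```

A perfect prediction scores zero; the further the guessed motion drifts from the observed motion, the larger the penalty.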
The researchers then used a technique called "symmetric splatting", which moves each pixel according to both its predicted future motion and its past motion, then blends the two warped images into a single, seamlessly looping animation.
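The idea can be sketched in a few lines: each pixel is splatted forward from the start of the loop and backward from its end, and the two warped copies are blended with weights that favour whichever source frame is temporally closer. This is a simplified sketch under assumed inputs (a constant per-pixel motion field and nearest-pixel splatting), not the researchers' implementation:

```python
import numpy as np

def symmetric_splat(image, motion, t, total):
    """Blend a forward-warped and a backward-warped copy of `image`.

    image:  (h, w, 3) array; motion: (h, w, 2) per-pixel (dy, dx) field.
    Pixels travel t steps forward from the loop's start and (total - t)
    steps backward from its end; weighting the two warps by temporal
    distance makes frame `total` line up with frame 0, closing the loop.
    """
    h, w = image.shape[:2]
    out = np.zeros_like(image, dtype=float)
    weight = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[0:h, 0:w]
    a_fwd = (total - t) / total   # forward warp dominates early in the loop
    a_bwd = t / total             # backward warp dominates late in the loop
    for disp, alpha in ((motion * t, a_fwd), (-motion * (total - t), a_bwd)):
        # nearest-pixel splat: round each destination and accumulate
        ty = np.clip(np.round(ys + disp[..., 0]).astype(int), 0, h - 1)
        tx = np.clip(np.round(xs + disp[..., 1]).astype(int), 0, w - 1)
        np.add.at(out, (ty, tx), image * alpha)
        np.add.at(weight, (ty, tx), alpha)
    # normalise where pixels overlapped; avoid division by zero in holes
    return out / np.maximum(weight, 1e-8)[..., None]
```

Calling this for t = 0 … total yields one frame per step; because the forward weight fades out exactly as the backward weight fades in, the last frame matches the first.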
There are some limitations to the current technology, however, particularly with reflections on water and the way water distorts the appearance of objects beneath it.
"When we see a waterfall, we know how the water should behave. The same is true for fire or smoke. These types of motions obey the same set of physical laws, and there are usually cues in the image that tell us how things should be moving," Hołyński said.
But that hasn't stopped the development team from thinking about how their technique could be applied in the future.
"We'd love to extend our work to operate on a wider range of objects, like animating a person's hair blowing in the wind," Hołyński said.
"I'm hoping that eventually the pictures that we share with our friends and family won't be static images. Instead, they'll all be dynamic animations like the ones our method produces."