Washington:
Scientists have developed a new algorithm that can automatically pick out the good parts from long videos to produce a miniature trailer.
The algorithm, called LiveLight, has been developed by computer scientists from Carnegie Mellon University.
The method constantly evaluates action in the video, looking for visual novelty and ignoring repetitive or eventless sequences, to create a summary that enables a viewer to get the gist of what happened.
Although not yet comparable to a professionally edited video, it can help people quickly review a long video of an event, a security camera feed, or video from a police cruiser's windshield camera, researchers said.
One application uses LiveLight to automatically digest video from, say, a GoPro camera or Google Glass, and quickly upload thumbnail trailers to social media. Summarising before uploading avoids costly Internet data charges and the tedious manual editing of long videos.
This application, along with the surveillance camera auto-summarisation, is now being developed for the retail market by PanOptus Inc, a startup founded by the inventors of LiveLight.
LiveLight can process one hour of raw video in one to two hours on a conventional laptop. With a more powerful backend computing facility, processing time can be shortened to mere minutes, according to the researchers.
As the algorithm processes the video, it compiles a dictionary of its content. The algorithm then uses the learned dictionary to decide in a very efficient way if a newly seen segment is similar to previously observed events, such as routine traffic on a highway.
Segments thus identified as trivial recurrences or eventless are excluded from the summary. Novel sequences not matching the learned dictionary, such as an erratic car or a traffic accident, would be included in the summary.
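The dictionary-and-novelty idea can be sketched in a few lines. The snippet below is a simplified illustration, not the published LiveLight algorithm: it treats each video segment as a feature vector, keeps a segment only when it is far from everything seen so far, and then adds the kept segment to the dictionary so later repetitions of it are skipped. The feature vectors, threshold, and distance measure are all assumptions for the sake of the example.

```python
import math

def summarize(segments, threshold=1.0):
    """Greedy novelty-based summary (a hypothetical sketch, not
    the actual LiveLight implementation).

    segments: list of per-segment feature vectors.
    Returns the indices of segments judged novel enough to keep.
    """
    dictionary = []  # learned "dictionary" of previously seen content
    summary = []
    for i, seg in enumerate(segments):
        # Distance from this segment to the closest known pattern.
        dists = [math.dist(seg, d) for d in dictionary]
        if not dists or min(dists) > threshold:
            summary.append(i)        # novel: include in the trailer
            dictionary.append(seg)   # treat it as routine from now on
    return summary

# Near-identical "routine traffic" segments are skipped; the two
# distinct events are kept.
print(summarize([[0, 0], [0.1, 0], [5, 5], [0.05, 0.02], [5.1, 5.0]]))  # → [0, 2]
```

The same mechanism explains the efficiency claim: each new segment is compared only against the compact dictionary, not against the entire video seen so far.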
Though LiveLight can produce these summaries automatically, a person can also be kept in the loop to compile the summary.
In that case LiveLight provides a ranked list of novel sequences for a human editor to consider for the final video. The ability to detect unusual behaviours amidst long stretches of video could be a boon to security firms that monitor and review surveillance camera video, researchers said.
Eric P Xing, professor of machine learning, and Bin Zhao, a PhD student in the Machine Learning Department, will present their work on LiveLight at the Computer Vision and Pattern Recognition Conference in Columbus, Ohio.