Drones are revolutionising how professionals and amateurs generate video content for films, live events, AR/VR and more. Aerial cameras offer dynamic viewpoints that traditional devices cannot match. However, despite significant advancements in autonomous flight technology, creating expressive camera behaviours remains a challenge and requires non-technical users to edit a large number of unintuitive control parameters.

Recently, researchers from Facebook AI, Carnegie Mellon University and the University of Sao Paulo developed a data-driven framework that lets users edit complex camera positioning parameters through a semantic space.

In the research paper ‘Batteries, camera, action! Learning a semantic control space for expressive robot cinematography’, co-authors Jessica Hodgins, Mustafa Mukadam, Sebastian Scherer, Rogerio Bonatti and Arthur Bucker explain how the framework was built and evaluated.

Semantic space control framework

Semantic space control framework for drone cinematography (Source: arXiv.org)

For this, the researchers generated a database of clips covering a diverse range of shots in a photo-realistic simulator, and used hundreds of participants in a crowdsourcing framework to obtain scores and rankings for a set of ‘semantic descriptors’ for each clip, which are later fed to machine learning models. The term ‘semantic descriptor’ is commonly used in computer vision and refers to a word or phrase that describes a given object.

Once the video scores are ready, the clips are analysed for correlations between descriptors, and a semantic control space is built based on cinematography guidelines and human perception studies. This space is then linked to a ‘generative model’ that can map a set of desired semantic video descriptors into low-level camera trajectory parameters.
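To illustrate the correlation-analysis step, here is a minimal sketch, assuming a clip-by-descriptor score table: it computes the pairwise correlation between descriptor scores across clips. The descriptor names match the article's presets, but the scores and array shapes are placeholder values, not the study's data.

```python
import numpy as np

# Hypothetical clip-by-descriptor score matrix: one row per clip,
# one column per semantic descriptor (values stand in for crowd-sourced scores).
descriptors = ["exciting", "calm", "enjoyable", "revealing", "establishing", "nervous"]
rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 1.0, size=(200, len(descriptors)))  # 200 clips (placeholder)

# Pearson correlation between descriptors; np.corrcoef expects
# variables in rows, so transpose the clip-by-descriptor matrix.
corr = np.corrcoef(scores.T)

# Print every descriptor pairing and its correlation coefficient.
for i in range(len(descriptors)):
    for j in range(i + 1, len(descriptors)):
        print(f"{descriptors[i]:>12} vs {descriptors[j]:<12} r = {corr[i, j]:+.2f}")
```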

Finally, the system is evaluated: shots generated from the semantic space are rated by participants against the expected degree of expression for each descriptor.

How does it work?

“We do this in three main steps: First, we build a diverse feeder data set that is then evaluated by hundreds of users in a crowdsourcing platform. Then, we extract features from this data using machine learning models to learn a semantic control space. We produce all of our data in a photo-realistic simulator where we have an underlying motion planner that can avoid objects like trees, houses and wires, alongside producing our shots using six presets that are commonly used in drone cinematography,” explained Bonatti. The six presets include exciting, calm, enjoyable, revealing, establishing and nervous. 

On top of these presets, the researchers produced several variations of the camera positioning parameters, generating over 200 videos. The camera positioning parameters varied in this experiment include the tilt angle and the angular speed.
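As a rough picture of how such a clip set could be enumerated, the sketch below crosses the six presets with a grid of tilt-angle and angular-speed variations. Only the preset names and the two varied parameters come from the article; the specific values are illustrative assumptions.

```python
from itertools import product

# Shot-type presets named in the article.
presets = ["exciting", "calm", "enjoyable", "revealing", "establishing", "nervous"]

# Illustrative parameter variations (degrees and degrees/second are assumptions).
tilt_angles_deg = [-30, -15, 0, 15, 30]
angular_speeds_deg_s = [5, 10, 20, 40]

# Cross presets with parameter variations to get one configuration per clip.
configs = [
    {"preset": p, "tilt_deg": tilt, "angular_speed_deg_s": speed}
    for p, tilt, speed in product(presets, tilt_angles_deg, angular_speeds_deg_s)
]

print(len(configs), "clip configurations")  # 6 x 5 x 4 = 120 in this sketch
print(configs[0])
```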

Next, using a crowdsourcing framework, they built a ranking of emotions for each video. Users watch two videos side by side and answer which one is more exciting, interesting, enjoyable, and so on, as shown below. One simple way to turn these pairwise judgements into per-clip scores is sketched after the figure.

(Source: Robotcam)
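The article does not detail the exact ranking procedure, but a standard way to convert pairwise "which clip is more exciting?" answers into per-clip scores is a Bradley-Terry model fitted with the classic minorisation-maximisation updates. The comparison data below is made up for illustration and is not the study's data.

```python
from collections import defaultdict

# Hypothetical pairwise outcomes for one descriptor (e.g. "exciting"):
# each tuple is (winner_clip_id, loser_clip_id) for one crowd-worker answer.
comparisons = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "B"), ("A", "B")]

clips = sorted({c for pair in comparisons for c in pair})
wins = defaultdict(int)      # total wins per clip
n_games = defaultdict(int)   # number of comparisons per unordered pair
for winner, loser in comparisons:
    wins[winner] += 1
    n_games[frozenset((winner, loser))] += 1

# Bradley-Terry strengths via the MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j)
p = {c: 1.0 for c in clips}
for _ in range(100):
    new_p = {}
    for i in clips:
        denom = sum(
            n / (p[i] + p[j])
            for pair, n in n_games.items()
            if i in pair
            for j in pair - {i}
        )
        new_p[i] = wins[i] / denom if denom > 0 else p[i]
    total = sum(new_p.values())
    p = {c: v / total for c, v in new_p.items()}  # normalise each iteration

print({c: round(v, 3) for c, v in p.items()})  # higher = judged "more exciting"
```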

Based on these emotion rankings, a 3D emotional feature space grounded in the psychology literature is created, with three main axes: arousal, valence and dominance. “By using this data, we are able to learn a generative model that finds the most suitable shot parameters for given emotional expression,” said Bonatti. 
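The article does not spell out the generative model itself, so the following is only a minimal stand-in for the interface it describes: given clips annotated with (arousal, valence, dominance) coordinates and their shot parameters, it estimates parameters for a desired emotion point with a distance-weighted nearest-neighbour average. All values, array shapes and the two-parameter output are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: one row per rated clip.
vad = rng.uniform(-1.0, 1.0, size=(200, 3))    # (arousal, valence, dominance)
params = rng.uniform(0.0, 1.0, size=(200, 2))  # (tilt angle, angular speed), normalised

def shot_params_for_emotion(target_vad, k=5):
    """Distance-weighted average of the k nearest clips' shot parameters."""
    d = np.linalg.norm(vad - np.asarray(target_vad), axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                  # closer clips weigh more
    return (params[idx] * w[:, None]).sum(axis=0) / w.sum()

# Example query: high arousal, positive valence, neutral dominance ("exciting").
print(shot_params_for_emotion([0.9, 0.7, 0.0]))
```

The paper's learned model replaces this simple lookup, but the interface is the same idea: an emotional target goes in, low-level camera parameters come out.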

Various emotional expressions generated using the generative model in multiple environments, across simulation and real-world experiments (Source: Robotcam)

Outcome 

The researchers evaluated the model in a series of simulation and real-world experiments to ensure it does not overfit to features encountered in the training set. Shots generated from the semantic space are rated by participants as having the expected degree of expression for each attribute, and the model generalises to different actors, activities and background compositions. 

(Source: Robotcam)

“We also collected additional shots at the maximum expression of the enjoyable emotion. And we have shots where the actor is doing parkour dance movements and playing soccer in this work,” said Bonatti. 

Further, he said their framework targets non-technical users and can generate shot parameters directly from a semantic vector. However, expert users can quickly adapt it to gain more control over the model’s outcome. 

“Currently, our algorithm generates a single shot at a time. However, there is great potential in developing algorithms that can reason over longer durations and infer emotional expression over sequences of shots,” said one of the researchers. 
