Write-A-Video: Computational Video Montage from Themed Text

SIGGRAPH Asia 2019


Miao Wang, State Key Laboratory of Virtual Reality Technology and Systems, Beihang University; Tsinghua University
Guo-Wei Yang, BNRist, Tsinghua University
Shi-Min Hu, BNRist, Tsinghua University
Shing-Tung Yau, Harvard University
Ariel Shamir, IDC Herzliya



Abstract

We present Write-A-Video, a tool for creating video montages using mostly text editing. Given an input themed text and a related video repository, either from online websites or personal albums, the tool allows novice users to generate a video montage much more easily than with current video editing tools. The resulting video illustrates the given narrative, provides diverse visual content, and follows cinematographic guidelines. The process involves three simple steps: (1) the user provides input, mostly in the form of editing the text; (2) the tool automatically searches for semantically matching candidate shots from the video repository; and (3) an optimization method assembles the video montage. Visual-semantic matching between segmented text and shots is performed by cascaded keyword matching and visual-semantic embedding, which achieves higher accuracy than alternative solutions. The video assembly is formulated as a hybrid optimization problem over a graph of shots, considering temporal constraints, cinematography metrics such as camera movement and tone, and user-specified cinematography idioms. Using our system, users without video editing experience are able to generate appealing videos.
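The shot-assembly step described above, choosing one candidate shot per text segment while balancing per-shot matching quality against transition quality, can be sketched as a shortest path over a graph of candidate shots. The sketch below uses a Viterbi-style dynamic program; the cost functions (`match_cost`, `transition_cost`) are illustrative stand-ins for the paper's semantic-matching and cinematography terms, not the authors' actual formulation.

```python
def assemble_montage(match_cost, transition_cost):
    """Pick one candidate shot per text segment, minimizing total cost.

    match_cost[i][j] -- cost of using candidate shot j for segment i
                        (stand-in for the visual-semantic matching score).
    transition_cost((i, k), (i2, j)) -- cost of cutting from shot k of
                        segment i to shot j of segment i2 (stand-in for
                        cinematography terms such as tone and camera motion).
    Returns (shot_indices, total_cost).
    """
    n = len(match_cost)
    # best[j]: minimum cost of any path ending at candidate j of the
    # current segment; back[i][j]: which previous candidate achieved it.
    best = list(match_cost[0])
    back = []
    for i in range(1, n):
        prev_best, best, choices = best, [], []
        for j, mc in enumerate(match_cost[i]):
            costs = [prev_best[k] + transition_cost((i - 1, k), (i, j))
                     for k in range(len(prev_best))]
            k = min(range(len(costs)), key=costs.__getitem__)
            best.append(costs[k] + mc)
            choices.append(k)
        back.append(choices)
    # Backtrack the optimal sequence of shot indices.
    j = min(range(len(best)), key=best.__getitem__)
    total = best[j]
    path = [j]
    for choices in reversed(back):
        j = choices[j]
        path.append(j)
    return list(reversed(path)), total
```

With zero transition cost this reduces to greedily picking the best-matching shot per segment; nonzero transition costs let the optimization trade a slightly worse match for a smoother cut, which is the point of formulating assembly over the whole graph rather than per segment.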


Paper

Miao Wang, Guo-Wei Yang, Shi-Min Hu, Shing-Tung Yau and Ariel Shamir. Write-A-Video: Computational Video Montage from Themed Text. ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 38 (6), Article No. 177, 2019. PDF | BibTeX


Downloads

Supplementary Video | Interaction Video | Press Release | More Results Coming Soon