# Human and AI Generated Multimedia

The ACM Multimedia 2020
Interactive Arts Exhibition

 

Multimedia research integrates the multiple perspectives of digital modalities, including images, text, music, video, sensor data, and spoken audio. In a long-standing tradition, the Interactive Arts Exhibition of ACM Multimedia presents work addressing the artistic side of these perspectives. In 2020, the exhibition focuses on “Human and AI Generated Multimedia”. Mirroring recent developments in generative approaches across all modalities of multimedia, we present artworks showcasing interactions of humans and machines with both human-created and automatically generated digital content, highlighting and reflecting upon creativity, discovery, and critical thinking in every aspect involved. The artworks deal with fusions, intersections, transformations, and paradigm shifts of human- and AI-generated technologies; they provoke contemplation, address contemporary issues, interactively engage viewers in discovery, and stimulate intellectual adventure and creativity.

 

Curators + Interactive Arts Chairs

Peter Knees (TU Wien)
Zhe Gan (Microsoft)

 

Program Committee

Boyi Li (Cornell University)
Dan Buzzo (UWE Bristol)
Jie Lei (UNC Chapel Hill)
Kristina Andersen (TU Eindhoven)
Linjie Li (Microsoft)
Pengchuan Zhang (Microsoft Research AI)
Ruotian Luo (TTIC)
Yu Cheng (Microsoft)

 

Artworks

Portraits of No One: An Internet Artwork
MaLiang: An Emotion-driven Chinese Calligraphy Artwork Composition System
First Impression: AI Understands Personality
Draw portraits by music: A music based image style transformation
Little World: Virtual Humans Accompany Children on Dramatic Performance
Keep Running - AI Paintings of Horse Figure and Portrait
AI Mirror: Visualize AI’s Self-knowledge

 

Tiago Martins, João Correia, Sérgio Rebelo, João Bicker, and Penousal Machado

Portraits of No One: An Internet Artwork


Portraits of No One is an internet artwork that generates and displays artificial photo-realistic portraits of human faces. The artwork takes the form of a web page that synthesises new portraits by automatically recombining the facial features of the users who have interacted with it. The generated portraits evoke the capability of Artificial Intelligence to produce visual content, prompting viewers to question the veracity of what they are seeing. [Online Installation]

 

Ruixue Liu, Shaozu Yuan, Meng Chen, Baoyang Chen, Zhijie Qiu, and Xiaodong He

MaLiang: An Emotion-driven Chinese Calligraphy Artwork Composition System


MaLiang generates aesthetic, stylistic, and diverse calligraphy images based on the emotional state detected in the input text. Unlike previous research, it is the first work to endow calligraphy synthesis with the ability to express changing emotions and to compose a whole piece of discourse-level calligraphy artwork rather than single character images. The system consists of three modules: emotion detection, character image generation, and layout prediction.

 

Xiaohui Wang, Xia Liang, Miao Lu, and Jingyan Qin

First Impression: AI Understands Personality


When you first encounter a person, a mental image of that person is formed. First Impression, an interactive artwork, lets AI understand human personality at first glance. The mental image is rendered as a Beijing opera facial makeup, which expresses character personality through a combination of realism and symbolism. We build a Beijing opera facial makeup dataset and a semantic dataset of facial features to establish relationships among real faces, personalities, and facial makeups. First Impression detects faces, recognizes personality from facial appearance, and finds the matching Beijing opera facial makeup. Finally, the morphing process from real face to facial makeup is shown, letting users enjoy the process of AI understanding personality.

 

Siyu Jin, Jingyan Qin, and Wenfa Li

Draw portraits by music: A music based image style transformation


“Draw Portraits by Music” is an interactive work of art. In contrast to music visualization and image style transfer, it is AI’s imitation of human synaesthesia. New portraits gradually appear on the screen, synchronized with music in real time. Users select music and images as the main interactive content; the parameters of the music serve as a dynamic expression of human emotions, and the generation of new pixels in the image is treated as the result of those emotions affecting humans.

 

Xiaohui Wang, Xiaoxue Ding, Jinke Li, and Jingyan Qin

Little World: Virtual Humans Accompany Children on Dramatic Performance


Every child is the leading actor in her/his unique world. To help children perform, an interactive artwork called “Little World” lets virtual humans accompany them in dramatic performance. A theatrical adaptation rewrites a novel into an interactive drama suitable for children. Little World builds drama scenes and virtual humans for the characters and lets children interact with them through speech and actions.

 

James She, Carmen Ng, and Wadia Sheng

Keep Running - AI Paintings of Horse Figure and Portrait


“Keep Running” is a collection of human- and machine-generated paintings created with generative adversarial networks. The horse artworks were produced during the Covid-19 lockdown in the Middle East. Beyond the cultural and historic symbolism that horses carry in this region, what is unique about this work is that it demonstrates the possibility of using AI to create paintings with distinguishable features and forms while still rendering different aesthetic and even sentimental expressions. The artworks are not only artistic and meaningful; they also pay tribute to the early machine-assisted art of Eadweard Muybridge and Andy Warhol and their influence on the art world today.

 

Siyu Hu, Bo Shui, Siyu Jin, and Xiaohui Wang

AI Mirror: Visualize AI’s Self-knowledge


“AI Mirror”, an interactive artwork, aims to visualize the self-knowledge mechanism from the AI’s perspective and to arouse people’s reflection on artificial intelligence. In the first stage, unconscious imitation, the visual neurons perceive environmental information and mirror neurons imitate human behavior. Then, language and consciousness emerge from this long-term imitation, expressed as poems and as coordinates in an affective space. In the final stage, conscious behavior, an affinity analysis is generated, and the mirror neurons either behave more harmoniously with the user or move autonomously on their own, evoking the user’s reflection on their undiscovered traits.

 

Contact: mm20-iac@sigmm.org