ULTRACHUNK (2018) is a collaboration between performer/composer Jennifer Walshe and artist/researcher Memo Akten. It is a live improvisational performance with an AI that has learned key components of Walshe’s identity – both her voice and her face. For one year, no matter where she was in the world, Walshe engaged in a daily ritual of performing solo improvisations in front of her webcam, collecting hours of video and audio material, which Akten used to create and train a number of neural networks — including GRANNMA (Granular Neural Music and Audio). During the performance, the video and audio output from the machine is neither pre-recorded nor processed — every frame and sound is generated live, constructed from fragments of memories in the depths of the neural networks. The original and virtual Walshe inhabit the Uncanny Valley together, singing in duet, improvising, listening and responding to each other.
In the live performance, GRANNMA navigates the hypersphere of its latent space, generating approximately 20 frames of video and 44,100 16-bit audio samples per second in real time.
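To make the idea of "navigating the hypersphere" concrete, below is a minimal, illustrative sketch in Python (using PyTorch and NumPy). It is not GRANNMA itself: the latent dimensionality, the toy decoder, and the spherical-interpolation traversal are assumptions chosen only to show the general mechanism of walking through a latent space and decoding each point into a chunk of audio and a video frame at the rates described above.

```python
# Illustrative sketch only -- not the actual GRANNMA architecture or code.
import numpy as np
import torch
import torch.nn as nn

LATENT_DIM = 128             # assumed latent size
SAMPLE_RATE = 44_100         # 44,100 audio samples per second, as in the performance
FPS = 20                     # ca. 20 video frames per second
CHUNK = SAMPLE_RATE // FPS   # audio samples generated alongside each video frame


class ToyDecoder(nn.Module):
    """Stand-in generator: maps one latent vector to one audio chunk and one small frame."""

    def __init__(self, latent_dim: int = LATENT_DIM):
        super().__init__()
        self.audio_head = nn.Sequential(nn.Linear(latent_dim, 512), nn.Tanh(),
                                        nn.Linear(512, CHUNK), nn.Tanh())
        self.video_head = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                        nn.Linear(512, 64 * 64), nn.Sigmoid())

    def forward(self, z: torch.Tensor):
        audio = self.audio_head(z)                   # (batch, CHUNK) samples in [-1, 1]
        frame = self.video_head(z).view(-1, 64, 64)  # (batch, 64, 64) grayscale frame
        return audio, frame


def random_unit(dim: int) -> np.ndarray:
    """Sample a random point on the unit hypersphere."""
    v = np.random.randn(dim)
    return v / np.linalg.norm(v)


def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical interpolation between two points on the unit hypersphere."""
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if omega < 1e-6:
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)


decoder = ToyDecoder().eval()

# Walk between two random waypoints on the hypersphere, decoding as we go.
start, target = random_unit(LATENT_DIM), random_unit(LATENT_DIM)
with torch.no_grad():
    for step in range(FPS):  # one simulated second of output
        z = slerp(start, target, step / FPS)
        audio_chunk, frame = decoder(torch.from_numpy(z).float().unsqueeze(0))
        # In a live setting these would be streamed to the audio device and screen;
        # here we simply report their shapes.
        print(f"frame {step:02d}: audio {tuple(audio_chunk.shape)}, video {tuple(frame.shape)}")
```

In this sketch the "memories" live in the decoder's weights: no recordings are played back, and each audio chunk and frame is synthesised from a latent coordinate at the moment it is needed, which is the behaviour the text above describes.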