two-channel animation, subwoofers, clear gelatin capsules, antibiotics, anti-estrogen hormone, air ducts, generative algorithmic composition in collaboration with Jason Doell
Garrulous Guts sits at the centre of the space, pumping low-frequency sounds that activate the animated digestive system mapped with malleable texts. The metaphor of cannibalism in Brazilian literature and translation theory offers ways to rethink cultural assimilation in colonial and post-colonial conditions. Borrowing the same idea to think about technological incorporation, I blend industrial waste with active chemicals and propose “vomit as a method” to redefine and reclaim human agency in hyper-control societies. For the sound design of this installation, I trained WaveNet on a dataset of English conversations that I discovered in ESL textbooks. Because of inadequate training time, the results are glitched, stuttered, and sometimes unrecognizable English speech sounds. I then fed all the samples into an aleatory algorithm made by my collaborator to create a never-ending generative composition.
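The aleatory layer described here can be pictured as a chance-based sequencer: at each step it draws one of the WaveNet samples at random, with a random silence between clips, so the composition never repeats and never ends. A minimal sketch in Python, with hypothetical file names standing in for the actual samples (the algorithm Jason Doell built for the piece is not published, so this is an illustration of the general technique, not the work's code):

```python
import random

def aleatory_sequence(samples, steps, seed=None):
    """Chance-based sequencer in the spirit of aleatory composition:
    each step picks a sample at random and a random gap of silence,
    so every rendering of the score is different."""
    rng = random.Random(seed)
    score = []
    for _ in range(steps):
        clip = rng.choice(samples)            # which sample plays next
        gap = round(rng.uniform(0.1, 2.0), 2) # seconds of silence before it
        score.append((clip, gap))
    return score

# Hypothetical WaveNet output files standing in for the real dataset
clips = ["glitch_01.wav", "stutter_02.wav", "murmur_03.wav", "hiss_04.wav"]
score = aleatory_sequence(clips, steps=8, seed=42)
```

Run in an endless loop and rendered through subwoofers, a generator like this yields a "never-ending" score: the material is fixed, but its ordering and pacing are left to chance.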
“Through intentional collaboration with machines, Ye recoups agency. A low-frequency crackling permeated the small room of “The Oral Logic,” part of a generative music score like ASMR but in a register of minor terror. The score popped and rumbled from two subwoofers, one upturned to hold empty and half-filled clear pill capsules in its speaker. Corrugated air ducts twisted around this installation — collectively titled Garrulous Guts (2019) — as physical stand-ins for its eponymous organs. An accompanying projection animated intestines in cyber blue and dusky pink, mapped with texts from translation theory, which wormed their outsides and innards through the viewer’s gaze.”
“In the centre of the space, the two-channel sound and animation installation, Garrulous Guts (2019), wraps around an entanglement of air ducts. The work’s audio was created from a collection of sound footage gathered through online searches for “people vomiting,” along with speech clips generated by WaveNet, a feedforward neural network for advanced text-to-speech synthesis. While the model was trained on English speech, that was not what was heard emanating from the installation. Ye created new, quasi-English speech, rendered as sounds that didn’t make sense to English-speaking viewers or even register as a recognized human language, potentially creating a new human-machine hybrid language. Despite its mechanical and cryptic tone, the soundtrack held some familiarity in structure and pace for my auditory sense to digest as dialect.”
— Emily Fitzpatrick, “Review: Xuan Ye – The Oral Logic” (Peripheral Review, 2020)