Voices from AI in Experimental Improvisation is a project by Tomomi Adachi, Andreas Dzialocha and Marcello Lussana.


Voices from AI in Experimental Improvisation is an attempt to improvise and interact with computer software that “learns” the performer’s voice and musical behaviour. The program, named “tomomibot”, is based on Artificial Intelligence (AI) algorithms and enables a voice performer, Tomomi Adachi (human), to perform with his AI self, which learns over time from his past performances.
The project is not only a musical experiment with a non-human performer but also an undertaking to make computer culture “audible”. The performance raises questions about the logic and politics of computers in relation to human culture.
What we hear is the result of human software design and computational logic, carving out the limited space of these machines as we listen to, interact with and learn from them.
Tomomi Adachi is a sound artist known for his intense, fragmented and sound-based improvisation style, which makes “tomomibot” more of a sound and noise machine than a “singing” program. This enables the program to learn from any sound source and type: what is the musical dramaturgy of orchestral music, or of war videos from YouTube? Through machine learning one can try to find patterns in these sound documents and improvise musically with them, from the perspective of “tomomibot”.
“tomomibot” is software based on LSTM (Long Short-Term Memory) algorithms, a form of recurrent neural network, which decides which sound to play next based on the live sounds it has heard before. The software was designed and developed by Andreas Dzialocha. Experimenting with AI sound synthesis algorithms (WaveNet, WaveRNN, FFTNet), the developer Marcello Lussana generated a large database of sounds that sound like Tomomi. These sounds serve as the sound vocabulary with which “tomomibot” improvises with the human Tomomi.
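The decision step described above can be sketched in miniature: an LSTM runs over a sequence of feature vectors describing the sounds heard so far and emits a probability distribution over the sound vocabulary, from which the next sound is chosen. This is only an illustrative sketch with random (untrained) weights, not the actual tomomibot implementation; the function names and feature representation are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: x is the current input feature vector,
    (h, c) are the hidden and cell states; W, U, b stack the four gates."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:4 * H])  # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def predict_next_sound(features, vocab_size, hidden=16, seed=0):
    """Run an LSTM over a sequence of per-sound feature vectors and
    return a probability distribution over a sound vocabulary.
    Weights are random here; in practice they would be trained on
    recordings of past performances."""
    rng = np.random.default_rng(seed)
    dim = features.shape[1]
    W = rng.normal(0, 0.1, (4 * hidden, dim))
    U = rng.normal(0, 0.1, (4 * hidden, hidden))
    b = np.zeros(4 * hidden)
    V = rng.normal(0, 0.1, (vocab_size, hidden))  # output projection
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in features:                 # feed the heard sounds in order
        h, c = lstm_step(x, h, c, W, U, b)
    logits = V @ h
    p = np.exp(logits - logits.max())  # softmax over the vocabulary
    return p / p.sum()
```

In a live setting, a loop of this kind would repeatedly convert incoming audio into features, update the LSTM state, and sample the next sound from the returned distribution.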

In this project, the AI technology is adapted solely for a personal artistic practice. Since Adachi’s intense improvisation style already extends the borderline of music, how can we evaluate the musicality of “tomomibot”? In the performance, “tomomibot” behaves like Adachi, but he might also imitate the behaviour of “tomomibot” at the same time. Is it a mutual collaboration between human and machine?

Videos: YouTube playlist

Audio: SoundCloud

Code: GitHub


Artistic direction, performance: Tomomi Adachi
AI programming, concept: Andreas Dzialocha
Programming, concept: Marcello Lussana


Funding: Initiative Neue Musik Berlin e. V. (2018) and Musikfonds e. V. (2019)

The project received the Award of Distinction from Ars Electronica 2019.

The printed program code is part of the collection of FONDAZIONE BONOTTO.