The methodology used in this project is best described as practice-based research. At a global level, the methodology was to develop prototypes and short musical pieces through rapid iteration cycles. Each cycle allowed us to experiment with specific interaction principles and sonic results; technological adjustments to the prototypes and musical ideas were then tested conjointly. The series of public events allowed us to assess the different options and to critically review the system. In addition to the performances described in the previous section, other workshops allowed us to experiment with the concepts and technology. In particular, several workshops with music students were held at the Conservatoire of Saint-Brice sous Forêt.
The main objective of the project is to further explore how to control sound synthesis and generate musical content from users’ gestures, and more specifically how such an approach can be implemented in collective and participatory settings. This research is based on CoMo-Elements, a template application designed for non-developer users performing gesture-based, distributed User-Centered Machine Learning. As such, the application is specifically designed to provide an environment where users without expert programming knowledge can create their own instance of the application (e.g. the behavior and mappings of the different clients) by simply editing a configuration file.
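To illustrate the kind of declarative setup this enables, a configuration file for a distributed session might resemble the following sketch. All keys, values, and names here are hypothetical and are not taken from the actual CoMo-Elements configuration format; they only convey the idea that each client's behavior and gesture-to-sound mapping can be declared without programming:

```json
{
  "session": "workshop-demo",
  "clients": [
    {
      "id": "performer-1",
      "gestureModel": "hhmm",
      "mapping": {
        "input": "accelerometer",
        "output": "granular-synth"
      }
    },
    {
      "id": "audience-group",
      "gestureModel": "gmm",
      "mapping": {
        "input": "gyroscope",
        "output": "sample-player"
      }
    }
  ]
}
```

In such a scheme, adapting the piece for a new workshop or performance would amount to editing entries of this kind rather than modifying application code, which is what makes the approach accessible to non-developer users.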