Post Mortem
Introduction
Music in Motion was an exploratory, ETC student-pitched project that focused on examining the connection between body movement and sound creation within VR space. We were interested in examining how the Vive might be paired with ambisonic, procedurally generated audio to create new forms of musical control. This semester was spent researching the interactions and virtual environments that would allow us to make engaging and performative experiences centered on sound creation.
One important goal that we set at the beginning of the semester was to create this experience for an installation space. This goal also helped us to determine our target audience. We needed to design for non-musicians and a play-through would ideally last 3-5 minutes. With this in mind, we decided upon a surround-sound speaker system in order to make an observable experience. Our primary deliverable is a short installation piece that pairs VR and ambisonics and allows guests to interact with music through body movement.
Our project was in a bit of a unique position because we started as a pitch project in the fall of 2017. The faculty accepted our pitch but we lost our required ETC second-year student to a co-op opportunity. Because of this, on an official level we became a faculty project. Heather Kelly, our champion and faculty advisor, was our client. As we are closing out this project, we are reviewing the work we have completed as a team and considering our strengths, the improvements we could make, and the lessons we have learned.
What Went Right
We hit the ground running this semester, beginning our planning and research from the very first week. By the beginning of the second week, we had our first prototype, a VR Theremin. Every project member came into the team excited about being involved and ready to participate on many levels. We had a passionate group that was eager to learn, dedicated to the project, and happy to put in extra hours.
Because we put in so much time and kept aggressive schedules, we were able to prototype a surprising amount. Over the course of a fifteen-week semester, including the downtime of GDC and Spring Break, we created twenty different prototypes. Although we began with the intent of building multiple musical worlds with linked interactions, we quickly realized that we needed to spend time exploring this novel area. So "exploratory" became our motto, and we refocused our goals on examining different types of interactions and how they might map to procedurally generated audio. Our plan was to choose our most successful prototypes and build them into one cohesive experience.
Another major component of our ultimate success was the tooling we began creating from the start. We were lucky to be a team with many skilled programmers. We initially had some challenges integrating our sound engine, SuperCollider, with Unity, but by the end of the project we had tooling that allowed designers to quickly connect synth parameters to interactions and game objects. As we neared finals, we were able to iterate many times faster because we had tackled so many of our biggest problems up front. Further, we were able to pull from many of our original prototypes and assets while creating our final design.
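The glue between Unity and SuperCollider is worth sketching. SuperCollider listens for OSC (Open Sound Control) messages over UDP, so a parameter change in the game can be encoded as a small binary packet and sent to the synth. The sketch below is a minimal, stdlib-only Python illustration of that idea, not our actual tooling: the address pattern /synth/amp is a hypothetical example, and 57120 is merely sclang's default port.

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """NUL-terminate and pad to a multiple of 4 bytes, as OSC requires."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32 (big-endian)."""
    addr = osc_pad(address.encode("ascii"))          # address pattern
    tags = osc_pad(("," + "f" * len(floats)).encode("ascii"))  # type tag string
    args = b"".join(struct.pack(">f", f) for f in floats)      # float32 payload
    return addr + tags + args

# Example: push one synth parameter (here, an amplitude of 0.75) to a
# SuperCollider instance listening on localhost. The address pattern is
# hypothetical; in practice it would match an OSCdef on the sclang side.
msg = osc_message("/synth/amp", 0.75)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 57120))
```

In our actual pipeline the sending side was C# in Unity rather than Python, but the wire format is the same, which is what made it possible to drive synth parameters from game objects at interactive rates.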
What We Could Have Done Better
As stated previously, exploration was our motto this semester. While it clearly helped us land on an experience we now feel is fairly successful, it often kept us from working towards a defined goal. At the end of the semester it felt good to have that definition in our work, and we wish we had reached it sooner. In this sense, balancing our exploration with more concrete decisions would have helped. If we had started our final prototype sooner, directly after Halves, we might have had a more stable approach to Softs. We also wish that we had focused on dance and movement as a primary aspect of the experience from the start. For much of the semester those aspects were secondary; when we made them a design constraint, the process became much simpler.
We struggled to decide on prototypes and a direction for our final design because we tended to over-think. Whenever a prototype felt imperfect, or posed too great a technical challenge, we were quick to discard the original idea or aim. Further, we loved having visitors. Because we had no client making demands, we did our best to compensate with guests, outside advisors, students from similar programs, and industry experts. We were lucky to receive a large amount of very helpful feedback. That said, these visits often became moments of confusion: we did not balance trust in ourselves against the external advice and suggestions we received. We needed more conviction in our own sense of direction.
Halves was also a struggle for us, and it changed our attitude for the remainder of the semester. We prepared, but we did not strategize as much as we should have. During our Halves presentation we discussed what we were passionate about, which for the majority of the team was the technology behind the project: the audio engine, the ambisonic setup, and frequency analysis of body movement. However, this did not do us many favors. It did not tell the story of the long nights we had lost to prototyping, the struggle of finding balance in a complex, emergent field, or our moments of surprising success. We needed to tell that story. For our Finals presentation we focused on our narrative, and it seemed a much stronger choice.
Lessons Learned and Conclusion
A major lesson learned was about the need for balance. There were several key goals for the experience that made our design process feel like an ongoing battle. First, we did not want this to feel like a tool, or a one-to-one mapping of an existing instrument into VR. Because of this, quite a few doors were closed to us in the ways we connected sounds to interactions. At first we shied away from any mappings that felt too literal or direct; eventually we realized some of these worked in our favor and needed to be embraced. Tying the synth's pitch to an object's height was an obvious and very useful mapping that we now use.
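As a concrete illustration of how simple a useful mapping can be, here is a hedged sketch of a height-to-pitch function. The ranges are hypothetical placeholders, not our actual tuning; the point is that mapping height linearly in pitch (semitones) rather than raw frequency makes equal height steps sound like equal musical intervals.

```python
def height_to_freq(height_m, h_min=0.0, h_max=2.5, midi_lo=48, midi_hi=84):
    """Map an object's height (metres) within the play volume to a
    frequency in Hz. Height maps linearly onto MIDI pitch, then the
    pitch is converted to frequency, so equal height steps give equal
    musical intervals (this placeholder range spans C3 to C6)."""
    # Clamp the height into the play volume and normalise to 0..1
    t = min(max((height_m - h_min) / (h_max - h_min), 0.0), 1.0)
    midi = midi_lo + t * (midi_hi - midi_lo)
    # Equal-temperament conversion: A4 = 440 Hz at MIDI note 69
    return 440.0 * 2.0 ** ((midi - 69) / 12)
```

In a running prototype, a value like this would be pushed to the synth as a parameter update every frame the object moves, so the guest hears the pitch glide continuously with their gesture.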
On this front, we did not consider what level of interaction our intended audience could realistically achieve. As much as we wanted naive guests involved in acts of music creation, we were asking them to go about it in a way that was not appropriate for their skill level. The tasks we needed them to complete were detailed, careful, and demanded attention. When we shifted the guest's tasks to center on something innate, like movement, we were choosing a task that suited their headspace. Further, we allowed them to remain immersed in the experience by giving them the correct level of mental and physical challenge.
We also wanted the experience to feel very open and free. We hoped the guest would have room to experiment and combine aspects of the world as they pleased. This sandbox environment was daunting to design around. Our first prototypes in the space never felt strongly musical and often left guests feeling they were there to play with interesting objects and wacky physics. Making the experience almost completely guided for our final deliverable allowed us to focus on the cohesive sound of the world. It also gave us the chance to add an emotional arc, and to encourage guests to be part of the aesthetic through their movements. All of these aspects immersed guests more deeply in a world that still retained many components of music creation.
On a more practical level, we may have prototyped a little too rapidly. Twenty prototypes was a lot of ground to cover. It became apparent towards the end of the project how important it was to refine the mapping between a synth and its corresponding interaction. Getting a mapping into a good place could take over a week with all team members involved, but that time commitment was important: only at that level of refinement did it become clear exactly how successful a prototype could be. If we were to continue this project, we would bring our prototypes to a greater level of depth and not worry about covering quite so much ground.
We are currently putting the final touches on this experience, and our hope is that it will have a life beyond this semester. We have built a 12-speaker ambisonic installation, supplemented with DMX lights controlled by the guest's interaction with the virtual world. We hope that this can be a lasting piece within the ETC and a good example of how ambisonics can be combined with VR for an impressive and expressive experience. We are also considering submitting the project to a conference; we feel the New Interfaces for Musical Expression conference would be a strong fit.
Team Contact: Should you need to reach any of the team members, they can be reached through Rachel Rodgers at rachel.rodgers42@gmail.com.