Week 3
At the end of the last blog, I promised that the third installment of our VR chair experience development blog would cover three things: prototype progress, how those prototypes fared in playtesting, and the direction that feedback pushes us next. I am a man of my word, so here are those very things.
Prototypes
Our whole team channeled our inner first-year selves (strange to think that was us only a year ago) to prototype interactions for our chairs as fast as we could. Over the first two and a half weeks, we came up with 12 different prototypes. Each one was made with a specific intention in mind, testing everything from chair functionality to guest preference to safety. On Wednesday, we chose and polished 7 prototypes to put in front of our faculty and colleagues for initial thoughts and feedback. We had our own favorite experiences and hypotheses about what other people would like, but we had to put our work where our mouths were.
Playtesting
On Thursday, over twenty playtesters with varying levels of VR experience and different reasons for visiting came through our rooms to try our demos and talk to us about them. Guests cycled through the demos, offered feedback as they went, filled out a survey about their favorite and least favorite parts, and finally went through a more thorough debriefing with Jue.
At one point, one of our faculty members asked whether we really needed to playtest with other people, or whether internal testing alone would have accomplished the same goal of learning what was fun in the chair. We were at a loss for words; we had always assumed playtesting was the answer to any question during development, so we had never stopped to question it… and then we went through our results.
Our results affirmed that playtesting was a very worthwhile exercise, because they showed that our initial assumption that one experience would emerge as the consensus favorite was very wrong. Responses to questions about favorite and least favorite experiences were split evenly across our playtesters. Whether the question was about the level of guest input/control or about chair-specific movements, there was no consensus anywhere in the group.
… Is that bad?
No!
Our Next Direction
Our playtesting results may not have given us the “mechanic” direction we thought they would, but they did point us in a different direction: target audiences. We had a fairly diverse crew of playtesters, and we did start to see some consensus within certain demographics of testers, which made us realize that we needed to pick a specific target audience before moving forward. Playtesting taught us that we cannot please everybody, so we need to pick someone to please.
We spent all day Friday breaking down the different potential target audiences for our experience (a reminder: a virtual reality theme park experience) and figuring out what makes each of those groups unique and happy. We chose three potential demographics to design for, detailed the strengths and weaknesses of designing for each group, and sent proposals to our clients for further discussion about what would work best for them. With their help determining which niche to fill for their guests, we will be able to start making more involved and specific prototypes, as well as developing stories that fit those groups.
With all of that in mind, next week brings “Quarters” at the ETC, when our faculty come around and check in on our progress at the quarter-mark of the semester. We will be discussing our installation, research, prototyping, and playtesting process, in the hope of getting more advice on how to attack our next round of prototypes once we figure out exactly who we are building our experience for.