In preparing for Softs, we divided our work across the demos we planned to show: 1) Main Story Demo (open-ended questions), 2) Alien Language Demo (pattern recognition), and 3) Messaging System Demo (asynchronous messaging across different Echo devices).
1) Story Demo (Open-Ended Questions)
After the voice recordings in week 12, Seth spent the majority of his time cutting the clips and creating a track of the entire experience. His goal was to find the story's flow and integrate appropriate SFX and music to accompany it. The main challenge was using music and SFX to elevate the emotions of the story, which is particularly difficult in a five-minute experience. In addition, Seth needed to account for the breaks in sound (every time a user interacted) when composing the music for the experience. He settled on an ebb-and-flow style that makes the breaks for interaction less jarring for the user.
For the story, Sarabeth worked with our voice actor to expand the number of possible answers in the “open-ended” section, and tested throughout with the query system in the editor tool. The query system, in contrast with the intent system, lets us parse whole sentences and assign a probability to each possible answer. Training it consists of entering keywords, sentences, and other inputs to help it make more accurate assessments. In addition to expanding the depth of the script for the open-ended section, Phan and Roy added transcript retrieval and a keyword function. We wanted these features for several reasons. First, we found through playtesting that the first interaction sets the stage for the rest of the experience. For example, having a yes-no question accept “sure,” “definitely,” or “OK” instead of only “yes” opens up the possibility of nuanced responses. We wanted to set the tone that Alexa was really listening to the user, so we decided to try having her repeat the user's answer back to them and classify it into a category from there. The second choice was capturing the user's name, which lets us immerse the user a little more later in the story by having Alexa address them by name.
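To make the classification idea concrete, here is a minimal sketch of keyword-based scoring. It only illustrates the concept: the categories, keyword lists, and scoring below are our own assumptions for this example, not the internals of the editor tool's query system.

```python
# Hypothetical sketch of keyword-based answer classification, in the spirit of
# the query system described above. The categories, keywords, and scoring
# method are illustrative assumptions, not the actual tool's internals.

AFFIRMATIVE = {"yes", "sure", "definitely", "ok", "okay", "yeah", "yep"}
NEGATIVE = {"no", "nope", "nah", "never"}

def classify_answer(transcript: str) -> dict:
    """Assign a rough probability to each answer category based on keyword hits."""
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    scores = {
        "affirmative": sum(w in AFFIRMATIVE for w in words),
        "negative": sum(w in NEGATIVE for w in words),
    }
    total = sum(scores.values())
    if total == 0:
        # No keyword matched: fall back to an "open-ended" bucket so the story
        # can echo the user's words back to them instead of guessing.
        return {"open_ended": 1.0}
    return {category: hits / total for category, hits in scores.items()}

print(classify_answer("sure, why not"))    # {'affirmative': 1.0, 'negative': 0.0}
print(classify_answer("tell me a story"))  # {'open_ended': 1.0}
```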
2) Ha Koo (Patterns) Demo
Phan and Seth worked closely to develop the sound and story supporting the pattern structure. They decided to make this interaction a narrative demo for a couple of reasons. First, it provides a contrast with the main story demo, giving our guests a light-hearted option. Second, it demonstrates in very basic terms the recognition of syllables (contrasted with the word-based query recognition in the interactions above). The demo's structure gradually ramps up the user's understanding of the “ha koo” language, ending with the user navigating a conversation entirely through the phrases “ha koo” and “koo koo.”
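As a rough illustration of syllable-level recognition (as opposed to the word-based query recognition above), here is a hypothetical sketch: strip the transcript down to its “ha”/“koo” syllables and match the resulting pattern against a small phrase table. The phrase table and meanings are placeholders, not the demo's actual vocabulary.

```python
# Hypothetical sketch of syllable-level recognition for the Ha Koo demo:
# keep only the syllables we care about, then look the pattern up in a table.
# The phrases and their meanings here are placeholders.

KNOWN_PHRASES = {
    ("ha", "koo"): "greeting",
    ("koo", "koo"): "agreement",
}

def recognize_pattern(transcript: str) -> str:
    """Reduce the transcript to its ha/koo syllables and match the pattern."""
    syllables = tuple(s for s in transcript.lower().split() if s in ("ha", "koo"))
    return KNOWN_PHRASES.get(syllables, "unknown")

print(recognize_pattern("ha koo"))   # greeting
print(recognize_pattern("koo koo"))  # agreement
```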
3) Messaging Demo
Roy integrated the asynchronous messaging demo into a puzzle game, complete with SFX. Originally, we had planned to show this during the Softs walkaround, but we were advised to leave room at the end for questions from the faculty.
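For readers curious about the shape of this interaction, here is a hypothetical sketch of the underlying idea: a message left on one Echo device is queued by recipient and picked up later from another. The in-memory store below is a stand-in for whatever backend the demo actually uses.

```python
# Hypothetical sketch of asynchronous messaging: one device leaves a message
# keyed by recipient, and a different device drains the queue later. The
# in-memory store and device names are placeholders for illustration.

from collections import defaultdict, deque

mailbox = defaultdict(deque)

def leave_message(recipient: str, text: str) -> None:
    """Queue a message for the recipient to hear on their next session."""
    mailbox[recipient].append(text)

def fetch_messages(recipient: str) -> list:
    """Drain and return any messages waiting for this recipient."""
    messages = list(mailbox[recipient])
    mailbox[recipient].clear()
    return messages

leave_message("kitchen_echo", "The next clue is hidden under the lamp.")
print(fetch_messages("kitchen_echo"))  # ['The next clue is hidden under the lamp.']
```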
Our Softs Feedback:
-Show all demos to demonstrate the range of possible interactions with the platform
-Explain the limitations of the platform
-Make a deliverable that you can share with the dev community
-Enter our work in the Amazon Echo competition
For finals, we plan to deliver the package of demos (four in total) showing off the different interaction types: template, navigation, query/open-ended, pattern, and asynchronous messaging. In addition, we will be creating an Echo developer handbook. Leading up to the ETC Showcase, we will revise the demos, add user reprompting (to ensure users have enough time to respond), and package all the demos together via a navigation interaction.
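For reference, user reprompting maps onto the standard Alexa Skills Kit JSON response format: if the user stays silent, Alexa speaks the reprompt text rather than quietly ending the session. The sketch below builds such a response; the prompt text is placeholder copy, not lines from our script.

```python
# A minimal sketch of reprompting using the standard Alexa Skills Kit JSON
# response shape. The prompt and reprompt strings are placeholder copy.

def build_response(prompt: str, reprompt: str) -> dict:
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": prompt},
            "reprompt": {
                "outputSpeech": {"type": "PlainText", "text": reprompt},
            },
            # Keep the session open so Alexa waits for an answer and can
            # fall back to the reprompt if none arrives.
            "shouldEndSession": False,
        },
    }

response = build_response(
    "What would you like to do next?",
    "You can say 'continue the story' or 'stop'.",
)
```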
Looking ahead, we will be completing our documentation, filming our promotional videos, and compiling our records for the end of the semester.