Week 15:
Spring Showcase:
During the Spring Showcase, we showed off our demos “Intruder Alert” and “HaKoo” to visitors. As part of the experience, we dimmed the lights, used our Alexa-enabled light bulb in our fake fireplace, and made s’mores in a toaster oven. The showcase drove home how much the environment in which we staged the demos, in other words, the theatrics of it, elevated the experience. Yes, the guests who attended were predisposed to be “engaged” users, but it also showed us how magical open-ended interactions can feel in this space. A couple of groups went through our demos without a hitch, and it was incredibly rewarding to see the surprise and excitement on their faces when the AI responded to their words.
SCS Lecture with Rohit Prasad: Vice President and Head Scientist, Amazon Alexa
The following day, the group attended the SCS Distinguished Lecture featuring Rohit Prasad, an executive and scientist on Amazon’s Alexa team. We were heartened to hear that Amazon’s focus areas are similar to our efforts this semester, namely the push to bring conversational context (the same open-ended interaction we explored) to Alexa interactions overall. Our main takeaway from the lecture was that the technology isn’t there yet, but it isn’t far away either. Perhaps within five years, conversational chatbots will be within reach. In the meantime, Amazon has released Amazon Lex, a service that lets developers start experimenting with these very challenges.
Week 16:
We spent this week preparing our final presentation, as well as our deliverables for the end of the semester. During this time, Amazon announced the Echo Show, new hardware for the Echo line that incorporates a screen along with added utility, most notably messaging. This was another element that validated our efforts: messaging is a meaningful and gratifying addition to this platform. Roy’s asynchronous messaging system was in line with Amazon’s direction, as was our open-ended experience. Indeed, the night before our presentation, Amazon deployed a software update to the Alexa app that allowed voice calls and messages between Echo devices. While we worked to create an entirely audio UX for this space, Amazon relied on its own app and interfaced with users’ mobile phones.
Final Presentation:
For the final presentation, we chose to focus on our efforts to explore the interactions within this space, and to discuss them in the context of Amazon’s current traction and future development. Some of the questions from the audience related to the future of voice recognition in this space, and our team agrees: once voice recognition becomes conversational enough (and aware of conversational context), it will play a large role in UX design, from audio-only projects to VR and beyond.
Final Play-Throughs:
In the final play-throughs, we talked in more depth about our findings overall and ran through some of the demos we hadn’t yet shown to the faculty. They urged us not to let our work disappear, and to that end, we have been adding even more information to our handbook, creating videos of play-throughs of all our demos, and planning a short article for publication online.
Final Deliverables:
1. Developer’s handbook
2. Alexa Skill Package
– Intruder Alert (Query)
– HaKoo (Syllable Recognition)
– Puns with Friends (Asynchronous Messaging)
– Fortune Teller (Template; a minimal handler sketch follows below)
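
To give a sense of the shape of the skills in this package, here is a minimal sketch of the kind of skeleton the Fortune Teller template represents: a single AWS Lambda handler that answers Alexa’s launch and intent requests with plain-text speech. This is an illustrative sketch, not our production code; the intent name GetFortuneIntent and the fortune list are hypothetical placeholders.

```python
# Minimal sketch of an Alexa custom-skill handler, in the spirit of the
# Fortune Teller template. The intent name and fortunes are placeholders.
import random

FORTUNES = [
    "A surprising conversation will brighten your week.",
    "An unfinished project will finally come together.",
    "Someone nearby is about to ask you a good question.",
]

def build_response(text, end_session=True):
    """Wrap plain text in the JSON envelope Alexa expects back from a skill."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point invoked by AWS Lambda for each Alexa request."""
    request = event["request"]

    if request["type"] == "LaunchRequest":
        # The user opened the skill without asking for anything specific.
        return build_response("Welcome. Ask me for your fortune.", end_session=False)

    if request["type"] == "IntentRequest":
        intent_name = request["intent"]["name"]
        if intent_name == "GetFortuneIntent":
            return build_response(random.choice(FORTUNES))

    # Fall through for session-ended events or unrecognized intents.
    return build_response("Goodbye.")
```

Every Alexa custom skill works with this same request/response envelope; the individual skills in the package differ mainly in what happens inside the IntentRequest branch.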