The final weeks of the semester saw the team
polish AI or Nay-I? and show it at the ETC’s Open House. After Open House, the
team prepared for final presentations on May 7th.
Final Presentations went well for the team. We
are proud of the work we were able to accomplish this semester. For a more
thorough examination of the semester, check out the post-mortem linked below.
In week 14 the team presented AI or Nay-I? at the ETC’s soft opening. Then, after reviewing the faculty feedback, we set off to polish the game before final presentations.
The week began with a large push to incorporate as much as possible into our build of AI or Nay-I for soft opening on Tuesday, April 23. After a late Monday night, the team presented the game and received positive reactions. The faculty praised the smooth interactions and consistent art style throughout the experience. They also felt our decision to cut scope and focus on a higher-quality, albeit shorter, experience was wise. Suggested improvements included:
Balancing the biases in the public response tweets to examine the debate more fairly.
Making the tweets more personal to the minister in order to increase emotional reactions.
Increasing the resolution of the public surveillance minigame to better communicate what the AI is doing.
Splitting the minister’s decision into two parts: a press release and the official directive. Each would have its own round of responses. This would better explore the debate while allowing the player to change their mind, now with more information on which to base their decision.
As we move into week 15, the team plans to incorporate as many of these changes as possible, while also polishing other small details.
P.S. We also shot and cut up a trailer for our project. Check it out on our media page!
In weeks 12 and 13, the team gathered feedback from faculty and industry experts. Then, using what we learned, we incorporated those changes into a new iteration of AI or Nay-I?. After pushing out this new build, we held another round of playtesting and made some further changes ahead of soft opening on April 23.
Feedback
Artificial Intelligentsia showed AI or Nay-I to a few industry experts in week 12. Jeff Litchford, VP of Deck Nine Games, praised the user interface and interactions but found the perspective shifts from Minister to AI Simulations and back to be jarring. Jeff also offered a helpful suggestion: ask the players “Are you sure?” one extra time after they sign the directives. Later in the week, Joe Olson, Art Director, and Jonathan Brodsky, Creative Engineer, from Magic Leap stopped by. They enjoyed the game as well and suggested there may be opportunities to infuse subtle color theory into the public response tweets, such as cool colors for positive responses and warm colors for negative ones.
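The Magic Leap suggestion could be implemented as a simple sentiment-to-color blend. The sketch below is purely illustrative (the function name, thresholds, and palette are our own assumptions, not code from the actual build): tweets with a positive sentiment score shift toward a cool blue, negative ones toward a warm orange.

```python
# Hypothetical sketch of the color-theory idea: tint each public-response
# tweet's background by sentiment. Palette values are illustrative only.

def tweet_tint(sentiment: float) -> tuple:
    """Map a sentiment score in [-1.0, 1.0] to an RGB background tint.

    Positive scores blend toward a cool blue, negative toward a warm
    orange, and a neutral score stays a plain light gray.
    """
    neutral = (235, 235, 235)   # light gray for neutral tweets
    cool    = (120, 170, 230)   # soft blue for positive responses
    warm    = (235, 150, 100)   # soft orange for negative responses
    target = cool if sentiment >= 0 else warm
    t = min(abs(sentiment), 1.0)  # blend strength, capped at full tint
    return tuple(round(n + (c - n) * t) for n, c in zip(neutral, target))
```

Keeping the tint subtle (blending from neutral rather than using saturated colors outright) would nudge the player's read of a tweet without making the bias obvious.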
The team also showed the game to a couple of faculty members for feedback. Jesse Schell encouraged us to include a feedback screen after the minigames to compare the player’s abilities to those of a real AI system. Shirley Saldamarco also completed a demo and advised us to be clear about the purpose of AI or Nay-I? when showing it: the game isn’t trying to convince players that AI is good or bad, but rather to portray the complexity of the issue and foster thoughtful discussion.
Using the feedback we gathered in week 12, the team made a large number of changes before playtesting on Thursday of week 13. Those changes are listed below:
Changes Before Playtest Day 2:
Added “Are you sure?” after completing directives.
Updated the user interface on swiping sections.
Added accuracy score page after the phone security simulation.
Added speed score page after the public surveillance simulation.
Made tweets movable with user’s finger.
With a new version of AI or Nay-I? in hand, we held another playtesting session on Thursday, April 18. Our playtesters were all ETC faculty and students. From these playtests, we found that the experience flowed better and much of the user friction was eliminated. However, there is still some confusion at the beginning of the phone security simulation, so we will adjust the gyroscope calibration in our next version. Beyond that, AI or Nay-I? is generally achieving our transformational goals, as seen in the data below. NOTE: 1 is strongly disagree while 5 is strongly agree.
Now with more playtest feedback, the team will continue to polish AI or Nay-I? ahead of soft opening. The changes we plan on making are listed below:
Changes for Soft Opening
Adding voting comparison page.
Adding Twitter profile pictures and headline images.
Adding survey page to outro.
Adding About page.
Adding Credits page.
Adding page about Pamela McCorduck and her memoir.
In week 11 the team finally stitched together the project into our first build of the full experience. We then tested this build at the ETC Playtest day with a large group of playtesters.
After a week full of finishing assets, programming, and debugging, the team finally produced a build of the full experience, now titled AI or Nay-I. We were all excited to get the project into the hands of players. Some screenshots of the experience are shown below.
ETC Playtest Day was a long but ultimately successful day for team Artificial Intelligentsia. We had 11 playtesters ranging in age from 16 to 50. After they ran through the experience, we had the playtesters fill out two surveys: the first gauged how successful AI or Nay-I was at transforming players, while the second comprised questions from the Schell Games Guide to Playtesting. Feedback from the surveys can be seen below.
With this feedback in mind, we will work to fix the bugs we observed and iterate on the design to make AI or Nay-I even more successful at creating discussion about the moral complexities of Artificial Intelligence.
Week 10 saw the team continue development of the public surveillance scenario and solidify the game’s overall structure and narrative.
Satisfied with the status of the phone security simulation, the team dove right into development of the public surveillance simulation. We quickly found that the time we had allotted for the last two scenarios was not enough. After meeting with our project adviser and ETC faculty member Scott Stevens, we decided to cut the tumor recognition simulation. As a result, the game’s focus narrowed further to facial recognition systems.
With this change, the narrative designer finally produced a draft of the script for the game. A full flowchart of the game’s structure was made shortly after the script draft was completed. The flow chart can be seen below. The script for the game can be found in this slideshow.
The team also met with Eunsu Kang, an associate professor at The University of Akron. Eunsu is an artist who creates audiovisual installations and artworks using machine vision. With Eunsu, we did a playtest of our current simulations and discussed the overall scope of the project.
Next week, we plan to progress the public surveillance simulation to a playable state and stitch together the interstitial scenes to create a playable version of our game ahead of playtest day on Saturday, April 6.
In our seventh week we met with a few special guests, and continued work on simulation 1. Our guests were John Sharp, visiting scholar from Parsons, and Anthony Daniels, actor and ETC special faculty.
John and Anthony both playtested our current iteration of simulation 1. We noticed that both tried to use the gyroscope in the data collection phase, and that both struggled to understand what their goal and point of view were in the experience. The sorting phase went much more smoothly. After each playtest, we discussed the project concept and design. John advised us to better connect the facial recognition gameplay to the exploration of AI in the project, and Anthony agreed. Anthony’s discussion was geared towards our target audience and why we are specifically targeting them. His questioning, though intense, helped us hone our focus moving forward.
After the playtest sessions, we updated the control mechanism of the data collection phase to use the gyroscope on the mobile devices. Players are now presented with a “blank” face that resembles a wireframe of the character’s face, and collect data by tilting the phone to rotate the face. The interaction now feels like painting on a 3D model. We felt this iteration better communicates the 3D model creation process that the AI in mobile devices performs in real life, while also engaging the player more.
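The tilt-to-rotation mapping above can be sketched in a few lines. This is not the team's actual game code (the game likely runs in a game engine); it is a minimal illustration, with invented names and tuning values, of how device tilt might drive the face model's rotation while keeping it within a viewable range.

```python
# Illustrative sketch (not the project's actual code) of the gyroscope
# control: the phone's tilt drives the rotation of the wireframe face,
# so sweeping the device "paints" data onto different regions of it.

def face_rotation(pitch_deg: float, roll_deg: float,
                  sensitivity: float = 1.5, limit_deg: float = 60.0):
    """Map device tilt (degrees) to the face model's rotation (degrees).

    Tilt is scaled by a sensitivity factor and clamped so the player
    can never spin the face past a viewable angle.
    """
    def clamp(x: float) -> float:
        return max(-limit_deg, min(limit_deg, x))

    # Pitch tilts the face up/down; roll turns it left/right.
    return clamp(pitch_deg * sensitivity), clamp(roll_deg * sensitivity)
```

Clamping matters here: without it, an enthusiastic tilt could rotate the face past the point where the player can see what they are "painting."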
To address the confusion over point of view, we hope an opening shutter at the beginning of the data collection phase will better inform players that they are inside the device during this phase. This can also be reinforced by short animations before the minigame in the opening scene of the game.
In addition to mechanical changes, we further iterated on the story of the game. Instead of playing as a focus group member, players are now the Minister of Technology for a fictional nation. As Minister, players examine uses of computer vision AI (the minigames), then sign mandates that either ban the technology or allow it to be used without regulation. Players are then presented with the ramifications of their decision in the form of a social media feed that includes news headlines and posts from the public. The team feels that this raises the stakes of the game and highlights the moral complexities that advancements in AI bring.
Beyond the changes detailed above, we also purchased art assets to save production time.
Week 8 consisted of the implementation of the gyroscope mechanism and half-semester presentations. We spent the weekend between week 7 and 8 adding our new features. The beginning of the week was spent preparing a slide deck and rehearsing for our presentation on Wednesday. You can view our slide deck here.
Our presentation went over well. We received positive marks for presentation quality. Feedback on the product itself was that it was sufficient, though we were not thorough enough in explaining how we were tackling the moral complexities of the subject matter. The faculty advised us to include aspects that make the moral choices compelling, such as a virtual world that connects with the player and a set of choices with outcomes of significant moral weight.
With this feedback in mind, we set off for spring break and the Game Developers Conference (taking place over week 9). After returning from GDC, our top priority is completing playable demos for ETC Playtest Day on Saturday, April 6. After that, we will complete the interstitial minister scenes and stitch the scenes together, playtesting often along the way to improve the total experience.
This week we began production on simulation 1, detailed the designs of simulations 2 and 3, completed a draft of our design document, and filled out our backlog with new stories for our newly designed features.
The beginning of the week consisted mostly of setting up the basic mechanics of the FaceID simulation, specifically the data collection and user identification steps. By the end of the week, we had the basic functionality operational with placeholder images. You can see demos of these operations below.
After some internal playtests, we decided to switch the swiping directions to more closely resemble the swiping mechanics found in similar apps. We also noticed that the player’s perspective in this simulation is unclear: are they an AI or a user? Because of this ambiguity, we are exploring alternative UI designs to suggest that the player is inside the phone, not just matching photos.
In addition to this work, we completed a draft of the design document for our game. You can read that below.
This is a living document that will undoubtedly change over the production process. On that subject, we have made some large changes to our narrative. Most notable of these is the shift from the player acting on an ethics committee to the player being part of a focus group. The other non-player characters (NPCs) are citizens who fit into our target demographics. The meeting is led by another NPC: a moderator and AI researcher. We made this decision because the player will most likely lack information about AI, and we didn’t want to alienate them with AI jargon. The focus group members will also be able to voice excitement and concerns about the applications of computer vision presented in the simulations.
We ended the week by adding new stories to our agile backlog for our newly designed simulations and the interstitial scenes. We prioritized the new backlog and planned our next sprint accordingly. This sprint we will be focusing on polishing the first simulation and storyboarding/writing the focus member scenes.
This week we fleshed out the design for our experience and outlined our production plan. The week began with the team meeting with Heather Kelley, our adviser, to catch her up on the feedback we gathered the past week. After a group meeting, each member met with Heather individually to discuss our progress as individuals.
On Wednesday, we met with Jessica Hammer, a faculty member at the ETC, to discuss the design of our transformational experience. Jessica offered advice on how to vary the gameplay mechanics as well as how to structure the scenarios to follow a natural increase of difficulty. This structure and variation should help keep our players engaged as they move through the experience.
After our meeting with Jessica, we spent the rest of Wednesday and Thursday in design meetings. We devised the basic structure of the game as well as the message we want to send with each scenario, or mini-game. A basic flowchart and other design elements are pictured below. Over the next week, our designers will be constructing a design document that will go into more detail.
We closed out the week by planning our first sprint of production, focusing on completing scenario 1 and the surrounding contextual information. First, we came up with stories detailing features of the game. Then we prioritized them all and chose the first batch to pull into our sprint. Finally, we listed all the tasks for each story and split up the work among the team. This was a long, difficult process for us, as none of us had worked within an agile framework before. That said, we are excited to dig in and start working on our final product.
This week we got our first batch of feedback from faculty and Pamela, playtested our prototypes, and, on Friday, finally picked the prototype we will develop into our final deliverable.
Monday consisted of ¼ walkarounds at the ETC. Walkarounds involve project teams meeting with one to three faculty members to give an update on their progress and get feedback. Overall, the feedback we got during this process was immensely valuable and reassured us that we are on the right track.
On Thursday, we held our second conversation with Pamela McCorduck, our client. Prior to this meeting, she had reviewed our blog post, so she was generally up to date on our progress. Pamela expressed excitement in our progress thus far. We discussed our prototype ideas and narrowed down the misconceptions that we will tackle in our transformational experience. In addition, we clarified our target audience as well as how our project will interface with her forthcoming book, This Could Be Important.
Elsewhere in the week, we playtested our prototypes with fellow students at the ETC, with around 10 playtesters for each prototype. To measure our transformational goals, we surveyed playtesters to gauge their opinion of AI both before and after playing. After reviewing the survey results, we determined that prototype #3, The AI Ethics Investigator, best fit our goals both in its gameplay and its transformational ability. We spent most of Friday beginning to expand on the design for AI Inspector (working title). Next week we will catch up with our adviser, flesh out the design, and move into production.
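The pre/post survey comparison described above can be reduced to a simple calculation: the average per-player change on each Likert item. The sketch below, with invented sample data, shows the idea; it is only an illustration of the measurement approach, not the team's actual analysis.

```python
# Minimal sketch of a pre/post Likert comparison for transformational
# goals. The sample scores below are invented for illustration.

def mean_shift(pre: list, post: list) -> float:
    """Average per-player change on a 1-5 Likert item (post minus pre)."""
    assert len(pre) == len(post), "each player needs a pre and a post response"
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

# Hypothetical responses to "AI is a morally complex issue" (1 = strongly
# disagree, 5 = strongly agree), before and after playing.
pre_scores  = [2, 3, 3, 4, 2]
post_scores = [3, 4, 3, 4, 4]

shift = mean_shift(pre_scores, post_scores)  # positive = moved toward agree
```

A positive mean shift suggests the prototype moved opinions in the intended direction; pairing each player's before and after responses, rather than comparing group averages alone, keeps individual differences from washing out the signal.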