Wonderland (VR Game Design)

As the lead designer and producer on the Wonderland project at Carnegie Mellon University’s Entertainment Technology Center, I led my team in finding the right questions as we designed, pitched, prototyped, and playtested six short experiences for the Oculus Rift with Touch.

Our client was the director of the Alice program, which is responsible for the educational software Alice. Alice helps beginning computer science students (particularly grade school students) learn to code in an appealing, image-based way. The goal of the Wonderland project was to explore virtual reality as a potential avenue for expanding this style of education.

Ultimately, we developed six experiences that each illustrated one specific computer science concept. We playtested with teachers who were enthusiastic and optimistic about the potential for these experiences, or others like them, to enhance their lessons. With their feedback, we came to the conclusion that virtual reality is most useful as a way to illustrate abstract concepts like parameter passing in a simple way that gives teachers and students a common vocabulary.

Finding the Question

A slide from our final presentation to faculty, where we explained the evolution of our understanding of our goal.

The biggest challenge of this project was framing the central question in a way that enabled us to break out of our own ideas of what computer science games look like. It took us a few tries to find the right version of the overall question our project was trying to answer, beyond what each individual prototype was trying to accomplish. We started with “how do we teach computer science in virtual reality?” However, the designs we made based on this question were largely re-creations of existing 2D computer science learning games.

Furthermore, as the semester went on it became clear that our individual ideas of what we were trying to accomplish were diverging. I met with each team member individually to hear what they felt we were trying to accomplish, then met with the client to get his version. Then I brought the team together, wrote each version of our goal on our whiteboard, and we all agreed on the interpretation that aligned with our client’s. From then on, we were asking not “How do we teach computer science in VR?” but “What does VR have to offer to the world of computer science education?” That change opened up our designs and got us to start building teaching tools rather than lessons, which was far more effective.

Playtesting and Feedback

Technology teachers playtesting one of our prototypes.

Another challenge was figuring out how to evaluate something that we knew would not be a finished product by the end of the semester. The project was about prototyping and exploring, so our goal was never to release polished products. We would have loved to test in classrooms, but equipment was limited and each prototype had usability quirks that required careful instruction to navigate, so attempting to lead several children through these experiences together was not feasible. We needed a way to evaluate the potential of these games as teaching tools.

My solution to this problem was to go to teachers instead. After all, if teachers do not see the value in a teaching tool, the students will likely never see it. And working with a few teachers at a time meant that we could easily guide them through the games and make it clear that they were not experiencing finished products. At the beginning of the project, I sent out a survey to teachers who follow the Alice program, asking which computer science concepts they find most difficult to teach and why. We chose our topics based on the answers to this survey and designed to solve the problems they presented.

A group of teachers observing their colleague play one of our prototypes.

For playtesting the prototypes, I wrote a verbal questionnaire that asked about each teacher’s history with computer science and their familiarity with the concept behind the prototype they were about to experience. I took notes as one of my teammates led them through the game (and recorded when we had permission), then asked follow-up questions about how well they felt the experience demonstrated the concept and how they might use it in their classroom. It turned out that some of the teachers were not familiar with some of the concepts we were demonstrating; in those cases I briefly explained the concept and checked whether they connected it to what they had just experienced and felt they could understand it. Finally, I asked teachers who experienced more than one of our prototypes to rank them in several categories, such as accuracy, enjoyment, and potential for classroom use.

 

A high school student playtesting one of our prototypes while I observe and guide her through it.

As it happened, we did get the opportunity to playtest with grade school students, and that presented another challenge. We had no control over whether the students we playtested with were already familiar with computer science, and the settings we were playtesting in required that we move quickly, leaving no room to embed the experience in any kind of lesson. So how could we evaluate whether the prototypes were working? I decided that since most of our verification was coming from teachers, what we really needed to know from the students was whether they would play games like these and whether they were able to identify patterns in what they were doing within the games.

Based on that, I asked them if they were familiar with the key ideas the prototype would be trying to get across. If they were already familiar with the concept, after they played I asked them to identify what elements or actions in the prototype represented which computer science ideas. If they were not familiar with the concept, I asked them to explain what they were doing in the experience and what process they were following. If what they described corresponded to the concept we were trying to get across, I took that as evidence that the experience was on the right track.

Documentation

As the team member most comfortable with writing and communication, I was responsible for figuring out our documentation. We needed documentation that would enable someone else to pick up our project and continue based on our conclusions, our process, and our prototypes. We also needed documentation that would let teachers use our prototypes on their own, both to support our argument that the prototypes could fit into a lesson plan and to help our client continue demonstrating them if need be. Therefore, I split our documentation by its intended audience: designers, developers, teachers, and general interested parties. I also wrote a one-page instruction document explaining what each section of the documentation was, so that anyone using it could find what they needed without having to search through everything.

The digital structure of our documentation.

The general documentation included my overall analysis of the project, our conclusions, and high-level explanations of each prototype. Each prototype explanation covered the goals, the original pitched design, the final experience (with video), the lessons we learned, and what we would do if we were to continue working on it. Those explanations also included where to find the code for anyone who wanted to play around with the experiences themselves.

For the teachers, we wrote step-by-step walkthrough guides for each prototype, including common problems and how to either fix or work around them. These guides also included suggestions for how to present the experience and verbally guide someone through it, informed by our own experiences playtesting. In addition, I had the programmers on the team write pseudo-code to go along with each prototype. The pseudo-code was not based on the code that ran the prototypes but was instead a simple example of what each concept looks like in code form. My reasoning was that the most likely way for these prototypes to be used was in conjunction with code, so that students could relate chunks of code to moments in the experience. Therefore, we provided an example of what code for a lesson like that might look like, annotated with what it corresponded to in the experience.
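To give a sense of what those annotated examples were like, here is a hypothetical sketch of the sort of snippet that could accompany a parameter-passing prototype. The function name and the in-experience moments described in the comments are illustrative only, not taken from our actual documentation.

# A simple function whose parameters stand in for the objects the player handles.
def deliver_package(recipient, package):
    # In the experience: the player picks an object off a shelf (the argument)
    # and drops it into a chute labeled with the recipient's name.
    print("Delivering the " + package + " to " + recipient)

# Calling the function with different arguments corresponds to sending
# different objects down the same chute: the steps stay the same,
# only the values change.
deliver_package("Alice", "teapot")
deliver_package("White Rabbit", "pocket watch")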

A page from a prototype walkthrough showing a gesture used in the prototype.

The documentation for developers was standard technical documentation detailing how the code behind the prototypes worked, approaches we tried and discarded (no point in having future developers make the same mistakes we did), and elements we wanted to add if we had more time.

The documentation for designers included a design review in which I wrote about what we learned, our advice for future development, and what we concluded about the role of VR in teaching computer science. I also detailed our process, how it evolved, and what could have been better, as well as the elements of the project that turned out to be more or less important or time-consuming than we expected.

The end result was detailed, extensive documentation with information relevant to multiple audiences, organized by who each document is most relevant to. I designed this system based on feedback from teachers about what would be most useful to them, and on the idea that another project might need to pick up where we left off sometime in the future.