On Reading Experiences

A semester ago I was a designer on a research project at the ETC, working with EA to explore the possibilities of voice recognition in a reading experience for emergent readers. Read about the project here. As part of the project we got in contact with a reading specialist who taught graduate education students at the University of Pittsburgh and acted as a reading specialist for a local elementary school. She was instrumental in helping us validate our project, and along the way we gained useful insight into how teaching specialists and parents evaluate reading experiences. When helping parents find educational applications for reading, she recommends they judge the experience in terms of the five areas of reading. The ultimate goal, she says, is to find experiences that cover as many of the five as possible.

The Five Areas of Reading

1. Phonics

2. Phonemic Awareness

3. Vocabulary / vocab building

4. Comprehension 

5. Fluency

For more specifics on these, search for "five areas of reading" and you will have more choices than you know what to do with.

The goal of covering as many as you can is admirable, and possible, especially for larger experiences. However, if you're going for something a bit smaller, specializing can only help. We were only able to hit three of them (two and a half if you consider each area as encompassing multiple aspects). If you're just starting out on reading experience development, this information, if new to you, will be a great asset and can help you target learning objectives. I advise you to choose carefully, though: there are a few areas where the leading product does such a phenomenal job that improving on their formula or capturing market share will be exceptionally difficult.

 

The Button Fallacy

When I was studying at Carnegie Mellon University I was on a research project, READ, whose main purpose was to explore the use of voice recognition in children's reading experiences on a connected TV platform. As you can imagine, this came with all of the usual problems of developing with voice recognition for kids, but I'm not going to get into those here. I'm more interested in a lesson I learned from this project: take existing mechanics and advice from others with a grain of salt, and think about your project and its goals before you make assumptions and implement them.

For my project this was a button, hence the title of this post. While researching voice recognition experiences for kids we found several great examples of it being done well, and one of my good friends was working at a studio that was making a voice recognition experience for kids too. Most of these experiences had one thing in common: a button. One of the prime examples is Thomas and Friends Talk To You, which is a great experience, and the button totally works for them.

Thomas and Friends Talk To You screenshot

 

There it is, in the bottom right: a button that needs to be pressed, and in some cases held down, while talking.

This button method has benefits; mainly, it helps clean up your audio input. You know that the sound you are trying to recognize is the user deliberately interacting with your system rather than accidental noise. That genuinely matters for voice recognition, and it works well in some experiences. When I went to my friend for advice I was told this button was a must for voice recognition experiences, and he gave the improved-input explanation above. Hell, a trusted advisor and a handful of successful voice recognition experiences all advocated for it; it must be great. I accepted it as a necessity and didn't look back.
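To make the pattern concrete, here is a minimal sketch of that push-to-talk style of capture in Python, using the SpeechRecognition library and the Enter key as a stand-in for the on-screen button. This is not the READ project's actual code; the names and timings are just illustrative.

```python
# Hypothetical push-to-talk sketch: the mic only opens after an explicit
# user action, so any audio captured is (almost certainly) deliberate.
import speech_recognition as sr

recognizer = sr.Recognizer()

def listen_after_button_press():
    input("Press Enter, then speak...")  # stand-in for the on-screen button
    with sr.Microphone() as source:
        audio = recognizer.listen(source, phrase_time_limit=5)
    try:
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        return None  # heard something, but couldn't understand it

if __name__ == "__main__":
    print(listen_after_button_press())
```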

A week or two later we had a prototype far enough along that we could test. We could not get kids to press our button, forget about press and hold. It could not be done. We iterated, adding feedback and indirect control to try to get kids to interact with the button. Test after test went by, and we watched our testers trying to speak to the device, only to fail because of this button. And why did we need it? Just to get better audio input? We weren't getting any audio at all because of it. It had to go. We already knew when we expected users to interact with their voice, so we didn't need a button to be pushed; when the time comes we simply turn the microphone on and let programming magic take care of the possibly lower-quality sound input. It finally worked.

I learned my lesson the hard way. Had I not accepted what others had done before me without questioning it, I would have saved my team at least one month (fortunately, for part of that time we were pushing other areas forward and testing more than just the voice recognition interaction). I wish I had more pictures to illustrate this with my own experience, but many of them for this project are lost because it is on a platform that is very difficult to set up again without the required servers and hardware.
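Since the original build is gone, here is a rough sketch of the button-free approach we ended up with, again in Python with the SpeechRecognition library standing in for our actual platform: when the experience reaches a moment where it expects the child to speak, just open the microphone, adjust for ambient noise, and let the recognizer cope with the messier input. The names and numbers are made up for illustration.

```python
# Hypothetical "listen when we expect speech" sketch; no button anywhere.
import speech_recognition as sr

recognizer = sr.Recognizer()
recognizer.dynamic_energy_threshold = True  # adapt to changing background noise

def listen_when_expected(timeout_seconds=6):
    with sr.Microphone() as source:
        # Measure ambient noise briefly so stray sounds are less likely
        # to be mistaken for speech.
        recognizer.adjust_for_ambient_noise(source, duration=0.5)
        try:
            audio = recognizer.listen(source, timeout=timeout_seconds,
                                      phrase_time_limit=5)
        except sr.WaitTimeoutError:
            return None  # nobody spoke; re-prompt instead of blocking forever
    try:
        return recognizer.recognize_google(audio)
    except (sr.UnknownValueError, sr.RequestError):
        return None  # couldn't understand, or the service was unreachable
```

The experience decides when to listen, and the worst case is a re-prompt rather than a child stuck on an interaction they can't perform.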

Lessons from an Old Eye Tracker

About one year ago I was on a team working on a game for the EyeGaze, an older eye tracker. It didn't go too well, and we ended up switching platforms to the Oculus DK2. The game would go on to become Into The Dark: A Bat's Tale; check out the page if you're interested. At GDC 2016 I saw a newer eye tracker coming to market and thought some of the things I learned working on an older one might be of interest to someone. I should note that I didn't try the new one. A friend got one, though, so maybe there will be a post about that in the future.

Let's start off with a bit of backstory. When I worked on this eye tracker it was part of the Building Virtual Worlds class, which is given through Carnegie Mellon's graduate program for entertainment technology. One of the platforms for the class is the EyeGaze, so there have been plenty of games built for it. The best I have ever seen is Star Gaze. It is brilliant because it plays on our natural tendency to look at light areas in dark spaces and at moving objects.

For the record, I did not make Star Gaze. It is one of the best eye tracking games I have seen and helps illustrate the point I'm making well.

That being said, it can suffer from a terrible flaw of eye tracking: poor calibration. Let me give you an example. Say I want to look at the star in the bottom right of this image.

 

If the eye tracker is not detecting my eyes well, then I can be looking at the star and it will actually think I’m looking where the red circle is.

 

When this happens the player’s only way of proceeding is to look beyond the star, further into the corner. This is very uncomfortable for the player (trust me on this one, I’ve had to do it too many times).

Having very precise points the player is supposed to look at forces the player to look beyond objects whenever calibration is off, and calibration tends not to want to be on. Which leads us to the lesson.

Precision Aiming Is Your Enemy

This!

Good! Larger area to look at compensates for possible poor calibration.

Not This!

Bad! Precision aiming forces players to account for mistakes in calibration.

Try to design around possible calibration issues by making the acceptable eye location a larger area. You do not have to make the target large; the star is still a great target. Just don't require the eye to linger exactly where the star is. In the example above, the white circle would not be visible to the player. I use Star Gaze as an example both because of how elegant and amazing it is and because it is easier to illustrate this lesson with than the game I worked on, which was a tunnel navigation game: there weren't points we wanted the player to look at, we just wanted the player to avoid looking at obstacles in the cave. It is entirely possible that eye tracking has gotten to a point where calibration is less of an issue, but if you are developing for eye tracking I recommend you keep this in mind anyway, just in case.
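If it helps, here is a small sketch of that forgiving-target idea in Python: accept the gaze anywhere inside a generous radius around the star and require a short dwell time, rather than demanding a precise hit. The radius, dwell time, and class names are made up for illustration, not taken from Star Gaze or my own game.

```python
import math

ACCEPT_RADIUS = 120.0   # pixels; much larger than the drawn star
DWELL_REQUIRED = 0.6    # seconds of roughly sustained gaze

class GazeTarget:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.dwell = 0.0

    def update(self, gaze_x, gaze_y, dt):
        """Call once per frame with the eye tracker's reported gaze point."""
        if math.hypot(gaze_x - self.x, gaze_y - self.y) <= ACCEPT_RADIUS:
            self.dwell += dt
        else:
            self.dwell = 0.0  # gaze wandered off; start over
        return self.dwell >= DWELL_REQUIRED  # True -> count as "looked at"
```

Even if calibration drifts by fifty or so pixels, the reported gaze point still lands inside the acceptance circle, so the player never has to stare past the star into the corner.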