Technology as intervention
by Gillian R. Hayes
October 27, 2016
If you’re reading this blog I’m going to go ahead and assume that you know at least a little bit about child development and learning. You may not, however, know much about technology, design, or my field, “Human-Computer Interaction.” My hope over the next four posts is to give you a taste of just some of the ways we might think about technology design in light of child development as well as child development in light of computing.
Before we get started though, let’s define a few terms. For our purposes, I’m going to say “children” is a pretty broad term, ranging from newborns to teens to young adults. Likewise, “technology” is broad and can include the printing press and the pencil, but for today, let’s assume we are talking about digital tools of some kind.
And then there is design. Most of you probably don’t think of yourselves as designers. But you are.
“Thinking about technology as a means for intervention for improving kids’ lives now, but also for helping us learn more about their lives is one of the things I like best.”
David Kelley, founder of IDEO, once said, “Look around you. The only thing not designed is nature.” Design is the process of creating or shaping tools or artifacts to be used by humans. Here is the trouble, though: a lot of that design is pretty bad. That’s where human-computer interaction (HCI) comes in. This academic field focuses on the design, development, and evaluation of human-centric technologies and computing systems. Practically speaking, a lot of HCI is about crafting and studying excellent user experiences. This means ensuring that systems are usable, valuable, credible, and so on.
In a world saturated by media and technology, then, there are some major overlaps between HCI and child development. In this series, I will talk about four of them. The first is “technology as intervention.”
Technology can support a variety of interventions
I should perhaps start with a disclaimer: I’m an unapologetic interventionist. I love tinkering with things; when I see a problem, I can’t help but start thinking about solutions immediately. I’m also a designer. But here is what is perhaps most important: technology design is not what I do, so much as how I do what I do. What does this mean? Well, mostly it means that I agree with Kurt Lewin, who said, “the best way to understand something is to try to change it.” So, thinking about technology as a means of intervention for improving kids’ lives now, but also for helping us learn more about their lives, is one of the things I like best.
Technology can support a variety of interventions. In particular, novel technologies can make space for and allow us to think about interventions in ways we might never have before.
Technologies can foster cooperative behavior…
New technologies can allow for cooperative interventions. One example of this is some work that my research group has been doing for the last couple of years with a school in Orange County, California that focuses on supporting kids with behavioral challenges and ADHD. In the school, there is an extensive behavioral infrastructure for supporting students. All of the students participate in an elaborate token economy and have a wide variety of behavioral supports in place. However, the school wanted to work on helping the kids think of themselves more as a group, to support each other more.
So, we took advantage of some really big displays they had in their classrooms and created collaborative visualizations of group progress. Each student represented a square of a picture, and the pictures changed daily to be ones the kids liked, such as Minecraft or Star Wars. When the student’s behavior was considered acceptable for most of the day, the picture came in clear; when the behavior was close to acceptable but with some work to do, the picture was pixelated; when there was a lot of work to do, there was no picture at all. If 80% of the students performed well for the day, all of the students received a reward.
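The display rules described above can be sketched in a few lines of code. This is a minimal illustration only: the function names and the numeric cutoffs for “acceptable” behavior are hypothetical, and only the three tile states and the 80% group-reward rule come from the description.

```python
def tile_state(behavior_score: float) -> str:
    """Map a student's daily behavior score (0.0-1.0) to a tile state.
    The cutoffs here are illustrative, not the school's actual criteria."""
    if behavior_score >= 0.8:      # acceptable for most of the day
        return "clear"
    elif behavior_score >= 0.5:    # close to acceptable, some work to do
        return "pixelated"
    else:                          # a lot of work to do
        return "hidden"

def group_reward(scores: list[float]) -> bool:
    """All students receive the reward if at least 80% performed well."""
    performed_well = sum(1 for s in scores if tile_state(s) == "clear")
    return performed_well / len(scores) >= 0.8
```

The key design choice is that the reward is collective: no single student’s tile earns anything on its own, which is what nudges the class toward cheering each other on.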
Intervention on collaborative behavior display. Source: Gillian Hayes / Social and technological action research group (STAR)
What we saw was astonishing. In classrooms where the students had largely focused only on themselves—and in fact, for some, not obsessing about someone else’s behavior was even a goal—the students soon developed a healthy interest in each other. Gone were the days of scolding others or complaining of unfair treatment. Instead, students began to cheer each other on and support one another.
Of course, this school already had an excellent behavioral program, so this display was not working alone by any means. It shows, though, that we can make progress by combining traditional intervention programs with these types of innovative supports.
…and make interventions independent of location
Interventions can also be made extensible through additional technologies. Working with collaborators in Mexico, my student, Kate Ringland, and I have been trying to understand how multisensory environments, a best practice in sensory therapy, might be extended into homes. Although we know that sensory therapy can reduce the symptoms of autism, sensory processing disorder, and many other challenges, best practices can be hard to implement at home. The environments are complicated, expensive, and take up a lot of space. Additionally, the staff and therapists required to work with children in these kinds of environments are already overtaxed.
So we’ve been developing video-game-based systems that use off-the-shelf hardware for body-based interactions. In our first system, SensoryPaint, we focused on replicating some of the mirroring activities that kids found helpful in therapy. Kids can paint with their bodies using just a Kinect and our software, allowing this kind of therapy to happen in any room.
More recently, we’ve been working with a choreographer on our faculty, Andrew Palermo, and a pediatric exercise specialist, Kimberley Lakes, to create DanceCraft. This system allows kids to practice, at home between sessions, the therapeutic dance routines taught in person in class.
Sensory paint. Source: Gillian Hayes / Social and technological action research group (STAR)
Multi-sensory environment. Source: Gillian Hayes / Social and technological action research group (STAR)
Wearables extend the range of support…
One of the ways we can best extend interventions is to support their use anytime and anywhere. The growing market for and interest in wearables fits right into this model. In my lab, we have been thinking about this in terms of interventions to support everyday life, and in particular social interaction, for individuals with autism. People with autism can struggle to communicate in the ways people without autism expect. In an ideal world, we would be able to educate everyone to be more aware of and sensitive to these differences.
In the meantime, though, we have been working with non-profits, government agencies, and schools to think about how we might improve employment rates and quality of life by supporting people as they reach out and have social interactions with those who may lack the awareness and sensitivity we would like. So, what does this look like in practice? Mostly, it means moving away from an approach to social skills that focuses on training, role-playing, and practice, and toward a prosthetic model that provides support just in time.
My student LouAnne Boyd, visitors Alejandro Rangel and Xinlong Jiang, and a team of undergraduates have done this so far with two projects. In SayWAT, we used Google Glass to provide support for people who talk too loudly in comparison to their conversation partners or who use a monotone voice. In ProCom, we used some custom-built sensors and a smartphone app to show people how far away they are from a conversation partner, and what an appropriate distance is given how well they know that person.
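To make the ProCom idea concrete, here is a sketch of distance feedback of the kind just described. The relationship categories and distance ranges below are borrowed from Edward Hall’s classic proxemic zones for illustration; ProCom’s actual categories, sensors, and thresholds may differ.

```python
# Illustrative comfortable-distance ranges in meters, loosely based on
# Hall's proxemic zones (intimate / personal / social). Hypothetical values.
APPROPRIATE_RANGE_M = {
    "stranger":     (1.2, 3.6),   # social zone
    "acquaintance": (1.2, 3.6),
    "friend":       (0.45, 1.2),  # personal zone
    "family":       (0.0, 0.45),  # intimate zone
}

def distance_feedback(measured_m: float, relationship: str) -> str:
    """Compare a measured distance to the appropriate range for how well
    the user knows their conversation partner."""
    low, high = APPROPRIATE_RANGE_M[relationship]
    if measured_m < low:
        return "too close"
    elif measured_m > high:
        return "too far"
    return "appropriate"
```

The point of the prosthetic model is visible even in this toy version: rather than drilling abstract rules in advance, the system compares the current situation to a norm and gives feedback in the moment.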
In both these projects, our early results suggest that we can support people in the moment with relatively unobtrusive wearable technologies. As the cost of these devices goes down, and the availability and adoption of them goes up, we can only assume things will get even better.
…and collect data to help us understand the needs of those who wear them
Something pretty cool also happened in the process of working on these projects: we had to make some fundamental discoveries at the same time. For example, while working on SayWAT, we needed a threshold for a “monotonous” voice, and one didn’t exist in the literature. So, we looked at hours and hours of interview data we had on hand for similarly aged people with and without autism and found that young adults with autism in our sample pretty consistently spoke within a 25 Hz pitch range when speaking “monotonously,” and people without autism almost never did.
This is far from a diagnostic or even screening measure, so don’t get too excited. Our 25 Hz threshold works well for intervention, but there is a long way to go and a lot more data to be collected before we can be sure that this approach would accurately distinguish between people with and without a diagnosis. There are some great researchers at Notre Dame, however, working on exactly this question.
In summary, technologies offer novel forms of intervention with potential for scalability, sustainability, and customization. At the same time, these technologies require and create a lot of data. Data collected through intervention can feed research and data-driven clinical decisions. But that’s a topic for next time…
This post first appeared on BOLD. We encourage you to read other blogs posted on BOLD.