Video Modeling For Children with Autism: How Does Discrete Video Modeling Work?
The Gemiini program is distinguished by Discrete Video Modeling, its unique learning approach. If you learn a dance step or a yoga posture from YouTube, that is also “video modeling,” but Discrete Video Modeling (DVM) is different. DVM uses isolation (cutting out the background), repetition, and generalization to communicate directly to the learning part of the brain.
Conventional video modeling assumes that the brain’s visual and auditory regions can distinguish one sound from another and connect them to the visuals. This type of processing does not come naturally to everyone.
Try learning Chinese using conventional “video modeling” in the first of these two videos.
Play the video:
Did you feel lost in a blur of Chinese sounds and animal imagery? That’s similar to what an autistic child might experience when listening to a stream of spoken language and being unable to extract “discrete” bits of meaning from it.
Discrete Video Modeling (DVM) cuts out the distractions and allows the student to focus on one “learning bite” at a time. DVM breaks down concepts into understandable and digestible bits of knowledge, making it the optimum learning approach for people with special needs.
In this DVM example, you will learn to say “alligator” in Chinese in three quick steps:
1. Title Card – The alligator’s cut-out picture, title, and pronunciation appear on the screen with no other distractions.
2. Mouth View – We see the actor’s lips, teeth, and mouth form the “uhh-yoo” sound.
3. Action Scene – The alligator is shown in its amphibious habitat – both on land and in water.
Play the video:
Congratulations. The next time you visit the zoo, you will summon the Chinese word for “alligator” from your memory. This is called generalization – transferring the DVM lesson from the screen into the world. You will be able to point at the toothy reptile and say, Èyú.
DVM and Neuroscience
Neuroimaging research demonstrates how different functions live in specific regions of the brain. One group of brain structures – called the default mode network (DMN) – shows lower levels of activity when we are paying attention but higher levels when we are daydreaming, thinking about the past and future, enjoying the scenery, thinking about others, and so on.
Unlike the DMN, tasks that demand attention engage the “task-positive network” – sometimes called the “dorsal attention network” (DAN), or simply the “active network.” In our daily activities, we seamlessly switch from one network to the other. You might be working at the computer (DAN) and then stop to enjoy a coffee outdoors, watching pedestrians, kids, and clouds go by (DMN). We move between these brain networks throughout the day – but we usually favor one over the other: gardening over chess, or writing over jogging. A DAN-leaning person might feel so tightly wired that they sign up for yoga or meditation just to chill out in the DMN.
Autism and Default Mode Network
Some studies point to a relationship between autism and heightened connectivity in the default mode network (DMN). While not definitive, these studies may help explain the difficulty of teaching language and other attention-dependent (DAN) skills to an autistic child who lives primarily in the DMN. If switching gears does not happen easily, learning a language is difficult – for example, learning Chinese from a conventional video.
Discrete Video Modeling Awakens the Active Network
Laura Kasbar, Gemiini’s Founder, discovered Discrete Video Modeling while observing her kids. “When I saw all six of my children glued to the television (in DMN), I couldn’t tell which were the autistic ones,” Laura explained. “That’s when I realized that discrete video modeling could engage the learning capacity of autistic children in a way that other forms of therapy could not. We believe that Gemiini’s DVM climbs into the students’ default network and teaches them there. In Discrete Video Modeling, we have embedded visual and auditory anomalies that wake you up. What’s more, with Gemiini, we are tricking the viewer’s brain into thinking that they themselves are talking – that they are making these articulations.”
Letting computers do what computers do best
“A child can never have enough face-to-face therapy,” Laura Kasbar said. “But we all know the reality: therapy is scarce and expensive. We are making those golden hours of face-to-face as productive and efficient as possible while letting computers do what they do best – repetitive tasks and teaching. When we let humans do what they do best, we bring out the richly communicative and loving people who are inside each of our kids.”