Speech pathology professor wins grant for innovative research proposal using crowdsourcing

Kaitlin Lansford, assistant professor in the School of Communication Science and Disorders at Florida State.

Crowdsourcing is a growing and efficient way of doing business. Whether the goal is raising money or conducting surveys, enlisting the help of large groups of people, especially from an online community, is becoming the norm in today’s ultra-connected world.

Florida State University assistant professor Kaitlin Lansford is now testing the use of crowdsourcing as a tool for gathering data in speech pathology research, and she has earned a 2014 New Century Scholars Research Grant for her research proposal “Use of Crowdsourcing to Assess the Ecological Validity of Perceptual Learning Paradigms in Dysarthria.”

The American Speech-Language-Hearing Foundation awards grants of $10,000 to support new research ideas and directions for investigators who are not currently funded in the proposed area of investigation. The competition is designed to advance the knowledge base in communication sciences and disorders. Lansford was one of only two researchers selected for the grant.

“Bringing this alternative method of data gathering to the attention of other behavioral researchers is almost as important to me as the actual goals of the grant,” Lansford said.

Lansford is examining the use of Amazon’s Mechanical Turk — a crowdsourcing tool — for training caregivers of patients with dysarthria, a speech disorder associated with neurological disease that often makes the speaker very difficult to understand. Because the condition of patients with dysarthria worsens over time, traditional speech therapy is usually not an option.

However, researchers are exploring alternative methods of improving how well these patients can be understood, such as special training for caregivers that retrains their ears to understand speech that is less than perfect.

“It has been established that this sort of perceptual training does help listeners better understand speech, but it is always done in a very constrained laboratory setting where distractions are minimized in order to get the best results possible,” Lansford said. “The problem with that is if this type of training is ever to be used as a clinical tool it wouldn’t be done in a tightly controlled laboratory — it would probably be done at somebody’s home with the TV on in the background and people running around.”

Using the crowdsourcing tool Mechanical Turk, Lansford hopes to establish the ecological validity of these training paradigms by comparing the results of training completed online with data gathered in the lab.

“If we can see similar results of learning in less-than-ideal listening environments, then this tells us that this listener-based training intervention can actually be done in the real world,” Lansford said.

While Amazon’s Mechanical Turk was originally targeted for business use, researchers, especially those in behavioral disciplines, have found it to be a good tool for collecting experimental data.

“There is a small but emerging evidence base supporting the equivalence of data collected in a laboratory and data collected on Mechanical Turk,” Lansford said. “To me, it seemed like the perfect mechanism for looking at the validity of these listener-based interventions.”

If Lansford can show that crowdsourced data are equivalent to lab results, research costs and time could be reduced drastically.

“You can collect perceptual data for about a quarter of the cost of getting subjects into the lab,” Lansford said. “And for most studies on Mechanical Turk you get all the data you need in about two days, where it took me well over a month to collect the data in the lab.”

Lansford worked with a student from the Research Computing Center, Lukas Bystricky, who programmed the experiment and helped set up the crowdsourcing element.
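The article does not describe how the experiment was programmed, but for readers curious about the mechanics, the sketch below shows one common way researchers post a listening task to Mechanical Turk using Amazon’s boto3 Python SDK. It is purely illustrative: the task title, reward, worker counts, and the URL of the page hosting the speech clips are hypothetical placeholders, not details of Lansford’s study.

```python
# Illustrative sketch only: posting a speech-transcription listening task to
# Amazon Mechanical Turk via the AWS boto3 SDK. All names, URLs, and
# parameters are hypothetical placeholders, not the study's actual setup.
import boto3

# Use the requester sandbox endpoint so no real payments are made while testing.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion points workers to a page the researchers host themselves,
# where audio clips would be played and typed transcriptions collected.
# The ExternalURL below is a placeholder.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.edu/listening-task</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Transcribe short speech recordings",
    Description="Listen to brief audio clips and type what you hear.",
    Keywords="audio, transcription, listening",
    Reward="0.50",                       # payment per assignment, in USD
    MaxAssignments=30,                   # how many listeners complete the task
    LifetimeInSeconds=2 * 24 * 60 * 60,  # keep the task available for two days
    AssignmentDurationInSeconds=30 * 60, # time allotted to each listener
    Question=question_xml,
)

print("HIT created:", hit["HIT"]["HITId"])
```

Once the task expires or fills, the same client can retrieve workers’ submitted transcriptions, which could then be scored for intelligibility and compared against data collected in the lab.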