
Education Technology’s Machine Learning Problem

From Formula 1 to Yelp, industries across the board are seeking ways to apply machine learning to their work. Even academics and Goldman Sachs analysts tried using it to predict World Cup winners. (Those predictions proved very, very wrong.)

But how is machine learning playing out in education—and how does it impact not just students, educators and parents, but also the businesses building technology tools to support teaching and learning?

At the SF Edtech Meetup, hosted by EdSurge on July 10, four panelists gathered to discuss the challenges around deploying machine learning in the classroom and the boardroom. The speakers were Carlos Escapa (Senior Principal, AI/ML Business Development, Amazon Web Services), Vivienne Ming (Founder and CEO, Socos Labs), Matthew Ramirez (Director of Product Management, AI Writing Tools, Chegg) and Andrew Sutherland (CTO and co-founder, Quizlet).

The Data Problem

What makes machine learning work is data—but that data can be biased in problematic ways that can lead to misleading and disturbing outcomes. (Ming referenced the time when Google’s image recognition algorithms classified black people as gorillas.)

“When you’re doing this in advertising, who cares if you get some of it wrong,” Ming said. “When you’re doing it in diagnostics or in education or in hiring, you potentially just ruined someone’s life. You have a real moral obligation to understand why your system is making the recommendations it’s making.”

Education and human development are complex, Ming noted, and there’s still no mass of training data connecting what activities four-year-olds do with their parents in the evenings to their eventual life outcomes. In complex data sets, particularly ones with lots of human data, she warned, spurious correlations that don’t actually mean anything are often more common than evidence of reliable causal relationships.
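Her point is easy to demonstrate with a toy example. Here is a minimal sketch in Python (every number and feature in it is hypothetical and randomly generated): with a thousand random features measured on a couple hundred students, dozens of features will look meaningfully correlated with an outcome by pure chance.

    # Ming's warning in miniature: purely random data with many features
    # still produces "signals." Every number here is hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n_students, n_features = 200, 1000

    X = rng.normal(size=(n_students, n_features))  # random "activity" features
    y = rng.normal(size=n_students)                # random "outcome" with no real signal

    # Correlate each feature with the outcome
    corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])

    # Count features that look meaningfully correlated (|r| > 0.15) by chance
    print(f"spurious 'signals': {(np.abs(corrs) > 0.15).sum()} of {n_features}")

On a typical run, a few dozen of the thousand features clear that bar, even though nothing in the data is causally related to anything else.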

If there’s something wrong with the data you’re using to support a decision, added Escapa, then there will be something wrong with the decision. Data must be representative of the population a model is trying to serve, and there must be sufficient data from that population. “It’s very easy to make a mistake creating a model where you’re trying to generalize a solution to a problem, and you actually have insufficient data from some of the demographics that you’re trying to address,” he said.
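One way to act on that advice, before any model is trained, is a plain count of examples per demographic group. The sketch below is hypothetical (the field names and threshold are made up for illustration):

    # A minimal representativeness check, in the spirit of Escapa's advice.
    # Field names and the threshold are hypothetical.
    from collections import Counter

    def underrepresented_groups(records, group_key, min_count=100):
        """Return demographic groups with fewer than min_count examples."""
        counts = Counter(r[group_key] for r in records)
        return {group: n for group, n in counts.items() if n < min_count}

    # Toy usage: group "B" would be flagged before any model is trained.
    records = [{"group": "A"}] * 500 + [{"group": "B"}] * 40
    print(underrepresented_groups(records, "group"))  # {'B': 40}

Groups that fall below the threshold are exactly the demographics Escapa warns a model will struggle to generalize to.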

Great Data, Great Responsibility (and Costs)

So what can education companies do to reduce biases in their tools?

Ramirez advised that a company wanting to build out its machine learning system should first establish a “ground source of truth” to guide how it builds its tools.

There’s human effort involved as well. Ming recommended that companies hire domain experts who can build the machine learning system. Ramirez, meanwhile, has found it helpful for his writing tool to use human annotators who review the data and judge whether or not a student should receive a certain suggestion or piece of feedback.
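The panel didn’t describe the mechanics, but a standard quality check for this kind of annotation work is inter-annotator agreement. Here is a minimal sketch using Cohen’s kappa on hypothetical binary “show this feedback?” labels from two annotators:

    # Cohen's kappa: agreement between two annotators, corrected for chance.
    # The labels below are hypothetical.
    def cohens_kappa(a, b):
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        # Chance agreement from each annotator's marginal rate of 1-labels
        p_a, p_b = sum(a) / n, sum(b) / n
        expected = p_a * p_b + (1 - p_a) * (1 - p_b)
        return (observed - expected) / (1 - expected)

    annotator_1 = [1, 1, 0, 1, 0, 0, 1, 0]
    annotator_2 = [1, 0, 0, 1, 0, 1, 1, 0]
    print(round(cohens_kappa(annotator_1, annotator_2), 2))  # 0.5

Low agreement suggests the labeling guidelines, the “ground source of truth” Ramirez mentioned, need work before the model does.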

Machine-learning technology can help humans perform these types of tasks. “I think AI assistive platforms are super fascinating and really interesting, because there are a lot of human experts that can do this kind of work, that are eager to do this kind of work, and are extremely well qualified to do this kind of work,” Ramirez said.

But getting this support isn’t cheap. For companies that have raised venture capital, Ramirez said it’s reasonable to expect at least half of that money to go towards AI research and development—and hiring machine-learning engineers. Companies need to budget more for these experts than for general web developers, he noted.

These costs are “something that I think a lot of people going into this industry don’t foresee, and which venture capitalists are now just learning how to forecast.” For that reason, he welcomed the hype around AI in venture capital circles.

What Should Teachers Ask?

Investors are not the only ones barraged with pitches from edtech companies promising that their AI and machine learning tools will transform education. Increasingly, so too are educators and administrators who make decisions around what technologies to use and purchase.

So what can they do to make smart choices—and not fall for gimmicks?

Escapa thinks educators should ask AI companies what data they use to generate their models, how they curate that data, how often they update those models and how they know their data doesn’t have bias. Educators should also press companies to show that their data is representative of the population they serve.

Sutherland said that educators should also be able to try artificial intelligence technology before making a purchase.

“It’s one thing to have PowerPoint demos or a fixed demo where you say, ‘Oh, it produced this decision or this outcome.’ But then there’s another way—you actually have a teacher try the tool for five minutes,” Sutherland said. “They know all the mistakes a student might make. They can try them on the tool, and if it doesn’t work, then it doesn’t work.”

Read more at http://bit.ly/2JgakOQ
