And What Do You Mean by Machine Learning? (A Repost)
I recently posted this piece on the MV Ventures blog. It drew a nice response, so I thought it was worth sharing again here on Rooted.
And What Do You Mean by Machine Learning?
We are living in a time of accelerated change, the kind of era in which transformation, without a doubt, needs a guide. Just think of the changes that have occurred this year with AI, ChatGPT, and machine learning.
But one thing that hasn’t changed is the ingenuity, creativity, and curiosity of young learners, qualities that show up unmistakably in every one of them, every single day. Every educator loves watching a child’s eyes go wide, eyebrows raised, mouth opening into a smile that announces with sheer joy: “Hey! I’ve got it! I think I get this!”
Human learning is a capacity unlike anything else we’ve seen in the natural or technological world, and its exceptionalism is fueled by our curiosity, a quality we don’t detect in the recent AI technologies that are accelerating so much change.
Now machines are learning, and doing so quickly, and there’s plenty of buzz in the media, for good reason. After all:
- What are these machines actually learning?
- Can we explain how the machine is learning, given that deep neural networks are largely opaque?
- Whom or what is the machine learning from, and is that “learning material” biased?
- What are these machines learning about ME and about my private data?
- And who owns all this “learning”?
But here’s an even more basic question: What do we actually mean by “Machine Learning”? Is it the same as Human Learning? This recalls Seymour Sarason’s classic question: And What Do You Mean by Learning? (2004).
Neuroscientist Stanislas Dehaene describes human learning as resting on four key pillars:
- Human learning requires focused attention. In a world with so much information, human brains are good at selecting, amplifying, and processing specific things. Machines, however, “waste considerable time analyzing all possible combinations of the data provided to them, instead of sorting out the information and focusing on the relevant bits” (148).
- Human learning requires active engagement: “We do not simply passively wait for new information to reach us, as do most current artificial neural networks, which are simple input-output functions passively submitted to their environment. We humans are born with a passion to know, and we constantly seek novelty, actively exploring our environment to discover things we can learn” (187). Active engagement leads humans, from an early age, to build cognitive models of the world, and curiosity arises when something about our model is limited or missing and needs adjusting. “Even the most advanced computer architectures,” writes Dehaene, “fall short of any human infant’s ability to build abstract models of the world” (xxiv). It’s the difference between machines using “statistical regularities in data” and humans comprehending “high-level abstract concepts” (28-29).
- Human learning requires error feedback, “which compares our predictions with reality and corrects our models of the world” (145). In some respects, machine learning has made great advances with feedback algorithms (a minimal sketch of such feedback follows this list). However, the feedback used to train deep neural networks is what John Hattie might call “correct and direct” feedback, meaning it isn’t open-ended or posed as wonderings or what-ifs for the learner to think about and comprehend. It’s the difference between “deep learning” and “deep understanding” (Marcus and Davis 66). “Correct and direct” feedback does not demand cognitive reflection or higher depths of knowledge so much as recall and reproduction.
- Lastly, human learning requires consolidation, “which involves sleep as a key component” (146). Machines do not need to power down nearly as often for their learning to stick. When we sleep, however, Dehaene tells us, our brains are highly active, replaying the day’s experiences and transferring them into long-term memory, which is actually an advantage: “Sleep seems to solve a problem that all learning algorithms face: the scarcity of the data available for training. To learn, current artificial neural networks need huge data sets–but life is too short, and our brain has to make do with the limited amount of information it can gather during the day” (232). Sleep affords human learning another unique advantage: our sacred, human practice of daring to dream of other worlds, other lived experiences, and other realities, which fuels our ingenuity, creativity, and curiosity.
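To make the machine side of Dehaene’s error-feedback pillar concrete, here is a minimal sketch in Python of “correct and direct” feedback: a one-parameter model whose only teacher is the gap between its prediction and reality. Everything in it (the data, the learning rate, the variable names) is invented for illustration rather than taken from any of the sources cited below.

```python
# A toy model, pred = w * x, trained by gradient descent on its own
# prediction error. All numbers are made up for illustration.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0      # initial guess for the weight
lr = 0.05    # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x
        error = pred - y       # compare the prediction with "reality"
        w -= lr * error * x    # correct the model in proportion to the error

print(f"learned w = {w:.2f}")  # settles near 2.0
```

Notice that the update rule only ever says “wrong by this much; adjust.” There is no room in it for a wondering or a what-if, which is exactly the contrast drawn above.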
So, what do we mean by learning when it comes to humans or machines?
If Galileo were a machine learning system, he would have dropped object after object off the tower, recorded his results, and eventually detected a pattern on which a predicted output could be based: namely, that every other object is going to drop at the same rate as the ones before. Machines, in other words, learn through inductive, statistical inferencing.
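A minimal sketch of that inductive routine, in Python, might look like the following. The observations are invented for illustration (fall times near 3.4 seconds correspond to a drop of roughly 57 meters, about the height of the Tower of Pisa) and come from none of the sources cited below.

```python
# A toy "machine Galileo": record (mass, fall_time) observations,
# detect the statistical pattern, and extrapolate it to the next object.

from statistics import mean, pstdev

observations = [                 # (mass in kg, fall time in s), same height
    (0.5, 3.41), (1.0, 3.40), (2.0, 3.42),
    (5.0, 3.41), (10.0, 3.40), (20.0, 3.41),
]

times = [t for _, t in observations]

# The induced pattern: fall time barely varies, whatever the mass.
print(f"mean fall time: {mean(times):.2f} s (spread: {pstdev(times):.3f} s)")

# The inductive prediction: the next object, of any mass, will do the same.
print(f"predicted fall time for the next object: {mean(times):.2f} s")
```

The prediction holds only because future drops happen to resemble past ones; nothing in the routine knows there is a tower, an object, or a world.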
The human Galileo, on the other hand, had a hunch, an inspiration – dare we call it a dream? – based on his cognitive model of the world. Gary Marcus and Ernest Davis write, “The more you can make solid inferences without trying things out, the better. In this kind of everyday reasoning, humans are miles and miles ahead of anything we have ever seen in AI” (110). I’m sure Galileo tested his hypothesis, but that’s the point. It started with a hypothesis or a model of the world. And that’s the differentiator: we know that there is a world around us. We have intuition about organic and inorganic things, and we learn by adjusting and expanding our rich cognitive models as we encounter the infinite, novel aspects of our ever-changing environment.
It’s the difference between seeing data and seeing concepts. It’s the difference between inductive methods of learning (machines) and what Erik Larson calls abductive methods of learning: “Whereas induction treats observation as facts (data) that can be analyzed, abduction views an observed fact as a sign that points to a feature of the world” (163). It’s why “the easy things are hard,” as Marvin Minsky once said of AI’s greater challenges. We have image-recognition technology that, given millions of data samples, can recognize what a dog is, whereas my daughter encountered a handful of dogs in her early years and could recognize all sorts of novel cases thereafter.
Her ingenuity, creativity, and curiosity fire on all cylinders because she knows she is in a world, and she wants to soak it up. She wants to know about dogs not because a pattern of pixels keeps recurring, but because she is sharing an experience with others.
So, what do we mean by machine learning? Well, it’s obviously something very different from what humans have been doing for eons. And knowing what machine learning is not makes one thing clear: we cannot kill curiosity in our children. It’s what sets them apart from the robots; it’s the reason we got into this profession in the first place; and it’s what inspires them to dream.
As AI expert Melanie Mitchell once wrote, “An integral part of understanding a situation is being able to use your mental models to imagine different possible futures” (238). In terms of Norman Webb’s Depth of Knowledge framework: are we getting kids to adjust and expand their mental models (human learning) by daring them to ask What if? Or should we really feel threatened by the rise of AI because most of what we’re doing is feeding lots of data to passive human learners?
Sources:
Dehaene, Stanislas. How We Learn: Why Brains Learn Better Than Any Machine… For Now. Penguin Books, 2021.
Hattie, John. Visible Learning. Routledge, 2008.
Larson, Erik J. The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Belknap Press, 2022.
Marcus, Gary, and Ernest Davis. Rebooting AI: Building Artificial Intelligence We Can Trust. Vintage, 2020.
Mitchell, Melanie. Artificial Intelligence: A Guide for Thinking Humans. Pelican Books, 2020.
Sarason, Seymour. And What Do You Mean by Learning? Heinemann, 2004.