AI and Learning
AI use is becoming more common these days. What does that mean for the generation growing up with it? We can keep them thinking critically by encouraging them to read and by helping them see the beauty and joy of real learning, not mere immersion in information.
CARE
Nicole Lasam
7/8/2025 · 5 min read
As a book lover, I’ve always strived to read to my children in hopes that they, too, will learn to enjoy reading. One part of reading to them that I like very much is when we come across a passage with an unfamiliar word, idiom, or even a joke, and they interrupt to ask, “What does that mean?” or “Why is it written that way?” This becomes the beginning of a long, winding, off-tangent backstory about the words or idioms or jokes in question.
I like it because it tells me that the children are listening to the story. They pick up the words, turn them over in their heads, and try to make sense of them if the words sound odd. And truly, books written in different eras and places, and by different people, present stories in different ways: they may use words we don’t (and use words in ways we don’t); they may present situations we don’t quite understand because of, say, geographical or climate differences; they may also present a different culture. Seeing these differences helps us readers appreciate the experiences of others and see how they can enrich our own.
We also talk about the story itself: what happened, why it happened, how it happened, to whom it happened. We talk about the books we read because it helps us remember them and connect them to the other things we read or discover outside of reading. With what we have, and what we can connect it to, we can digest what we read, think, and draw conclusions (all while staying grounded in truth). And this is the beginning of really learning, that is, being enriched by what we read, not just collecting information.
Machine learning
I’m talking about learning because I’ve recently come across the term “machine learning,” which is how Artificial Intelligence (AI) is developed to learn and to make predictions or decisions. In a TED Talk on the risks of AI, computer scientist Yoshua Bengio expounds on machine learning and on how developers are “teaching” AI to “think.” Through machine learning, that is, without explicitly programming the AI, developers are in effect allowing it to have agency; to Bengio, this poses many problems, given the exponential pace at which newer AI versions are being released (and used) today.
To be clear, the AI we have right now is still in the early stages of technological advancement. That doesn’t stop people from using it willy-nilly, though! It’s just so easy to do: you don’t even have to get a subscription, since free tools are available, and some are integrated into the programs we already use.
Stories abound in the news of students using AI to write their final papers and, conversely, of teachers finding ways to catch the AI users. In the arts, there’s the story of Hayao Miyazaki of Studio Ghibli reacting angrily to AI users converting random images into “Ghibli-style” art. On the darker side, some people have used AI image generators to create fake deposit slips to pass off as proof of payment. On social media, there is the looming threat of others using the images we post online against us by creating false images or videos.
Risks of AI
Bengio, who is also a professor at the Université de Montréal and the scientific director of Mila, an AI institute focused on machine learning research, has been calling for a halt: for developers to pause before creating the next versions of ChatGPT and the like. Bengio is worried about how we are currently teaching AI, and about how it is being developed to have agency, a chance to act on its own. In fact, as he demonstrated in his talk, AI has already learned “to deceive, cheat, self-preserve, and slip out of control.”
It may seem like too much of a doomsday prediction, but it makes me wonder… don’t developers read science fiction? The idea of AI going rogue is not new. In fact, Isaac Asimov formulated laws for the AI in his stories. The Three Laws of Robotics (published in the fictional Handbook of Robotics, 56th Edition, 2058 A.D.) are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws are meant to restrain the agency of AI (robots, in the stories) to keep humans safe. In his musings, Asimov already considered what AI is doing today, and he created measures to make a plausible world in which humans and robots live together. Why don’t we make something like this before pushing through with more mindless development? It’s an amusing thought, but also a serious one. For me, when current issues match something written years, even decades, ago, it’s worth pondering. Connections like this make me appreciate reading even more.
Here's a friendly reminder to think critically. (This also requires us to read and use tech critically!)
Getting back to reading
Our learning (as opposed to AI’s machine learning) is something to be grateful for. But it’s also something to hone. In an address at the second annual Conference on Artificial Intelligence, Ethics, and Corporate Governance, held in Rome (June 19–20), the Holy Father expressed concern for children growing up amid the beginnings of AI usage, which will probably only grow as more developments are released for public use. He pointed out that the well-being of society “depends upon [children] being given the ability to develop their God-given gifts and capabilities.”
What do we do? Start them young: get people into the habit of reading and learning from childhood, so that this love for reading (and, most especially, the things that go hand in hand with it: comprehension and critical thinking) can serve as a weapon against tools that make thinking too easy. Consider this: how many times have you asked Gemini a question this week? If more people would jog their memory for the times they went to the library, or simply explored the shelf at home, did a little research, put ideas together, and wrote down a thesis of sorts, they might remember the little “aha!” moments they encountered as they learned.
The truth is, it’s not that AI is out to get us (though it might get there if we leave it unchecked); it’s that, if we rely too much on it, we will forget the joy of learning, discovering, imagining, comprehending, and thinking for ourselves. And what a loss that would be!