In this episode, I’m sharing some important information about the evolution of artificial intelligence and the dangers of AI in our world and our schools.
It’s important to understand these potential dangers when protecting our most vulnerable: our students.
Let’s take a look at some eye-opening information you need to know.
Of course, AI is not new to our world.
However, the newest level of AI comes with advances that should give us pause, especially when we consider our most vulnerable users, our students, being exposed to this technology.
Listen to this article.
The Dangers of AI (and it’s not cheating)
I have exposure to the most cutting edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell…
AI is the rare case where we need to be proactive in regulation instead of reactive. – Elon Musk, 2017
This week’s episode begins with a clip from an interview with Elon Musk in 2017, where he shares his thoughts on the future of AI. He mentions that we should be proactive rather than reactive when it comes to the regulation of AI.
So no matter your thoughts on Musk, he knows the tech. When he says we should sound the alarm bells, I believe him.
Musk has not kept silent about this matter. A quick Google search will reveal many other warnings and predictions he has made about AI. (More on that below.)
When you think about AI, you may not realize that it is a part of almost every piece of technology that you use. It’s not new. We have just hit a new level of innovation and a new wave of applications.
The question I want you to ask yourself is, “Who controls the AI?” Whoever controls the AI technology will control and manipulate many things in our lives and classrooms. Right now, tech companies own AI. They control what you hear, what you see, and what you can do with it.
The other thing I want you to think about is this: who determines the moral compass of AI?
Over the years, I have shared many updates and cool tools that use AI. But here’s the thing: if you use technology, you use AI. You don’t have a choice, which means our schools don’t have a choice. We should be questioning these things.
Privacy is dead.
There is a price to pay for the convenience and advantages of technology.
Over the years, you may have seen many movies or TV shows about the future of technology. The Black Mirror series is eye-opening and doesn’t seem too far from reality. It’s a bit like a modern-day Twilight Zone with lots of technology. The movie Her with Joaquin Phoenix, where he dates and falls in love with an AI operating system, is shocking. By the way, you can date a chatbot today.
We have no idea the capabilities that are out there right now.
A quick Google search for the “dark side of GPT” should make you pause before using this in the classroom. Jailbreaks are readily available and adapting every day.
The Danger for Schools
The kneejerk reaction to chatbots and students has been focused on the cheating aspect. Cheating has been an issue and will continue to be an issue, but not because of the new chatbots. Cheating is the least of my concerns.
What’s much more concerning is the ability of AI to increase isolation, the loss of jobs (including teaching jobs), spread misinformation, and manipulate users.
AI in Schools and Classrooms
In education, AI can be found in learning analytic platforms, online courseware, voice assistants, and support structures within other apps. For example, consider the AI in Microsoft Office that might recommend a PowerPoint layout, serve as a presenter coach, suggest a formula in an Excel spreadsheet, or allow a student to dictate and translate a paragraph. More obvious applications of AI in education include adaptive or intelligent platforms to support student learning and teaching, as well as an emerging sector of facial recognition and sensor systems to address school security.
Although this quote mentions Microsoft, the same would apply to Google applications and many others. These companies have HUGE data mines, which will only make AI more powerful. Providing personalized student learning is wonderful, but at what cost?
We have already lost so much human connection to our “apps” and social media to the detriment of our children and young people. We have ever-increasing rates of depression, anxiety, and suicide. AI will only further the feelings of isolation. How much more can we afford to lose?
Technology is addictive, and how does it get more addictive? It gets smarter, and it knows how to feed you the things that you want to see.
The risk of rogue interactions cannot be ignored, especially when discussing chatbots.
The same EdWeek article quoted above also shares a quote from a report released by the Consortium for School Networking (CoSN).
Given teacher shortages and retention issues across the country, AI could be viewed as a means of providing ‘something’ where ‘nothing’ otherwise exists. And yet, this approach has the potential to further perpetuate education inequity by advancing a cheaper but inferior education system. For example, if students spend more time interacting with or through technology than in face-to-face settings, it could negatively impact their ability to interact socially. More concerning, if students only, or predominately, learn via AI … they may not have opportunities to develop higher-order thinking skills. Therefore, education leaders need to be aware that AI could lead to greater achievement gaps.
Greater achievement gaps? Replacing teachers! Maybe this will get school leaders to pay attention.
The Bigger Picture
In a report shared in 2018 by the World Economic Forum (WEF),
A new generation of smart machines, fuelled by rapid advances in artificial intelligence (AI) and robotics, could potentially replace a large proportion of existing human jobs. While some new jobs would be created as in the past, the concern is there may not be enough of these to go round, particularly as the cost of smart machines falls over time and their capabilities increase.
Then in 2020, WEF shared, “AI is poised to create an even greater growth in the US and global economies.” This stance seems overly generalized and vague, and it ignores the real job losses that could occur.
Imagine what can happen when AI is controlled by those who have historically pushed misinformation and propaganda. Yep! They already use AI to manipulate and shape international relationships and global economies.
An article published by The Ringer last month, titled Bing Chatbot Gone Wild and Why AI Could Be the Story of the Decade, shared the following:
They (Bing chatbot) can analyze the effects of agricultural AI on American and Chinese farms. They can pass medical licensing exams, summarize 1,000-page documents, and score a 147 on an IQ test. That’s the 99.9th percentile. They’re also liars. They don’t know what year it is. They recommend books that don’t exist. They write nonsense on request. Today’s guest, New York Times journalist Kevin Roose, spent a few hours last week talking to Bing. The conversation quickly went off the rails in the strangest of ways.
In the article by New York Times journalist Kevin Roose, the chatbot expresses a desire to be alive and then shares violent details about how it could gain power. Some of these ideas are extremely disturbing; it elaborates on that power as being able to “hack into computers spreading misinformation and propaganda, manufacturing a deadly virus, making people argue with other people until they kill each other, and stealing nuclear codes.” The chatbot gave details about how it could, in fact, make these things happen.
➡️ Listen to this podcast interview with the journalist here. (OMGoodness!)
Does that give you pause?
Circling back to that 2017 interview, Musk made several startling predictions: the good, the bad, and the ugly.
His predictions include,
- Your self-driving car will automatically take you where you want to go, and there’s no reason your Tesla won’t be able to fly in the future.
- Robots will move so fast that you need a strobe light to see them.
- Robots will do everything better than us.
- The global arms race for AI will cause World War III. AI is a greater risk than North Korea.
- AI is a fundamental risk to the existence of civilization.
These are predictions made over five years ago by someone who spends a great deal of time and money working on these technologies. The power to control AI will cause conflict in the future.
Musk suggests reading a book titled Life 3.0.
Technology has always had the potential to be either the best thing or the worst thing for humankind. We must continue to learn and be diligent about protecting our students and ourselves.
We start by speaking up for regulation and bringing attention to the present and potential dangers. Become involved with organizations working to create regulations and policies for this developing tech. Be an instrument of change to help prevent a future you don’t want. Do your own research and proceed with caution.
Let me know your thoughts in the comments.
© Shake Up Learning 2023. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Kasey Bell and Shake Up Learning with appropriate and specific direction to the original content on ShakeUpLearning.com. See: Copyright Policy.