By Terri McKinnon
When I hear folks talk about AI, I instinctively get a little pit in my stomach. In my lifetime, I’ve seen us go from TVs with only four (4) channels to streaming thousands of shows on our computers. I’ve seen us go from sharing one phone – attached to a wall – per household, to everyone carrying smartphones in their pockets. As a matter of course, everything these days seems to have a computer in it.
I remember the exciting idea of one day having flying cars, like the Jetsons. And also the fears of the Terminator, or HAL, or the Matrix. But I am not a Luddite. I am an IT professional.
And as an IT professional, I also see the other side of the coin when technology “advances”. I can see the opportunity for good outcomes and gains in equity. And I also see the dominant push to innovate as a tool for revenue rather than as a force for positive change.
Just as we’re all getting used to everything living in the cloud, streaming, and mobile tech, along comes the next big wave: AI. Part of me wonders if we are just going too fast, too soon. To paraphrase Dr. Ian Malcolm in Jurassic Park, “While we were thinking about whether we could, we never stopped to think if we should, or if we were ready.”
So, what should we really be concerned about with AI?
The first thing that comes up for me is privacy and ownership of our own ideas and creativity. When does a work belong to its creator, and when does it become part of the public sphere? Those lines get blurrier every day. As we saw with the recent strikes out in Hollywood, AI threatens so many creators. The Scarlett Johansson dispute over an AI voice that sounded uncannily like hers is showing us that we may not even own our own voices. Deepfakes are here.
Next is education. If students use AI to do their work, what are they losing in the process? Is it cheating? If so, who is responsible: the school, the student, or society? We’ve already seen the negative results of years of teaching to the test instead of teaching critical thinking skills. The outlook here is not great.
Yes, there are opportunities for AI to empower educators to support students. We must, however, understand the inequities and potential risks. Widespread adoption of AI can have unintended impacts and may create dangers for students. This is especially true for students who are already technologically underserved, including students of color and those from low-income backgrounds.
These students already have inequitable access to devices and high-speed internet. How, then, do we expect them to access AI? Students of color, who already contend with racial biases in education, risk further marginalization. In this way, AI could widen the digital divide.
AI tools can also reinforce existing biases and stereotypes. AI learns from data. If the data is biased or lacks diversity, these tools will perpetuate discrimination. We are already seeing evidence that these tools amplify biases. If the algorithms aren’t vetted, these biases could lead students of color to be penalized or left out of opportunities. It’s important that these tools are properly designed and trained to counter these effects.
We also live in a time when humanity is struggling with the impact of our technology on the environment. We see the effects of climate change today. How does the rise of AI impact our planet?
Some quick statistics:
- Since 2012, the amount of computing power required to train cutting-edge AI has doubled every three-point-four (3.4) months.
- By 2040, emissions from computing technology as a whole are projected to account for fourteen percent (14%) of the world’s emissions.
- A recent study found that training a certain popular large AI model emits about six hundred twenty-six thousand (626,000) pounds of carbon dioxide. That’s the equivalent of around three hundred (300) round-trip flights between New York and San Francisco, and nearly five (5) times the lifetime emissions of the average car.
- By 2050, the World Economic Forum projects that the total amount of e-waste generated will surpass one hundred twenty million (120,000,000) metric tonnes.
- Researchers estimate that training GPT-3 alone may have consumed an incredible seven hundred thousand (700,000) liters of water in Microsoft’s data centers.
I recommend reading more here: https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/
And the list of negatives goes on: lack of transparency (these systems are difficult to understand, even for those who work closely with them), job losses, social manipulation, social surveillance, socioeconomic inequality, weakening ethics, and so on. And of course, what happens when it becomes sentient?
I get asked pretty regularly now whether someone or some organization should adopt AI. And what I want to tell you is that it is time to slow down. We need to get this technology regulated, understood, and diversified before we jump in with both feet. We should all be asking the questions, “Just because we can, should we?” and “Is easier and more convenient always better?”
So for today when I finish writing this I am going to take a break. I am going to go out on my deck and sit amongst the trees. I will take no phone, no computer, no music, no TV. I will look at the blue sky through the treetops, and see the leaves swaying in the light breeze. I will listen to the birds singing their multitudes of songs. I will watch the squirrels and chipmunks scamper up the trees and through the rustling dead leaves. And I will breathe. And I will slow down.
While I know that stopping technology now seems like holding back a threatening storm, I still have time to build my levee. I can still wait. I can take the time to learn and understand what is already at my fingertips before I move on to the next thing. I have time.