Is AI Ethically Good or Bad?

By Sarah Naveed

March 8, 2025. LexAI Journal

Artificial intelligence, initially hailed as a future-building tool, is now labelled an “existential threat” by Geoffrey Hinton, a “godfather of AI” (Brown, 2023). How did a man-made tool become a potential source of irreversible societal damage? If the purpose of AI is to augment or replicate forms of human intelligence, is it possible for the same flaws we possess to be embedded in AI’s operations? I am inclined to think so. I am also inclined to speculate that this is a core reason why its consistently rapid development is feared in the first place.

It is reasonable to consider how the ethical implications of AI affect our creative and logical thinking spaces, all intriguing topics to expand on in future entries. Consider individuals who can no longer perform mundane tasks without AI tools such as ChatGPT. From planning daily routines and vacation itineraries to tracking the macros in your diet, what are the signs, and even the consequences, of an overly reliant relationship?

Another aspect I find fascinating is the rise of artificial relationships, including the ethics behind AI ‘companion’ robots, chat rooms, and similar services, and whether they are on track to replace social interaction. I think any kind of over-reliance on technology for such purposes can impair social and cognitive development, whether by replacing human interaction or practical organizational abilities.

Over-reliance on AI in education fundamentally changes how current students and future generations think and produce work. There is also the evident issue of academic integrity, which results in a lack of the crucial writing, research, and application skills necessary for future academic endeavours.

Finally, a topic I think does not garner much attention is the ethics surrounding bias and the exclusion of minorities. This appears quantitatively in datasets that fail to accurately represent target populations, which is one reason AI may not be an effective replacement for traditional survey or manual data-collection methods.

Even though I have spent the preceding paragraphs critiquing AI’s failure to serve society ethically or environmentally (a whole discussion on its own), the existence of artificial intelligence itself is remarkable and a true pillar of innovation.

In September 2022, Sarah de Lagarde was in her daily morning rush, commuting to work by train, when she slipped and fell on the platform, suffering injuries to her right arm and part of her right leg that could only be treated through amputation. Just a month earlier, she had hiked Mount Kilimanjaro with her husband. Her remarkable recovery was made possible through the innovation of AI: Ms. de Lagarde received a bionic arm that, through machine learning, learns and applies the patterns typical of her movement (Satariano, 2024). In cases like this, it is hard to ignore the overwhelming potential of artificial intelligence, not just in healthcare but in every aspect of living.

One argued benefit of artificial intelligence is that it can automate repetitive tasks, making certain processes easier for working individuals. However, I would counter that only a specific type of individual benefits from this replacement. The argument overlooks people of varying socio-economic statuses whose opportunities may be the very ones taken away by automation.

In order to be purposeful as a “future-building” tool, the stakeholders and concerns important to that future must be part of the conversation about AI’s development. Whether that means creating protections for artists and their creative spaces or researching and implementing ways for AI to broaden its lens to consider minority groups, it is entirely possible for these problems to be solved by the same tool that created them. I believe the success or failure of artificial intelligence is a reflection of society. Both ignorance of and critical perspectives toward AI can be read as parallels to how we subconsciously view society itself: to call out a lack of critical thinking in schools or the misrepresentation of minorities is to hint at areas of improvement in our own institutions. Could data collection improve if governmental systems included minority perspectives, such as those of Indigenous peoples, in government and judicial processes? Could increased funding in schools introduce better hands-on approaches, with augmented support, to lessen the inclination to use artificial intelligence?

I think that rather than fearing the label we have placed on this technology, it can be used as intended: to replicate forms of human intelligence for the better and for good. Whether AI is ultimately good or bad depends on how we shape it, and only by first addressing the issues within society can our use of AI reflect its potential.

References

  1. Brown, S. (2023, May 23). Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI. MIT Sloan. Retrieved March 2025, from https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai
  2. Coursera Staff. (2025, February 3). AI ethics: What it is, why it matters, and more. Coursera. Retrieved March 2025, from https://www.coursera.org/articles/ai-ethics
  3. Means, P. (n.d.). AI—The good, the bad, and the scary. Engineering | Virginia Tech. Retrieved March 2025, from https://eng.vt.edu/magazine/stories/fall-2023/ai.html
  4. Satariano, A. (2024, May 26). Her A.I. Arm. The New York Times. Retrieved March 2025, from https://www.nytimes.com/card/2024/05/26/technology/ai-prosthetic-arm
Please Note:

All content produced by the LexAI Journal Admin Team has been carefully written and edited to reduce bias or personal inclination regarding the subject matter. If at any point you find the content misleading or inexact, please reach out to us on Instagram (@uoft_lexai) or send us an email at lawandethicsofai@gmail.com. Reader discretion is advised. Thank you and happy reading!

LexAI Journal Admin Team
