About the Author:
Rob De La Espriella, BD3
Rob De La Espriella is a former nuclear submarine officer, a regulator with the US Nuclear Regulatory Commission, and a senior manager at commercial nuclear power plants. Since 2007, he has been a Senior Policy Advisor to the Department of Energy and its contractors. In 2023, Rob was accepted into the Forbes Business Council, contributing articles and commentary to help businesses reach their full potential. Rob is also a leading expert in solving complex human-centric problems in modern, socio-technical work environments, and has re-defined how organizations solve problems.
In the age of artificial intelligence, critical thinking has become more important than ever. However, as society rushes to embrace AI, we seem to be pushing aside the very skill that is essential to navigating this new world. It’s easy to fall into the trap of believing that AI is infallible, especially when it comes to decision-making. But as we’ve seen time and time again, relying on AI without critical thinking can lead to disastrous consequences.
One reason why critical thinking is essential in the age of AI is that, despite its many benefits, AI is not infallible. AI systems can make mistakes, suffer from bias, or generate inaccurate results if not designed, developed, and tested correctly. Therefore, it is crucial to approach AI-generated information and recommendations with a healthy dose of skepticism and scrutiny, especially if they have significant implications for decision-making.
One of the biggest challenges we face with AI is the issue of bias. AI systems are only as good as the data they are trained on, and if that data is biased, the AI will be biased as well. For example, if an AI system is trained on data that is predominantly from one race, gender, or socioeconomic group, it may not be able to make accurate predictions or recommendations for people outside that group. It takes critical thinking to recognize when bias is present in an AI system and to take steps to correct it. Here are recent articles on racial and gender bias finding its way into AI applications:
- “Amazon scraps secret AI recruiting tool that showed bias against women” (Reuters, October 2018): Amazon had to scrap an experimental AI recruiting tool after it showed bias against women. The tool was trained on resumes submitted to the company over a 10-year period, but it ended up downgrading resumes that included words like “women” or the names of all-female colleges.
- “Amazon’s facial recognition falsely matched 28 members of Congress with mugshots, ACLU says” (NBC News, July 2018): A study by the American Civil Liberties Union (ACLU) found that Amazon’s facial recognition software incorrectly matched 28 members of Congress with mugshots in a database of 25,000 publicly available arrest photos. The study also found that the software was less accurate for people of color and women.
- “Facial recognition software can be biased. It’s time for action.” (The Guardian, May 2019): Researchers have found that facial recognition software can be biased against people of color and women. In some cases, the software had a higher error rate for darker-skinned individuals, which could have serious consequences for people who are misidentified by law enforcement.
- “Google’s AI System for Diagnosing Eye Disease is Showing Bias Against Black Patients” (Time, December 2020): A study found that Google’s AI system for diagnosing eye disease was less accurate when it came to identifying disease in Black patients. The system was trained on a dataset that was 86% white, which may have contributed to its bias against Black patients.
- “Google’s medical AI was super accurate in a lab. Real life was a different story” (MIT Technology Review, April 2020): This article examines the real-world deployment of a Google deep-learning system for detecting diabetic retinopathy in clinics in Thailand. Although the system was highly accurate in laboratory testing, it struggled in practice, frequently rejecting images taken under real clinic conditions, because it had been developed in settings that were not representative of where it would actually be used.
These articles illustrate the potential for bias in AI systems and highlight the need for critical thinking and evaluation to ensure that these systems are used in a fair and equitable manner.
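The mechanism behind these failures can be illustrated with a deliberately simplified sketch. The scenario below is hypothetical (a toy score-threshold “model,” not any real hiring system): the model is fit only on applicants from one group, so the cutoff it learns does not transfer to a group whose resumes report experience differently.

```python
# Hypothetical illustration: a toy "model" that learns a qualification cutoff
# from training data drawn entirely from group A, then is applied to group B.

# Each training applicant: (years_of_experience_listed, truly_qualified)
train_group_a = [(y, y >= 5) for y in range(1, 11)]  # group A only

# "Training": pick the lowest listed experience among qualified applicants seen.
threshold = min(y for y, qualified in train_group_a if qualified)  # learns 5

def predict(listed_years):
    return listed_years >= threshold

# Group B applicants are all truly qualified, but their resumes understate
# experience by 2 years (e.g., career breaks, different job-title conventions).
test_group_b = [(y - 2, True) for y in range(5, 11)]

correct = sum(predict(listed) == qualified for listed, qualified in test_group_b)
print(f"accuracy on group B: {correct}/{len(test_group_b)}")  # 4 of 6
```

The model is perfectly “accurate” on data that looks like its training set, yet rejects a third of the qualified group-B applicants; nothing in its accuracy metrics would reveal this unless someone thought to evaluate the groups separately, which is exactly the kind of scrutiny critical thinking demands.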
Another issue with AI is the so-called black box problem. AI systems are often opaque, meaning that it’s difficult to understand how they arrive at their decisions. This lack of transparency can lead to distrust in the system and the decisions it makes. Critical thinking is necessary to understand the limitations of an AI system, to recognize when it’s appropriate to use AI, and to interpret the results it produces.
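One basic way a critical thinker can probe an opaque system, assuming only query access to it, is sensitivity analysis: vary one input at a time and watch how the output moves. The scoring function below is an invented stand-in for a black-box model, used purely to show the technique.

```python
# A crude sensitivity probe of a black-box scorer. In practice we could not
# read this function's code; we can only call it with inputs of our choosing.

def black_box_score(features):
    # Hypothetical opaque model (stand-in for illustration only).
    income, age, zip_risk = features
    return 0.7 * income - 0.1 * age + 2.0 * zip_risk

baseline = [50, 40, 1]
for i, name in enumerate(["income", "age", "zip_risk"]):
    bumped = list(baseline)
    bumped[i] += 1  # nudge one feature, hold the others fixed
    delta = black_box_score(bumped) - black_box_score(baseline)
    print(f"{name}: +1 unit changes the score by {delta:+.1f}")
```

Even this simple probe can surface questions worth asking, such as why a proxy like `zip_risk` moves the score far more than income does, without ever opening the black box. Real interpretability tools are far more sophisticated, but the underlying habit of mind is the same: interrogate the system rather than accept its output at face value.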
Finally, as AI continues to become more integrated into our lives, it’s essential to understand the ethical and social implications of its use. For example, if an AI system is used to make hiring decisions, we need to consider whether it’s fair to use an algorithm to make decisions that could affect someone’s livelihood. Critical thinking is necessary to weigh the benefits and drawbacks of using AI and to ensure that its use is in line with our values as a society.
The point of this post is that critical thinking is more important than ever as we rush into the age of artificial intelligence. As we embrace this new technology, it’s essential that we don’t forget the skills that have served us so well in the past. By combining the power of AI with critical thinking, we can ensure that we’re making the best decisions possible for ourselves and for society as a whole. That is why BlueDragon IPS was founded on the principles of critical thinking.
For more information on critical thinking and complex problem-solving, watch this video on our BlueDragon YouTube Training Channel: https://www.youtube.com/watch?v=ICWlE-xqFa8&t=71s