Most of us struggle to do much thinking purely in our heads. Imagine your brain wrestling with a simple challenge: dividing 16951 by 67 without resorting to pen and paper, or to a calculator that spares you the trouble. Or think of an everyday task, such as doing the weekly shopping without first writing out a list.
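To see why this feels hard, here is one way the division could be worked out on paper; the decomposition below is just an illustration of the intermediate bookkeeping we usually offload to our tools:

```latex
% 16951 / 67 broken into the steps a pen-and-paper calculation would track:
%   67 x 200 = 13400   ->  16951 - 13400 = 3551
%   67 x  50 =  3350   ->   3551 -  3350 =  201
%   67 x   3 =   201   ->    201 -   201 =    0
\[
  \frac{16951}{67} = 200 + 50 + 3 = 253
\]
```

Holding those intermediate remainders in working memory is exactly the load we hand over to paper, calculators, and now AI.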
These challenges raise questions about technology's impact on our mental abilities. Does our constant dependence on smart devices to make life easier make us smarter or less intelligent? Are we trading the efficiency technology gives us for a gradual decline in our cognitive abilities?
These questions are no longer fleeting reflections; they have become especially urgent amid the rapid development of generative artificial intelligence technologies such as ChatGPT, the chatbot developed by OpenAI, which is currently used by 400 million people a week, according to the latest statistics available at the time of writing.
Do these technologies enhance or weaken our capabilities?
According to a recent study conducted by a research team from Microsoft and Carnegie Mellon University, the answer is not simple. The study suggests that using generative artificial intelligence may lead to a decline in some cognitive abilities, yet it also brings many benefits, and this tension raises deep questions about how to balance taking advantage of the technology with maintaining our mental abilities.
The study offers valuable insights into the impact of generative AI on critical thinking and problem-solving. It highlights the importance of developing strategies to maximize the benefits and minimize the risks associated with these technologies.
Therefore, in this article, we will present the results of the study in detail, analyze the impact of artificial intelligence on our cognitive abilities, and try to answer the main question: are we heading towards a smarter or a less intelligent future?
What criteria determine good thinking in the era of artificial intelligence?
Critical thinking is generally defined as the ability to think effectively and with high quality, which is achieved by comparing thought processes against specific criteria and methods, including accuracy, clarity, correctness, comprehensiveness, depth, significance, and the strength of arguments.
But the quality of thinking does not depend only on these abstract criteria; other factors creep into the process: the influence of the worldviews we grew up with, inherent cognitive biases that distort our perceptions, sometimes without our realizing it, and reliance on mental models that may be incomplete or misleading, hindering our ability to reach accurate conclusions. These elements add layers of complexity to how we think about and evaluate the world around us.
In the study, the researchers relied on a taxonomy developed by educational psychologist Benjamin Bloom and his colleagues in 1956, a hierarchical classification that divides cognitive skills into six levels: knowledge recall, comprehension, application, analysis, synthesis, and evaluation.
The researchers chose this taxonomy for its simplicity and ease of application, even though it has lost popularity, and some credibility, in academic circles, having been questioned by researchers such as Robert Marzano and, at a later stage, by Bloom himself.
The essence of the criticism lies in the hierarchy itself, the assumption that higher-level skills depend on lower-level ones. That assumption does not always correspond to reality: evaluation, usually placed near the top, can in some contexts be the starting point of an inquiry, or a fairly straightforward step. Context plays a crucial role in how thinking develops, not the ranking alone.
One problem with using this taxonomy in the study is that many generative AI models use it to guide their own outputs. This raises an important question: does the study measure the impact of artificial intelligence on users' critical thinking, or is it simply measuring how effective artificial intelligence, as currently designed, is at guiding users to adopt a critical thinking style based on Bloom's model?
Moreover, Bloom's taxonomy omits an essential aspect of real critical thinking: the inner motivation of the thinker. A critical thinker does not merely perform cognitive (and many other) skills; they perform them well, driven by a deep interest in reaching the truth, an authentically human trait that artificial intelligence systems completely lack.
Is the increasing reliance on artificial intelligence leading to a decline in our ability to think critically?
Research published earlier this year revealed a strong, noticeable negative correlation between individuals' intensive use of AI tools and their ability to think critically.
This result was the cornerstone on which the new study built as it sought to verify the finding more rigorously. The new study surveyed 319 people working in various fields of knowledge work, such as doctors and nurses in healthcare, teachers in the educational sector, and engineers in technical industries. These participants discussed in detail 936 professional tasks they had carried out with the help of generative AI technologies, giving the researchers rich material for analyzing the interaction between human and machine.
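To make the earlier correlation finding concrete, here is a minimal sketch of what a negative correlation between AI-tool use and a critical-thinking score looks like numerically. The figures below are invented purely for illustration; they are not data from either study.

```python
# Toy illustration of a negative correlation (invented numbers, not study data).
from statistics import correlation  # Pearson's r, available in Python 3.10+

ai_use_hours_per_week = [2, 5, 8, 12, 15, 20, 25, 30]
critical_thinking_score = [82, 78, 75, 70, 66, 61, 55, 50]

r = correlation(ai_use_hours_per_week, critical_thinking_score)
print(f"Pearson's r = {r:.2f}")  # close to -1: heavier use pairs with lower scores
```

A coefficient near -1 is what "strong negative correlation" means in practice; it describes an association, not proof that one causes the other.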
The results revealed an interesting paradox: participants reported using less critical thinking while executing tasks with the aid of artificial intelligence, but more of it at later stages, such as checking and editing the outputs.
In other words, the initial reliance on artificial intelligence to accomplish basic work seemed to reduce critical thinking at first, while human intervention later called for a higher level of thinking to ensure quality.
In high-risk work environments, such as hospitals or sensitive engineering projects, the researchers noted two strong motives that push users to engage their critical thinking. The first is the pursuit of high-quality results that meet professional standards. The second is the fear of negative consequences or criticism in case of failure, which prompts them to review the outputs of artificial intelligence very carefully.
However, participants generally noted that the benefits artificial intelligence brings in terms of increased efficiency and time saved far outweigh the additional effort required to supervise and correct its outputs.
In an interesting detail, the study revealed a close relationship between the level of trust and critical thinking: people who trust the capabilities of artificial intelligence showed significantly lower levels of critical thinking, while those who trust their personal abilities showed higher levels.
This discrepancy suggests that generative artificial intelligence does not fundamentally damage a user's critical thinking by itself; rather, its impact depends on whether the user already has a strong foundation in this skill to begin with.
How to become a critical thinker in the age of artificial intelligence?
The new study carries a fundamental idea between its lines: activating critical thinking while supervising the output of generative artificial intelligence is, at the very least, a wiser step than surrendering completely and blindly to these technologies without scrutiny.
In light of this vision, the researchers suggest that generative AI developers integrate innovative features into their systems, specifically designed to motivate users to exercise critical supervision more effectively, perhaps through alerts that encourage auditing or tools that help evaluate outputs.
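As a thought experiment, the kind of "alert that encourages auditing" the researchers describe could be as simple as a wrapper that appends a short review checklist to every generated answer. The function and checklist below are purely hypothetical, sketched to illustrate the idea rather than any feature of a real product or of the study itself.

```python
# Hypothetical sketch: attach a critical-review nudge to AI-generated text.
REVIEW_CHECKLIST = [
    "Does the answer actually address the question you asked?",
    "Can the key facts or figures be verified against an independent source?",
    "Are there hidden assumptions or biases in how the answer is framed?",
    "Would you be comfortable defending this output to a colleague?",
]

def with_review_nudge(ai_output: str) -> str:
    """Append an auditing checklist to a generated answer (illustrative only)."""
    items = "\n".join(f"- {q}" for q in REVIEW_CHECKLIST)
    return f"{ai_output}\n\nBefore using this answer, consider:\n{items}"

print(with_review_nudge("Draft summary produced by a generative AI model."))
```

A design like this only prompts supervision at the output stage, which is precisely the limitation discussed next.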
But the question that arises here is: are such measures really enough to remedy the problem? The answer is not simple; real critical thinking requires a constant mental presence that is not limited to the supervision stage but extends to every step of the interaction with artificial intelligence.
It starts from the moment users formulate the questions or hypotheses they intend to test, continues through their use of the tool, and extends to examining the results it produces for possible biases or lapses in accuracy.
The only guarantee to prevent generative AI from undermining your critical thinking abilities is to be an experienced critical thinker before you use AI tools.
But becoming a critical thinker does not happen overnight; it is a deliberate process that requires learning to uncover the hidden assumptions behind claims, whether your own or other people's, and then boldly challenging those assumptions to test how solid they are.
It also requires openness to exploring multiple points of view, even those that contradict your firmest beliefs, and evaluating them objectively to reach a more holistic picture. It requires, too, practicing systematic, organized thinking and collaborating with others to exchange ideas and challenge one another's views. Just as simple tools like blackboards and chalk helped improve our mathematical abilities, generative artificial intelligence can play a similar role in enhancing our critical thinking skills, provided we use it carefully and consciously.
We can use generative AI as a tool to challenge ourselves and test our assumptions, which helps develop critical thinking skills. At the same time, we must recognize that these technologies do not replace the effort of developing our own abilities. There are always steps we can and should take to improve our critical thinking rather than relying entirely on artificial intelligence to think for us, whether by practicing asking deep questions, analyzing information carefully, or discussing ideas with others. The initiative remains in our hands to keep our minds sharp and independent in an age increasingly dominated by machines.