As AI infiltrates our day-to-day work lives, the real danger it poses may not be losing our intelligence, but losing faith in our intelligence. According to new research, the key factor separating the two scenarios is our attitude toward the technology.
The study, which involved almost 2,000 adult participants from various professions, showed that the people who used the technology most often and tended to accept whatever AI produced with minimal modification were less confident in their own thinking and believed AI did most of the thinking for them.
However, this wasn't an inevitable outcome. The people who challenged the AI system's output, either through editing the output or rejecting it altogether, felt much more certain about their own reasoning process.
The results indicate that the technology itself is not what diminishes our abilities; rather, how we interact with it shapes our experience. "Generative AI can cause cognitive decline or cognitive evolution – it all depends on your engagement style," Sarah Baldeo, the study's co-author and a PhD student at the Department of AI & Neuroscience at Middlesex University, London, told Newsweek. "When we analyze the neural signatures contingent on people's interaction styles with the tool, we observe increases and decreases in brain activity related to AI use. In short, it is nothing about the tool."
It is yet another example of why the right kind of relationship with the technology can be beneficial for us at work.
How using the technology impacts confidence
In a study published in Technology, Mind, and Behavior on April 16, 1,923 adults in the U.S. and Canada used AI to complete work-based simulations such as writing a plan for salary negotiations or analyzing uncertain information. Unlike many studies that compared AI use to its absence in a workplace setting, this one focused on how employees actually use the technology.
The most striking finding was that people relied on the technology to different degrees. Some accepted the very first output they received and moved on as soon as it seemed usable, while others spent extra time editing and refining it until they were fully satisfied. These behaviors were strongly correlated with how confident people felt in their own thought process.
This correlation depended on the type of task. Participants were most likely to fully delegate their thinking on planning or sequencing tasks, that is, tasks that required working through a series of steps to solve a problem. Once the assignment involved personal reflection or introspection, however, their behavior changed drastically: they were far more skeptical when evaluating their own experiences or personality traits, because there they trusted themselves.
Entry-level employees were less likely to modify the output provided by AI and were generally less confident, which suggests that some baseline knowledge is essential for pushing back against AI. "If the AI solves a problem for you, you don't think and you don't learn," says Ethan Mollick, a Wharton professor at the University of Pennsylvania whose research focuses on AI and workplace productivity and who authored Co-Intelligence. If, on the other hand, the technology is used as a tutor, the results can be positive.
The real variable is not AI—it's your choices
There is a compelling narrative about artificial intelligence and the human brain: the tool is eating away at our minds, bit by bit, as we outsource cognitive tasks to it. AI experts, however, have pushed back against this stark portrayal. Decisions about how much to rely on AI, Mollick asserts, are frequently subconscious. "Humans are inherently lazy and try to put forth as little effort as possible when doing anything," he explains, which makes passive AI use the natural inclination. "We are making decisions about what skills to allow the AI to take over."
Furthermore, Mollick says this dynamic is characteristic of any new technology. "We all surrender our abilities intentionally because we no longer need to do them," he continues. Consider, for example, when you last performed long division by hand. For most people, it has been replaced by the calculators available on their phones at all times. "This comes down to choosing what to keep up with," Mollick elaborates. "If you want to retain a skill, you'll have to make the choice actively to do so."
Baldeo sees this trend as partially a vicious circle of self-confidence: Those who doubt their skills are more inclined to turn to AI—and in doing so, further erode their self-assurance. "Human beings with a greater sense of self-confidence will be more likely to utilize AI in a cognitively healthy manner," she claims. This trend holds true the other way around, too. "If you are already feeling unstable concerning yourself, using AI is really not recommended for you."
How to harness AI without losing your competitive advantage
For individuals looking for a middle ground, then, the lesson is not to shy away from AI; rather, it is to be more deliberate in their usage.
According to Mollick, what matters most is being intentional about which activities you actually want to perform yourself, resisting the temptation to outsource everything just because you can. “Maybe it's worth spending a little bit of extra effort for practice,” he says, drawing an analogy with exercise: “You could probably use a machine or even pay someone else to lift things up and down for you, but you're going to do it yourself, because you want to retain the strength.”
Similarly, Baldeo suggests that users create what she refers to as "cognitive scaffolding," where one develops a basic knowledge of a task prior to delegating it and actively engages with AI through critical thinking about and fine-tuning of its output.
Finally, learning how to challenge AI's conclusions is vital. Instead of accepting whatever answer is generated first, Baldeo advises pushing back at the technology at least two or three times, whether by disagreeing with it, pressing it for more detail, or asking it to justify itself further. “You can address it as though you are addressing a person,” she says. When AI generates a project outline that doesn't satisfy you, for example, you might say, “I don't believe you're considering this factor in your project plan—be more explicit.”
There is even a particular prompt that can help counteract AI's tendency to flatter users. “Your answers must be based on third-party verifiable information. Do not try to flatter me and form an emotional connection,” is one such example, according to her. Add that to your prompt in any large AI system, and you are far more likely to get an answer that genuinely makes you think and offers real insight.
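For readers who interact with AI programmatically rather than through a chat window, the same idea can be sketched as a small helper that prepends Baldeo's instruction to every prompt before it is sent. This is a hypothetical illustration, not something from the study; only the instruction text comes from her example.

```python
# Anti-flattery instruction, quoted from Baldeo's suggested prompt.
ANTI_FLATTERY_INSTRUCTION = (
    "Your answers must be based on third-party verifiable information. "
    "Do not try to flatter me and form an emotional connection."
)

def build_prompt(user_prompt: str) -> str:
    """Prepend the anti-flattery instruction to a user prompt.

    Hypothetical helper: the combined string would then be sent to
    whatever AI system you use.
    """
    return f"{ANTI_FLATTERY_INSTRUCTION}\n\n{user_prompt}"

print(build_prompt("Outline a project plan for a product launch."))
```

The same instruction could equally be set once as a system-level message if the AI tool you use supports one.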
It is also worth looking closely at what AI delivers. At first glance the output may look great, but that does not always mean much, according to Mollick. “Sometimes it's very easy to be impressed by what looks impressive without necessarily being the case.”
The point is that people should stay intentional throughout their interactions with AI. “AI is not the entity who will decide to take some sort of action over us. The AI has no agency here,” Mollick explains. “We decide when we want to allow the AI to do certain things for us, and when we don't want it to do that.”