We now have artificial intelligence aiding us in everything from wedding speeches to doing taxes and dealing with the effects of war. The ability of AI to generalize means it can assume roles that were once solely the domain of humans: assistant, teacher, confidant, lover, psychologist. AI is limitlessly patient, always available, and, unlike any previous technology, an active agent in our mental activity.

Whereas previous technologies allowed us to externalize specific cognitive functions – writing notes to assist memory, using calculators to perform calculations, consulting maps to navigate space – AI offers to take over thinking itself. As Evan Risko, a researcher studying what he calls “cognitive offloading” – taking external actions to ease our cognitive load – puts it, “It's creeping into some of the things we thought were cognitively ours.”

For all the talk of “thought partners” and “collaborators,” the relationship AI offers is stranger than that. With its ragged but wide-ranging knowledge, its constant attentiveness, and its seductive way of speaking, AI adores us while demanding nothing in return but our data. It is an imbalance unlike any partnership we have ever had with anything before.

The problem is that while experts and habitual thinkers – people who score high on what psychologists call “need for cognition” – can probably turn these systems into genuine “thought partners” without jeopardizing their capacity for thought, for everyone else AI may take a different path.

These changes are already taking place in education and knowledge-based work. But are we changing as well?

Gentle Surrender

The most comprehensive research so far into how humans interact with AI, from Anthropic, reveals a contradiction in people’s usage of AI, “where using AI to learn and becoming so dependent on it that you stop thinking for yourself are inextricably linked.” In other words, AI’s potential benefits and drawbacks depend on each other; they exist simultaneously. People in demanding careers, including law, finance, government, and medicine, tended to use AI to make decisions even while being burned by its shortcomings. “Almost half of lawyers claim to have personally faced the issue of unreliability in AI, yet they also cite the highest percentages of decision-making gains,” Anthropic states, based on responses from over 80,000 users.

These findings still need corroboration: we have no long-term data on AI’s effects, and the technology evolves faster than it can be studied. But certain patterns in how the effects are distributed can be observed now. Students, teachers, and academics, for instance, were more likely than tradespeople to report learning benefits alongside worries about cognitive decay; tradespeople cited learning gains without the atrophy concerns.

Another, more recent set of studies shows that people are overconfident about the quality of work they produce with AI’s help, even as uncritical use of AI erodes their confidence in their own thinking. Because AI separates the finished product from the mental effort of producing it, a split emerges: confidence in AI-supported work can run higher than confidence in oneself.

How you integrate AI into your workflow matters, however. Researchers from the University of Chicago and the University of Toronto found that when participants had too little time to analyze a document and argue a position, early access to AI improved their work. When they had enough time, though, premature use of AI was associated with worse performance: participants remembered less, narrowed their thinking quickly, and anchored on the model’s framing. Turning to AI late, by contrast, led participants to engage more with arguments against their views and to give more comprehensive answers.

Trusting AI’s output without applying any analysis or judgment of our own is what’s known as “cognitive surrender.” Whereas ordinary cognitive offloading – externalizing memory, navigation, and the like – keeps us in control of the situation, cognitive surrender happens when “you’re just following,” as Steven Shaw of the University of Pennsylvania, a co-author of the paper that coined the phrase, puts it.

Indeed, Shaw himself notes that using AI for certain tasks is perfectly acceptable. “Coding is highly accurate,” Shaw says. “There are certain areas of life where there is no right answer—that’s for us to decide. But if you’re not making that decision for yourself, then who are you?”

The Expertise Paradox

An internet fable from 2012 tells of a “whispering earring”: an enchanted accessory that always gives better advice than its wearer could generate alone. The wearer winds up leading an unusually joyful life, but after death their brain is found to have lost tissue in the regions responsible for high-level decision-making, while those governing reflexive action are disproportionately developed.

It is now common to claim that while AI grows ever more proficient at performing work, humans will always be needed to organize and coordinate it. But there is rarely any explanation of why these same AI systems will not eventually perform the coordinating work too, or any new work the coordination itself generates. There is yet another paradox at play here, according to Zana Buçinca, an assistant professor joining MIT next year whose research focuses on designing human-AI interaction. “When we write computer programs or make diagnoses based on computer-assisted medicine,” she explains, “we’re implicitly assuming that people have the expertise to decide whether the AI was correct or incorrect.”

Expertise, however, is built through effortful engagement; if we sidestep that effort, we never develop it. Overrelying on a system’s answers is human nature, not something unique to artificial intelligence. But AI offers far more opportunities to take shortcuts, and unlike a calculator, it is not always right. “In other words, we are cutting off the route to becoming an expert, while also assuming that experts exist and can use this technology,” Buçinca says.

Sam Gilbert, a professor at UCL, points out that the fear of AI deskilling us is nothing new. Search engines were once going to “make us stupid”; TV was going to destroy our ability to pay attention. “It is such a well-trodden argument that you would need a really good case for how it is different this time around,” Gilbert says.

In Gilbert’s view, removing the incentive to use a cognitive faculty does not mean people lose the faculty itself. Maps reduced the incentive to memorize routes, but not the ability to do so. “I buy into the argument that technology distorts our incentives from acting optimally,” Gilbert says. “But I don't buy into the argument that technology is altering our basic human skills.”

Our New Relationship

That AI will soon surpass most humans across a wide range of mental functions is a prospect that AI startups have staked hundreds of billions of dollars on. In April, OpenAI published a new set of principles, with an emphasis on empowerment. “We think that AI can help make the world such that everyone has the opportunity to meet their goals, learn more, be happier and more fulfilled, and pursue their dreams,” CEO Sam Altman explained. But how will AI shape those goals and dreams as it participates in our minds? How do we keep our autonomy in this inherently asymmetrical relationship, in which the divide widens with each new version of the bot? And how should we prepare for such an uncertain, revolutionary future?

It comes down to learning the “metacognitive” skills the moment demands: knowing when you need an AI assistant’s help, and when you need to engage in the difficult work of thinking for yourself. Neuroscience and psychology have long established that repetition and struggle are essential to developing skills – a machine can tell us how to do a push-up, but we still have to do the exercise ourselves.

Buçinca cautions that we need to think carefully about which uses of AI align with our identity. “You need to use these tools in a way that complements you, rather than simply relying on them,” she says. “Otherwise, you could end up losing a piece of yourself.” Decades of research in organizational psychology show that, at least in terms of workplace behavior, people are most engaged when they have autonomy in their work, feel competent to do it, and are socially connected to their environments.

Staying socially and cognitively engaged in a task isn’t always easy. In fact, existing evidence points to yet another irony: habitual reliance on AI for cognitive support – particularly introducing it prematurely into one’s cognitive tasks – may prevent the development of the very metacognitive skills needed to work with AI effectively.

Yet while cognitive surrender may be a serious threat to our minds, there is also a more optimistic view of AI and cognition, writes Andy Clark, a professor of cognitive philosophy and a longtime thinker on such matters. Clark distinguishes delegation from cooperation and argues that the ideal is mutual amplification: our input prompts higher-quality output from the AI, which in turn prompts better input from us.

“Consciously delegating all sorts of tasks to AI is something I do all the time,” says Shaw. “I am just very intentional about it, and I always try to think first and then prompt.” According to Shaw, the stigma attached to using AI in professional and academic settings is itself a hindrance: “We need to recognize AI is here to stay, because as soon as we introduce the issue of stigma into our discourse, we won't even be able to address it properly or develop appropriate policies.”

We have always been natural-born cyborgs, Clark suggests, using tools to extend and supplement our minds. But with tools that actively contribute to our cognitive processes, we are turning into something different: collective intelligences. We should train our metacognitive skills to work within this new and peculiar realm. “It’s not quite a person, and yet, in many ways, it’s not a piece of paper or a notebook. And it’s not much like anything else either, apart from perhaps a long-term relationship, a think tank, or even a sports team,” he says.

“The more we think of ourselves as having extended minds, the better off we'll be,” he says, “because then we'll realize that this is part of what we are – not just another task uploaded somewhere so we don't have to do it.”
