When AI Takes Over: The Hidden Cost of Technological Progress

By Chuck Dinerstein, MD, MBA — Apr 01, 2025
As AI quietly takes the wheel in medicine and other fields, once-sharp skills might quietly rust in the background. If AI is doing the thinking, are we still thinking at all?

We tend to think of progress as gain—more efficiency, more accuracy, more time. But sometimes, progress quietly takes something away. As artificial intelligence increasingly takes over tasks once performed by human hands and brains, a creeping side effect emerges: deskilling. When machines handle the diagnosis, the decision tree, and the increasing “e-communication” of healthcare, what happens to the human professionals who used to do those things? The answer, backed by a growing body of psychological and cognitive science, is that unused skills don’t just wait patiently in the background. They fade. And worse, we’re terrible at noticing. We underestimate how fast and far our abilities slip away when we stop using them, all while overestimating how easily we can get them back. This has profound implications in fields like medicine, where even small lapses in judgment or proficiency can have significant consequences. If AI is the shiny new co-pilot, we may want to think harder about what happens to the pilot when they stop flying. Research shows that “use it or lose it” isn’t just a saying. It’s a warning.

To understand how this plays out in real life, researchers writing in the Social Science Research Network recently set out to test just how aware we are of our skill decay. What they found might make you rethink how well you “still got it.” Spoiler alert: It turns out we’re not just bad at remembering—we’re bad at remembering how bad we are at remembering.

In three experiments, researchers tested how well people could predict the decline in their skills after a break. Online participants learned a task with measurable skills: translating Latin, predicting code output, or tracing a line on a screen with mouse controls that had been reversed. [1] After a break of one, two, or three weeks, they returned to do it again; however, before they did, they guessed how well they’d perform.

The results were humbling. People knew their skills would fade but dramatically underestimated how much. On average, they predicted only about half of the actual decline in their skills. Specifically, they guessed 47% of the decay for Latin, 41% for coding, and 72% for the motor skill tracing task.

Neither the length of the break, the gender of the participant, nor their initial skill level reliably changed how wrong their predictions were. 

“…we find that older participants exhibit both steeper skill decay over the two-week interval, and less awareness of this decay.”

Older participants experienced steeper performance declines and were less aware of it. The data suggest they didn’t adjust their expectations to match the reality of faster skill loss. In other words, the older you are, the more likely you are to think you’re still great at something you haven’t practiced—when, in fact, your skills may have quietly snuck out the back door.

In a fourth study, participants made two predictions—one right after learning the skill and another two weeks later, just before they tried it again. The second prediction was better (42% more accurate) but still missed the mark. Even having just experienced the decay firsthand, participants underestimated their performance loss by 36%. Awareness alone isn’t enough: “knowing” that you’ve gotten worse doesn’t mean you fully understand how much worse you’ve gotten.

The fifth study looked at why people underestimate skill decay. Researchers brought in a new group of participants (Predictors) and asked them to predict how much someone else’s (a Performer’s) skills would decline based on how much training that person had (none, a little, or a lot). The goal was to explore three possible reasons for our overconfidence:

  • Projection bias – If people can’t imagine feeling less competent than they do now, prediction errors should grow as task experience and mastery feelings increase.
  • Ego bias – If overconfidence comes from wanting to believe we’re awesome, Predictors, with no personal stake, should make more accurate forecasts than Performers.
  • Faulty mental models – If people misunderstand what causes skill decay, both Predictors and Performers should make similar errors, regardless of experience or personal involvement.

Projection bias didn’t hold up. Prediction errors didn’t vary with how experienced the performer was, ruling out the idea that people struggle to imagine losing skills from a place of mastery. Unsurprisingly, ego bias was a big player. Predictors were far more accurate than Performers, suggesting we’re inflating our future performance because it preserves our sense of competence, even if it isn’t accurate.

The fact that Predictors still overestimated performance suggests that we also hold flawed mental models of skill decay.

Researchers using machine learning found that actual performance was best predicted by average performance, peak performance, and age. But when people made predictions, they leaned heavily on their best moments—their peak performance—and basically ignored age. Interestingly, even those predicting others’ performance made similar mistakes. They didn’t rely on peak performance quite as much, but they still didn’t fully account for the effects of age. 

These findings point to a shared misunderstanding: people overemphasize their best moments (an ego bias) and fail to fully account for age (a flawed mental model), and both errors contribute to overconfident predictions.
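The gap between these two forecasting styles can be sketched in a few lines of code. Everything below is purely illustrative: the weights, scores, and ages are hypothetical, not drawn from the study. The point is only that a model weighting average performance, peak performance, and age (as the researchers’ machine-learning analysis did) diverges from a peak-anchored guess, and the divergence grows with age.

```python
def model_prediction(avg, peak, age):
    # Hypothetical weights: post-break performance tracks average
    # performance most strongly, peak performance less so, and
    # declines with age. These numbers are made up for illustration.
    return 0.6 * avg + 0.2 * peak - 0.3 * (age - 30)

def naive_prediction(avg, peak, age):
    # People's actual forecasts leaned on peak performance and
    # largely ignored age.
    return peak

# Two hypothetical performers with identical scores but different ages.
print(model_prediction(avg=70, peak=90, age=25))  # 61.5
print(model_prediction(avg=70, peak=90, age=65))  # 49.5
print(naive_prediction(avg=70, peak=90, age=65))  # 90
```

Under these made-up weights, the peak-anchored guess overshoots both performers, and by a wider margin for the older one—the same pattern of ego bias plus age-blindness the study describes.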

Deskilling in the Age of AI

AI systems are increasingly being deployed in medical care to perform diagnostic tasks, interpret imaging, suggest treatments, and even handle patient communication, all in the name of productivity and error reduction. But over time, reliance on AI systems reduces day-to-day hands-on experience. As this study shows, even a week or two of not using a skill leads to some degree of loss. For physicians, this may well mean a decay of critical thinking and problem-solving skills.

Physicians, like the Performers in this study, may drastically underestimate the time and practice required to get back up to speed after stepping back from day-to-day critical thinking in favor of AI tools. As AI systems advance, there’s a call for clinicians to shift their learning from traditional diagnostic skills toward tech fluency—understanding algorithms, managing AI inputs/outputs, and interpreting model uncertainty. 

When AI fails on what techies call “edge cases” and physicians call “unusual presentations,” or a glitch takes the AI tools offline, there is real potential for safety and liability consequences, especially given our ego bias and flawed mental models of just how quickly our skills may be lost. Deskilling makes systems more brittle and less resilient as humans become less capable of acting without digital crutches.

Before we hand over more of our cognitive load to machines, we need to pause and ask: what are we trading away in the name of efficiency? The allure of AI is real—faster decisions, fewer errors, streamlined care—but the slow erosion of human expertise is just as real and far less visible. Skill loss doesn’t announce itself with fanfare; it creeps in while we’re comfortably coasting. And worse, we tend to overestimate our ability to bounce back. Before we automate another task or sideline another human judgment, let’s think hard about what resilience means—and who we want to make the call when the system goes dark.

 

[1] “The Mirror Tracing task required participants to trace a thick black line displayed on their screen using a computer mouse with reversed directional controls. For instance, moving the mouse downward causes the on-screen cursor to move upward, and vice versa.”

 

Source: (Inaccurate) Beliefs about Skill Decay, Social Science Research Network

Chuck Dinerstein, MD, MBA

Director of Medicine

Dr. Charles Dinerstein, M.D., MBA, FACS is Director of Medicine at the American Council on Science and Health. He has over 25 years of experience as a vascular surgeon.

