Friday, August 15, 2025

Coffee Break: AI Deskills Healthcare, mRNA on the Block, Three Hominins Living Together, the Workweek, and a College Essay Worth Reading

Part the First: AI and Deskilling in Healthcare.  Yes, it does happen as described in the news article “As AI spreads through health care, is the technology degrading providers’ skills?” (a new study suggests that, after having a specialized tool taken away, clinicians were less proficient at colonoscopies):

The AI colonoscopy tool rolled out across four health centers. As endoscopists snaked a camera through patients’ large intestines, the algorithm would draw a square around precancerous polyps known as adenomas. The more adenomas detected and removed, the less likely the patient would go on to develop colon cancer.

Researchers were interested in whether the AI could improve those adenoma detection rates. So they designed a trial: Half the time, endoscopists got to use the algorithm; the other half, they were on their own. But the researchers also took a look at a different question: Like students who try to write an essay independently after using ChatGPT one too many times, how well might doctors detect polyps without AI after they had gotten used to its help?

Not great. In the three months before the endoscopists started using the AI helper, they were finding adenomas in 28% of colonoscopies. After they had been using the AI for three months, the researchers found their unassisted adenoma detection rate fell significantly — to 22%. Researchers called their finding the first documentation of a potential “deskilling” effect from clinical AI.

The paper is from The Lancet Gastroenterology & Hepatology for those who have library access.  Are the data convincing?  Yes.  The authors’ interpretation of their work is succinct:

Continuous exposure to AI might reduce the ADR of standard non-AI assisted colonoscopy, suggesting a negative effect on endoscopist behavior.
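For readers who want a sense of how big a drop from 28% to 22% is statistically, here is a back-of-envelope two-proportion z-test.  The arm sizes below are hypothetical, not the study’s actual counts (those are in the Lancet paper); the point is only that at a few hundred colonoscopies per arm, a six-point drop in detection rate is very unlikely to be chance.

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 28% of 800 pre-AI colonoscopies vs 22% of 800 post-exposure
z, p = two_proportion_z(224, 800, 176, 800)
```

With these illustrative numbers the difference clears the conventional 0.05 threshold comfortably, which is consistent with the authors reporting the drop as significant.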

This is not surprising in any way, shape, or form, but the investigators did not expect their result.  Static image analysis using AI trained on hundreds of thousands of images is very good at identifying problematic lesions, various skin cancers, for example.  But treatment still requires a confirmatory biopsy.  During a colonoscopy the images are anything but static.  And one wants an experienced gastroenterologist using the scope to identify lesions, snip and retain them for histology, and cauterize the wound.  In my world I have noticed similar deskilling as routine laboratory tasks have become increasingly automated.  When a scientist is removed from the data by an extra layer, no matter how routine, results are missed.  In the clinic:

Medicine’s artificial intelligence boom is predicated on the idea that doctors can be made better, faster, and more accurate with algorithmic support. But “we’re taking a big gamble right now,” said Adam Rodman, a clinical reasoning researcher and internist at Beth Israel Deaconess Medical Center in Boston. “We’re going full speed ahead without fully understanding the cognitive effects on humans.”

Perhaps this will be a nothing burger in the end.  But my Spidey sense, which is based on more than forty years in the laboratory, tingles otherwise.  And more importantly:

If exposure to AI does prove to degrade physicians’ skills, trainee endoscopists could be the most at risk.  Consider a gastroenterology fellow who trained for three years in a program that uses AI polyp detection, and then joins a practice that doesn’t have the technology.  “If this is the level of deskilling that happens when somebody who has been trained in the old way uses it for three months, what happens when somebody trains with this from the very beginning?” asked Rodman.  “Do they ever develop those skills?” (No)

If clinical AI “will definitely lead to deskilling,” the first pressing question for clinicians and health systems deploying AI tools is to choose which skills they’re comfortable losing, and which are essential to keep for patient safety.

Which skills are clinicians comfortable losing?  That question sits at the head of the table in the Problem-Based Learning tutorial room, where the tutor meets with eight medical students.  It is also the top of a very long and very slippery slope straight into the abyss of ignorance.

Part the Second.  mRNA Vaccines on the Block.  Yes, I know this will be a shock to everyone, but the evidence is not generally in RFKJr’s favor, as outlined in Jake Scott’s article “Kennedy’s case against mRNA vaccines collapses under his own evidence.”  Dr. Scott is an infectious disease physician with an adjunct faculty appointment at Stanford University School of Medicine.  He does not argue from authority, as certain others from that august institution are wont to do:

When Health and Human Services Secretary Robert F. Kennedy Jr. terminated $500 million in federal funding for mRNA vaccine research last week, claiming he had “reviewed the science,” his press release linked to a 181-page document as justification.

I reviewed Kennedy’s “evidence.” It doesn’t support ending mRNA vaccine development. It makes the case for expanding it.

The document isn’t a government analysis or systematic review. It’s a bibliography assembled by outside authors that, according to its own title page, “originated with contributions to TOXIC SHOT: Facing the Dangers of the COVID ‘Vaccines’” with a foreword by Sen. Ron Johnson (R-Wisc.). The lead compiler is a dentist, not an immunologist, virologist, or vaccine expert.

NIH Director Jay Bhattacharya has suggested the funding was terminated due to lack of public trust in mRNA vaccines. But misrepresenting evidence to justify policy decisions is precisely what erodes public trust. If we want to restore confidence in public health, we need to start by accurately representing what the science actually says.

Most of the papers listed are laboratory studies using cultured cells that express the S-protein of SARS-CoV-2.  Viral S-protein binds to the surface of target cells and allows the virus to enter and begin replication and spread.  It is not surprising that S-protein makes cultured cells sick.  This kind of work is essential to understand the function of the S-protein, but it has little relevance for the mechanics of viral infection in the host animal, i.e., you and me.

Most damning is what’s absent. The compilation ignores the Danish nationwide study of approximately 1 million JN.1 booster recipients that found no increased risk for 29 specified conditions. It omits the Global Vaccine Data Network analysis of 99 million vaccinated across multiple countries finding no new or hidden safety signals. It excludes CDC data showing the unvaccinated had a 53-fold higher risk of death during Delta, demonstrating the critical importance of mRNA vaccination. The Commonwealth Fund estimates Covid vaccines prevented approximately 3.2 million U.S. deaths through 2022.

Based on my regular but certainly not exhaustive reading of the COVID-19 literature since the beginning of the pandemic, this is all true.  One thing to keep in mind is that since late 2019 nearly 479,000 papers have been “published” with “Covid” somewhere in them.  No one has read even a significant fraction of this literature.  As a comparison, a query for “HIV AIDS” retrieves about 188,000 papers published since 1982.  Something queer is going on here, both in the science of COVID-19 and in the corrupt and corrupting business of scientific publication in the open-access, pay-to-publish-virtually-anything world.

The problem runs deeper, though.  There can be no doubt the COVID-19 vaccines have prevented severe disease in many people and have saved many lives, millions of them.  As a colleague backstage has asked, “How many people died because we did not treat COVID-19 as a lethal respiratory virus that should have been fought with non-pharmaceutical interventions such as air filtration, better ventilation, and effective masks?”

But in my view (your mileage certainly may vary), two things happened at the beginning of the pandemic that put us on the wrong path.  Pardon me for repeating myself.  The first is that scientists who should have known better went all-in on vaccines against a coronavirus, even though it has been known since shortly after the identification of avian Infectious Bronchitis Virus (IBV, probably the first coronavirus identified, in the 1930s) that vaccines do not work well against coronaviruses.  The corollary is that experimental mRNA vaccines were used for a problem they were not likely to solve: the production of durable immunity to a novel and lethal human coronavirus.  Another thing to keep in mind is that nothing in the technical production of mRNA vaccines is experimental.  The techniques have been developed over the past fifty years and are very robust.  But so far, no other mRNA vaccine (Zika was the first attempt, to my limited knowledge) has worked as people have come to expect of vaccines, which is the prevention of serious disease and its transmission.

How the biomedical scientific community can back out of this cul de sac remains a daunting puzzle, while RFKJr and his minions use politics as well as anyone ever has to their advantage.  Given that mRNA-based cancer “vaccines” have shown great promise, throwing out mRNA therapeutics in general is stupid beyond measure.  But subtle and supple reasoning is not our strong suit these days.

Part the Third.  Three Hominins Lived in the Same Place – Did They Live There at the Same Time?  The first World Book Encyclopedia Yearbook we received in my house when I was about ten years old had a long article about the work of Louis S. B. Leakey on the evolutionary lineage that led to us.  It was fascinating then and remains so now.  “The Riddle of Coexistence,” published in Science a few weeks ago, indicates that three members of our evolutionary bush may have lived in the same valley in South Africa at the same time, about two million years ago:

One morning in April 2014, José Braga squatted at the bottom of an open pit, cleaning a wall of red sediments with a trowel. Long ago, these rocks had formed the floor of a cave, and in 1938 they had yielded a spectacular skull of an early member of the human family, or hominin. But Braga had been scouring the sediments without luck for 12 years. He was considering throwing in his trowel and going off to search for fossils in Mongolia instead.

Then, a small, bright object fell from the wall above, bounced off his thigh, and landed in the dirt beside him. “I couldn’t believe what I was seeing: a well-preserved hominin tooth!” recalls Braga, a paleoanthropologist at the University of Toulouse.

A few months later, Braga’s team excavated a piece of a baby’s upper jaw from the wall of the pit. The fallen molar fit perfectly into the jaw. Together, the tooth and jaw solidified the specimen’s identity as an early member of our own genus, Homo.

The very next year, Braga’s team found another baby’s jawbone. The two infants’ remains had lain less than 30 centimeters apart for about 2 million years, but the new one was from a very different species: a baby Paranthropus, a short, robust hominin with massive molars and jaws. And an as-yet-unpublished skull found in 2019, just a few meters away, in sediments likely to be a bit older, is different again: It may belong to a third hominin genus, Australopithecus, a group of upright-walking apes with brains slightly larger than those of chimps.

The fossils’ close proximity, in the same cave or within a short walk, suggests these creatures might have met, or at least been aware of one another. “They were both on this landscape for such an extensive period of time, there’s no way they didn’t interact with each other,” says paleoanthropologist Stephanie Edwards Baker of the University of Johannesburg (UJ). She has found Paranthropus and early Homo in the same layers at nearby Drimolen cave with geochronologist Andy Herries of La Trobe University. In 2020, they proposed in Science that the region was a meeting ground for both genera as well as Australopithecus. 

Did these creatures really live together at Kromdraai?  Possibly.  And this is very good science that should be supported for as long as paleontologists are willing to shave red dirt very carefully with a trowel.  And if the National Science Foundation is not funding some of this work by an international consortium of scientists, we should be ashamed of ourselves.

Part the Fourth.  Can the Four-Day Workweek Work?  Yes, according to “Biggest trial of four-day work week finds workers are happier and feel just as productive.”  From July but still relevant, the conclusion is that “Compressing five days of work into four can create stress, but the benefits outweigh the downsides.”

Moving to a four-day work week without losing pay leaves employees happier, healthier and higher-performing, according to the largest study of such an intervention so far, encompassing six countries. The research showed that a six-month trial of working four days a week reduced burnout, increased job satisfaction and improved mental and physical health.

To see whether shorter weeks might be the antidote for poor morale, researchers launched a study of 2,896 individuals at 141 companies in Australia, New Zealand, the United States, Canada, Ireland and the United Kingdom.

Before making the shift to reduced hours, each company that opted into the overhaul was given roughly eight weeks to restructure its workflow to maintain productivity at 80% of previous workforce hours, purging time-wasting activities such as unnecessary meetings. Two weeks before the trial started, each employee answered a series of questions to evaluate their well-being, including, “Does your work frustrate you?” and “How would you rate your mental health?” After six months on the new schedule, they revisited the same questions.

Overall, workers felt more satisfied with their job performance and reported better mental health after six months of a shortened work week than before it.

Would this ever be applicable to all jobs?  No.  To all careers?  No, but the number is likely to be higher than expected.  Will management ever “believe” this?  Don’t make us laugh.  But still, this has been floating around since the Personnel Department became the Department of Human Resources.  Some of us are old enough to remember the former.  It was a better time.  But regarding management:

A common criticism of the four-day work week is that employees can’t produce the same output in four days as in five. The study didn’t analyse company-wide productivity, but it offers an explanation for how workers can be more efficient over fewer hours. “When people are more well rested, they make fewer mistakes and work more intensely,” says Pedro Gomes, an economist at Birkbeck University of London. But Gomes would like to see more analysis of the impacts on productivity.

Fan notes that more than 90% of companies decided to keep the four-day work week after the trial, indicating that they weren’t worried about a drop in profits.

The authors also looked at whether the positive impacts of shorter work weeks would wane once the system lost its novelty. They collected data 12 months after the start of the trial and found that well-being stayed high.

Toward the end of a long working life, it is clear that most of the support functions at each of my employers, public and private, academic and other, could be handled in a 4-day workweek without much trouble.  And those of us who spend our time in the laboratories or offices doing and thinking about the next experiments would get two Saturdays per week!  Win, win.

Part the Fifth.  On The True Meaning of Education.  From young Kinley Bowers of Grove City College in her essay “A World Written: A Response to Wendell Berry’s ‘In Defense of Literacy.’”  In my estimation, worth your time:

Since graduating high school, I have told people that I specialize in impracticality. I love to read, write, sketch, sculpt, play piano, act, and birdwatch—all occupations thirsty for time and tending to flatten rather than fill my wallet.  I suspect that some might view me as a spritely ignoramus, dancing through cumulous visions, and fated to someday be cracked upside the head with the 9-iron of reality.  But Wendell Berry’s essay “In Defense of Literacy” offers a fresh angle on the common use of the term “practical,” defining it as “whatever will most predictably and most quickly make a profit.”  He then proceeds to assess two staples of practicality: predictability and speed.  These dual malefactors threaten the integrity of our language which impairs our literature and ultimately debilitates enriched lives.

And a bit later:

In a recent address at Grove City College, Andrew Peterson said that he used to take walks in the woods, but now he walks beneath poplars and oaks, sycamores and redbuds. Learning the vocabulary of a thing draws it into a realm of awareness and conversation. This endeavor also demonstrates care for the thing itself. Like Peterson, I used to watch birds on the feeder.  Now I watch nuthatches and woodpeckers, orioles and chickadees. I hear the songs of American robins, Eastern peewees, and Carolina wrens instead of noise from a great generalized lump called “birds.”

My question is this: Why do such good attitudes and essays seem to come from small colleges, mostly of the conservative variety?

