When headlines proclaimed that artificial intelligence (AI) could be used to create a “death calculator” that predicts the day you will die, it sounded like something out of a terrifying science-fiction story. The reaction showed how readily people believe that AI has magical fortune-telling powers.
The reality was far less dramatic. The paper that spawned the fracas, in the journal Nature Computational Science, did involve using AI to predict death, but not very precisely. Using economic and health data on thousands of people in Denmark, an AI-based system predicted with about 78 percent accuracy which people would die within the next four years.
The algorithms used to create actuarial tables already do this kind of statistical forecasting, but the new system, called life2vec, is more accurate and works in a completely different way. The lead author on the paper, University of Copenhagen complexity science professor Sune Lehmann, said life2vec predicts life events much the way ChatGPT predicts words.
The findings matter not because they might create a scarily accurate “death calculator,” but because of how the forecasts could be used. Such algorithms could be used for ill — to discriminate or deny people healthcare or insurance. Or they could be used for good, by highlighting factors that affect lifespan and helping people live longer. Or they might improve lifespan calculations, which some people use to plan their retirements.
It was “wild to see how the results were misrepresented,” Lehmann said. “People said this AI can predict the second you will die with incredible accuracy.”
This is because people do not yet understand the technology, and as science-fiction legend Arthur C. Clarke famously observed, any sufficiently advanced technology is indistinguishable from magic.
At the same time, hospitals are incorporating AI to do all sorts of jobs. Will doctors and hospital administrators put too much faith in the decisions or forecasts of AI because it is fast and sounds confident? Can the medical system use AI responsibly if people have unrealistic or magical ideas about what it can do?
Lehmann said his work in this area is aimed at testing the powers of prediction for all kinds of life events, including job changes, income changes and moving.
He is looking for a more coherent scientific understanding of the way algorithms can predict complex phenomena, he said.
An algorithm’s workings are often treated as a mysterious black box. The researchers did not choose death out of any morbid preoccupation, but because it is something that is precisely measured and recorded.
In groups of young people, the question is too easy — you would be mostly correct if you predicted that nobody dies over the next four years. Predicting death within one year is not too hard — you would just have to know who was sickest. The further out you go, the harder the future is to predict, until you get far enough ahead that almost everyone will have died.
At this stage, then, AI is not likely to surprise anyone about their life expectancy. If you are healthy and not extremely old, it would predict you would live more than four years. It cannot foresee that you would get in a freak accident or predict whether you will die in 10, 15 or 20 years, said Andrew Beam, a professor of biomedical informatics at Harvard Medical School.
Beam said there is a risk that AI could prompt humans to be misled by authority bias: “If you think someone is smarter than you or has access to information that you don’t have, there’s a real tendency to turn off critical thinking and believe anything that comes out — whether it’s a person or an AI.”
ChatGPT is good at synthesizing information, but it is not very selective and could fold in bad studies and flawed information. “So, if you’re in an area where the science is unsettled or the human knowledge is just not there yet, ChatGPT is going to be just as bad if not worse than a person,” he said.
Predicting a healthy person’s long-off death is just science fiction, he said: “We need to be careful when we’re asking it to do things that are still clearly sci-fi.”
Sometimes fiction can provide a reality check by reminding us that our actions influence the future — even in cases of life and death. Consider what happens in the classic Charles Dickens story A Christmas Carol. The Ghost of Christmas Future gives Ebenezer Scrooge a terrifying preview of loneliness, grief and death. Scrooge then asks a smart, critical question: “Are these the shadows of the things that Will be, or are they shadows of things that May be, only?”
If the reporters trying to scare people with life2vec had asked that question, they would have gotten the same answer Scrooge did from the ghost: Of course our actions can change the future. A forecast does not seal our fate in stone.
This new system reinforces what other studies have shown — that income and job type can affect the length of your life. Being poor and having a job where others have power over you is correlated with premature death. It is something Dickens recognized long ago. Maybe AI can turn this general observation into poignant real-life scenarios that could motivate modern-day Scrooges to address the inequalities that shorten so many lives.
F.D. Flam is a Bloomberg Opinion columnist covering science. She is host of the “Follow the Science” podcast. This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.