AI will not suddenly lead to an Alzheimer’s cure

SF Dot on Wood 2, Christian Rothenhagen (2023)

I work on funding science for the sake of medical progress, but only 10% of our science funding at Open Philanthropy is related to AI. The whole premise of Open Philanthropy is that we try to allocate money across causes to get as much bang for the buck as possible. That leads us to fund a lot of biomedical science, since science prevents deaths and disease down the line in a cost-effective manner, especially if you focus on high-stakes, neglected areas.

But only 10% of that science spending is related to AI. If AI could change everything, bringing new understanding and new cures for disease, why not ditch the 90% and bet it all on red?

AI is not enough, we need to fix other stuff too, some of that stuff is predictable-ish now, and time itself will still move at one year per year

The future is unknowable, but personally I am optimistic for AI driving further progress in drug design, modeling cell dynamics, writing code for bioinformatics, improving microscopy even further, improving sequencing even further, creating new diagnostics on a phone with no clinic visit/doctor needed, detecting fraud in publications/datasets, drafting and reviewing regulatory submissions for new drugs/devices, synthesising and generating new scientific ideas (including some target discovery), productively debating those ideas with other AI systems and humans, computational experiments for some of those ideas, and boosting the education of new (human!) scientists.

My guesses for bottlenecks on medical progress for the foreseeable future are the rows with red cells in the previous piece: safety data takes time, manufacturing and delivery cost money, research skews to rich world problems, public health programs aren’t always that ambitious, and people don’t always trust the medical system (or each other).

Those bottlenecks are why I do not believe that AI could end disease within 10 years on its own, nor that 50-100 years of medical progress could occur in the 5-10 years after powerful AI arrives due to AI progress alone (though rapid progress in fields that rely less on data and experiments, like computer science, philosophy, and mathematics, is easier to imagine). AI will help, but it is not enough; to achieve medical progress, resources need to go elsewhere too, starting now.

This post argues that governments, rich people, and AI companies should invest and donate money to science and public health as complements to the progress coming from AI. Going all in on AI is not the highest return-on-investment strategy if you care about human flourishing, even if AI might lead to rapid scientific advances. If you already agree with those conclusions, go to this companion piece instead, where you might find more to disagree with.

10 bottlenecks today, 1 example each

1. Understanding: there are now drugs that remove amyloid plaques, and they don’t stop Alzheimer’s progression much

In biotech and pharma, you often hear people bemoan: everyone’s going after the same targets! What does that complaint mean? It means scientists have demonstrated that a particular gene or protein is causally related to why a disease happens. PD-L1, TNF-α, PCSK9. Companies can then develop drugs that interfere with the protein, or reduce/increase the expression of the gene, and tilt your cells away from a diseased state and toward a healthier state.

Amyloid beta clearly has something to do with Alzheimer’s, since plaques build up in the brains of Alzheimer’s patients. But there are now drugs that remove amyloid plaques, and they don’t stop Alzheimer’s progression much. Something more complicated must be going on that we don’t understand yet (certainly that I don't understand yet), making drug development tougher.

Given it is hard to take measurements of the active brain, building an understanding of Alzheimer's is slow-going and you have to be inventive. When making recommendations to Good Ventures of what Alzheimer's research to fund, my colleagues Chris Somerville and Heather Youngs have tried to fund the discovery and unusual exploration of newer targets, to avoid well-funded dominant theories. I'm interested in other unusual sources of "data" for that reason, and I bet the AIs will be, too. That linked study was only possible because members of an extended family in Colombia who are genetically predisposed to Alzheimer's volunteered to participate in science. In another study in Oxford, veterans who sustained traumatic brain injuries donated their brains after death to allow scientists to observe how neurodegeneration gets affected by, e.g., bullet fragments physically blocking certain changes in your brain from occurring.

Such studies often do not get funded, because competition for funding is fierce and there's much to do. We need to keep public science going!

2. Wet lab models: mice don't develop chronic hepatitis B infections, so there's a lot of guesswork on which drug candidates to progress to the clinic

Hepatitis B is already treatable with various medicines, so it's absolutely worth knowing if you have a chronic infection. However, there is no full cure yet, and hep B kills 500,000+ people a year globally by leading to cirrhosis and liver cancer. Existing animal models aren't great at recapitulating human-like infection and disease. At Open Philanthropy, we’re supporting work to try to solve that, to allow for more iteration and downselection of drug candidates before the clinic.

3. Drug design: it took 6 years of tweaking an already-promising chemical structure to get the new "miracle" HIV drug lenacapavir

Shots of lenacapavir offer almost 100% protection against HIV for six months. The drug was approved for HIV prevention by the FDA this year.

In 2010, a scientist at Gilead spotted a poster at a conference describing Pfizer's molecule PF74, which targeted the capsid of HIV. The molecule had promise, but wasn't stable, so it couldn't be used as a drug – it had a 100% hepatic extraction rate, i.e. all of it gets eliminated in one pass through the liver. Pfizer no longer had commercial interest in the drug, so Gilead took it on. Starting with PF74, they began "adding an atom here, a molecular group there, until after six years they had a molecule that was 12,000 times more potent than PF74 and had a hepatic extraction of less than 1%." Designing stable, specific, potent molecules, and testing for those properties as you go in the lab, can be difficult – but can take you from a good idea that won't work in practice, to a miracle drug that prevents thousands of infections.

You can read more about the discovery story in this STAT profile, where that quote comes from. I'm actually understating things by saying "6 years of tweaking", because it took a lot of work at Pfizer to get to PF74, and the group at Gilead had been working on targeting the capsid before 2010 too, so some of what they'd learned probably informed later development. If you prefer podcasts, my friend Saloni Dattani and I went into depth on the discovery story of lenacapavir in the first episode of Hard Drugs.

4. Clinical efficacy data: cancer screening trials typically take 10-15 years to publish their effects on mortality

Siddhartha Mukherjee wrote a great piece in The New Yorker this June:

“‘Cancer screening trials typically take 10–15 years to publish on mortality, and when positive it typically takes a further 10–15 years before a screening programme with high coverage is rolled out nationally.’ ... by the time the final results arrive the technology may already be obsolete. What if, after thirty years, a trial yields a modestly positive signal – just as a newer, better test emerges?”

5. Clinical safety data: rates of brain bleeding and some maybe-related deaths were only observed years into development of two recent Alzheimer's drugs

Lecanemab and donanemab were approved in 2023 and 2024 for the treatment of early Alzheimer’s after almost a decade of development. Brain bleeding and some related deaths were observed in the phase 3 trials and post-approval, years after development began. (Both are still on the market, since some patients may be comfortable with their safety profile. We will learn more as more people take them.)

6. Trial participation: it took 6 years to statistically conclude that the only modern hepatitis C vaccine candidate that's been tested didn't work well

Only around 5% of Americans have taken part in a clinical trial. For hepatitis C, "it took six years to accumulate enough infections to reach a conclusion: the vaccine failed to provide adequate protection compared to the placebo."
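To see why event-driven trials take so long, here's a back-of-the-envelope sketch. Trials like this end when enough infections have accrued, not after a fixed calendar time, so low incidence stretches the timeline. All of the numbers below (enrollment, incidence, events needed) are illustrative assumptions, not figures from the actual hepatitis C trial.

```python
# Event-driven trials end when enough events (infections) have accrued.
# All numbers here are illustrative assumptions for the sketch.
def years_to_accrue(events_needed, participants, annual_incidence):
    """Expected years until the trial observes `events_needed` infections."""
    events_per_year = participants * annual_incidence
    return events_needed / events_per_year

# e.g. 500 participants at risk, 2% infected per year, 60 events needed:
years = years_to_accrue(events_needed=60, participants=500, annual_incidence=0.02)
print(round(years, 1))  # -> 6.0
```

Halve the incidence, or the enrollment, and the wait doubles – which is why low trial participation translates directly into slower answers.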

7. Manufacturing and delivery: new gene therapies and CAR-T are expensive – but even monoclonal antibodies, the big “winners” of the last 50 years of biotech, are still not cheap enough to produce for use as preventive malaria drugs!

Monoclonal antibodies have had 50 years of development, are the second most approved modality after small molecules, and most people would consider them commodities. Yet production methods still can’t make them for below $10 per gram, so they’re still not cheap enough for some “obvious” applications like preventing kids from getting malaria during a rainy season.
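A rough sketch of the cost arithmetic: only the ~$10/gram production floor comes from the text above; the dose size and the per-child budget a malaria program could afford are assumptions for illustration.

```python
# Rough cost arithmetic. Only COST_PER_GRAM comes from the text;
# the dose and target budget are assumptions for illustration.
COST_PER_GRAM = 10.0   # USD, approximate current floor for monoclonal production
DOSE_GRAMS = 0.3       # assumed: ~300 mg of antibody per child per rainy season
TARGET_COST = 1.0      # assumed: what a malaria prevention program could pay per child

cost_per_child = COST_PER_GRAM * DOSE_GRAMS
print(f"${cost_per_child:.2f} per child")  # -> $3.00 per child
print(cost_per_child <= TARGET_COST)       # -> False: still too expensive at scale
```

Even under generous assumptions, production cost alone overshoots the budget several-fold before adding delivery costs, which is the point of calling this a manufacturing bottleneck rather than a science one.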

8. Research predominance in rich countries: we don't know how many thousands of people have mycetoma, a debilitating tropical infection often leading to leg amputation, nor much about how infections spread

Research predominance in rich countries leads to skewed knowledge and product availability. If mycetoma, a disease where bacteria or fungi take over your foot, were common in the U.S., the NIH and pharma system would probably have developed broadly effective cures by now, and the CDC would have a plan for tracking cases to reduce spread. As it stands, we don’t even know how many people are affected, let alone have an adequate toolset to help them.

9. Public health: hepatitis C is curable, yet 10,000 people will die of it this year in the US, 300,000 globally

There are now many cures for hepatitis C, yet 10,000 people in the United States will die of cirrhosis or liver cancer caused by hep C this year, uncured. That’s despite the fact that Egypt has shown that screening people, and treating those who test positive, can drive infections down to near zero. Egypt is beating the US on this.

10. Societal trust: measles is a technologically "solved" issue, yet there are new outbreaks this year

At the time of writing, there’s an active measles outbreak in Canada. Canada had achieved measles elimination status in 1998. Measles is technologically “solved”, but not everyone trusts the solutions or their messengers.

Can’t AI work around those 10 bottlenecks?

Let’s visualise a specific case, Alzheimer's, then I’ll give my best guesses for that case.

Why is there no cure for Alzheimer's yet?

Perhaps it’s inherently unsolvable, but I’d place the blame on bottlenecks galore. The first alone, “understanding”, could be broken down into many different limitations for Alzheimer’s – e.g., it is difficult to take measurements of what’s going on in the active brain, and there has been a surprising amount of scientific fraud in the field over the last few decades.

What would it feel like if there were a cure for Alzheimer’s?

You, your spouse, your friend, your child, or probably some combination, notice – your forgetfulness is getting worse. You go to the doctor. They ask you and your family a bunch of questions about your medical history, and what’s been happening at home recently. They give you some puzzles to solve on paper. They do some blood tests to rule out vitamin B12 disorders and thyroid issues. You slide into an MRI machine, and your hippocampus looks smaller than it should. You receive a "probable Alzheimer's" diagnosis.

The doctor prescribes you one dose of Protollin up the nose, to activate your immune system against amyloid proteins in your brain, and a six-month course of Memorvio and Cathespian. You pick the bottle up at the pharmacy, and put it in the freezer. Each morning at breakfast (habit makes it easier), you squirt some mist up your nose. The mist contains copies of two strings of RNA in lipid nanoparticles. Some of those particles make it through your olfactory nerve into your brain; there, some make it into your brain cells. The Memorvio gets translated by your cells’ machinery into Brain-Derived Neurotrophic Factor (BDNF) proteins, and the Cathespian makes cathepsin D enzymes that lead to degradation of your tau tangles little by little each day. The RNA strings are designed to shut off translation if either protein is over-expressed. The spray smells a bit like ginger.

Over the course of the six months, you gradually feel your memory sharpen, and spend more of the day with your mojo back. You go back in for an MRI, and sure enough, your hippocampus hasn’t shrunk any further. A PET scan confirms you’ve gotten rid of the equivalent of 5 years' worth of tau buildup. The doctor says: let’s do annual checkups, but otherwise you’re good to go.

That first paragraph happens to people every day, and the second paragraph was science fiction. The cure described does not exist, though similar-ish ideas are being researched. Writing that second paragraph made me uncomfortable, for the act of typing made it feel real for a few seconds, and I visualised the people in my family who are scared of aging and how they’d feel reading it. The reason it’s uncomfortable to type as if it were true is that it isn’t true, yet it matters so much.

That type of cure is something that, if we do not go off the rails as a species, I do expect to be available in 100 years. We are lucky to be born now, not 100 years ago, and we work today so that our descendants can be luckier than us. The apartment I live in was built by people my age in 1910, and I did nothing to lay the brick foundation. Hopefully those of us alive now can invent an Alzheimer’s cure, and pass that gift on.

"Are you sure that’s what it would feel like if there were a cure for Alzheimer’s?"

No, that’s one possible example. Here are two more (also science fiction):

  • Everyone who wants one gets a blood test that predicts their chance of developing Alzheimer’s later in life. They can then make changes to their diet, exercise differently, and start preventive drugs like the ones described above before they have any symptoms. That’s not a cure, because disease never manifested – but if it worked, it could be better than a cure, since e.g. your hippocampus wouldn’t have shrunk yet compared to the example above. The bar for safety on preventive measures is higher, though; if Memorvio and Cathespian have a safety profile that risks brain haemorrhage in 1 in every 500 people, you might take them if dementia is closing in on you, but not risk it if your chance of dementia is low enough or far enough in the future.
  • You take a daily medicine that reduces cellular aging, aiming to live longer, and it drops your dementia risk too.
"What can I do to reduce my chances of Alzheimer's today?"

I'm not a doctor, or an Alzheimer's expert; I'm not confident in these bullets, and you should seek out better advice if you need it. But, from what I've read:

  • Exercise (presumably because it reduces inflammation and increases blood flow, but maybe other stuff’s going on too?)
  • Reducing high blood pressure
  • Cognitive engagement like reading, puzzles, probably good conversations, probably writing?
  • The shingles vaccine reduces new dementia diagnoses by 20% over 7 years in this study (not randomised, but a neat natural experiment)
  • Maybe GLP-1s like Ozempic work too – awaiting randomised results in September!

OK, so can AI work around the 10 bottlenecks in the case of Alzheimer’s?

Yes to some; check out this sister piece if you’re interested in those ones. No, for the foreseeable future, to some others:

🧠
Understanding. I’ve marked this one green because I’m hopeful for AI to help a lot. But it won’t be magic: we’ll still have to collect data in the physical world to learn about the disease; AI might just allow us to skip some steps or take new routes. It would be nice to know more about, for example, BDNF proteins, for the sake of Memorvio. If that means better cryo-EM imaging of proteins, AI should be able to help. If it means observing BDNF proteins while they're active in a brain... trickier. If it means sampling spinal fluid and AI spotting novel patterns in the data, that's more doable, but still not trivial. At Open Philanthropy we've supported trials that have failed to recruit enough patients in part because it's unpleasant to undergo spinal taps.
☠️
Clinical safety data takes a while to collect. Memorvio and Cathespian are taken for 6 months in the example above, or for years in the preventive use case. It would take a large patient population to statistically distinguish whether the drugs increase patients' chance of, say, strokes, because strokes also occur among people who aren't on the drugs, and small numbers are noisy. And if some side effects only show up after years of use, you'll have to wait years to observe them.
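To make "a large patient population" concrete, here's a standard sample-size sketch for a two-proportion comparison. The 1.0% baseline stroke rate and the hypothetical rise to 1.5% are assumptions for illustration, not data on any real drug.

```python
# Why safety signals need big trials: a normal-approximation sample-size
# calculation for detecting a rise in (say) stroke rate from 1.0% to 1.5%.
# The rates are illustrative assumptions, not data on any real drug.
from math import sqrt, ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Participants needed in each arm, two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, two-sided
    z_b = NormalDist().inv_cdf(power)           # power term
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_arm(0.010, 0.015))  # roughly 7,700+ people per arm for a 0.5pp rise
```

Over 15,000 participants total just to spot a half-percentage-point increase with 80% power – and that is before accounting for side effects that only emerge after years of use.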
🏭
Scaling up reliable manufacturing and delivery of new product classes, cheaply enough. It costs someone $1,000 to do the first MRI scan in the example above (hopefully not the patient directly), then another $1,000 for the confirmatory one at the end of 6 months, and $5,000 for the PET scan. Memorvio and Cathespian as described are gene therapies, which are not yet as cheap to produce as small molecule drugs – maybe they’d cost $1,000s to make enough for 6 months each, too, though it’s hard to say (and beyond my expertise) without getting more specific about each drug’s complexity. But, for example, I wrote: “The RNA strings are designed to shut off translation if either protein is over-expressed.” That will not be easy or cheap or commodified for a while.

An Alzheimer’s drug approved in 2021 was projected to break Medicare, and that was a drug that didn’t really work. Now imagine the drug does work, and it costs $30,000/year, and tens of millions of seniors want to go on it.
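The arithmetic is stark even with round numbers. The patient count and the roughly $1 trillion annual Medicare spending figure below are assumptions for illustration; the $30,000/year price is the hypothetical one above.

```python
# The scale problem in round numbers; patient count and the Medicare
# figure are assumptions for illustration.
patients = 10_000_000     # assumed: 10 million seniors on the drug
price_per_year = 30_000   # USD, the hypothetical price above
medicare_budget = 1.0e12  # USD, roughly total annual Medicare spending

annual_cost = patients * price_per_year
print(f"${annual_cost / 1e9:.0f}B/year")                   # -> $300B/year
print(f"{annual_cost / medicare_budget:.0%} of Medicare")  # -> 30% of Medicare
```

One working drug, at a plausible price, for a fraction of eligible patients, would consume a third of the program's entire budget – which is why manufacturing and delivery costs are a bottleneck in their own right.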

The problem is not just about high prices for patented drugs, it's about high underlying costs for manufacturing and delivery: someone has to pay the doctors’ salaries, whether that’s a private health insurer, a public health system (Medicare), or a patient’s family. The more monitoring by health professionals required, the more the treatment will cost to deliver. The more specialist the help needed, the more inaccessible and/or expensive, too.

These are not insurmountable problems, and it would be vastly better to have Memorvio and Cathespian than not, of course – the argument I'm making is about whether AI will lead to Alzheimer's cures you can take in 5 to 10 years. We won't be able to afford spending half of national income on new drugs without a lot of economic growth first.

🤑
Research predominance in rich countries leading to skewed knowledge and product availability. We know less about Alzheimer’s in India in part because there are fewer MRI machines per million people and less research funding. Does Alzheimer's interact differently with different comorbidities present in countries with higher smoking rates, different nutrition, and so on?
🚀
Public health funding/ambition/management. Doctors’ offices, pharmacies, insurers, Congress – today, all run by humans.
🥷
Societal trust. You’re making me sniff something that goes up and controls my brain??

Objections and responses

“Isn't this all guesswork? Why are you trying to predict the future when the future is unknowable?”

The future gets shaped by what we work on today. I don't know what the future will look like; I'm trying to make educated guesses based on what I see now, plan appropriately, and stay adaptable!

“You’re not being ambitious enough with Memorvio and Cathespian – I want something that I sniff once, rather than daily for six months, that protects me for life.”

Fair, me too, but most of the same bottlenecks will apply in that case. You'll still want to know if it's safe and to track patients for longer than just that day; it will still cost a lot to make and monitor; etc.

“AI will cure all disease, not just Alzheimer's, by preventing aging.”

I chose to focus on one disease for this post, to make it more concrete. By definition it's harder to cure all diseases than one disease (curing all diseases involves at least curing Alzheimer's!), so the arguments here should also apply to that harder task.

"Aging" is harder to define medically than Alzheimer's, so would have been harder to write about. But the same bottlenecks would apply – and note that prevention generally has a higher safety bar to cross than treatment, because healthy people might not want to take as much risk. That's why vaccine safety is such a hot topic.

Then, as a side note, I'm not totally sure you can replace brain cells (neurons, say) without changing who you are. So, focusing on cellular aging in organs where cells are used to getting replaced more often (the liver, blood, and skin) may be "easier" than in the brain.

“Intelligence is upstream of every other problem you might want to solve, so first we should solve intelligence, then medicine will follow.”

This kind of thinking is a little too zoomed out for my tastes. A less extreme version, that intelligence is helpful for solving problems, is true, but does not have as singular implications. The more extreme version is too simple, and applying simple theories to our messy world tends to go wrong. This one in particular is, shall we say, not a terribly green flag when I see it pop up. So I suggest – descriptively and normatively – more pluralism.

Then hey, if you're going for one-thing-explains-all-other-things – energy is a better pick than intelligence anyway. It's better defined, and thermodynamics runs the universe. Go work on fusion!

“I'm concerned we may lose control of AI. Are you sure we should be making the AI investments you advocate? Are we playing with fire? Doesn't powerful molecular design mean bad actors (human or AI) can design harmful molecules too?”

In my experience, scientists and bioengineers making new tools often do not think about how those tools could be used by bad actors, because they're focused on solving problems. For example, when designing therapeutics, you want to make sure the patient's immune system doesn't reject the drugs, thinking they're invaders. Several scientists I've spoken to have been interested in making tools to evade the immune system for this reason, without noticing that evading the immune system is something you might want to be careful making general tools for. This makes me wonder, of course, what parallel blindspots-of-emphasis I have in things I work on.

The conversations about where these boundaries lie should be nuanced, in my opinion, not blunt. Biological research, and indeed computational biology, has been more helpful than harmful to the human species so far, so we can't have a general aversion to it. The same goes for machine learning – it's been great so far, and some AI systems have much higher risk of going off the rails than others. We can't have a general rule against machine learning, though we should watch out for dangerous ways AI systems can be designed.

I have included AI investments in the table where I think the benefit is clear, and the risk is not clear enough to stay away. Those are judgment calls that scientists, biosafety experts, and the public need to make together. I agree it is important not to be credulous about all AI systems, and to take seriously the risks. Part of why I've written this piece is to inject more nuance into the conversation about under what circumstances some of the hoped-for benefits from AI would, in fact, arise.

“I think AI may get a clear understanding of Alzheimer’s with no further experiments (or only a few easy experiments), by reasoning from 1) the laws of physics + 2) the human genome + 3) published biological data and papers and the internet. Then sure, the cures may not make it through the FDA for years, but there will be blueprints for the cures that an AI can give you. That counts as curing Alzheimer’s, and is just a problem with bureaucracy.”

Firstly, I personally bet you'll need more data and controlled experiments – but I don't know.

Secondly, the next steps from such a blueprint are not simply bureaucratic. It's not a blueprint in the sense of a chemical equation; curing all cases of Alzheimer's is unlikely to be possible with one small molecule. The blueprint may be for how to make a factory that makes a complicated type of drug the pharmaceutical industry hasn't made before, and that they'd mess up for a while until getting it right. In that world, humans still would not yet collectively have the 1) knowledge or 2) capability to cure Alzheimer’s. So it does not count as having an Alzheimer’s cure. If you get Alzheimer’s, you will not be cured with that blueprint.

Then even with a drug candidate in hand, it will take years to work through “the system” not simply because the system is bad or outdated. The system is not the FDA; if the FDA disappeared it would still take time. The safety data takes time to collect and could throw up surprises in the real world as people and their doctors use a physical instantiation of the drug outlined in the blueprint; manufacturing something at all and manufacturing it reliably are two different things; monitoring the correct/safe way to inject the product at the right personalised dose may take specialist doctors in short supply; etc.

“If an AI produces a precise therapeutic design but bottlenecks kick in afterwards to slow down the availability of the therapeutic, that is still a meaningful acceleration of the rate‑limiting step (discovery).”

Rate-limiting over what time period? Decades matter! And how sure are you discovery is the rate-limiting step? We have the technology to surgically remove blood clots from the brain when someone is having a stroke, and we are rate-limited on specialist surgeons, so most people with strokes don't get the surgery.

“Maybe the FDA won’t approve new drugs before there’s safety data, but that’s because the FDA is too risk averse. If a reliable enough AI system predicts that a drug is safe, I’d take it!”

I cannot rule out that superhuman AI systems would be able to predict safety almost-perfectly, without more data and controlled experiments (though I'd be surprised).

That said, I don’t trust anything at the 99 out of 100 level that could kill me, and drugs that are powerful enough to alter your brain to end Alzheimer’s could clearly alter your brain negatively too (e.g., the current available Alzheimer’s drugs give 10-20% of patients microbleeds). Then aside from the chemical prediction, black market drugs also introduce practical risks of incorrect manufacturing, and I’m glad someone (currently, in the US, the FDA) does manufacturing site visits to assure quality.

“You're assuming clinical trials work as they do today, but if we have much better candidates entering the pipeline then the system will adapt to a new equilibrium.”

I'm not assuming that. I'm hopeful the clinical trial system will adapt! Some drugs, like Gleevec, turn out to be so good in phase 2 trials that drug regulators just approve the drug and ask for a phase 3 to happen later, so patients can benefit sooner. I'm hopeful for more cases like that. Safety is harder to speed through, though.

“Public trust may go up once AIs design new cures, not down.”

I agree that may happen. I put question marks in that row of the table because I have no idea where it will all net out.

“Your examples of AI progress are too focused on narrow applications of machine learning to specific domains like protein folding. I think the breakthroughs will come from AGI and ASI, systems that are generally capable across domains.”

I’m open to that being true too – and expect that would help a lot with the first four bottlenecks in the list (understanding, replacing wet lab models with better computational models, designing drugs, and reducing the length of efficacy trials since better drugs are statistically easier to prove). I think the next six are likely still to apply, though.

“You’re just visualising what you know, Jacob. If something is going to change drastically, you can’t write an essay that rules out those changes. You don’t even know what to rule out! Thus, maybe those bottlenecks will disappear for reasons you can’t visualise.”

I sympathise with this over the long run, and it's familiar in the history of science. E.g. Edward Jenner could not have predicted subunit vaccines, because he made a smallpox vaccine before the germ theory of disease was understood and accepted, let alone the understanding of which subunits of a pathogen could be used on their own to prompt a safer, still-effective immune response. (He had no way to know what an antigen was. Or indeed what any protein was. Or indeed an antibody.)

But 5–10 years is not that long, even with vastly more powerful (software-based) AI. It takes time to gather data. It takes time to change human beings’ opinions.

“AI will be reasoning using concepts you don’t understand, so you can’t refute it with concepts you do understand.”

Maybe! That’s getting pretty into magic territory, though. Our biological concepts are not perfect, but I bet “DNA”, “proteins”, and “neurons” will survive in some form in the future. I don’t think future science is going to be entirely inexplicable to humans. (And, if it were, there would be limits to its democratic desirability.)

And even so, you need to get through the 10th bottleneck: trust. Superhuman AI would have to bring some of us along, and that will probably require some explanation using concepts we understand.

“Fine, some of the concepts will be ones you can understand – and specifically the main concept is nanobots, I’m talking about nanobots, tiny machines that have their own sensors and swim around your blood stream and your brain fixing stuff when they see stuff that’s wrong.”

OK. You try them first! I’ll wait for the safety data.

More philosophically, “nanobots” is a concept pitched at a level of abstraction where you might just be recreating the functions of cells, or of a whole human body, inside your body. Something will have to give those nanobots energy, something will have to give them ways of receiving and sending signals, of helping protect you from invaders (immune system bots)… your body is about to get full up, and in order not to double your weight, those bots may have to start replacing some of your cells… maybe we could make them out of carbon and hydrogen and…

Regardless, nanobots would not be cheap to make, nor cheap to administer, in the next 5–10 years, per bottleneck #7. Nanobots do not escape bottlenecks #8 or #9, really, either.

“I don't think bottleneck #7, manufacturing and delivery, will be a bottleneck. If they can make a railway station in China in under a day by having lots of people work on it, why can't you make a biomanufacturing centre in a day (or, fine, a week) if there are thousands of people or, you guessed it, thousands of humanoid robots loaded up with GPT-6 helping out?”

Here I plead the 5th. Or the 10th. By which I mean I refer you to my 5 to 10 year timeframe.

Biomanufacturing sites are difficult to build for reasons unrelated to "intelligence"; you need reliability, no contamination, etc. And delivery and monitoring still involve human doctors, flesh and bone, whom patients will talk to in ways they don't talk to robots.

(Here's a video of the train station construction.)

“I don't think bottleneck #8, research predominance in rich countries, is separate from bottleneck #7. It's just downstream of the fact that we haven't invested enough thinking and capital in driving down the costs of manufacturing far enough that people in lower income countries can get the same drugs.”

Small molecule drugs are already commoditised to the extent that it costs pennies to make enough active ingredient to cure someone of malaria. Yet 600,000 people still die of malaria each year. Cheap manufacturing alone has not closed that gap, which is why I treat #8 as its own bottleneck.

“Once AIs can replace human-level researchers at tasks then research progress on AI would increase even more rapidly than before, leading to a software-only explosion where you end up with the equivalent of >> 100 million ‘human-minds’ worth of compute happening. At that point replacing human physical labor with machines becomes economical (the larger barrier on how expensive robots are is the software) and within a few years you have an industrial takeoff with <2 year doubling times. At some point in this process it seems like human diseases stop being that difficult to cure, since the amount of thinking time + data collection thrown at the problem becomes several OOMs larger than in the past, and physical challenges to manufacture become moot because robots can do the things for you.”

1) There are a lot of stacked assumptions there, within an internally coherent logic that sounds simpler than it would turn out to be in reality. See the answers above, in particular to the "intelligence is upstream" question.

2) I still don’t think this vision gets around bottlenecks #5 through #10, without complementary non-AI/non-robotic investments. Perhaps the vision involves AI systems deciding to make more of those investments because we humans were too slow to... but a lot of those investments will be the non-AI ones I write about in this post!

3) In that industrially explosive world, for a while at least, Alzheimer’s would not make the cut on the list of concerns.

I am writing about the next 5 to 10 years only. I can revisit this one later if we see more evidence for it (interest rates spiking to fuel investment, rapid improvement in robotics, the cost of producing a new drug modality dropping much faster than usual, etc.).

What do you recommend we do, then, if we want to drive medical progress?

That depends on who “we” is; I have a limited perspective based on my experience and life so far. (Hello readers from around the world! I hope some of these ideas are useful to you wherever you are. First up, check whether your country contributes to the Global Fund or Gavi.) I’m a dual US-UK citizen and live in the US, so tilting towards what I know better:

  • Continue trying to apply deep learning to parts of medical research that could be improved incrementally or drastically by AI: in particular modeling cells virtually to high fidelity, modeling molecular dynamics, drug discovery (there are many chronically underfunded biotech startups with talented founding teams), and toxicology + off-target effects prediction before finding out the hard way
  • Reform the NIH and NSF and increase total public science funding, through them and other mechanisms. That includes global research funding, e.g. NIH subawards and the Fogarty Center
  • Reform clinical trials. Reform the FDA, and some other policy stuff in those links too that’s hard to summarise in a bullet
  • Fix market failures in part by being more generous as a society
    • There would be no AlphaFold without $10 billion+ of public funding over 50 years, supporting PhD students painstakingly figuring out 100,000+ protein structures and adding them to the Protein Data Bank
    • Other machine learning models would get nowhere if it weren't for the billions of dollars spent on the Human Genome Project, that paved the way for it now to cost under $1000 to sequence a genome
    • There is no point in a leading American antiviral company developing a miracle drug that prevents HIV unless people who need it can access it
  • That means rich people too, giving away more money to good things, burdening their kids less
  • Public and private funders should prioritise funding by health impact, since some areas of science are 10X+ less funded than others for no good reason. Swing for the fences.

Why stop this analysis at 5-10 years? Will AI skirt around the bottlenecks after that?

It is hard enough to speculate 5-10 years out. Maybe I’ll be around to write another post later. If it’s 2035 when you read this, and I haven’t followed up, I hereby pass the torch to you.


Congratulations! You’ve reached a successful ending. This time, try making the opposite choices to see what happens.

I have benefited from reading many people's work on these topics over the years (including bloggers!). This piece builds on other people's thinking, and is intended to state an argument as I see it, rather than to be "original". Most of all, I would like to thank the tremendously generative intellectual environment of Open Philanthropy, 2018-today, for driving much of my thinking on AI and medicine. I have debated these topics over the years with many people at Open Phil, including Matt Clancy, Ajeya Cotra, Tom Davidson, Chris Somerville, Heather Youngs, Joe Carlsmith, Alexander Berger, Emily Oehlsen, and others. There is a frustrating, refreshing, and residual disagreement on several of these issues between people I respect greatly, which has been indispensable fodder for my own thinking and leaves me confident I've gotten something wrong in this piece that I don't yet understand. It is safe to say that the people in that list do not all agree with my arguments here, and any mistakes are my own.

I have benefited further from comments on this piece by Magnus Bauer, Damon Binder, and Nan Ransohoff. Thank you!