How Artificial Intelligence Is Changing the Way We Find Medicines
Faster drug discovery, but with risks we can’t ignore
Have you ever wondered why it takes so long for new medicine to reach people? Traditionally, bringing one medicine from an idea to pharmacy shelves can take 10–15 years and cost $2B–$3B (Nature Medicine 2025). And even then, about 90% of drugs that start clinical trials never make it through to approval.
Now AI is stepping into labs and reshaping that entire structure. Scientists are starting to use it to discover medicines faster, cheaper, and sometimes even more accurately.
In this blog, I explore and discuss how AI is changing the entire process of drug discovery, what that means for science and healthcare, and when this technology crosses the line from helping humans to harming them.
The Difficulties of Discovering New Medicine
The closest analogy I can think of for drug discovery is designing a key for a lock. First, scientists have to analyze the “lock,” which in this case is the disease target, often a specific gene or protein. Then they begin the discovery process: trying to create a key that fits perfectly. Any extra bump or dent and the key won’t work.
Similarly, the chemical compound must fit the target precisely and trigger the right biological response for the gene or the disease. Researchers often test thousands of compounds, and most of them fail along the way. Before a drug can reach humans, it has to pass a series of tests, including preclinical tests in cells and animals. After that, it must be tested over multiple phases of clinical trials in humans. Each stage can take years and cost millions.
And the worst part? If the drug fails late in the process, everything is lost: the money, the time, and the effort. This is why huge pharma companies like Pfizer, AstraZeneca, and Novartis invest billions annually in R&D and still face low success rates. The traditional approach isn’t simply a matter of hoping for the best, but it does rely on a substantial amount of trial and error and continuous learning.
AI’s Power in Drug Discovery
AI is not magic. It is a set of algorithms that learn patterns from data and infer outcomes based on what they learn. As a result, AI can analyze massive datasets, ranging from chemical structures to disease pathways, predict outcomes, and find patterns that humans may miss.
According to a Microsoft Industries report, AI could cut early-stage discovery timelines by about 70%, and real-world results are starting to bear that estimate out.
The biotech firm Insilico Medicine reported that it used AI to design a drug for idiopathic pulmonary fibrosis, a deadly lung disease, reaching Phase 1 clinical trials in about 2.5 years, roughly half the usual time. In the process, the company saved not just time and effort but also a great deal of money.
Seeing examples like this across the world, I truly believe that AI has the potential to reshape how quickly life-saving drugs move from idea to the patients who need them.
Now let’s look into how AI actually does it.
This is the part that I find most fascinating. Pharmaceutical companies maintain chemical libraries containing millions of possible compounds. If humans tested each one, it would take lifetimes. AI, on the other hand, can identify which molecules have the most promising structures in only a few hours.
From there, AI can predict which candidates are likely to work. It can forecast how a compound will behave inside the body, whether it will bind to a target protein, or whether it is toxic. This means a scientist can rule out weak candidates before wasting time and money on experiments. In other words, AI acts as a filter.
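To make the “filter” idea a bit more concrete, here is a minimal sketch of what such a screening step could look like, written with scikit-learn. Everything in it is hypothetical: the features, the toxicity labels, and the 0.3 risk threshold are stand-ins, since real pipelines work with measured assay data and chemistry-aware features like molecular fingerprints.

```python
# Toy sketch of AI-as-a-filter: train a model on known compounds, then
# keep only the new candidates the model predicts are low-risk.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Hypothetical training set: 1,000 known compounds, each described by
# 64 numeric features, with a measured toxic (1) / non-toxic (0) label.
known_features = rng.normal(size=(1000, 64))
known_is_toxic = rng.integers(0, 2, size=1000)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(known_features, known_is_toxic)

# Hypothetical library of 50,000 untested candidate compounds.
candidate_features = rng.normal(size=(50_000, 64))
toxicity_risk = model.predict_proba(candidate_features)[:, 1]

# Keep only candidates the model considers low-risk; these move on
# to the much more expensive lab experiments.
shortlist = np.where(toxicity_risk < 0.3)[0]
print(f"{len(shortlist)} of 50,000 candidates pass the AI filter")
```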
That’s not all. When existing compounds don’t work, AI can design new molecules from scratch. Using a process called generative modeling, AI can create molecules that fit specific criteria, like avoiding certain side effects. Active research suggests it can even propose structures that scientists may never have thought of. For example, AstraZeneca reported that its AI systems found 170+ potential antibody drugs in three days, when traditional methods had found zero after months of searching. Additionally, MIT researchers trained an AI model to search for new antibiotics; it recognized patterns and identified compounds that kill antibiotic-resistant bacteria, patterns no human researcher had noticed before (MIT 2020).
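For the design-from-scratch idea, here is an intentionally oversimplified generate-and-score loop. Real generative models operate on molecular graphs or SMILES strings; in this toy sketch the “molecule” is just a numeric vector and the property predictor is invented for illustration.

```python
# Toy sketch of generative design: propose variations of a candidate,
# score each with a property model, and keep the improvements.
# The "molecule" and the scorer here are made up for illustration only.
import numpy as np

rng = np.random.default_rng(seed=1)

def predicted_fitness(mol: np.ndarray) -> float:
    # Hypothetical stand-in for a trained property predictor that rewards
    # binding affinity and penalizes a predicted side effect.
    binding_score = -np.sum((mol - 0.5) ** 2)
    side_effect_penalty = abs(mol[0]) * 0.1
    return binding_score - side_effect_penalty

molecule = rng.normal(size=32)                             # starting candidate
for step in range(500):
    mutant = molecule + rng.normal(scale=0.05, size=32)    # propose a variation
    if predicted_fitness(mutant) > predicted_fitness(molecule):
        molecule = mutant                                  # keep improvements

print("final predicted fitness:", round(predicted_fitness(molecule), 3))
```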
AI’s use doesn’t just stop at the early stages.
Once a potential drug is found, clinical trials begin, and this is where most drugs fail. Trials are slow and resource-intensive, and failures at this stage are the costliest of all. No wonder AI helps here too.
It can analyze available health records and genetic data to identify the patients most likely to respond to a specific drug. By learning those patterns, it can even predict potential side effects earlier. At scale, it can monitor trial data almost in real time, giving researchers enough information to adjust their trial designs faster.
So instead of waiting years to find out that a drug has failed, scientists can reach that conclusion in weeks, improving their chances of success.
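As a rough illustration of the patient-matching piece, here is a small sketch that ranks hypothetical trial candidates by their predicted chance of responding. The patients, features, and outcomes are entirely synthetic; real systems would draw on consented health records, biomarkers, and genetics under strict ethical review.

```python
# Toy sketch of trial enrichment: rank patients by predicted probability of
# responding to a drug, using entirely synthetic records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=2)

# Hypothetical historical data: 2,000 past patients, 20 features each
# (age, biomarkers, genetic variants, ...), plus whether they responded.
past_features = rng.normal(size=(2000, 20))
past_responded = (past_features[:, 0] + rng.normal(size=2000) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(past_features, past_responded)

# Score new trial candidates and prioritize the most likely responders.
new_patients = rng.normal(size=(500, 20))
response_prob = model.predict_proba(new_patients)[:, 1]
top_candidates = np.argsort(response_prob)[::-1][:100]
print("highest predicted response probability:",
      round(float(response_prob[top_candidates[0]]), 2))
```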
It is pretty evident that AI represents a revolution in drug discovery. It compresses decade-long, painful, iterative processes into a fraction of the time. It lowers costs so research money goes further, and it improves the precision of clinical trials, reducing the amount of trial and error. All of this sounds almost too good to be true, and in some ways, it is. The same speed that makes AI powerful also brings real risks, especially when we don’t slow down and look at the consequences.
The Risks
AI, like any other technology, is not perfect, especially in medicine. It learns from data, and if that data is biased or inaccurate, it learns incorrect patterns. It can also hallucinate and find patterns that don’t exist. We’ve all seen examples where ChatGPT gave answers that didn’t fully make sense or solved a math problem incorrectly. While that may not seem like a major issue, a mistake in healthcare can be the difference between life and death.
Let me walk you through a few examples. If an AI system is trained on genetic data from one population, its predictions for patients from other populations may be unreliable.
For example, the majority of genetic studies use data from people of European descent, while individuals from places like Asia or Africa are far less well represented. Since AI extracts patterns from existing databases, it can produce misleading and incorrect predictions for groups that hardly appear in the dataset. This kind of bias can lead to misdiagnoses, wrong dosages, or inadequate care for entire populations. Over time it doesn’t just harm individuals; it widens the health gap, because the groups with more data get better care while others are left behind (The Journal of Global Health, 2025). In fact, in 2019, a healthcare algorithm used in U.S. hospitals was found to assign Black patients similar risk scores to white patients, even though the Black patients were actually sicker. That meant many Black patients received less care and fewer medical resources than they needed (Hopkins Bloomberg, 2019).
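One practical habit that helps surface this problem is evaluating a model separately for each group instead of looking only at an overall score. Here is a toy sketch of that check on synthetic data, where the two “groups” stand in for well- and under-represented populations; none of it is real genetic data.

```python
# Toy sketch of a bias check: a model trained mostly on one group can look
# accurate overall while performing poorly on an underrepresented group.
# All data is synthetic; the two "groups" stand in for different populations.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=3)

def make_group(n, signal_column):
    # Each group's outcome depends on a *different* feature, so patterns
    # learned mostly from group A transfer poorly to group B.
    features = rng.normal(size=(n, 10))
    labels = (features[:, signal_column] > 0).astype(int)
    return features, labels

X_a, y_a = make_group(5000, signal_column=0)   # well-represented group
X_b, y_b = make_group(400, signal_column=5)    # underrepresented group

# Training data: almost all of group A, only a small slice of group B.
X_train = np.vstack([X_a[:4000], X_b[:50]])
y_train = np.concatenate([y_a[:4000], y_b[:50]])
model = GradientBoostingClassifier().fit(X_train, y_train)

# A single overall number would hide the gap; always evaluate per group.
print("accuracy, group A:", accuracy_score(y_a[4000:], model.predict(X_a[4000:])))
print("accuracy, group B:", accuracy_score(y_b[50:], model.predict(X_b[50:])))
```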
Unfortunately, these biases are not just data collection problems; they are also ethical ones. Recent AI controversies have shown how data privacy, lack of consent, and inappropriate use of data are major ethical issues in the world of AI. These issues carry over into healthcare as well. Some companies have already trained models on patient data without proper consent, sometimes without even realizing it. And the truth is, machines don’t understand ethics; they only process whatever information they’re given.
And then there is the black box problem. We hear a lot about deep learning models, but those models are not always interpretable. My Spotify can recommend songs based on my music taste, but it can’t always explain why it chose those specific songs.
Now imagine doing this for something as serious as drug discovery or a treatment plan. A patient should be able to understand why a certain drug or treatment is being prescribed to them. The lack of transparency and observability in AI models makes that hard. It also makes accountability unclear. If something goes wrong, who’s to blame? The researcher? The doctor who prescribed it? The AI? In healthcare, accountability is essential, and losing it means losing the trust of patients and doctors.
And lastly, one of the biggest risks is over-reliance. Imagine a future where AI is predicting, analyzing, and discovering drugs, but it doesn’t understand empathy, pain, or fairness. A human scientist may pause, debate a risky drug, or weigh an ethical concern. AI won’t. Relying too heavily on AI can turn medical discovery into a cold, automated system that forgets the human side of patients.
My Viewpoint
I believe we must slow down before we speed ahead. I actually don’t think AI is the problem; instead, the problem lies in how we are using it. I’m a big believer that AI has real potential to change medicine. It can find molecules faster, analyze patterns that we would miss, and predict how certain drugs may react. With proper testing and regulations, I truly believe AI can save millions of lives.
But we can’t ignore the consequences. In healthcare, faster isn’t always better. AI models can learn patterns around drugs and diagnoses, but they can’t question the ethics behind those methods. Earlier, I mentioned the importance of diverse data; when data is missing, AI may overlook the populations that are most at risk. Similarly, AI may deem a medicine “safe” based on its dataset even if an underrepresented group gets sicker from it. This means that as reliance on AI grows, some groups could receive less healthcare and support than before, and that’s not progress.
And it’s not just bias. Over-reliance is another danger. The more we trust algorithms, the more we stop questioning them. If we let AI do all the work, we lose the skills and judgment needed to understand what is happening inside that black box. If AI makes a mistake, we don’t want the doctor to say, “Because the machine said so.” That’s the biggest fear: people forgetting to double-check. In healthcare, we can never afford blind trust.
That’s why I think we need teams of scientists, governing bodies, doctors, and lawmakers working together to pace this rollout, make datasets more diverse, invest in extensive testing of these systems, and make sure we don’t cut corners on the processes that have served us for so long. Because in the end, medicine is a responsibility.
I actually like what the World Health Organization recommended: a human-in-command approach for all AI systems in medicine. This means that research scientists stay at the center, overseeing every step, validating every AI prediction, and being accountable for the final calls. In other words, AI is your assistant; it’s not making decisions on your behalf. This leaves room for human creativity and intuition. When a molecule fails, a scientist can still learn something new; AI just moves on to the next problem.
Closing Thoughts
Right now, a simple Google search shows that there are over 100 AI-designed drugs in various stages of testing or trials. Many are for critical conditions like cancers, rare genetic disorders, or infectious diseases that haven’t had the resources or the funding for new treatments in decades. Even if only a few of these succeed, we could see a future where new medicines take 2–3 years from concept to shelves instead of 15. That isn’t just faster science; it is a lifeline for patients waiting for a cure, patients whose diseases were rare enough that investing in their research never made economic sense.
But speed and cost aren’t the whole story. The real unlock in this revolution will be machines and humans working together: we leverage AI’s computational strength and treat it as an assistant, while humans stay responsible for the ethics, the transparency, and the guardrails against over-reliance.

