
March 27, 2025

AI in HEOR: The Current Landscape

By Dr. M. Christopher Roebuck

Introduction

AI has rapidly evolved, and its impact on Health Economics and Outcomes Research today is profound. We now live in the age of generative AI and advanced machine learning, where algorithms can not only analyze data but also create data or text, and uncover patterns from unstructured information. In the current landscape, HEOR analysts are harnessing AI to accelerate research workflows and glean insights that were previously out of reach. This article explores how modern AI techniques – especially generative models and natural language processing – are being applied in HEOR as of today, transforming tasks like evidence review, data generation, real-world evidence analysis, and economic modeling.

The integration of AI into HEOR has been gradual, with early applications focused on data analysis and visualization. More recent advances in machine learning and natural language processing have broadened what is possible: today, AI is used to automate systematic literature reviews, generate synthetic patient data, and build predictive models of disease progression and treatment outcomes.

Despite the many benefits of AI in HEOR, there are also challenges to be addressed. One of the main concerns is the potential for bias in AI algorithms, which can perpetuate existing health disparities if not properly addressed. Additionally, the use of AI in HEOR raises questions about data privacy and security, as well as the need for transparency and explainability in AI-driven research.

The sections that follow examine the current state of AI in HEOR, including its applications, benefits, and challenges, and close by considering how AI may continue to transform the field and improve patient outcomes.

Generative AI for Knowledge Synthesis

One of the most promising current applications of AI in HEOR is automating systematic literature reviews (SLRs) and evidence synthesis. SLRs are vital in HEOR for gathering inputs to cost-effectiveness models or understanding real-world treatment outcomes, but they are labor-intensive. Enter large language models (LLMs) – the kind of generative AI behind ChatGPT – which can read and summarize vast amounts of text. Recent developments indicate that AI can assist at multiple stages of literature review: proposing search strategies, screening abstracts for relevance, extracting data from studies, and even drafting summaries of findings.

Traditional review software has employed simpler machine learning (e.g. classifiers to prioritize relevant abstracts), but today's LLMs take it further by understanding context and content at a near-human level. For example, an LLM can be prompted with a PICO (Population, Intervention, Comparator, Outcome) framework and generate a list of likely relevant search terms and databases, potentially speeding up the protocol development for an SLR.

While fully autonomous literature reviews are not yet reality, AI-powered tools are already helping researchers screen studies faster by learning from inclusion/exclusion decisions. The present state of generative AI in this domain is augmentation, not replacement: it acts as a tireless research assistant, freeing human reviewers to focus on critical appraisal. As this technology matures, we expect the lines to blur further, with AI drafting portions of evidence reports that humans then refine.
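As a simplified sketch of the abstract-screening step, the snippet below ranks unscreened abstracts by their bag-of-words similarity to abstracts a reviewer has already included. This is a deliberately minimal stand-in for the far more capable LLM-based screeners described above, and the abstracts are invented:

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase, whitespace-split tokenization with light punctuation stripping."""
    return [w.lower().strip(".,()") for w in text.split()]

def centroid(token_lists):
    """Average term frequencies across the abstracts already marked 'include'."""
    total = Counter()
    for toks in token_lists:
        total.update(toks)
    n = len(token_lists)
    return {t: c / n for t, c in total.items()}

def cosine(tf_a, tf_b):
    """Cosine similarity between two term-frequency dictionaries."""
    dot = sum(tf_a[t] * tf_b.get(t, 0.0) for t in tf_a)
    norm_a = math.sqrt(sum(v * v for v in tf_a.values()))
    norm_b = math.sqrt(sum(v * v for v in tf_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_abstracts(included, unscreened):
    """Rank unscreened abstracts by similarity to the included-abstract centroid."""
    ref = centroid([tokenize(a) for a in included])
    scored = [(cosine(dict(Counter(tokenize(a))), ref), a) for a in unscreened]
    return sorted(scored, reverse=True)
```

Ranking abstracts this way lets reviewers work through the most likely inclusions first; production screening tools additionally retrain as each new inclusion/exclusion decision is recorded.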

Real-World Evidence and NLP

Another area where AI is making a significant impact is in generating real-world evidence (RWE) from the deluge of healthcare data now available. Contemporary HEOR studies often rely on real-world data (electronic health records, insurance claims, patient registries, social media, etc.) to understand how treatments perform outside of clinical trials. Much of this data is unstructured – think doctors' free-text notes, pathology reports, or patient forum posts. Natural language processing (NLP), a branch of AI, has become indispensable for converting such unstructured text into usable evidence.

For instance, AI algorithms can scan through millions of clinical notes to identify patients with a certain condition or to extract outcomes like occurrence of side effects. Generative AI models can interpret free-text descriptions and classify them (e.g., determining from a note whether a patient's cancer is responding to treatment). This capability greatly expands the real-world evidence that HEOR analysts can use – we are no longer limited to structured fields in databases.

A current best practice is using NLP pipelines to enrich datasets: for example, a health system might use AI to comb through pathology reports to find biomarker statuses of patients, then use that information in an outcomes analysis to see if a new therapy works better in biomarker-positive patients. Such analyses were onerous or impossible at scale before AI. Now, with modern NLP (often powered by deep learning and transformer models), HEOR researchers can unlock insights from textual data with relative ease.
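As a toy illustration of such a pipeline, the rule-based extractor below pulls a HER2 biomarker status out of free-text pathology reports. Real deployments typically use transformer-based NLP rather than regular expressions; the pattern and the report snippets here are invented:

```python
import re

# Hypothetical rule: find "HER2" (optionally "HER2/neu") followed, within the
# same sentence, by a status keyword.
HER2_PATTERN = re.compile(
    r"her2(?:/neu)?[^.\n]*?\b(positive|negative|equivocal)\b",
    re.IGNORECASE,
)

def extract_her2_status(report_text):
    """Return 'positive', 'negative', or 'equivocal', or None if not mentioned."""
    match = HER2_PATTERN.search(report_text)
    return match.group(1).lower() if match else None
```

Even a simple extractor like this should be validated against a manually reviewed sample of reports before its output feeds an outcomes analysis.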

A tangible real-world example is the use of AI for clinical trial matching. In outcomes research and market access studies, understanding trial populations and improving trial enrollment are crucial. Advanced AI-driven systems can parse unstructured patient records and trial protocols to pair patients with appropriate clinical trials. In practice, this technology has demonstrated remarkable success at major medical centers, where it increased breast cancer trial enrollment by approximately 80% by efficiently matching patients to trials. The AI reads through patients' charts (including doctor's notes, lab results) and a trial's eligibility criteria, then outputs a list of patients likely eligible for each trial.

This approach not only streamlines research (getting trials populated faster, thus generating outcomes data sooner) but also has direct patient benefits, ensuring more individuals gain access to cutting-edge therapies. From an HEOR perspective, such AI-driven matching improves the generation of real-world evidence by making trials more inclusive and representative.

Beyond text, AI is also analyzing images and other data for HEOR. For example, algorithms scan radiology images to detect disease progression as an outcome metric, or wearable device streams to gauge real-world patient activity levels (a proxy for quality of life). The present trend is clear: HEOR is embracing AI to make use of all available real-world data, structured or unstructured. The challenge, of course, is ensuring the quality and validity of AI-extracted data, which is why best practices include validation steps such as manual review of a sample of AI-categorized notes to verify accuracy.

Synthetic Data Generation

Generative AI isn't only good for summarizing text – it can also generate synthetic data that mimics real patient data. In today's HEOR landscape, synthetic data is gaining traction as a solution to data scarcity and privacy concerns. Using techniques like generative adversarial networks (GANs) or variational autoencoders, AI can learn the patterns in real datasets (for example, a claims database) and then create entirely new "fake" patients that have statistically similar characteristics to the real ones.

These synthetic patients can be used to test hypotheses or even as part of analyses when real data is limited or sensitive. Currently, synthetic data is especially useful for scenarios like rare diseases or simulation of long-term outcomes. For instance, if an HEOR team has short-term trial data, they might train a generative model to simulate longer-term outcome data that can feed into a cost-effectiveness model. Likewise, companies are starting to offer synthetic datasets that mirror real-world populations, allowing researchers to do exploratory analyses without violating patient privacy. Automation in economic modeling can integrate such synthetic cohorts to project outcomes under different scenarios.
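To make the idea concrete, here is a heavily simplified synthetic-data generator in Python. It learns only the marginal mean and standard deviation of each variable and samples from independent normal distributions; unlike a GAN or variational autoencoder, it ignores correlations between variables, so it is illustrative only (the "real" records are invented):

```python
import random
import statistics

def fit_marginals(records):
    """Learn a per-variable (mean, stdev) from real records (list of dicts)."""
    keys = records[0].keys()
    return {k: (statistics.mean(r[k] for r in records),
                statistics.stdev(r[k] for r in records))
            for k in keys}

def generate_synthetic(params, n, seed=0):
    """Sample n synthetic patients from independent normals per variable."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [{k: rng.gauss(mu, sd) for k, (mu, sd) in params.items()}
            for _ in range(n)]
```

A generator this naive preserves each variable's distribution but not the joint structure between variables, which is precisely what GANs and VAEs are designed to capture.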

It's important to note, however, that while synthetic data holds promise, it must be used cautiously. The AI needs to strike a balance: the synthetic records should reflect reality well enough to yield valid conclusions, but not be so detailed that they risk re-identifying individuals or perpetuating biases from the source data. Current best practice is to treat synthetic data as a supplement rather than a replacement – for example, using it to validate findings or to fill in gaps where real data are unavailable.

The industry is actively researching how close the correspondence is between conclusions drawn from synthetic versus actual real-world data. Early indications show that when properly generated, synthetic datasets can indeed reproduce the key insights of real data, making them a powerful new asset in the HEOR toolkit. Some experts even project that synthetic data might be used more widely than real data by 2030, illustrating the growing importance of AI in data generation for health economics research.

AI-Assisted Economic Modeling

At the core of HEOR are health economic models – such as cost-effectiveness models, budget impact models, and disease simulators – which traditionally are built by health economists manually programming disease states, transition probabilities, and costs. Today's AI is starting to assist in model development and analysis. While we're not yet at the point of an AI completely building a Markov model from scratch, there are several ways AI is augmenting this process:

  • Parameter estimation and calibration: Machine learning algorithms can efficiently estimate complex model parameters by fitting models to data. For example, calibration of a disease simulation model to real-world epidemiological targets (like making sure a model's predicted survival curve matches observed data) can be framed as an optimization problem that AI techniques solve faster than trial-and-error human approaches.
  • Handling complexity: AI can help explore large decision trees or state spaces that would be unwieldy manually. Techniques like reinforcement learning have even been experimented with to derive optimal treatment pathways in a simulation, effectively letting the AI "learn" the best strategy which a model can then evaluate economically.
  • Code generation and documentation: Generative AI can write snippets of analytical code. There are instances of analysts using advanced large language models to generate sections of model code or to document model assumptions, speeding up the programming of economic models. Moreover, large language models trained on health economics literature can assist in identifying which model structure or utility inputs are appropriate, by drawing on prior studies.
  • Automated sensitivity analyses: AI can run numerous simulations with varied inputs (Monte Carlo simulations) and then use ML to identify which parameters drive uncertainty the most. This helps modelers focus on key drivers of cost-effectiveness results. Some AI-driven platforms can even suggest scenario analyses that humans might not have considered, by detecting nonlinear interactions between inputs.
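The calibration idea above can be shown with a deliberately tiny example: a one-parameter model whose per-cycle death probability is tuned so that predicted survival hits an observed target. A brute-force grid search stands in for the smarter optimizers (e.g., Nelder-Mead or Bayesian methods) used in practice, and the target value is invented:

```python
def model_survival(p_die, cycles):
    """Predicted share of the cohort still alive after `cycles` cycles."""
    return (1.0 - p_die) ** cycles

def calibrate(target_survival, cycles, grid_steps=10000):
    """Find the per-cycle death probability whose predicted survival best
    matches the observed target (minimizing squared error over a grid)."""
    best_p, best_err = 0.0, float("inf")
    for i in range(grid_steps + 1):
        p = i / grid_steps
        err = (model_survival(p, cycles) - target_survival) ** 2
        if err < best_err:
            best_p, best_err = p, err
    return best_p
```

Real calibration problems involve many parameters and multiple targets simultaneously, which is exactly where optimization algorithms outpace human trial-and-error.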

In practice, companies and research groups are beginning to integrate these AI capabilities. For example, an HEOR analyst might use an AI-driven tool to generate a draft model in Excel or Python after providing basic specifications (population, interventions, outcomes of interest). The tool could populate it with data from literature (using an AI-curated evidence library) and even provide a narrative report of the results. While human expertise is still essential to ensure the model makes sense and to adjust for context, the heavy lifting of computation and even first-pass drafting can be accelerated by AI.
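To give a sense of the core such a drafted model would need to get right, here is a minimal Markov cohort model in Python; the states, transition probabilities, costs, and utilities are all invented for illustration:

```python
def run_markov(trans, costs, utilities, start, cycles, discount=0.03):
    """Minimal Markov cohort model returning total discounted cost and QALYs.

    trans: {state: {next_state: probability}} per-cycle transition matrix
    start: {state: share of cohort} initial distribution
    """
    dist = dict(start)
    total_cost = total_qalys = 0.0
    for t in range(cycles):
        df = 1.0 / (1.0 + discount) ** t  # discount factor for cycle t
        total_cost += df * sum(dist[s] * costs[s] for s in dist)
        total_qalys += df * sum(dist[s] * utilities[s] for s in dist)
        # advance the cohort one cycle
        dist = {s2: sum(dist[s1] * trans[s1].get(s2, 0.0) for s1 in dist)
                for s2 in trans}
    return total_cost, total_qalys

# Illustrative three-state model (all numbers invented)
TRANS = {"healthy": {"healthy": 0.90, "sick": 0.08, "dead": 0.02},
         "sick":    {"sick": 0.85, "dead": 0.15},
         "dead":    {"dead": 1.0}}
COSTS = {"healthy": 100.0, "sick": 2000.0, "dead": 0.0}
UTILS = {"healthy": 0.90, "sick": 0.60, "dead": 0.0}
START = {"healthy": 1.0, "sick": 0.0, "dead": 0.0}
```

Comparing two such models (say, with and without a new therapy that lowers the healthy-to-sick transition probability) yields the incremental costs and QALYs that feed a cost-effectiveness ratio.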

Organizations like ISPOR have recognized this trend. Recent working groups have identified health economic modeling as one of three key areas where generative AI is poised to significantly impact HTA/HEOR processes. Experts note that AI could increase efficiency in model development and validation – for instance, automatically checking consistency of model equations or assisting in converting an outdated model to a new programming language. We are already seeing early software prototypes that incorporate such features.

Spotlight on Contributions

To illustrate how AI is being used in HEOR today, we can look at my consulting work with IBM Watson Health, a former client and a leader in applying AI to healthcare problems. One notable project was IBM's Watson for Oncology, an AI clinical decision support tool that assessed cancer patients' medical records and provided treatment recommendations aligned with guidelines and evidence. While this was primarily a clinical tool, its evaluation fell into outcomes research – essentially comparing AI-recommended treatments to human oncologists' decisions and patient outcomes.

Real-world studies on using Watson for Oncology in breast cancer care found that the AI's recommended therapies were highly concordant with those chosen by experienced oncologists. In high-risk colon cancer as well, Watson's options matched expert decisions most of the time and showed potential to reduce over-treatment. These findings were significant in HEOR terms: they suggested AI could consistently replicate expert decision-making, which implied potential for standardizing quality of care and possibly improving outcomes if used appropriately.

Another area where I contributed as a consultant to IBM Watson Health was in the previously mentioned clinical trial matching. IBM's Watson for Clinical Trial Matching was an AI-driven system that parsed unstructured patient records and trial protocols to pair patients with appropriate clinical trials. Deploying and assessing this system had substantial impact from an HEOR perspective – by boosting trial enrollment and accelerating research, AI indirectly produced better evidence on treatments which fed into health economic analyses. For instance, more robust data on survival or quality of life from trials meant more reliable cost-effectiveness models. The success at Mayo Clinic, where Watson for Clinical Trial Matching significantly increased breast cancer trial enrollment, stood as a testament to the practical value of these systems.

My consulting role with IBM Watson Health included ensuring that such AI tools were evaluated not just on accuracy but also on outcomes like speed of recruitment, diversity of patients enrolled, and downstream cost implications for research. These are classic HEOR considerations applied to an AI tool. Beyond these specific examples, my current consulting work reflects how thoroughly AI has permeated HEOR practice. Advising organizations on deploying predictive analytics in healthcare economics illustrates a present-day best practice: involving domain experts to guide AI applications. The human-in-the-loop approach is considered vital now – AI can process information at scale, but experts are needed to interpret results, adjust models for real-world plausibility, and translate findings into strategy.

Current Best Practices and Considerations

As of today, the use of AI in HEOR comes with a set of emerging best practices aimed at maximizing benefits while managing risks:

  • Human Oversight and Validation: Perhaps the most important principle is that AI outputs are not taken at face value without expert review. Whether it's an LLM summarizing literature or a predictive model identifying high-risk patients, human experts validate and sanity-check the results. ISPOR emphasizes the need for human oversight, transparency, and adherence to ethical standards as we navigate this new era. In practice, this means HEOR analysts double-check AI-extracted data against a gold standard and clinicians or economists review AI-generated conclusions for sensibility.
  • Transparency and Explainability: There is a push for AI models to be as transparent as possible. Black-box models (like some deep neural networks) can be problematic in a field like HEOR, where decisions can impact patient access to therapies. Current best practice is to use explainable AI techniques – for instance, providing feature importance from an ML model to show which variables drove a cost prediction – so that stakeholders trust the results. Many organizations also document their AI algorithms in detail (a kind of "model registry") which aligns with principles of good research practice.
  • Bias and Fairness Checks: HEOR professionals are acutely aware of disparities in healthcare. AI systems, if not carefully developed, can inadvertently perpetuate biases present in training data (for example, underrepresenting certain patient groups). Thus, a present-day best practice is to evaluate AI models for bias – e.g., does a risk prediction model systematically overpredict risk for one demographic group? – and adjust or constrain them as needed. This is both an ethical mandate and a practical one (biased results can lead to suboptimal or even harmful decisions in resource allocation).
  • Regulatory Compliance: While regulations are still catching up to AI, HEOR practitioners ensure compliance with current data privacy laws (like GDPR, HIPAA) when using AI, especially for patient data. De-identification techniques and secure computing environments are standard when running AI on sensitive health data. Furthermore, if AI-derived evidence is used in submissions to payers or health authorities, researchers are prepared to explain the methodology in traditional terms.
  • Integration with Workflows: Finally, successful AI adoption today means integrating tools into existing HEOR workflows. An AI might be great in a standalone demo, but if an analyst cannot easily incorporate its output into an economic model or if a decision-maker cannot intuitively interact with it, its value is lost. Thus, we see AI being embedded in familiar software (plugins for Excel, Python libraries, etc.) and outputs formatted in user-friendly ways. For example, an AI literature review tool might export results into a standard evidence table that teams are used to.

Conclusion

In summary, the present state of AI in HEOR is exciting and dynamic. We have moved well beyond the experimental phase – AI is now enhancing daily HEOR tasks, from data curation to analysis to reporting. Real-world examples of clinical decision support and trial matching show that AI is delivering tangible improvements in outcomes research processes. Yet, the prevailing philosophy is one of augmented intelligence: using AI to amplify human expertise, not replace it.

By adhering to best practices of validation, transparency, and ethical use, HEOR professionals are increasingly confident in deploying AI solutions. This sets the stage for even more transformative uses of AI on the horizon. In the next and final part of this series, we will turn our attention to the future – exploring how AI might revolutionize HEOR in the coming years, and how we can prepare for it.

Ready to Accelerate Your Health Economics Strategy?

Schedule your complimentary consultation to discover how RxEconomics delivers rapid analytics, credible real-world evidence, and strategic clarity.

Request Your Consult