HTA Strategy

How to Reduce HTA Evidence Preparation for Payers from Months to Days Using AI

Pier Lasalvia, MD (Co-founder, CTO & Co-CEO)
Camilo Castañeda, MD (Co-founder, COO)
March 20, 2026

Market access teams in Latin America spend between four and eight months building the evidence a payer will evaluate in twenty minutes. This article describes a practical framework — with concrete steps, documented benchmarks, and tools available today — to compress that process without sacrificing methodological rigor.

A pharmaceutical company that reaches the payer with solid evidence two months ahead of its competitor does not have a marginal advantage — it has market access and its competitor does not. In Latin America, where HTA evaluation windows are narrower and less frequent than in Europe, the speed of evidence preparation is a strategic variable, not an operational one. Generative AI is turning that variable into an achievable advantage for any team that knows how to use it.

This is not an article about the future of AI in pharma. It is a practical guide to what can be done today to reduce HTA evidence preparation time — with verifiable methodology, documented results, and steps that any market access team in LATAM can implement.

What this article covers

(1) Why the traditional HTA evidence preparation process takes so long. (2) Which specific bottlenecks are most impacted by AI. (3) A five-step framework for implementing AI in the HTA evidence process without compromising methodological rigor. (4) The validation frameworks that HTA agencies accept. And (5) what this looks like in practice for teams operating across multiple LATAM markets simultaneously.

1. The real problem: four to eight months to reach the payer

Building a complete HTA evidence package — from the systematic review to the cost-effectiveness model adapted to the local market — takes between four and eight months using traditional methods. That is not the time the payer needs to evaluate the evidence. It is the time the team needs to prepare it.

The problem has three well-identified structural roots, and each one has a specific AI-powered solution:

| Bottleneck | Cause | Typical time | AI solution |
|---|---|---|---|
| Systematic literature review | Manual screening of thousands of references by expert reviewers | 3–6 weeks | Automated screening reduces this to 2–4 days with equal or greater accuracy |
| Economic model construction | Excel/VBA programming, calibration, sensitivity analysis | 4–6 weeks | AI replicates published models with less than 1% error and adapts them to local parameters in 2–5 days |
| Value dossier narrative | Iterative drafting, multiple rounds of internal review | 6–12 weeks | AI generates first drafts; 80% require no significant editing |
| Adaptation to multiple LATAM markets | Sequential process: one country at a time | 2–4 weeks/country | Parallel adaptations: 7 countries in 2–5 total weeks |

The cost nobody accounts for

Companies measure the cost of the HTA evidence process in consulting fees and internal team hours. But the highest cost never appears on any invoice: each month of delay in obtaining market access represents unrealized revenue and, in critical indications, patients without treatment. For a drug with projected sales of USD 10M per year in LATAM, two months of delay equals roughly USD 1.7M that is never recovered.
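The arithmetic behind that figure can be sketched in a few lines, assuming flat monthly sales; the USD 10M projection and the two-month delay are the illustrative inputs used above, not fixed parameters:

```python
def delay_cost(annual_sales_usd: float, months_delayed: float) -> float:
    """Revenue lost during a market-access delay, assuming flat monthly sales."""
    return annual_sales_usd / 12 * months_delayed

# USD 10M/year projection, 2 months of delay
lost = delay_cost(10_000_000, 2)
print(f"USD {lost:,.0f}")  # USD 1,666,667 — the ~1.7M cited above
```

The same function answers the planning question in reverse: every week shaved off preparation time is worth `annual_sales_usd / 52` in recovered revenue.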

2. The five steps of the AI-accelerated process

This is the framework that separates teams that use AI effectively from those that use it in frustrating ways. The difference is not in the technology — it is in the sequence and in knowing what a human validates and what they do not.

Step 1: Define the protocol before touching AI

The most common mistake is starting to use AI without a defined review protocol. Before any literature search or modeling, the team must establish: PICO criteria (population, intervention, comparator, outcome), data sources to consult, cost-effectiveness thresholds for the target market, and relevant local comparators. AI is only as good as the protocol guiding it.
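One way to force that discipline is to encode the protocol as structured data before any AI is involved; a minimal sketch, with illustrative field names and values that are not part of any formal standard:

```python
from dataclasses import dataclass

@dataclass
class ReviewProtocol:
    """Review protocol fixed before any AI screening or modeling (illustrative fields)."""
    population: str
    intervention: str
    comparators: list   # relevant local comparators available in the formulary
    outcomes: list
    sources: list       # data sources to consult
    threshold_usd_per_qaly: float  # cost-effectiveness threshold for the target market

protocol = ReviewProtocol(
    population="Adults with indication X",
    intervention="Drug A",
    comparators=["Drug B (formulary standard of care)"],
    outcomes=["Overall survival", "QALYs"],
    sources=["MEDLINE", "Embase", "LILACS"],
    threshold_usd_per_qaly=19_000.0,  # illustrative value
)
```

A protocol object like this becomes the single input every later AI step reads from, which is what keeps screening, modeling, and drafting methodologically consistent.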

Step 2: Literature screening with supervised AI

Using AI tools for the first level of screening (title and abstract) reduces the time from 3–6 weeks to 2–4 days with sensitivity rates comparable to those of human reviewers. The human expert intervenes at the second level (full-text reading) and in data extraction. That combination — AI for volume, human for judgment — is where the greatest efficiency impact lies.
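The division of labor described above can be sketched as a two-stage pipeline. The `ai_screen` function here is a keyword heuristic standing in for the LLM call; a real pipeline would send the protocol and each abstract to a model and parse its verdict, but the structure — AI decides inclusion at scale, everything included goes to a human — is the same:

```python
def ai_screen(title_abstract: str, pico_terms: list) -> bool:
    """First-level include/exclude decision.
    Placeholder heuristic standing in for an LLM call."""
    text = title_abstract.lower()
    return any(term.lower() in text for term in pico_terms)

def screen(references: list, pico_terms: list) -> tuple:
    """AI handles the volume; everything it includes goes to the
    human expert for full-text review and data extraction."""
    included = [r for r in references if ai_screen(r, pico_terms)]
    excluded = [r for r in references if r not in included]
    return included, excluded

refs = [
    "Cost-effectiveness of drug A in metastatic disease",
    "Veterinary applications of compound Z",
]
included, excluded = screen(refs, ["drug A", "cost-effectiveness"])
```

The key property to preserve in any real implementation is high sensitivity at this first level: a false exclusion is never seen again, while a false inclusion only costs the human reviewer a few minutes.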

Step 3: Adapt the economic model to the local market

Rather than building the model from scratch, the most efficient practice is to start from a published or validated cost-effectiveness model and use AI to adapt it to local parameters: market GDP per capita, health system unit costs (SIGTAP in Brazil, Manual ISS in Colombia, Cuadro Básico IMSS in Mexico), local epidemiology, and comparators available in the formulary. GPT-4 has replicated published models with a margin of error under 1% in less than 15 minutes.
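The core of that adaptation step is mechanical: recompute the incremental cost-effectiveness ratio (ICER) with local unit costs and compare it against a GDP-based threshold. A minimal sketch, where all numbers are illustrative and not real tariffs from SIGTAP, Manual ISS, or the Cuadro Básico:

```python
def icer(cost_new: float, cost_comp: float,
         qaly_new: float, qaly_comp: float) -> float:
    """Incremental cost-effectiveness ratio, USD per QALY gained."""
    return (cost_new - cost_comp) / (qaly_new - qaly_comp)

def adapt_to_market(unit_costs: dict, qalys: tuple, gdp_per_capita: float,
                    threshold_multiple: float = 3.0) -> dict:
    """Re-run a validated model structure with local inputs against a
    GDP-based willingness-to-pay threshold (a common LATAM convention)."""
    value = icer(unit_costs["new"], unit_costs["comparator"], *qalys)
    threshold = threshold_multiple * gdp_per_capita
    return {"icer_usd_per_qaly": value,
            "threshold_usd_per_qaly": threshold,
            "cost_effective": value <= threshold}

result = adapt_to_market(
    unit_costs={"new": 42_000.0, "comparator": 18_000.0},
    qalys=(6.1, 4.9),         # QALYs: new therapy vs comparator
    gdp_per_capita=10_000.0,  # illustrative market value
)
```

What AI accelerates is not this formula but everything around it: extracting the model structure from the publication, locating the local cost inputs, and re-running calibration and sensitivity analyses per market.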

Step 4: Build the dossier narrative with AI co-authorship

The most extensive sections of the dossier — disease landscape, clinical evidence description, patient-reported outcomes — are also the ones that benefit most from AI. The ISPOR Working Group documented that 80% of AI-generated Disease Overview drafts required no significant editing. The expert reviews, validates, and adjusts the tone for the specific audience (public payer vs. private committee vs. HTA agency).

Step 5: Validate with methodological frameworks before submission

Before presenting any evidence to an HTA agency, verify compliance with ELEVATE-GenAI (transparency and traceability of LLM use) and CHEERS-AI (economic model reporting). Document what the AI generated and what the expert validated. That record is what distinguishes an acceptable submission from one rejected for lack of methodological transparency.

5 steps

Implementation framework

From protocol to submission: define criteria, AI screening, adapt model, narrative co-authorship, and methodological validation.

3. Documented benchmarks: what is already possible today

These are not projections. They are results published or reported by verifiable organizations:

| Process | Before | With AI | Verifiable source |
|---|---|---|---|
| Systematic review screening | 3–6 weeks (1 reviewer) | 2–4 days (~80% less) | Blaizot et al., J Med Internet Res, 2024 |
| GVD Disease Overview section | 3–4 weeks | 1–2 weeks (80% without editing) | ISPOR Annual Meeting, Montreal 2025 |
| ICER model adaptation to local market | 4–6 weeks | 2–5 days (less than 1% error) | Axtria / idalab, ISPOR 2025 white paper |
| Complete GVD | 4–8 months | 6–8 weeks (60% less) | idalab EPRI Tool, 2024 |
| Adaptation to 7 LATAM markets in parallel | 14–28 weeks (sequential) | 2–5 weeks (parallel) | ISPOR / Agilisium reports, 2025 |

4. What changes for teams operating across multiple LATAM markets

For companies that need to adapt their HTA evidence to Brazil, Colombia, Mexico, Argentina, Chile, Peru, and other markets simultaneously, the impact of AI is not linear — it is multiplicative.

With the traditional process, adaptation is sequential: one country at a time, with different consultancies, different timelines, different reviews. With AI, adaptation can be done in parallel: the same base model, the same methodological criteria, adapted to each market's local parameters at the same time.

| Scenario | Traditional process | With AI | Difference |
|---|---|---|---|
| Adapt ICER model to Brazil + Colombia + Mexico | 12–18 weeks (sequential) | 1–2 weeks (parallel) | 10–16 weeks of advantage |
| Systematic review for 2 indications | 6–12 weeks | 4–8 days | ~80% reduction |
| Base GVD + 5 local adaptations | 12–18 months | 3–5 months | 60–70% reduction |
| Team required for the same output | 8–12 people | 3–4 people + AI | Reduced cost, expanded capacity |
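The sequential-to-parallel shift is, structurally, a map over markets: same base model, same criteria, local inputs swapped in per country. A minimal sketch with illustrative GDP figures (real adaptations would pull tariffs and epidemiology from each country's own sources):

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative local parameters per market
MARKETS = {
    "Brazil":   {"gdp_per_capita": 9_600.0},
    "Colombia": {"gdp_per_capita": 6_700.0},
    "Mexico":   {"gdp_per_capita": 11_500.0},
}

def adapt(market: str, params: dict) -> tuple:
    """Stand-in for one market adaptation: here, just deriving a
    3x GDP willingness-to-pay threshold from local parameters."""
    threshold = 3 * params["gdp_per_capita"]
    return market, threshold

# All markets adapted concurrently instead of one after another
with ThreadPoolExecutor() as pool:
    thresholds = dict(pool.map(lambda kv: adapt(*kv), MARKETS.items()))
```

The point of the sketch is the shape, not the thread pool: once the base model and methodological criteria are fixed, nothing in the per-country work depends on another country having finished first.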

The competitive advantage you cannot recover

In LATAM markets, HTA evaluation windows are specific and not always annual. A company that reaches CONITEC in the right cycle with complete and well-adapted evidence gains access. The one that arrives two months late waits for the next cycle. AI is not a marginal advantage in this context — it is the difference between access and waiting.

5. The validation framework: how to ensure HTA agencies accept the evidence

The question we hear most from market access teams when they consider implementing AI in their evidence process is: will it pass scrutiny from CONITEC, IETS, or CENETEC?

The short answer is yes — if it is documented correctly. The long answer has three components:

5.1 The principle of methodological equivalence

No HTA agency in Latin America has published specific guidelines on AI. What they have established — CONITEC, IETS, CENETEC — are methodological quality criteria for the evidence they receive. A dossier built with AI that meets those methodological criteria will be evaluated exactly the same as one built without AI. The problem is not the use of AI — it is the use of AI without documentation of the process.

5.2 The frameworks that international agencies already recognize

NICE (UK) established in August 2025 that the use of AI in HTA processes must be declared, transparent, and reproducible. The ELEVATE-GenAI and CHEERS-AI frameworks provide the methodological language to make that declaration in a way that agencies can verify. Using these frameworks is not bureaucracy — it is the guarantee that the evidence will pass scrutiny.

5.3 The documentation that must accompany each submission

  • Declare in the dossier that AI tools were used in the construction process.
  • Specify which sections or steps used AI and which were validated by a human expert.
  • Document the prompt engineering used for key sections (especially the systematic review and the economic model).
  • Include the ELEVATE-GenAI checklist as an appendix if the agency requests it.
  • Ensure that the methodological expert signs and validates the final output — responsibility for the content rests with the team, not with the AI.
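One lightweight way to keep that record as the dossier is built is a per-section traceability log. The field names below are illustrative, not part of ELEVATE-GenAI or CHEERS-AI themselves; the point is that each section carries a pointer to its prompt and the name of the expert who validated it:

```python
ai_use_log = []

def log_section(section: str, ai_generated: bool, prompt_ref: str,
                validated_by: str) -> None:
    """Append one traceability entry per dossier section: what the AI
    produced, under which prompt, and which expert signed off."""
    ai_use_log.append({
        "section": section,
        "ai_generated": ai_generated,
        "prompt_reference": prompt_ref,  # pointer to the stored prompt text
        "validated_by": validated_by,    # responsibility stays with the team
    })

log_section("Disease Overview", True, "prompts/disease_overview_v3",
            "HEOR lead")
log_section("Economic model adaptation", True, "prompts/model_brazil_v1",
            "Health economist")
```

Exported as an appendix, a log like this is exactly the kind of artifact that lets an agency verify the declaration rather than take it on faith.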

Key principle

AI in the HTA evidence process does not lower the methodological standard — what it lowers is the time required to meet it. A dossier built with AI under the ELEVATE-GenAI and CHEERS-AI frameworks meets the same quality criteria as one built with traditional methods, but in a fraction of the time.

Frequently asked questions

Where should my team start if we have never used AI in the HTA evidence process?

The most efficient entry point is the systematic literature review — specifically the title and abstract screening process. It is the most mechanical step in the process, the most time-consuming, and the one with the lowest learning curve for AI. Once the team understands how AI output supervision works in that context, extending the methodology to the economic model and the dossier comes naturally.

What level of technical expertise does the team need to implement these tools?

No programming or machine learning experience is required. What is needed is: (1) methodological experience in HEOR — to be able to critically evaluate AI outputs; (2) basic prompt engineering skills — formulating precise instructions for the model; and (3) familiarity with validation frameworks (ELEVATE-GenAI, CHEERS-AI). The ideal profile is an HEOR manager with experience in systematic reviews and economic modeling who learns to work with AI as a co-author, not as a replacement.

Is it possible to use AI in the HTA evidence process with the tools available today?

Yes, with nuances. Tools such as GPT-4, Gemini, or Claude can be used for literature screening, dossier section drafts, and economic model adaptation with the right protocol. However, specialized market access platforms offer significant advantages: pre-configured workflows for GVD steps, automatic process documentation for ELEVATE-GenAI compliance, and pre-loaded economic parameters for LATAM markets for efficient local adaptation.

How long does it take to implement this process in a team starting from scratch?

Basic implementation — literature screening with supervised AI — can be operational in two to four weeks. Full implementation — including economic modeling and dossier construction — requires between six and twelve weeks, depending on the team's learning curve and the complexity of the indication. The critical factor is not implementation time but the team's willingness to change the process, not just the tool.

How do you measure the return on investment of implementing AI in the HTA evidence process?

There are three direct metrics: (1) reduction in weeks of the evidence preparation cycle — a direct impact on when you reach the market; (2) reduction in external consulting costs — especially for systematic reviews and local adaptations; and (3) the number of markets that can be worked simultaneously with the same team. The hardest ROI to quantify but the most important is the one that comes from reaching the payer in the right evaluation cycle versus having to wait for the next one.

Conclusion: speed in HTA evidence is no longer a luxury

In Latin America, market access teams with the best methodological resources are not always the ones that gain access. The ones that gain access are those that arrive with the right evidence at the right time. AI does not change what constitutes quality evidence — it changes how long it takes to produce it.

$1.7M

Cost of 2 months of delay

For a drug with projected sales of USD 10M per year in LATAM, two months of delay represent revenue that is never recovered — plus patients without access to treatment.

A 60% reduction in dossier preparation time is not an operational adjustment. It is the difference between entering the CONITEC evaluation cycle this year or the next. Between adapting the model to seven markets simultaneously or doing it sequentially over 18 months. Between having a team of three people that produces the output of ten, and needing ten people to compete with a team that already has three plus AI.

The methodological frameworks that HTA agencies recognize already exist. The efficiency benchmarks are already documented. The question is not whether your team will implement AI in the HTA evidence process. It is whether you will do it before or after your competitor.