CONITEC and IETS are about to redefine what counts as sufficient evidence. Is your team ready?


Latin America's two most influential HTA agencies still have no clear rules on artificial intelligence. That does not mean nothing is moving. It means the ground is shifting under the feet of market access teams.
There is a question that market access teams in LATAM are still not asking seriously enough: what happens when CONITEC in Brazil and IETS in Colombia start receiving dossiers built with artificial intelligence and have no clear criteria to evaluate them?
The short answer: it is already happening. Pharmaceutical companies are already using generative AI in parts of their evidence processes. Systematic reviews are being accelerated with automated screening tools. Economic models are being built with LLM assistance. And none of the major HTA agencies in Latin America has a public position today on how it will evaluate that evidence.
The vacuum is not neutral
It is both an opportunity and a risk. Companies that document their AI use well will become reference points for evaluators. Those that do not will expose themselves to methodological integrity concerns.
1. The current state: CONITEC, IETS, and the silence on AI
CONITEC (Brazil)
CONITEC is the most rigorous HTA agency in Latin America. Created in 2011 under Federal Law 12.401, it has more than 100 multidisciplinary professionals and has evaluated hundreds of health technologies for the SUS. Its methodological guidelines for economic evaluations are detailed, demanding, and widely referenced in the region [1].
However, CONITEC has no public position today on the use of artificial intelligence in evidence generation or dossier construction. The criteria it applies to evaluate systematic reviews, cost-effectiveness models, and budget impact analyses do not distinguish whether the process was AI-assisted or performed entirely manually.
In parallel, Brazil's Senate approved Bill 2.338/2023 on general AI regulation in December 2024. The bill, now under review by the Chamber of Deputies, establishes a risk-based approach similar to the European AI Act [2]. But that regulatory framework is generic: it does not specifically address pharmaceutical evidence or HTA processes.
IETS (Colombia)
Colombia's Institute of Health Technology Assessment plays a central role in defining coverage, reference prices, and clinical practice guidelines for the Colombian health system. Its methodologies are based on international standards from ISPOR and EUnetHTA.
Like CONITEC, IETS has no specific public position on AI in evidence. Colombia introduced its own AI regulation bill, PL 043/2025, in July 2025, and the government declared it urgent in September 2025. The bill incorporates principles of transparency, human oversight, and risk-based classification inspired by the OECD and the European AI Act [3]. But it has no direct application to HTA processes either.
The result in both countries is the same: pharmaceutical companies are using or exploring AI tools in their evidence processes, and HTA agencies have no established criteria to evaluate them.
2. What is happening globally: NICE moved first, others are following
To understand where LATAM is heading, look at where Europe has already arrived.
In August 2024, NICE, the UK's HTA agency and a global methodological reference, published its Position Statement on the use of AI in evidence generation (ECD11) [4]. It was the first document of its kind issued by a top-tier HTA agency. Its central message: AI can be used in evidence submissions, but under strict conditions.
The main conditions NICE establishes are:
- Mandatory transparency: organizations must clearly document which AI methods they used, why, and what assumptions they applied.
- Required validation: AI methods must be validated and, when possible, contrasted with alternative methods.
- Non-negotiable human oversight: AI must augment, not replace, expert human judgment in the evidence process.
- Justified use: AI should only be used when it adds clear value. It should not be used as a shortcut when established methodological alternatives exist.
- Recognized bias risk: AI methods for estimating comparative effects are considered high-risk and require sensitivity analysis and triangulation with available clinical evidence.
Canada's Drug Agency (CDA-AMC) published its own position statement, modeled on NICE's, in 2025 [5]. The EMA has an AI workplan for 2023-2028 that includes guidance on AI use across the medicines lifecycle. The FDA published its draft guidance on the use of AI to support regulatory decision-making in 2025 [6].
The pattern is clear: agencies in the developed world are building frameworks. LATAM has not started yet.
3. Why that vacuum is simultaneously an opportunity and a risk
The opportunity
When an agency has no established rules, the standard is set by the first actors who define it in practice. Pharmaceutical companies that build evidence processes today with well-documented, transparent, and methodologically rigorous AI, following the best practices of NICE, ISPOR, and emerging frameworks, are positioning themselves as reference points before evaluators.
When CONITEC and IETS develop their own positions on AI, and they will, because the pressure of dossier volumes and the influence of global agencies will demand it, they will look at the precedents that already exist. Companies that have documented well how they use AI in their evidence will have a real advantage.
Moreover, CONITEC and IETS evaluators already know the international standards. They participate in ISPOR, collaborate with RedETSA, publish in indexed journals. The methodological quality of a dossier, with or without AI, is visible to them.
The risk
The absence of rules does not protect those who use AI without rigor. A CONITEC evaluator who receives a dossier with an AI-assisted systematic review, without process documentation, without results validation, and without method transparency, has reason to distrust all the evidence presented.
The Putnam Associates report (2025) is clear on this: although several pharmaceutical companies already have internal initiatives to test AI tools in HEOR, they rarely use them in real dossiers for fear of non-acceptance by HTA agencies [7]. The use of AI in evidence generation remains uncommon and often undeclared.
The double risk of silence
Teams that use AI without declaring it expose themselves to methodological integrity concerns if the process comes to light. Teams that avoid AI out of fear compete at a growing disadvantage in time and cost against those that integrate it rigorously.
4. What LATAM teams should do while the rules arrive
The strategy is not to wait for CONITEC and IETS to publish their positions on AI. The strategy is to prepare today with the standards that already exist, so that when positions are published, the team is already aligned.
Adopt NICE standards as the reference
The NICE ECD11 position statement is the most complete standard available today for AI use in HTA evidence. Although not binding in LATAM, it represents the level of rigor that the most advanced agencies already demand. Building evidence processes that meet its criteria (transparency, validation, human oversight, justified use) is the best possible preparation for what is coming.
Document the process, not just the result
The most important change AI introduces in evidence processes is not the output; it is process traceability. A dossier that documents how AI was used (which tools, what validations were performed, and how human oversight was maintained) can be defended before any evaluator, today or once the rules arrive.
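What that documentation looks like in practice is up to each team; no agency in the region has prescribed a format. As a purely illustrative sketch, a traceability record could be as simple as a structured log kept alongside the dossier. The field names and the example tool below are hypothetical, not a required schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseRecord:
    """One traceability entry for an AI-assisted step in an evidence process.
    The schema is illustrative, not an agency-mandated format."""
    step: str             # which part of the process was AI-assisted
    tool: str             # tool name and version (hypothetical example below)
    rationale: str        # why AI added value here (justified use)
    validation: str       # how outputs were checked against a non-AI method
    human_oversight: str  # who reviewed the output and signed off
    performed_on: str     # ISO date of the step

log = [
    AIUseRecord(
        step="SLR title/abstract screening",
        tool="hypothetical-screening-llm v1.2",
        rationale="Reduce dual-screening workload on a 9,000-record search",
        validation="10% random sample re-screened manually; disagreements adjudicated",
        human_oversight="Senior reviewer approved the final inclusion list",
        performed_on="2025-03-14",
    ),
]

# Exported with the dossier so an evaluator can audit the process, not just the result.
print(json.dumps([asdict(r) for r in log], indent=2, ensure_ascii=False))
```

Whatever the format, the point is that every AI-assisted step carries its own rationale, validation, and sign-off, mirroring the transparency, validation, and human oversight criteria described above.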
Train teams in AI-HEOR methodology
The ISPOR Working Group on generative AI in HEOR is explicit: organizations must adopt AI tools with robust checks and balances, and HEOR professionals need to understand the tools they are using [8]. A team that uses AI without understanding its methodological limitations is a team that cannot defend its evidence before a trained evaluator.
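One concrete example of such a check, again only a sketch and not an ISPOR-prescribed procedure, is to re-screen a random sample of records by hand and measure chance-corrected agreement between the AI-assisted decisions and the human ones. The decision lists below are hypothetical.

```python
def cohens_kappa(ai_labels, human_labels):
    """Chance-corrected agreement (Cohen's kappa) between two binary label lists."""
    n = len(ai_labels)
    p_observed = sum(a == h for a, h in zip(ai_labels, human_labels)) / n
    p_ai = sum(ai_labels) / n
    p_human = sum(human_labels) / n
    # Probability of agreeing by chance on "include" plus agreeing by chance on "exclude"
    p_chance = p_ai * p_human + (1 - p_ai) * (1 - p_human)
    return 1.0 if p_chance == 1 else (p_observed - p_chance) / (1 - p_chance)

# Hypothetical audit sample: 1 = include, 0 = exclude, on manually re-screened records
ai_decisions    = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
human_decisions = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
print(f"Cohen's kappa on the audit sample: {cohens_kappa(ai_decisions, human_decisions):.2f}")
```

A low agreement score on the audit sample is a signal to revisit the tool or the prompt before the dossier goes out, and the score itself is the kind of validation evidence the criteria above point toward.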
The vacuum will not last
CONITEC and IETS will face growing pressure to define their position on AI in evidence. Evaluation demand is rising, resources are limited, and AI tools to accelerate processes are increasingly accessible. When positions arrive, they will most likely follow the NICE model: AI permitted, under strict conditions of transparency, validation, and human oversight.
The pharmaceutical companies building that methodological capability today, not just using AI but using it well, will be the ones that arrive at that moment prepared.
How we help
At Quantus, we support market access and HEOR teams in LATAM to build evidence processes that are rigorous, defensible, and aligned with the international standards that already exist. If you want to understand how to prepare your team for what is coming, write to us.
References
[1] CONITEC. Diretrizes Metodológicas: Elaboração de Estudos para Avaliação Econômica em Saúde. Ministério da Saúde, Brazil. Available at: conitec.gov.br
[2] Library of Congress. Brazil: Senate Advances Discussions on Bill to Regulate AI Use. May 2025. Available at: loc.gov/global-legal-monitor
[3] AmCham Colombia. Proyecto de ley sobre inteligencia artificial: elementos clave de la regulación en Colombia. August 2025. Available at: amchamcolombia.co
[4] NICE. Use of AI in evidence generation: NICE position statement (ECD11). August 2024. Available at: nice.org.uk/corporate/ecd11
[5] Canada's Drug Agency (CDA-AMC). Position Statement on the Use of AI in the Generation and Reporting of Evidence. 2025. Available at: cda-amc.ca
[6] FDA. Draft Guidance: Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products. 2025. Available at: fda.gov
[7] Putnam Associates. Acceptance of Artificial Intelligence in Evidence and Dossier Developments by HTA bodies: Challenges and Opportunities. 2025. Available at: putassoc.com
[8] Fleurence R, et al. Generative Artificial Intelligence for Health Technology Assessment: Opportunities, Challenges, and Policy Considerations. Value in Health. 2025;28(2):175-183.