In this article, we navigate NICE’s Position on the Use of AI in Evidence Generation for Health Technology Assessment (HTA) and explain how Loon Hatch™ – our end-to-end, fully automated, and expert-validated evidence synthesis solution – and Loon Lens™ – our scientifically validated, autonomous literature screener – align with the HTA body’s guidelines on the use of AI in Health Economics and Outcomes Research (HEOR).
Revolutionizing Evidence Synthesis with AI: Loon’s Compliance with NICE Guidelines
The National Institute for Health and Care Excellence (NICE) has recently released guidelines on the responsible use of AI in evidence synthesis for HTA. At Loon, we’re delighted to demonstrate how our AI-powered solutions, Loon Hatch™ and Loon Lens™, align seamlessly with these guidelines. We’re not just meeting them; we’re exceeding them and setting new standards in speed, accuracy, and compliance for Market Access, HTA, and HEOR workflows.
Loon’s AI Solutions: Exceeding NICE Standards
Our end-to-end, AI-powered solutions are designed to redefine evidence synthesis while adhering to NICE’s stringent guidelines:
| NICE Guideline | Loon’s Approach to Compliance |
| --- | --- |
| Human Oversight | Loon Hatch™ AI outputs are always assessed and validated by human experts, ensuring efficiency and accuracy. |
| Validation Audit Trace | We show when and why an expert overrode an AI recommendation, so all validation decisions are transparent and traceable, which strengthens accountability and trust in the AI system. |
| Scientific Methodology | Loon offers full disclosure of the scientific methodologies used in our AI systems, including validation data. |
| Transparency and Justification | Loon provides clear explanations of AI’s role and outcomes through comprehensive documentation, allowing users to track and verify AI decisions alongside expert assessments. |
| Ethical and Legal Compliance | Loon ensures strict adherence to legal frameworks and ethical guidelines, including GDPR, for data protection and fairness. |
| Security and Risk Mitigation | Robust cybersecurity measures and risk-management strategies, such as air-gapping, are in place to protect AI systems and prevent cyber incidents. |
| Detailed Reporting | Loon maintains thorough documentation of AI operations, ensuring transparency and continuous improvement. |
| Early Engagement with NICE | Loon will initiate proactive dialogue with NICE to align AI methods with their frameworks right from the start. |
Scientific Validation: Loon Lens™ Literature Screener
Loon Lens™, our fully automated literature screener, has undergone rigorous scientific validation to ensure its accuracy and reliability in identifying relevant studies for systematic reviews. Loon recently published a validation paper on medRxiv detailing the performance of Loon Lens™, which demonstrates an accuracy of 95.5% (95% CI: 94.8–96.1%), with sensitivity (recall) at 98.95% (95% CI: 97.57–100%) and specificity at 95.24% (95% CI: 94.54–95.89%). These results set a new standard for AI-assisted literature screening. The paper offers full transparency on the methodologies used, model performance, and validation processes, fostering trust and credibility in AI-driven research.
For a more detailed view of the paper, please refer to the full text and article metrics on medRxiv.
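For readers who want to see how screening metrics of this kind are derived, the sketch below shows the standard calculations from a confusion matrix. The counts are placeholders chosen purely for illustration, not the actual figures from the medRxiv paper, and the Wilson confidence-interval helper is our own illustrative choice rather than the method necessarily used in the study.

```python
import math

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a proportion (illustrative helper)."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - half, centre + half

# Placeholder confusion-matrix counts from a hypothetical screening run
tp, fn = 470, 5      # relevant records: correctly included vs. missed
tn, fp = 8600, 430   # irrelevant records: correctly excluded vs. wrongly included

for name, hits, n in [
    ("accuracy", tp + tn, tp + tn + fp + fn),   # all correct calls / all records
    ("sensitivity", tp, tp + fn),               # recall: share of relevant records found
    ("specificity", tn, tn + fp),               # share of irrelevant records excluded
]:
    lo, hi = wilson_ci(hits, n)
    print(f"{name}: {hits / n:.2%} (95% CI: {lo:.2%}-{hi:.2%})")
```

In title-and-abstract screening, sensitivity is typically the critical metric, since a relevant study missed at this stage is unlikely to be recovered later in the review, whereas false inclusions are caught at full-text review.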
Transforming Evidence Synthesis with Loon Hatch™
Loon Hatch™ leverages our patent-pending Cognitive Ensemble AI Systems™ to revolutionize the evidence synthesis process:
- Unparalleled Efficiency: Reduces systematic literature review timelines from 2,500 hours to just 85, accelerating patient access to therapies while helping biopharmaceutical innovators maximize reimbursement potential and eliminate market access delays.
- Complete Transparency:
- Data Audit Trace: Ensures research integrity through full disclosure of data sources.
  - Scientific Validation: Loon publishes papers with full disclosure of the scientific methodologies used in our AI systems, including validation data, so users can understand how our models operate, their performance metrics, and any limitations, fostering trust and enabling informed decision-making.
  - AI Decision Transparency: Allows users to track AI decisions alongside expert assessments (an illustrative decision record is sketched after this list).
- Full Automation and Expert Validation: While our AI fully automates labour-intensive processes, human experts make final decisions, ensuring the highest quality outcomes. This oversight ensures that AI acts as a tool to augment human expertise, not replace it.
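To make the audit-trace idea concrete, here is a minimal sketch of what a single screening decision record could capture. The field names and structure are hypothetical assumptions for illustration only; they are not the actual Loon Hatch™ data model.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ScreeningDecisionRecord:
    """Hypothetical audit-trace entry for one screening decision (illustrative only)."""
    record_id: str               # identifier of the citation being screened
    ai_recommendation: str       # "include" or "exclude"
    ai_rationale: str            # model's stated reason for its recommendation
    expert_decision: str         # final human decision
    override_reason: str | None  # filled in only when the expert overrides the AI
    decided_at: str              # UTC timestamp of the expert's decision

record = ScreeningDecisionRecord(
    record_id="PMID-0000000",
    ai_recommendation="exclude",
    ai_rationale="Population does not match the inclusion criteria.",
    expert_decision="include",
    override_reason="Subgroup reported in the appendix matches the target population.",
    decided_at=datetime.now(timezone.utc).isoformat(),
)

# Persisting the record as JSON yields a transparent, traceable decision log.
print(json.dumps(asdict(record), indent=2))
```

Capturing the AI recommendation and the expert’s final call side by side, with the reason for any override, is what makes the “when and why” behind each validation decision traceable.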
Aligning with NICE’s Vision for AI in HTA
NICE emphasizes AI as a tool to enhance, not replace, human involvement in evidence synthesis. This aligns perfectly with Loon’s approach. For instance, Loon Hatch™ rapidly processes vast amounts of literature, but human experts make the final inclusion decisions.
Our solutions comply with NICE’s recommendations on machine learning (ML) and large language models (LLMs) in evidence synthesis:
- Supporting evidence identification
- Automating study classification
- Streamlining screenings
All of these processes are conducted with rigorous expert oversight, ensuring accuracy and reliability.
Loon’s Commitment to Responsible AI Use
As we continue to innovate, we remain deeply committed to industry standards and guidelines, ensuring that our AI solutions automate processes and enhance efficiency while meeting the highest standards of transparency and ethical use. Our collaboration with regulatory bodies and our deep understanding of clinical research challenges position us as leaders in the future of evidence synthesis.
By choosing Loon Hatch™, you are accelerating your evidence synthesis process and ensuring full compliance with the latest industry guidelines, making your HTA submissions more robust and reliable.
Ready to Transform Your Evidence Synthesis Process?
Contact us today for a demo, or visit loonbio.com to learn more about how we’re revolutionizing market access and clinical research with AI-driven solutions that reduce research timelines from years to days.
About Loon
Loon Inc. is at the forefront of AI-driven market access and clinical research. We help biopharma companies navigate the complexities of market access with confidence, providing innovative solutions that dramatically reduce research timelines while maintaining the highest standards of quality and compliance.
FAQs
What is the role of Loon Hatch™ in evidence synthesis?
Loon Hatch™ is an end-to-end AI solution designed for biopharma companies and their market access and HEOR consultants who want to automate literature screening, data extraction, and the entire evidence synthesis process. Loon Hatch™ enhances HTA and HEOR workflows while maintaining full compliance with NICE guidelines.
What is the role of Loon Lens™ in literature screening?
Loon Lens™, our scientifically validated, autonomous literature screener, accelerates the literature screening process while achieving 98.95% recall and 95.5% accuracy, reducing review time and increasing reliability for researchers.
How do Loon Hatch™ and Loon Lens™ comply with NICE’s guidelines on AI in HTA?
Loon Hatch™ and Loon Lens™ comply with NICE’s guidelines by ensuring transparency in data provenance, providing full methodological transparency, maintaining expert oversight (for Loon Hatch™) and scientific validation (for Loon Lens™), and offering detailed AI decision logs. This approach guarantees that AI is used to support human expertise, not replace it.
What makes Loon’s AI solutions different from other AI tools in the market?
Loon’s AI solutions, including Loon Hatch™ and Loon Lens™, are the only solutions that are both scientifically validated and designed to fully automate systematic literature reviews while ensuring compliance with strict regulatory standards. Our end-to-end solutions provide transparency, expert oversight, and detailed AI decision logs, making them reliable tools for HTA and HEOR processes.
Why is human oversight important in AI-driven evidence synthesis?
Human oversight ensures that AI recommendations are validated by experts who can apply nuanced judgment. This balance between AI efficiency and human expertise is essential for producing accurate and reliable research outputs, especially in critical and regulated healthcare areas.
How does Loon ensure the security and risk management of its AI systems?
Loon implements robust cybersecurity measures and comprehensive risk management strategies to protect AI systems from manipulation and unauthorized access. These precautions are in line with NICE’s guidelines to ensure the safe deployment of AI in healthcare.
What is the significance of NICE guidelines in AI-driven evidence synthesis?
NICE guidelines provide a critical framework for the responsible use of AI in health technology assessments (HTA). They ensure that AI solutions are transparent, ethically compliant, and rigorously validated, which is essential for maintaining the quality and safety of healthcare innovations.