Tag: clinical trials

  • Ensuring Loon’s Compliance with NICE Guidelines on AI Use in Evidence Synthesis


In this article, we walk through NICE’s Position on the Use of AI in Evidence Generation for Health Technology Assessment (HTA) and explain how Loon Hatch™ – our end-to-end, fully automated, expert-validated evidence synthesis solution – and Loon Lens™ – our scientifically validated, autonomous literature screener – align with the HTA body’s guidelines on the use of AI in Health Economics and Outcomes Research (HEOR).


    Revolutionizing Evidence Synthesis with AI: Loon’s Compliance with NICE Guidelines

The National Institute for Health and Care Excellence (NICE) has recently released guidelines on the responsible use of AI in evidence synthesis for HTA. At Loon, we’re delighted to demonstrate how our AI-powered solutions, Loon Hatch™ and Loon Lens™, align seamlessly with these guidelines. We’re not just meeting them; we’re exceeding them, setting new standards in speed, accuracy, and compliance for Market Access, HTA, and HEOR workflows.

    Loon’s AI Solutions: Exceeding NICE Standards

Our end-to-end AI-powered solutions are designed to redefine evidence synthesis while adhering to NICE’s stringent guidelines:

NICE Guideline | Loon’s Approach to Compliance
Human Oversight | Loon Hatch™ AI outputs are always assessed and validated by human experts, ensuring efficiency and accuracy.
Validation Audit Trace | We show when and why an expert overrode an AI recommendation, ensuring that all validation decisions are transparent and traceable, which enhances accountability and trust in the AI system.
Scientific Methodology | Loon offers full disclosure of the scientific methodologies used in our AI systems, including validation data.
Transparency and Justification | Loon provides clear explanations of AI’s role and outcomes through comprehensive documentation, allowing users to track and verify AI decisions alongside expert assessments.
Ethical and Legal Compliance | Loon ensures strict adherence to legal frameworks and ethical guidelines, including GDPR, for data protection and fairness.
Security and Risk Mitigation | Robust cybersecurity measures and risk management strategies, such as air-gapping, are in place to protect AI systems and prevent cyber incidents.
Detailed Reporting | Loon maintains thorough documentation of AI operations, ensuring transparency and continuous improvement.
Early Engagement with NICE | Loon will initiate proactive dialogue with NICE to align AI methods with their frameworks right from the start.


    Scientific Validation: Loon Lens™ Literature Screener

Loon Lens™, our fully automated literature screener, has undergone rigorous scientific validation to ensure its accuracy and reliability in identifying relevant studies for systematic reviews. Loon recently published a validation paper on medRxiv detailing the performance of Loon Lens™, which demonstrates an accuracy of 95.5% (95% CI: 94.8–96.1%), with sensitivity (recall) at 98.95% (95% CI: 97.57–100%) and specificity at 95.24% (95% CI: 94.54–95.89%). These results set a new standard for AI-assisted literature screening. The paper offers full transparency on the methodologies used, model performance, and validation processes, fostering trust and credibility in AI-driven research.

    For a more detailed view of the paper, please refer to the full text and article metrics on medRxiv.
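For readers who want to see how metrics of this kind are derived, the sketch below computes accuracy, sensitivity, and specificity from a screening confusion matrix, with a simple normal-approximation (Wald) 95% interval. The counts are hypothetical, chosen only to illustrate the arithmetic; the actual counts and interval method are reported in the paper itself.

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Point estimate and normal-approximation (Wald) 95% CI for a proportion."""
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical confusion-matrix counts, for illustration only
tp, fn = 660, 7       # relevant records: correctly included vs. missed
tn, fp = 5400, 270    # irrelevant records: correctly excluded vs. wrongly included

sens, s_lo, s_hi = proportion_ci(tp, tp + fn)                # sensitivity (recall)
spec, sp_lo, sp_hi = proportion_ci(tn, tn + fp)              # specificity
acc, a_lo, a_hi = proportion_ci(tp + tn, tp + fn + tn + fp)  # accuracy

print(f"sensitivity {sens:.2%} (95% CI {s_lo:.2%}-{s_hi:.2%})")
print(f"specificity {spec:.2%} (95% CI {sp_lo:.2%}-{sp_hi:.2%})")
print(f"accuracy    {acc:.2%} (95% CI {a_lo:.2%}-{a_hi:.2%})")
```

Published validation studies often use Wilson or Clopper–Pearson intervals instead of the Wald approximation shown here; the Wald form is used only because it is the simplest to write down.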

    Transforming Evidence Synthesis with Loon Hatch™

Loon Hatch™ leverages our patent-pending Cognitive Ensemble AI Systems™ to revolutionize the evidence synthesis process.

    Aligning with NICE’s Vision for AI in HTA

    NICE emphasizes AI as a tool to enhance, not replace, human involvement in evidence synthesis. This aligns perfectly with Loon’s approach. For instance, Loon Hatch™ rapidly processes vast amounts of literature, but human experts make the final inclusion decisions.

Our solutions comply with NICE’s recommendations on machine learning (ML) and large language models (LLMs) in evidence synthesis.

    All of these processes are conducted with rigorous expert oversight, ensuring accuracy and reliability.

    Loon’s Commitment to Responsible AI Use

    As we continue to innovate, we remain deeply committed to adhering to industry standards and guidelines, ensuring that our AI solutions automate processes and enhance efficiency while also meeting the highest standards of transparency and ethical use. Our collaboration with regulatory bodies and profound understanding of clinical research challenges position us as leaders in the future of evidence synthesis.

    By choosing Loon Hatch™, you are accelerating your evidence synthesis process and ensuring full compliance with the latest industry guidelines, making your HTA submissions more robust and reliable.

    Ready to Transform Your Evidence Synthesis Process?

    Contact us today for a demo, or visit loonbio.com to learn more about how we’re revolutionizing market access and clinical research with AI-driven solutions that reduce research timelines from years to days.


    About Loon

    Loon Inc. is at the forefront of AI-driven market access and clinical research. We help biopharma companies navigate the complexities of market access with confidence, providing innovative solutions that dramatically reduce research timelines while maintaining the highest standards of quality and compliance.

  • Making Systematic Reviews Feasible for Every Clinical Trial with Loon Hatch™ and Revolutionizing Clinical Research


    Can starting and ending clinical trials with Systematic Reviews truly be feasible? It hasn’t been—until now!

    In the world of clinical research, systematic reviews are essential for ensuring that trials are well-informed, ethically sound, and impactful. Yet, a recent study by Clarke et al. uncovered a concerning trend: out of 175 randomized controlled trial (RCT) reports published over 25 years in five top-tier medical journals, only 2.9% referenced up-to-date systematic reviews in their Introduction sections. Even more alarming, just 3.4% incorporated their findings into an updated systematic review in the Discussion sections.

    The Problem: Research Gaps That Could Cost Lives

These numbers are not just disappointing; they are dangerous. Without systematic reviews, clinical trials take on serious, avoidable risks.

    The Reality: Systematic Reviews Are Time-Consuming and Resource-Intensive

    It’s easy to say that systematic reviews should be integrated into every stage of a clinical trial, but the reality is far from simple. The average systematic review takes 2,500 person-hours and several expert reviewers to complete. Considering that each clinical trial would require at least two systematic reviews, you’re looking at an additional 5,000 person-hours—a significant strain on already limited resources.

    The Solution: Loon Hatch™ – Automating Systematic Reviews for the Future of Clinical Research

    This is where Loon Hatch™ comes in. At Loon, we’ve developed a groundbreaking tool that automates the systematic review process, making it possible to maintain living systematic reviews—rapidly and effortlessly.

With Loon Hatch™, the once daunting task of integrating systematic reviews into clinical trials becomes a seamless part of the research process. Imagine being able to integrate up-to-date systematic reviews at every stage of a trial.

    And when it comes to updating existing systematic reviews with new trial results? With Loon Hatch™, it’s as simple as publishing your findings. Our tool automatically updates your living systematic review, ensuring that your research remains at the cutting edge.

    The Future is Now: Join Us in Advancing Evidence-Informed Research

AI-enabled technologies like Loon Hatch™ are transforming the future of clinical research, making it possible and practical to integrate systematic reviews at every stage of a clinical trial. This isn’t just a step forward; it’s a move towards a more ethical, efficient, and impactful research process.

    Let’s connect and explore how we can push the boundaries of evidence-informed research together. The future of clinical research is truly exciting, and with tools like Loon Hatch™, we’re just getting started.

  • Augmented Intelligence for Clinical Discovery in Hypertensive Disorders of Pregnancy Using Outlier Analysis


Clinical discoveries are heralded by observing unique and unusual clinical cases. The effort of identifying such cases rests on the shoulders of busy clinicians. We assess the feasibility and applicability of an augmented intelligence framework to accelerate the rate of clinical discovery in preeclampsia and hypertensive disorders of pregnancy, an area that has seen little change in its clinical management.

    Methods


We conducted a retrospective exploratory outlier analysis of participants enrolled in the Folic Acid Clinical Trial (FACT, N=2,301) and the Ottawa and Kingston birth cohort (OaK, N=8,085). We applied two outlier analysis methods: extreme misclassification contextual outlier detection and isolation forest point outlier detection. The extreme misclassification contextual outlier method is based on a random forest predictive model for the outcome of preeclampsia in FACT and hypertensive disorders of pregnancy in OaK. In the extreme misclassification approach, we defined outliers as observations misclassified with a confidence level of more than 90%. In the isolation forest approach, we defined outliers as observations with an average path length z-score less than or equal to −3 or greater than or equal to 3. Content experts reviewed the identified outliers and determined whether they represented a potential novelty that could conceivably lead to a clinical discovery.
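The two detection steps described above can be sketched with scikit-learn. This is a minimal illustration on simulated data, not the study’s actual pipeline: the features, outcome, and model settings are invented, and only the two decision rules (|z| ≥ 3 on isolation forest scores; misclassification with >90% confidence) mirror the definitions in the text.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
# Simulated cohort: 1,000 participants, 5 clinical features, binary outcome
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 1.2).astype(int)

# 1) Point outliers: isolation forest, flagging observations whose
#    anomaly score lies 3 or more standard deviations from the mean
iso = IsolationForest(random_state=0).fit(X)
scores = iso.score_samples(X)
z = (scores - scores.mean()) / scores.std()
point_outliers = np.flatnonzero(np.abs(z) >= 3)

# 2) Contextual outliers: random forest extreme misclassification,
#    i.e. observations the model assigns to the *opposite* label
#    with more than 90% confidence (using out-of-bag probabilities
#    so each observation is scored by trees that never saw it)
rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                            random_state=0).fit(X, y)
proba = rf.oob_decision_function_
wrong_conf = proba[np.arange(len(y)), 1 - y]  # confidence in the non-observed label
contextual_outliers = np.flatnonzero(wrong_conf > 0.90)

print(len(point_outliers), "point outliers;",
      len(contextual_outliers), "contextual outliers")
```

In the framework described here, the flagged observations would then go to content experts for review, which is the human half of the augmented intelligence loop.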

    Results


    In the FACT study, we identified 19 outliers using the isolation forest algorithm and 13 outliers using the random forest extreme misclassification approach. We determined that three (15.8%) and 10 (76.9%) were potential novelties, respectively. Out of 8,085 participants in the OaK study, we identified 172 outliers using the isolation forest algorithm and 98 outliers using the random forest extreme misclassification approach; four (2.3%) and 32 (32.7%), respectively, were potential novelties. Overall, the outlier analysis part of the augmented intelligence framework identified a total of 302 outliers. These were subsequently reviewed by content experts, representing the human part of the augmented intelligence framework. The clinical review determined that 49 of the 302 outliers represented potential novelties.

    Conclusions


Augmented intelligence using extreme misclassification outlier analysis is a feasible and applicable approach for accelerating the rate of clinical discoveries. The extreme misclassification contextual outlier analysis approach yielded a higher proportion of potential novelties than the more traditional point outlier isolation forest approach, a finding consistent across both the clinical trial and the real-world cohort study data. Using augmented intelligence through outlier analysis has the potential to speed up the identification of potential clinical discoveries. This approach can be replicated across clinical disciplines and could be embedded within electronic medical record systems to automatically flag outliers in clinical notes for review by clinical experts.

    Keywords: augmented intelligence; clinical discovery; clinical trials; hdp; hypertensive disorders of pregnancy; preeclampsia treatment; preeclampsia-eclampsia; real-world data; research methods and design.