• Algorithmic health®
  • cockpit.health™
  • Validated and Safe

Learning nature's algorithms for health™


What We Do

Evidence-Based AI for Better Health™

Research

Methodological rigor. Validation and Verification (V&V). Clinical Trials.

Design

Modern and interoperable cognitive software architecture. Human-centered design.

Develop

Test-driven development for high-quality software.

Deploy

Secure cloud deployment.

Here are some interesting facts about global health, drawn from the WHO's World Health Statistics 2015:

  • Life expectancy at birth (years)
  • Infant mortality rate (risk of dying by age 1 per 1,000 live births)
  • Adults aged 18 years and older who are overweight, BMI ≥ 25 (%)
  • Physicians per 10,000 population

As Sir William Osler observed: "Medicine is a science of uncertainty and an art of probability."

cockpit.health™

An AI-enabled platform built with neuroergonomic design principles to support the patient journey

Monitoring

Ubiquitous and wearable. Preparing for future pandemics.

  • Virtual coaching for behavior change
  • Monitoring and early detection
  • Alerts and reminders
  • Digital pathogen surveillance

Algorithms

Methodological rigor, validation & verification. Clinical Trials.

  • EMR, labs, imaging, multi-omics
  • Diagnosis, prognosis, causality
  • Clinical Question Answering
  • Simulation and Visual Analytics

Treatment

AI-driven reengineering for the Triple Aim: care quality, cost, and population health.

  • Dynamic Treatment Regimes
  • Guidelines and protocols
  • Digital Twin
  • Run-time monitoring

Interoperability

Secure and interoperable health informatics systems.

  • HL7 standards
  • Biomedical ontologies
  • Federated Machine Learning
  • Privacy and security

Get Connected

Please follow us on Twitter and LinkedIn.

Our Approach



In a paper titled Medical error — the third leading cause of death in the US, published in the BMJ in May 2016, Martin Makary and Michael Daniel (patient safety researchers at Johns Hopkins University School of Medicine) estimated the incidence of medical error at more than 250,000 deaths a year. With the discovery of new biomarkers from imaging, genomics, transcriptomics, epigenomics, proteomics, metabolomics, microbiomics, and other -omics, the number of data types that should be considered in clinical decision making will quickly surpass the information processing capacity of the human brain. Furthermore, there is an increasing awareness of the social, economic, and environmental determinants of human health. We believe that the safe use of AI will translate into a reduced incidence of iatrogenic errors, improved health outcomes, and better quality of life for patients.

The global COVID-19 pandemic has laid bare the need for an adaptive and resilient healthcare system. A new reality calls for care that is delivered outside of the traditional hospital walls, including self-care and virtual care using personal health sensors, at-home testing kits during pandemics, and AI algorithms that provide diagnosis and evidence-based health guidance ubiquitously, in real time, and at scale. While reducing infection risk, this approach still keeps clinicians in the loop through remote patient monitoring technologies. Deteriorating patients and those requiring close medical attention or intervention can be brought into hospitals or dedicated facilities for treatment. Some of these dedicated facilities can be built just-in-time during pandemics, like Wuhan's Huoshenshan Hospital in China's Hubei Province, which was built in just 10 days. Since the next pandemic will not give us explicit advance warning, we will also need a global network and digital infrastructure for pandemic surveillance and data sharing. It is this vision of care delivery that we refer to as cockpit.health™ — a platform that we're building by leveraging our founder's background in aeronautical engineering.

The Swiss cheese model of accident causation is of particular relevance to both the aviation and healthcare domains and can inform a systems approach to safety. Developed by James Reason, the theory suggests that an accident or medical error occurs when the holes in the cheese slices align, allowing "a trajectory of accident opportunity". Each slice represents a layer of defense against failure, including factors such as: human-centered design following cognitive ergonomics and human factors engineering principles (a potential cure for EHR-induced physician burnout, which increases the risk of medical error); validation and verification (V&V); crew resource management (CRM) focusing on interpersonal communication and teamwork; regulations and certification; simulation training; clinical guidelines and care protocols; and organizational leadership and culture. We also see interesting parallels between optimal control theory in aerospace engineering and the role of allostasis in regulating human physiology and behavior — and hence health and disease trajectories.

Although we work on interventions that leverage AI and computational health informatics, we approach healthcare as what it truly is — a complex system. Our methodology is therefore interdisciplinary and informed by complexity science. For example, there is a clear interdependence between: the environmental impact of meat production (greenhouse gas emissions, waste disposal, antibiotic resistance, use of fertilizers, pesticides, and growth enhancers like clenbuterol); the extinction of wildlife and the loss of biodiversity due to livestock encroachment; wildlife poaching due to economic deprivation; the global wildlife trade and the sale of wildlife meat in open wet markets; chronic diseases like hypertension, heart disease, diabetes, and cancer; and zoonotic diseases like COVID-19 which are caused by viruses jumping from animals to humans. We fully embrace the vision of One Health which promotes a unified approach to the health of people, animals, and the environment.

It is estimated that physicians' decisions contribute to 80% of the $3.5 trillion in annual US healthcare spending, which accounted for 17.9% of the country's Gross Domestic Product (GDP) in 2017. AI can support a continuously learning and adaptive healthcare system as well as guideline-concordant, fast, safe, and explainable clinical decisions. We are equally intrigued by the opportunity to use AI to support patients in their transition to a healthier lifestyle, including nutrition, smoking cessation, physical activity, stress reduction, and therapy adherence. According to a study by Bolnick et al. titled Health-care spending attributable to modifiable risk factors in the USA: an economic attribution analysis, modifiable risk factors contributed $730.4 billion, or 27% of total US healthcare expenditures, in 2016. AI algorithms on smartphones and wearables can be part of the solution as they are ubiquitous and can give patients the support they need as they go about their daily lives. Behavior change by patients and physicians remains the most important challenge in healthcare. Our solutions incorporate insights from Behavioral Economics like nudging. Ultimately, we seek to understand nature's algorithms which underlie human biology and health.

The dictum primum non nocere ("first, do no harm"), traditionally associated with the Hippocratic Oath, applies to AI in healthcare as well. Our approach — Evidence-Based AI for Better Health™ — is based on the use of rigorous methodologies for the development, validation, transparent reporting, and safe deployment of AI algorithms. These algorithms support virtual coaching for patients' health behavior change, prevention, care alerting, diagnosis, prognosis, care planning, clinical decision-making, and remote patient monitoring.

Out-of-pocket healthcare spending in the US grew to $365.5 billion in 2017, while high deductibles and co-payments prevent patients from seeking much-needed healthcare services. Meanwhile, in Ontario (Canada), healthcare spending totaled $61.3 billion, accounting for 38.7% of the 2018-2019 provincial government's budget. Health is also a global challenge. Estimated cancer deaths in Africa are expected to increase by almost 70% to 1 million by 2030. The responsible use of AI can alleviate the severe shortage of radiology and oncology specialists in Africa by extending the capabilities of generalist physicians and nurses.

With hundreds of billions of dollars of global investments in biomedical research (the US National Institutes of Health alone spends $45 billion annually) accelerating scientific discoveries and expanding our medical knowledge base, we believe that the real challenge is in care delivery. The estimated average bench-to-bedside lag time is 17 years. Unwarranted variations from evidence-based Clinical Practice Guidelines (CPGs), medical errors, and unaffordability (due in part to the lack of price transparency) are persistent challenges.

Based on research on human clinical decision making from the fields of neuroscience and the cognitive sciences, our approach emphasizes patient safety, effective human-machine interaction, integrated care pathways, and the importance of the clinician-patient relationship. Our approach supports a shared decision making process which takes into account the values, goals, and wishes of the patient. We see AI in healthcare as a tool which allows clinicians to focus more on providing care compassionately. One lesson we have learned from studying the introduction of AI in medicine during the last decade is that the responsible use of AI requires not only validation and verification but also prospective studies to evaluate the efficacy of AI on patient-centered outcomes which include essential measures such as survival, time to recovery, severity of side effects, quality of life, functional status, remission (e.g., depression remission at six and twelve months), and health resource utilization. The recently released guidelines for clinical trial protocols for interventions involving artificial intelligence (SPIRIT-AI extension) and the guidelines for clinical trial reports for interventions involving artificial intelligence (CONSORT-AI extension) represent significant milestones for evidence-based AI in healthcare.

The introduction of predictive models into clinical practice requires rigorous validation. In the context of supervised Machine Learning, dataset and covariate shifts can produce incorrect and unreliable predictions when the model training and deployment environments differ due to population, equipment, policy, or practice variations. We follow existing consensus guidelines such as the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD) Statement. Internal validation using methods like cross-validation or preferably the Bootstrap should provide clear performance measures such as discrimination (e.g., C-statistic or D-statistic) and calibration (e.g., calibration-in-the-large and calibration slope). In addition to internal validation, external validation should be performed as well to determine the generalizability of the model to other patient populations. External validation can be performed with data collected at a different time (temporal validation) or at different locations, countries, or clinical settings (geographic validation). The clinical usefulness of the prediction model (net benefit) can be evaluated using decision curve analysis. We look forward to the release of Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis–Machine Learning (TRIPOD-ML) and the Standards For Reporting Diagnostic Accuracy Studies–Artificial Intelligence (STARD-AI).
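
As a minimal sketch of what internal validation can look like in practice (not our production pipeline), the following Python snippet computes an optimism-corrected C-statistic via Harrell's bootstrap, along with a calibration slope; the logistic model, data shapes, and synthetic data are assumptions made for illustration.

```
# Illustrative sketch: bootstrap internal validation of a binary clinical
# prediction model. The logistic model and synthetic data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def fit(X, y):
    return LogisticRegression(max_iter=1000).fit(X, y)

def c_statistic(model, X, y):
    # Discrimination: for a binary outcome, the C-statistic equals the ROC AUC.
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

def calibration_slope(model, X, y):
    # Calibration slope: refit the outcome on the logit of the predictions;
    # a slope near 1.0 indicates good calibration.
    logits = np.log(model.predict_proba(X)[:, 1] / model.predict_proba(X)[:, 0])
    return LogisticRegression(max_iter=1000).fit(logits.reshape(-1, 1), y).coef_[0, 0]

def bootstrap_optimism(X, y, n_boot=200):
    # Harrell's bootstrap: optimism = performance on the bootstrap sample
    # minus performance of the same model on the original data.
    apparent = c_statistic(fit(X, y), X, y)
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        m = fit(X[idx], y[idx])
        optimism.append(c_statistic(m, X[idx], y[idx]) - c_statistic(m, X, y))
    return apparent - np.mean(optimism)  # optimism-corrected C-statistic

# Usage on synthetic data (for illustration only):
X = rng.normal(size=(500, 4))
y = (rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
print(f"optimism-corrected C-statistic: {bootstrap_optimism(X, y):.3f}")
print(f"calibration slope: {calibration_slope(fit(X, y), X, y):.2f}")
```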

In the real world, we expect AI algorithms to be embedded in cyber-physical systems like medical devices. The medical device industry has a long history of implementing strict quality standards, and we believe there is a lot to learn from that experience. Deep neural networks (DNNs) have recently achieved impressive levels of performance in diagnosis and prognosis tasks. However, they tend to overfit in low-data regimes (prior to 2009, less than 10% of US hospitals had an Electronic Health Record or EHR system); are vulnerable to minor adversarial perturbations; are not interpretable; and lack conceptual understanding as well as logical and causal reasoning abilities. Traditional statistical validation approaches like cross-validation and the Bootstrap perform validation using data from the same clinical data set used for training the algorithm. This data set is typically collected during routine clinical care and stored in an electronic health records system or an imaging database. Formal methods, on the other hand, can generate counterexamples such as out-of-distribution (OOD) and adversarial inputs which can result in incorrect predictions. There is a growing literature on the use of formal methods based on probabilistic verification for providing provable guarantees of the robustness, safety, and fairness of Machine Learning algorithms.
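
To make the vulnerability to adversarial perturbations concrete, here is an illustrative sketch of the fast gradient sign method (FGSM) applied to a plain logistic model. The weights, input, and perturbation budget are hypothetical; the point is only that a small, structured perturbation can flip a prediction.

```
# Sketch of an adversarial perturbation (fast gradient sign method, FGSM)
# against a plain logistic model. All numbers are hypothetical.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.2, -0.8, 0.5])   # assumed model weights
b = -0.1
x = np.array([0.1, 0.8, 0.2])    # an input classified as low risk (p < 0.5)
y = 0.0                          # its label

# Gradient of the log-loss with respect to the input x is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

eps = 0.25                        # perturbation budget (assumed)
x_adv = x + eps * np.sign(grad_x) # FGSM step: nudge each feature by +/- eps

print(f"original prediction:  {sigmoid(w @ x + b):.3f}")   # ~0.37
print(f"perturbed prediction: {sigmoid(w @ x_adv + b):.3f}")  # crosses 0.5
```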

Automated run-time monitoring of algorithmic performance can help tackle issues like dataset shifts through the real-time monitoring of inputs, outputs, and other variables (e.g., by detecting OOD input data).
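
A hedged sketch of what such a run-time monitor might look like, assuming simple per-feature z-score screening against training-set statistics (real deployments would use richer OOD detectors):

```
# Sketch of a run-time input monitor that flags potential dataset shift
# by z-scoring incoming features against training-set statistics.
# The threshold and the synthetic data are illustrative assumptions.
import numpy as np

class InputMonitor:
    def __init__(self, X_train, z_threshold=4.0):
        self.mean = X_train.mean(axis=0)
        self.std = X_train.std(axis=0) + 1e-9
        self.z_threshold = z_threshold

    def check(self, x):
        """Return the indices of features whose z-score exceeds the threshold."""
        z = np.abs((x - self.mean) / self.std)
        return np.flatnonzero(z > self.z_threshold)

# Usage: fit on training data, then screen each input before inference.
X_train = np.random.default_rng(1).normal(size=(1000, 5))
monitor = InputMonitor(X_train)
suspect = monitor.check(np.array([0.1, 0.2, 9.5, 0.0, -0.3]))
if suspect.size:
    print(f"possible out-of-distribution input on features {suspect.tolist()}")
```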

Principled approaches should be used for handling missing data and for representing the uncertainty inherent in clinical data including measurement errors and misclassification. Bayesian Decision Theory is a principled methodology for solving decision-making problems under uncertainty. We see the Bayesian approach as a promising alternative to null hypothesis significance testing (using statistical significance thresholds like the p-value) which has contributed to the current replication crisis in biomedicine.
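
As an illustration of Bayesian Decision Theory, the sketch below chooses the action that minimizes posterior expected loss; the Beta posterior, loss table, and clinical framing are assumptions made for the example.

```
# Sketch of Bayesian decision making under uncertainty: choose the action
# minimizing posterior expected loss. All numbers are assumed.
from scipy import stats

# Posterior over the probability that the patient has the disease,
# e.g., Beta(7, 3) after observing test results (assumed counts).
posterior = stats.beta(a=7, b=3)

# Loss table (assumed): action -> (loss if disease, loss if no disease).
loss = {"treat":    (1.0, 4.0),    # side effects if treated needlessly
        "no_treat": (20.0, 0.0)}   # high loss for missing the disease

# Because the loss is linear in the disease probability, the posterior
# expected loss depends only on the posterior mean.
p = posterior.mean()  # posterior probability of disease = 0.7
expected_loss = {a: p * l_d + (1 - p) * l_h for a, (l_d, l_h) in loss.items()}
best = min(expected_loss, key=expected_loss.get)
print(expected_loss, "->", best)   # "treat" minimizes expected loss here
```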

Furthermore, clinicians sometimes need answers to counterfactual questions at the point of care (e.g., when estimating the causal effect of a clinical intervention). We believe that these questions are best answered within the framework of Causal Inference as opposed to prediction with Machine Learning. It is a well-known adage that correlation does not imply causation. Causal models are more robust to dataset shifts than predictive models and better reflect the underlying biology of disease. Even in medical imaging tasks that are often thought of as pure pattern recognition, the lack of causality can lead to models that rely on spurious correlations (e.g., scanner brand and model). Increasingly, observational studies based on Causal Inference over real-world clinical data are being recognized as complementary to randomized controlled trials (RCTs) — the gold standard for Evidence-Based Practice (EBP). These observational studies provide Practice-Based Evidence (PBE), which is necessary for closing the evidence loop.
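
For a concrete, toy illustration of Causal Inference over observational data, the sketch below estimates an average treatment effect with inverse probability weighting (IPW) on simulated data; it assumes no unmeasured confounding, and all variables are synthetic.

```
# Sketch: average treatment effect (ATE) from observational data using
# inverse probability weighting (IPW). Everything here is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 1))                       # confounder (e.g., severity)
t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))   # sicker patients treated more
y = 2.0 * t + 1.5 * x[:, 0] + rng.normal(size=n)  # true treatment effect = 2.0

# Naive difference in means is confounded by severity.
naive = y[t == 1].mean() - y[t == 0].mean()

# IPW: weight each patient by the inverse of their propensity score.
ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
ate = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
print(f"naive: {naive:.2f}  IPW estimate: {ate:.2f}  (truth: 2.0)")
```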

The ongoing COVID-19 pandemic has revealed the impact of the lack of timely access to the trustworthy data necessary for developing algorithms that can be used in making evidence-based clinical decisions. In a paper titled Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal published in the BMJ in January 2021, Wynants et al. reviewed 169 prediction models and concluded: "The COVIDPRECISE group does not recommend any of the current prediction models to be used in practice, but one diagnostic and one prognostic model originated from higher quality studies and should be (independently) validated in other datasets." As the British statistician Doug Altman put it: "To maximise the benefit to society, you need to not just do research, but do it well."

We believe that infectious disease registries should be implemented to aggregate clinical data at the healthcare provider, national, and global levels. Access to the data can be accomplished by exposing these registries and other clinical data sources through web application programming interfaces (APIs) or using privacy-preserving federated machine learning architectures, subject to appropriate data governance mechanisms. However, this will necessitate the global adoption of foundational standards for health data, semantic interoperability, security, and privacy. These standards already exist and include: HL7 FHIR, SNOMED CT, and OpenID Connect.
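
As an illustration of the foundational standards mentioned above, here is a minimal HL7 FHIR R4 Observation resource (a heart-rate reading) as it might be exchanged with a registry's REST API; the patient reference and endpoint are hypothetical placeholders.

```
# Sketch of a minimal HL7 FHIR R4 Observation resource (a heart-rate
# reading). The patient reference and server URL are hypothetical.
import json
import urllib.request

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",          # LOINC code for heart rate
            "display": "Heart rate",
        }]
    },
    "subject": {"reference": "Patient/example"},   # hypothetical patient
    "effectiveDateTime": "2021-03-01T09:30:00Z",
    "valueQuantity": {
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",                # UCUM code for "per minute"
    },
}

# POST to a FHIR server (placeholder endpoint) with a JSON payload.
req = urllib.request.Request(
    "https://fhir.example.org/Observation",
    data=json.dumps(observation).encode(),
    headers={"Content-Type": "application/fhir+json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment against a real, authorized server
```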

The American engineer, statistician, and quality guru W. Edwards Deming once famously remarked: "In God we trust; all others must bring data." We strongly believe in data-driven evidence and decision making. The Machine Learning community has a lot to learn from the field of biostatistics, which has developed many techniques, tools, and guidelines (such as the TRIPOD Statement) for developing and validating clinical prediction models. Deep Learning practitioners are starting to pay attention to the challenge of identifiability, which has been well studied in statistics and econometrics. On the other hand, Machine Learning has come of age in the era of Big Data and has proven effective at handling high-dimensional data sets. However, AI in healthcare is not a mere rebranding of biostatistics, although we do hope that a mutually enriching relationship will exist between the two fields.

Clinical Cognition — the study of the cognitive and psychological processes that underlie diagnostic reasoning, clinical decision making, cognitive biases, and human errors — can inform the design of Cognitive Architectures for safe and human-centered AI in healthcare. A Cognitive Architecture implements a computational model of the various mechanisms and processes involved in cognition, such as: perception (visual, tactile, auditory), memory, attention, learning, causality, reasoning, decision making, planning, emotions, motivation, communication, social interaction, and metacognition. We take the view that AI is an interdisciplinary field studying artificial minds and taking inspiration from biological minds with all their inherent complexity. As the physicist Richard Feynman explained: "If our small minds, for some convenience, divide this universe, into parts — physics, biology, geology, astronomy, psychology, and so on — remember that nature does not know it!"

To understand the current limitations of Deep Learning in medicine, one should start with a general theory of clinical decision making. One such theory, Dual Process Theory, was popularized by Daniel Kahneman in his book Thinking, Fast and Slow. Dr. Pat Croskerry has written extensively about the application of Dual Process Theory to diagnostic reasoning and clinical decision making. According to Dual Process Theory, human reasoning consists of two different systems. System 1 is fast, emotional, intuitive, stereotypical, unconscious, and automatic. System 2 is slow, conscious, deliberate, objective, and logical. Deep Learning, which is based on pattern recognition, is a System 1 process. System 1 is where cognitive biases and medical errors are more likely to occur. As in any complex system, System 1 and System 2 are not isolated processes but interact significantly to produce rational, ethical, and safe clinical decisions. This is why we are generally cautious when reading comparisons between the performance of clinicians and that of AI algorithms in research papers and the press. For example, in addition to perceptual processing, radiologists also recruit other cognitive abilities such as attention, metacognition, conceptual processing (including knowledge of human biology and disease), and causal reasoning during medical image interpretation.

Furthermore, comparisons between machine and human perception are fraught with issues resulting from human cognitive bias. An example of cognitive bias is the human tendency to attribute anthropomorphic competencies to an AI agent that is only learning image surface statistics, as opposed to the conceptual abstractions that medical students learn. This inevitably leads to finding spurious correlations in the data set. In Adversarial Examples Are Not Bugs, They Are Features, Ilyas et al. called adversarial vulnerabilities "a fundamentally human phenomenon." Concept learning starts early in infancy — a research topic in the field of developmental psychology — and continues throughout medical education and practice. It is also related to embodiment and to the sensorimotor experiences and memories which in turn enable common sense reasoning and counterfactual thinking (causality). During a recent public AI debate, Judea Pearl said: "I am very much opposed to the culture of data only....I believe that we should build systems which have a combination of knowledge of the world together with data." Not heeding Pearl's advice will impede the progress and trustworthiness of AI.

German mathematician, theoretical physicist, and philosopher Hermann Weyl (1885-1955) once remarked that "logic is the hygiene the mathematician practices to keep his ideas healthy and strong." French mathematician Jacques Hadamard (1865-1963) stated that "logic merely sanctions the conquests of the intuition." Logic-based Clinical Decision Support (CDS) systems for medical Knowledge Representation and Reasoning (KRR) have been successfully deployed for the automatic execution of CPGs and care pathways at the point of care. Description Logic (DL) is the foundation of the Systematized Nomenclature of Medicine (SNOMED) — an ontology which contains more than 300,000 carefully curated medical concepts organized into a class hierarchy and enabling automated reasoning capabilities based on subsumption and attribute relationships between medical concepts.
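
As a toy illustration of subsumption reasoning (real SNOMED CT uses a Description Logic reasoner over a far richer model, so the concepts and is-a edges below are simplified assumptions):

```
# Toy illustration of subsumption over a SNOMED CT-like class hierarchy.
# The concepts and is-a edges below are a simplified, assumed fragment.
IS_A = {
    "myocardial infarction":  "heart disease",
    "heart disease":          "cardiovascular disease",
    "cardiovascular disease": "disease",
}

def subsumed_by(concept, ancestor):
    """True if `ancestor` subsumes `concept` via the is-a hierarchy."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = IS_A.get(concept)
    return False

# A rule written against "cardiovascular disease" automatically applies
# to a patient coded with the more specific "myocardial infarction".
print(subsumed_by("myocardial infarction", "cardiovascular disease"))  # True
```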

The clinical algorithms in CPGs often require the automated execution of highly accurate and precise calculations (over multiple clinical concept codes and numeric values) which are better performed with a logic-based formalism. An example is a clinical recommendation based on multiple diagnoses or co-morbidities, the patient's age and gender, physiological measurements like vital signs, and laboratory result values. Consider the following rule from the 2013 American College of Cardiology Foundation/American Heart Association (ACCF/AHA) Guideline for the Management of Heart Failure: "Aldosterone receptor antagonists (or mineralocorticoid receptor antagonists) are recommended in patients with NYHA [New York Heart Association] class II-IV HF [Heart Failure] and who have LVEF [left ventricular ejection fraction] of 35% or less, unless contraindicated, to reduce morbidity and mortality. Patients with NYHA class II HF should have a history of prior cardiovascular hospitalization or elevated plasma natriuretic peptide levels to be considered for aldosterone receptor antagonists. Creatinine should be 2.5 mg/dL or less in men or 2.0 mg/dL or less in women (or estimated glomerular filtration rate > 30 mL/min/1.73 m²), and potassium should be less than 5.0 mEq/L. Careful monitoring of potassium, renal function, and diuretic dosing should be performed at initiation and closely followed thereafter to minimize risk of hyperkalemia and renal insufficiency". Healthcare payers have established strict quality measures to ensure physicians' concordance with CPGs.
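
To show how such a guideline rule can become executable logic, here is a hedged Python sketch encoding the eligibility criteria quoted above; the field names and data class are our assumptions, and a production CDS system would operate over coded EHR data (e.g., SNOMED CT, LOINC) with full contraindication checking.

```
# Sketch: the ACCF/AHA aldosterone receptor antagonist rule quoted above,
# encoded as executable logic. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Patient:
    sex: str                      # "male" or "female"
    nyha_class: int               # 1-4
    lvef_percent: float
    creatinine_mg_dl: float
    egfr_ml_min: float            # per 1.73 m^2
    potassium_meq_l: float
    prior_cv_hospitalization: bool
    elevated_natriuretic_peptide: bool
    contraindicated: bool = False

def aldosterone_antagonist_recommended(p: Patient) -> bool:
    # NYHA class II-IV HF with LVEF <= 35%, unless contraindicated.
    if p.contraindicated or not (2 <= p.nyha_class <= 4) or p.lvef_percent > 35:
        return False
    # NYHA class II additionally requires prior CV hospitalization or
    # elevated plasma natriuretic peptide levels.
    if p.nyha_class == 2 and not (p.prior_cv_hospitalization
                                  or p.elevated_natriuretic_peptide):
        return False
    # Renal function: creatinine threshold by sex, or eGFR > 30 mL/min/1.73 m^2.
    creat_ok = p.creatinine_mg_dl <= (2.5 if p.sex == "male" else 2.0)
    if not (creat_ok or p.egfr_ml_min > 30):
        return False
    # Potassium must be below 5.0 mEq/L.
    return p.potassium_meq_l < 5.0
```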

Machine autonomy in the care management of patients runs counter to the principle of shared decision making in medicine. Legal scholars and lawyers should decide whether existing doctrines of informed consent are still relevant or should be updated. In the meantime, the use of AI should be disclosed to patients in routine care. This can be done as part of the well-established principle of shared decision-making, which considers the values, goals, and preferences of the patient during care planning. Argumentation Theory is a long-established branch of AI that can help reconcile AI recommendations, uncertainty, risks and benefits, patient preferences, clinical practice guidelines, and other scientific evidence. As a guide to rational clinical decision making (by evaluating and communicating the pros and cons of various courses of action), the implementation of Argumentation Theory may also reduce physicians' exposure to liability by generating arguments for potential jurors. This approach empowers both the patient and the clinician to reason, given that modern AI algorithms like Deep Learning are based on pattern recognition and lack logical and causal reasoning abilities. In their paper titled Why do humans reason? Arguments for an argumentative theory, Hugo Mercier and Dan Sperber wrote: "Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade."
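
As a toy illustration, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework (the least fixed point of the characteristic function); the clinical arguments and attack relation are invented for the example.

```
# Toy sketch of Dung's abstract argumentation: compute the grounded
# extension for an invented set of clinical arguments and attacks.
ARGS = {"recommend_drug", "renal_contraindication", "recent_normal_egfr"}
ATTACKS = {("renal_contraindication", "recommend_drug"),
           ("recent_normal_egfr", "renal_contraindication")}

def attackers(a):
    return {x for (x, y) in ATTACKS if y == a}

def defended(s):
    # An argument is acceptable w.r.t. s if s attacks all of its attackers.
    return {a for a in ARGS
            if all(attackers(b) & s for b in attackers(a))}

# Iterate the characteristic function from the empty set to a fixed point.
grounded = set()
while (nxt := defended(grounded)) != grounded:
    grounded = nxt

print(sorted(grounded))  # ['recent_normal_egfr', 'recommend_drug']
```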

Most of the headline-grabbing news about AI in healthcare has been about medical imaging applications using Deep Learning. However, real-world clinical practice involves complex decisions and processes. Therefore, we use decision and process modeling to determine how to safely embed AI algorithms in clinical decision making, treatment planning, and care pathways, with the goal of achieving seamless clinical workflows. The automated execution of CPGs and standardized care pathways will enable greater accountability through the use of audit logs and process mining for diagnostic feedback and the avoidance of unwarranted clinical variations.

In a paper titled Physician Burnout, Interrupted published in the NEJM, Pamela Hartzband, M.D. and Jerome Groopman, M.D. discuss EHR-induced burnout and physician intrinsic motivation through the lenses of self-determination theory and write: "They [doctors, nurses, and other health care professionals] tend to enter their field with a high level of altruism coupled with a strong interest in human biology, focused on caring for the ill. These traits and goals lead to considerable intrinsic motivation." Solutions like Automated Speech Recognition (ASR) using Machine Learning can facilitate more patient-clinician interaction, reduce physician burnout, and improve physician professional satisfaction (Human-Centered AI). As Golden Krishna said, "The best interface is no interface."

We believe that medicine can borrow proven practices from safety-critical domains like aviation. Examples include: emphasis on Human Factors engineering, Cognitive Ergonomics, simulation, checklists, the Swiss cheese model of accident causation, situational awareness, standard operating procedures, crew resource management, flight data recording and analysis, mandatory flight duty time limitations and rest requirements, debriefings, and safety reporting (a prerequisite for a learning healthcare system). According to the International Air Transport Association (IATA), the 2019 fatality risk per million flights was 0.09. In addition, price transparency in healthcare should allow patients to purchase healthcare services at a competitive price, just as they are able to comparison-shop online for travel packages including flights, hotels, cars, tours, entertainment, and other activities.

Lastly, we can improve the health and save the lives of millions of people worldwide with the medical knowledge that is already available in CPGs and the biomedical literature. To harness that knowledge at the point of care, we explore novel approaches to medical Knowledge Representation and Reasoning (KRR) as well as Natural Language Understanding (NLU) and Clinical Question Answering (CQA). Dynamic Treatment Regimes (as opposed to static rule-based clinical practice guidelines) are an approach that borrows algorithms from Reinforcement Learning and can be used to personalize sequential treatment decisions based on the individual patient's disease trajectory and up-to-date clinical data.
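
As a toy sketch of the Reinforcement Learning machinery behind Dynamic Treatment Regimes, the snippet below runs tabular Q-learning on an invented two-state, two-action "patient" environment; the states, actions, rewards, and dynamics are entirely assumed.

```
# Toy sketch: tabular Q-learning for a dynamic treatment regime. At each
# stage, choose a treatment based on a discretized disease state. The
# environment below is entirely invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2            # states: 0 = stable, 1 = deteriorating
Q = np.zeros((n_states, n_actions))   # actions: 0 = standard, 1 = intensify
alpha, gamma, eps = 0.1, 0.9, 0.1     # learning rate, discount, exploration

def step(s, a):
    # Assumed dynamics: intensifying therapy helps deteriorating patients.
    if s == 1 and a == 1:
        return 0, 1.0        # back to stable, positive reward
    if s == 1:
        return 1, -1.0       # still deteriorating under standard care
    return (1 if rng.random() < 0.2 else 0), 0.5  # mostly stays stable

s = 0
for _ in range(20000):
    # Epsilon-greedy action selection.
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    # Q-learning update toward the bootstrapped target.
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

# The learned policy maps each patient state to a treatment decision;
# it learns to intensify therapy (action 1) for deteriorating patients.
print("policy (state -> action):",
      {state: int(np.argmax(Q[state])) for state in range(n_states)})
```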

We subscribe to the No Free Lunch Theorem (NFL) — introduced by David Wolpert and William Macready in their paper No free lunch theorems for optimization — and have experience in various state-of-the-art approaches to AI including Bayesian Inference and Optimization, eXtreme Gradient Boosting, Deep Learning, Reinforcement Learning, Causal Inference, Statistical Relational Learning, Evolutionary Algorithms, Computational Logic, and Neural-Symbolic Integration. We use techniques like Visual Analytics, simulation, and Augmented Reality to help patients and clinicians understand risk, uncertainty, causality, and evidence.

We pay special attention to important issues such as: AI safety, security, privacy, human factors, algorithmic fairness, explainability, and accountability. To solve these issues, we are fully committed to long term research and development, methodological rigor, formal verification, and clinical validation.

Vidjinnagni J. Amoussou
Founder & CEO
Sylvie Dan, TOGAF, PMP, CCBA, ITIL, CSM
Chief Operating Officer

Contact

Learn more about algorithmichealth.com.

Email us

vamoussou@algorithmichealth.com