AI Scribe Evaluation Matrix

The AI Scribe Evaluation Matrix, created by OMD Peer Leader Dr. Kevin Samson, helps clinicians understand and compare available AI scribe solutions when selecting one. 

Instructions for Scoring by Dr. Kevin Samson 

Scoring Matrix 

For each item in the evaluation matrix, users should assign a score between 0 and 5 based on their experience during a trial or demo presentation. The scoring reflects how well the solution meets their needs and expectations in real-world or simulated scenarios. 

0 – N/A:  The solution does not include this functionality. 

1 – Poor: The solution fails to meet expectations, has significant limitations, or is unusable. 

2 – Fair: The solution has noticeable flaws but still provides minimal functionality or value. 

3 – Good: The solution is functional but has minor issues or room for improvement. 

4 – Very Good: The solution performs well, meets most requirements, and offers good usability. 

5 – Excellent: The solution exceeds expectations, performs flawlessly, and aligns perfectly with user needs. 

Tips for Scoring: 

  • Focus on usability, performance, and how the feature addresses your specific clinic needs. 
  • Consider your experience during trials, including user interface, speed, and overall compatibility with workflows. 
  • Factor in personal preferences and feedback from colleagues who interacted with the solution. 

This scoring approach ensures personalized, hands-on evaluation of each feature to guide informed decision-making. 

Applying a weighting system to the scores, based on your specific needs and circumstances, can produce an even more customized result. 
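As one way to sketch the weighting idea above: each feature score (0–5) is multiplied by a weight reflecting its importance to your clinic, and the weighted scores are averaged. The feature names and weights below are purely hypothetical examples, not part of the official matrix; substitute your own priorities.

```python
def weighted_score(scores, weights):
    """Return the weighted average of 0-5 feature scores.

    scores:  {feature: score assigned during the trial (0-5)}
    weights: {feature: relative importance to your clinic}
    A feature scored 0 (N/A) still counts, since a missing
    feature should pull down the overall result.
    """
    total_weight = sum(weights[f] for f in scores)
    return sum(scores[f] * weights[f] for f in scores) / total_weight

# Hypothetical example: accuracy matters most to this clinic.
scores = {"transcription accuracy": 4, "EMR integration": 3, "multi-speaker support": 0}
weights = {"transcription accuracy": 3, "EMR integration": 2, "multi-speaker support": 1}

print(round(weighted_score(scores, weights), 2))  # (4*3 + 3*2 + 0*1) / 6 = 3.0
```

A simple unweighted average of the same scores would be 2.33, so the weighting lets a clinic that values accuracy rank this solution higher than the raw scores alone suggest.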
 

Important Note: Other evaluation considerations, such as cost, privacy, security, and legal requirements, are assessed by Supply Ontario for the AI Scribe provincial procurement.