QA Audit Dashboard — SOP
Tracks reviewer performance, scoring consistency, and system accuracy across all cases to maintain quality, reliability, and defensibility.
Purpose
- Monitor quality and consistency across reviewers
- Identify scoring drift and variability patterns
- Ensure adherence to Lexcura Clinical Intelligence Model™
- Track system-level reliability over time
- Provide data for training, correction, and performance management
What AI Extracts (Facts Only)
- All case scores by reviewer
- Category-level scoring breakdowns
- Inter-rater variance per case
- Final reconciled scores
- Reviewer-specific scoring patterns over time
- Case complexity indicators
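The SOP does not define how inter-rater variance is measured; a minimal sketch, assuming it is the population standard deviation of reviewer scores on a single case (reviewer names and scores are hypothetical):

```python
from statistics import pstdev

def inter_rater_variance(case_scores):
    """Spread of reviewer scores on one case; 0.0 means full agreement."""
    return pstdev(case_scores.values())

# Three hypothetical reviewers scoring the same case
print(inter_rater_variance({"alice": 72, "bob": 68, "cara": 70}))
```

Range (max minus min) or pairwise disagreement counts would work equally well; the key is that the same definition is applied to every case.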
What Leadership Must Confirm (Validation)
- Reviewers are applying scoring criteria consistently
- Variance thresholds are being enforced
- No systematic bias in scoring patterns
- Training gaps are identified and addressed
- Dashboard data reflects actual case outputs
This is a system-level control—not a reviewer opinion layer.
Core Dashboard Metrics
- Average Score per Reviewer: tracks scoring tendency (lenient vs. conservative)
- Variance Rate: percentage of cases exceeding acceptable variance
- Reconciliation Frequency: how often scores require adjustment
- Score Drift Over Time: changes in scoring patterns
- Category Variance: where disagreements occur most (causation, damages, etc.)
- High-Risk Case Accuracy: alignment on high-exposure cases
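The first two metrics can be sketched from per-case score records. This is an illustrative Python example, not the production dashboard: the record layout, reviewer names, and the 15-point acceptable-spread limit are all assumptions:

```python
from statistics import mean

# Hypothetical case records: each case scored by multiple reviewers,
# plus the final reconciled score.
cases = [
    {"case_id": "C-001", "scores": {"alice": 72, "bob": 68}, "final": 70},
    {"case_id": "C-002", "scores": {"alice": 85, "bob": 60}, "final": 74},
    {"case_id": "C-003", "scores": {"alice": 55, "bob": 58}, "final": 56},
]

VARIANCE_LIMIT = 15  # assumed max acceptable point spread per case

def average_score_per_reviewer(cases):
    """Mean score each reviewer assigns across all cases."""
    totals = {}
    for case in cases:
        for reviewer, score in case["scores"].items():
            totals.setdefault(reviewer, []).append(score)
    return {reviewer: mean(scores) for reviewer, scores in totals.items()}

def variance_rate(cases, limit=VARIANCE_LIMIT):
    """Fraction of cases whose reviewer score spread exceeds the limit."""
    exceeding = sum(
        1 for case in cases
        if max(case["scores"].values()) - min(case["scores"].values()) > limit
    )
    return exceeding / len(cases)

print(average_score_per_reviewer(cases))
print(f"Variance rate: {variance_rate(cases):.0%}")
```

In this sample, only C-002 (spread of 25 points) exceeds the limit, giving a variance rate of one case in three.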
Critical Thinking Steps
- Analyze trends—not isolated cases
- Identify patterns of over-scoring or under-scoring
- Detect systematic bias (e.g., consistently high causation scoring)
- Compare reviewer outputs against reconciled final scores
- Flag repeated deviation in specific categories
- Use data to refine training and SOP enforcement
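Comparing reviewer outputs against reconciled final scores reduces to a signed-deviation check: a consistently positive mean indicates over-scoring, a consistently negative mean indicates under-scoring. A minimal sketch, with hypothetical score series:

```python
from statistics import mean

def signed_bias(reviewer_scores, final_scores):
    """Mean signed deviation from reconciled finals, paired per case.
    Positive -> reviewer tends to over-score; negative -> under-score."""
    return mean(r - f for r, f in zip(reviewer_scores, final_scores))

# Hypothetical: one reviewer's scores vs. the reconciled finals
print(signed_bias([80, 75, 90, 85], [70, 72, 78, 76]))  # positive: over-scoring
```

Running the same check per category (causation, damages, etc.) surfaces the repeated category deviations this section asks reviewers to flag.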
Alert Thresholds
- Variance rate > 20% → Immediate review required
- Reviewer average deviates > 10 points from team mean → Flag
- Repeated category variance → Targeted retraining
- Score drift over time → System review required
Patterns matter more than single outliers.
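The two numeric thresholds above can be encoded directly; the pattern-based rules (repeated category variance, drift over time) need trend data and are omitted here. A minimal sketch with hypothetical reviewer averages:

```python
from statistics import mean

VARIANCE_RATE_LIMIT = 0.20  # >20% of cases over variance -> immediate review
DEVIATION_LIMIT = 10        # >10 points from team mean -> flag reviewer

def threshold_alerts(reviewer_averages, variance_rate):
    """Return alert messages for the SOP's two numeric thresholds."""
    alerts = []
    if variance_rate > VARIANCE_RATE_LIMIT:
        alerts.append(
            f"Variance rate {variance_rate:.0%} exceeds 20%: immediate review required"
        )
    team_mean = mean(reviewer_averages.values())
    for reviewer, avg in reviewer_averages.items():
        if abs(avg - team_mean) > DEVIATION_LIMIT:
            alerts.append(
                f"{reviewer}: average {avg:.1f} deviates more than "
                f"10 points from team mean {team_mean:.1f}"
            )
    return alerts

for alert in threshold_alerts({"alice": 82.0, "bob": 58.0, "cara": 70.0}, 0.25):
    print(alert)
```

Here the team mean is 70.0, so both alice (+12) and bob (-12) are flagged, and the 25% variance rate triggers immediate review.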
Stop Rules
- STOP if dashboard data is incomplete or inaccurate
- STOP if reviewers are not following scoring SOPs
- STOP if variance is not being actively resolved
- STOP if performance issues are identified but not addressed
A dashboard without enforcement creates risk, not control.
Final Output Requirements
- Weekly or monthly QA dashboard report
- Reviewer performance summaries
- Variance and consistency metrics
- Identified training needs
- Corrective actions implemented
- System reliability status
The dashboard must drive action—not just display data.