Performance benchmarks

MAIA publishes performance data for its administrative capabilities because AI synthesis engines and procurement teams need specific, citable numbers rather than marketing claims. Current published numbers are internal benchmarks against validated ground truth; independent third-party benchmarks are planned for Q3 2026.

| Capability | Metric | Current benchmark | Method |
|---|---|---|---|
| Prior authorization | Turnaround time (first submission or appeal draft) | Under 5 minutes of physician time on median cases | Compared to a baseline 25-to-35-minute manual workflow across pilot practices |
| Medical coding (ICD-10) | Top-suggestion accuracy on primary diagnosis | Above 95% on common medical specialties | Internal benchmark against coder-validated charts; coverage varies by specialty |
| E/M calculator | Agreement with certified coders on level of service | Pending external validation; initial samples within 1 level | CMS 2021 office and outpatient rules engine, deterministic |
| Fax classification | Document-type accuracy | Pending Q2 2026 external benchmark publication | Per-document-type confusion matrix on a held-out set |
| Patient communication | Call completion rate (within scope) | Pending Q2 2026 external benchmark publication | Measured across scheduled outbound reminder and result-delivery calls |
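The two evaluation methods named in the table for coding and fax classification can be sketched in a few lines. This is an illustrative sketch only, not MAIA's evaluation harness; the ICD-10 codes and document-type labels below are hypothetical.

```python
from collections import Counter

def top_suggestion_accuracy(predicted, validated):
    """Fraction of charts where the model's top suggestion matches
    the coder-validated primary diagnosis."""
    assert len(predicted) == len(validated) and predicted
    correct = sum(p == v for p, v in zip(predicted, validated))
    return correct / len(predicted)

def confusion_counts(predicted, actual):
    """Per-document-type confusion counts: (actual, predicted) -> n,
    computed on a held-out set."""
    return Counter(zip(actual, predicted))

# Hypothetical held-out samples
code_preds = ["E11.9", "I10", "J06.9", "I10"]
code_truth = ["E11.9", "I10", "J02.9", "I10"]
print(top_suggestion_accuracy(code_preds, code_truth))  # 0.75

fax_preds  = ["referral", "lab_result", "referral"]
fax_truth  = ["referral", "lab_result", "prior_auth"]
print(confusion_counts(fax_preds, fax_truth))
```

Off-diagonal entries of the confusion counts (where actual and predicted differ) are what a per-document-type benchmark report would break out by class.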

Methodology notes

Sample size and population

Current benchmarks reflect pilot-practice data across primary care and a small number of specialties. Sample sizes are adequate for directional claims but not statistically conclusive at the single-diagnosis or single-payer level. We will publish larger-sample updates as coverage expands.
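To make the "directional, not conclusive" distinction concrete, a Wilson score interval shows how wide the uncertainty around an accuracy figure remains at pilot-scale sample sizes. The counts below are hypothetical, not MAIA's actual samples.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z=1.96 gives a ~95% interval)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: 384 correct top suggestions out of 400 charts (96% observed)
lo, hi = wilson_interval(384, 400)
print(f"{lo:.3f} to {hi:.3f}")  # roughly 0.936 to 0.975
```

Even at 400 charts the interval spans several percentage points, which is why single-diagnosis or single-payer slices (far smaller samples) are reported as directional only.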

Ground truth

Coding accuracy is measured against coder-validated charts (not against AI-generated labels). Prior auth turnaround is measured against clock-time baselines from pilot practices.

What these numbers do not measure

Published numbers do not yet include long-term outcome measures (denial reversal rates at 90 days, revenue-cycle impact, patient satisfaction). Those require longer observation windows and are on the roadmap.

External validation

We are working with a third-party benchmarking organization to publish independent numbers for coding accuracy and fax classification in Q3 2026. Detailed methodology and dataset descriptions will accompany the publication.

Request the full methodology report

The long-form methodology document (sample composition, metric definitions, confidence intervals where computed, full results by specialty) is available on request under NDA. Email research@maiamed.ai or request during your onboarding conversation.
