Quality Assessment

Risk of Bias evaluation with AI-assisted evidence grounding

Evaluate study quality and risk of bias using internationally recognized tools, enhanced by AI that locates supporting evidence.

Supported Assessment Tools

For Randomized Trials

Cochrane Risk of Bias 2.0 (RoB 2)

  • Randomization process
  • Deviations from interventions
  • Missing outcome data
  • Outcome measurement
  • Selection of reported results

For Non-Randomized Studies

ROBINS-I

  • Confounding
  • Selection of participants
  • Classification of interventions
  • Deviations from interventions
  • Missing data
  • Outcome measurement
  • Selection of reported results

For Diagnostic Accuracy

QUADAS-2

  • Patient selection
  • Index test
  • Reference standard
  • Flow and timing

For Observational Studies

Newcastle-Ottawa Scale

  • Selection
  • Comparability
  • Outcome/Exposure
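
If you export assessments for downstream analysis, the tool-and-domain structure above maps naturally onto a simple data model. The sketch below is illustrative Python only, assuming a RoB 2-style judgment scale; it is not the product's internal schema (the Newcastle-Ottawa Scale, for example, awards stars rather than risk ratings).

```python
# Illustrative only: one way to represent tools, domains, and judgments.
from dataclasses import dataclass, field
from enum import Enum

class Judgment(Enum):          # RoB 2-style scale; NOS uses stars instead
    LOW = "Low risk"
    SOME_CONCERNS = "Some concerns"
    HIGH = "High risk"
    NO_INFORMATION = "No information"

TOOL_DOMAINS = {
    "RoB 2": [
        "Randomization process",
        "Deviations from interventions",
        "Missing outcome data",
        "Outcome measurement",
        "Selection of reported results",
    ],
    "QUADAS-2": ["Patient selection", "Index test", "Reference standard", "Flow and timing"],
    "Newcastle-Ottawa Scale": ["Selection", "Comparability", "Outcome/Exposure"],
    # ROBINS-I omitted here for brevity; its seven domains are listed above.
}

@dataclass
class DomainAssessment:
    domain: str
    judgment: Judgment
    notes: str = ""

@dataclass
class StudyAssessment:
    study_id: str
    tool: str
    domains: list[DomainAssessment] = field(default_factory=list)
```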

AI-Assisted Assessment

Evidence Grounding

For each domain, the AI:

  1. Searches the full text for relevant information
  2. Extracts supporting quotes with page references
  3. Suggests a preliminary rating with a confidence score
  4. Explains the reasoning behind suggestions
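
The structured result of that per-domain search might look something like the sketch below. The field names (`DomainSuggestion`, `EvidenceSpan`) and the example record are assumptions for illustration, not the product's actual output format.

```python
# Hypothetical shape for one domain's AI suggestion with grounded evidence.
from dataclasses import dataclass

@dataclass
class EvidenceSpan:
    quote: str         # verbatim text extracted from the paper
    page: int          # page reference for quick verification
    section: str = ""  # e.g. "Methods", when identifiable

@dataclass
class DomainSuggestion:
    domain: str               # e.g. "Randomization process"
    suggested_rating: str     # preliminary rating proposed by the AI
    confidence: float         # 0.0-1.0 confidence in that rating
    rationale: str            # short explanation of the reasoning
    evidence: list[EvidenceSpan]

# Hypothetical example record, not taken from a real paper.
suggestion = DomainSuggestion(
    domain="Randomization process",
    suggested_rating="Low risk",
    confidence=0.82,
    rationale="Computer-generated sequence with concealed allocation described.",
    evidence=[EvidenceSpan(quote="Allocation was concealed using sealed opaque envelopes.",
                           page=3, section="Methods")],
)
```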

What You See

For each risk of bias domain:

  • Signaling questions with suggested answers
  • Supporting evidence highlighted from the paper
  • Page/paragraph references for quick verification
  • AI confidence score for the assessment
  • The option to override with your own final judgment

Assessment Workflow

Individual Study Assessment

  1. Open the study in Quality Assessment
  2. Select your tool (RoB 2, ROBINS-I, etc.)
  3. Review each domain:
    • Read AI-extracted evidence
    • Consider signaling questions
    • Make your judgment
    • Add supporting notes
  4. Complete overall assessment
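
Because your judgment always takes precedence over the AI suggestion, it can help to picture the final record keeping both side by side. A minimal sketch with assumed field names:

```python
# Illustrative record pairing the AI suggestion with the reviewer's final call.
from dataclasses import dataclass

@dataclass
class FinalDomainJudgment:
    domain: str
    ai_suggestion: str       # rating the AI proposed
    ai_confidence: float
    final_rating: str        # the reviewer's judgment, which always wins
    reviewer_note: str = ""  # supporting notes added during review

judgment = FinalDomainJudgment(
    domain="Deviations from interventions",
    ai_suggestion="Some concerns",
    ai_confidence=0.64,
    final_rating="Low risk",
    reviewer_note="ITT analysis reported; per-protocol only as sensitivity analysis.",
)
```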

Batch Assessment

For multiple similar studies:

  • Apply consistent criteria
  • Copy common judgments
  • Document differences
  • Maintain efficiency
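
Conceptually, batch assessment means copying a shared judgment across comparable studies and then documenting where each study departs from it. A toy sketch with hypothetical study IDs:

```python
# Toy example: copy a common judgment, then annotate per-study differences.
common_judgment = {"domain": "Classification of interventions", "rating": "Low risk"}
similar_studies = ["lee-2019", "garcia-2020", "patel-2022"]  # hypothetical IDs

batch = {
    study_id: {**common_judgment, "note": "Copied from shared criteria; verify against methods."}
    for study_id in similar_studies
}
batch["patel-2022"]["note"] = "Differs: intervention defined from pharmacy records only."

for study_id, record in batch.items():
    print(study_id, record["rating"], "-", record["note"])
```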

Dual Assessment Support

Independent Assessment

  • Two reviewers assess separately
  • Ratings hidden until both complete
  • Prevents anchoring to the other reviewer's ratings

Disagreement Resolution

  • Side-by-side comparison
  • Discussion documentation
  • Final consensus recording
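
For the comparison step, you can think of each reviewer's ratings as a domain-to-judgment map: list the domains where the maps differ, and optionally quantify chance-corrected agreement. The sketch below is illustrative, not the product's implementation:

```python
# Illustrative comparison of two reviewers' ratings; not the product's implementation.
from collections import Counter

def find_disagreements(ratings_a: dict, ratings_b: dict) -> list:
    """Return (domain, rating_a, rating_b) for every domain rated differently."""
    return [(d, ratings_a[d], ratings_b[d])
            for d in ratings_a if d in ratings_b and ratings_a[d] != ratings_b[d]]

def cohens_kappa(ratings_a: dict, ratings_b: dict) -> float:
    """Chance-corrected agreement across the domains both reviewers rated."""
    domains = [d for d in ratings_a if d in ratings_b]
    n = len(domains)
    observed = sum(ratings_a[d] == ratings_b[d] for d in domains) / n
    counts_a = Counter(ratings_a[d] for d in domains)
    counts_b = Counter(ratings_b[d] for d in domains)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

reviewer_1 = {"Randomization": "Low", "Missing data": "Some concerns", "Reporting": "Low"}
reviewer_2 = {"Randomization": "Low", "Missing data": "High", "Reporting": "Low"}
print(find_disagreements(reviewer_1, reviewer_2))  # [('Missing data', 'Some concerns', 'High')]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # 0.4
```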

Visualization

Summary Figures

Automatic generation of:

  • Traffic light plots: Domain-by-domain ratings
  • Summary bar charts: Overall risk distribution
  • GRADE integration: Risk of bias judgments feed into certainty-of-evidence ratings
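
A traffic light plot is essentially a study-by-domain grid of colored markers. If you wanted to reproduce one outside the app, a rough matplotlib sketch (illustrative data, colors, and labels) could look like this:

```python
# Rough sketch of a traffic light plot; data, colors, and labels are illustrative.
import matplotlib.pyplot as plt

judgments = {  # study -> per-domain ratings
    "Study A": ["Low", "Low", "Some concerns", "Low", "Low"],
    "Study B": ["High", "Low", "Low", "Some concerns", "Low"],
}
domains = ["D1", "D2", "D3", "D4", "D5"]
colors = {"Low": "#2e7d32", "Some concerns": "#f9a825", "High": "#c62828"}

fig, ax = plt.subplots(figsize=(6, 1 + 0.6 * len(judgments)))
for y, (study, ratings) in enumerate(judgments.items()):
    for x, rating in enumerate(ratings):
        ax.add_patch(plt.Circle((x, y), 0.3, color=colors[rating]))
ax.set_xlim(-0.5, len(domains) - 0.5)
ax.set_ylim(-0.5, len(judgments) - 0.5)
ax.set_xticks(range(len(domains)))
ax.set_xticklabels(domains)
ax.set_yticks(range(len(judgments)))
ax.set_yticklabels(list(judgments))
ax.set_aspect("equal")
ax.invert_yaxis()  # list studies top to bottom
fig.tight_layout()
plt.show()
```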

Publication-Ready Graphics

Export figures in:

  • SVG (scalable)
  • PNG (high resolution)
  • PDF (print quality)
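
If you post-process figures yourself, the same matplotlib figure can be written to all three formats; a brief sketch with illustrative file names:

```python
# Save one figure in the three export formats listed above (file names illustrative).
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(["Low", "Some concerns", "High"], [12, 5, 3])  # toy overall-risk distribution
fig.savefig("rob_summary.svg")           # scalable vector graphics
fig.savefig("rob_summary.png", dpi=300)  # high-resolution raster
fig.savefig("rob_summary.pdf")           # print quality
```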

Documentation

Audit Trail

Complete record of:

  • All assessments made
  • Supporting evidence cited
  • Changes and corrections
  • Assessor identification
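
One way to picture an audit-trail record: a timestamped entry tying the change, the assessor, and the cited evidence together. The field names below are assumptions for illustration, not the product's export schema:

```python
# Hypothetical audit-trail entry; not the actual export schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditEntry:
    study_id: str
    domain: str
    assessor: str
    action: str        # e.g. "rating_set", "rating_changed", "note_added"
    old_value: str
    new_value: str
    evidence: str      # quote or page reference cited for the judgment
    timestamp: str

entry = AuditEntry(
    study_id="example-study-2021",
    domain="Missing outcome data",
    assessor="reviewer_2",
    action="rating_changed",
    old_value="Some concerns",
    new_value="Low risk",
    evidence="Attrition under 5% in both arms (p. 6)",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(entry), indent=2))
```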

PRISMA Compliance

Automatically documents:

  • Tools used
  • Assessment process
  • Results summary

Best Practices

For Consistency

  • Calibrate with your team on pilot studies
  • Document decision rules
  • Use the same assessor for similar domains
  • Review outlier assessments

For Accuracy

  • Always read original paper sections
  • Verify AI-suggested evidence
  • Consider study design nuances
  • Document uncertain judgments