PCCM Hub

Educational Content. This material is for training purposes only and does not constitute medical advice. Always verify with current guidelines and institutional protocols.

Trust & Safety

How we ensure quality, ethics, and security

PCCM Fellows Hub is built on the principle that medical education technology must meet the highest standards of accuracy, transparency, and security. This page explains the systems, processes, and standards we use to protect our users and ensure content quality.

Human-in-the-Loop: Our Core Principle

AI is a powerful tool, but it is not the final authority. Every piece of educational content on this platform — whether generated by AI, submitted by faculty, or sourced from literature — must pass through human expert review before reaching learners. Board-certified PCCM faculty have final authority over all content. AI assists, validates, and flags — but humans decide.

1. AI Generates & Validates: 3 models cross-check
2. Faculty Reviews: board-certified experts
3. Human Approves: final authority

TripleCheck Engine: 3 AI models validate
Peer Review Pipeline: 5 review stages
Content Sources: verified (NEJM, CHEST, JAMA)
Data Protection: encrypted in transit and at rest

Every piece of AI-generated or AI-enhanced content passes through our TripleCheck validation pipeline before it reaches fellows. This system uses three independent AI models to cross-validate medical accuracy, identify potential hallucinations, and flag content that needs expert review.

How It Works

1. Content Generation

When AI generates a question, summary, or explanation, the primary model produces the initial content with clinical context and evidence references.

2. Cross-Validation

Two additional AI models independently review the content for medical accuracy, checking facts against established guidelines and identifying any inconsistencies or unsupported claims.

3. Consensus Scoring

The system calculates an agreement score across all three models. Content with high consensus (all models agree) is flagged as validated. Any disagreement triggers a review.

4. Hallucination Detection

The system specifically checks for fabricated references, incorrect drug dosages, outdated guidelines, and clinical claims not supported by evidence. Flagged items are quarantined for expert review.

5. Expert Review Queue

Content that doesn't achieve full consensus or has hallucination flags is routed to the faculty review queue. Nothing reaches learners without passing validation or expert approval.
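The routing logic described in the steps above can be sketched roughly as follows. This is a minimal illustration, not the platform's actual implementation: the `Verdict` shape, threshold, and routing labels are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical verdict returned by each validator model (illustrative only).
@dataclass
class Verdict:
    model: str
    accurate: bool             # model judges the content medically accurate
    hallucination_flags: list  # e.g. ["fabricated_reference", "wrong_dosage"]

def route_content(verdicts: list[Verdict]) -> str:
    """Decide where a content item goes after multi-model validation."""
    agree = sum(v.accurate for v in verdicts)
    consensus = agree / len(verdicts)      # agreement score across models
    flagged = any(v.hallucination_flags for v in verdicts)

    if flagged:
        return "quarantine"                # held for expert review
    if consensus == 1.0:
        return "validated"                 # unanimous agreement
    return "faculty_review_queue"          # any disagreement triggers review

verdicts = [
    Verdict("model_a", True, []),
    Verdict("model_b", True, []),
    Verdict("model_c", False, ["outdated_guideline"]),
]
print(route_content(verdicts))  # quarantine: a hallucination flag was raised
```

The key property mirrored from the text: a hallucination flag overrides everything, and anything short of unanimous agreement still gets human eyes before publication.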

Validation Performance

In internal testing, the TripleCheck engine achieved unanimous agreement on 188 out of 190 content items (98.9% consensus rate). The 2 flagged items were correctly identified as containing outdated guideline references and were corrected before publication.

All content — whether submitted by faculty, generated by AI, or sourced from literature — goes through a structured peer review process before entering the active question bank or content library. This process mirrors academic peer review standards adapted for medical education content.

1. Submission & AI Pre-Screening

Faculty submit questions, articles, or teaching content. AI performs initial screening: checks for duplicate content against the existing bank, validates formatting, and flags potential issues. A similarity score is calculated to prevent redundant content.

2. Expert Review

Qualified reviewers (board-certified faculty) evaluate the content for medical accuracy, clinical relevance, appropriate difficulty level, and alignment with ABIM blueprint topics. Reviewers can approve, request revisions, or reject with detailed feedback.

3. Inline Editing & Diff Tracking

Reviewers can make inline edits to improve content. The system tracks every change with a full diff comparison between the original submission and the edited version, preserving a complete audit trail.

4. Contributor Feedback Loop

After review, the original contributor receives notification with the review outcome, any changes made, and constructive feedback. This creates a learning loop that improves future submissions.

5. Publication & Ongoing Monitoring

Approved content enters the active library with full attribution. Users can report issues via the feedback button on any question, which sends the item back through review. All reviewer activity is logged with timestamps.
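The diff tracking in step 3 can be illustrated with Python's standard `difflib`, which produces the kind of original-versus-edited comparison an audit trail would store. The sample text and file labels are hypothetical; the platform's actual diff format is not specified here.

```python
import difflib

submitted = [
    "The correct answer is A.",
    "Rationale: see guideline.",
]
reviewed = [
    "The correct answer is A.",
    "Rationale: see current guideline, with dosing verified.",
]

# Unified diff between the original submission and the reviewer-edited
# version; a review system would store this alongside reviewer ID and time.
diff = list(difflib.unified_diff(
    submitted, reviewed, fromfile="submission", tofile="reviewed", lineterm=""))

for line in diff:
    print(line)
```

Lines prefixed with `-` show the contributor's original wording and lines prefixed with `+` show the reviewer's edit, which is exactly the before/after record the contributor feedback loop in step 4 needs.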

Audit Trail

Every review action is permanently logged: who reviewed, when, what changes were made, and the approval decision. Program directors can access the full review audit trail for compliance reporting and quality assurance.
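A minimal shape for one such audit record might look like the sketch below. Every field name here is an assumption made for illustration; the platform's actual schema is not public.

```python
import json
from datetime import datetime, timezone

def log_review_action(reviewer: str, item_id: str,
                      action: str, diff_summary: str) -> str:
    """Build one append-only audit entry: who reviewed, when,
    what changed, and the approval decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "item_id": item_id,
        "action": action,            # "approve" | "request_revision" | "reject"
        "diff_summary": diff_summary,
    }
    return json.dumps(entry)         # in practice, written to an append-only store

record = log_review_action("dr_smith", "q-0421", "approve", "fixed guideline year")
print(record)
```

Because each entry is timestamped and self-describing, a program director can reconstruct the full review history of any item for compliance reporting.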

We are committed to continuous improvement of our safety and quality systems. If you have questions about our practices or suggestions for improvement, please use the feedback button or contact your program director.

Last updated: March 2026 · PCCM Fellows Hub

Educational Use Only. Content is for medical education and training. It does not constitute medical advice. Verify clinical decisions with current evidence-based guidelines and institutional protocols.

AI Transparency. Some content on this platform is generated, validated, or enhanced using AI models. All AI-assisted content undergoes multi-model validation (TripleCheck) and expert review. AI-involved content is labeled accordingly.
