Using the Reliability Dashboard
The Reliability dashboard (Analytics → Reliability) shows how well FullSlot is classifying your cancellations and converting them into filled slots. Use it to identify bottlenecks, reduce manual review work, and grow your fill rate over time.
You can adjust the time window (7, 14, 30, or 90 days) to spot trends or evaluate recent changes.
The KPI strip
Four key metrics appear at the top. Each one tells you something specific about classification quality and sync health.
High-confidence rate
The percentage of slots that FullSlot classified with high confidence (≥ 0.8 on a 0–1 scale). These slots flow straight to notification without manual review.
- Target: ≥ 80%
- If this number is low, you likely have inconsistent event titles or missing classification rules. See Classification rules to tighten up matching.
Avg confidence
The average confidence score across all classified slots. A healthy system typically runs 0.85 or higher.
False-match rate
The percentage of slots where an operator had to correct FullSlot's classification after the fact. This measures how often the engine gets it wrong.
- Target: < 2%
- A high false-match rate means notifications are going to the wrong waitlist. Review your classification rules and consider adding provider rules if appointments vary by staff member.
Write-back success
For Calendly integrations with write-back enabled: the percentage of filled slots that were successfully synced back to your scheduler. Failed write-backs mean the appointment may not appear on your calendar.
- Target: ≥ 98%
- If this drops, check your Calendly connection in Integrations.
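Under the hood, all four KPIs are simple ratios over the period's slot records. A minimal sketch in Python, using illustrative field names (`confidence`, `corrected`, `write_back_ok`) rather than FullSlot's actual data model:

```python
# Hypothetical slot records; field names are illustrative, not FullSlot's API.
slots = [
    {"confidence": 0.95, "corrected": False, "write_back_ok": True},
    {"confidence": 0.85, "corrected": False, "write_back_ok": True},
    {"confidence": 0.60, "corrected": True,  "write_back_ok": True},
    {"confidence": 0.92, "corrected": False, "write_back_ok": False},
]

HIGH_CONFIDENCE = 0.8  # slots at or above this skip manual review

# Each KPI is a share of the period's slots.
high_conf_rate = sum(s["confidence"] >= HIGH_CONFIDENCE for s in slots) / len(slots)
avg_confidence = sum(s["confidence"] for s in slots) / len(slots)
false_match_rate = sum(s["corrected"] for s in slots) / len(slots)
write_back_success = sum(s["write_back_ok"] for s in slots) / len(slots)
```

With these sample records, high-confidence rate is 75% and false-match rate is 25% — both would be flagged against the targets above.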
Classification breakdown
This panel shows how the engine resolved each slot:
- Auto-classified (high confidence) — matched and broadcast automatically
- Auto-classified (low confidence) — matched but may warrant review
- Operator resolved — required manual classification
- Unmatched — no rule or pattern matched; held for review
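The breakdown is a tally of resolution outcomes, and your manual workload is the operator-resolved and unmatched share combined. A sketch with made-up resolution labels (the dashboard computes this server-side):

```python
from collections import Counter

# Illustrative resolution outcomes for a batch of slots.
resolutions = [
    "auto_high", "auto_high", "auto_low",
    "operator", "unmatched", "auto_high",
]

breakdown = Counter(resolutions)

# Share of slots that required a human: the number this panel
# suggests driving down by adding classification rules.
manual_share = (breakdown["operator"] + breakdown["unmatched"]) / len(resolutions)
```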
Action: If "Operator resolved" or "Unmatched" counts are high, you're doing too much manual work. Add classification rules for recurring patterns.
Confidence distribution
A histogram showing where your classified slots fall on the 0.5–1.0 confidence scale:
- 1.0 (Exact match) — perfect rule match, no ambiguity
- 0.9–0.99 (High) — strong match, safe to auto-broadcast
- 0.8–0.89 (Confident) — good match, occasional false positives
- 0.6–0.79 (Borderline) — weak signal, higher error risk
- 0.5–0.59 (Low) — near-random; these often go to the review queue
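The bucket boundaries above can be expressed as a simple mapping. This is a sketch of the bucketing logic, not FullSlot's internal implementation:

```python
def confidence_bucket(score: float) -> str:
    """Map a 0.5-1.0 confidence score to its histogram bucket."""
    if score >= 1.0:
        return "Exact match"
    if score >= 0.9:
        return "High"
    if score >= 0.8:
        return "Confident"
    if score >= 0.6:
        return "Borderline"
    return "Low"
```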
Action: If you have a cluster in the borderline/low range, those event titles need better rules. Click through to see which appointment types are affected.
Claim conversion by classification
This table answers: Does classification quality predict fill rate?
For each resolution type (auto-high, auto-low, operator-resolved), you see:
- How many slots were broadcast
- How many were claimed
- The conversion rate
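Conversion rate is simply claimed divided by broadcast within each resolution type. A sketch with made-up counts:

```python
# Made-up broadcast/claim counts per resolution type.
by_type = {
    "auto_high": {"broadcast": 200, "claimed": 90},
    "auto_low":  {"broadcast": 40,  "claimed": 10},
    "operator":  {"broadcast": 25,  "claimed": 15},
}

# Conversion rate per resolution type, as shown in the table.
conversion = {name: v["claimed"] / v["broadcast"] for name, v in by_type.items()}
```

In this sample, operator-resolved slots convert at 60% versus 45% for auto-high — the kind of gap that suggests the engine is routing some slots to the wrong waitlists.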
What to look for:
- If operator-resolved slots convert significantly better than auto-classified ones, the engine may be routing to the wrong waitlists. Tighten your rules.
- If low-confidence slots convert poorly, consider gating them for review rather than auto-broadcasting.
Cascade performance
Cascade rebooking creates internal slots when a waitlisted client rebooks and frees up their original appointment. This panel tracks:
- Cascade slots — how many internal rebook slots were created
- Claimed — how many were filled by another waitlist client
- Fill rate — the conversion rate for cascade slots
The chain depth distribution shows how deep your cascades go (depth 1 = one rebook, depth 2 = a rebook triggered another rebook, etc.).
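Fill rate and the chain depth distribution can both be derived from per-slot cascade records. A sketch with illustrative field names (`depth`, `claimed`):

```python
from collections import Counter

# Each cascade slot records the depth at which it was created
# (1 = direct rebook, 2 = a rebook triggered by another rebook, ...).
cascade_slots = [
    {"depth": 1, "claimed": True},
    {"depth": 1, "claimed": True},
    {"depth": 2, "claimed": False},
    {"depth": 1, "claimed": True},
    {"depth": 2, "claimed": True},
    {"depth": 3, "claimed": False},
]

depth_distribution = Counter(s["depth"] for s in cascade_slots)
fill_rate = sum(s["claimed"] for s in cascade_slots) / len(cascade_slots)
```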
Action: If cascade fill rate is low, your waitlist may not have enough depth for the freed appointment types. See Performance + Recommendations for waitlist-building tips.
Review queue throughput
This section tracks the manual review workload:
- Sent to queue — slots that required operator review (ambiguous or fallback classification)
- Operator resolved — how many were classified by a team member
- Wrong-route corrections — slots where an operator changed the classification after broadcast (a more serious error)
- Low-confidence rate — percentage of all slots that fell below the confidence threshold
Targets:
- Low-confidence rate: < 15%
- Wrong-route corrections: as close to zero as possible
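Checking your numbers against these targets is straightforward arithmetic. A sketch with made-up period counts (variable names are illustrative, not FullSlot's export format):

```python
# Made-up counts for one reporting period.
total_slots = 120
sent_to_queue = 14          # slots routed to manual review
wrong_route_corrections = 1  # classifications changed after broadcast

low_confidence_rate = sent_to_queue / total_slots

# Both targets must hold: < 15% low-confidence, zero wrong-route corrections.
meets_targets = low_confidence_rate < 0.15 and wrong_route_corrections == 0
```

Here the low-confidence rate (~11.7%) is under target, but the single wrong-route correction still fails the check — those are the more serious errors to chase down first.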
Action: If your queue is backing up, prioritize adding rules for the most common unmatched patterns. Each rule you add reduces future manual work.
How to use Reliability for growth
The Reliability dashboard isn't just for troubleshooting — it's a growth tool. Here's how to use it strategically:
1. Weekly review: Check high-confidence rate and false-match rate. Are they trending in the right direction?
2. Reduce operator load: Sort the review queue by frequency. The patterns you see most often are the ones worth writing rules for.
3. Improve fill rates: If certain classification types have low conversion, investigate why. Wrong waitlist? Bad timing? Insufficient waitlist depth?
4. Expand cascade coverage: If cascade fill rate is strong, you're capturing internal churn. If it's weak, consider using renotify to re-engage clients who didn't respond the first time.
5. Track rule changes: After adding a new classification rule, check back in 7–14 days. Did high-confidence rate improve? Did false-match rate stay flat?