Claripulse vs. Manual MAUDE Search
Same data. Different results.
| | Manual MAUDE Search (the status quo) | Claripulse (continuous automated monitoring) |
|---|---|---|
| Time to identify a signal | Hours to days of manual review | Flagged automatically each week |
| Device coverage | One device at a time, when you remember to check | 32 cardiac device types, monitored continuously |
| Detection method | Eyeballing reports and guessing whether volume looks high | Six statistical methods run on every device, every week |
| Historical baseline | No way to compare current volume against what’s normal | Years of baseline data with maturity-aware thresholds |
| New failure modes | Only found if you happen to read the right report | Emerging patterns surfaced automatically across reports |
| Peer comparison | You’d need to build your own dataset across device classes | Built-in outlier detection across similar devices |
| Alerting | None. You find signals when you go looking. | Scored, ranked alerts delivered to your inbox |
| Data freshness | As current as your last search | Updated weekly from the openFDA API |
~1M reports analyzed · 32 device types · 6 detection methods
Free during beta. No credit card required.
Why manual MAUDE search falls short
The FDA MAUDE database is the most comprehensive public source of medical device adverse event reports in the world. Anyone can search it. The problem isn't access. It's what happens after you run a search.
A typical query returns hundreds or thousands of raw reports. Each one is a wall of unstructured text: device identifiers, event descriptions, patient outcomes, manufacturer narratives. To spot a genuine safety signal, you need to read enough of these reports to notice patterns, then figure out whether the pattern is statistically meaningful or just noise.
That process doesn't scale.
Most teams check a handful of devices when they have time. Signals for less-watched devices go unnoticed until someone else raises the alarm. And even when you do search, MAUDE doesn't tell you whether a device's current reporting volume is normal or elevated. Five adverse event reports in a month might be routine for a high-volume device and a serious red flag for a niche one. Without a baseline, you're guessing.
Then there's the problem of new failure modes. When a device starts generating reports about something it's never been flagged for, that's exactly the kind of early signal that matters most for post-market surveillance. But catching it manually would require reading every new report and remembering every previous one. Nobody does that.
How Claripulse works differently
We pull data from the openFDA API every week, covering 32 cardiac device types across eight categories: CRM devices, ablation catheters, structural heart, coronary interventions, mechanical circulatory support, and more. Every report gets matched to a normalized device catalog so we can track patterns at the device level over time.
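For context, here is a minimal sketch of what a weekly pull against the public openFDA device-event endpoint might look like. The device name, date window, and catalog mapping are illustrative assumptions, not Claripulse's actual pipeline or configuration.

```python
# Minimal sketch of a weekly pull from openFDA's device-event endpoint.
# Query values and the catalog mapping below are illustrative assumptions.
import requests

OPENFDA_DEVICE_EVENTS = "https://api.fda.gov/device/event.json"

def fetch_recent_reports(generic_name: str, start: str, end: str) -> list[dict]:
    """Fetch up to 100 MAUDE reports for one device type in a YYYYMMDD window."""
    params = {
        "search": (
            f'device.generic_name:"{generic_name}" '
            f"AND date_received:[{start} TO {end}]"
        ),
        "limit": 100,  # openFDA caps a single page at 100 records
    }
    resp = requests.get(OPENFDA_DEVICE_EVENTS, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("results", [])

# Hypothetical catalog entry: normalize raw openFDA device names to one
# device key so reports can be tracked at the device level over time.
CATALOG = {"Transcatheter Aortic Valve": "edwards-sapien-3-thv"}

reports = fetch_recent_reports("Transcatheter Aortic Valve", "20250101", "20250107")
print(f"fetched {len(reports)} reports")
```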
Instead of relying on one method, we run six different detection approaches against every device, every week:
- Sudden spikes in reporting volume
- Gradual trends that build over weeks or months
- Devices with disproportionate reporting compared to peers
- New problem types that don’t match historical patterns
- Early warning indicators for recently approved devices
- Sustained shifts that cross statistical thresholds
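As a rough illustration of the first method, spike detection can be as simple as a z-score of the latest week's report count against a trailing baseline. The window length and 3-sigma cutoff below are illustrative, not the calibrated values Claripulse runs in production.

```python
# Toy sketch of spike detection: z-score of the latest weekly report count
# against a trailing baseline window. Window length and cutoff are illustrative.
from statistics import mean, stdev

def spike_score(weekly_counts: list[int], baseline_weeks: int = 12) -> float:
    """How many standard deviations the latest week sits above its baseline."""
    baseline = weekly_counts[-(baseline_weeks + 1):-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0
    return (weekly_counts[-1] - mu) / sigma

counts = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 7, 6, 18]  # made-up weekly counts
if spike_score(counts) > 3.0:
    print("sudden spike flagged for review")
```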
Each potential signal gets scored on the strength of the underlying evidence. Only signals that cross the threshold reach your inbox, ranked and prioritized. You spend your time reviewing what matters instead of reading thousands of reports hoping to notice something unusual.
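A sketch of what that filtering and ranking step could look like, using the example scores shown further down this page; the Signal shape and the 0.5 cutoff are assumptions for illustration, not the product's actual scoring model.

```python
# Illustrative digest assembly: drop sub-threshold signals, rank the rest.
# The Signal shape and the 0.5 cutoff are assumptions, not the real model.
from dataclasses import dataclass

@dataclass
class Signal:
    device: str
    method: str
    score: float  # evidence strength, 0..1

def build_digest(signals: list[Signal], cutoff: float = 0.5) -> list[Signal]:
    """Keep only signals at or above the cutoff, strongest first."""
    return sorted(
        (s for s in signals if s.score >= cutoff),
        key=lambda s: s.score,
        reverse=True,
    )

digest = build_digest([
    Signal("Abiomed Impella 5.5", "spike", 0.82),
    Signal("Edwards Sapien 3 THV", "peer outlier", 0.54),
    Signal("Watchman FLX", "trend", 0.31),  # below cutoff, never reaches the inbox
])
```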
The thresholds themselves are calibrated to each device's maturity. A newly approved device with a handful of reports per month gets different sensitivity than an established device generating hundreds. That prevents false alarms on high-volume devices without missing signals on newer ones.
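One simple way to express that calibration; the tiers and cutoff values here are invented for illustration only.

```python
# Hypothetical maturity-aware calibration: newer or low-volume devices get a
# more sensitive cutoff, high-volume devices a stricter one. Tiers and values
# are invented for illustration.
def signal_cutoff(months_on_market: int, avg_monthly_reports: float) -> float:
    """Z-score a device must exceed before a signal is escalated."""
    if months_on_market < 12 or avg_monthly_reports < 10:
        return 2.0  # recently approved / niche: catch weak early signals
    if avg_monthly_reports < 100:
        return 2.5  # established, moderate volume
    return 3.0      # high volume: stricter, to suppress false alarms
```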
Two ways to monitor device safety
One of them scales.
MW5084891 EDWARDS SAPIEN 3 THV INJURY DURING DEPLOYMENT THE VALVE MIGRATED SUPERIORLY...
MW5084650 EDWARDS SAPIEN 3 THV MALFUNCTION POST-IMPLANT IMAGING AT 45 DAYS...
MW5083990 EDWARDS SAPIEN 3 THV INJURY CHORDAL ENTANGLEMENT DURING CLIP...
MW5083701 EDWARDS SAPIEN 3 THV DEATH PT EXPIRED FOLLOWING STRUCTURAL VALVE...
... 847 more results
| Device | Reports | Last checked |
|---|---|---|
| Sapien 3 | 23 | Mar 14 |
| Watchman FLX | ? | Feb 28 |
| Impella 5.5 | ? | Jan 15 |
Colleague forwarded an article
"Did you see the Impella hemolysis cluster? It was in last month's MAUDE data."
Abiomed Impella 5.5 · Mechanical Circulatory Support
Hemolysis-related reports increased 340% vs. 90-day baseline. New narrative cluster detected.
Score: 0.82 · +340% vs. baseline

Edwards Sapien 3 THV · Structural Heart
Peer outlier: valve migration reports 2.1x category average over last 6 months.
Score: 0.54 · 2.1x vs. peers
Get signal alerts before everyone else
Join the early access list for twice-a-month signal digests covering cardiology devices. Free during beta.