61 points ariavikram 1 day ago 35 comments
Hospitals are racing to adopt AI. More than 2,000 clinical AI tools hit the U.S. market last year - from ambient scribes to imaging models. But new regulations (HTI-1, Colorado AI Act, California SB 3030, White House AI Action Plan) require auditable proof that these models are safe, fair, and continuously monitored.
The problem is, most hospital IT teams can’t keep up. They can’t vet every vendor, run stress tests, and monitor models 24/7. As a result, promising tools die in pilot hell while risk exposure grows.
We saw this firsthand while deploying AI at Columbia University Irving Medical Center, so we built Parachute. Columbia is now using it to track live AI models in production.
How it works: First, Parachute evaluates vendors against a hospital’s clinical needs and flags compliance and security risks before a pilot even begins. Next, we run automated benchmarking and red-teaming to stress test each model and uncover risks like hallucinations, bias, or safety gaps.
Once a model is deployed, Parachute continuously monitors its accuracy, drift, bias, and uptime, sending alerts the moment thresholds are breached. Finally, every approval, test, and runtime change is sealed into an immutable audit trail that hospitals can hand directly to regulators and auditors.
We’d love to hear from anyone with hospital experience who has an interest in deploying AI safely. We look forward to your comments!
iamgopal 1 day ago | parent
ariavikram 1 day ago | parent
iamgopal 22 hours ago | parent
jph 1 day ago | parent
ariavikram 1 day ago | parent
padolsey 1 day ago | parent
We're looking for domain experts especially in high risk domains like healthcare, education, therapy. Then we'd work together co-authoring an eval in your specialism to expose and motivate AI labs to do better.
robertlagrant 1 day ago | parent
potatoman22 1 day ago | parent
siva7 1 day ago | parent
jstummbillig 1 day ago | parent
For example, consider what happens in this video: https://www.youtube.com/watch?v=AZhCYisIQB8&t=2s
Please don't make this mistake of thinking "aha, but you see, a human intervened!" This will never happen in the real world for the vast majority of humans in a similar scenario.
potatoman22 1 day ago | parent
ariavikram 1 day ago | parent
padolsey 1 day ago | parent
Usually you can run human-in-the-loop spot checks to ensure that there's parity between your LLM evaluators and the equivalent specialist human evaluator.
pizzathyme 1 day ago | parent
Next up is just great execution by you all!
That list of logos you all have - are those paying customers today?
Best of luck!
seriusam 1 day ago | parent
ariavikram 1 day ago | parent
We use in-house evals (based on existing state-of-the-art benchmarks) to compare ambient scribes.
If you take a deeper look into the companies on our landing page, you will see that the first list refers to the compliance standards our workflows follow and the second refers to the existing tools we integrate with.
seriusam 1 day ago | parent
> We use in-house evals (based on existing state-of-the-art benchmarks) to compare ambient scribes.
Have you validated that your in-house evals accurately reflect real-world performance?
> If you take a deeper look into the companies on our landing page, you will see that the first list refers to the compliance standards our workflows follow and the second refers to the existing tools we integrate with.
I am talking about your use of Abridge Ambient Scribe, Nuance and Deepscribe brands in your landing page. You have numbers on the number of beds, hourly efficiency and their costs next to the actual brands. I don't see any proper attributions or disclaimers.
Also, if you were to compare actual numbers you get from the websites, these companies can use different models for different users, have enterprise discounts for different organizations and etc. How are you planning on having access to these to make a proper comparisons to see what they would actually offer to their potential customer?
Fwiw, I am a fan of the "AI marketplace" runs. This one just raises a lot of questions for me.
But, good luck!
richwater 1 day ago | parent
Impossible to deliver
sgt 1 day ago | parent
potatoman22 1 day ago | parent
Here's a good overview of fairness: https://learn.microsoft.com/en-us/azure/machine-learning/con... and there's plenty of papers discussing how to safely use predictive analytics and AI in healthcare.
I don't know if this product can give proof for safe and fair ML systems, but it's not impossible to use these things safely and fairly.
ariavikram 1 day ago | parent
padolsey 1 day ago | parent
cactca 1 day ago | parent
Here are a few questions that should be part of an evaluation of the Parachute platform to pressure test the claims made on the website and this post: 1) How many Parachute customers have passed regulatory audits by CMS, OCR, CLIA/CLAP, and the FDA? 2) What high quality peer-reviewed scientific evidence supports the claims of increased safety and detection of hallucinations and bias? 3) What liability does Parachute assume during production deployment? What are the SLAs? 4) How many years of regulatory experience does the team have with HIPPA, ISO, CFR, FDA, CMS, and state medical board compliance?
tony-yamin 1 day ago | parent
fehudakjf 1 day ago | parent
We've all seen how powerful language can be in legal defenses surrounding the for profit healthcare industry of the united states.
What new "pre-existing conditions" alike thought, and legal argument, terminating phrases will these large language models come up with for future generations?
nradov 1 day ago | parent
shandrodo 1 day ago | parent
The OP provided you one such “time bomb”: pre-existing condition. This was, 40 years ago, a totally innocuous phrase and then it became a rally cry of health insurers’ “delay,deny,defend” modus operandi.
If a large language model is taking notes for a doctor how will you defend against it slipping in phrases such as this to allow insurers to avoid their responsibilities?
Tell me how your product is designed to defend people from health insurers, or admit how your product is designed to help health insurers.
nradov 1 day ago | parent
https://www.hhs.gov/healthcare/about-the-aca/pre-existing-co...
HIPAA also allows individuals to request amendments to their medical records if there are errors such as an incorrect diagnosis.
https://www.ecfr.gov/current/title-45/subtitle-A/subchapter-...
zmmmmm 1 day ago | parent
this is humans? I'm really not sure how this could be automated given the vast spectrum of applications and specific requirements complex organisations like hospitals have. It would have to boil down to "check box" compliance style analysis which in my experience usually leads to poor outcomes down the track (the worst product from every other point of view gets chosen because it checks the most arbitrary boxes on the security / compliance forms - then the integration bill dwarfs whatever it would have cost to address most of those things bespoke anyway).