A trial is much like a diagnostic test, actually. In general use, you don’t do a test because you KNOW there is something wrong, you do a test because you WANT TO KNOW if there is something wrong, or because you THINK there is something wrong.
Similarly, you usually don’t sue when you know all the facts of the case. You sue because there’s some important stuff–REALLY important stuff–in dispute. Was the procedure done right? Was a mistake made? Was the drug prescribed correctly? And so on. One person thinks yes, the other thinks no. Only the judge (or jury) can decide.
Nice points. So, let’s assume the reason we go to trial is to find the “truth” (i.e., whether medical error led to the injury). We “test” with a malpractice trial. According to Studdert, that test has a false positive rate of 37 percent, which leaves us with a test of 63 percent specificity. Furthermore, the costs of performing this test are ridiculously high, which everyone agrees on:
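To make the diagnostic-test framing concrete, here is a quick back-of-the-envelope sketch. The 37 percent false positive rate is the Studdert figure cited above; the case mix (1,000 tried cases, 600 with no genuine error) is a made-up illustration, not real data:

```python
# The "trial as diagnostic test" arithmetic.
# False positive rate (37%) is the Studdert figure from the post;
# the docket numbers below are purely illustrative assumptions.

false_positive_rate = 0.37              # trials "detecting" error where none occurred
specificity = 1 - false_positive_rate   # 0.63, as stated above

# Hypothetical docket: 1,000 tried cases, 600 involving no genuine error.
cases_no_error = 600
expected_false_positives = cases_no_error * false_positive_rate

print(f"Specificity: {specificity:.2f}")
print(f"Expected wrongful findings among {cases_no_error} no-error cases: "
      f"{expected_false_positives:.0f}")
```

Under those assumed numbers, roughly 222 of the 600 no-error cases would still end with a finding of error, which is what a 63 percent specificity means in practice.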
. . . a LOT of expense in medmal comes from the requirement to prove what SHOULD have happened. A lot of cases progress because people think something should have happened that didn’t happen (or that something happened which should not have happened). This is really expensive. You need experts. You need experts to fight with the experts. You need lawyers to talk to them all. You can have days of debate about what the “proper” procedure is to treat X, and why Y is (or is not) as good or better.
So, all I ask – is this the best we can do? An expensive test with 63 percent specificity? Some may think so. I happen to think we can do better.