Distributional Trust

I've written previously about factors impacting cooperation, especially in the presence of large disagreements. One factor related to cooperation that has been on my mind lately is trust. There are obviously moral and interpersonal components to trust, but if we are willing to relax those meanings of the word for a bit, another interesting way to view trust is simply as the ability to predict how someone will act[1]. This is interesting from the perspective of handling disagreements because it allows for trust without necessarily "getting along" in a more conventional sense.
This notion of trust brings a certain property into focus that I think is true of a lot of other notions as well, which is that trust tends to go hand in hand with certainty. I think this is pretty clear in the "predictive trust" case. High prediction accuracy means high confidence in the person's future actions. Achieving this accuracy would probably also require having lots of information about the person. In the language of interpersonal trust, I think this lines up with the idea of "knowing" a person well. The more insight you have into how a person thinks and acts, the more confident you can be in your assessment of them.
This property makes sense in a lot of contexts, but I'm not sure it's always desirable. For example, it seems to imply that wanting privacy is inherently untrustworthy. Privacy is about concealing certain things, denying others information that might increase their confidence in predictions about how the private person will act. Even when a person has seemingly legitimate reasons to conceal information, the predictive notion of trust counts this as a negative.
I'm interested in what it would mean to try to relax or remove this property from the idea of trust. I definitely think this property is useful and valid in many circumstances. Still, I think it's worthwhile to imagine what "trust" could mean if we try to strip out the relationship with certainty.
Trust without certainty
To that end, I want to describe an initial idea that attempts to describe what it might mean to "trust" in a way that doesn't equate increased certainty with increased trust. When thinking about this, I found the idea of variance helpful. Saying we don't want to penalize high uncertainty means high variance is okay. We are attempting to measure bias, while leaving variance out of the equation.
As an example, consider asking someone if they'd like to join you for coffee. A typical trustworthy person may agree to coffee, and then reliably show up. An untrustworthy person may agree, but can't be depended on to actually show up. But what about a person who says "there's a 50% chance I will show up for coffee"? In the prediction sense, this is no better than saying you will show up and then showing up only 50% of the time: it's still high variance, and you will still struggle to predict whether the person will show up. But I think there is an intuitive sense in which they are more worthy of trust than the person who always says they will show up, but only does so 50% of the time.
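The contrast above can be made concrete with a small simulation. This is a hypothetical sketch, not a formal model: both people show up half the time, but one promises to come every time while the other states 50%, and we score each stated probability against the outcomes with a Brier-style squared error.

```python
import random

random.seed(0)
TRIALS = 10_000

# Both people actually show up about 50% of the time.
shows = [random.random() < 0.5 for _ in range(TRIALS)]

def brier(stated, outcomes):
    """Mean squared error between a stated probability and 0/1 outcomes."""
    return sum((stated - o) ** 2 for o in outcomes) / len(outcomes)

# Overpromiser: says "I'll be there" (probability 1.0) every time.
print(brier(1.0, shows))  # ~0.5: badly miscalibrated
# Hedger: says "50% chance I show up" every time.
print(brier(0.5, shows))  # 0.25: the best achievable score here
```

Neither statement lets you predict an individual outcome, but the hedger's statements match reality, which is the sense in which they seem more worthy of trust.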
We could imagine a notion of trust that is based on calibration alone, ignoring precision. This is similar to how Brier scores, in the context of probabilistic prediction, can be decomposed into multiple terms, one of which represents calibration[2]. You can make well-calibrated predictions that are nevertheless not very informative. This captures the idea of trust without certainty: the statements can be relied on in a calibration sense even if they don't contain a lot of predictive information.
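As a minimal sketch of that calibration term: group predictions by stated probability and compare each stated probability with the empirical frequency of the outcomes attached to it. A forecaster who always says 0.5 about events that happen half the time scores perfectly on this term despite saying very little.

```python
from collections import defaultdict

def reliability(predictions):
    """Calibration (reliability) term of the Brier decomposition:
    for each stated probability p, compare p to the empirical
    frequency of its outcomes, weighted by group size."""
    groups = defaultdict(list)
    for p, outcome in predictions:
        groups[p].append(outcome)
    n = len(predictions)
    return sum(len(o) * (p - sum(o) / len(o)) ** 2
               for p, o in groups.items()) / n

# Well-calibrated but uninformative: always 0.5, events happen half the time.
vague = [(0.5, True), (0.5, False)] * 50
print(reliability(vague))  # 0.0: perfectly calibrated

# Overconfident: says 0.9, events happen half the time.
overconfident = [(0.9, True), (0.9, False)] * 50
print(reliability(overconfident))  # ~0.16: poorly calibrated
```

Lower is better, and the term is indifferent to how sharp (informative) the stated probabilities are, which is exactly the property being described.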
I propose to call this concept distributional trust. If we understand a person's statements or actions to imply a distribution of their future actions, then they are distributionally trustworthy if their future actions follow this implied distribution. I say "implied" because many statements won't be explicit probability statements. Instead of saying "50% chance I show up for coffee" people are much more likely to say "maybe". Obviously I'm cheating a bit by packing a lot of the complexities into how the distribution is determined. Maybe it should be based on what a typical person in that context and culture would think the statement implies. Maybe there is some other way to operationalize the idea. I'm not sure on that part. But the idea of distributional trust assumes a way to do so, and evaluates "trust" as the extent to which a person conforms to this implied distribution.
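One way the "implied distribution" step could be operationalized, purely as a sketch: assume a hypothetical mapping from vague statements to probabilities (the mapping below is invented for illustration, not proposed as the right one), then measure how far a person's track record drifts from what their statements implied.

```python
from collections import defaultdict

# Hypothetical mapping from vague statements to implied probabilities.
IMPLIED = {"definitely": 0.95, "probably": 0.75, "maybe": 0.5, "unlikely": 0.2}

def distributional_trust_gap(history):
    """history: list of (statement, showed_up) pairs.
    Returns, per statement, the gap between the implied probability
    and the observed frequency; small gaps mean the person is
    distributionally trustworthy under this mapping."""
    outcomes = defaultdict(list)
    for statement, showed in history:
        outcomes[statement].append(showed)
    return {s: abs(IMPLIED[s] - sum(o) / len(o))
            for s, o in outcomes.items()}

history = ([("maybe", True), ("maybe", False)] * 20
           + [("probably", True)] * 3 + [("probably", False)])
print(distributional_trust_gap(history))
```

This person said "maybe" and showed up half the time, and said "probably" and showed up three times out of four, so both gaps are zero: unpredictable, yet distributionally trustworthy under the assumed mapping.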
Why distributional trust might be worthwhile
One of the reasons a person may care about "trust" is that they need to plan their behavior in light of another person's actions. Distributional trust has a real disadvantage for this application, because the variance of a person's actions still matters a lot here. If I want to meet someone for coffee, I still incur the costs if I show up and they don't, regardless of the probability they gave for showing up. At the same time, knowing the probabilities does give me some advantages for planning. I can assess the risk of a no-show, and if they are distributionally trustworthy, my risk calculations can be accurate. I can decide whether the probability of them showing up is worth the risk for me, or whether I should just cancel the meeting altogether. Making decisions based on distributional trust won't always result in the right decision, but it does allow the risks involved to be accounted for accurately.
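That planning advantage amounts to a simple expected-value calculation. The values below (the worth of the meeting, the cost of a wasted trip) are illustrative stand-ins, but they show how an accurate probability, even one far from 0 or 1, supports a defensible decision.

```python
def worth_going(p_show, value_of_meeting, cost_of_trip):
    """Go if the expected gain from the meeting exceeds the
    guaranteed cost of making the trip."""
    expected_gain = p_show * value_of_meeting - cost_of_trip
    return expected_gain > 0

# "50% chance I show up", meeting worth 10, trip costs 3:
print(worth_going(0.5, 10, 3))  # True: 0.5 * 10 - 3 = 2 > 0
# Same costs, but only a 20% chance they show:
print(worth_going(0.2, 10, 3))  # False: 0.2 * 10 - 3 = -1
```

If the person is distributionally trustworthy, these expected values are accurate; individual outcomes can still go badly, but the risk was priced correctly.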
[1] For example, see the discussion of "predictive trust" on the "trust" wiki article.
[2] "Calibration" meaning something predicted to happen x% of the time happens x% of the time on average in large samples.