Accountability sinks
A Reading Note
In The Unaccountability Machine, Dan Davies argues that organizations form “accountability sinks,” structures that absorb or obscure the consequences of a decision such that no one can be held directly accountable for it. Here’s an example: a higher-up at a hospitality company decides to reduce the size of its cleaning staff, because it improves the numbers on a balance sheet somewhere. Later, you are trying to check into a room, but it’s not ready and the clerk can’t tell you when it will be; they can offer a voucher, but what you need is a room. There’s no one to call to complain, no way to communicate back to that distant leader that they’ve scotched your plans. The accountability is swallowed up into a void, lost forever.
Davies proposes that:
For an accountability sink to function, it has to break a link; it has to prevent the feedback of the person affected by the decision from affecting the operation of the system.
Davies, The Unaccountability Machine, page 17
Once you start looking for accountability sinks, you see them all over the place. When your health insurance declines a procedure; when the airline cancels your flight; when a government agency declares that you are ineligible for a benefit; when an investor tells all their companies to shovel so-called AI into their apps. Everywhere, broken links between the people who face the consequences of the decision and the people making the decisions.
That’s assuming, of course, that a person made a decision at all. Another mechanism of accountability sinks is the way in which decisions themselves cascade and lose any sense of their origins. Davies gives the example of Dominion Voting Systems v. Fox News, in which Fox News repeatedly spread false stories about the election. No one at Fox seems to have explicitly made a decision to lie about voting machines; rather, there was an implicit understanding that they had to do whatever it took to keep their audience numbers up. At some point, someone had declared (or else strongly implied) that audience metrics were the only thing that mattered, and every subsequent decision followed from that. But who can be accountable for a decision that was never actually made?
It’s worth pausing for a moment to consider what we mean by “accountable.” Davies posits that:
The fundamental law of accountability: the extent to which you are able to change a decision is precisely the extent to which you can be accountable for it, and vice versa.
Davies, The Unaccountability Machine, page 17
Which is useful. I often refer back to Sidney Dekker’s definition of accountability, in which an account is something that you tell. How did something happen? What were the conditions that led to it happening? What made the decision seem like a good one at the time? Who were all of the people involved in the decision or event? (It almost never comes down to only one person.) All of those questions and more are necessary for understanding how a decision happened, which is a prerequisite for learning how to make better decisions going forward.
If you combine those two frameworks, you could conclude that to be accountable for something you must both have the power to change it and understand what you are trying to accomplish when you do. You need the power and the story of how that power gets used.
The comparisons to AI are obvious, inasmuch as delegating decisions to an algorithm is a convenient way to construct a sink. But organizations of any scale—whether corporations or governments or those that occupy the nebulous space between—are already quite good at forming such sinks. The accountability-washing that an AI provides isn’t a new service so much as an escalated and expanded one. Which doesn’t make it any less frightening, of course; but it does perhaps provide a useful clue. Any effort that’s tried and failed to hold a corporation to account isn’t likely to have more success against an algorithm. We need a new bag of tricks.