Tech as a moral smoke screen

Predictive policing. Software used in US cities to plan police patrol routes so that officers stick around “hot spots of crime”, optimizing their time by keeping them in the places where crime is most likely to occur. You have probably also heard the criticisms of this software. Since it takes as its prior where crime was noticed by police, and not where crime actually occurs, it tends to designate the places the police already patrol as “hot spots”. And those places are usually poor and inhabited by minorities. This can only result in a self-reinforcing feedback loop: police patrol those specific places more often and report more crime there, which prompts the software to send even more officers there, and so on.
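To make the loop concrete, here is a deliberately tiny toy simulation (all numbers are invented, and this is not a model of any real product): two districts with the exact same true crime rate, where the software sends every patrol to whichever district has the most *recorded* crime, and crime only enters the record where police actually patrol.

```python
# Toy model of the feedback loop; every number here is hypothetical.
# Both districts have the SAME true crime rate.
TRUE_CRIME_PER_DAY = 30  # crimes a full patrol presence would record per day

# Recorded crime; district 0 starts with a small historical bias.
reports = [10, 5]

for day in range(100):
    # "Hot spot" logic: all patrols go where the most crime was recorded.
    hot = 0 if reports[0] >= reports[1] else 1
    # Crime is only recorded where police actually patrol.
    reports[hot] += TRUE_CRIME_PER_DAY

print(reports)  # -> [3010, 5]: district 0 absorbs every new report
```

Despite identical underlying crime, the initial bias of five extra reports decides everything: the software never sends a patrol to district 1 again, so its recorded crime stays frozen while district 0’s grows without bound.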

In the end, the effectiveness of this software is very questionable. So why would police departments buy it? Officers need to optimize their time, and the technology, at the time, was untested, so it was perfectly legitimate to try it out. Now that we have tried it out, we know the result: more discrimination, and officers deflecting responsibility: “I didn’t choose to patrol only the black streets, the computer told me to.”

Rent-fixing software. In a few places in the US, landlords use a private company’s “algorithm” to choose when to raise rents and evict tenants. This greatly improved returns on real estate, at the cost of more empty apartments and more evictions. The US government seems fairly unhappy with this; it considers it a price-fixing scheme. Again, the landlords reply: “We didn’t fix any price, the computer told us to.”

Health care companies use software to deny claims at breakneck speed. The software is given a minimum percentage of claim denials to produce, and uses a set of heuristics to choose the claims most likely to pass as fraudulent. Supposedly, a doctor is part of the decision loop, but in practice the doctor merely rubber-stamps what “the computer told them”.
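The mechanism can be sketched as follows (every field name, threshold, and quota here is hypothetical, invented purely for illustration): score each claim with simple heuristics, then deny the top-scoring claims until the quota is met. This is exactly the kind of pre-made output a reviewing doctor ends up rubber-stamping.

```python
import math

# Hypothetical quota-driven denial pipeline; all names and numbers invented.
DENIAL_QUOTA = 0.30  # deny at least 30% of incoming claims

def denial_score(claim: dict) -> float:
    """Heuristics estimating how plausibly a denial will go unchallenged."""
    score = 0.0
    if claim["amount"] > 10_000:
        score += 2.0
    if claim["out_of_network"]:
        score += 1.5
    if claim["prior_denials"] > 0:
        score += 1.0
    return score

def select_denials(claims: list[dict]) -> list[dict]:
    """Rank claims by score and deny the top ones until the quota is met."""
    n_deny = math.ceil(len(claims) * DENIAL_QUOTA)
    return sorted(claims, key=denial_score, reverse=True)[:n_deny]

claims = [
    {"id": 1, "amount": 15_000, "out_of_network": True,  "prior_denials": 0},
    {"id": 2, "amount": 500,    "out_of_network": False, "prior_denials": 0},
    {"id": 3, "amount": 2_000,  "out_of_network": False, "prior_denials": 2},
]
print([c["id"] for c in select_denials(claims)])  # -> [1]
```

Note that the quota guarantees a stream of denials regardless of whether any claim is actually fraudulent; the heuristics only decide which denials are least likely to be challenged.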

Software is used in the current Middle East conflict. The software identifies likely enemy operatives for the military. After a 20-second review, the decision is made to strike the operative and their entire family (because the military only had software to identify their homes, not their actual positions). Journalists, humanitarians, police officers, and nurses have been designated by this software. Right now the military says “we never did this”, but you can bet that in two years, during a judicial inquiry, we’ll hear the same “the computer told me”.

I was careful to write “software” rather than “AI” in those paragraphs, because “AI” implies that the computer made the decision. Computers do not make decisions. A computer can do one thing only, and it always does it without fail: it follows instructions.

The innovation here doesn’t lie in technical novelty or programming. There is one innovation, and one innovation only: a previous barrier, that of ethics, has been torn down.

The scenario

Let’s write down what happens in those four real-world events in a generic way. Call it the tech smoke screen scenario.

We have three actors: (1) the executor, whose role is to read the software’s output and execute its orders in the real world; (2) the authority, which imposes the use of the software on the executor and holds ultimate economic and political authority over them; (3) the programmer, who developed software that acts as a proxy for the decisions of the authority.

The authority bought software made by the programmer, then required the executor to use it and do as it says.

In a world without software, the executor would have some agency over their decisions; they would refuse to execute orders they judge too arbitrary, illegal, or unfair. In a world with software, the executor trusts that the order they received is the result of a complex, fair analysis, and that it is legal, so they execute it without further question. Yet in practice, the software just says what the authority wants it to say. And an illegal act is committed: killing civilians, refusing to pay legitimate claims, fixing prices, ethnically discriminatory policing.

The authority picked the software specifically because it produces the kind of orders the authority wants. The authority may not even realize it is using the software as a pretext for, or a way of laundering, crimes. It is perfectly possible (and easy) to accidentally adopt software that just acts as a proxy for your own preferences.


This results in an ethical and legal breach. So who is responsible? In this case, I think the authority is always responsible, and in some special cases the programmer is too.

The authority is the one in power, and it decided to use software to coerce the executor into actions they wouldn’t otherwise carry out. Even if the authority isn’t aware that it is using the software for coercion, it should be.

The programmer is the one enabling the authority by creating software that launders the authority’s illegal orders. But the programmer may not themselves understand the nature of their software, or may simply have designed software that was diverted from its initial purpose.

Note that in this scenario, software has nothing to do with the issue. You could replace “software” with “haruspicy” or “gizmo” and get the same result. It is not a problem of software or new technology, so no tech regulation will fix it (although regulation may fix other problems).

It’s a problem of social relations: the abuse of faith in technology to coerce people into doing things they wouldn’t do otherwise.

In this case, new technology is solely a smoke screen to get executors to act without judgement.


So how can we avoid this? If tech regulation has zero impact on this kind of technology use, what can we do?

There is a silver lining. Here, unlike in the old-school scenario with only an authority and an executor, we have an additional actor, the programmer. This multiplies the number of ways we can prevent the tech smoke screen scenario from repeating. We need to alert all potential actors to the pitfalls of tech smoke screens. We need to kill the toxic belief that technologies are inherently positive, or indeed that they have any inherent property at all.

The tech smoke screen can be played out while all actors act in good faith. So if any of the three parties understands the game being played, they can break the mechanism. But for that to happen, they need to be aware of the concept of a tech smoke screen in the first place. Teaching this kind of ethical analysis in computer science and management curricula is important, as it can prevent major social harms.