Monthly Archives: June 2019

Should robots pay taxes?

Andrew Yang is a Democratic contender for president of the United States. He has expressed concern about a ‘jobless future’, as Martin Ford puts it [Rise of the Robots: Technology and the Threat of a Jobless Future], arising from technological change, in particular from the application of artificial intelligence in the workplace, which may produce mass unemployment, perhaps permanently.

Yang is rightly worried about this jobless future and has proposed a Freedom Dividend, otherwise known as a universal basic income, to deal with pending mass unemployment.

Two outcomes are possible. Either technological change will be like it has been in the past, where even disruptive changes have created new and different jobs in other parts of the economy, or this time it is different.

If it is indeed different this time, we need to rethink our assumptions about how our workforce is structured. I have proposed the term ‘cognology’ to describe the embedding of AI-type capabilities into ‘things’ to create smart technologies, to emphasise the essentially cognitive nature of these new capabilities. This has certain consequences. Let’s have a quick look.

Abbott and Bogenschneider [Should robots pay taxes? Tax policy in the age of automation, Harvard Law and Policy Review, 12:2018, and with apologies for using their title] make the point that tax policy has focused on labour (the employment of people) and not on capital (the things people use). They write that the tax system breaks down when the labour is capital. The important consideration, though, is that this applies only when the capital is a substitute for the labour; that is what saves us from taxing pencils, since pencils don’t write by themselves.

They also point out that the tax system actually incentivises automation: firms can replace humans with robots and avoid the taxes that come with employing people.

From a policy analysis perspective (using the Wilson matrix), this makes firms free riders, as the wider costs of the labour displacement they create are not costs they incur directly. These costs are transferred to society as a whole. Given that the tax base is likely to shrink with unemployment (as it does whenever employment drops), governments will find it hard to finance these costs and will need to borrow against an uncertain future.

I would like to propose that robots and cognitive decision systems (i.e. software) that replace humans are in fact a type of “labour substitution”, and that firms should bear the costs of that substitution through a tax on these technologies.

If we start to think of these technologies as labour substitution, then we have a much larger frame for understanding the costs and benefits that arise from them. Search engine companies extract the value of the search, but have not borne the costs of the librarians no longer needed. Yang wants to tax them directly, but this just creates a NIMBY situation and opens the door to tax avoidance. By casting the tax net widely, as the quasi-universal tax on employed labour does, paid in part by workers and in part by employers, we get closer to a more equitable and socially effective approach to taxing technology.

As always, the difficulty is in measuring the effort to be taxed: a salary is easy, but how much labour is in a decision support system that assists a doctor by scanning mammograms for tumours? Is it one radiologist equivalent, or many?
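To make the measurement question concrete, here is a minimal sketch of how such a labour-substitution tax might be computed once the hard part, the full-time-equivalent (FTE) estimate, has been made. All names and figures here are illustrative assumptions, not real tax rates or salaries: the idea is simply to tax the system as if it were the workers it replaces.

```python
# Hypothetical sketch: taxing a "cognology" by expressing it in
# full-time-equivalent (FTE) workers. Figures are illustrative only.

def labour_substitution_tax(fte_equivalent: float,
                            reference_salary: float,
                            payroll_rate: float) -> float:
    """Tax owed if the system were taxed like the labour it replaces."""
    return fte_equivalent * reference_salary * payroll_rate

# Assume a mammogram-reading decision support system does the screening
# work of 3.5 radiologists, at a reference salary of 120,000, taxed at
# a combined employer-plus-employee payroll-style rate of 20%.
tax = labour_substitution_tax(fte_equivalent=3.5,
                              reference_salary=120_000,
                              payroll_rate=0.20)
print(round(tax))  # 84000
```

The arithmetic is trivial; the policy difficulty sits entirely in the `fte_equivalent` estimate, which is exactly the "one radiologist or many?" question above.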

The other thing to consider in an AI future is how to factor the labour substitution effect of cognologies into workforce planning itself. After all, we cannot have our already unreliable workforce planning made even more unreliable. Poor workforce estimates feed through to the production of graduates from universities and colleges, which may not be taking into account the work of intelligent machines. Perhaps those intelligent machines will be taking their online courses.

While Yang’s position is reasonable, it is misguided. We really need to come to grips with technologies as a substitute for labour, and determine their labour-equivalent effect for taxation purposes. This will go some way towards determining what real workforce displacement is likely. It may be that under that scenario, where cognologies are fully costed against labour, we are better able to value the human condition, rather than exploit its weaknesses.

So, what do you think? Should robots (and intelligent decision systems) pay taxes?


Pain is, well, a pain. It is the one thing we all have direct experience of, and can communicate to others, but which defies direct clinical measurement. We are left with subjective measures of pain, such as the Oucher scale or similar.


While it is frustrating not to have a direct way to measure pain, it is a reminder that pain is also something we create. Elaine Scarry’s insightful book, The Body in Pain [], puts pain, I think, into the right context: easy to feel but hard to describe, so we are left with metaphors. There is also historical evidence that today’s patients are less tolerant of pain itself, as Edward Shorter wrote about in “Doctors and Their Patients: A Social History” [].

In terms of my own views, I once chaired a review of my hospital’s pain management, both acute and chronic, after we found poor compliance with pain protocols, so something wasn’t working. Indeed, we learnt that burn patients may experience psychosis from the medicines that in effect separate their heads from their bodies to minimise the burn pain: they experienced a sense of being disembodied, as they couldn’t feel their bodies. Surgeons, we also learnt, are of two types: those who would medicate post-operative pain to just above the pain threshold, and those who felt the patient should perceive no pain. And when patients could control their own pain medication post-operatively with a pump, they tended to take less. The pain was telling them something important. Having said that, pain complaints are a frequent cause of medical litigation, as patients often feel that if they leave hospital in pain, the procedure was not successful. This, too, is telling us something important. In all cases, pre-operative pain counselling was an important part of preparing the patient for surgery.

This, of course, brings us to opioids.

While there is ongoing litigation, it is not appropriate to speculate on the outcome. It is, however, possible to look at the “pain ecosystem” and ask whether we can learn something. While today it is opioids, tomorrow it may be psychoactive drugs for depression, or something we can’t today imagine.

Patients and doctors exist in a type of dance. Patient expectations, perhaps culturally or socially influenced, lead doctors to prescribe. And doctors need a good way to end the consultation, apart from standing up and holding the door open. For many patients, not getting a prescription is evidence that their needs have not been taken seriously; given how little time doctors actually spend listening to a patient (about 13 seconds!), should we be surprised? Doctors, too, exist in a type of dance with medicines, and they are influenced as much by weak clinical and practice guidance as by the low quality of evidence and information available. They are left to take each patient on their own merits, as they say, “the patient before me”, and do what they think will work. Of course, there may be influential peers advocating specific pain practices, and often for pay: the so-called Key Opinion Leaders.

The opioid crisis is a creature born of that broken medicines system. It plays off the anxieties of patients, and the belief that somewhere “there’s a pill for that”. It plays off the failure of regulators to do their job: to ensure robust clinical guidance, and pain audits that capture emerging clinical and medicines risk. It plays off the failure of prescribing doctors to exercise evidence-informed control, and indeed self-restraint, over the use of medicines. And it plays off the inappropriate use of incentives to pharmaceutical sales representatives.

Seen that way, as a systems problem, the opioid crisis is based on a profound ignorance and a lack of evidence-informed judgement by patients, doctors, industry and regulators.

If we do not get this sorted out, we will simply have this type of problem reassert itself again; indeed, it may already be lurking in the data.