
Real world data and evidence in healthcare: The market access challenge

We are in a new world when it comes to access to and the use of data and evidence. Real world data and evidence take us from structured studies to the routine delivery of healthcare: the actual use of a medicine and the patient's actual health status.

What is knowing this worth and to whom?

Real world data is best understood in the context of decision making: the choices that are made, how they are made, and the consequences that flow from them. To illustrate:

  1. Patients get the wrong treatment because they are misdiagnosed. This is a particular issue for patients with rare diseases, who are often not just treated for the wrong condition but placed in the wrong treatment pathway altogether.
  2. Clinical reasoning may be flawed. The issues here include medical misdiagnosis, the style of clinical reasoning itself (backward- versus forward-driven), the rules for diagnosis, guidelines, the order in which items are listed in the differential diagnosis, and the behavioural heuristics that shape clinical reasoning. Medical errors are more associated with backward-driven reasoning (the hypothetico-deductive method), while forward-driven reasoning begins with the data and produces fewer errors. Other concerns include doctors' reluctance to make a rare disease diagnosis (the 'zebra retreat'), inappropriate referral after diagnosing a mimic, sending the patient off to the wrong specialist, not listening to the parents of ill children, and so on.
  3. The treatment is the problem. Even if the diagnosis is correct, the success or failure of a treatment often depends on whether the patient is adherent, and on whether adverse drug events alter the patient's acceptance of the medicine. Some patients may simply not respond to the treatment.

What does that mean?

Much of this is enabled by computational methods and machine learning, which use real world data to support precision medicine, case finding, precision cohort identification and the definition of treatable populations.

Regulators currently rely on industry reporting of adverse drug events. RWD could enable regulators to monitor the market directly in real time and identify adverse events themselves, which would alter the pharmacovigilance system. They could also gather data on off-label use (for and against) to assess the validity of treatment claims.
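
To make the pharmacovigilance point concrete, here is a minimal sketch of one standard signal-detection calculation, the proportional reporting ratio (PRR), which regulators could compute directly over real world reports rather than waiting for industry submissions; the counts used are entirely hypothetical.

```python
# Proportional reporting ratio (PRR): a standard disproportionality measure
# used in pharmacovigilance signal detection. All counts below are hypothetical.

def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """
    a: reports of the event of interest for the drug of interest
    b: reports of all other events for the drug of interest
    c: reports of the event of interest for all other drugs
    d: reports of all other events for all other drugs
    """
    drug_rate = a / (a + b)    # event rate among reports for this drug
    other_rate = c / (c + d)   # event rate among reports for all other drugs
    return drug_rate / other_rate

# Example: 40 of 1,000 reports for the drug mention the event, versus
# 200 of 50,000 reports for all other drugs.
print(proportional_reporting_ratio(40, 960, 200, 49_800))  # ≈ 10, a strong signal
```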

RWE may speed regulatory approval as the studies are tightly focused, don’t make expansive product claims and benefits are easier to demonstrate, thereby reducing regulatory risk.

Reimbursement regulators, providers and payers benefit from the potential to improve the quality of care as delivered to patients. This is enhanced by the development of more sophisticated decision support tools built on e.g. computational approaches or embedded in electronic record systems. This includes, for example, ‘red flagging’ tools to improve differential diagnosis, identify mimics, and trigger appropriate clinical suspicion as well as ‘referral filters’ to address inappropriate referrals, and so on. All these improve the value for money equation, and importantly reduce treatment risk, which drives avoidable costs out of the system.

Pharmaceutical companies can use this type of data to inform their drug portfolio development. It would bring some order to research and development, improving internal priority setting and the assessment of research targets, in particular by avoiding research bias (the impact of behavioural heuristics on R&D decision making, for instance). The impact on trials cannot be ignored: synthetic control arms, more precise trial cohorts that avoid screening the 80% or more of individuals who are not selected for a trial (perhaps saving 60% or more of trial costs), and the prediction of trial outcomes.
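
As an illustration of the synthetic control arm idea, here is a minimal sketch (not a validated method) that matches trial participants to untreated real world patients on a propensity score; the data frames and column names ("age", "biomarker" and so on) are hypothetical.

```python
# Build a synthetic control arm by matching each trial participant to the most
# similar untreated real-world patient on a propensity score. Illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def synthetic_control_arm(trial: pd.DataFrame, rwd: pd.DataFrame, covariates: list) -> pd.DataFrame:
    # Model the probability of being a trial participant from shared covariates
    combined = pd.concat([trial.assign(in_trial=1), rwd.assign(in_trial=0)], ignore_index=True)
    model = LogisticRegression(max_iter=1000).fit(combined[covariates], combined["in_trial"])

    trial_score = model.predict_proba(trial[covariates])[:, 1].reshape(-1, 1)
    rwd_score = model.predict_proba(rwd[covariates])[:, 1].reshape(-1, 1)

    # Nearest-neighbour match on the propensity score
    nn = NearestNeighbors(n_neighbors=1).fit(rwd_score)
    _, matched_idx = nn.kneighbors(trial_score)
    return rwd.iloc[matched_idx.ravel()]

# Hypothetical usage:
# covariates = ["age", "biomarker", "disease_stage"]
# control_arm = synthetic_control_arm(trial_df, rwd_df, covariates)
```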

Dossier submissions can be evidence-informed with respect to the size of the treatable population and patient response to treatment, reducing the payer risk that manifests itself in refusals to reimburse.

The table below suggests just a few of the changes from current market access to data-driven RWE market access.

Needless to say, this alters the underlying assumptions of pharmacoeconomics, medicines pricing and positioning.

The table distinguishes between what today could be called "push" market access, a sales-driven approach to product placement, and "RWD/RWE" market access, which reduces risk and improves the opportunities for demonstrating product value.

| Stakeholder | 'Push' market access | RWD/RWE market access |
| --- | --- | --- |
| Patient | Risk of non-beneficial treatment | Precise patient treatment cohorts |
| Patient | Risk of mis-/missed diagnosis, medical error | Precision diagnosis with decision support tools |
| Clinician | Uncertainty about the benefits of treatment and the 'halo' of uncertainty inherent in clinical decision making | Precision patient identification releases benefits through treatment targeting |
| Payers | Pay for uncertain benefits | Pay only for responders |
| Payers | Pay for treatment of non-responders | Precision medicine to demarcate the treatable population |
| Payers | Pay for non-adherent treatments | Pay only for adherence, and reduce the risk of non-adherence |
| Payers | Risk averse toward uncertain treatable populations | Risk managed for an evidenced treatable population |
| Pharma industry | Weak evidence for the size of the treatable population, with a "price per pill" | Precision patient cohorts define the treatable population, with cohort pricing |
| Pharma industry | Missing Phase 4 evidence | Good quality Phase 4 evidence |
| Pharma industry | Risk of non-adherence and non-responders | Reduced risk through precision case finding |
| Pharma industry | Missed patients | Find the true treatable population |
| Pharma industry | Drives costs into the healthcare system | Removes costs from the healthcare system |

Innovation and Academic Health Science Centres: some policy thinking

To some extent Academic Health Science Centres [AHSCs] are caught between the research push and market pull.

If they prioritise technology transfer, they opt for a research push approach that emphasises the availability of technologies or innovations for market-based actors based on potential commercial application. While the university mission within an AHSC will emphasise the quality of the technology rather than end-user or market benefits, this fails to address the adoption opportunities available from the healthcare service mission.

The primary weakness of ‘research push’ is that the acceptance of new technologies generally depends more on social and cultural factors within clinical communities, than on the merits of the technology itself.

On the other hand, the transformation of research into innovations that can be used to solve problems facing practitioners and patients is ‘market pull’, or perhaps more precisely ‘solution’ pull. Internal research and development communities of an AHSC need to be closely linked to the problems faced by practitioners and patients.

However, researchers often lack the inclination to pursue the innovation exploitation agenda. Indeed, the focus on adoption and the translation of research arises precisely because research productivity has in the past been favoured over solving real-world problems. In healthcare, the problems needing solutions are swamped by a vast sea of research, and many governments continue to fund research while wondering why they slide down innovation rankings; they value academic citations over patents, for instance.

Taking these different configurations of AHSCs into account, the organisational options also need to consider how innovations move from bench to bedside. A "gated" [1] scientific and market-based review process, with in-house industry expertise and a network of extramural experts for assessments, would create a degree of granularity, enabling assessment of benefit from the initial insight (pure research) through translation to end-user benefits. Some well-known institutions use gated processes to filter research to identify innovations, or to assess the market-readiness of research for commercialisation.
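
As an illustration only, a minimal sketch of such a gated filter might look like the following; the gate names, criteria and scoring fields are hypothetical, not those of any particular institution.

```python
# A gated review: each stage applies its own evaluative criteria, and failing
# any gate stops the innovation at that stage. Gates and thresholds are invented.
from typing import Callable, NamedTuple

class Gate(NamedTuple):
    name: str
    passes: Callable[[dict], bool]   # the evaluative criteria for this stage

GATES = [
    Gate("scientific merit", lambda x: x["novelty"] >= 3 and x["evidence_quality"] >= 3),
    Gate("translational potential", lambda x: x["clinical_need"] >= 3),
    Gate("market readiness", lambda x: x["ip_position"] >= 2 and x["partner_interest"]),
]

def review(innovation: dict) -> str:
    for gate in GATES:
        if not gate.passes(innovation):
            return f"stopped at gate: {gate.name}"   # failure is fatal at that stage
    return "passed all gates"

print(review({"novelty": 4, "evidence_quality": 3, "clinical_need": 2,
              "ip_position": 3, "partner_interest": True}))
# -> stopped at gate: translational potential
```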

In the absence of internal gated processes, institutions may use external expertise, for example by out-sourcing the technology transfer process to market-facing intermediaries or commercialisation agents. This brings knowledge of markets through the retained third party. As the appetite for risk in life sciences among private equity and venture capital communities fluctuates over time, specialist groups are more likely to emerge as agents to commercialise intellectual property. Many universities have structured such relationships with firms that develop their intellectual property on a licensing basis, so the commercial benefits accrue to that firm and less to the university, which may receive royalties.

AHSCs, though, combine universities and hospitals, and so need to harmonise differing conditions of employment where possible, as external firms could be seen as exploiting these administrative challenges. The technology transfer route, which is very widely used, takes the context for commercial and entrepreneurial exploitation away from the AHSC, meaning AHSCs fail to build the internal capacity to assess intellectual property and know what to do with it.

An out-sourcing approach is useful when there is little internal interest in, or more likely ability for, commercialisation. The critical task for any AHSC is to make sure it has a view of the value of its own work; an intellectual property audit, for instance, is often necessary. If the AHSC lacks the ability to understand this, that is evidence it has failed to internalise the clinical service priorities of its hospital partner.

A risk facing any AHSC is being measured by an innovation metric that tracks spin-outs, licences or patents (rather than papers published or citations), as this may encourage premature commercial activity, often in the form of single-technology companies ("one-trick ponies"). These types of start-up generally have high failure rates and longer times to market; they may also be encouraged by a preference for simple licensing deals, reflecting commercial naivete, impatience or lack of interest (which could be evidence of complacency). Regrettably, such metrics can actually hamper the realisation of the translational research agenda and produce less innovation, since they measure the wrong thing (companies rather than solutions). A gated process would, or at least should, determine whether this was happening.

AHSCs with a strong entrepreneurial perspective may choose to develop their own venture funds to develop and 'de-risk' innovations, making them attractive for subsequent acquisition. This is an attractive option as it also encourages the development of a domestic market in which to invest. Whole countries, too, have pursued venture funds as a national strategy, with more failure than success. [2]

The US dominates life sciences research for a variety of reasons. One of these may lie in institutional flexibility tied to the market, which may be a partnering factor for AHSCs. Many well-established and highly differentiated not-for-profit research commercialisation institutes have been formed in the US, such as MITRE (from MIT) and SRI (from Stanford University), or endowed facilities have been created, such as Battelle and the Howard Hughes Medical Institute, while others have emerged as specialists in exploiting particular knowledge domains, such as the Santa Fe Institute for complexity and chaos theory. Comparable facilities in other countries that are separate from government are few, as most are tied in one form or another to state paymasters, such as Germany's Fraunhofer and its constituent institutes.

In many countries, one must wonder why there is not greater entrepreneurialism in creating novel organisational entities to take innovation agendas forward, as such institutions offer a better way of bringing innovations to market. The challenge for policymakers is to distinguish between enabling such institutions to exist and function within a mandate with a high level of autonomy, and government determining what they do.

In the end, the options for AHSCs may be constrained by the public funding rules and have little to do with innovation itself.

Setting the policy agenda for AHSCs

Underpinning the notion of an AHSC as a nexus of innovation is whether such a nexus will attract talent and entrepreneurial zeal. Obviously a process of development, extending perhaps over a number of years, may be necessary.

A policy agenda for AHSCs would entail thinking about the following:

Should life science research funding be aligned to favour AHSC-type arrangements?

Obviously this would lead to non-AHSCs losing funding, as well as encouraging the migration of researchers toward AHSCs. This may not be compatible with national policy goals on local employment and wealth creation based on simplistic notions of clustering.

While ensuring that other centres do not suffer a skills drain, we need to be mindful that AHSCs can quite easily poach talent, as they are more likely to offer superior opportunities. Increasingly, "brain circulation" (talented people moving from country to country and perhaps back home again) describes how researchers move about, rather than the narrow and parochial "brain drain". AHSCs are better positioned than other sectors to exploit the global mobility of talent, while research funding and innovation forum-shop for the most favourable locations; an AHSC needs to be seen as a most-favoured location.

Should the funding and performance management of higher education and healthcare systems take account of AHSCs' tripartite mission?

Since they are expensive, and disproportionately consume the healthcare expenditure budget, they may need to be judged by different performance standards. It might be better that AHSCs are accredited or recognised through explicit criteria rather than a system of self-certification.

Health professions education in traditional teaching hospitals should be replaced by AHSC supervised training arrangements; the logic here is to ensure that students have access to the best and appropriate clinical learning opportunities, within structured “clinical teaching” centres in healthcare providers. That hospitals are monopoly suppliers of clinical placements limits training opportunities but a focus on quality should prune that tree.

In addition, this would enable greater career mobility between academe and clinical service, even if such mobility challenged academic appointment criteria, or public sector employment requirements. Enabling greater flexibility here could encourage more entrepreneurs without losing them from training as well as create greater visibility of the value of entrepreneurialism within professional training.

Are current national restrictions on the ownership or management of hospitals and universities hampering the development of AHSCs?

In Europe, universities and hospitals would benefit from new ways of organising their interconnected missions, but there is much to be done to understand how they are evolving and what national forces are shaping them, as they are in the main subject to the will of the state.

Investigation is needed to identify the performance, role and function of AHSCs in Europe, and to understand whether they are in fact a nexus of innovation or a quagmire of bureaucratic interference, as this could be a rate-limiting factor in innovation development. The generally poor performance of European universities in international rankings may suggest the latter, and a misuse of public money.

The potential scope of AHSCs comprises innovations in technologies impacting clinical care (software, medical devices, medicines), and ways of working (demarcation of health professions, clinical workflow). It is necessary to review relevant policy environments to learn at least [1] whether policies enable or inhibit high performing AHSCs where they exist, [2] whether policies inhibit AHSCs coming into existence, and [3] whether policies have perverse consequences on research and innovation production.

What is the best way to design and constitute an AHSC?

The preferences outlined here seek to understand the form/function balance, but we need more empirical evidence within the models to assess whether there is a critical size below which an AHSC may be ineffective in terms of mission attainment. Size alone may not be as important as the ability to align the various components as needed, which is more a function of autonomy.

Nevertheless, size does matter to the extent that a small dysfunctional academic/hospital network or partnership will only become a small dysfunctional AHSC. This gives us one reason we need something better than sui generis self-certification as claims of excellence need evidence.

Notes

[1] A 'gated' review process involves assessing an innovation at different stages using specific evaluative criteria. Failing to pass a gate is fatal at that stage, so only innovations that have passed all the gates (which may also be thought of as filters) emerge at the end. Gated innovation processes are used by scientifically oriented organisations such as NASA and military defence agencies. A gated process must have a failure regime to be meaningful, which has consequences for the performance assessment of research productivity.

[2] See as an example: D Senor, S Singer, Start-up Nation: the story of Israel's economic miracle, Council on Foreign Relations/Hachette, 2009. This, though, needs to be balanced against the more cautionary perspective on the improper role of government in commercialisation in J Lerner, Boulevard of Broken Dreams: why public efforts to boost entrepreneurship and venture capital have failed and what to do about it, Princeton, 2009. A useful comparison of venture funding relevant to this discussion is J Lerner, Y Pierrakis, L Collins, AB Biosca, Atlantic Drift: venture capital performance in the UK and the US, NESTA, June 2011.

Managerial control of medicines cost drivers

It is not unreasonable to have concerns about the cost of medicines.

Drug costs are usually influenced by government policies on the pricing and reimbursement of medicines. These range from simple discount seeking to more complex approaches such as conditional approvals and value-based pricing (perhaps a subject for another posting). They can achieve a measure of drug cost control, but may also distort the market for medicines.

For instance, tendering for generic medicines can sometimes lead to unacceptable consequences, such as unexpected product substitution by suppliers, patient and clinician confusion as medicines change appearance, and complications in medicines management for pharmacists. And a 'winner takes all' award of contract can mean that the losers exit the market, removing a source of price competition and choice for consumers and governments. This is an unintended but avoidable consequence of using such a crude procurement instrument.

Regulation and health technology assessment together challenge the free pricing of medicines, but it is unsurprising that medicines should be subject to some assessment of efficacy and performance in the real world, and not just on the results of clinical trial evidence from a highly selected study population. HTA has also thrown into the spotlight the logic by which drug prices are established by the pharmaceutical industry. This scrutiny is not a bad thing, as it highlights the methodologies used and whether they accurately produce a price reflecting the value of the medicine as used. Separately, the cost of the research to produce the medicine is a factor, and one should not be surprised that the prices of successful drugs try to recoup the costs of all the failed drug research, even if those costs could be seen as the price of the risk of doing business for the industry.

Apart from these approaches to drug cost control, there are opportunities to reduce costs within the healthcare system itself.

Improved cost control, value for money and better health outcomes are consequences of better management of medicines procurement, patient adherence, dispensing and waste reduction, and of reduced variation in prescribing practice.

These are processes and organisational interventions designed to enable improved professional practice through hospital formulary controls and best practice in medicines logistics. They make it possible to reduce prescribing variance, strengthen quality systems and improve patient acceptability while strengthening the foundations of professional practice.

The following “logic map” shows how this works:

A central feature of any high-performing healthcare system or organisation includes best-practice in medicines use and clinical management.

As all aspects of healthcare are under varying degrees of financial stress, cost controls and appropriate use of medicines are a legitimate focus of scrutiny to achieve the highest standards of clinical practice and safe patient care.

Failure to achieve clinical and managerial control of the use of medicines across the patient treatment pathway may arise from:

  • misuse of medicines (failure to prescribe when appropriate, prescribing when not appropriate, prescribing the wrong medicine, failure to reconcile medicines use across clinical hand-offs)
  • “clinical inertia” and failure to manage patients to goal (e.g. management of diabetes, and hypertension post aMI) [see for example: O’Connor PJ, Sperl-Hillen JM, Johnson PE, Rush WA, Blitz WAR, Clinical inertia and outpatient medical errors, in Henriksen K, Battles JB, Marks ES et al, editors, Advances in Patient Safety: From Research to Implementation Vol 2: Concepts and Methodology), Agency for Healthcare Research and Quality, 2005]
  • failure to use or follow best-practice and rational prescribing guidance
  • lack of synchronisation between the use of medicines (demand) and procurement (supply), with an impact on inventory management and
  • loss of cost control of the medicines budget.

The essential challenge is ensuring that the healthcare system and its constituent parts are fit for purpose to address and avoid these failures or at least minimise their negative impact.

Medicines costs are the fastest growing area of healthcare expenditure and a major constituent of patient treatment and recovery.

The cost of drug-related morbidity and mortality in the USA was described in 1995 [Johnson JA, Bootman JL. Drug-related morbidity and mortality: a cost-of-illness model. Arch Intern Med. 1995;155:1949-56], which costed the impact at $76.6 billion per year (greater than the cost of diabetes).

The study was repeated five years later [Ernst FR, Grizzle A, Drug-related morbidity and mortality: updating the cost of illness model, J Am Pharm Assoc. 2001;41(2)] and the costs had doubled.

And costs and use have continued to rise since then.

Evidence from a variety of jurisdictions suggests that the share of drugs within the total cost of illness can be substantial, for instance:

  • Atrial fibrillation: drugs accounted for 20% of expenditure [Wolowacz SE, Samuel M, Brennan VK, Jasso-Mosqueda J-G, Van Gelder IC, The cost of illness of atrial fibrillation: a systematic review of the recent literature, EP Europace (2011) 13(10):1375-1385]
  • Pulmonary arterial hypertension: drugs accounted for 15% in a US study [Kirson NY, et al, Pulmonary arterial hypertension (PAH): direct costs of illness in the US privately insured population, Chest, 2010; 138.]

There are upward pressures that increase costs, downward pressures that decrease costs and pressures that influence costs in either direction; the diagram illustrates a few:

Many of the drivers can be addressed through a combination of professional staff development, better use of information, particularly within decision-support systems to support guidelines and prescribing compliance, and organisational interventions.

An interventional strategy to manage medicines cost drivers involves a structured review of central drivers of drug cost and use within existing national or organisational priorities.

The range of possible solutions falls across a spectrum of interventions, and any or all of these are good starting points:

  1. development of drug use policies
  2. development of clinical policies, guidelines, and clinical decision-support algorithms
  3. drug-use evaluation studies
  4. clinical and medical audit
  5. cost-benefit studies
  6. professional development
  7. procurement effectiveness performance review
  8. patient treatment pathway analysis
  9. analysis of waste reduction opportunities
  10. management/organisational improvements to support appropriate behaviours.

Starting involves assessing the current state of these aspects and determining any gaps against national or organisational policy, or evidence-informed best practice. Measuring this gap, as a proxy for the necessary changes, becomes the focus, and requires evidence of current practice against the desired goal. In many cases, where systems are weak or poorly performing, a comprehensive root-and-branch review may be needed, with a corresponding impact on existing managerial, organisational and professional practice.

All healthcare systems and organisations are different, and whilst it is difficult to quantify the outcomes precisely in advance, organisations undertaking a sustained process of medicines review and optimisation should be able to release 10% or more of existing drug expenditure.

In organisations with a less well developed clinical pharmacy, where medicines information systems are immature and clinical guidance is not proceduralised, greater savings are likely, perhaps 25% or more, reflecting the possibility that the lack of information conceals upward cost drivers, masks inefficient medicines management, or hides evidence of misuse and waste.

In the longer run, healthcare organisations will need to ensure sustainability of any medicines optimisation review, by ensuring strong organisational structures, practices and behaviours. Development of these frameworks is an important by-product of medicines optimisation interventions, with a corresponding improvement in medicines safety.

Healthcare Cognology: autonomous agency for patient empowerment and system reform

Healthcare systems have been slowly evolving toward a model of care delivery that seeks to leave behind the traditional medical model, based on fighting diseases – sometimes called the lesion-theory of medicine – and which has driven health care thinking since the 1800s.

The direction of travel is toward a health ecology model, which conceives of healthcare as helping people live their lives well, seeing ill-health and disease within an ecology comprising the choices people make, the context in which they lead their lives and, importantly, the central role of the individual within that ecology in deciding how healthcare should be organised to help them lead that life. In that respect, the ecological model is more in tune with the real, complex nature of the world, with the various parts working together in a self-organising manner to achieve desired results. This contrasts with the never-ending top-down plans of state-run systems, which prefer central direction over harnessing the forces in society to drive improvements in quality, performance and outcomes.

Self-care has been the main policy response to the realisation of this complexity and we have examples such as the expert patient, patient activation, patient reported outcome measurement, disease or care management programmes, managed care, and health promotion and lifestyle programmes.

At present, many health systems and policymakers are focused on chronic ill-health or long-term conditions, which entail continuing healthcare requirements, perhaps over a lifetime, varying degrees of support, and funding needs perceived as unsustainable. Many long-term conditions arise in part from lifestyle choices, which explains the focus on engaging the patient in the care process: to ensure they are inclined to make the choices needed to avoid further exacerbations of their conditions, or indeed to avoid those conditions in the first place. Another goal of self-care is a policy-driven cost shift to the patient, using financial co-payments, for instance, to alter behaviour in the spirit of liberal paternalism.

The California Healthcare Foundation has stated [www.chcf.org/topics/health-it]: “information technology is still fairly new and untested in health care, making experimentation, analysis and evaluation critically important”.

We know technology helps to enable not just efficiencies and effectiveness, but also the greater personalisation of services – consumerisation. The impact of technology, therefore, includes, but is not limited to:

  1. breaking down (or disintermediating) processes to remove steps that do not add value to the end-user experience, or which have no useful role to play, despite being seen as current good practice by professionals; this can create novel service integration
  2. shifting skills toward customer-facing staff (e.g. consider how different banking has become)
  3. widening access for patients to hitherto restricted health information, including information on clinical performance. In some cases this has been mandated (such as public information on hospital performance) or has evolved in response to customer interest (such as health websites providing information and advice on health conditions)
  4. enabling organisations to create new ways to engage with the consumer or end-user more effectively in improving products and services than the traditional customer/supplier relationship.

A particular impact is relevant in healthcare, namely, moving knowledge across the boundaries of regulated professions (e.g. to imaging technologists from radiologists, from doctors to nurses).

Healthcare is highly controlled and the application and use of professional knowledge legally regulated. The effect of this has been to compartmentalise knowledge and skills within a broad hierarchy, with the doctor at the top, in effect, as the default health professional who supervises and validates the application of knowledge and skills by other professions. This, of course, is changing, partly as a response to the sheer complexity of healthcare and the levels of knowledge and skill involved, but also through new ways of working, in teams, and across organisational boundaries, with skilled nursing care facilities, polyclinics, etc. The patient, though, has not been an immediate beneficiary of this.

As knowledge has migrated away from people into devices, we have seen the invention of patient-use devices which in the past have required sophisticated testing and professional knowledge; an obvious example is the pregnancy testing kit, and many mice and rabbits are no doubt relieved at its invention.

Embedding knowledge in devices in healthcare, and thereby the potential of the internet of things within hospitals and for patients, unbundles knowledge cartels and redistributes that knowledge.

Putting knowledge into people means training them. It can shift knowledge to other professionals, as in interventional radiology (image-guided surgery), where surgeons interpret imaging results in theatre, replacing a separate radiologist. Knowledge can also be given to patients, often by simply enabling them to access more knowledge and insight; this has been a key impact of the internet, and it has raised many issues about the quality of health information online.

Knowledge can be put into devices, which can be used by patients and consumers, and where the device does what a health professional used to do. This is the artificial intelligence revolution.

Finally, technology can enable knowledge to be put into ‘systems’ to generally interact with people, such as in the home, or hospital, for instance; it is the embodiment of smart devices within systems that offers particular benefits.

The Internet of Things, for want of better terminology, can help achieve greater personalisation of service delivery and move toward such notions as the Smart Hospital and the Smart Home to support the Smart Consumer.

Why do we want this greater personalisation within a healthcare context? Because evidence demonstrates that customising services is effective – patient outcomes are improved, patient experience is positive, and the provider gets better value for money.

Personalisation has the potential to be enabled through autonomous agents acting on behalf of patients, enabling the patient/consumer to drive their preferences and choices, rather than these emerging through professional delegation or proxy interpretation. Is this Alexa or Google’s Assistant on steroids?

A vast array of device technologies are used in healthcare, particularly in hospitals, probably the most complex organisations in our society. A known priority within healthcare is to integrate the vast sea of information produced, whether conclusions by clinicians, activities of patients, the output of devices, or underlying information such as financial performance, inventory, or quality. Progress is slow and mixed.

E-health has largely failed to get substantial traction, either as a mode of service delivery or commercially, despite being seen as having considerable potential for enabling better linkages between the operational parts of the healthcare system and the patient. Despite evident progress, this is still a work in progress.

There are many approaches to integrating information across the information value chain, with the electronic health record (EHR) seen as key from a clinical perspective, along with opportunities for real-time monitoring of patients outside hospital through sensors, or for interacting with patients through video teleconferencing. Most countries are grappling with how to enable patient access to the EHR, with concerns around identity determination, privacy regulations and security being central; but this debate is being carried by the healthcare providers and their regulators, who see the EHR as belonging to them, not as something owned and controlled by the patient.

Electronic prescribing is seen as reducing medical errors and better correlating patient data with rational prescribing, but the benefits to patients are limited, in the main, to electronic delivery of the prescription to the pharmacy of their choosing, a choice that is already theirs and is not enhanced by e-prescribing itself. The benefits accrue instead to reduced processing time, or to commercial capture of the prescriptions themselves through co-location of pharmacies and prescribers, which in the end rather defeats the point from a patient perspective.

Other areas, such as care management programmes, use remote monitoring, SMS alerts and the like, but little of this is really new: they mainly automate existing activities and facilitate better communication.

Let’s consider starting in a different place.

I am mindful of underlying clinical requirements in the hospital, such as linking the dispensing of a medicine to a patient (informed through clinical decision-support prescribing systems and documented accordingly) with bed-side capabilities to ensure the right patient gets the right medicine, and linking that in turn back to batch control and inventory control, budgeting and procurement, not to mention links to quality assurance, audit and utilisation review. And should the patient react badly to the medicine, batch control can help identify any problems with the medicine itself, such as expiration date, or even whether it is counterfeit. How are we to design a system that seamlessly makes all this work?

I am starting with the relationship between the patient and the hospital (mindful that what we mean by hospital may well evolve over the next decade for other reasons), a relationship built on trust, and on service delivery, communication, treatment and information. Illustratively, a wireless world of healthcare is possible which respects this.

Autonomous agents and the next stage of evolution of the Internet of Things

"Cognology" is a term I coined to describe the evolution toward technologies with embedded intelligence. So what can the internet of things be in this context? I have adopted an operational definition of how the internet of things should work in healthcare from Kosmatos et al (2011):

“… a loosely coupled, decentralized system of smart objects—that is, autonomous physical/digital objects augmented with sensing, processing, acting and network capabilities.”

The implication of operationalising devices within a cognology that fits this definition is to alter our current notion of the internet of things from a cognitive perspective. That is to say, the 'thing-ness' of devices, which we perceive to be the interesting development, evolves as autonomous agents give these things functional purpose. In effect, this means moving from a view of the internet of things as bundles of technological capabilities toward a 'distributed cognitive system' [Tremblay 2005], defined by its ability to evolve and transform itself in response to changing circumstances rather than by a strict functional hierarchy.

Conversion of the internet of hospital things into the internet of self-care (or what might be thought of as ‘my things’), through autonomous agents bridges the gap between the hospital setting and the personal context (home, school, work, play), in effect by having the autonomous agents ‘repurpose’ the device.
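
As an illustration of the smart-object definition and of 'repurposing' by an autonomous agent, here is a minimal sketch; the device, readings, thresholds and goals are entirely hypothetical.

```python
# A smart object in the sense quoted above: an autonomous agent wrapping sensing,
# processing, acting and networking. Repurposing means swapping the goal it serves.
# Device names, readings and thresholds are invented for illustration.
import random

class SmartObject:
    def __init__(self, name: str, goal: dict):
        self.name = name
        self.goal = goal                          # the agent's current purpose

    def sense(self) -> float:
        return random.gauss(8.0, 2.0)             # stand-in for a real sensor reading

    def process(self, reading: float) -> bool:
        return self.goal["alert_if"](reading)     # interpret the reading against the goal

    def act(self, alert: bool) -> None:
        if alert:
            self.network_send(f"{self.name}: reading outside {self.goal['label']} range")

    def network_send(self, message: str) -> None:
        print("->", message)                      # stand-in for a network call

# The same glucose sensor serves a hospital ward protocol or a personal self-care
# goal simply by changing the goal it is given: the agent 'repurposes' the device.
ward_goal = {"label": "ward", "alert_if": lambda v: v > 10.0}
self_care_goal = {"label": "self-care", "alert_if": lambda v: v > 7.8}

device = SmartObject("glucose-sensor-01", ward_goal)
device.act(device.process(device.sense()))
device.goal = self_care_goal
device.act(device.process(device.sense()))
```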

In a wireless world, the individual is the focus of the cognological capabilities provided by smart device technologies. This achieves the additional benefit of shifting the focus away from technologies that can deliver this or that service, to the use of the information and its manipulation to achieve various goals.

I also think it is important to adopt Simon’s technological agnosticism, to ensure we are focused on results, rather than ‘things’ as such.

I think of this shift from technology to cognology as achieved in part through advances such as the potential of the internet of things, with the embedding of functional intelligence in devices transforming them from physical things into cognitive things.

In this respect, the internet of things is a misnomer.

The internet of hospital things

Healthcare technologies should have certain degrees of freedom:

  • of geography: home, hospital/clinic, ambulance, workplace, etc., to support location-independent care;
  • of intelligence: embedded 'intelligence' of one sort or another providing a constellation of capabilities, but perhaps most importantly a predictive and anticipatory capability;
  • of engagement: seeking out and exchanging information at various levels and in various forms with people (doctors, nurses, patients, carers, etc.), with processes (admission, discharge, alerting, quality monitoring, etc.) and with other objects (blood gas monitors, diabetic monitors, cardiac monitors).

I see the Internet of Things as a different approach which, when coupled with the use of autonomous agents, offers substantial opportunities to recast clinical processes, making the patient central to healthcare. This consumerist approach will render dated many e-health initiatives, for example, as well as the current approach to the use of EHRs.

References and Want to Know More?

Autonomous Agents and Multi-Agent Systems for Healthcare, Open Clinical, www.openclinical.org/agents.html#properties

Kosmatos EA, Tselikas ND, Boucouvalas AC, Integrating RFIDs and Smart Objects into a Unified Internet of Things Architecture, Advances in Internet of Things, 2011, 1, 5-12, doi: 10.4236/ait.2011.11002

Lehoux P, The Problem of Health Technology, Routledge, 2006.

Simon LD, NetPolicy.com: public agenda for a digital world, Woodrow Wilson Centre, 2000.

Storni C, Report on the "Reassembling Health Workshop: exploring the role of the internet of things", Journal of Participatory Medicine 2 (2010), www.jopm.org/media-watch/conferences/2010/09/29/report-on-the-reassembling-health-workshop-exploring-the-role-of-the-internet-of-things/

Tremblay M, Cognology in Healthcare: Future Shape of Health Care and Society, Human and Organisational Futures, London, 2005

Tremblay M, The Citizen is the Real Minister of Health: the patient as the most disruptive force in healthcare, Nortelemed Conference, Tromso, Norway, 2002.

Wireless World Research Forum (2001) Book of Visions 2001.

Want to know more? There are some diagrams I excluded which showed a schematic of the system at work.

9 Tribes of the Internet and their health interests

Discussions on health literacy are increasing as healthcare providers, clinicians, payers and patients consider what this means for healthcare. Having been involved in launching the world’s first digital interactive health channel in the UK in 2000, one thing I learned is not to assume that everyone is alike or has common interests.

Healthcare systems are poor at doing what retailers take for granted, namely segmenting their users. When we built the health channel, we worked with a simple framework drawing on the California HealthCare Foundation's report "Health E-People". This gave us a workable model of the different types of users and their different needs, and reminded us that in developing content and services for the Channel we needed to be mindful of these differences. More recent work by the Pew Internet Project identified the "9 Tribes of the Internet", revealing how different people interact with and use technology. Of course, segmentation can be quite elaborate, but at this stage we need a scaffold to guide our further understanding.

The main assumption we need to make about technology is how it will be used by people and thereby how this informs the adoption/diffusion process. Health and social care are traditionally “high touch” activities, given the way that knowledge has been organised, who knows it and how it is used. This, however, is being challenged by technologies that embody what traditionally has been found in the brains of specialist clinicians — what I call ‘cognologies’.

Increasingly we are seeing technological innovations that can embody both that knowledge (in decision algorithms, for instance) and that skill (in robotic devices and vision systems, for instance). Will people accept a shift toward high technology care at the expense of the traditional focus on care by humans? Is that an aesthetic preference (we like it), or might people come to prefer "lower touch" technologically enabled services if they are reliable and on demand?

As we think about this, I suggest the following as some thoughts for policy makers and care providers:

  1. Eventually, the individual will have to own, in some form, their own health record if much of the desired changes in patient behaviours are to be realised. This will lead to patients having a new understanding of information about themselves, and as such this information will need to be clear without mediation or interpretation by others. Patients will, therefore, become involved in decisions about what to do with their information, and with whom it is shared and used; for instance, use in databases whether in commercial or public organisations that will be accountable to the patient for the use of that information. The patient, as what I call the ‘auditor of one’ will come to take a keener interest in the accuracy of the health record and be less tolerant of mistakes or inaccuracies, as is the case in other areas (e.g. banking, credit scoring).
  2. Not everyone will be digitally enabled in the way technology pundits fantasise about. This is not a digital divide and is not evidence of social exclusion, but a personal choice by people to lead their lives as they wish in a pluralistic society; it may be that in the end we all become digital natives over time, but some will still be hold-outs, or 'islands in the net'. The key implication is that service providers will need to move, in some cases very slowly, to adopt technologies with some types of people. In time, people may adopt low-level access and interactivity, but for some, technological interactivity will remain at best an option rather than a preference within an evolving technological ecosystem. It remains to be seen whether this will continue to be the case; evidence from other technologies suggests not, and that in time technologies become broadly universally accepted, though not necessarily used in the same ways by everyone.
  3. The assessment of the benefits of technologies in the traditional health technology assessment [HTA] model will need to pay much greater attention to the segment of the population likely to be involved and the social context of that group, taking account of distinct patterns of use and preferences. This challenges the current paradigm used within HTA communities: one-size-fits-all HTA assessment will increasingly prove inadequate. It means that designing and implementing technologies will need to be far more flexible when it comes to the structure of service delivery, as the adoption/diffusion process itself will come to determine the socio-economic benefits. Consider that few today would subject the telephone to an impact assessment; it is now part of our expectations, and we should not be surprised if the same thing happens to evolving technologies in healthcare focused on use by consumers and patients.
  4. The tribes model suggests that not everyone will necessarily buy into the technology revolution. For many people, they work in care precisely because they want to have personal contact with people, and not through intermediating technologies. Since many patients also would have that preference, organisations may need to structure services and staffing to ensure the right mix of people to service the right publics. This will challenge approaches to the organisational design of service providers, in the main suggesting more pluralism in variety, scale and function.
  5. Patient compliance, concordance and adherence may become more dependent on the features of the technologies, their design and ease of use, than on the willingness of the patient to follow a particular care regime. Patients are deliberately non-adherent for many good reasons (some of which reflect fundamental flaws in the medicine itself, its delivery system, or its side-effects); accidental non-adherence is another matter, obviously. Helping people understand their limitations in using and working with technologies, as a matter of personal preference, will become very important, which increases the focus on personalisation.

It is common for health and social care systems, especially where the state is the main source of funding, to tend toward omnibus systems of service delivery, which have difficulty dealing with individual service preferences. Whether or not it is fully appreciated, such systems favour professional and provider interests and depend on proxy interpretations of patient preference. It would be a mistake to take a similar approach with technologies. Instead, we should be encouraging approaches that are sensitive to the preferences and usage patterns of individuals. In this way, too, we may actually see services being offered that people will value and use.

The 9 Tribes in Health

Background

The Pew Internet Project identified the "9 Tribes of the Internet" in a 2009 report [http://www.pewinternet.org/2009/06/10/the-nine-tribes-of-the-internet/], to ascertain how different people interact with and use technology. The California HealthCare Foundation, in its "Health E-People" report, identified three broadly defined populations: the well with an interest in health, the newly diagnosed, and those with long-term or chronic health conditions.

The Pew research was instructive in thinking about how people might deal with a more technologically enabled health and social care system. I have sketched out some relationships in the table, which gives an overview of the sorts of considerations that are likely to be relevant and important.

NOTE: This was first written in 2010, and updated in 2019.

Should robots pay taxes?

Andrew Yang is a Democratic contender for president of the United States. He has expressed concern about a 'jobless future', as Martin Ford puts it [Rise of the Robots: Technology and the Threat of a Jobless Future], arising from technological change, in particular from the application of artificial intelligence in the workplace, which may produce mass unemployment, maybe forever.

Yang is rightly worried about the jobless future and has proposed a Freedom Dividend, otherwise known as a universal basic income to deal with pending mass unemployment.

Two outcomes are possible: either technological change will be like it has been in the past, where even disruptive changes created new and different jobs in other parts of the economy, or this time it is different.

If it is indeed different this time, it is necessary to rethink our various assumptions on how our workforce is structured. I have proposed to use the term ‘cognology’ to describe the embedding of AI type capabilities into ‘things’ to create smart technologies, to emphasise the essentially cognitive nature of these new capabilities. This has certain consequences. Let’s have a quick look.

Abbott and Bogenschneider [Should robots pay taxes? Tax policy in the age of automation, Harvard Law and Policy Review, 12:2018, and with apologies for using their title] make the point that tax policy has focused on labour (the employment of people) and not on capital (the things people use). They write that the tax system breaks down when the labour is the capital. The important consideration, though, and this is to avoid taxing pencils, is that this applies only when the capital is a substitute for labour: pencils don't write by themselves.

They make the point that the tax system actually incentivises automation, because firms can replace humans with robots, and avoid the taxes.

From a policy analysis perspective (using the Wilson matrix), this makes firms free riders, as the wider costs of the labour displacement they create are not costs they incur directly; they are transferred to society as a whole. Given the tax base is likely to shrink with unemployment (as it does when employment drops anyway), governments will find it hard to finance these costs and will need to borrow against an uncertain future.

I would like to propose that robots and cognitive decision systems (i.e. software) that replace humans are actually a type of “labour substitution”, and firms should bear the costs of that substitution through a tax on these technologies.

If we start to think of these technologies as labour substitution, then we have a much larger frame for understanding the costs and benefits that arise from the technology. Search engine companies extract the value of the search but have not borne the costs of the librarians no longer needed. Yang wants to tax them, but this just creates a NIMBY situation and opens the door to tax avoidance. By casting the tax net widely, as the quasi-universal tax on employed labour does, paid in part by workers and in part by employers, we get closer to a more equitable and socially effective approach to taxing technology.

As always, the difficulty is in measuring the effort to be taxed: a salary is easy, but how much labour is in a decision support system that assists a doctor by scanning mammograms for tumours? Is it one radiologist equivalent, or many?
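
Purely to illustrate the measurement problem, here is a back-of-the-envelope sketch converting a system's throughput into a radiologist-equivalent and a payroll-style tax; every number in it is hypothetical.

```python
# Converting a decision-support system's throughput into a labour equivalent
# and a corresponding payroll-style substitution tax. All figures are invented.
system_reads_per_year = 250_000        # mammograms scanned by the system per year
radiologist_reads_per_year = 10_000    # reads one radiologist might handle per year
radiologist_salary = 120_000           # hypothetical annual salary
payroll_tax_rate = 0.15                # hypothetical combined employer/employee rate

labour_equivalent = system_reads_per_year / radiologist_reads_per_year   # radiologist-equivalents
substitution_tax = labour_equivalent * radiologist_salary * payroll_tax_rate

print(f"{labour_equivalent:.1f} radiologist-equivalents, tax = {substitution_tax:,.0f} per year")
# -> 25.0 radiologist-equivalents, tax = 450,000 per year
```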

The other thing to consider in an AI future is how to factor the labour substitution effect of cognologies into workforce planning. After all, we cannot have our already unreliable workforce planning made even more unreliable. Poor workforce estimates feed through to the production of graduates from universities and colleges, which may not take into account the work of intelligent machines. Perhaps these intelligent machines will be taking their online courses.

While Yang's position is reasonable, it is misguided. We really need to come to grips with technologies as a substitute for labour, and determine their labour-equivalent effect for taxation purposes. This would go some way toward determining what real workforce displacement is likely. Under that scenario, where cognologies are fully costed against labour, we may be better able to value the human condition rather than exploit its weaknesses.

So, what do you think? Should robots (and intelligent decision systems) pay taxes?

Payer decision making

The relevance of value in establishing the positioning of medicines is the new normal for pharmaceutical marketing. Pharmaceutical companies have customers who are highly constrained by whether healthcare system funding is sustainable in the long term. Remember, payers think epidemiologically and in multiple years of costed care, so industry needs to assess how product value can be understood in those terms. The pharmaceutical industry, for its part, is constrained by its ability to generate revenues from medicines sales to cover the costs of research and development.

These two collide in the decision making process to adopt, or not, a medicine. Payers broadly have to balance the sustainability of their budgets against a potentially innovative medicine that will improve care outcomes. Pharmaceutical companies have to construct the value case to demonstrate those outcomes. That probably means at least two things among many:

  1. Stop pricing drugs by the pill or pack, and start pricing valued outcomes for a defined set of patients over a number of treatment years, and
  2. Forget about trying to ‘time’ the market for product launch. The right time is set by payer budget cycles and their drug investment and disinvestment decisions. And, oh yes, the evidence.

By the way, my approach differs from the journey model of Ed Schoonveld in important respects, by identifying the structured, and gated, decision processes involved; that is why medicines aren't sold, but bought.

Let’s first look at the colliding priorities. The diagram shows that payers are concerned with the value of a medicine in minimising treatment risk for the treated population. A company is seeking the value of the medicine by maximising the size of the treatment population that they believe benefits. As you grow the treatable population beyond the evidence, risk rises; for payers, reducing that risk is addressed through evidence.


This is a collision between two notions of 'uncertainty' in decision making, and those on the industry side should be used to requests for more evidence and for novel access arrangements such as conditional reimbursement with evidence generation. As in any model of competing interests seeking a common price, the intersection of these two notions of uncertainty defines a price at which both parties agree that the price pays for the uncertainty it quantifies. The intersection quantifies risk, and sets the size of the treatment population that can benefit at that price.

The resulting curve may be thought of as the 'community effectiveness curve', depicting the optimal balancing of risk for the treatment community and serving as a proxy for price agreement along that curve. This, by the way, is a better way to identify price corridors, for those who still think that way.
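
To make the intersection idea concrete, here is a minimal numerical sketch; the curve shapes and every number in it are invented for illustration, not derived from the model described above.

```python
# Payer and company 'uncertainty' curves as a function of the treated population:
# the payer's maximum acceptable price falls as the population extends beyond the
# evidence base, while the company's required price falls with volume. Their
# crossing stands in for the agreed price and treatable population size.
import numpy as np

population = np.linspace(20_000, 60_000, 4_000)   # candidate treatment population sizes
evidence_base = 20_000                            # patients covered by trial evidence (hypothetical)

payer_max_price = 40_000 * (evidence_base / population) ** 2   # payers discount price as risk grows
company_min_price = 4e8 / population + 5_000                   # fixed R&D recovery plus a unit cost

i = np.argmin(np.abs(payer_max_price - company_min_price))     # closest approach = agreement point
print(f"agreed population ≈ {population[i]:,.0f}, price ≈ {payer_max_price[i]:,.0f}")
```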

This structured process is what this article is about.

Here is the gated decision process for payer decision making. While payers may not formally see themselves going through this in a linear way, they are thinking these thoughts, in this order.

Gated Payer Decision Making for Market Entry of New Medicines

From the payer perspective, information needs to be specific to the decision gate and having the wrong information at the wrong time (e.g. the right information at the wrong gate) will just frustrate folks and probably irritate decision makers.

The diagram is read left to right, and a ‘yes’ answer to a question is needed in order to move through the gate. Getting a ‘no’ means the information supplied failed to make the case.

The following is a quick tour of the underlying logic. By the way, I call this a gated process as there are criteria for satisfying the conditions for passing through the gate; it is, I believe, unhelpful to decision making to characterise them as hurdles, as this suggests they are imposed to make life difficult. They are, actually, simply the structure of decision making.

Looking at this from a behavioural perspective, i.e. psychology informing decision making, each gate means this:

  • To get through the first gate, the payer is confronted with existing treatment options and asks why do I need another, or why change? Unfamiliarity may also be at work, with novel treatment benefits that lack comparators. Evidence of unmet need might be helpful along with good epidemiology to demonstrate the possibility of better outcomes.
  • Satisfied that a new therapy may be warranted, there is the question of risk and benefit compared to current treatment. While a new therapy might be indicated (yours?), the associated risk may be unacceptable compared to not using it. The benefits really do have to hold up under increased uncertainty for a payer to accept increased treatment risk. I suggest this is where the discussion of standards of care begins to be quantified, having been introduced at the first gate. Payers are often not as aware as they should be of the current evidence on standards of care: misdiagnosis, medical error and patient dissatisfaction.
  • Then, having agreed that this uncertainty and its associated risk are acceptable, we are confronted with the cost and efficacy issue. Now we are beginning to price that risk. Good analysis of the costs of care and mis-care is useful, again because payers are often unaware of whole-system costs (i.e. the costs of a treatment pathway), either because they are using a fee schedule linked to a DRG-type classification or because they haven't proofed their capitation models.
  • Success in pricing that risk moves the question to the medicine in the context of total treatment costs: can the treatment costs for this patient population be managed, or will they scale to overwhelm the system relative to all other options? Companies may see themselves as just suppliers of medicines for a price, and not partners in the total system. But understanding the cost drivers along the whole treatment pathway, not just the costs a new medicine may drive, becomes an important element in final value pricing. If you have a medicine that reduces associated costs, or avoids certain costs (think the Triple Aim here), then the determinants of value are much clearer. A biomarker, for instance, may be a value-add, but only if it reduces medical error and misdiagnosis without increasing costs, so precision patient identification becomes important. If you've got this far, though, you'll have already shown you can demarcate the treatment population, including the responder subset, with a degree of precision.
  • Finally, the payer thinks about the future and whether there will be new medicines coming along that might address the same treatment population, alter risk differently, improve outcomes, avoid costs, with better patient adherence, and so on. Given, broadly, a medicine is alone in its treatment class for months, rather than years, payers may choose to delay decision making or consider options you’ve ignored that may trade off future costs and present priorities. This may be where a payer will be thinking disinvestment or product substitution and the determinants of that are critical in this final phase. Here’s a scenario: Why might a particular medicine not be a preferred medicine on a hospital formulary? The answer is simple: don’t have production problems where supply cannot be guaranteed. The lesson is that this is where the long game gets played out.

For those of you who have read Kahneman's "Thinking, Fast and Slow", or similar, there are decisional heuristics at work here. Across that gated process, you are contending not just with highly structured, evidence-informed quantitative information, but also with how humans can be influenced by how they think they think. This brings in a raft of factors such as confirmation bias, hyperbolic discounting, choice overload, loss aversion, the endowment effect, anchoring, mental accounting and social proof. It will pay to be attentive to when you present what information and to the frame of mind decision makers are in. The reason this is important is that regulators and payers in different countries, hospitals or regions can make different decisions from the same evidence, so something else is going on.

And so, a comment on pricing. To short-circuit this challenging gated process, it is common simply to cut the price, i.e. discount. Discounting is a quick win that only works if payers are trying to reduce present costs, which they all are. However, payers with their eye on the future are more likely to be interested in pricing arrangements that address uncertainty over time, and so will be amenable to arrangements such as coverage with evidence development or outcomes guarantees. If they are focused on whole-system issues, they will be interested in care pathway (cohort/whole-system) pricing, for instance. If, though, future costs are the priority, think about capitation arrangements or simple price/volume agreements, but be mindful that this last is like selling products door-to-door in the 1950s.

I happen to think care pathway pricing of carefully demarcated patient populations with costs taken over say 5 years is a better pricing model for both parties. Value can be demonstrated on both sides along with evidence of such things as improved adherence (to reduce waste by non-responders) or diagnostic decision support aids to address misdiagnosis and sources of medical error or reduce time to the correct diagnosis, in the case of rare diseases for instance.
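As a back-of-envelope illustration of the difference, with all figures invented, compare conventional per-pack revenue with a pathway price paid only for demarcated responders over five years:

```python
# Back-of-envelope comparison: per-pack pricing vs a 5-year pathway
# price paid per responder. All figures are invented for illustration.
cohort = 2_000            # demarcated patient population
responder_rate = 0.60     # share who actually benefit
packs_per_year = 12
price_per_pack = 400
years = 5

# Conventional model: paid for every pack dispensed, responder or not.
per_pack_revenue = cohort * packs_per_year * price_per_pack * years

# Pathway model: a fixed annual price per responder, set here so the payer
# spends the same total but only on patients who benefit.
pathway_price_per_responder_year = per_pack_revenue / (cohort * responder_rate * years)

print(f"Per-pack spend over {years} years: £{per_pack_revenue:,.0f}")
print(f"Equivalent pathway price: £{pathway_price_per_responder_year:,.0f} per responder per year")
print(f"Spend on non-responders avoided: £{per_pack_revenue * (1 - responder_rate):,.0f}")
```

The totals are constructed to be equal; the real negotiation is over who carries the responder-identification risk, which is why precision patient identification keeps appearing in the gates above.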

This article is designed to emphasise product value determination under conditions of uncertainty to arrive at a sustainable long-term relationship.

A model for mapping machine learning onto human decision making

The AI agenda is, for me, all about augmenting human reasoning; what I call cognology (a cognitive focus) to distinguish it from technology (a physical focus). This is the core challenge to workflow and adoption.

Here are some thoughts on applying John Boyd's OODA model from military decision making to healthcare decision making.

Boyd developed OODA to characterise decision making by fighter pilots who must react quickly. Success lay in cycling through this more quickly than the opponent.

OODA stands for Observe, Orient (interpret), Decide (from options), Act. The faster a person can work through that cycle, the faster evidence is interpreted and decisions are made.

Artificial Intelligence has a role in each of these steps. It becomes quite important to know where to focus AI capabilities, what operational benefits flow from that and indeed what the wider impact of AI in clinical reasoning might be.

At root, that means being clear about what aspect of human reasoning is being addressed by AI and where in the decision making process.
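One way to be that explicit, sketched below with purely illustrative labels of my own, is to annotate each OODA stage with what the AI does there and whether a clinician stays in the loop:

```python
# Illustrative mapping of OODA stages to AI roles. The stage contents
# are examples, not a catalogue; "human_in_loop" marks where a
# clinician still owns the step.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    ai_role: str
    human_in_loop: bool

ooda = [
    Stage("Observe", "aggregate records, imaging and labs into features", True),
    Stage("Orient",  "risk scoring, e.g. flagging a possible rare disease", True),
    Stage("Decide",  "rank options such as referral for genetic testing", True),
    Stage("Act",     "auto-generate the referral; the riskiest step to automate", False),
]

for s in ooda:
    loop = "clinician in the loop" if s.human_in_loop else "human OUT of the loop"
    print(f"{s.name:8} | {s.ai_role:55} | {loop}")
```

Laid out like this, it becomes obvious where 'augment' quietly becomes 'replace'.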

What we're seeing with AI, and what has caused the most concern for critics, is the risk that AI's significant augmentation of human reasoning along the OODA process could in the end replace humans. My view is that we need to know where the AI augments and how, and where the AI replaces and why.

A worrisome example is AI in combat, with autonomous and semi-autonomous drones, the former having the capability of acting without human intervention: humans are "out of the loop". Healthcare, too, offers the potential for clinicians to be "out of the loop", and in the absence of clinicians adopting augmented reasoning, the AI could dominate by default.

Boyd’s model looks like this:

[Diagram: Boyd's OODA model]

The AI computational models are very good at dealing with the complexity of decision making illustrated here. I'd suggest much AI is still at the first two O's: computational modelling of tumours, for instance, and suggesting where the highest risk lies. We are beginning to see the D being addressed when clinicians are presented with treatment options (such as referral of a patient with a hitherto unknown diagnosis for genetic testing, where not referring was the default clinical decision; this is related to work I'm involved with on patient finding and undiagnosed rare conditions). Much AI has helped with OOD. It is the A that is the coming challenge, and which has the potential to take humans 'out of the loop' and allow the AI to determine actions, e.g. automatically having the patient referred.

The reason this matters is that clinical processes involve prediction about what health outcomes will be obtained from what treatment intervention. Here's an example: AI is outperforming clinicians in diagnosis (judged on ROC figures). The prediction models I'm working with for identification of patients with rare diseases operate at an ROC of about 0.9; when clinicians review the output as part of augmented reasoning, the ROC rises to over 0.97, suggesting near certainty of a rare disease diagnosis. At present, patients with rare diseases wait an average of 7 years for a correct first diagnosis and may see as many as 20 different clinicians on that journey. AI cuts that to 'hours' and far fewer wasted clinical encounters. This means the OODA cycle becomes more precise and much quicker from the patient's perspective.
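For readers who want to see what such figures mean mechanically, here is a toy calculation on synthetic data (not my models, data or results) showing how adding a second, partly independent review signal can lift the area under the ROC curve:

```python
# Toy illustration of AUROC on synthetic data; nothing here reflects
# the actual rare-disease models or their performance figures.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
has_disease = rng.random(n) < 0.02                 # rare condition, ~2% prevalence

# A model score that separates cases from non-cases imperfectly.
model_score = rng.normal(loc=has_disease * 1.8, scale=1.0)

# Clinician review as a second, partly independent signal; combining
# the two stands in for "augmented reasoning".
clinician_score = rng.normal(loc=has_disease * 1.2, scale=1.0)
combined = 0.6 * model_score + 0.4 * clinician_score

print("Model alone AUROC:", round(roc_auc_score(has_disease, model_score), 3))
print("Model + review AUROC:", round(roc_auc_score(has_disease, combined), 3))
```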

UK and the EU: Brexit as failed ideology

Dr Tim Oliver posted on the LSE Blog a thoughtful item on the various ways to understand the negotiation structure of Brexit [link to item]

He puts forward four key ones, and what I want to do is briefly comment on each.

Neoclassical Realism: This is about power relationships. The UK's position within the EU has been weakened by its role as a naysayer of much of the European agenda. Externally, it is a full member of the UN Security Council and a member of NATO, but both of these are immaterial to the Brexit outcome. As cards to play, they carry very little weight in negotiations: for the UK to abrogate its security responsibilities or use them as a bargaining chip would actually signal weakness, and NATO would come to see the UK as an unreliable partner willing to trade collective security for self-interest. As a global power in its own right, I suspect the evolution will be continuing geopolitical decline and loss of global influence. While we may see new alliances, for a realist the international anarchy of inter-state relationships will become a factor in dealing with the EU, and the UK will be the weaker for opting out of power relationships in favour of a delusional view of national power.

Constructivism: This is about norms and rules. The Brexit leave logic is that the UK can forge new relationships more productively outside the EU than within. Trade is a proxy for the power of nations to abide by norms or construct rules. For a nation among many, trade migrates to the larger blocs and single actors take what they can get. The UK will become a rule-taker outside the EU. The test will be the deal with the EU. If the UK can't agree a good deal with the EU, that would signal the UK can't be negotiated with unless it gets its own way. This is of course silly logic at one level, since the UK is leaving a trading bloc where it was a rule maker. Only fools and deluded politicians believe rule taking is preferable.

Bureaucratic politics: This is about the behaviours of bureaucratic systems. The UK has viewed the Brussels bureaucracy in some respects as a distraction from domestic affairs. The EU relationship was managed through the "Foreign and Commonwealth Office", a strong clue to how the EU was viewed (viz. foreign). In terms of civil servants building careers, postings in Brussels were not seen as career enhancing (unlike working for the Home Office, for instance); this led to very good individuals pursuing careers at the Commission to the detriment of their domestic career progression. Indeed, expertise in European matters was frequently dismissed. This sorry state of affairs played out through the removal or departure of key individuals with expertise in European affairs. That they might have gone 'native' is a concern all governments have and is one reason diplomats are routinely rotated. But the EU requires deep expertise, both because it is a unique body of law and because the UK was a key actor in that system. I suspect the current negotiations are being handled badly partly because the UK team lacks that 'native' understanding; this may explain why the government is afraid of civil servants with strong EU views; like Orwell's 1984, this doesn't fit the mindset in government. The consequence is failure for the UK arising more from incompetence than from bad bargaining.

Cognitivism: This is about ideas and mindsets. The UK has seen the EU as simply a trade arrangement, consistent with years of free trading. The EU sees itself as an idea, in the same way the US sees itself as an ideology. There is nothing wrong with that. The weakness is that the UK defines itself through trade and not as a national idea called the UK; indeed, it is not sufficient to argue that the UK's ideology rests on notions of sovereignty and taking back control, as this flies in the face of the fact that all nations choose to constrain themselves through treaties of one sort or another; what Brexit does signal is that the UK can abrogate a treaty obligation and may be prima facie unreliable. The Brexit debate has shown how poorly prepared the UK politicians on the government side are, and how they actively avoid discussing the social dimension of the EU; indeed, they look very uncomfortable discussing the rights of 3 million EU citizens within the UK. Social Europe is made up of academic networks amongst research institutions, of families brought together across borders, of young people experiencing another culture through Erasmus exchanges, even of duty free wine and beer, freedom to travel, and the security the European Health Insurance Card brings, and so on. As an ideology, the UK dismisses this as a 'project' and emphasises that money matters more than people. Barnier and colleagues emphasise the primacy of people. This is consistent with the ideological basis for the EU's bargaining position. The result is incomprehension by the UK of the EU position, while the EU knows the UK position well, as it has played out over 40 years of opposition to social Europe.

From a decision making perspective, I concur with Oliver that each in some way is being played out. The salience of the various issues is rising for those who voted in the referendum, revealing problems that were indeed well known beforehand, by experts of course. Rising public salience will constrain politicians' actions as technical issues evolve into political ones. For instance, cross-border access to healthcare (Regulation 1408/71) is full of technical details, but its public salience will be the loss of healthcare cover when people travel. The departure of the EMA from the UK looks like a technical issue of moving offices, but its salience lies in drug companies deprioritising the country for launching new medicines, with a possible diminution of research infrastructure. Inside each technical issue that can be hammered out by civil servants lurks a political issue that can only be resolved through public discussion.

What Cognology would say.

Intelligent application of game theory in complex areas such as Brexit would have revealed that there are, or were, more options than assumed. The driving anti-intellectual logic of "red lines", which signal boundaries within a negotiation before it has really begun, is always a bad thing; in the case of Brexit, it probably guarantees a bad outcome, at least for the UK. I think smarter negotiating would have done a better job early on of modelling or gaming the likely scenarios. What we are left with is political egos, hardly something noted for intelligence.
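A toy example of what gaming the scenarios might look like, with entirely made-up payoffs and no claim about the actual positions: even a 2x2 matrix shows how committing to a rigid posture before seeing the other side's best response can land both parties in a worse cell.

```python
# Entirely made-up payoffs for illustration of scenario gaming; not a
# claim about actual negotiating positions. Higher is better.
# Keys: (UK strategy, EU strategy); values: (UK payoff, EU payoff).
payoffs = {
    ("flexible", "flexible"):  (3, 3),
    ("flexible", "hard"):      (1, 4),
    ("red lines", "flexible"): (4, 1),
    ("red lines", "hard"):     (0, 2),
}

uk_moves = ["flexible", "red lines"]
eu_moves = ["flexible", "hard"]

# Best response for each side given the other's move.
for eu in eu_moves:
    best_uk = max(uk_moves, key=lambda m: payoffs[(m, eu)][0])
    print(f"If the EU plays '{eu}', the UK's best response is '{best_uk}'")
for uk in uk_moves:
    best_eu = max(eu_moves, key=lambda m: payoffs[(uk, m)][1])
    print(f"If the UK plays '{uk}', the EU's best response is '{best_eu}'")
```

With these invented numbers the EU's best response is 'hard' whatever the UK does, so a UK commitment to red lines ends in the worst cell for both sides; the exercise, not the numbers, is the point.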

Intelligent medicines optimisation

A central feature of any high-performing healthcare system or organisation is best practice in medicines use and management. As all aspects of healthcare are under varying degrees of financial stress these days, cost controls and the appropriate use of medicines must support the highest standards of clinical practice and safe patient care.

Medicines optimisation is one such strategy, since the use of medicines influences the quality of healthcare across the whole patient treatment pathway.

Failure to optimise the use of medicines across this pathway may arise from:

  • misuse of medicines (failure to prescribe when appropriate, prescribing when not appropriate, prescribing the wrong medicine, failure to reconcile medicines use across clinical hand-offs);
  • "clinical inertia" and failure to manage patients to goal (e.g. management of diabetes and hypertension post-AMI) [O'Connor PJ, Sperl-Hillen JM, Johnson PE, Rush WA, Blitz WAR. Clinical inertia and outpatient medical errors. In: Henriksen K, Battles JB, Marks ES, et al., editors. Advances in Patient Safety: From Research to Implementation. Vol 2: Concepts and Methodology. Agency for Healthcare Research and Quality, 2005];
  • failure to use or follow best-practice and rational prescribing guidance;
  • lack of synchronisation between the use of medicines (demand) and procurement (supply), with an impact on inventory management and
  • loss of cost control of the medicines budget.

The essential challenge is ensuring that the healthcare system and its constituent parts are fit for purpose to address and avoid these failures or at least minimise their negative impact.

Medicines costs are the fastest growing area of healthcare expenditure, and medicines are a major constituent of patient treatment and recovery.

The cost of drug-related morbidity and mortality was quantified in 1995 [Johnson JA, Bootman JL. Drug-related morbidity and mortality: a cost-of-illness model. Arch Intern Med. 1995;155:1949-56], putting the impact in the USA at $76.6 billion per year (greater than the cost of diabetes).

The study was repeated five years later [Ernst FR, Grizzle A, Drug-related morbidity and mortality: updating the cost of illness model, J Am Pharm Assoc. 2001;41(2)] and the costs had doubled.

Evidence from a variety of jurisdictions suggests that the share of drugs within the total cost of illness can be substantial, for instance:

  • Atrial fibrillation: drugs accounted for 20% of expenditure [Wolowacz SE, Samuel M, Brennan VK, Jasso-Mosqueda J-G, Van Gelder IC. The cost of illness of atrial fibrillation: a systematic review of the recent literature. EP Europace. 2011;13(10):1375-1385]
  • Pulmonary arterial hypertension: drugs accounted for 15% in a US study [Kirson NY, et al, Pulmonary arterial hypertension (PAH): direct costs of illness in the US privately insured population, Chest, 2010; 138.]

Upward pressures on the medicines budget include:

  • medicines with new indications (be careful, some of this is an artefact of drug regulation gamed by manufacturers)
  • changes in clinical practice which have an uplift effect on medicines use (especially if guidelines are poorly designed)
  • increasing the number of prescribers (keep in mind that prescribers are cost-drivers)
  • medicines for previously untreated conditions (this trades-off with reduced costs in misdiagnosis, mis-/delayed treatment)
  • therapeutic improvements over existing medicines, and
  • price increases (think of monopoly generic manufacturers, for instance).

Downward pressures include:

  • effective procurement methods (e.g. avoid giving winners of tenders ‘the whole market’ and ensure that rules enable generic competition)
  • use of drug and therapeutic committees and drug review processes (it is all about knowing where the money goes for improving value)
  • use of prescribing and substitution guidelines e.g. generic substitution (oh yes, enforcing it, too; it also helps to ensure OTC medicines are not reimbursed by insurance as this adds to competitive pricing pressure and improves patient choices)
  • positive and negative hospital formularies (yes, hard choices)
  • pro-active clinical pharmacy services engaged in both business and professional domains (this means ensuring the expertise of pharmacists is central to decision-making), and
  • reduction of waste (you don’t want to know how much drug waste there is but estimates are up to 30% of expenditure is waste).

Additional sources of pressure in either direction, pulled together in a short sketch after this list, come from:

  • population case-mix (that means paying attention to the health of the nation)
  • changing prevalence and incidence over time (also paying attention to the determinants of ill-health, particularly avoidable causes and effects by age cohorts)
  • performance and efficiency of clinical workflow across the patient pathway (this is where money gets wasted at light speed and where it can also be saved; clinicians are in control of workflow so engaging them in areas where they can make a difference matters a lot)
  • medicines payment and reimbursement practices, including patient co-payments where they exist and the structure of hospital budgets or financing (do we want to discuss the unintended and perverse consequences of the payment system?), and
  • healthcare system regulations (yes, where many problems are caused in the first place).
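One simple way to make these pressures operational, using figures I have invented purely for illustration, is to decompose next year's medicines budget into the named drivers and see what the net effect looks like:

```python
# Invented figures: decompose medicines budget growth into the drivers
# listed above so each pressure has somewhere to land.
baseline_budget = 100_000_000

drivers = {
    "price increases (incl. monopoly generics)":          +0.020,  # upward, price effect
    "new indications / previously untreated conditions":  +0.030,  # upward, volume effect
    "more prescribers / guideline uplift":                 +0.015,  # upward, volume effect
    "generic substitution and tendering":                  -0.025,  # downward, price effect
    "waste reduction and formulary management":            -0.020,  # downward, volume/mix effect
}

growth = sum(drivers.values())
projected = baseline_budget * (1 + growth)

for name, effect in drivers.items():
    print(f"{name:52} {effect:+.1%}")
print(f"Net growth {growth:+.1%}; projected budget £{projected:,.0f}")
```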

What Cognology says.

Many of these drivers can be addressed through a combination of professional staff development, better use of information (particularly within decision-support systems that support guidelines and prescribing compliance), and organisational interventions.
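As a closing illustration of that decision-support point, here is a minimal, hypothetical prescribing check; the formulary, the substitution map and the rule are invented, but the shape (a guideline encoded as data and checked at the point of prescribing) is the part that matters.

```python
# Minimal, hypothetical prescribing decision-support check. Formulary
# and substitution map are invented; the point is guideline-as-data
# checked at the moment of prescribing.
FORMULARY = {"amoxicillin", "metformin", "atorvastatin"}
GENERIC_OF = {"Lipitor": "atorvastatin", "Glucophage": "metformin"}

def check_prescription(drug: str) -> str:
    generic = GENERIC_OF.get(drug, drug).lower()
    if drug in GENERIC_OF:
        return f"Substitute brand '{drug}' with generic '{generic}'"
    if generic not in FORMULARY:
        return f"'{drug}' is not on the formulary: flag for pharmacist review"
    return f"'{drug}' OK to prescribe"

for rx in ["Lipitor", "amoxicillin", "sildenafil"]:
    print(check_prescription(rx))
```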