Category Archives: Strategy

Innovation and Academic Health Science Centres: some policy thinking

To some extent Academic Health Science Centres [AHSCs] are caught between the research push and market pull.

If they prioritise technology transfer, they opt for a research push approach that emphasises the availability of technologies or innovations to market-based actors on the basis of potential commercial application. Because the university mission within an AHSC will emphasise the quality of the technology rather than end-user or market benefits, this approach fails to address the adoption opportunities available from the healthcare service mission.

The primary weakness of ‘research push’ is that the acceptance of new technologies generally depends more on social and cultural factors within clinical communities than on the merits of the technology itself.

On the other hand, the transformation of research into innovations that can be used to solve problems facing practitioners and patients is ‘market pull’, or perhaps more precisely ‘solution’ pull. Internal research and development communities of an AHSC need to be closely linked to the problems faced by practitioners and patients.

However, researchers often lack the inclination to pursue the innovation exploitation agenda. Indeed, a focus on adoption and the translation of research arises precisely because research productivity has in the past been favoured over solving real-world problems. In healthcare, the problems needing solving are swamped by a vast sea of research, and many governments continue to fund research while wondering why they slide down the innovation scale – they value academic citations over patents, for instance.

Taking these different configurations of AHSCs into account, the organisational options also need to consider how innovations move from bench to bedside. A “gated”1 scientific and market-based review process with in-house industry expertise and a network of extramural experts for assessments would create a degree of granularity, enabling assessment of benefit from the initial insight (pure research) through translation to end-user benefits. Some well-known institutions use gated processes to filter research to identify innovations, or to assess market-readiness of research for commercialisation.

In the absence of internal gated processes, institutions may use external expertise. One example is to out-source the technology transfer process to market-facing intermediaries or commercialisation agents. This brings knowledge of markets through the retained third party. As the appetite for risk in life sciences among the private equity and venture capital communities fluctuates over time, specialist groups are more likely to emerge to act as agents to commercialise intellectual property. Many universities have structured such relationships with firms that develop their intellectual property on a licensing basis, so the commercial benefits accrue to that firm and less to the university, which may get royalties.

AHSCs, though, combine universities and hospitals, and so they need to harmonise differing conditions of employment where possible, as these firms could be seen as exploiting these administrative challenges. The technology transfer route, which is very widely used, takes the context for commercial and entrepreneurial exploitation away from the AHSC. This means that AHSCs will fail to build internal capacity to assess intellectual property and know what to do with it.

An out-sourcing approach is useful when there is little internal interest or, more likely, ability in commercialisation. The critical question for any AHSC is to make sure it has a view of the value of its own work. An intellectual property audit, for instance, is often necessary. If the AHSC lacks the ability to understand this, that would be evidence that it has failed to internalise the clinical service priorities of the hospital partner.

A risk facing any AHSC is being measured by an innovation metric that tracks spin-outs, licences, or patents (rather than papers published or citations), as this may encourage premature commercial activity, which can take the form of single-technology companies (“one-trick ponies”). These types of start-up generally have high failure rates and longer times to market; they may also be encouraged by a preference for simple licensing deals, reflecting commercial naivete, impatience or lack of interest (which could be evidence of complacency). Regrettably, metrics such as these can actually hamper the realisation of the translational research agenda and produce less innovation, since they measure the wrong thing – companies rather than solutions. A gated process would, or at least should, determine whether this were happening.

AHSCs with a strong entrepreneurial perspective may choose to develop their own venture funds to develop and ‘de-risk’ innovations to make them attractive for subsequent acquisition. This is an attractive option as it also encourages the development of a domestic market in which to invest. Whole countries, too, have pursued venture funds as a national strategy with more failure than success.2

The US dominates life sciences research for a variety of reasons. One of these may lie in institutional flexibility tied to the market, which may also be a partnering factor for AHSCs. Many well-established and highly differentiated not-for-profit research commercialisation institutes have been formed in the US, such as MITRE (from MIT) and SRI (from Stanford University), or endowed facilities have been created, such as Battelle and Howard Hughes Medical Institute, while others have emerged as specialists at exploiting particular knowledge domains, such as the Santa Fe Institute for complexity or chaos theory. Comparable facilities in other countries that are separate from government are few, as most are tied in one form or another to state paymasters, such as Germany’s Fraunhofer and its constituent institutes.

In many countries, one must wonder why there is not greater entrepreneurialism in creating novel organisational entities to take forward innovation agendas, as what such institutions offer is a better way of bringing innovations to market. The challenge for policymakers, though, is distinguishing between enabling such institutions to exist and function within a mandate with a high level of autonomy, versus government determining what they do.

In the end, the options for AHSCs may be constrained by the public funding rules and have little to do with innovation itself.

Setting the policy agenda for AHSCs

Underpinning the notion of an AHSC as a nexus of innovation is whether such a nexus will attract talent and entrepreneurial zeal. Obviously a process of development, extending perhaps over a number of years, may be necessary.

A policy agenda for AHSCs would entail thinking about the following:

Should life science research funding be aligned to favour AHSC-type arrangements?

Obviously this would lead to non-AHSCs losing funding, as well as encouraging migration of researchers toward AHSCs. This may not be compatible with national policy goals on local employment and wealth creation based on simplistic notions of clustering.

We need to be mindful that AHSCs can quite easily poach talent, as they are more likely to offer superior opportunities, so care is needed to ensure that other centres do not suffer a skills drain as the most talented people are attracted to the special opportunities AHSCs might provide. Increasingly, “brain circulation” (as talented people move from country to country and perhaps back home again) describes how researchers move about, rather than the narrow and parochial “brain drain”. AHSCs are better positioned than other sectors to exploit the global mobility of talent, while research funding and innovators forum-shop for the most favourable locations – an AHSC needs to be seen as a most-favoured location.

Should the funding and performance management of higher education and healthcare systems take account of AHSCs’ tripartite mission?

Since they are expensive and disproportionately consume the healthcare expenditure budget, they may need to be judged by different performance standards. It might be better that AHSCs are accredited or recognised through explicit criteria rather than a system of self-certification.

Health professions education in traditional teaching hospitals should be replaced by AHSC-supervised training arrangements; the logic here is to ensure that students have access to the best and most appropriate clinical learning opportunities, within structured “clinical teaching” centres in healthcare providers. That hospitals are monopoly suppliers of clinical placements limits training opportunities, but a focus on quality should prune that tree.

In addition, this would enable greater career mobility between academe and clinical service, even if such mobility challenged academic appointment criteria, or public sector employment requirements. Enabling greater flexibility here could encourage more entrepreneurs without losing them from training as well as create greater visibility of the value of entrepreneurialism within professional training.

Are current national restrictions on the ownership or management of hospitals and universities hampering the development of AHSCs?

In Europe, universities and hospitals would benefit from new ways of organising their interconnected missions, but there is much to be done to understand how they are evolving and what national forces are shaping them, as they are, in the main, subject to the will of the state.

Investigation is needed to identify the performance, role and function of AHSCs in Europe, and to understand whether they are in fact a nexus of innovation or a quagmire of bureaucratic interference, as this could be a rate-limiting factor in innovation development. The generally poor performance of European universities in international rankings may suggest the latter, and a misuse of public money.

The potential scope of AHSCs comprises innovations in technologies impacting clinical care (software, medical devices, medicines), and ways of working (demarcation of health professions, clinical workflow). It is necessary to review relevant policy environments to learn at least [1] whether policies enable or inhibit high performing AHSCs where they exist, [2] whether policies inhibit AHSCs coming into existence, and [3] whether policies have perverse consequences on research and innovation production.

What is the best way to design and constitute an AHSC?

The preferences outlined here seek to understand the form/function balance, but we need more empirical evidence within the models to assess whether there is a critical size below which an AHSC may be ineffective in terms of mission attainment. Size alone may not be as important as the ability to align the various components as needed which is more a function of autonomy.

Nevertheless, size does matter to the extent that a small dysfunctional academic/hospital network or partnership will only become a small dysfunctional AHSC. This gives us one reason we need something better than sui generis self-certification as claims of excellence need evidence.

Notes

1 A ‘gated’ review process involves assessing an innovation at different stages using specific evaluative criteria. Failing to pass a gate is fatal at that stage, so the process passes through to the end innovations that have passed all the gates (which may also be thought of as filters). Gated innovation processes are used by scientifically-oriented organisations such as NASA and military defence agencies. A gated process must have a failure regime to be meaningful, which has consequences for performance assessment of research productivity.
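As a rough sketch, the gated process described in this note can be modelled as an ordered list of pass/fail filters applied to each candidate innovation. The gate names, fields and thresholds below are purely illustrative assumptions, not a prescribed set of criteria:

```python
# Each gate is a (name, predicate) pair applied in sequence.
# Failing any gate is fatal at that stage; only innovations that
# clear every gate emerge from the process.
GATES = [
    ("scientific merit", lambda inn: inn["evidence_quality"] >= 3),
    ("translational potential", lambda inn: inn["solves_clinical_problem"]),
    ("market readiness", lambda inn: inn["readiness_level"] >= 6),
]

def run_gated_review(innovations):
    """Apply each gate in order; record the gate at which a candidate
    fails. Returns (survivors, failures) where failures maps each
    failed candidate's name to the gate that stopped it."""
    survivors, failures = [], {}
    for inn in innovations:
        for gate_name, passes in GATES:
            if not passes(inn):
                failures[inn["name"]] = gate_name  # fatal at this stage
                break
        else:
            survivors.append(inn)  # cleared all gates
    return survivors, failures
```

Recording where each candidate fails matters for the failure regime mentioned above: a gate that never fails anything is not doing its job, and the failure record is what makes that visible.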

2 See as an example: D Senor, S Singer, Start-up Nation: the story of Israel’s economic miracle. Council on Foreign Relations/Hachette, 2009. This, though, needs to be balanced against the more cautionary perspective on the improper role of government in commercialisation in J Lerner, Boulevard of Broken Dreams: why public efforts to boost entrepreneurship and venture capital have failed and what to do about it. Princeton, 2009. A useful comparison of venture funding relevant to this discussion is J Lerner, Y Pierrakis, L Collins, AB Biosca, Atlantic Drift: venture capital performance in the UK and the US, NESTA, June 2011.

Managerial control of medicines cost drivers

It is not unreasonable to have concerns about the cost of medicines.

Drug costs are usually influenced by government policies on pricing and reimbursement of medicines themselves. These range from simple discount seeking to more complex approaches such as conditional approvals, and value-based pricing (perhaps a subject for another posting). These can achieve a measure of drug cost control, but may also distort the market of medicines themselves.

For instance, tendering for generic medicines can sometimes lead to unacceptable consequences, such as unexpected product substitution by suppliers, patient and clinician confusion as medicines change appearance, and complications in medicines management for pharmacists. And a ‘winner take all’ award of contract can mean that the losers exit the market, removing a source of price competition and choice for consumers and governments. This is an unintended but avoidable consequence of using this crude procurement instrument.

Regulation and health technology assessment together pose challenges to the free pricing of medicines, but it is unsurprising that medicines should be subject to some assessment of efficacy and performance in the real world, and not just on the results of clinical trial evidence from a highly selected study population. HTA has also thrown into the spotlight the logic by which drug prices are established by the pharmaceutical industry. This scrutiny is not a bad thing, as it highlights the methodologies used and whether they accurately produce a price reflecting the value of the medicine as used. Separately, the cost of the research to produce the medicine is a factor, and one should not be surprised that the prices of successful drugs should try to recoup the costs of all the failed drug research, even if those costs could be seen as the price of the risk of doing business for the industry.

Apart from these approaches to drug cost control, there are opportunities to reduce costs within the healthcare system itself.

Improved cost control, value for money and better health outcomes are consequences of better management of medicines procurement, patient adherence, dispensing and waste reduction, and reduction in variations in prescribing practices.

These are process and organisational interventions designed to enable improved professional practice through hospital formulary controls and best practice in medicines logistics. They make it possible to reduce prescribing variance, strengthen quality systems and improve patient acceptability, whilst strengthening the foundations of professional practice.

The following “logic map” shows how this works:

A central feature of any high-performing healthcare system or organisation includes best-practice in medicines use and clinical management.

As all aspects of healthcare are under varying degrees of financial stress, cost controls and appropriate use of medicines are a legitimate focus of scrutiny to achieve the highest standards of clinical practice and safe patient care.

Failure to achieve clinical and managerial control of the use of medicines across the patient treatment pathway may arise from:

  • misuse of medicines (failure to prescribe when appropriate, prescribing when not appropriate, prescribing the wrong medicine, failure to reconcile medicines use across clinical hand-offs)
  • “clinical inertia” and failure to manage patients to goal (e.g. management of diabetes, and hypertension post-AMI) [see for example: O’Connor PJ, Sperl-Hillen JM, Johnson PE, Rush WA, Blitz WAR, Clinical inertia and outpatient medical errors, in Henriksen K, Battles JB, Marks ES et al, editors, Advances in Patient Safety: From Research to Implementation Vol 2: Concepts and Methodology), Agency for Healthcare Research and Quality, 2005]
  • failure to use or follow best-practice and rational prescribing guidance
  • lack of synchronisation between the use of medicines (demand) and procurement (supply), with an impact on inventory management and
  • loss of cost control of the medicines budget.

The essential challenge is ensuring that the healthcare system and its constituent parts are fit for purpose to address and avoid these failures or at least minimise their negative impact.

Medicines costs are the fastest growing area of expenditure and comprise a major constituent of patient treatment and recovery.

The cost of drug-related mortality and morbidity in the USA was first described in 1995 [Johnson JA, Bootman JL. Drug-related morbidity and mortality: a cost of illness model. Arch Int Med. 1995;155:1949-56], which costed the impact at $76.6 billion per year (greater than the cost of diabetes).

The study was repeated five years later [Ernst FR, Grizzle A, Drug-related morbidity and mortality: updating the cost of illness model, J Am Pharm Assoc. 2001;41(2)] and the costs had doubled.

And costs and use have continued to rise since then.

Evidence from a variety of jurisdictions suggests that drugs can be a substantial share of the total cost of illness, for instance:

  • Atrial fibrillation: drugs accounted for 20% of expenditure [Wolowacz SE, Samuel M, Brennan VK, Jasso-Mosqueda J-G, Van Gelder IC, The cost of illness of atrial fibrillation: a systematic review of the recent literature, EP Europace, 2011;13(10):1375-1385]
  • Pulmonary arterial hypertension: drugs accounted for 15% in a US study [Kirson NY, et al, Pulmonary arterial hypertension (PAH): direct costs of illness in the US privately insured population, Chest, 2010; 138.]

There are upward pressures that increase costs, downward pressures that decrease costs and pressures that influence costs in either direction; the diagram illustrates a few:

Many of the drivers can be addressed through a combination of professional staff development, better use of information, particularly within decision-support systems to support guidelines and prescribing compliance, and organisational interventions.

An interventional strategy to manage medicines cost drivers involves a structured review of central drivers of drug cost and use within existing national or organisational priorities.

The range of possible solutions falls across a spectrum of interventions, and any or all of these are good starting points:

  1. development of drug use policies
  2. development of clinical policies, guidelines, and clinical decision-support algorithms
  3. drug-use evaluation studies
  4. clinical and medical audit
  5. cost-benefit studies
  6. professional development
  7. procurement effectiveness performance review
  8. patient treatment pathway analysis
  9. analysis of waste reduction opportunities
  10. management/organisational improvements to support appropriate behaviours.

The starting point is to assess the current state of these aspects and determine any gaps with national or organisational policy, or evidence-informed best practice. Measurement of this gap then becomes the focus, as a proxy measure of the necessary changes, and requires evidence of current practice against the desired goal. In many cases, where systems are weak or poorly performing, a comprehensive root-and-branch review may be needed, with a corresponding impact on existing managerial, organisational and professional practice.

All healthcare systems and organisations are different, and whilst it is difficult to quantify the outcomes precisely in advance, organisations undertaking a sustained process of medicines review and optimisation should be able to release 10% or more of existing drug expenditure.

In organisations with a less well-developed clinical pharmacy, where medicines information systems are not well developed, and where clinical guidance is not proceduralised, greater savings are likely, perhaps 25% or more, reflecting the possibility that the lack of information conceals upward drivers of costs, masks inefficient medicines management, or hides evidence of misuse and waste.
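To make the two savings bands concrete, the arithmetic can be sketched as follows; the bands are the estimates given above and the spend figure is a made-up example, not a forecast:

```python
def projected_savings(annual_drug_spend, maturity):
    """Illustrative savings bands: a sustained medicines review and
    optimisation process should release roughly 10% of drug expenditure
    in mature systems, and 25% or more where clinical pharmacy,
    medicines information and proceduralised guidance are less
    developed. These rates are the article's estimates."""
    rates = {"mature": 0.10, "less_developed": 0.25}
    return annual_drug_spend * rates[maturity]

# A hypothetical organisation spending 40 million a year on medicines:
projected_savings(40_000_000, "mature")          # roughly 4 million
projected_savings(40_000_000, "less_developed")  # roughly 10 million
```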

In the longer run, healthcare organisations will need to ensure sustainability of any medicines optimisation review, by ensuring strong organisational structures, practices and behaviours. Development of these frameworks is an important by-product of medicines optimisation interventions, with a corresponding improvement in medicines safety.

Healthcare Cognology: autonomous agency for patient empowerment and system reform

Healthcare systems have been slowly evolving toward a model of care delivery that seeks to leave behind the traditional medical model, based on fighting diseases – sometimes called the lesion-theory of medicine – and which has driven health care thinking since the 1800s.

The direction of travel is toward a health ecology model, which conceives of healthcare as helping people live their lives well, seeing ill-health and disease within an ecology comprising the choices they make, the context in which they lead their lives and, importantly, the central role of the individual within that ecology in deciding how to organise healthcare to help them lead this life. In that respect, the ecological model is more in tune with the real, complex nature of the world, with the various parts working together in a more self-organising manner to achieve desired results. This contrasts with the never-ending top-down plans of state-run systems, which favour central direction over harnessing the forces in society to drive quality and performance improvement of outcomes.

Self-care has been the main policy response to the realisation of this complexity and we have examples such as the expert patient, patient activation, patient reported outcome measurement, disease or care management programmes, managed care, and health promotion and lifestyle programmes.

At present, many health systems and policymakers are focused on chronic ill-health or long-term conditions, which entail continuing healthcare requirements, perhaps over the lifetime of individuals, requiring varying degrees of support and bringing perceived unsustainable funding needs. Many long-term conditions arise in part from lifestyle choices, which explains the focus on engaging the patient in the care process: to ensure that they are inclined to make the necessary choices to avoid further exacerbations of their conditions, or indeed to avoid these conditions in the first place. Another goal of self-care is to achieve a policy-driven cost-shift to the patient user, exploiting financial co-payments, for instance, to alter behaviour, in the spirit of liberal paternalism.

The California Healthcare Foundation has stated [www.chcf.org/topics/health-it]: “information technology is still fairly new and untested in health care, making experimentation, analysis and evaluation critically important”.

We know technology helps to enable not just efficiencies and effectiveness, but also the greater personalisation of services – consumerisation. The impact of technology, therefore, includes, but is not limited to:

  1. breaking down (or disintermediating) processes to remove steps that do not add value to the end-user experience, or which have no useful role to play, despite being seen as current good practice by professionals; this can create novel service integration
  2. shifting skills toward customer-facing staff (e.g. consider how different banking has become)
  3. widening public access to hitherto restricted health information to patients, including information on clinical performance. In some cases, this has been mandated (such as public information on hospital performance) or has evolved in response to customer interest (such as health websites providing information and advice on health conditions)
  4. enabling organisations to create new ways to engage with the consumer or end-user more effectively in improving products and services than the traditional customer/supplier relationship.

A particular impact is relevant in healthcare, namely moving knowledge across the boundaries of regulated professions (e.g. from radiologists to imaging technologists, from doctors to nurses).

Healthcare is highly controlled and the application and use of professional knowledge legally regulated. The effect of this has been to compartmentalise knowledge and skills within a broad hierarchy, with the doctor at the top, in effect, as the default health professional who supervises and validates the application of knowledge and skills by other professions. This, of course, is changing, partly as a response to the sheer complexity of healthcare and the levels of knowledge and skill involved, but also through new ways of working, in teams, and across organisational boundaries, with skilled nursing care facilities, polyclinics, etc. The patient, though, has not been an immediate beneficiary of this.

As knowledge has migrated away from people into devices, we have seen the invention of patient-use devices which in the past have required sophisticated testing and professional knowledge; an obvious example is the pregnancy testing kit, and many mice and rabbits are no doubt relieved at its invention.

Embedding knowledge in devices in healthcare, and thereby the internet of things within hospitals and for patients, unbundles knowledge cartels and redistributes that knowledge.

Putting knowledge into people means training them, and it can shift knowledge to other professionals, such as in interventional radiotherapy (image-guided surgery), whereby surgeons interpret imaging results in theatre, replacing a separate radiologist. Knowledge can also be given to patients, often simply by enabling them to have access to more knowledge and insight; this has been a key impact of the internet, and one which has raised many issues around the quality of health information online.

Knowledge can be put into devices, which can be used by patients and consumers, and where the device does what a health professional used to do. This is the artificial intelligence revolution.

Finally, technology can enable knowledge to be put into ‘systems’ to generally interact with people, such as in the home, or hospital, for instance; it is the embodiment of smart devices within systems that offers particular benefits.

The Internet of Things, for want of better terminology, can help achieve greater personalisation of service delivery and move toward such notions as the Smart Hospital and the Smart Home to support the Smart Consumer.

Why do we want this greater personalisation within a healthcare context? Because evidence demonstrates that customising services is effective – patient outcomes are improved, patient experience is positive, and the provider gets better value for money.

Personalisation has the potential to be enabled through autonomous agents acting on behalf of patients, enabling the patient/consumer to drive their preferences and choices, rather than these emerging through professional delegation or proxy interpretation. Is this Alexa or Google’s Assistant on steroids?

A vast array of device technologies are used in healthcare, particularly in hospitals, probably the most complex organisations in our society. A known priority within healthcare is to integrate the vast sea of information produced, whether conclusions by clinicians, activities of patients, the output of devices, or underlying information such as financial performance, inventory, or quality. Progress is slow and mixed.

E-health has largely failed to get substantial traction, either as a mode of service delivery or commercially, despite being seen as having considerable potential for enabling better linkages between the operational parts of the healthcare system and the patient. Despite evident progress, this is still a work in progress.

There are many approaches to integrating information across the information value chain, with the electronic health record (EHR) seen as key from a clinical perspective, along with opportunities for real-time monitoring of patients outside hospital through sensors, or interacting with patients through video teleconferencing. Most countries are grappling with how to enable patient access to the EHR, with concerns around identity determination, privacy regulations and security being central; but this debate is being carried by the healthcare providers and their regulators, who see the EHR as belonging to them, and not as something owned and controlled by the patient.

Electronic prescribing is seen as reducing medical errors and better correlating patient data with rational prescribing, but the benefits to patients are limited, in the main, to electronic delivery of the prescription to the pharmacy of their choosing – a choice that is already theirs and is not enhanced by e-prescribing itself. The benefits here accrue through reduced processing time, or through commercial capture of the prescriptions themselves via co-location of pharmacies and prescribers, which in the end rather defeats the point from a patient perspective.

Other areas, such as care management programmes, use remote monitoring, SMS alerts and so on, but little of this is really new, as they mainly automate existing activities and facilitate better communication.

Let’s consider starting in a different place.

I am mindful of underlying clinical requirements in the hospital, such as linking the dispensing of a medicine to a patient (informed through clinical decision-support prescribing systems and documented accordingly) with bed-side capabilities to ensure the right patient gets the right medicine, and linking that in turn back to batch control and inventory control, budgeting and procurement, not to mention links to quality assurance, audit and utilisation review. And should the patient react badly to the medicine, batch control can help identify any problems with the medicine itself, such as expiration date, or even whether it is counterfeit. How are we to design a system that seamlessly makes all this work?
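One way to picture the linkage just described is as a minimal traceability sketch, connecting a dispensing event to the patient, the prescription and the batch, so that an adverse reaction or recall can be traced back through the chain. All record types and field names here are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Batch:
    """A manufacturing batch, the anchor for recalls, expiry checks
    and counterfeit investigation."""
    batch_no: str
    drug: str
    expiry: date

@dataclass
class DispensingEvent:
    """One bedside dispensing: links patient, prescription and batch."""
    patient_id: str
    prescription_id: str
    batch: Batch
    dispensed_on: date

def affected_patients(events, batch_no):
    """Given a recalled or suspect batch, find every patient who
    received medicine from it."""
    return [e.patient_id for e in events if e.batch.batch_no == batch_no]

def expired_at_dispensing(event):
    """Flag a dispensing event where the batch was past its expiry."""
    return event.dispensed_on > event.batch.expiry
```

The same event records could feed inventory, budgeting and utilisation review, which is the point: one linked chain of data serving several of the requirements listed above.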

I am starting with the relationship between the patient and the hospital (mindful that perhaps what we mean by hospital will evolve over the next decade for other reasons), a relationship, built on trust, and on service delivery, communication, treatment, and information. Illustratively, a wireless world of healthcare is possible, which respects this.

Autonomous agents and the next stage of evolution of the Internet of Things

“Cognology” is a term I coined to describe the evolution toward technologies with embedded intelligence. So what can the internet of things be in this context? I have adopted the operational definition of how the internet of things should work in healthcare from Kosmatos et al 2011:

“… a loosely coupled, decentralized system of smart objects—that is, autonomous physical/digital objects augmented with sensing, processing, acting and network capabilities.”

The implication of operationalising devices within a cognology, and fitting this definition, is to alter our current notion of the internet of things from a cognitive perspective. That is to say, the ‘thing-ness’ of the devices that we perceive to be the interesting development evolves as autonomous agents give functional purpose to these things. In effect this means moving from a view of the internet of things as bundles of technological capabilities toward a ‘distributed cognitive system’ [Tremblay 2005], defined by its ability to evolve and transform itself in response to changing circumstances, rather than a strict functional hierarchy.

Conversion of the internet of hospital things into the internet of self-care (or what might be thought of as ‘my things’) through autonomous agents bridges the gap between the hospital setting and the personal context (home, school, work, play), in effect by having the autonomous agents ‘repurpose’ the device.

In a wireless world, the individual is the focus of the cognological capabilities provided by smart device technologies. This achieves the additional benefit of shifting the focus away from technologies that can deliver this or that service, to the use of the information and its manipulation to achieve various goals.

I also think it is important to adopt Simon’s technological agnosticism, to ensure we are focused on results, rather than ‘things’ as such.

I think of this shift from technology to cognology as achieved in part through advances such as the potential of the internet of things, with the embedding of functional intelligence in devices transforming them from physical things into cognitive things.

In this respect, the internet of things is a misnomer.

The internet of hospital things

Healthcare technologies should have certain degrees of freedom:

of geography: in terms of home, hospital/clinic, ambulance, workplace, etc. to support location-independent care;

of intelligence: embedded ‘intelligence’ of one sort or another providing a constellation of capabilities, but perhaps most importantly a predictive and anticipatory capability;

of engagement: seeking out and exchanging at various levels and in various forms with people (doctors, nurses, patients, carers, etc.), with processes (admission, discharge, alerting, quality monitoring, etc.) and with other objects (blood gas monitor, diabetic monitor, cardiac monitor).
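As a sketch, these three degrees of freedom can be modelled as properties of a smart object. Everything here (the class name, fields and the `repurpose` method) is a hypothetical illustration of the idea, not a reference design:

```python
from dataclasses import dataclass, field

@dataclass
class SmartDevice:
    """Illustrative 'internet of hospital things' object with three degrees of freedom."""
    name: str
    locations: set = field(default_factory=set)     # freedom of geography
    can_predict: bool = False                       # freedom of intelligence
    engages_with: set = field(default_factory=set)  # freedom of engagement

    def repurpose(self, new_location: str) -> None:
        # An autonomous agent 'repurposes' the device for a new setting,
        # e.g. converting a hospital thing into one of 'my things'.
        self.locations.add(new_location)

monitor = SmartDevice(
    "cardiac monitor",
    locations={"hospital"},
    can_predict=True,
    engages_with={"nurse", "patient", "blood gas monitor"},
)
monitor.repurpose("home")  # hospital thing becomes a self-care thing
```

The design point is that location-independence is a behaviour added by the agent, not a fixed attribute of the device.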

I see the Internet of Things as a different approach which, when coupled to the use of autonomous agents, offers substantial opportunities to recast clinical processes, making the patient central to healthcare. This consumerist approach will render dated many e-health initiatives, for example, as well as the current approach to the use of EHRs.

References and Want to Know More?

Autonomous Agents and Multi-Agent Systems for Healthcare, Open Clinical, www.openclinical.org/agents.html#properties

Kosmatos EA, Tselikas ND, Boucouvalas AC, Integrating RFIDs and Smart Objects into a Unified Internet of Things Architecture, Advances in Internet of Things, 2011, 1, 5-12, doi: 10.4236/ait.2011.11002

Lehoux P, The Problem of Health Technology, Routledge, 2006.

Simon LD, NetPolicy.com: public agenda for a digital world, Woodrow Wilson Centre, 2000.

Storni C, Report on the “Reassembling Health Workshop: exploring the role of the internet of things”, Journal of Participatory Medicine 2(2010), www.jopm.org/media-watch/conferences/2010/09/29/report-on-the-reassembling-health-workshop-exploring-the-role-of-the-internet-of-things/

Tremblay M, Cognology in Healthcare: Future Shape of Health Care and Society, Human and Organisational Futures, London, 2005

Tremblay M, The Citizen is the Real Minister of Health: the patient as the most disruptive force in healthcare, Nortelemed Conference, Tromso, Norway, 2002.

Wireless World Research Forum (2001) Book of Visions 2001.

Want to know more? There are some diagrams I excluded which showed a schematic of the system at work.

9 Tribes of the Internet and their health interests

Discussions on health literacy are increasing as healthcare providers, clinicians, payers and patients consider what this means for healthcare. Having been involved in launching the world’s first digital interactive health channel in the UK in 2000, one thing I learned is not to assume that everyone is alike or has common interests.

Healthcare systems are poor at doing what retailers take for granted, namely the segmentation of their users. When we did the health channel, we worked with a simple framework drawing on work by the California HealthCare Foundation in their report “Health E-People”. This gave us a workable model of the different types of users and their different needs, and showed that in developing content and services for them through the Channel we needed to be mindful of this. More recent work by the Pew Internet Project has identified the “9 Tribes of the Internet”, revealing how different people interact with and use technology. Of course, segmentation can be quite elaborate, but at this stage we need a scaffold to guide our further understanding.

The main assumption we need to make about technology is how it will be used by people and thereby how this informs the adoption/diffusion process. Health and social care are traditionally “high touch” activities, given the way that knowledge has been organised, who knows it and how it is used. This, however, is being challenged by technologies that embody what traditionally has been found in the brains of specialist clinicians — what I call ‘cognologies’.

Increasingly we are seeing technological innovations that can embody both that knowledge (in decision algorithms, for instance) and that skill (in robotic devices and vision systems, for instance). Will people accept a shift toward high technology care at the expense of healthcare’s traditional focus on care by humans? Is that an aesthetic preference (we like it), or might people come to prefer “lower touch” technologically-enabled services if they are reliable and on-demand?

As we think about this, I suggest the following as some thoughts for policy makers and care providers:

  1. Eventually, the individual will have to own, in some form, their own health record if much of the desired changes in patient behaviours are to be realised. This will lead to patients having a new understanding of information about themselves, and as such this information will need to be clear without mediation or interpretation by others. Patients will, therefore, become involved in decisions about what to do with their information, and with whom it is shared and used; for instance, use in databases whether in commercial or public organisations that will be accountable to the patient for the use of that information. The patient, as what I call the ‘auditor of one’ will come to take a keener interest in the accuracy of the health record and be less tolerant of mistakes or inaccuracies, as is the case in other areas (e.g. banking, credit scoring).
  2. Not everyone will be digitally enabled in the way technology pundits fantasise about. This is not a digital divide and is not evidence of social exclusion, but a personal choice of people to lead their lives as they wish in a pluralistic society; it may be that in the end we all end up as digital natives over time, but some will still be hold-outs, or ‘islands in the net’. The key implication here is that service providers will need to move, in some cases very slowly, to adopt technologies with some types of people. In time, perhaps, people may adopt low-level access and interactivity, but for some people technological interactivity will remain at best an option, not a preference, within an evolving technological ecosystem. It remains to be seen whether this will continue to be the case; evidence from other technologies suggests not: in time, technologies are broadly universally accepted, but not necessarily used in the same ways by everyone.
  3. The assessment of benefits of technologies in the traditional health technology assessment [HTA] model will need to pay much greater attention to the segment of the population likely to be involved and the social context of that group, taking account of distinct patterns of use and preferences. This challenges the current paradigm used within HTA communities. The conclusion is that one-size-fits-all HTA assessment will increasingly prove inadequate. This means that designing and implementing technologies will need to be far more flexible when it comes to the structure of service delivery, as the adoption/diffusion process itself will come to determine the socio-economic benefits. Consider that few today would subject the telephone to an impact assessment; it is now part of our expectations, and we should not be surprised if the same thing happens to evolving technologies in healthcare focused on use by consumers and patients.
  4. The tribes model suggests that not everyone will necessarily buy into the technology revolution. For many people, they work in care precisely because they want to have personal contact with people, and not through intermediating technologies. Since many patients also would have that preference, organisations may need to structure services and staffing to ensure the right mix of people to service the right publics. This will challenge approaches to the organisational design of service providers, in the main suggesting more pluralism in variety, scale and function.
  5. Patient compliance, concordance and adherence may become more dependent on the features of the technologies, their design and ease of use, than on the willingness of the patient to follow a particular care regime. Patients are deliberately non-adherent for many good reasons (some of which reflect fundamental flaws in the medicine itself, its delivery system, or side-effects). Accidental non-adherence is another matter, obviously. Helping people understand their limitations in using and working with technologies as a matter of personal preference will become very important, which increases the focus on personalisation.

It is common for health and social care systems, especially where the state is the main source of funding, to tend toward omnibus systems of service delivery, which have difficulty dealing with individual service preferences. Whether or not it is fully appreciated, such systems favour professional and provider interests and depend on proxy interpretations of patient preference. It would be a mistake to assume a similar approach with technologies. Instead, we should be encouraging approaches which are sensitive to the preferences and usage patterns of individuals. In this way, too, we may actually see services being offered that people will value and use.

The 9 Tribes in Health

Background

Pew Internet Project identified the “9 Tribes of the Internet” in a report in 2009 [http://www.pewinternet.org/2009/06/10/the-nine-tribes-of-the-internet/], to ascertain how different people interact with and use technology. The California HealthCare Foundation, in its “Health E-People” report, identified three broadly defined populations: the well with an interest in health, the newly diagnosed, and those with long-term or chronic health conditions.

The Pew research was instructive in thinking about how people might deal with a more technologically enabled health and social care system. I have sketched out some relationships in the table which gives an overview of the sort of considerations that are likely relevant and important.

NOTE: This was first written in 2010, and updated in 2019.

Should robots pay taxes?

Andrew Yang is a Democratic contender for president of the United States. He has expressed concern about a ‘jobless future’, as Martin Ford puts it [Rise of the Robots: Technology and the Threat of a Jobless Future], from technological change, in particular changes coming from the application of artificial intelligence in the workplace, which may produce mass unemployment, maybe forever.

Yang is rightly worried about the jobless future and has proposed a Freedom Dividend, otherwise known as a universal basic income to deal with pending mass unemployment.

Two outcomes are possible: technological change will be like it has been in the past, where even disruptive changes have created new and different jobs in other parts of the economy, or this time it is different.

If it is indeed different this time, it is necessary to rethink our various assumptions on how our workforce is structured. I have proposed to use the term ‘cognology’ to describe the embedding of AI type capabilities into ‘things’ to create smart technologies, to emphasise the essentially cognitive nature of these new capabilities. This has certain consequences. Let’s have a quick look.

Abbott and Bogenschneider [Should robots pay taxes? Tax policy in the age of automation, Harvard Law and Policy Review, 12:2018, and with apologies for using their title] make the point that tax policy has focused on labour (employment of people) and not on capital (the things people use). They write that the tax system breaks down when the labour is capital. The important consideration, though, and this is to avoid taxing pencils, is that this applies when the capital is a substitution of the labour — pencils don’t write by themselves.

They make the point that the tax system actually incentivises automation, because firms can replace humans with robots, and avoid the taxes.

From a policy analysis perspective (using the Wilson matrix), this makes firms free riders, as the wider costs of the labour displacement they create are not direct costs they incur. These costs are transferred to society as a whole. Given the tax base is likely to shrink from unemployment (as it does when employment drops anyway), governments will find it hard to finance these costs and will need to borrow against an uncertain future.

I would like to propose that robots and cognitive decision systems (i.e. software) that replace humans are actually a type of “labour substitution”, and firms should bear the costs of that substitution through a tax on these technologies.

If we start to think of these technologies as labour substitution, then we have a much larger frame to understand the costs and benefits that arise from the technology. Search engine technology companies extract the value of the search, but have not borne the costs of unneeded librarians. Yang, though, wants to tax them, but this just creates a NIMBY situation and opens the door to tax avoidance. By casting the tax net widely, as the quasi-universal tax on employed labour does, paid in part by workers and in part by employers, we get closer to a more equitable and socially effective approach to taxing technology.

As always, the difficulty is in the measurement of the effort to tax: a salary is easy, but how much labour is in a decision support system that assists a doctor by scanning mammograms for tumours? Is it one radiologist equivalent, or many?
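To make that measurement question concrete, here is a hypothetical sketch of taxing a cognology by its labour equivalent. The salary, payroll-tax rate and radiologist-equivalents figures are all invented for illustration, not policy recommendations:

```python
def labour_substitution_tax(fte_equivalents: float,
                            reference_salary: float,
                            payroll_tax_rate: float) -> float:
    """Tax a cognology as if it were the payroll of the workers it substitutes."""
    return fte_equivalents * reference_salary * payroll_tax_rate

# A mammogram-scanning decision-support system estimated to do the screening
# work of 3 radiologists, at a reference salary of 100,000 and a 12% payroll tax:
tax_due = labour_substitution_tax(fte_equivalents=3,
                                  reference_salary=100_000,
                                  payroll_tax_rate=0.12)
print(tax_due)  # 36000.0
```

The hard policy problem is, of course, the first argument: estimating `fte_equivalents` defensibly.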

The other thing to consider in an AI future is how to factor into workforce planning the labour substitution effect of cognologies themselves. After all, we cannot have our already unreliable workforce planning made even more unreliable. Poor workforce estimates feed through to the production of graduates from universities and colleges, which may not take into account the work of intelligent machines. Perhaps these intelligent machines will be taking their online courses.

While Yang’s position is reasonable, it is misguided. We really need to come to grips with technologies as a substitute for labour, and determine their labour-equivalent effect for taxation purposes. This will go some way to determining what real workforce displacement is likely. It may be that under that scenario, where cognologies are fully costed against labour, we may be better able to value the human condition, rather than exploit its weaknesses.

So, what do you think? Should robots (and intelligent decision systems) pay taxes?

Payer decision making

The relevance of value in establishing the positioning of medicines is the new normal for pharmaceutical marketing. Pharmaceutical companies have customers who are highly constrained by whether healthcare system funding is sustainable long term. Remember, payers think epidemiologically and in multiple years of costed care so industry needs to assess how that can be understood for product value. The pharmaceutical industry is constrained by its ability to generate revenues from medicines sales to cover the costs of research and development.

These two collide in the decision making process to adopt, or not, a medicine. The payers broadly have to balance the sustainability of their budgets with a potentially innovative medicine that will improve care outcomes. The pharmaceutical companies have to construct the value case to demonstrate these care outcomes. That probably means at least two things among many:

  1. Stop pricing drugs by the pill or pack, and start pricing valued outcomes for a defined set of patients over a number of treatment years, and
  2. Forget about trying to ‘time’ the market for product launch. The right time is set by payer budget cycles and their drug investment and disinvestment decisions. And, oh yes, the evidence.

By the way, my approach does differ from the journey model of Ed Schoonveld in important respects, by identifying the structured, and gated, decision processes involved; that’s why medicines aren’t sold, but bought.

Let’s first look at the colliding priorities. The diagram shows that payers are concerned with the value of a medicine in minimising treatment risk for the treated population. A company seeks the value of the medicine by maximising the size of the treatment population it believes benefits. As the treatable population grows beyond the evidence, risk rises; for payers, reducing that risk is addressed through evidence.

[Diagram omitted]

This is a collision of notions of ‘uncertainty’ in decision making, and folks on the industry side should be used to requests for more evidence and novel access arrangements such as conditional reimbursement with evidence generation, and so on. As in any model of competing interests seeking a common price, the intersection of these two notions of uncertainty defines a price at which both parties agree that the price pays for the uncertainty it quantifies. The intersection quantifies risk and sets the size of the treatment population that can benefit at that price.

The resulting curve may be thought of as the ‘community effectiveness curve’, depicting the optimal balancing of risk for the treatment community and a proxy for price agreement along that curve. This, by the way, is a better way to identify price corridors for people who still think that way.
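A toy numerical sketch of that intersection, with both curves given invented linear forms purely to illustrate the idea (real curves would come from evidence and budget modelling):

```python
def payer_acceptable_price(population: float) -> float:
    # Payers accept a lower price as treatment risk rises with population size.
    return 100.0 - 0.5 * population

def company_required_price(population: float) -> float:
    # Companies can accept a lower unit price as treated volume grows.
    return 20.0 + 0.3 * population

# The two notions of uncertainty meet where the curves cross:
# 100 - 0.5n = 20 + 0.3n  ->  n = 80 / 0.8 = 100
n = (100.0 - 20.0) / (0.5 + 0.3)
price = payer_acceptable_price(n)
print(n, price)  # 100.0 50.0
```

The crossing point is the sketch's analogue of the agreed price and the size of the treatment population that can benefit at that price.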

This structured process is what this article is about.

Here is the gated decision process for payer decision making. While payers may not formally see themselves going through this in a linear way, they are thinking these thoughts, in this order.

[Diagram: Gated Payer Decision Making for Market Entry of New Medicines]

From the payer perspective, information needs to be specific to the decision gate and having the wrong information at the wrong time (e.g. the right information at the wrong gate) will just frustrate folks and probably irritate decision makers.

The diagram is read left to right, and a ‘yes’ answer to a question is needed in order to move through the gate. Getting a ‘no’ means the information supplied failed to make the case.

The following is a quick tour of the underlying logic. By the way, I call this a gated process as there are criteria for satisfying the conditions for passing through the gate; it is, I believe, unhelpful to decision making to characterise them as hurdles, as this suggests they are imposed to make life difficult. They are, actually, simply the structure of decision making.

Looking at this from a behavioural perspective, i.e. psychology informing decision making, each gate means this:

  • To get through the first gate, the payer is confronted with existing treatment options and asks why do I need another, or why change? Unfamiliarity may also be at work, with novel treatment benefits that lack comparators. Evidence of unmet need might be helpful along with good epidemiology to demonstrate the possibility of better outcomes.
  • Satisfied that a new therapy may be warranted, there is the question of risk and benefit compared to current treatment. While a new therapy might be indicated (yours?), the associated risk may be unacceptable compared to not using it. The benefits really do have to hold under increased uncertainty for a payer to agree to increased treatment risk. I suggest this is where discussion of standards of care begins to be quantified, having been introduced at the first gate. Payers often are not as aware as they should be of the evidence on current standards of care in misdiagnosis, medical error and patient dissatisfaction.
  • Then, having agreed that this uncertainty and its associated risk are acceptable, we are confronted with the cost and efficacy issue. Now we are beginning to price that risk. Good analysis of the costs of care and mis-care is useful, again because payers are often not aware of whole system costs (i.e. the costs of a treatment pathway), either because they are using a fee schedule linked to DRG-type classification or because they haven’t proofed their capitation models.
  • Success in pricing that risk moves to the question of the medicine in the context of total treatment costs and whether the treatment costs themselves for the patient population can be managed or will the scaling of the costs overwhelm the system for this treatment population versus all other options. Companies may see themselves as just suppliers of medicines for a price, and not a partner in the total system. But understanding the cost drivers along the whole treatment pathway, not just the costs a new medicine may drive, becomes an important element in final value pricing. If you have a medicine that reduces associated costs, or avoids certain costs (think the Triple Aim, here), then the determinants of value are much clearer. It may be that a biomarker is a value-add from one perspective but only if it reduces medical error and misdiagnosis, without increasing costs, so precision patient identification becomes important. If you’ve got this far, though, you’ll have already shown you can demarcate the treatment population, including the responder subset with a degree of precision.
  • Finally, the payer thinks about the future and whether there will be new medicines coming along that might address the same treatment population, alter risk differently, improve outcomes, avoid costs, with better patient adherence, and so on. Given, broadly, a medicine is alone in its treatment class for months, rather than years, payers may choose to delay decision making or consider options you’ve ignored that may trade off future costs and present priorities. This may be where a payer will be thinking disinvestment or product substitution and the determinants of that are critical in this final phase. Here’s a scenario: Why might a particular medicine not be a preferred medicine on a hospital formulary? The answer is simple: don’t have production problems where supply cannot be guaranteed. The lesson is that this is where the long game gets played out.
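The five gates above can be sketched as a simple linear pipeline in which a ‘no’ at any gate stops the process. The gate labels and the example evidence dictionary are illustrative only:

```python
# The five payer gates, in the order they are thought through.
GATES = [
    "why change from current treatment?",
    "is the risk/benefit acceptable?",
    "is the cost/efficacy case made?",
    "are total treatment costs manageable?",
    "does it hold up against future options?",
]

def run_gates(evidence):
    """Return (passed_all, first_failed_gate); a 'no' at any gate stops the process."""
    for gate in GATES:
        if not evidence.get(gate, False):
            return False, gate
    return True, None

# The right information at the wrong gate doesn't help: this dossier
# passes the first two gates but has no cost/efficacy case.
ok, failed_at = run_gates({
    "why change from current treatment?": True,
    "is the risk/benefit acceptable?": True,
})
print(ok, failed_at)  # False is the cost/efficacy case made?
```

The point of the linear structure is that evidence must be matched to the gate currently in play; later-gate evidence cannot rescue a failure at an earlier gate.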

For those of you who have read Kahneman’s “Thinking Fast and Slow”, or similar, there are decisional heuristics at work here. Across that gated process, you are contending not just with highly structured, evidence-informed quantitative information, but also with how humans can be influenced by how they think they think. This involves a raft of factors such as confirmation bias, hyperbolic discounting, choice overload, loss aversion, the endowment effect, anchoring, mental accounting and social proof. It will pay to be attentive to when you present what information and to the frame of mind decision makers are in. The reason this is important is that regulators and payers in different countries, hospitals or regions can make different decisions from the same evidence, so something else is going on.

And so, a comment on pricing. To short-circuit this challenging gated process, it is common simply to cut the price, i.e. discount. Discounting is a quick-win trick that only works if payers are trying to reduce present costs, which they all are. However, payers with their eye on the future are more likely to be interested in pricing arrangements that address uncertainty over time, and so will be amenable to arrangements such as coverage with evidence development or outcomes guarantees. If they are focused on whole system issues, they will be interested in care pathway (cohort/whole system) pricing, for instance. If, though, future costs are a priority, think about capitation arrangements, or simple price/volume, but be mindful that this last is like selling products door-to-door in the 1950s.

I happen to think care pathway pricing of carefully demarcated patient populations with costs taken over say 5 years is a better pricing model for both parties. Value can be demonstrated on both sides along with evidence of such things as improved adherence (to reduce waste by non-responders) or diagnostic decision support aids to address misdiagnosis and sources of medical error or reduce time to the correct diagnosis, in the case of rare diseases for instance.
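A back-of-envelope sketch of that pathway pricing idea, with every figure invented for illustration:

```python
def pathway_price(cohort_size: int,
                  annual_drug_cost: float,
                  annual_avoided_costs: float,
                  years: int = 5) -> float:
    """Net cost to the payer of treating a demarcated cohort over the horizon."""
    return cohort_size * years * (annual_drug_cost - annual_avoided_costs)

# 1,000 precisely identified patients; the medicine costs 8,000/year but
# avoids 3,000/year in misdiagnosis, medical error and non-responder waste:
net = pathway_price(cohort_size=1_000,
                    annual_drug_cost=8_000,
                    annual_avoided_costs=3_000)
print(net)  # 25000000
```

The useful feature is that avoided costs (adherence gains, fewer wasted encounters) sit inside the price calculation rather than beside it, which is what makes the value case demonstrable to both parties.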

This article is designed to emphasise product value determination under conditions of uncertainty to arrive at a sustainable long-term relationship.

A model for mapping machine learning onto human decision making

The AI agenda is for me all about augmenting human reasoning; what I call cognology (cognitive focus) to distinguish from technology (physical focus). This is the core challenge to work flow and adoption.

Here are some thoughts on the application of John Boyd’s OODA decision making from military decision making to healthcare decision making.

Boyd developed OODA to characterise decision making by fighter pilots who must react quickly. Success lay in cycling through this more quickly than the opponent.

OODA means: Observe, Orient (interpret), Decide (from options), Act. The faster a person can work through that process (reason), the quicker the decision making and interpretation of evidence.
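The cycle can be sketched schematically. The handler functions below are trivial placeholders for whatever observation, interpretation, option-selection and action logic a clinical (or AI-augmented) system supplies; the point is the shape of the loop, and that speeding up any step shortens the whole cycle:

```python
import time

def ooda_cycle(observe, orient, decide, act, situation):
    """Run one Observe-Orient-Decide-Act pass and report how long it took."""
    start = time.perf_counter()
    data = observe(situation)           # Observe: gather raw inputs
    interpretation = orient(data)       # Orient: interpret them
    option = decide(interpretation)     # Decide: choose among options
    outcome = act(option)               # Act: execute the choice
    return outcome, time.perf_counter() - start

outcome, elapsed = ooda_cycle(
    observe=lambda s: s["readings"],
    orient=lambda d: max(d),                                 # worst reading
    decide=lambda risk: "escalate" if risk > 0.8 else "monitor",
    act=lambda choice: choice,
    situation={"readings": [0.2, 0.9, 0.5]},
)
print(outcome)  # escalate
```

Mapping AI onto this structure makes it explicit which step is being augmented (most AI today sits in `observe` and `orient`) and which step, if automated, takes the human out of the loop (`act`).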

Artificial Intelligence has a role in each of these steps. It becomes quite important to know where to focus AI capabilities, what operational benefits flow from that and indeed what the wider impact of AI in clinical reasoning might be.

At root, that means being clear about what aspect of human reasoning is being addressed by AI and where in the decision making process.

What we are seeing with AI, and what has caused the most concern for critics, is the risk that AI’s significant augmentation of human reasoning along the OODA process could in the end replace humans. My view is that we need to know where the AI augments and how, and where the AI replaces and why.

A worrisome example is AI in combat, with autonomous/semi-autonomous drones, the former having the capability of acting without human intervention: humans are “out of the loop”. Healthcare, too, offers the potential for clinicians to be “out of the loop”, and in the absence of adoption of augmented reasoning by clinicians, the AI could dominate by default.

Boyd’s model looks like this:

[Diagram: Boyd’s OODA model]

The AI computational models are very good at dealing with the complexity of decision making illustrated here. I’d suggest much AI is still at the first two O’s: computational modelling of tumours, for instance, and suggesting where the highest risk lies. We are beginning to see the D being addressed when clinicians are presented with treatment options (such as referral of a patient with a hitherto unknown diagnosis for genetic testing, where not referring was the default clinical decision; this is related to work I’m involved with on patient finding and undiagnosed rare conditions). Much AI has helped with OOD. It is the A that is the coming challenge, and which has the potential to take humans ‘out of the loop’ and allow the AI to determine actions, e.g. automatically having the patient referred.

The reason this matters is that clinical processes involve prediction about what health outcomes will be obtained from what treatment intervention. Here’s an example: AI is outperforming clinicians in diagnosis (using ROC figures). The prediction models I’m working with for identification of patients with rare diseases operate at an ROC of about 0.9 and when clinicians review the output as part of augmenting reasoning, the AI’s ROC jumps to over 0.97, suggesting almost certainty of a rare disease diagnosis. At present, patients with rare diseases experience an average of 7 years to a correct first diagnosis and may see as many as 20 different clinicians on that journey. AI cuts that to ‘hours’ and fewer wasted clinical encounters. This means the OODA cycle becomes more precise and much quicker from the patient’s perspective.
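A rough sketch of that two-stage augmented reasoning, with invented numbers that echo (but do not reproduce) the kind of uplift described: an AI flags candidate patients, then clinician review removes a share of the false positives, improving the precision of the combined system.

```python
def two_stage_precision(n_flagged: int,
                        ai_true_pos: int,
                        clinician_fp_rejection_rate: float) -> float:
    """Precision after clinicians reject a share of the AI's false positives."""
    false_pos = n_flagged - ai_true_pos
    remaining_fp = false_pos * (1 - clinician_fp_rejection_rate)
    return ai_true_pos / (ai_true_pos + remaining_fp)

# AI flags 100 patients, 90 correctly; clinician review catches 80% of the 10 errors:
print(round(two_stage_precision(100, 90, 0.8), 3))  # 0.978
```

The sketch shows why the combination outperforms either party alone: the AI supplies recall at scale, and the clinician supplies error rejection on the short list.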

Intelligent medicines optimisation

A central feature of any high performing healthcare system or organisation includes best practice in medicines use and management. As all aspects of healthcare are under varying degrees of financial stress these days, cost controls and appropriate use of medicines must support the highest standards of clinical practice and safe patient care.

Medicines optimisation is one strategy as the use of medicines influences the quality of healthcare across the whole patient treatment pathway.

Failure to optimise the use of medicines across this pathway may arise from:

  • misuse of medicines (failure to prescribe when appropriate, prescribing when not appropriate, prescribing the wrong medicine, failure to reconcile medicines use across clinical hand-offs);
  • “clinical inertia” and failure to manage patients to goal (e.g. management of diabetes, and hypertension post aMI) [O’Connor PJ, SperlHillen JM, Johnson PE, Rush WA, Blitz WAR, Clinical inertia and outpatient medical errors, in Henriksen K, Battles JB, Marks ES et al, editors, Advances in Patient Safety: From Research to Implementation Vol 2: Concepts and Methodology), Agency for Healthcare Research and Quality, 2005];
  • failure to use or follow best-practice and rational prescribing guidance;
  • lack of synchronisation between the use of medicines (demand) and procurement (supply), with an impact on inventory management and
  • loss of cost control of the medicines budget.

The essential challenge is ensuring that the healthcare system and its constituent parts are fit for purpose to address and avoid these failures or at least minimise their negative impact.

Medicines costs are the fastest growing area of expenditure and comprise a major constituent of patient treatment and recovery.

The cost of drug-related mortality and morbidity in the USA was described in 1995 [Johnson JA, Bootman JL. Drug-related morbidity and mortality: a cost of illness model. Arch Int Med. 1995;155:1949-56], which put the impact at $76.6 billion per year (greater than the cost of diabetes).

The study was repeated five years later [Ernst FR, Grizzle A, Drug-related morbidity and mortality: updating the cost of illness model, J Am Pharm Assoc. 2001;41(2)] and the costs had doubled.

Evidence from a variety of jurisdictions suggests that the share of drugs within the total cost of illness can be substantial, for instance:

  • Atrial fibrillation: drugs accounted for 20% of expenditure [Wolowacz SE, Samuel M, Brennan VK, Jasso-Mosqueda J-G, Van Gelder IC, The cost of illness of atrial fibrillation: a systematic review of the recent literature, EP Europace (2011) 13(10):1375-1385]
  • Pulmonary arterial hypertension: drugs accounted for 15% in a US study [Kirson NY, et al, Pulmonary arterial hypertension (PAH): direct costs of illness in the US privately insured population, Chest, 2010; 138.]

Upward pressures on the medicines budget include:

  • medicines with new indications (be careful, some of this is an artefact of drug regulation gamed by manufacturers)
  • changes in clinical practice which have an uplift effect on medicines use (especially if guidelines are poorly designed)
  • increasing the number of prescribers (keep in mind that prescribers are cost-drivers)
  • medicines for previously untreated conditions (this trades off against reduced costs from misdiagnosis and missed or delayed treatment)
  • therapeutic improvements over existing medicines, and
  • price increases (think of monopoly generic manufacturers, for instance).

Downward pressures include:

  • effective procurement methods (e.g. avoid giving winners of tenders ‘the whole market’ and ensure that rules enable generic competition)
  • use of drug and therapeutic committees and drug review processes (it is all about knowing where the money goes for improving value)
  • use of prescribing and substitution guidelines e.g. generic substitution (oh yes, enforcing it, too; it also helps to ensure OTC medicines are not reimbursed by insurance as this adds to competitive pricing pressure and improves patient choices)
  • positive and negative hospital formularies (yes, hard choices)
  • pro-active clinical pharmacy services engaged in both business and professional domains (this means ensuring the expertise of pharmacists is central to decision-making), and
  • reduction of waste (you don’t want to know how much drug waste there is but estimates are up to 30% of expenditure is waste).

Additional sources of pressure in either direction come from:

  • population case-mix (that means paying attention to the health of the nation)
  • changing prevalence and incidence over time (also paying attention to the determinants of ill-health, particularly avoidable causes and effects by age cohorts)
  • performance and efficiency of clinical workflow across the patient pathway (this is where money gets wasted at light speed and where it can also be saved; clinicians are in control of workflow so engaging them in areas where they can make a difference matters a lot)
  • medicines payment and reimbursement practices including patient co-payments where they exist and the structure of hospital budgets or financing, (do we want to discuss the unintended and perverse consequences of the payment system?) and
  • healthcare system regulations (yes, where many problems are caused in the first place).

What Cognology says.

Many of the drivers of problems can be addressed through a combination of professional staff development, better use of information, particularly within decision-support systems to support guidelines and prescribing compliance, and organisational interventions.
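As a minimal sketch of the kind of decision-support intervention described above, consider a rule-based prescribing check run at the point of ordering. Everything here is a hypothetical illustration (the drug names, the formulary structure and the dose thresholds are all invented for the example), not clinical guidance:

```python
# Minimal sketch of a rule-based prescribing check: compare a proposed
# prescription against a (hypothetical) formulary that carries
# generic-substitution and maximum-daily-dose rules.
# Illustrative only -- not clinical guidance.

FORMULARY = {
    # brand/entered name -> (preferred generic, max daily dose in mg)
    "Brandomycin": ("genericomycin", 400),
    "Examplol": ("examplol", 100),
}

def check_prescription(drug: str, daily_dose_mg: float) -> list[str]:
    """Return advisory messages for a proposed prescription (empty list = no issues)."""
    warnings = []
    entry = FORMULARY.get(drug)
    if entry is None:
        warnings.append(f"{drug}: not on formulary - requires review")
        return warnings
    generic, max_dose = entry
    if generic.lower() != drug.lower():
        warnings.append(f"{drug}: substitute generic '{generic}'")
    if daily_dose_mg > max_dose:
        warnings.append(
            f"{drug}: dose {daily_dose_mg} mg exceeds guideline maximum {max_dose} mg"
        )
    return warnings
```

In practice such rules would sit inside the prescribing workflow and draw on the hospital’s own formulary and guideline data; the point of the sketch is only that guideline and substitution compliance can be checked mechanically at the moment of prescribing rather than audited after the fact.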

 

Smart anti-counterfeiting: it is all in the system’s design

How can I be sure the medicine I take is genuine?

Counterfeit medicines are a global problem, with trade in the billions of dollars. The World Health Organization estimates 8-10% of all drugs supplied globally are counterfeit.
Counterfeits are a clear and present danger to human health, and no country is immune from the risk. Fake medicines are hazardous, with documented toxicity, instability and ineffectiveness, yet few people are experts in pill authentication (even pharmacists get fooled). Counterfeit drugs are easier to make and fake than money, and there is little patients can do but rely on assurances from others that their drugs are genuine. That may not be good enough.

For years, health and medicines regulators believed there wasn’t a problem because, from their perspective, there were few cases. Today we know better, and there have been efforts to address this regulatory denial.

Counterfeit medicines are infiltrated into the supply and distribution of legitimate medicines by rogue, criminal organizations and individuals, who specifically target the weaknesses in supply chains, as well as human weakness (bribes and kickbacks) and gaps in healthcare payment systems.

Counterfeiting had originally been viewed as a patent issue, and legal advisors took a purely legalistic interpretation. It was not until the problem of counterfeiting was presented as a risk to human health and people’s lives that the dead-end logic of patent protection was dropped. But why did the lawyers fail to understand the context in which the problem existed? New legislation is always being introduced, such as the EU’s Falsified Medicines Directive, but criminals will find a way to game it, even though this directive was itself apparently ‘gamed’ by its developers using the French problem with the drug Mediator. Gaming policy for developmental purposes, however, also needs people to think like a criminal.

Once a medicine has been factory sealed by the pharmaceutical manufacturer, there is no assurance that it will reach the patient unopened; a pharmacist and doctor can open it, and packages that cross borders are opened for repackaging and labelling. Indeed, there are companies with the licensed authority to repackage factory-sealed medicines with new labels in new languages. Unscrupulous distributors can conceal the illegal substitution of counterfeits within these apparently highly regulated systems. Many countries are net importers of medicines as they lack sufficient domestic manufacturing capacity or the medicine is complex and is manufactured in only a few places. This makes these countries vulnerable to supply chain interference.
While attention to the international trade in medicines has often focused on internet pharmacies, the real problem is that the online mail-order environment is a counterfeit-drug delivery system into every home on the planet.

Healthcare systems themselves must address the perverse incentives that drive criminal behaviour; keep in mind that criminals exploit weaknesses in supply chains, laws and regulations, and respond to unmet demand for a product (from toasters to cars, there are illegal markets everywhere, not just for drugs), all resting on common incentives. A major driver for criminals is the existence of cash markets for their products (they tend not to take cheques), and one of the largest cash markets is people without adequate health insurance cover, served by reimbursement systems that do not cover the full cost of medicines or fail to insulate patients from high drug costs. In addition, as information on medicines now spreads widely through internet social media, a country failing to license a medicine that some people would value opens a door for counterfeiters to exploit patient demand for that medicine.

What Cognology says.

Catching crooks who trade in counterfeit drugs is, first of all, a problem of finding them. Using advanced intelligent technologies (cognologies, as in the name of this blog) means that surveillance can be smarter and less distracted by false signals.
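One reason naive surveillance drowns in false signals is the base-rate effect: even a fairly accurate screen applied to a mostly genuine supply chain produces a large share of false alarms. A quick Bayes’-rule sketch makes the point (the 10% prevalence echoes the WHO figure above; the sensitivity and specificity values are assumptions chosen purely for illustration):

```python
# Base-rate sketch: what fraction of flagged packages are actually
# counterfeit, given a screen's sensitivity/specificity and the
# prevalence of counterfeits in the supply chain?
def positive_predictive_value(prevalence: float,
                              sensitivity: float,
                              specificity: float) -> float:
    true_pos = prevalence * sensitivity            # counterfeits correctly flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # genuine packages wrongly flagged
    return true_pos / (true_pos + false_pos)

# With 10% prevalence and a 95%-sensitive, 90%-specific screen,
# roughly half of all alerts are false alarms:
ppv_global = positive_predictive_value(0.10, 0.95, 0.90)

# In a tightly regulated chain with 1% prevalence, the same screen
# yields mostly false alarms -- which is why smarter, better-targeted
# surveillance matters:
ppv_regulated = positive_predictive_value(0.01, 0.95, 0.90)
```

The arithmetic explains the “distraction” problem: the rarer the counterfeits, the more an untargeted screen buries investigators in false positives, and the more value there is in intelligent systems that concentrate scrutiny where risk is genuinely elevated.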

Smarter and more Intelligent Healthcare in 2035

The King’s Fund, a UK health charity, ran a scenario essay-writing competition; here is the link to the winning, runner-up and other scenarios (mine is not among them), and of course congratulations to the winner.

My scenario builds on the notion of service unbundling and draws on strong and weak signals of changes likely to impact health and social care perhaps to about 2035. The scenario is written as a retrospective view from the year 2047. My objective was to avoid a doctrinaire scenario.

Unbundling 2035

Between 2016 and 2035, the way that people worked was substantially changed by the widespread digitisation of information. Smart machines and robots had moved from doing physical work to being central to much cognitive work, which led to a fundamental restructuring of the economy. By 2035, taxation was shifting from taxing people to taxing the work done by devices, cognologies, and robots.

The fault lines between reality and expectations were starkly evident during the 2020s, as public investment in health and social care struggled to cope with the rapidly changing world. People were becoming accustomed to flexible access to personalised services that came to them and expected the same from care provision. Rising displeasure at service decline led to middle-class flight to alternatives with rising use of private medical insurance, progressively fracturing the social contract that legitimated publicly-funded care. Indeed, by 2028, 38% of the population used private care, with over 55% amongst Millennials.

Fearful health and social executives and worried Ministers of Health had reacted to these stresses by pulling the system even more tightly together, to protect jobs and avoid the failure of publicly-funded institutions.

This fed further public displeasure by the dominant middle-aged Millennials who challenged the traditional approaches to health and social care. In the United Kingdom, for instance, this unrest led to the 2028 Referendum on their tax-funded healthcare system, leading to the replacement of this system with social insurers and personal Social and Health Care Savings Accounts.

The process of changes in health and social care around the world has become known as Unbundling. This brief historical retrospective outlines three of the key components of that unbundling.

The 1st Unbundling: of knowledge and clinical work

Professional knowledge was affected by digital technologies which had unbundled knowledge from the expert. This changed how expert knowledge was organised, used and accessed; research institutions and knowledge-based organisations were the first to feel the changes, with librarians being one of the first professions to face obsolescence. Rising under-employment, particularly in traditional male-dominated occupations was still being absorbed by the economy.

Routine cognitive work and access to information and services were increasingly provided by cognologies (intelligent technologies), or personal agents as they were called. Widely used across society, they were embedded in clinical workflow from diagnosis to autonomous minimally invasive surgery. By this time, jobs with “assistant” in the title had generally disappeared from the care system, despite having been seen as an innovative response to workforce shortages through the late 20-teens. These jobs had turned out to be uninteresting and, being highly fragmented, required time-consuming supervision.

The benefits of precision medicine were substantial by this time, enabling earlier diagnosis and simpler and less invasive treatments. Theranostics, the merging of diagnosis and therapy, unbundled the linear care pathway and the associated clinical and support work. This also led to the unbundling of specialist clinical services, laboratory testing and imaging from monopoly supply by hospitals. Indeed, the last hospital was planned in 2025, but by the time it opened in 2033, was deemed obsolete.

The 2nd Unbundling: of financing and payment

The unbearable and unsustainable rise in health and social care costs necessitated better ways to align individual behaviours and preferences with long term health and well-being. Behavioural science had shown that people did not always act in their own best interests; this meant the care system needed people to have ‘skin in the game’, best done by monetising highly salient personal risks.

Existing social insurance systems which used co-payments were more progressive in this direction, while countries with tax-funded systems were forced to reassess the use of co-payments, and financial incentives. The Millennials, having replaced the baby-boomers as the primary demographic group, were prepared to trade-off equity for more direct access to care. It also became politically difficult to advance equity as a goal against the evidence of poorer health outcomes as comparisons with peer countries drove performance improvements.

Medical/social savings accounts were one way of giving individuals control of their own money; building on consumerist behaviour, this directly improved service quality and incentivised provider performance, as providers could no longer hide behind the protective veil of public funding. The social insurers were able to leverage significant reforms through novel payment systems, and to influence individual health behaviours through value-based (or evidence-based) insurance, in ways not possible under a taxation system.

The 3rd Unbundling: of organisations

With people used to having their preferences met through personalised arrangements, care was organised around flexible patterns of provision able to respond easily to new models of care. This replaced the “tightly coupled” organisational approach known in the early part of the 21st century as “integration”, which we know led to constrained patient pathways, and limited patient choices unable to evolve with social, clinical and technological changes.

The big-data tipping point is reckoned to have occurred around 2025. Because the various technologies and cognologies had become ambient in care environments they were invisible to patients, informal carers, and care professionals alike; this enabled the genesis of smaller and more diverse working environments.

By 2032, medical consultants were no longer hospital-based, having formed clinical care social organisations with their cheaper, smaller, portable, networked and intelligent clinical resources. Other care professionals had followed suit. These clinical groupings accessed additional clinical expertise on an as-needed basis (known as the “Hollywood” work model); this way of organising clinical expertise helped downsize and reshape the provision of care and met patient expectations for a plurality of care experiences.

It took time to shift from reliance on the monopoly supply of care from hospitals in those countries that had continued to pursue a state monopoly role in care provision. However, most hospitals repurposed themselves quite quickly as focused factories, while the more research-oriented specialised in accelerating the translation of research into daily use, helped along by the new research-discovery tools and the deepening impact of systems biology, which was making clinical trials obsolete.

What Cognology Says

This Unbundling arose as a product of the evolution of social attitudes, informed by the emerging technological possibilities of the day. The period from 2016 to 2025 was a critical time for all countries, exacerbated by shortages in the workforce coupled with economic difficulties and political instability.

Today, in 2047, we are well removed from those stresses that caused such great anxiety. We must marvel, though, at the courage of those who were prepared to build what today is a leaner, simpler and more plural system, removed from politicised finance and management decisions.

It is hard to imagine our familiar home-based theranostic pods emerging had this trajectory of events not happened. As our Gen-Zeds enter middle age, they will, in their turn, reshape today’s system.

Plus ça change, plus c’est la même chose.

27 December 2047

Note on the Scenario

This scenario is informed by strong and weak signals, including:

Ayers A, Miller K, Park J, Schwartz L, Antcliff R. The Hollywood model: leveraging the capabilities of freelance talent to advance innovation and reduce risk. Research-Technology Management. 2016 Sep 2;59(5):27–37.

Babraham Institute. The zero person biotech company. Drug Baron. http://drugbaron.com/the-zero-person-biotech-company/

Cook D, Thompson JE, Habermann EB, Visscher SL, Dearani JA, Roger VL, et al. From ‘Solution Shop’ Model to ‘Focused Factory’ in hospital surgery: increasing care value and predictability. Health Affairs. 2014 May 1;33(5):746–55.

Cullis P. The personalized medicine revolution: how diagnosing and treating disease are about to change forever. Greystone Books, 2015.

Does machine learning spell the end of the data scientist? Innovation Enterprise. https://channels.theinnovationenterprise.com/articles/does-machine-learning-spell-the-end-of-the-data-scientist

Eberstadt, N. Men without work. Templeton, 2016.

Europe’s robots to become ‘electronic persons’ under draft plan. Reuters. www.reuters.com/article/us-europe-robotics-lawmaking-idUSKCN0Z72AY

First 3D-printed drug just unveiled: welcome to the future of medicine. https://futurism.com/first-3d-printed-drug-just-unveiled-welcome-future-medicine/

Ford M. The rise of the robots: technology and the threat of mass unemployment. Basic Books, 2015.

Frey CB, Osborne MA. The future of employment: how susceptible are jobs to computerisation? Oxford Martin School, Oxford University, 2013.

Generation uphill. The Economist. www.economist.com/news/special-report/21688591-millennials-are-brainiest-best-educated-generation-ever-yet-their-elders-often [accessed December 2016]

Lakdawalla DN, Bhattacharya J, Goldman DP. Are the young becoming more disabled? Health Affairs. 2004;23(1):168–176.

Susskind R, Susskind D. The future of the professions: how technology will transform the work of human experts. Oxford UP, 2015.

Topol E. The creative destruction of medicine: how the digital revolution will create better health care. Basic Books, 2012.

With Samsung’s ‘Bio-Processor,’ wearable health tech is about to get weird. Motherboard. http://motherboard.vice.com/read/with-samsungs-bio-processor-wearable-health-tech-is-about-to-get-weird