HealthOps: A New Stage in the Evolution of Healthcare Management


In the previous article, we examined how technology evolved and how it’s culminated in the streamlined, automated intelligence of Internet of Things ecosystems. We paid special attention to the healthcare sector and looked at how the Internet of Medical Things is availing itself of new technologies as it pushes to open new healthcare frontiers and improve patient outcomes.

The last hundred years have seen a lot of technological change, to be sure, but they have seen equally significant, and often parallel, changes in business practices. In the space of this article, we’ll unpack many of those changes, note the common threads that run through and tie them together, and add that information to what we’ve already established in the previous article to predict the trends most likely to unfold going forward.


The Evolution of Business Practices

The invention of the wheel won’t take you very far without the vision to build some form of a basic suspension system. The same is true with most technologies – without changing your systems, processes, and practices to better support the new technology, the gains are largely theoretical.

Accordingly, the stages of technological evolution mentioned in the previous article – from the Machine Age (spanning the first half of the 20th century) to the Digital Revolution (starting in the 1950s) to the Information Age (beginning in the late ‘70s) to the current epoch of Digital Transformation (that began in the mid-2000s) – have seen parallel changes in the business practices of the industries looking to most effectively leverage them. Most such changes can be placed in one of four groups:

1.  Maintenance

As technology evolved, so did the methods for getting it to produce as much as possible for as long as possible. Over the last hundred years or so, maintenance models have all boiled down to one of the following paradigms: 

  • Reactive –

Most factors of production cannot run continuously without seeing degradation in their performance and depreciation in their value, which is why they need to be maintained. Since the dawn of the Industrial Revolution, the dominant model for asset maintenance has been reactive. That is to say, when things broke, they would be fixed. Simple and effective.

  • Preventative –

Eventually, the owners of those factors of production realized that reactive maintenance models were inefficient and wasteful. The costs of repairs and replacement are almost always higher than the costs of preemptive servicing. What’s more, the production lost to repairs and replacement tends to exceed the cumulative downtime of regular servicing. The idea that overall costs would be lower and production higher with planned maintenance interventions really started gaining traction in the 1950s.

  • Predictive –

Preventative maintenance offers huge improvements over a more reactive approach, but it’s far from perfect. For this approach to scale, it needs to work according to the law of averages. So if the goal is to prevent breakdowns, and the average factory press develops problems in its hydraulic system every 6 months, you’ll need to service your machines every 5 months. In effect, that means you’re taking offline roughly as many machines that don’t actually need servicing as machines that do. (That’s how averages work!)

When decision makers realized that regularly shutting down equipment, irrespective of the specifics of the individual unit’s circumstances, was costing them unnecessarily, they started looking for a better alternative. With the computing boom of the 1980s and the advances in sensor technology that would soon follow, there was now the ability to collect data points (in both their individualized and aggregated forms) at scale and plug them into predictive models that would make more tailored maintenance recommendations.

As the technology improved, this became cheaper and easier to pull off, as well as more and more accurate. The sketch below illustrates the basic idea.
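To make the contrast concrete, here’s a minimal sketch of a condition-based service check. The baseline, threshold, and readings are hypothetical, and a real predictive model would draw on far richer data and statistical techniques:

```python
# predictive_check.py -- a minimal sketch of condition-based servicing.
# Baseline, threshold, and readings are hypothetical illustrations.
from statistics import mean

BASELINE_PRESSURE = 100.0  # healthy hydraulic pressure for this press (psi)
DRIFT_LIMIT = 0.10         # flag a unit when it drifts >10% from baseline

def needs_service(recent_readings: list[float]) -> bool:
    """Decide per unit, from its own sensor data, whether to intervene."""
    drift = abs(mean(recent_readings) - BASELINE_PRESSURE) / BASELINE_PRESSURE
    return drift > DRIFT_LIMIT

if __name__ == "__main__":
    healthy_press = [101.2, 99.8, 100.5, 98.9]
    degrading_press = [92.0, 88.4, 85.1, 82.7]
    print(needs_service(healthy_press))    # False: leave it running
    print(needs_service(degrading_press))  # True: schedule an intervention
```

The point is that the decision is made per machine, from that machine’s own data, rather than from a fleet-wide average and a fixed calendar.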

2.  Inventory

Just as maintenance practices shifted from more reactive and “heavier” models to more efficient and dynamic alternatives, so too have inventory management practices evolved. Since the Machine Age, inventory strategies have gone through a few major revisions. Here’s what that looked like:

  • Maintained surplus –

There’s nothing worse than over-promising and under-delivering, which is why businesses all over the world have historically taken a better-safe-than-sorry approach to inventory management. Starting in the 1940s, this approach inched toward greater precision and scalability with the introduction of punch card management systems.

Already by the 1950s, businesses had begun experimenting with early barcode systems to improve record keeping – allowing them to more accurately compare demand and inventory patterns and move toward aligning them more tightly.

  • Just-in-Time (JIT) stock management –

Pioneered by Japanese manufacturers (the Toyota Motor Corporation in particular) in the 1960s and 70s, Just-in-Time supply management was a re-imagining of the production cycle with the stated goal of increasing efficiencies, removing unnecessary process stages, and reducing operating overhead. In practice, this meant designing the production floor with greater forethought to eliminate all wasted motion and spatially concentrate operations. It also meant revising supplier arrangements to physically shorten supply chains and allow orders to be fulfilled in smaller volume and with shorter notice.

Breaking from the standard established by Henry Ford and his assembly line process, there was also a concerted effort to build and expand the skill-set of workers so that they would understand each other’s tasks, could smoothly swap in and out of roles when necessary, and collaborate more intelligently to solve problems and improve processes.

Combined with smart flow scheduling, strict oversight, and hardened operational discipline – empowered by then-new technologies such as Universal Product Codes and computerized tracking/management systems – these efforts helped JIT practitioners dramatically reduce inventory holding costs, limit labor costs, accelerate production cycles, and increase total output. The principles guiding this breakthrough methodology came to be more broadly referred to as “lean”. (A toy sketch of the reorder arithmetic that shorter supply chains enable appears at the end of this section.)

  • Decentralized responsive fulfillment –

Consumer-facing technologies like the internet, which arrived in the mainstream in the late 1990s, along with business-facing technologies like Radio-Frequency Identification, would once again push the bounds of what was possible. Seizing on the principles of lean, forward-thinking strategists rebuilt their businesses around the idea that, with smart systems in place, supply can be paired to demand in near real-time, with little upfront investment, little overhead, little risk, and tremendous scale economies. The idea was essentially to use superior technology and processes to create and take advantage of an arbitrage opportunity between existing wholesale and retail models.

No one executed this strategy more famously or more successfully than Amazon. At the turn of the millennium, Amazon built its online book-selling empire through a partner-based distributed business model predicated on inventory management with virtually no excess stock. As the company grew bigger, its inventory management methods grew more advanced, and it relied increasingly on sophisticated predictive modeling to remain ahead of the curve while keeping overhead to a minimum. Eventually, the company even built its own inventory management system to better facilitate its unique business model.

Continuing to innovate this decentralized responsive fulfillment model, Amazon has more recently been a champion and early adopter of autonomous retrieval robots in its order fulfillment centers.
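As a back-of-the-envelope illustration of why shorter supply chains let stock run so much leaner – the thread connecting JIT to responsive fulfillment – here’s the classic reorder-point calculation, with entirely hypothetical figures:

```python
# reorder_point.py -- a minimal sketch of demand-paced replenishment.
# All figures are hypothetical illustrations.

def reorder_point(daily_demand: float, lead_time_days: float,
                  safety_stock: float) -> float:
    """Inventory level at which a new order should be placed."""
    return daily_demand * lead_time_days + safety_stock

if __name__ == "__main__":
    # Shortening the supply chain (the JIT move) cuts lead time from 10 days
    # to 2, so far less stock has to sit on hand before each reorder.
    print(reorder_point(daily_demand=40, lead_time_days=10, safety_stock=120))  # 520.0
    print(reorder_point(daily_demand=40, lead_time_days=2, safety_stock=40))    # 120.0
```

Same demand, a fraction of the idle inventory: that, in miniature, is the arbitrage the responsive-fulfillment pioneers exploited.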

3.  Organizational structure

Perhaps most profound of all is the impact that the forces of business practice evolution have had on the fundamental structure of organizations looking to enjoy the full range of benefits that improved technology has to offer. We’ve gone from simple to complex, from clunky to swift, rigid to flexible, and static to dynamic. Consider the following:

  • Static hierarchies –

Large operations need to organize to run effectively, which is why they’re called organizations. Historically, most organizational structures have taken the form of simple hierarchies – with more people at the lower levels of the operation reporting to and doing the bidding of fewer people at the upper levels of the organization. Roles are clearly defined, and daily tasks are largely dictated by the chain of command. This basic design – whether implemented along functional or divisional fault lines – continues all the way up until power concentrates into the hands of a small number of individuals.

The benefit of this model is that it is well-defined and simple, allowing it to scale with relative ease. The disadvantage is that it is not very efficient, discourages a big-picture orientation among employees, and does not lend itself very well to meaningful collaboration or innovation.

  • Dynamic hierarchies –

Over time, the shortcomings of these rigid hierarchy structures became increasingly apparent and forward-thinking business leaders began to rethink how they were organizing their companies. In the 1970s, the idea of matrix management structures began to gain prominence as a way of breaking down silos, boosting creative and pro-active thinking, fostering collaboration, and giving employees an improved sense of ownership over their work.

In this model, hierarchies remain, but they are in constant transition and exist multi-directionally. So, for example, a given employee might report to multiple supervisors based on the different projects he or she is working on. These different supervisors may express their oversight relationship in a more direct or a more laissez-faire manner, depending on the circumstances of the collaboration.

Focused on developing cross-functional business groups, this approach has everyone in the organization – from the bottom to the top – wearing multiple hats that change with the season, so to speak. For example, by having people in your engineering teams who have marketing skills and who report to both engineering and marketing supervisors, you not only prevent competing interests from driving projects in incongruous directions (that will later need to be crudely patched together), but you build more thoughtful, synergistic products from the outset, winning widespread internal buy-in as a natural consequence.

  • Flat organizations –

Around the same time that dynamic hierarchies were gaining popularity, some began to question the need for elaborate hierarchy altogether. These business leaders argued that layers of middle management only added organizational glut and inefficiency, and they redesigned their businesses to allow for greater professional independence for individual workers as well as for business teams. Flat organizations, say advocates, are able to respond to market conditions faster and with greater creativity. It’s important to note that this is not a binary choice between hierarchical and flat.

Proponents of flattening point out that while truly flat organizational models in the style of the Valve Corporation may be rare and often fickle things, most organizations can benefit from some degree of flattening. Over the years, this has increasingly become accepted wisdom, as Wikipedia notes, “In general, over the last decade, it has become increasingly clear that through the forces of globalization, competition and more demanding customers, the structure of many companies has become flatter, less hierarchical, more fluid and even virtual.”

  • Wirearchies –

Tracing back to the early 2000s, wirearchies are the organizational structures that emerged from the societal and business conditions of the Information Age. Specifically, this model sees authority dynamically derived from the information, trust, credibility, and business results delivered by each employee.

Though this approach is still far from mainstream, it is likely that it will provide the framework on which more and more businesses will be built going forward.

4.  Process strategies

From the beginning of the 20th century to today, the fundamental view of how processes ought to be undertaken and managed evolved too. All told, we can see at least four distinct stages in this evolution. These include:

  • Production oriented –

At the turn of the century, business processes and practices were mostly governed by production-oriented economic philosophies of the sort described by Robert Keith. The underlying logic was simple: due to scale advantages, the producer that outproduces his competition will come out on top.

  • Value oriented –

Then, beginning in the 1930s with the Great Depression, being able to do more was no longer considered the goal; increasingly, the goal was to do better. In practice, this boiled down to a simple mantra: don’t make what you can’t sell. Specifically, that meant more thoughtfully mapping internal business motivations to external market values.

Notably, this period saw those market values as being soft and somewhat suggestible. So, more important than changing your products and services to better align with the public’s wants was adding complementary distribution and sales capabilities to help them “see the value”.

  • Lean oriented –

While not departing from the value-driven ethos of the previous period, process strategies continued to evolve. In the 1970s, organizations of all types began to more deliberately embrace lean principles. This saw administrators focus more on improving efficiencies and removing any procedures that added cost or time without improving output/outcome quality.

This period ushered in an explosion of organizational creativity and diversification – with managers all over the world suddenly taking little-to-nothing for granted and constantly asking if there wasn’t a better way to do x, y, or z. This period also gave rise to Six Sigma strategies that aimed to systematically seek out and relieve process bottlenecks and create highly customized processes to produce very specific comparative advantages.

  • Agile oriented –

Less a distinct approach to business processes than an extension of lean practices, agile process management strategies emerged in the 1990s. This strategy gained its first footholds in the worlds of manufacturing and software development and later percolated through to the wider business world.

Based on principles of interoperability, de-siloing, strong interdepartmental communication and collaboration, smart tooling, and automation, agile businesses respond quickly to changing demands without seeing a decrease in product quality or reliability. More recently, the push for greater agility has given way to the push to DevOps-ify the organization.

The Arrival of DevOps

This trend towards bigger data, more automation, less waste, greater collaboration, and increased responsiveness culminated in the “DevOps” phenomenon. A portmanteau of “development” and “operations”, DevOps emerged as the successor to agile methodologies and a hot trend among software companies looking to get better results and more harmonious interactions from their people, processes, and technologies.

DevOps combines information technology operations and software development into a single, deeply collaborative process. It draws on all the previous waves of innovation, requiring a lean, agile business environment in which roles and responsibilities are fluid, departments operate with shared responsibility, and data flows freely between them.

DevOps is often thought of as a sort of playbook containing specific software lifecycle management techniques, such as:

  • Continuous integration: Software changes are regularly merged into the main code repository and validated automatically. This removes the need for lengthy integration freezes, decreases the likelihood of work getting lost, prevents version drift, and avoids confusion around the change validation process. (A minimal sketch follows this list.)
  • Continuous delivery: Businesses change, test, and release their products on a continual basis, leading to faster time-to-market.
  • Continuous deployment: Software changes are released to the public without manual approval steps. This speeds up product improvement even further and removes the pressure of perfecting software before “release day.”
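As promised above, here’s a minimal sketch of a continuous integration gate, assuming a git repository with a pytest test suite. The remote and branch names are placeholders, and real pipelines typically run on dedicated CI services rather than a local script:

```python
# ci_gate.py -- a minimal sketch of a continuous-integration gate.
# Assumes a git repository with a pytest test suite; "origin" and "main"
# are placeholder names for the remote and branch.
import subprocess
import sys

def run(cmd: list[str]) -> int:
    """Run a command, echoing it so the pipeline leaves a visible trail."""
    print(f"$ {' '.join(cmd)}")
    return subprocess.call(cmd)

def main() -> int:
    # Step 1: rebase onto the latest main so we validate against current code.
    if run(["git", "pull", "--rebase", "origin", "main"]) != 0:
        print("Rebase failed: resolve conflicts before integrating.")
        return 1
    # Step 2: run the automated test suite; a red build blocks the merge.
    if run([sys.executable, "-m", "pytest", "--quiet"]) != 0:
        print("Tests failed: the change is not integrated.")
        return 1
    # Step 3: only validated changes reach the shared main branch.
    return run(["git", "push", "origin", "HEAD:main"])

if __name__ == "__main__":
    sys.exit(main())
```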

The truth is that DevOps is something much more fundamental and much more universally applicable. It is a holistic business philosophy. At its core lie the following principles:

  • Modularization breaks down long processes into component chunks, allowing small teams to each take ownership of their piece and “sprint” to the finish line. This accelerates release cycles, catches and fixes bugs more quickly, gives users more frequent added value, and prevents complex interdependencies from bottlenecking progress. As an added benefit, those chunks – removed from the context of the larger project in which they’re embedded – can be rearranged and repackaged together with other chunks for future applications.
  • Multi-disciplinarity and collaboration help each part better understand its place in the whole, so that each individual contribution serves the whole as well as possible.
  • Decentralized task distribution and self-management enable more effective task “chunking”, increase organizational agility, and allow team members to work to their strengths.
  • Smart, well-defined management policies and tools allow for all component tasks and processes to be placed within a unifying architecture to seamlessly coordinate and integrate everyone’s individual contributions to advance large complex projects. Continuous integration is one example of how this principle is enacted.
  • Automation of as many workflows as possible increases efficiency and scalability. DevOps endeavors to reduce to an absolute minimum any actions and processes that cannot scale – which is why automation is so important.
  • Continuous iteration and testing deploys different product versions simultaneously and subjects them to the rigors of testing to tease out any bugs, vulnerabilities, or inadequacies as quickly as possible – preventing breaks in value delivery chains and increasing throughput. At the same time, data is being constantly gathered to inform on the process and drive the next iteration.
  • Comprehensive documentation of changes and their effects, ideally collected passively through automated logs, creates a clear audit trail (see the sketch after this list). This makes it easy to identify which of many interconnected actions, changes, and iterations caused a problem.
  • Strict role and permission governance protects key areas of businesses from accidental changes or damage in the course of normal operations. It restricts access to the deepest layers of the business ecosystem.
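Here’s the sketch referenced above: a minimal illustration of how change documentation can be collected passively rather than written by hand. The wrapped operation and its fields are hypothetical:

```python
# audit_trail.py -- a minimal sketch of passive change documentation.
# The wrapped operation and its fields are hypothetical illustrations.
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("audit")

def audited(fn):
    """Wrap a state-changing operation so every call leaves an audit record."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        record = {"ts": time.time(), "action": fn.__name__,
                  "args": repr(args), "kwargs": repr(kwargs)}
        try:
            result = fn(*args, **kwargs)
            record["outcome"] = "ok"
            return result
        except Exception as exc:
            record["outcome"] = f"error: {exc}"
            raise
        finally:
            log.info(json.dumps(record))  # one line per change: the audit trail
    return wrapper

@audited
def update_config(key: str, value: str) -> None:
    """Hypothetical stand-in for any change made in the course of operations."""
    print(f"setting {key} = {value}")

if __name__ == "__main__":
    update_config("max_connections", "100")
```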

It’s no coincidence that these practices coevolved in the same spatio-temporal context as the Internet of Things. They’re two sides of the same coin and in many ways represent the culmination of a century’s forward march.

There are clear themes to the progress. The simple is becoming more sophisticated. Rough approximations are being replaced with precise calculations. Reactive is giving way to predictive. Silos are being broken down. Centralized management, systems, and processes are being decentralized. Information is flowing more freely and more automatically. Production and management processes are being designed more and more for repeat use and scalability. Tools are being made more purpose-specific while people are being given more multi-functional skills and latitude. And production cycles are shortening.

These underlying trends form a pattern, a trajectory, that offers insight into what we can expect going forward in the evolution of healthcare. And DevOps is sure to factor prominently.

Digging Deeper into the Broader DevOps Phenomenon

While DevOps originated as an approach to software operations and development, it’s become a comprehensive business methodology, useful for any industry at any level. Methodologies like DataOps apply DevOps and lean approaches to data analytics. Breaking down complex business processes into “micro-services” that can be performed independently by cross-departmental teams boosts the efficiency of large corporations, and releasing frequent product updates and changes allows for ongoing improvements to the business output.

Target introduced a DevOps business model a few years ago, and it’s become crucial to the company’s success. Target owns 1,800 stores across the US, plus a sizable online presence, so it’s not surprising that it found itself with silos that got in the way of smooth operations. Even data about store locations was scattered across three different systems. Beginning in 2013, Target began focusing on removing silos and increasing collaboration between IT and development teams. They switched to continuous integration and continuous deployment, and found that it was easier to make small, frequent corrections than to handle a rare but major crisis. By extending DevOps principles to how they approached their app, their customer service channels, and their in-store POS systems, Target saw a sharp drop in customer frustration and a sharp increase in employee satisfaction.

DevOps is having an impact on healthcare, too. Patients are discerning consumers, demanding a treatment experience that parallels the customer service, speed of delivery, and personalization that they get from Amazon. The only way that healthcare providers can keep up with patient demands is by adopting similarly automatable, collaborative, smart workflows to those used by their favorite brands.

For example, patient-led medicine requires more data, but that data is only useful if it reaches the right healthcare professional. Silos between departments could lead to the ER withholding information from Cardiology, for example. And as medical centers merge and grow larger, it becomes vital for all teams to collaborate and communicate effectively, while still being able to work as autonomous units.

The Dawn of HealthOps

For healthcare, it’s only a matter of time until these trends coalesce with IoMT into something bigger. It’s something we could call “HealthOps”. HealthOps would see healthcare providers adopt a working culture similar to DevOps.

Imagine a patient – let’s call him Joe – had a pacemaker implanted at the local hospital last year because of his heart arrhythmia. The pacemaker sends data about Joe’s medical condition to the medical center’s patient follow-up system.

One day, the pacemaker notes an irregularity in Joe’s heartbeat a few times within an hour. The device delivers the shock needed to correct the arrhythmia and sends an alert to the medical center and the hospital. Joe gets an automated text message informing him about his heart events, which he hadn’t even noticed. The text asks him to click a link to schedule an examination within 24 hours.

At the same time, Joe’s cardiologist and general physician both get alerts about Joe’s heart incident. They examine more data about Joe’s recent general health, nutrition, and exercise levels. This data was gathered by a wearable sensor that Joe agreed to wear on his wrist, and sent automatically to Joe’s health portal. The doctors consult through the secure doctors-only app, and agree on a personalized treatment plan for Joe. Joe’s cardiologist adjusts the pacemaker settings through the secure remote app that controls it. After explaining the options in person, Joe’s doctor sends him the same information through the medical center’s patient portal.
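To make the moving parts of a scenario like Joe’s concrete, here’s a minimal sketch of the event-driven plumbing it implies. Every class, function, and message below is a hypothetical illustration, not a real device or health-record API:

```python
# healthops_flow.py -- a hypothetical sketch of an event-driven follow-up
# pipeline; none of these names correspond to a real device or EHR API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class CardiacEvent:
    patient_id: str
    kind: str              # e.g., "arrhythmia_corrected"
    count_last_hour: int

@dataclass
class EventBus:
    """Routes a device event to every subscribed handler."""
    handlers: List[Callable[[CardiacEvent], None]] = field(default_factory=list)

    def subscribe(self, handler: Callable[[CardiacEvent], None]) -> None:
        self.handlers.append(handler)

    def publish(self, event: CardiacEvent) -> None:
        for handler in self.handlers:
            handler(event)

def notify_patient(event: CardiacEvent) -> None:
    # Stand-in for an SMS gateway: nudges the patient toward a check-up.
    print(f"SMS to {event.patient_id}: {event.count_last_hour} corrected heart "
          "events in the last hour. Please schedule an exam within 24 hours.")

def alert_care_team(event: CardiacEvent) -> None:
    # Stand-in for a secure clinician app: surfaces the event with context.
    print(f"Alert to cardiologist and GP: review {event.patient_id}'s recent "
          f"{event.kind} events alongside wearable sensor data.")

if __name__ == "__main__":
    bus = EventBus()
    bus.subscribe(notify_patient)
    bus.subscribe(alert_care_team)
    # The pacemaker's follow-up system publishes one event per incident window.
    bus.publish(CardiacEvent("joe", "arrhythmia_corrected", count_last_hour=3))
```

The DevOps fingerprints are all over this: one event, published once, automatically fanned out to every party who needs it, with no silo deciding who gets told.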

In a healthcare model such as this, not only is a potentially catastrophic medical event avoided, but, using a mix of smart technology and smart processes, it’s preemptively attended to almost entirely outside of the hospital – keeping costs low and resources free.

Security and HealthOps

HealthOps has the potential to fully unleash the promise of smart healthcare. In the scenario above, Joe may well have died without it. But HealthOps could be a nightmare if security is sidelined. Just like its inspiration DevOps, HealthOps will need to leverage secure-by-design technologies and processes to thrive.

So much is the success of a DevOps approach tied to security that the underlying philosophy has been expanded, and the preferred term is now DevSecOps. DevSecOps is a way of incorporating software development, IT operations, and cybersecurity into a highly connected, continuously integrated system that is protected from hackers and viruses. Without the necessary safeguards, a cybercriminal or virus can rip through a super-connected DevOps-enabled system in no time.

HealthOps requires just as much careful, early planning with security in mind. If you think about what’s at stake – the safety of patients, continuance of care, and the confidentiality of ePHI – the need for strong security capable of spanning and protecting the complex web of interactions running between your technologies, systems, and processes is paramount.

Consider a simple example: many IoMT devices enable remote telemetry, allowing the device to communicate key measurements to staff who might be out of sight of the device. This makes a lot of sense and makes it much easier to monitor more patients. But since so many devices are built on modularized software and hardware frameworks, designed for repeat use and diverse application, they often also come with built-in functionality that is at best unwanted and at worst dangerous. When the digital framework used by a medical device to enable remote telemetry also enables remote control, it can be a big problem.
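One common mitigation is to interpose a default-deny allow-list between the network and the device, so that only the telemetry functions a deployment actually needs remain reachable. Here’s a minimal sketch of the idea, with entirely hypothetical command names:

```python
# telemetry_gateway.py -- a hypothetical sketch of a default-deny command
# allow-list; the command names do not correspond to any real device.
ALLOWED_COMMANDS = {"read_heart_rate", "read_battery", "read_event_log"}

def handle_request(command: str, requester: str) -> str:
    """Let telemetry reads through; refuse and log everything else."""
    if command in ALLOWED_COMMANDS:
        return f"OK: {command} executed for {requester}"
    # Anything not explicitly allowed is denied, including control functions
    # the underlying framework supports but this deployment never needed.
    print(f"AUDIT: denied '{command}' from {requester}")
    return f"DENIED: {command} is not permitted over the network interface"

if __name__ == "__main__":
    print(handle_request("read_heart_rate", "nurse_station_3"))
    print(handle_request("set_pacing_rate", "unknown_host"))
```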

As it happens, an example of this sort of danger made headlines not long ago when it was discovered that a syringe pump could be hijacked and remotely controlled through the hospital network. Hackers would be able to turn the pump on or off, speed up or slow down the drug delivery rate, silence alarms, and more. That’s just one example, of course, but it demonstrates the threat well.

Indeed, if you want HealthOps to work for your benefit rather than your detriment, now’s the time to start thinking about how security will fit into your organization’s future.

Conclusion

Throughout history, as technology advanced, business practices and organizational methodologies have always rushed to catch up. Recently, we’ve seen IoT technologies come online all over the world. We’ve also seen businesses of every sort increasingly adopt DevOps methodologies and principles to boost efficiencies, streamline workflows, enhance collaboration, accelerate production, and improve responsiveness. With the advent of the Internet of Medical Things, it’s only a matter of time until forward-thinking administrators looking to get the most out of their new technologies follow suit and roll out their own take on DevOps.

An automated, collaborative, data-driven, integratively decentralized, and highly responsive HealthOps will be realized in the very near future. The possibilities are practically endless and the future looks bright. As a society, we’re on the cusp of fundamentally transforming the way we deliver care, and indeed how we approach health itself. That said, this future is not guaranteed. If the principle of secure-by-design is not built into HealthOps, we’re liable to do more harm than good. Healthcare providers need to act now to put the necessary security systems in place so that they can make the most of HealthOps.