Part 2: Interoperability 2.0 – How data is organized within a healthcare data platform to add value
Last week we kicked off our three-part blog series with a look at how a new generation of interoperability may be applied to the ingest of data into a healthcare data platform. Next let’s consider “interoperability 2.0” concepts in terms of how data can be organized to add value once it’s within the platform.
Quick sidebar – as we discuss newer, more sophisticated interoperability capabilities, please understand that this is not to suggest we leave the original, tried-and-true methods behind. Quite the contrary. There will always be stakeholders in the community who lack the resources (financial and/or skillset) to advance. Therefore, a commitment to making those HL7, IHE, and other fundamental interfaces better, cheaper, and faster should remain a constant.
Traditionally, if data platforms stored healthcare transactions and documents at all, they would do so exactly as they were received and index them to a specific patient, similar to how the Dewey Decimal System works for filing books in a library. Here the primary purpose is to create a history or longitudinal record for the individual patient.
As we look to move forward, our focus should shift toward ways that we can create actual intelligence on a patient from all the data collected; how to make the data more useful and easier to interpret.
Enter one of the bigger buzzwords in healthcare IT today – “aggregation”. In its framework for Population Health Management, KLAS designates “Aggregation” as its first vertical, defining it as “compiling disparate clinical/administrative data sources to support population health.” While it’s certainly no coincidence that KLAS labeled “Aggregation” as vertical 1, reflecting how foundational it is for PHM in general, one could take issue with the definition. Is merely “compiling” the data really enough? Haven’t we already been there with the Dewey Decimal System method called out above?

We need to take aggregation further to create intelligence and make the data more useful. Start by incorporating technology such as semantic resolution, where disparate code sets are normalized into a standard terminology. Then consolidate duplicate medications, procedures, results, and other clinical data elements to create a cleaner, simpler view of the patient’s record. Ultimately, data elements should be stored discretely in a standard object model, whether contributed via HL7 transactions or CCD documents, as examples. That’s the definition of “aggregation” the industry should be striving for in this new generation of interoperability.
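To make the idea concrete, here is a minimal sketch in Python of the normalize-then-consolidate steps described above. The local codes, the mapping table, and the record fields are entirely hypothetical, invented for illustration – real implementations would draw on a terminology service and a richer clinical object model.

```python
# Illustrative sketch of "aggregation" beyond simple compilation:
# normalize site-specific medication codes to a standard terminology,
# then consolidate duplicates. All codes and mappings are hypothetical.

# Hypothetical local-to-standard code mapping (the semantic resolution step)
LOCAL_TO_STANDARD = {
    ("SITE_A", "MED-001"): "197361",
    ("SITE_B", "LISIN10"): "197361",  # a different local code for the same drug
    ("SITE_A", "MED-002"): "617314",
}

def normalize(med):
    """Attach a standard code to a site-specific medication record, if known."""
    std = LOCAL_TO_STANDARD.get((med["source"], med["code"]))
    return {**med, "std_code": std} if std else med

def consolidate(meds):
    """Collapse duplicate medications that normalize to the same standard code."""
    seen = {}
    for med in map(normalize, meds):
        # Unmapped codes fall back to their (source, code) pair as the key
        key = med.get("std_code") or (med["source"], med["code"])
        # Keep the earliest record for each normalized code
        if key not in seen or med["date"] < seen[key]["date"]:
            seen[key] = med
    return list(seen.values())

incoming = [
    {"source": "SITE_A", "code": "MED-001", "date": "2023-01-05"},
    {"source": "SITE_B", "code": "LISIN10", "date": "2023-02-10"},  # duplicate drug
]
print(consolidate(incoming))  # one consolidated entry, std_code "197361"
```

The point of the sketch is the pipeline shape: resolve semantics first, then deduplicate, so the consolidated record is expressed in one terminology rather than a pile of site-specific codes.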
Clearly one of the most critical services related to putting patient data into context is identifying the exact patient to whom incoming data should be associated. Mistakes in this area can seriously harm patient safety and drive exorbitant administrative costs. Matching patients accurately to their care events has become increasingly complicated, as patients receive care in multiple settings and organizations use different systems to share records electronically. The challenge will only increase with the current movement toward removing social security numbers (even the last four digits) from the equation. Bottom line – the importance of patient matching, coupled with the difficulty of doing it effectively, efficiently, and at scale, requires continuous and concentrated investment in an overall patient management strategy.
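As a toy illustration of why this is hard, here is a weighted-scoring sketch in the spirit of probabilistic matching. The demographic fields, weights, and threshold are invented for illustration only; production matching engines use far more sophisticated (often referential or ML-based) techniques.

```python
# Toy probabilistic patient matching: score agreement across demographic
# fields and accept above a threshold. Weights and threshold are made up.

FIELD_WEIGHTS = {"last_name": 30, "first_name": 20, "dob": 35, "zip": 15}

def match_score(a, b):
    """Weighted agreement score between two patient records (0 to 100)."""
    score = 0
    for field, weight in FIELD_WEIGHTS.items():
        if a.get(field, "").strip().lower() == b.get(field, "").strip().lower():
            score += weight
    return score

def is_match(a, b, threshold=80):
    return match_score(a, b) >= threshold

incoming = {"first_name": "Jon", "last_name": "Smith",
            "dob": "1980-04-02", "zip": "02139"}
on_file = {"first_name": "John", "last_name": "Smith",
           "dob": "1980-04-02", "zip": "02139"}
print(match_score(incoming, on_file))  # prints 80: only the first name differs
```

Even this toy version shows the core tension: a nickname ("Jon" vs. "John") already costs 20 points, and without a universal identifier like the SSN, every remaining field carries more weight, and so does every typo in it.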
In the final blog of our series next week, we’ll dig into how these new “Interoperability 2.0” concepts apply to step 3 in the process: how data is ultimately shared from a healthcare data platform.