Appl Clin Inform 2020; 11(05): 725-732
DOI: 10.1055/s-0040-1718374
Research Article

Accuracy of an Electronic Health Record Patient Linkage Module Evaluated between Neighboring Academic Health Care Centers

Mindy K. Ross
1   Department of Pediatrics, University of California Los Angeles, Los Angeles, United States
,
Javier Sanz
2   Department of Medicine, Clinical and Translational Science Institute, University of California Los Angeles, Los Angeles, United States
,
Brian Tep
3   Department of Enterprise Information Services, Advanced Analytic Services, Cedars-Sinai Medical Center, Los Angeles, United States
,
Rob Follett
2   Department of Medicine, Clinical and Translational Science Institute, University of California Los Angeles, Los Angeles, United States
,
Spencer L. Soohoo
4   Department of Biomedical Sciences, Division of Informatics, Cedars-Sinai Medical Center, Los Angeles, United States
,
Douglas S. Bell
5   Department of Medicine, University of California Los Angeles, Los Angeles, United States
Funding This research was supported by the U.S. Department of Health and Human Services, National Institutes of Health, National Center for Advancing Translational Sciences (grant no. UL1TR001881).
 

Abstract

Background Patients often seek medical treatment among different health care organizations, which can lead to redundant tests and treatments. One electronic health record (EHR) platform, Epic Systems, uses a patient linkage tool called Care Everywhere (CE) to match patients across institutions. To the extent that such linkages accurately identify shared patients across organizations, they would hold potential for improving care.

Objective This study aimed to determine how accurately the CE tool, with default settings, identifies identical patients between two neighboring academic health care systems in Southern California, the University of California Los Angeles (UCLA) and Cedars-Sinai Medical Center.

Methods We studied CE patient linkage queries received at UCLA from Cedars-Sinai between November 1, 2016, and April 30, 2017. We constructed datasets comprising linkages (“successful” queries) as well as nonlinkages (“unsuccessful” queries) during this time period. To identify false-positive linkages, we screened the “successful” linkages for potential errors and then manually reviewed all that screened positive. To identify false-negative linkages, we applied our own patient matching algorithm to the “unsuccessful” queries and then manually reviewed a sample to identify missed patient linkages.

Results During the 6-month study period, Cedars-Sinai attempted to link 181,567 unique patient identities to records at UCLA. CE made 22,923 “successful” linkages and returned 158,644 “unsuccessful” queries among these patients. Manual review of the screened “successful” linkages between the two institutions found no false positives. Manual review of a sample of the “unsuccessful” queries (n = 623) demonstrated an extrapolated false-negative rate of 2.97% (95% confidence interval [CI]: 1.6–4.4%).

Conclusion We found that CE provided very reliable patient matching across institutions. The system missed a few linkages, but the false-negative rate was low and there were no false-positive matches over 6 months of use between two nearby institutions.



Background and Significance

The fragmentation of health care in the United States contributes to an environment in which some patients receive medical treatment among multiple health care organizations.[1] This can adversely affect clinical care and research efforts when tests and treatments are repeated or appear to be missing. Ultimately, patient safety and quality of care are reduced, while health care costs are increased.[2] [3]

To address this fragmentation of care in the United States, data interoperability and transportability of health care data were encouraged through the establishment of the Office of the National Coordinator (ONC) for Health Information Technology in 2004, but they have not been achieved on a national scale.[4] [5] One barrier is that patients are not issued a unique identification code, making it difficult to identify health care records belonging to the same patient across institutions.[6] Instead, data linkage methods have been developed to accomplish matching without a unique identifier by using patient identifiers, such as name, sex, and date of birth (DOB), in a deterministic (predetermined rules) or probabilistic (weighted identifiers) approach.[7]
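
To make the distinction concrete, the following minimal Python sketch contrasts a deterministic rule with a probabilistic score. The field names, weights, and threshold are illustrative assumptions, not those of any particular vendor or system.

```python
# Minimal illustration of deterministic vs. probabilistic record matching.
# Field names, weights, and the threshold are hypothetical.

def deterministic_match(a: dict, b: dict) -> bool:
    """Predetermined rule: all key identifiers must agree exactly."""
    return all(a[f] == b[f] for f in ("last_name", "first_name", "dob", "sex"))

def probabilistic_score(a: dict, b: dict) -> float:
    """Weighted identifiers: agreement on each field adds to a score."""
    weights = {"last_name": 5.0, "first_name": 3.0, "dob": 8.0, "sex": 1.0}
    return sum(w for f, w in weights.items() if a.get(f) and a.get(f) == b.get(f))

rec1 = {"last_name": "Smith", "first_name": "Robert", "dob": "1970-01-02", "sex": "M"}
rec2 = {"last_name": "Smith", "first_name": "Bob",    "dob": "1970-01-02", "sex": "M"}

print(deterministic_match(rec1, rec2))          # False: first names differ
print(probabilistic_score(rec1, rec2) >= 10.0)  # True: enough weighted evidence accrues
```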

Efforts to improve health information exchange (HIE) across institutions span many domains and have highlighted barriers to implementation, including data integrity, organizational competition, low adoption, and accuracy of algorithms.[8] [9] [10] Electronic health record (EHR) vendors are engaged in patient matching across institutions in real time. The goals include avoiding false-positive matches, which could lead to erroneous protected health information disclosure and patient care errors, while also minimizing missed matches.[11]

Epic Systems, the vendor used in this study, developed a tool called Care Everywhere (CE) that probabilistically matches patient identities across institutions (using characteristics such as name, telephone number, and DOB) and, when patients match, proceeds to exchange information from matching patients' medical records.[12] This tool is anecdotally effective, but its accuracy to identify matched patients across institutions has not been quantified, to our knowledge.

Advanced privacy-preserving methods that can link patients across institutions without releasing protected health information (PHI) have demonstrated feasibility. Grannis et al confirmed the ability to match patients using deidentified data between hospital-based patient registries and the Social Security Death Master File (SSDMF) with 92% sensitivity and 100% specificity.[13] In addition, a privacy-preserving algorithm developed by Kho et al matched patients between a clinical study dataset and several academic health care medical records with a sensitivity of 96% and specificity of 100%.[14] However, we are not aware of studies that have evaluated an algorithm directly linking patient identities between EHRs at two separate institutions.



Objectives

The objective of this study was to evaluate the accuracy of Epic's CE module in identifying matched patient records between two neighboring health care systems, UCLA Health and Cedars-Sinai Medical Center. The two health care institutions are located less than 5 miles apart and are known to commonly share patients. CE is now in routine use by many institutions, and we expect that a precise estimate of its accuracy and failure modes would be useful for the research, clinical, and quality improvement communities. Of note, other EHR vendors beyond Epic are also engaged in patient matching across sites, so the findings of this study may be of interest beyond Epic customers.



Methods

Query Analysis

We analyzed the accuracy of CE in matching patients between UCLA Health and Cedars-Sinai Health System by manually reviewing a sample of the matches and nonmatches that CE produced over a span of 6 months (November 1, 2016–April 30, 2017). Our approach to identifying cases for manual review, for both positive and negative matches, deliberately used strategies different from those used by CE, giving us the best chance of finding errors. These alternative approaches to matching constituted our best approximation of a gold standard, and we subsequently treated them as such in calculating characteristics of the CE algorithm such as its positive predictive value (PPV).

The CE module works as follows. On the evening before a scheduled patient encounter (indicating that the patient has a treatment relationship with the institution), the CE module automatically attempts to link the patient with identities at other institutions where CE is activated. We refer to each attempt to link a patient as a “query.” Each query returns a result labeled as a “successful” link (records linked by CE) or an “unsuccessful” link (patient linkage attempted, but no match identified) for the pair of health care systems. We retrieved data on the outcomes of CE queries received from Cedars-Sinai using UCLA's Epic Systems Clarity database (its relational data repository). Manual review was performed to determine the accuracy of the “successful” or “unsuccessful” labels. M.K.R. and J.S. performed the manual review, and cases of disagreement or uncertainty were adjudicated by D.S.B. This study was reviewed and approved by the University of California, Los Angeles, and Cedars-Sinai Medical Center Institutional Review Boards (IRBs).



The Care Everywhere Patient Matching Process

The CE data linkage algorithm compares demographic identifiers of potentially matching patients using a probabilistic approach that assigns weights to several different identifiers such as name, sex, and DOB ([Table 1]). Although Epic allows organizations to fine-tune their own matching algorithm by assigning different weights to identifiers, both UCLA and Cedars-Sinai use the default CE settings recommended by Epic Systems, and this default does not include social security numbers.

Table 1

Weights assigned to demographics for the Care Everywhere algorithm default recommended settings

Demographic match                              Weight
Exact name (with or without middle initial)    10
Last name sounds like                           5
Exact sex                                       1
Exact birth date                                8
Birth date one digit difference                 6
Birth date month and day or year                1
Exact phone                                     2
Exact e-mail address                            2
Exact address                                   2
Similar address                                 1
Exact city                                      0.5
Exact zip                                       0.5

The first step of the CE algorithm retrieves for each queried identity a batch of potential matches based on the last name, using a “soundex” transformation (i.e., the name is transformed based on how it sounds in English, which can account for minor spelling differences). For these patients, the algorithm then evaluates whether the other demographic identifiers match, and a total score is assigned to each possible match based upon the sum of the weights assigned.
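
As an illustration of this first retrieval step, the sketch below implements a generic American Soundex code and uses it to group candidate identities by last name. This is a standard Soundex, not Epic's documented implementation; the actual transformation and indexing used by CE may differ.

```python
# Sketch of Soundex-based candidate retrieval ("blocking") on the last name.
from collections import defaultdict

def soundex(name: str) -> str:
    """Generic American Soundex: first letter plus three digits encoding consonant sounds."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4", **dict.fromkeys("MN", "5"), "R": "6"}
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    digits, prev = [], codes.get(name[0], "")
    for c in name[1:]:
        code = codes.get(c, "")      # vowels, H, W, and Y carry no digit
        if code and code != prev:
            digits.append(code)
        if c not in "HW":            # H and W do not separate letters with equal codes
            prev = code
    return (name[0] + "".join(digits) + "000")[:4]

def block_by_last_name(identities):
    """Group candidate identities by the Soundex code of their last name."""
    blocks = defaultdict(list)
    for ident in identities:
        blocks[soundex(ident["last_name"])].append(ident)
    return blocks

# "Smith" and "Smyth" both encode to S530, so a query for either retrieves both.
print(soundex("Smith"), soundex("Smyth"))
```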

If one unique, “high threshold” link is achieved (i.e., the score is more than 20 points based on the summed weights), the query is labeled as “successful.” For successful queries, the demographics are not stored by the receiving institution; instead a CE identification number is assigned to the same patient at both institutions ([Fig. 1], upper part).
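
A minimal sketch of the scoring and threshold logic follows, using the default weights from Table 1. The boolean comparison features are assumed inputs (how each comparison is computed is internal to CE and not reproduced here), and in practice only one name feature and one birth-date feature would apply to a given candidate.

```python
# Sketch of the weighted scoring step, using the default weights from Table 1.
# Each candidate is represented as a dict of boolean comparison outcomes; how those
# comparisons are computed inside Care Everywhere is not shown here.
WEIGHTS = {
    "exact_name": 10, "last_name_sounds_like": 5, "exact_sex": 1,
    "exact_birth_date": 8, "birth_date_one_digit_off": 6,
    "birth_date_month_day_or_year": 1, "exact_phone": 2, "exact_email": 2,
    "exact_address": 2, "similar_address": 1, "exact_city": 0.5, "exact_zip": 0.5,
}
HIGH_THRESHOLD = 20  # a single candidate scoring above this yields a "successful" link

def score(features: dict) -> float:
    """Sum the weights of the comparison features that are true for a candidate."""
    return sum(WEIGHTS[name] for name, hit in features.items() if hit)

def classify_query(candidates: list) -> str:
    """A query succeeds only if exactly one candidate clears the high threshold."""
    high = [c for c in candidates if score(c) > HIGH_THRESHOLD]
    return "successful" if len(high) == 1 else "unsuccessful"

# Exact name (10) + exact birth date (8) + exact sex (1) + exact zip (0.5) = 19.5,
# which does not link on its own; an exact phone match (+2) would push it over 20.
example = {"exact_name": True, "exact_birth_date": True, "exact_sex": True, "exact_zip": True}
print(score(example), classify_query([example]))   # 19.5 unsuccessful
```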

Fig. 1 Data storage flow chart for Care Everywhere queries. “Nondetailed outcome report” references records that do not include additional details to explain the outcome. For “unsuccessful” queries, it refers to the lack of further classification to explain why the matching attempt was not successful. For “successful” queries, it refers to the absence of patient identifiers included in the original request that was sent across institutions. “Query record” refers to the metadata related to the query itself (time stamp, outcome, institution ID, etc.), not patient information, that is recorded in the database system.

If a query between institutions does not find a single, high threshold match for a patient, the linkage attempt is labeled as “unsuccessful” and a brief explanation is generated as to the reason (e.g., multiple low threshold matches and no high threshold match). The receiving institution stores a copy of the label along with the demographic identifiers. The submitting institution only stores the label and the date of the query ([Fig. 1], lower part). Our analysis studied patient linkage queries made by Cedars-Sinai to UCLA because our primary analysis focused on UCLA records.

CE does not recheck linkages for previously linked patients, but it does recheck those not previously linked, regardless of prior failures. Thus, if a patient is not linked, a request for that patient is sent again each time the patient presents for care, whereas once a patient is linked, the query is never sent again. However, for both our false-positive and false-negative analyses, we analyzed only unique (deduplicated) cases.



False-Positive Analysis

To search for false-positive linkages (i.e., patient linkages labeled as “successful” that were not a true match), we programmatically screened each linked patient pair in our cohort using fields other than the last name that we expected to have a reasonable probability of mismatching in case of an error: the last four digits of the social security number (obtained separately from Cedars-Sinai, with IRB permission, on the matched cohort only), first name, and DOB. Any linkage that did not match exactly on any one of these three criteria was considered a potential error, and the stored demographic identifiers were manually reviewed in more detail by J.S., M.K.R., and D.S.B. Additional demographics that we examined to determine whether a linkage was a true versus false match included: middle name, last name, sex, address, city, zip code, phone numbers (cell, work, and home), and e-mail address. Finally, if needed, medical records were manually reviewed to adjudicate the linkage.
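
The screening step amounts to a simple filter, sketched below: any linked pair that fails exact agreement on the last four SSN digits, first name, or DOB is flagged for manual review. The field names and normalization are illustrative assumptions, not the actual schema.

```python
# Sketch of the false-positive screen over "successful" linkages.
# Field names and normalization are illustrative.

def needs_manual_review(ucla: dict, cedars: dict) -> bool:
    """True if at least one of the three screening identifiers disagrees."""
    return not (
        ucla.get("ssn_last4") == cedars.get("ssn_last4")
        and ucla.get("first_name", "").strip().lower() == cedars.get("first_name", "").strip().lower()
        and ucla.get("dob") == cedars.get("dob")
    )

linked_pairs = [  # toy example: a transposed SSN digit sends this pair to review
    ({"ssn_last4": "1234", "first_name": "Ana", "dob": "1980-05-01"},
     {"ssn_last4": "1243", "first_name": "Ana", "dob": "1980-05-01"}),
]
flagged = [pair for pair in linked_pairs if needs_manual_review(*pair)]
print(len(flagged))  # 1 pair flagged for manual review
```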



False-Negative Analysis

To discover false-negative CE queries (i.e., queries labeled as “unsuccessful” that were a true match and should have been linked), we performed our own probabilistic matching analysis using three key demographic identifiers that differed from the initial matching used by CE; this can be considered an “alternative” algorithm to link patients. To avoid errors due to patient demographics changing over time, we saved a monthly snapshot of UCLA patient demographics on our secure servers per IRB protocol; for each incoming query, we searched for potential matches among the set of UCLA demographics saved the month the query was received.

The demographic identifiers that we used in our “alternative” algorithm (rather than the name soundex algorithm, used by CE) were as follows:

  • E-mail address

  • Phone number (work, home, or cell interchangeably)

  • Date of birth AND first name AND zip code (concatenated)

We first searched for a patient match based on one or more of these identifiers. Then, for each match, we also calculated a score based on the same weights that the CE algorithm uses. We only analyzed linkages with a score above 20, indicating strong matches that were potentially missed by the CE algorithm. We searched using three separate ways that patients could potentially match to maximize the opportunity for finding matches that CE missed. We did not include the social security number (SSN) in this algorithm because we would have needed the full SSN of all patients to look for missed matches, and we wished to avoid the privacy hazards that would entail.
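
As a rough sketch of how such an alternative search can be implemented, the code below indexes the monthly UCLA snapshot by the three identifier keys listed above and looks up each unsuccessful query against that index. The field names and normalization are illustrative assumptions; candidate pairs found this way would then be scored with the Table 1 weights and kept only above 20 points.

```python
# Sketch of the "alternative" candidate search: index the UCLA demographic snapshot
# by three keys (e-mail; any phone number; DOB + first name + zip), then look up each
# "unsuccessful" query by the same keys. Field names and normalization are illustrative.
from collections import defaultdict

def blocking_keys(p: dict):
    """Yield the alternative matching keys for one patient identity."""
    if p.get("email"):
        yield ("email", p["email"].strip().lower())
    for field in ("home_phone", "work_phone", "cell_phone"):
        if p.get(field):
            yield ("phone", "".join(ch for ch in p[field] if ch.isdigit()))
    if p.get("dob") and p.get("first_name") and p.get("zip"):
        yield ("dob_name_zip", (p["dob"], p["first_name"].strip().lower(), p["zip"][:5]))

def find_candidates(unsuccessful_queries, ucla_snapshot):
    """Map each query ID to the set of UCLA IDs that share at least one key with it."""
    index = defaultdict(set)
    for identity in ucla_snapshot:
        for key in blocking_keys(identity):
            index[key].add(identity["ucla_id"])
    candidates = defaultdict(set)
    for query in unsuccessful_queries:
        for key in blocking_keys(query):
            candidates[query["query_id"]] |= index[key]
    return candidates
# Each candidate pair would then be scored with the Table 1 weights; only pairs
# scoring above 20 were carried forward as potentially missed matches.
```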

Next, we divided the patients identified as likely matches into strata according to the demographics that defined their match. For instance, if the patient match was based on e-mail only, they were assigned to stratum A; if the match was based on e-mail and phone number, they were assigned to stratum B; and so on, for a total of seven strata. Using these strata allowed us to review a sample of records within each stratum and then weight the estimates from each stratum to generate an estimate for the whole sample. We then sampled 100 pairs in each stratum for manual review, or if the stratum contained fewer than 100 pairs, we reviewed all of them. We selected the sample size of 100 to be feasible for the authors to manually review and to produce reasonable stratum-specific standard deviations (SDs); for example, if the rate for the stratum was 0.1, the SD would be 0.03. When summing the weighted strata in our analysis, we summed the variances of each stratum and took the square root to find the SD of the weighted sum. We then multiplied this by 1.98 to find the width of the 95% confidence interval.
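
One way to carry out this weighting is sketched below: each stratum's sampled false-negative rate is projected to its population count, the per-stratum variances of the projected counts are summed, and 1.98 times the square root gives the interval half-width. The stratum sizes and review outcomes are taken from Table 2; the binomial variance formula used here is an illustrative assumption about that computation.

```python
# Sketch of the stratified projection: project each stratum's sampled FN rate to its
# population, sum the per-stratum variances of the projected counts, and take
# 1.98 x the square root as the interval half-width. The binomial variance formula
# here is an illustrative assumption.
import math

def project_false_negatives(strata):
    """Each stratum dict holds its population N, sample size n, and sampled false negatives fn."""
    total_fn, total_var = 0.0, 0.0
    for s in strata:
        rate = s["fn"] / s["n"]
        sd_rate = math.sqrt(rate * (1 - rate) / s["n"])  # SD of the stratum FN rate
        total_fn += s["N"] * rate                        # FN count projected to the stratum population
        total_var += (s["N"] * sd_rate) ** 2             # variance of that projected count
    return total_fn, 1.98 * math.sqrt(total_var)

# Stratum sizes and review outcomes from Table 2 (strata with a 100% sampled rate
# contribute zero variance under this formula):
strata = [
    {"N": 53, "n": 53, "fn": 53},    {"N": 372, "n": 100, "fn": 100},
    {"N": 70, "n": 70, "fn": 70},    {"N": 160, "n": 100, "fn": 27},
    {"N": 3611, "n": 100, "fn": 94}, {"N": 2647, "n": 100, "fn": 26},
    {"N": 700, "n": 100, "fn": 13},
]
projected_fn, half_width = project_false_negatives(strata)
print(round(projected_fn))  # about 4,712 projected false negatives
```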

Our manual review used the following criteria to determine patient matching: name match (first, middle, and last), address (street, city, and zip code), DOB, any interchangeable phone match (cell, home, or work), and e-mail address. If, after allowing for misspellings and variations, the name and DOB matched plus one other feature (address, phone number, or e-mail address), then the patients were considered a match and counted as a false-negative case.
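
Approximated with exact comparisons, the review rule looks like the sketch below; in practice the reviewers allowed for misspellings and name variations, so these comparisons stand in for human judgment, and the field names are illustrative.

```python
# Sketch of the manual-review decision rule: name and DOB must agree, plus at least
# one of address, phone, or e-mail. Exact comparisons stand in for the reviewers'
# allowance for misspellings and variations.

def counts_as_false_negative(a: dict, b: dict) -> bool:
    name_and_dob = (a.get("first_name") == b.get("first_name")
                    and a.get("last_name") == b.get("last_name")
                    and a.get("dob") == b.get("dob"))
    corroborating = (
        (a.get("address") and a.get("address") == b.get("address"))
        or any(a.get(f) and a.get(f) == b.get(f)
               for f in ("home_phone", "work_phone", "cell_phone"))
        or (a.get("email") and a.get("email") == b.get("email"))
    )
    return bool(name_and_dob and corroborating)
```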



Results

Patient Matching Results Produced by Care Everywhere

UCLA received 265,348 queries from Cedars-Sinai during the 6-month study period. Of these, 83,781 queries contained exactly the same demographic information as one or more previous queries, meaning these queries represented 181,567 distinct Cedars-Sinai patient identities. Within this cohort, the CE algorithm identified linkages for 22,923 patients between the health care systems (“successful” queries). This indicates that of the patients who had an encounter at Cedars-Sinai during the study period, 12.6% were linked to records at UCLA. The CE algorithm did not identify a linkage for the other 158,644 patient identities (87.4%) seen at Cedars-Sinai during the study period (“unsuccessful” queries).

False-Positive Matches

Of the 22,923 patient linkages labeled as “successful” during the study period by the CE algorithm, our false-positive screening algorithm (using the last four digits of the social security number, first name, and DOB) identified 256 linked pairs that failed to match exactly on one or more of these three identifiers. Of these, however, manual review revealed that 247 did in fact match after taking into consideration minor errors such as transpositions and misspellings of the demographic identifiers. Six of the remaining nine cases matched on enough other demographic identifiers, such as address, phone number, and e-mail address, to consider them the same individual.

Of the three remaining cases, manual review of the patients' medical records identified enough data in common to consider them a match. Only one case could potentially have been a false-positive match, owing to a completely different social security number and largely missing demographic information on one side; however, the patient identities had the same name (albeit a common one) and DOB, and enough similar medical history facts across institutions to consider the case a likely match. Corroborating our findings, UCLA Health has not received any provider reports of false-positive CE linkages in 3 years of running CE for every UCLA patient.


#

False-Negative Matches

After applying our “alternative” matching strategy to the patients comprising the 158,644 “unsuccessful” queries, we identified 27,827 patients whom CE had not matched but whom our “alternative” algorithm identified as having one or more potential linkages across systems. Of these, 7,613 had a single match with a “high threshold” score above 20. The patient pairs representing these “high threshold” matches were categorized into their respective strata based on the demographic data that contributed to the match. As shown in [Table 2], a random sample of 100 record pairs was selected for manual review from each of the five strata having more than 100 records, and all records were reviewed for the two strata having fewer than 100 records, for a total of 623 record pairs selected for manual review.

Table 2

Outcomes of manual review for each match-stratum in false negative analysis

DOB + name + zip   Phone   e-Mail      Patients (n)   Sample reviewed   FNs/TNs   FN rate (%)   SD (%)   FNs projected to population

Yes                Yes     Yes                   53                53      53/0           100                                      53
Yes                Yes     No                   372               100     100/0           100                                     372
Yes                No      Yes                   70                70      70/0           100                                      70
No                 Yes     Yes                  160               100     27/73            27      4.4                             43
Yes                No      No                 3,611               100      94/6            94      2.4                          3,394
No                 Yes     No                 2,647               100     26/74            26      4.4                            688
No                 No      Yes                  700               100     13/87            13      3.4                             91
No                 No      No               151,051                 0
Total                                        158,664               623   383/240                                                4,712

Abbreviations: DOB, date of birth; FN, false negative; SD, standard deviation; TN, true negative.


Note: Light-gray columns indicate the variable matches used to make up the stratum. Each stratum was exclusive and a sample of 100 was taken from the stratum for review, or the whole stratum was reviewed if it contained less than 100. Columns at right show the stratum-specific SD (a step toward generating the 95% confidence interval), and the weighted false negative number projected back to the population (a step in calculating the negative predictive value). The last row is shaded dark gray to indicate that it represents the remainder of patients who did not meet criteria for a stratum. No sample was reviewed for these patients because they had no matching patient; a random patient pairing would have essentially no chance of matching.


Within the strata overall, our manual review identified 383 of the “unsuccessful” CE patient queries as having a match across institutions (false negatives), whereas 240 of the queried patients did not have a match using the alternative algorithms that we employed (true negatives). Projecting the rate of false negatives in each stratum to the whole, the estimated number of false negatives in the full population would be 4,712. Assuming our matching strategy identified true matches among the 158,644 distinct patient identities not linked between UCLA and Cedars-Sinai by the CE algorithm, then the false negative rate is 4,712/158,644 = 2.97%. The 95% confidence interval (CI) for this estimate is ± 1.41%.

[Table 3] shows our weighted results summarized as a “2 × 2” table evaluating CE as a diagnostic test in this population. During our study time period, the overall PPV and specificity of CE to identify whether a unique Cedars-Sinai patient had records in the UCLA Health System were 100%, whereas the negative predictive value (NPV) of a nonlinkage was 97.0% with a 95% CI of 95.6 to 98.4%. The corresponding sensitivity estimate is 82.9% with a 95% CI of 76.7 to 90.3% (based on a 95% CI for the weighted false-negative count of 2,741–6,952).

Table 3

2 × 2 table showing the accuracy of Care Everywhere match results versus manual review for unique patients queried from Cedars-Sinai Health system to UCLA Health system (n = 181,567 total queries)

                                  Gold standard (manual review)
                                  (+)              (−)
Care Everywhere linkage   (+)     22,923 (TP)      0 (FP)             22,923 “successful” queries
                          (−)     4,712 (FN)       153,932 (TN)       158,644 “unsuccessful” queries

Abbreviations: FN, false negative; TN, true negative; UCLA, University of California, Los Angeles.
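
The accuracy measures reported above follow directly from these counts, as the short check below shows (the FN cell is the weighted projection of 4,712).

```python
# Accuracy measures derived from the Table 3 counts (FN is the projected 4,712).
TP, FP, FN, TN = 22_923, 0, 4_712, 153_932

ppv = TP / (TP + FP)                  # positive predictive value = 1.000
npv = TN / (TN + FN)                  # negative predictive value ~ 0.970
sensitivity = TP / (TP + FN)          # ~ 0.829
specificity = TN / (TN + FP)          # = 1.000
print(f"PPV={ppv:.3f}  NPV={npv:.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```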


CE missed matches due to a variety of errors ([Table 4]). Overall, the most common errors (44%) were due to last name changes, often among female patients who appeared to have changed their names due to marriage or divorce (i.e., the maiden name became the new middle name, or the new last name was combined with the old last name). First name changes, such as use of shortened versions (i.e., Bob vs. Robert), initials, abbreviations, and misspellings, were also very common. Errors related to address were most commonly abbreviations of street names or errors in the unit number. Birth date errors were minimal, consisting of partial differences in the month, day, or year. Phone number errors were most commonly due to missing data.

Table 4

Errors contained in the false-negative patient queries leading to missed matches

Total errors classified                                      n = 379 (100%)

First name
  Abbreviation or nickname                                   33 (8)
  Misspelling                                                31 (8)

Middle name
  Complete name versus absent                                115 (31)
  Complete name versus initial, same first letter            41 (10)
  Initial versus absent                                      31 (8)
  Complete names, different                                  10 (3)
  Complete name versus initial, different first letter       9 (2)
  Misspelling                                                2 (1)

Last name
  Concatenated with previous                                 103 (27)
  Different                                                  65 (17)
  Misspelling                                                22 (5)
  Shortened                                                  9 (2)
  Extra space                                                3 (<1)
  Hyphenated with previous                                   2 (<1)

Name combinations
  Different last and middle name, same last name             38 (10)
  Last name, middle name concatenated                        10 (2)
  First name, middle name transposed                         9 (2)
  First name, middle name concatenated                       4 (<1)
  First name, last name transposed                           3 (<1)

Street address difference
  Complete versus abbreviation                               113 (29)
  Unit error (missing, abbreviated, or misspelling)          75 (19)
  Misspelling                                                28 (7)
  Punctuation                                                18 (4)
  Street and unit concatenated                               7 (1)

Date of birth
  Partial difference (month, date, and year)                 6 (1)

Zip code
  Completely different                                       24 (6)
  Partial difference                                         6 (1)

City
  Different, complete                                        32 (8)
  Different, partial                                         7 (1)

Phone
  Work phone, both absent                                    141 (37)
  Work phone, present versus absent                          130 (34)
  Cell phone, present versus absent                          116 (30)
  Home phone, different                                      84 (22)
  Work phone, different                                      29 (7)
  Cell phone, different                                      23 (6)
  Cell phone, both absent                                    23 (6)
  Home phone, present versus absent                          17 (4)
  Home, cell, and work, different or missing                 15 (3)
  Home phone, both absent                                    2 (<1)
  Cell phone, area code different                            1 (<1)
  Home phone, area code different                            2 (<1)

E-mail
  Different                                                  6 (1)
  Similar                                                    6 (1)

Note: Some cases contained multiple errors.




Discussion

While the 2009 Health Information Technology for Economic and Clinical Health (HITECH) Act and the Electronic Health Records Incentive Program for meaningful use were central to EHR expansion across the United States, a remaining challenge has been that EHR systems do not easily connect with each other. This has resulted in serious consequences for patient care and safety (e.g., missing information, medication errors, and incomplete transitions of care) and escalated costs of care (e.g., duplicate laboratory testing and/or imaging and extra medical visits).[15] In 2012, the ONC issued a follow-up statement detailing the importance of implementing a strategy to advance HIE across institutions.[16] Accurate HIE also contributes to more accurate quality improvement measures mandated by the Merit-based Incentive Payment System (MIPS) included in the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) and the Physician Payments Sunshine Act (PPSA) within the Affordable Care Act.[17] A systematic review found that incomplete patient information is the leading barrier to information exchange in the United States. The reasons for incomplete information include poor matching of patients, concerns about security/privacy, institutional competition, and patients receiving care outside of the HIE catchment area.[18] The latest roadmap introduced by the ONC addresses these issues and highlights the importance of standardized data formats to exchange information across EHR systems in a secure manner, with a goal of “nationwide interoperability to enable a learning health system” by the year 2024.[19]

HIE measures such as Epic's CE have created progress toward wider interoperability and have demonstrated value, including faster service times, decreased costs, decreased laboratory testing and imaging, and lower admission rates from the emergency department.[20] [21] [22] Our study adds to this knowledge by demonstrating that the CE algorithm can link patients between institutions with an imperceptible false-positive rate and a very low, but nonnegligible, false-negative rate when implemented between two adjacent health care institutions. To the best of our knowledge, this has not previously been confirmed in practice.

By using an “alternative” approach to the CE algorithm, we were able to find approximately 3% more patient matches that CE did not recognize. However, this does not detract from the fact that CE was able to link 12.6% of patients between Cedars-Sinai and UCLA without any apparent false-positive errors, and these patients likely experienced the benefits of HIE, including avoidance of redundant tests and information about completed treatments. By erring on the side of permitting some false negatives, the CE algorithm appears appropriately tuned to avoid the risk of erroneously linking the wrong patients, which would cause a merge of records that could be difficult to undo. Had our evaluation found a significant false-positive rate, of even 0.5 to 1%, the sites could have considered adjusting the CE matching threshold to require more than the default of 20 points for a match.

Missed matches were most often due to name changes, misspellings, and data entry errors, which are common causes of data linkage errors.[23]

Our institution is part of a healthcare clinical-research consortium in Southern California. As part of these efforts, our next step is to use CE as a gold standard to test privacy preserving algorithms for patient linkage that do not require the exchange of PHI.



Limitations

A limitation of this study is that we evaluated the CE algorithm for matches between two particular institutions that had both implemented Epic's default scoring algorithm. Had these institutions elected to modify their algorithm, as the software allows, performance might have differed. Furthermore, if the institutions had modified their CE algorithm, we might also expect them to monitor their results and to further adapt their match rules based on this feedback, thus generally improving the algorithm from the default rather than degrading it. Another limitation is that we needed to invent our own “gold-standard” methods for manual review and evaluation of the CE algorithm. We are not aware of similar prior efforts that could help guide the selection of matching parameters. Different matching methods might have produced somewhat different results, but our methods for searching, both for false positives and false negatives, were thorough, and we believe that any differences would impact the results minimally. Finally, we evaluated institutions that are close geographically. Generalizing to institutions that are farther apart would generally lower the prevalence of a match and thus, if the sensitivity of the system were constant, would result in a higher (better-looking) NPV.

A drawback of CE and other HIE methods is that they require the exchange of PHI to determine matching patients between health systems. An alternative approach is to use “private record linkage” (PRL) methods for linking patients, in which PHI fields are broken into pieces and then one-way hashed in a way that still allows for the records to be compared for similarity across institutions but does not allow the PHI to be reconstructed.[13] [14] [24] [25] [26] [27] [28] [29] [30] [31] [32]



Conclusion

We determined that Epic's CE tool, as deployed in practice, had a negative predictive value of 97.0% and a positive predictive value and specificity at or near 100% for matching patients between Cedars-Sinai Medical Center and UCLA Medical Center. This means that matches provided by CE, at least in the current implementation, can be considered highly reliable.



Clinical Relevance Statement

Current health care and research efforts in the United States are fragmented because patients receive care across multiple institutions. Widespread electronic health information exchange across institutions is an important next step toward addressing this fragmentation. We demonstrated reliable matching of patient identities across institutions using the same electronic health record platform.



Multiple Choice Questions

  1. Health information exchange (HIE) has been demonstrated to decrease

    • Physician burnout

    • Laboratory testing

    • Visit diagnoses

    • Patient satisfaction

    Correct Answer: The correct answer is option b. A systematic review of health information exchange outcomes[33] detailed findings about resource use, perceptions, and factors associated with outcomes. Decreased laboratory testing was a consistent finding; the majority of the findings were in the emergency setting.

  2. Barrier(s) to a national health information exchange between healthcare institutions in the United States include:

    • Privacy concerns

    • Lack of data standards

    • Institutional competition

    • All of the above

    Correct Answer: The correct answer is option d. Perceived barriers obtained from focus groups, interviews, and research observations fall into three categories: completeness of information, organization/workflow, and technology/user needs.[18]


#
#

Conflict of Interest

Epic played no role in the planning, conduct, analysis, or interpretation of these results. D.S.B. reports grants from the National Center for Advancing Translational Sciences during the conduct of the study. B.T. reports grants from the NIH CTSI during the conduct of the study. S.S. reports grants from the National Institutes of Health during the conduct of the study.

Acknowledgments

Ana Esquivel from UCLA's Information Systems and Solutions group provided information on UCLA's implementation of CE; Carolyn Schuh, from Epic Systems, provided additional details on Epic's point-system matching algorithm. David Elashoff and Sitaram Vangala provided statistical consultation, and Marianne Zachariah provided project management support.

Protection of Human and Animal Subjects

No human subjects were involved in the project.


  • References

  • 1 Cebul RD, Rebitzer JB, Taylor LJ, Votruba ME. Organizational fragmentation and care quality in the U.S healthcare system. J Econ Perspect 2008; 22 (04) 93-113
  • 2 Frandsen BR, Joynt KE, Rebitzer JB, Jha AK. Care fragmentation, quality, and costs among chronically ill patients. Am J Manag Care 2015; 21 (05) 355-362
  • 3 Pellegrin K, Chan F, Pagoria N, Jolson-Oakes S, Uyeno R, Levin A. A Statewide Medication Management System: Health Information Exchange to Support Drug Therapy Optimization by Pharmacists across the Continuum of Care. Appl Clin Inform 2018; 9 (01) 1-10
  • 4 Haug PJ, Narus SP, Bledsoe J, Huff S. Promoting national and international standards to build interoperable clinical applications. AMIA Annu Symp Proc 2018; 2018: 555-563
  • 5 DeSalvo KB, Dinkler AN, Stevens L. The US Office of the National Coordinator for Health Information Technology: Progress and Promise for the Future at the 10-Year Mark. Ann Emerg Med 2015; 66 (05) 507-510
  • 6 Duggal R, Khatri SK, Shukla B. Improving patient matching: single patient view for Clinical Decision Support using Big Data analytics. Paper presented at: 2015 4th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO; Trends and Future Directions); 2015
  • 7 Harron K, Goldstein H, Dibben C. , eds. Methodological Developments in Data Linkage. West Sussex, United Kingdom: John Wiley & Sons; 2015
  • 8 Martin TJ, Ranney ML, Dorroh J, Asselin N, Sarkar IN. Health Information Exchange in Emergency Medical Services. Appl Clin Inform 2018; 9 (04) 884-891
  • 9 Kruse CS, Marquez G, Nelson D, Palomares O. The Use of Health Information Exchange to Augment Patient Handoff in Long-Term Care: A Systematic Review. Appl Clin Inform 2018; 9 (04) 752-771
  • 10 Cummins MR, Ranade-Kharkar P, Johansen C. et al. Simple Workflow Changes Enable Effective Patient Identity Matching in Poison Control. Appl Clin Inform 2018; 9 (03) 553-557
  • 11 Parker CJ, Adler-Milstein J. Errors related to health information exchange. In: Agarwal A. , ed. Safety of Health IT: Clinical Case Studies. Cham, Switzerland: Springer; 2016: 153-165
  • 12 Care Everywhere 101: Epic Systems Care Everywhere Training Companion 2015. Available at: https://galaxy.epic.com/?#Browse/page=8400!35!240!3729927 (access via login)
  • 13 Grannis SJ, Overhage JM, McDonald CJ. Analysis of identifier performance using a deterministic linkage algorithm. Proc AMIA Symp 2002; 305-309
  • 14 Kho AN, Cashy JP, Jackson KL. et al. Design and implementation of a privacy preserving electronic health record linkage tool in Chicago. J Am Med Inform Assoc 2015; 22 (05) 1072-1080
  • 15 Vest JR, Kaushal R, Silver MD, Hentel K, Kern LM. Health information exchange and the frequency of repeat medical imaging. Am J Manag Care 2014; 20 (11 Spec No. 17): eSP16-eSP24
  • 16 Williams C, Mostashari F, Mertz K, Hogin E, Atwal P. From the Office of the National Coordinator: the strategy for advancing the exchange of health information. Health Aff (Millwood) 2012; 31 (03) 527-536
  • 17 Siljander B. Assessing the impact of the medicare access and CHIP reauthorization act: the repeal of the SGR and beyond. Health Law. 2014; 27: 26
  • 18 Eden KB, Totten AM, Kassakian SZ. et al. Barriers and facilitators to exchanging health information: a systematic review. Int J Med Inform 2016; 88: 44-51
  • 19 Holmgren AJ, Patel V, Adler-Milstein J. Progress in interoperability: measuring us hospitals' engagement in sharing patient data. Health Aff (Millwood) 2017; 36 (10) 1820-1827
  • 20 Everson J, Kocher KE, Adler-Milstein J. Health information exchange associated with improved emergency department care through faster accessing of patient information from outside organizations. J Am Med Inform Assoc 2017; 24 (e1): e103-e110
  • 21 Winden TJ, Boland LL, Frey NG, Satterlee PA, Hokanson JS. Care everywhere, a point-to-point HIE tool: utilization and impact on patient care in the ED. Appl Clin Inform 2014; 5 (02) 388-401
  • 22 Ruley M, Walker V, Studeny J, Coustasse A. The nationwide health information network: the case of the expansion of health information exchanges in the United States. Health Care Manag (Frederick) 2018; 37 (04) 333-338
  • 23 Just BH, Marc D, Munns M, Sandefer R. Why patient matching is a challenge: research on master patient index (MPI) data discrepancies in key identifying fields. Perspect Health Inf Manag 2016; 13 (Spring): 1e
  • 24 Churches T, Christen P. Some methods for blindfolded record linkage. BMC Med Inform Decis Mak 2004; 4: 9
  • 25 Dusserre L, Quantin C, Bouzelat H. A one way public key cryptosystem for the linkage of nominal files in epidemiological studies. Medinfo 1995; 8 (Pt 1): 644-647
  • 26 Adam N, White T, Shafiq B, Vaidya J, He X. Privacy preserving integration of health care data. AMIA Annu Symp Proc 2007; 2007: 1-5
  • 27 Neubauer T, Heurix J. A methodology for the pseudonymization of medical data. Int J Med Inform 2011; 80 (03) 190-204
  • 28 Kuzu M, Kantarcioglu M, Durham EA, Toth C, Malin B. A practical approach to achieve private medical record linkage in light of public resources. J Am Med Inform Assoc 2013; 20 (02) 285-292
  • 29 Vatsalan D, Christen P. Privacy-preserving matching of similar patients. J Biomed Inform 2016; 59: 285-298
  • 30 Toth C, Durham E, Kantarcioglu M, Xue Y, Malin B. SOEMPI: a secure open enterprise master patient index software toolkit for private record linkage. AMIA Annu Symp Proc 2014; 2014: 1105-1114
  • 31 Randall SM, Ferrante AM, Boyd JH, Bauer JK, Semmens JB. Privacy-preserving record linkage on large real world datasets. J Biomed Inform 2014; 50: 205-212
  • 32 Brown AP, Borgs C, Randall SM, Schnell R. Evaluating privacy-preserving record linkage using cryptographic long-term keys and multibit trees on large medical datasets. BMC Med Inform Decis Mak 2017; 17 (01) 83
  • 33 Hersh W, Totten A, Eden K. et al. Health information exchange. Evidence report/technology assessment 2015; 220: 1-465

Address for correspondence

Douglas S. Bell, MD, PhD
Department of Medicine, University of California Los Angeles
1100 Glendon Avenue, Ste. 1830, Los Angeles, CA 90024
United States   

Publication History

Received: 03 June 2020

Accepted: 27 August 2020

Article published online:
04 November 2020

© 2020. Thieme. All rights reserved.

Georg Thieme Verlag KG
Stuttgart · New York

