Judging Criteria for an Interoperability Prototype and Demonstration
HeatherSutton
In Round 1, teams will write a proposal describing how they will design a solution that aggregates and harmonizes health data to address a country's primary care needs.
In Round 2, teams will actually build an interoperability prototype and test its performance with existing point-of-care systems and against predetermined criteria.
- How long should we give teams to build their interoperability prototype?
- What are some illustrative measurement targets we could use as judging criteria for prototypes?
- What are some benchmarks we could use to evaluate attributes such as
- Performance consistency
- Speed of transactions
- Cost
- Flexibility
- Extensibility
- Workflow fit
Comments
As for judging, there should be scores for clinical utility, UX on both the user and clinician sides, and security and privacy. This is a minimal set.
1. Data interoperability: Use of international data standards such as SNOMED CT, ICD, CPT, LOINC, RxNorm, HPO, etc., plus UMLS for connecting the common standards, and standard ways to integrate local custom codes (concepts) with the international standard codes (see the concept-mapping sketch after this list). If the application supports multiple languages, data interoperability also implies language independence: the data model is defined at the concept level, so it can be expressed in, and interoperated across, any language.
2. Content interoperability: Use of international standards for representing health information such as medical records, e.g., HL7 FHIR or other common public standards (a minimal FHIR example follows below). If a proprietary content format is used, it should follow a standard exchange approach so that the content can be easily understood and consumed by receiving applications.
3. Knowledge interoperability: If knowledge representation is used for AI or RPA applications, it should also follow international standards closely in order to achieve interoperability in knowledge computation.
4. Exchange interoperability: Use of data exchange standards such as REST APIs, event-driven messaging, or data streaming (a REST sketch follows below).
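To make point 1 concrete, here is a minimal sketch in Python of a concept-level mapping layer that resolves a site-specific lab code to LOINC before exchange. The local codes, the mapping table, and the function name are illustrative assumptions, not part of any real terminology service:

```python
# Minimal sketch of a concept-mapping layer: local codes are resolved to a
# standard vocabulary (LOINC here) before data leaves the source system.
# All local codes below are made up; the LOINC codes are illustrative.

LOCAL_TO_LOINC = {
    # local lab code -> (LOINC code, display name)
    "LAB-GLU-01": ("2345-7", "Glucose [Mass/volume] in Serum or Plasma"),
    "LAB-HB-02": ("718-7", "Hemoglobin [Mass/volume] in Blood"),
}

def to_standard_concept(local_code: str) -> dict:
    """Resolve a site-specific code to a language-independent concept."""
    try:
        loinc_code, display = LOCAL_TO_LOINC[local_code]
    except KeyError:
        raise ValueError(f"No standard mapping for local code {local_code!r}")
    return {
        "system": "http://loinc.org",
        "code": loinc_code,
        "display": display,        # display text can be localized per language
        "local_code": local_code,  # keep provenance of the original code
    }

print(to_standard_concept("LAB-GLU-01"))
```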
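For point 2, a sketch of a minimal FHIR R4 Observation carrying the LOINC-coded result above, built as a plain Python dict. The patient reference and values are hypothetical, and a real prototype would validate resources against the FHIR specification rather than hand-building them:

```python
import json

# Sketch of a minimal FHIR R4 Observation carrying a LOINC-coded lab result.
# The patient reference and the measured value are hypothetical.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "2345-7",
            "display": "Glucose [Mass/volume] in Serum or Plasma",
        }]
    },
    "subject": {"reference": "Patient/example-123"},
    "valueQuantity": {
        "value": 5.4,
        "unit": "mmol/L",
        "system": "http://unitsofmeasure.org",
        "code": "mmol/L",
    },
}

print(json.dumps(observation, indent=2))
```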
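And for point 4, a sketch of the REST exchange pattern: POSTing that Observation to a FHIR server. The server URL is a placeholder, a real deployment would add authentication (e.g., OAuth2), and `requests` is a common third-party HTTP client:

```python
import requests

# Sketch of pushing a FHIR resource over a RESTful exchange.
# FHIR_BASE is a placeholder; production systems would add auth and retries.
FHIR_BASE = "https://fhir.example.org/r4"

def post_observation(observation: dict) -> str:
    resp = requests.post(
        f"{FHIR_BASE}/Observation",
        json=observation,
        headers={"Content-Type": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    # FHIR servers return the server-assigned id of the created resource
    return resp.json().get("id", "")

# Usage: obs_id = post_observation(observation)  # the Observation dict above
```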
Is the intervention working as it was intended?
Monitoring activities can measure changes in performance over time, increasingly in real time, allowing for course-corrections to be made to improve implementation fidelity. Plans for monitoring of digital health interventions should focus on generating data to answer the following questions, where “system” is defined broadly as the combination of technology software, hardware and user workflows:
- Does the system meet the defined technical specifications?
- Is the system stable and error-free?
- Does the system perform its intended tasks consistently and dependably?
- Are there variations in implementation across and/or within sites?
- Are benchmarks for deployment being met as expected?
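As one illustration of how questions like these could be turned into measurable benchmarks, here is a sketch that computes an error rate and latency percentiles from a transaction log. The log format and the pass/fail thresholds are hypothetical, not prize requirements:

```python
from statistics import quantiles

# Sketch of turning the monitoring questions into numbers. Each transaction
# is a (latency_ms, succeeded) pair; the 1% error and 500 ms p95 thresholds
# are hypothetical placeholders.
def evaluate(transactions: list[tuple[float, bool]]) -> dict:
    latencies = sorted(t[0] for t in transactions)
    error_rate = sum(1 for _, ok in transactions if not ok) / len(transactions)
    cuts = quantiles(latencies, n=100)  # 99 percentile cut points
    p50, p95 = cuts[49], cuts[94]
    return {
        "error_rate": error_rate,   # "is the system stable and error-free?"
        "p50_latency_ms": p50,      # typical transaction speed
        "p95_latency_ms": p95,      # performance consistency under load
        "meets_benchmark": error_rate < 0.01 and p95 < 500,
    }

# Example with four synthetic transactions: three fast successes, one slow failure.
print(evaluate([(120.0, True), (95.0, True), (140.0, True), (900.0, False)]))
```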
Effective monitoring entails collection of data at multiple time points throughout a digital health intervention's life-cycle and ideally is used to inform decisions on how to optimize the content and implementation of the system. As an iterative process, monitoring is intended to lead to adjustments in intervention activities in order to maintain or improve the quality and consistency of the deployment.

Hi @preciouslunga, @jblaya, @synhodo, @poppyfarrow, @Vishalgandhi, @RKadam, @reubenwenisch, @vipat, @kakkattil, @joshnesbit, @dollendorf, @kkatara, @alabriqu - Would love to hear your thoughts on judging and evaluating the interoperability prototype.
I think 2-4 weeks is an ideal period of time to build the prototypes. However, this will be influenced by the project timeline: a longer timeline would allow an extended duration for prototype design, as there would be no rush to complete the design-thinking process.
We have recently done some work with partners on using interoperability solutions to strengthen epidemiological surveillance. Based on that, here are some thoughts--
1. How long should we give teams to build their interoperability prototype? --- Depending on how many systems/devices are in scope, this could range from 1-3 months.
2. What are some illustrative measurement targets we could use as judging criteria? --- In addition to what's been mentioned in the comments above, the number of records successfully handled, error/crash rates, and usability could be a few.
Also, you could provide feedback on the overall prize timeline here