Philips Team, Digital Pathology Solutions
Best, The Netherlands
- Lead: Liselotte Kornmann, PhD
- Sr Regulatory Affairs Specialist, Regulatory and Clinical Affairs – Digital Pathology
- Prarthana Shrestha, PhD
- Mischa Nelis, PhD
- Esther Abels, PhD
We believe that eeDAP can be very useful in a quantitative study to evaluate image quality (for more information, see Shrestha P et al., "A quantitative approach to evaluate image quality of whole slide imaging scanners," J Pathol Inform 2016;7:56). Before starting this study, we would like to run a pilot study to evaluate the use of this tool. Would that be possible?
In addition, we have discussed possible legal and IP aspects in the past. To make this collaboration successful, we would like to have an agreement in place. To keep it feasible, we can start with an agreement covering the studies only. Thereafter, we can build on it and expand it to the WSI WG, as is common for working groups like this (e.g., ICC or DICOM).
I’m glad to hear that you want to test drive eeDAP. I think that it would be great to follow up your study of subjective pathologist ratings with objective pathologist ratings. Are you thinking about a study using the same HercepTest slides? That would be nice and could include several clinically relevant tasks for the pathologists. The results could be correlated to your metric values similar to how they were correlated with the subjective ratings.
If you don’t mind, let me take this opportunity to comment on “subjective” ratings, a conversation that I know will come up in our future discussions. To me, “subjective” implies opinion. Opinion is neither right nor wrong, and there is no true value for opinions.
I believe that you intend to do eeDAP studies that will generate objective ratings. The HER2 score is a biomarker intended to predict survival or the effectiveness of treatment options. We can objectively determine how well that is accomplished if we collect survival and treatment information. I also believe there is a true amount of staining, or completeness of staining, that underlies pathologist ratings of these quantities. We might even be able to characterize these features with units, which definitely implies objectivity (whether or not we know the truth).
In Prarthana’s paper, “The participants were asked to rate the images as a level of comfort for providing a HercepTest score in the scale of one (very uncomfortable) to five (very comfortable).” There is no true level of comfort of a pathologist, known or unknown. It is the pathologist’s opinion, and I don’t believe it can be judged more or less correct. Of course, the technical measurements of image quality are objective (and usually reproducible, with noise). What we all want to know is how well they correlate with performance in the hands of the pathologist.
Finally, I would assume that pathologist opinions on image quality are important from a customer standpoint, but they don’t always correlate with task-based performance evaluations.
Any agreements will go through the FDA’s counsel; I cannot approve agreements myself. Still, as I have said before, I am OK with an agreement of some kind. We need to start this quickly: come to an informal, plain-language agreement between you and me, and then forward the result to FDA counsel. Generally, my position is not to limit dissemination of our discussions or any of my contributions. I would like to disseminate methods, software, and phantoms related to the evaluation of WSI systems, but I can imagine carving out elements that you want to protect. I can certainly imagine protecting data until a publication is produced.
- Will you allow others in the eeDAP studies group to sign on to the agreement?
Since this group is a big part of my current plans, I need this; I can’t justify working with only one group. I want to help and learn from each group as we create data collection and analysis protocols in the group setting, so that we can summarize consensus approaches and the result can serve as a template for others to follow. The agreement doesn’t need to expand to include the entire WSI WG.