Year 1. High-throughput truthing of microscope slides to validate artificial intelligence algorithms analyzing digital scans of pathology slides: leveraging data collected in international “grand challenges”.
- Project was funded March 2018
- Link to Full Proposal
- Link to updates
- Link to collaborators
Below you will find:
- Project Overview
- FY18 Q3 Project Report
- Plain Language Summary
This is a new project led by the FDA. It kicked off with an FDA internal-funding proposal. The idea is to marry high-throughput reader studies (like the one conducted on the 14-head microscope at MSKCC) with algorithms developed in grand challenges to produce regulatory-grade evaluations of those algorithms. Collaborators include MSKCC, CSHL, and challenge organizers Jeroen van der Laak and Mitko Veta.
This project is generally open to new participants.
We will use the eeDAPstudies NCIPhub group to coordinate communications. If you are a member, you will receive communications about this project in addition to communications about the eeDAP MDDT. If you are not a member, sign up or check for updates here and on the blog. Updates will also be provided to the WSI working group on a less frequent basis.
FY18 Q3 Project Report
The FDA Critical Program Office asked me to fill out a project report (milestones and metrics). It seems worthwhile to share it with the project collaborators, and why not the world. Here it is: https://nciphub.org/groups/eedapstudies/wiki/HighThroughputTruthing/File:CPscorecardFY18-Q3public.pdf
Generally, things are moving, and I would be very happy to make similar progress in the next quarter. Some necessary logistical work has started and needs to keep moving forward (MTAs, CRADAs, etc.), and there is also some interesting and fun science coming our way (studies to design and execute). Please refer to the wiki page and the FY2018-Q3 report. I am very appreciative of the support received so far and hope y’all will continue to help moving forward. I’d like to convene a Tcon in early August to review this update with everyone, answer questions, and get feedback. So please review, make some notes, and feel free to email me with questions or feedback in the meantime.
Plain Language Summary
The microscope is going digital: glass slides are being digitized by devices called whole slide imaging (WSI) devices. The first WSI device for primary diagnosis was cleared this past spring (4/12/17), and more are coming. Among the potential benefits of this technology is that it enables artificial intelligence (AI): computer algorithms. AI promises to reduce the pathologist’s burden of searching for and enumerating certain cells or cellular features on the slides as the pathologist evaluates a case and produces his or her report; let the computer do it. The regulatory question is then, “How well can the computer algorithms do the tasks at hand?”

The most practical ground truth for evaluating the performance of an algorithm is a pathologist’s assessment of the WSI images. The problem with this kind of truth is that clinicians make mistakes and don’t always agree. Furthermore, there is a loss of information in the scanning process: the scanners have limited spatial and color resolution and currently produce a 2D slice of a 3D specimen.

In this work, we plan to investigate a high-throughput algorithm truthing study and aid computer algorithm developers by producing a public resource for use in a regulatory submission. We have developed a hardware and software evaluation environment for digital and analog pathology (eeDAP). eeDAP allows us to automatically present pre-specified regions of interest, or individual cells and cellular features, on a microscope for pathologist evaluation. This allows us to compare location-specific computer algorithm results to microscope-based pathologist evaluations. Last week, we installed eeDAP on a multi-head microscope and completed a data-collection session, collecting evaluations from 12 pathologists simultaneously in a single visit to MSKCC. That is high-throughput truthing.
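To make the comparison above concrete, here is a minimal sketch of scoring location-specific algorithm calls against a pathologist majority vote on pre-specified regions of interest. All data, labels, and function names below are illustrative assumptions, not part of eeDAP or any actual study protocol.

```python
# Hypothetical sketch: score an algorithm's cell-level calls against a
# majority vote of pathologist reads on the same ROIs.
# Labels, data, and function names are illustrative only.
from collections import Counter

def majority_label(labels):
    """Return the most common pathologist label for one ROI."""
    return Counter(labels).most_common(1)[0][0]

def percent_agreement(algorithm_calls, pathologist_panels):
    """Fraction of ROIs where the algorithm matches the majority vote."""
    matches = sum(
        alg == majority_label(panel)
        for alg, panel in zip(algorithm_calls, pathologist_panels)
    )
    return matches / len(algorithm_calls)

# Three ROIs, each read by three pathologists (e.g., "mitosis" vs "not").
panels = [
    ["mitosis", "mitosis", "not"],
    ["not", "not", "not"],
    ["mitosis", "not", "mitosis"],
]
algorithm = ["mitosis", "not", "not"]

print(percent_agreement(algorithm, panels))  # agrees on 2 of 3 ROIs
```

A real analysis would of course account for reader variability and uncertainty in the consensus rather than treating the majority vote as fixed truth; this sketch only shows the shape of the location-specific comparison.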
In recent years, “grand challenges” have been organized that offer algorithm developers an opportunity to compare computer algorithms on a common set of images in a controlled public setting. We plan to leverage the materials, results, and expertise produced by challenges that are based on WSI images. Two of our collaborators are the challenge organizers of http://tupac.tue-image.nl/ and https://camelyon16.grand-challenge.org/organizers/. Using the glass slides from the challenges, we plan to design, execute, and analyze studies with pathologists that will yield regulatory-grade performance results and a template for the evidence module of an FDA submission. We will do this in the public domain so that the community will benefit along with the challenge participants. This work is a natural evolution of the PI’s efforts to nurture a community for discussing topics related to technical and pathologist performance using WSI images (https://nciphub.org/groups/wsi_working_group) and the Medical Device Development Tool qualification of the enabling technology eeDAP (https://nciphub.org/groups/eedapstudies). The work will speed access to safe and effective state-of-the-art computer algorithms to help pathologists provide patients with better information.