For hollow organs such as the oesophagus, colon and stomach, endoscopy is used to detect, monitor and treat diseases including cancer. However, endoscopy is not without its challenges, and these can cause the subtle changes in appearance associated with pre-cancerous lesions and early-stage cancers to be missed.
In the colon, polyps are early precursors to cancer, and removing them during a colonoscopy reduces the risk of cancer developing. However, some polyps are very small or flat and are difficult to spot in the folds of the colon lining. It is also challenging to see the boundary between the polyp and the surrounding normal tissue ("segmentation"), which is important if the polyp is to be removed completely. Several groups have developed machine learning methods to assist with polyp detection and segmentation, but most do not generalise well to unseen datasets from different hospitals, patient populations or imaging systems.
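To give a flavour of what "segmentation" means in practice: a model outputs a pixel-level mask marking where it thinks the polyp is, and this is typically compared against a clinician-drawn reference mask using an overlap score such as the Dice coefficient. The sketch below is purely illustrative and is not taken from the paper; the toy masks and the function name are assumptions made for the example.

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1 = polyp pixel)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    total = pred.sum() + true.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: a prediction that covers most, but not all, of the polyp.
true_mask = np.zeros((8, 8), dtype=np.uint8)
true_mask[2:6, 2:6] = 1   # reference ("ground-truth") polyp region
pred_mask = np.zeros((8, 8), dtype=np.uint8)
pred_mask[3:6, 2:6] = 1   # prediction misses the top row of the polyp
print(f"Dice = {dice_score(pred_mask, true_mask):.2f}")  # prints ~0.86
```

A Dice score of 1.0 means the predicted and reference outlines match exactly; lower scores indicate that part of the polyp boundary has been missed or over-segmented.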
To address this shortcoming, Dr Sharib Ali (Lecturer at the University of Leeds and visiting scholar at the University of Oxford) led a computer vision data challenge. Together with a team of gastroenterologists across Europe and Africa, the researchers assembled a set of 600 colonoscopy videos from multiple centres and diverse patient populations, collected on six different colonoscopy systems. They invited teams from around the world to develop generalisable algorithms that could be applied to the full range of data collected.
In the study, published in Scientific Reports, the best-performing detection and segmentation algorithms were objectively compared. These algorithms were trained to be highly accurate, but this came at the expense of the real-time performance required for use in the clinic. Future research needs to focus on developing strategies that can be applied in real-world clinical settings.
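To illustrate what "real-time performance" demands (this framing is not from the paper): endoscopy video is commonly displayed at around 25 frames per second, leaving a budget of roughly 40 ms per frame for the model to run. The sketch below times a placeholder model against such a budget; the frame rate, the budget and the run_model function are illustrative assumptions, not the challenge's actual evaluation protocol.

```python
import time
import numpy as np

FRAME_BUDGET_MS = 40.0  # assumed budget for ~25 frames per second

def run_model(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a real detection/segmentation model's forward pass."""
    # Stand-in workload: a real system would run a neural network here.
    return (frame.mean(axis=2) > 128).astype(np.uint8)

# Time the average per-frame latency over a short synthetic "video".
frames = [np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8) for _ in range(50)]
start = time.perf_counter()
for frame in frames:
    _ = run_model(frame)
mean_ms = (time.perf_counter() - start) * 1000 / len(frames)

verdict = "meets" if mean_ms <= FRAME_BUDGET_MS else "misses"
print(f"Mean per-frame latency: {mean_ms:.1f} ms ({verdict} the {FRAME_BUDGET_MS:.0f} ms budget)")
```

A model that misses this kind of budget may still be accurate, but it cannot keep up with the live video feed during a procedure, which is the trade-off the study highlights.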
This work was supported by the University of Leeds and NIHR Oxford Biomedical Research Centre.