Currently, the prognosis for oesophageal cancer patients is poor: only 15% survive for 5 years after diagnosis. Part of the reason for the high mortality is that this cancer is frequently detected late, when treatments are less effective. By contrast, cancers detected at an earlier stage have a much higher chance of cure. Furthermore, using methods such as radiofrequency ablation, it is possible to prevent pre-cancerous regions from progressing to cancer if they are detected in time.
Unfortunately, during conventional endoscopy, the more easily treated pre-cancerous conditions and early-stage cancers are harder to observe and often missed, especially by less experienced endoscopists. In an effort to improve detection by endoscopy, a team of OxCODE researchers from the Translational Gastroenterology Unit (Barbara Braden, Adam Bailey and James East), the Institute of Biomedical Engineering (Sharib Ali and Jens Rittscher) and the Ludwig Institute for Cancer Research (Felix Zhou and Xin Lu) has developed a deep-learning framework, posted recently on arXiv, for quality assessment of endoscopy videos. The framework identifies multiple artefacts in the video, such as bubbles, overexposure and blurring, and restores mildly corrupted frames. Future work aims to employ real-time, computer algorithm-aided analysis of endoscopic images and videos, enabling automatic identification of neoplastic changes during endoscopy itself.
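To give a flavour of what frame-level quality assessment involves, the toy sketch below flags two of the artefact types mentioned above (overexposure and blur) using simple hand-crafted heuristics. The thresholds, function name and method are illustrative assumptions only; the team's actual framework uses deep learning, not these rules.

```python
import numpy as np

def assess_frame(frame, bright_thresh=0.9, blur_thresh=0.001):
    """Flag simple quality artefacts in a greyscale frame with values in [0, 1].

    Toy heuristic stand-in for a learned quality-assessment model:
    thresholds and criteria here are illustrative assumptions.
    """
    artefacts = []
    # Overexposure: a large fraction of pixels near saturation.
    if np.mean(frame > bright_thresh) > 0.5:
        artefacts.append("overexposure")
    # Blur: little high-frequency energy, estimated with a
    # discrete Laplacian (wrap-around boundaries via np.roll).
    lap = (-4 * frame
           + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
    if np.var(lap) < blur_thresh:
        artefacts.append("blur")
    return artefacts

# A flat, very bright frame is both overexposed and featureless (blurry);
# a noisy frame has plenty of high-frequency detail and passes both checks.
print(assess_frame(np.full((64, 64), 0.95)))
print(assess_frame(np.random.default_rng(0).random((64, 64))))
```

In a real pipeline, frames flagged this way could be discarded or, if only mildly corrupted, passed to a restoration step, mirroring the detect-then-restore structure described above.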
For more information on this project, please visit the CRUK Oxford Centre page.
This work is supported by: