Histologic grading from images has become widely accepted as a powerful indicator of prognosis in breast cancer. Automated grading can assist doctors in diagnosing the condition, but algorithms still lag behind human experts in this task, because human experts excel at identifying parts, detecting characteristics, and relating concepts and semantics. Algorithms can be improved by making them distinguish and characterize the most relevant types of objects in the image and then characterize the image from those objects. We propose a three-stage automated approach named OBI (Object-based Identification) with the following steps: 1. object-based identification, which identifies the type of object in each region and characterizes it; 2. learning about the image, which characterizes the distribution of the characteristics of those object types across the image; 3. determination of the degree of malignancy, which assigns a degree of malignancy using a classifier over the object-type characteristics (the statistical distribution of the characteristics of structures) in the image. Our proof-of-concept prototype uses the publicly available MITOS-ATYPIA dataset [19] to compare its accuracy with alternatives: a human expert (medical doctor) achieved 84%, classic machine learning 74%, convolutional neural networks (CNNs) 78%, and our approach (OBI) 86%. As future work, we expect to generalize our results to other datasets and problems, further explore the mimicking of human concept knowledge, merge the object-based approach with CNN techniques, and adapt it to other medical imaging contexts.
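The three stages can be read as an object-level feature extraction followed by image-level aggregation and classification. The sketch below is a minimal illustration of that structure under stated assumptions: the object types, the per-object features, the synthetic data, and the random-forest classifier are placeholders chosen for the example (identify_objects is a stub standing in for real segmentation and region classification), not the authors' implementation.

```python
# Illustrative sketch of a three-stage object-based grading pipeline.
# All names (identify_objects, OBJECT_TYPES) and the classifier choice are
# assumptions for this example; data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

OBJECT_TYPES = ["nucleus", "mitotic_figure", "stroma", "background"]  # assumed types
N_FEATURES = 4                              # e.g. area, perimeter, intensity, texture
_rng = np.random.default_rng(0)

def identify_objects(image):
    """Stage 1 (stub): identify the type of each region and characterize it.
    A real system would segment the image and classify each region; here we
    simulate a list of (object_type, feature_vector) pairs."""
    n_objects = int(_rng.integers(20, 60))
    return [(OBJECT_TYPES[int(_rng.integers(len(OBJECT_TYPES)))],
             _rng.normal(size=N_FEATURES))
            for _ in range(n_objects)]

def image_descriptor(objects):
    """Stage 2: summarize the distribution of object-type characteristics,
    e.g. per-type count plus mean and std of each per-object feature."""
    parts = []
    for t in OBJECT_TYPES:
        feats = np.array([f for (ot, f) in objects if ot == t])
        if feats.size == 0:
            parts.append(np.zeros(1 + 2 * N_FEATURES))
        else:
            parts.append(np.concatenate([[len(feats)],
                                         feats.mean(axis=0),
                                         feats.std(axis=0)]))
    return np.concatenate(parts)

# Stage 3: classify the degree of malignancy from image-level descriptors.
images = [None] * 100                                # placeholder for real images
X = np.array([image_descriptor(identify_objects(img)) for img in images])
y = _rng.integers(1, 4, size=len(images))            # synthetic grades 1-3
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```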