3rd Mini Workshop of Ideas!

Virtual Meeting - October 27th, 2020

14:00 David Oliveira
Title: MultimodalDermaCAD - Classification of multimodal dermatological data
Skin cancer is a dangerous disease of the skin. When diagnosed at an early stage, it is fairly easy to treat, with high chances of survival. However, if left unchecked, treatment becomes complicated and expensive, with a higher risk of death for the patient. Specific cases vary by the type of cell from which the cancer originates: Basal Cell Carcinoma (BCC) is the most common cancerous lesion, with over nine thousand cases in Portugal, while Melanoma is the most dangerous, with one thousand cases. Currently, there is an inadequate number of specialists, resulting in long wait times for a consultation. With the rising incidence of skin cancer, partly due to an aging population, the shortage of specialists and the heavy reliance on their experience make it difficult to identify skin cancer in a timely manner. Screening has been tried in an attempt to identify skin cancers before symptoms appear; however, this method merely provides the possibility of finding early signs of cancer, at the cost of a higher workload for the medical staff. Methods such as the ABCDE rule and the 7-point checklist have been proposed to streamline the classification process, yet it remains largely dependent on the experience of the medical staff. A large amount of research has been done on Computer-Aided Diagnosis (CAD) systems, which aid medical staff by providing a second opinion on the lesion: by analysing images of the lesion, they predict the type it belongs to. Technological progress has made dermatoscopes more accessible; these devices provide better control over image-capture conditions and magnify the lesion. They enable the capture of clearer, more detailed dermoscopic images, although clinical (or macroscopic) images still carry details that cannot be acquired with the dermatoscope.
This means that more modalities are readily available in this domain. Given that even specialists improve their diagnoses when they have access to more data, CAD systems have yet to fully explore this aspect of the skin cancer domain. This dissertation explores the fusion of multiple modalities in a CAD system that predicts the classification of a lesion. Several experiments are performed to determine the effect of the various modalities: dermoscopic images, clinical images and metadata. Techniques such as transfer learning, multitasking, class-fusion and feature-fusion are investigated regarding their effect on the fusion of the modalities. To allow faster testing across these experiments, a simple Convolutional Neural Network is used. Several conclusions could be drawn from the experiments. Dermoscopic images provide the most informative modality. Metadata (location and elevation of the lesion and sex of the patient) improved performance on the BCC class. Giving the model access to more modalities generally improved the results; however, the above techniques were needed to make the fusion effective and obtain those improvements. The fusion of all modalities, with multitasking and feature-fusion, obtained 0.65 global accuracy, 0.52 average sensitivity (SEN), 0.886 average specificity (SPC), 0.508 average F1-score and 0.772 average AUROC.
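As a rough illustration of the feature-fusion idea described in the abstract, the sketch below concatenates per-modality embeddings before a shared classifier head. This is a minimal NumPy sketch, not the author's architecture: the encoders are stand-in linear maps, and all dimensions (128-d image features, 5-d metadata, 8 lesion classes) are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "encoders": one linear map per modality, each producing an embedding.
W_derm = rng.normal(size=(128, 32))  # dermoscopic-image features -> 32-d embedding
W_clin = rng.normal(size=(128, 32))  # clinical-image features -> 32-d embedding
W_meta = rng.normal(size=(5, 8))     # metadata (location, elevation, sex, ...) -> 8-d

def feature_fusion(x_derm, x_clin, x_meta):
    """Concatenate per-modality embeddings into one fused feature vector."""
    return np.concatenate([x_derm @ W_derm, x_clin @ W_clin, x_meta @ W_meta])

# Shared classifier head over the 72-d fused features (assumed 8 lesion classes).
W_head = rng.normal(size=(72, 8))

def predict(x_derm, x_clin, x_meta):
    logits = feature_fusion(x_derm, x_clin, x_meta) @ W_head
    e = np.exp(logits - logits.max())
    return e / e.sum()  # softmax over the lesion classes

probs = predict(rng.normal(size=128), rng.normal(size=128), rng.normal(size=5))
```

Class-fusion, by contrast, would run a separate head per modality and combine the resulting class probabilities instead of the intermediate features.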
14:20 Carla Silva
Title: Mapping TSP to Quantum Annealing
14:40 Iván Carrera
Title: Machine Learning Models for Discovering Biological Interactions