Meryem Altin Karagoz

Visiting Graduate Researcher

Biography

Meryem Altin Karagoz joined the Computer Engineering Department of Sivas Cumhuriyet University, Turkey, as a Research Assistant in 2016. She received a master's degree in Computer Engineering from Erciyes University in 2019 and is currently a Ph.D. student in Computer Engineering at Erciyes University. She was awarded a grant by The Scientific and Technological Research Council of Turkey (TUBITAK) to perform research at the University of Virginia. Her research interests are artificial intelligence, deep learning, and medical image analysis.

Project

Title: Developing Self-Supervised Learning Models for Reducing the Dependency on Expert Supervision in Medical Image Analysis

Deep learning methods can learn highly complex mathematical functions using many neural-network layers, and therefore require large amounts of data to fit their many parameters. However, current medical imaging studies with deep learning face two main problems: (i) a lack of annotated datasets and (ii) imbalanced datasets across labels or classes. Limited labeled data causes overfitting, while an imbalance between positive and negative samples causes underfitting in deep networks. As a result, deep learning models struggle to achieve high performance in computer vision problems with insufficient data, and medical imaging is one of these areas. Generating large-scale, public datasets for medical image analysis is difficult to achieve in the short term because of practical obstacles such as cost, time, patient privacy, and the lack of standard data publication procedures. Moreover, class imbalance makes building a large-scale annotated dataset even harder. It is therefore an important direction for medical image analysis research to develop deep learning strategies that overcome these data constraints by reducing the dependence on expert supervision.

In recent years, self-supervised learning (SSL) has gained remarkable attention as a response to the lack of labeled data, owing to its pretext task and downstream task mechanism. The pretext task is designed as a preliminary task for learning discriminative visual features from unlabeled data, which are then reused in downstream tasks such as classification, detection, and segmentation. In addition to providing robust, performance-enhancing solutions when labeled data are scarce, self-supervised learning allows distinctive signals to be extracted from labeled and unlabeled data together, in an unsupervised manner, during the pretext task step. SSL thus offers a way to exploit the large amounts of unlabeled data that characterize medical imaging tasks. However, self-supervised learning models tailored to medical image analysis remain scarce and need to be improved in further studies. In this project, we focus on SSL models that cope with data limitations and enable expert-level automated diagnosis and detection systems for clinical use by reducing the demand for expert annotation in medical imaging tasks.
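To illustrate the pretext/downstream mechanism described above, the sketch below shows one common style of pretext task (rotation prediction) on unlabeled images, after which the pretrained encoder is reused for a downstream classifier. This is a minimal PyTorch sketch under illustrative assumptions: the rotation pretext task, the small CNN encoder, the two-class downstream head, and the random tensors standing in for medical images are all placeholders, not the project's actual method or data.

```python
# Minimal SSL sketch: pretext task (rotation prediction) on unlabeled data,
# then reuse of the learned encoder for a downstream task.
# All architectures, sizes, and tasks here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small CNN backbone shared by the pretext and downstream tasks."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, x):
        return self.features(x).flatten(1)  # (N, 32) feature vectors

def rotate_batch(x):
    """Pretext task: rotate each image by 0/90/180/270 degrees and use the
    rotation index as a free, self-generated label (no expert annotation)."""
    labels = torch.randint(0, 4, (x.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(x, labels)])
    return rotated, labels

encoder = Encoder()
pretext_head = nn.Linear(32, 4)  # predicts which rotation was applied
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(pretext_head.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Pretext training on unlabeled data (random tensors stand in for images).
unlabeled = torch.randn(64, 1, 64, 64)
for _ in range(5):
    inputs, targets = rotate_batch(unlabeled)
    loss = criterion(pretext_head(encoder(inputs)), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Downstream task: keep the pretrained encoder and train only a small head
# on the limited labeled set (two hypothetical diagnostic classes).
downstream_head = nn.Linear(32, 2)
```

The key point of the sketch is that the pretext labels are generated automatically from the data itself, so the encoder can be pretrained without expert supervision before the small labeled set is used downstream.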

Resource needs

Compute:

Data: at least 10 TB of space for data storage.
