I’m a PhD student and research assistant in the Electrical and Computer Engineering (ECE) department at Northeastern University. I received my Bachelor of Science degree from Sharif University of Technology in 2010. I currently work in the Cognitive Systems Laboratory (CSL) under the supervision of Professors Deniz Erdogmus, Dana Brooks and Jennifer Dy; my main research interests are machine learning and medical image processing.
Like many others, my research projects consist of two main parts: theoretical work and application-oriented activities. I am especially interested in understanding the underlying mathematical foundations of engineering algorithms so that I can find efficient ways to apply them to problems in my field of study, and also modify them to make them more compatible with real-world applications. This also requires a reasonable overview of the problems themselves. Here are brief descriptions of what my research is mainly about:
Interactive Image Segmentation: Image segmentation is one of the most important tasks of low-level image processing, especially in medical applications. It simply means finding and locating an object the user is interested in within an image; an example is segmenting an organ such as the lung in a CT (Computed Tomography) image of a patient’s chest. The accuracy and speed of segmentation algorithms are vital in medical treatment procedures. Since manual segmentation is inefficient, due to human error and the long time it takes, fully automated algorithms have been studied by many researchers over the last few decades. Many methods that work admirably require the user to provide training datasets from which they can learn. However, in some medical cases gathering training data is so expensive and time-consuming that clinical experts may not be able to prepare a training set large enough to train a machine. At the same time, unsupervised segmentation methods, which do not need any training data, are not as accurate as supervised ones. One solution that has been studied and has attracted a lot of attention recently is a family of methods that use information coming from the user while the algorithm runs; such information is, of course, much smaller than a full training set.
An efficient strategy for using the user’s input, called Active Learning in machine learning terminology, asks the user for exactly the information the algorithm needs to improve its performance. For more details see http://active-learning.net/.
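A common active-learning query strategy is uncertainty sampling: among the unlabeled samples, ask the user to label the ones the current model is least sure about. The sketch below is a minimal, hypothetical illustration of that idea (the function name and the toy probabilities are mine, not from any specific system):

```python
import numpy as np

def uncertainty_sampling(probs, k=1):
    """Return indices of the k pool samples whose predicted positive-class
    probability is closest to 0.5, i.e. the ones the classifier is least
    certain about -- these are the samples worth asking the user to label."""
    uncertainty = np.abs(np.asarray(probs) - 0.5)
    return np.argsort(uncertainty)[:k]

# Toy pool: the model is confident about samples 0, 1 and 3,
# but nearly at chance (0.52) on sample 2 -- query that one.
probs = np.array([0.95, 0.10, 0.52, 0.80])
query = uncertainty_sampling(probs, k=1)
```

In an interactive segmentation setting, the "samples" would be pixels or regions, and the queried labels are the user's scribbles or clicks.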
Interactive methods are usually modifications of fully automated algorithms. Currently I’m working on spectral clustering, a reliable unsupervised clustering method, to build an interactive version of it. There have already been many works toward this goal; however, they cannot be run on large datasets such as 3-dimensional medical images with huge numbers of voxels.
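For readers unfamiliar with spectral clustering, here is a minimal two-cluster sketch of the standard recipe (Gaussian affinity, normalized graph Laplacian, split by the sign of the Fiedler vector). It is an illustration of the textbook method only, not of my interactive variant, and the parameter names are illustrative:

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Two-way spectral clustering: build a Gaussian affinity matrix,
    form the symmetric normalized Laplacian, and split the points by
    the sign of the eigenvector of the second-smallest eigenvalue."""
    # Pairwise squared distances -> Gaussian (RBF) affinity matrix W
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    D = W.sum(axis=1)                      # degree of each node
    Dm = 1.0 / np.sqrt(D)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(X)) - Dm[:, None] * W * Dm[None, :]
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    fiedler = vecs[:, 1]                   # second-smallest eigenvector
    return (fiedler > 0).astype(int)

# Two well-separated 1-D blobs are split correctly.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
labels = spectral_bipartition(X, sigma=1.0)
```

The scalability problem mentioned above is visible here: the affinity matrix W is n-by-n, which is infeasible when n is the number of voxels in a 3-D medical volume.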
Detection of the Dermis-Epidermis Junction in Reflectance Confocal Microscopy Images of Human Skin: The junction between the two most superficial layers of human skin, the epidermis and the dermis, is of great interest because many fatal diseases, such as melanoma, start growing in this region. It is crucial to detect cancerous cells at an early stage in order to prevent metastasis to other sites of the body; therefore it is very important for clinicians to monitor this junction. Reflectance Confocal Microscopy (RCM) is an in-vivo imaging modality that can take images at different depths of the skin without the need for biopsy and cell staining. However, while it is fast and non-invasive, distinguishing the Dermis-Epidermis Junction (DEJ) in such images is a very challenging task; even experts may disagree with each other on the interpretation of RCM images. This difficulty is mostly due to the heterogeneous inter-subject and even intra-subject characteristics of the dermis and epidermis layers.
In the case of en-face imaging, the images are 3-dimensional, with a typical size of 1000x1000x60 and a resolution of 1um; the number of slices in depth depends on the spacing in the z-direction (adjustable, but typically 1um). In the case of oblique sections, on the other hand, we usually have only a single image, in which depth information can be recovered from the direction of increasing depth.
DEJ detection in en-face images is done by dividing each slice into several patches, clustering the patches along the z-direction, and localizing the DEJ at the boundary between the two clusters that are best separated. Similarly, for oblique sections we divide the image into strips along the direction of increasing depth and find the location of the DEJ in the sequence of patches of each strip. In both cases, the result is smoothed in a post-processing step.
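The per-column step above can be sketched as a 1-D two-cluster problem: given a feature profile of the patches at one (x, y) location across depth, find the depth that best splits the sequence into two homogeneous segments. The code below is a simplified, hypothetical stand-in (using mean intensity as the only feature and a within-segment-variance criterion I chose for illustration), not the actual pipeline:

```python
import numpy as np

def locate_boundary(profile):
    """For one z-column of patch features (here a single intensity value
    per depth, for illustration), return the depth index that minimizes
    total within-segment variance of the two resulting segments -- a
    1-D two-cluster split along z."""
    profile = np.asarray(profile, dtype=float)
    best_z, best_cost = 1, np.inf
    for z in range(1, len(profile)):
        top, bottom = profile[:z], profile[z:]
        cost = ((top - top.mean()) ** 2).sum() + ((bottom - bottom.mean()) ** 2).sum()
        if cost < best_cost:
            best_z, best_cost = z, cost
    return best_z  # index of the first patch below the boundary

# Bright epidermis patches above, darker dermis patches below:
column = [0.9, 0.85, 0.88, 0.2, 0.25, 0.22]
z_dej = locate_boundary(column)
```

In practice each patch would carry a richer feature vector (e.g. texture statistics), and the per-column estimates would then be smoothed across neighboring columns, as described above.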
Of course, this project is ongoing and there is much to be built or improved. For instance, one of the goals is to take advantage of histological information as well as image-processing features. As a first step in this direction, we have tried to find wrinkles prior to DEJ detection, based on the fact that wrinkles change the shape of the DEJ in their neighboring regions.
My work in this area is a continuation of the valuable work that Sila Kurugol has done during the previous years. In this project we are collaborating with Milind Rajadhyaksha of the Dermatology department at Memorial Sloan Kettering Cancer Center in New York, NY.
J. Sourati, D. H. Brooks, J. G. Dy, E. Ataer-Cansizoglu, D. Erdogmus, M. Rajadhyaksha, ‘Unsupervised Wrinkle Detection in Reflectance Confocal Microscopy Images of the Human Skin’, accepted at ICASSP, Kyoto, Japan, 2012.