Visualizing Large Medical Image Datasets using the 3D Scale-Invariant Feature Transform (SIFT)

Consider human-in-the-loop systems that rely on a human expert to make decisions based on data, e.g., clinicians such as radiologists or pathologists who visually interpret images to identify subtle signs of disease and plan the best possible treatment. For this reason, medical image processing is one of the most challenging fields today.

Rapidly interpreting large numbers of images is a tedious and time-consuming task for radiologists and clinical experts. The accuracy of the resulting classification depends on the expert's experience and is prone to human error. Automatic classification is a powerful tool; however, even modern approaches may produce erroneous results with high confidence, Nguyen et al. (2015), and it may be difficult to reach an optimal decision from a single classification label without the rich, nuanced information present in the original image.

With the advent of computers and improvements in graphical user interfaces, the meaning of the word "visualization" has changed over time, from a cognitive or mental sketch of something to a graphical illustration of an object or information set; this shift also gave rise to exploratory data analysis. Data analysis has historically been a statistical discipline, and many common types of visualization, such as scatter plots and box plots, originate from statistics, Friendly & Denis (2001).

What is the best information to provide to the human expert to balance between fully manual and fully automatic approaches? We propose providing the human visual system with a visualization that is simultaneously highly informative regarding the diagnosis/classification label and contains spatially pertinent information indicating how the diagnosis/classification is obtained.

Local Image Features

An image typically contains an enormous amount of information, represented in computer vision programs as an array or lattice of intensity measurements, i.e., pixels in 2D photographs or voxels in 3D MRI volumes. In images of natural objects or scenes, most of the image may consist of redundant or uninformative intensity information, for example, regions of homogeneous intensity. Information tends to be concentrated in a small subset of unique or distinctive regions, i.e., features. Features may capture global attributes of the image, e.g., the intensity histogram, frequency-domain descriptors, the covariance matrix, or higher-order statistics, or they may capture local region information, e.g., spatially localized edges, corners, or blob patterns. An image can thus be defined as a set of such global or local regions: global features capture overall information such as shape, whereas local features focus on the details. In this work, we discuss anatomical structures in images of different subjects, and we seek to observe and characterize similarities and differences between them in detail. Local features are therefore more effective than global ones.
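To make the global/local distinction concrete, the following sketch contrasts the two on a synthetic image. The image, the window size, and the variance threshold are all illustrative assumptions, not values from this work: a global feature (one intensity histogram for the whole image) versus local features (only the few neighborhoods with distinctive intensity structure).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2D "image": homogeneous background with one bright blob.
img = np.zeros((64, 64))
img[20:28, 30:38] = 1.0
img += 0.01 * rng.standard_normal(img.shape)

# Global feature: a single intensity histogram summarizing the whole image.
global_hist, _ = np.histogram(img, bins=16, range=(-0.5, 1.5))

# Local features: per-pixel variance over a small neighborhood. Only a
# small subset of locations (around the blob's boundary) is distinctive.
k = 5
pad = k // 2
padded = np.pad(img, pad, mode="edge")
local_var = np.zeros_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        local_var[i, j] = padded[i:i + k, j:j + k].var()

features = np.argwhere(local_var > 0.05)  # distinctive local regions
print(len(features), "local features out of", img.size, "pixels")
```

The homogeneous background contributes nothing to the local feature set, illustrating the data reduction discussed below.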

Local features are valuable for several reasons. They provide data reduction relative to the original image: because the amount of information is reduced, subsequent processing is considerably faster. Furthermore, spatially localized features are robust to occlusion, clutter, rotation, translation, and changes in resolution, Lowe (2004); Wells III (1997); Fergus et al. (2007). These interest points can be matched across images without an explicit search or image registration, Amit & Kong (1996); Yang et al. (2011). Moreover, detecting local similarities between images is more reliable than detecting global similarities, Toews et al. (2010); Toews & Arbel (2009).

Local features are useful in many applications, e.g., image alignment, reconstruction, motion tracking, object recognition, indexing and database retrieval, navigation, etc. In fact, the local feature representation can be considered a general building block for many computer vision and medical imaging algorithms.

In early work, local interest point detectors were used to identify salient points for matching images in binocular vision, Marr & Poggio (1977), and for robotic mapping, Moravec (1979). Later, Harris & Stephens (1988) and Rohr (1997) detected corners and landmarks by computing spatial gradients. This was generally achieved via saliency operators evaluated over fixed-size image regions.
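As a rough illustration of the gradient-based approach of Harris & Stephens (1988), the following minimal NumPy sketch computes the Harris corner response over fixed-size windows; the window size and the sensitivity constant k=0.04 are conventional choices, and the test image is an illustrative assumption.

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response from spatial gradients (simplified sketch)."""
    # Spatial gradients via central differences (rows = y, columns = x).
    Iy, Ix = np.gradient(img.astype(float))
    # Structure tensor components, summed over a fixed-size window.
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    pad = win // 2
    def box_sum(a):
        p = np.pad(a, pad)
        out = np.zeros_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = p[i:i + win, j:j + win].sum()
        return out
    Sxx, Syy, Sxy = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)
    # Harris response: det(M) - k * trace(M)^2, large where gradients
    # vary in two directions (corners), small along edges.
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A white square on a black background: the four corners score highest.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
i, j = np.unravel_index(np.argmax(R), R.shape)
print("strongest response near:", (i, j))
```

Along an edge, one gradient component vanishes, so the determinant term is small and the trace penalty dominates; only at corners do both terms survive, which is the essence of the fixed-window saliency operators mentioned above.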

Automatic Image Classification

Automatic classification is a computational task whose goal is to assign a label to a data sample. Fully automatic image classifiers trained via machine learning have long promised to alleviate this workload. However, challenges remain. First, the result of a classification may be difficult to interpret. For example, modern classifiers such as deep convolutional neural networks (CNNs) achieve impressive classification results and can produce confidence values; however, they also produce highly confident classifications for irrelevant, artificially generated images, Nguyen et al. (2015); Szegedy et al. (2013), raising the possibility of patient misclassification. Second, it is difficult to ensure that classifiers generalize across imaging conditions, given limited training data. For example, MRI data are notoriously difficult to normalize across different sites. Accurate site-wise classifiers may be possible via combinations of transfer learning and classifier retraining; however, retraining is computationally intensive and requires many training samples (e.g., patient data) that may not be available or easily accessible. Effective transfer learning for multi-site medical imaging data remains an open research topic.
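The overconfidence problem can be illustrated even without a deep network. In the toy NumPy sketch below, the fixed linear model and the softmax output are illustrative stand-ins for a trained classifier: a softmax only reports which logit is largest, so an extreme, meaningless input can still yield a near-certain prediction.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# A toy linear "classifier" with fixed weights for two classes.
W = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

# An input resembling neither class can still yield a confident output:
# the winning logit only needs to exceed the others, not be meaningful.
x = np.array([10.0, -10.0])  # extreme, "irrelevant" input
p = softmax(W @ x)
print(p)  # close to [1, 0]: high confidence on a meaningless input
```

This is a simplification of the phenomenon reported by Nguyen et al. (2015) for CNNs, where artificially generated images receive confident labels.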

Recently, there have been drastic changes in the field of information technology and in worldwide web access to visual data. The main challenge in this field is organizing and classifying these data so that users can easily access appropriate content. Users wish to retrieve an appropriate image when they search and are also interested in navigating through images. These requirements have generated strong demand for practical and flexible systems for organizing digital images and visual data.

One important approach to this problem is using image classification to build a digital library, Haralick et al. (1973). Image classification is the task of assigning images to categories based on labels present in training data. There are various methods for image classification, but the general issues involved can be listed as follows:

• Image features: finding the significant parts of the image and expressing the image in terms of them;
• Organization of feature data: arranging these features so that classes remain identifiably separable;
• Classifier: assigning images to different categories.
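The three components above can be sketched end to end. This is a toy NumPy example with made-up 2D "feature" vectors and a 1-nearest-neighbor classifier (chosen because k-nearest neighbors appears later in this work); all data here are illustrative.

```python
import numpy as np

# 1) Image features: pretend each image has already been summarized by a
#    2D feature vector (e.g., responses of two local feature detectors).
train_feats = np.array([[0.1, 0.2], [0.2, 0.1],   # class 0
                        [0.9, 0.8], [0.8, 0.9]])  # class 1
train_labels = np.array([0, 0, 1, 1])

# 2) Organization of feature data: keep features and labels aligned so
#    that the classes remain separable in feature space (trivial here).

# 3) Classifier: assign each query the label of its nearest neighbor.
def nn_classify(query, feats, labels):
    dists = np.linalg.norm(feats - query, axis=1)
    return labels[np.argmin(dists)]

print(nn_classify(np.array([0.15, 0.15]), train_feats, train_labels))  # -> 0
print(nn_classify(np.array([0.85, 0.85]), train_feats, train_labels))  # -> 1
```

In practice each step is far more involved (robust local features, indexing structures, learned classifiers), but the division of labor is the same.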

Image classification can give rise to a semantic organization of a digital database. Large-scale image classification requires training a large number of classifiers in order to map out irrelevant visual features. Classifier performance depends largely on the design of the training procedure and the quality of the feature objectives, which are two serious concerns. Different classifier models have been used recently: Vapnik (2013) explains various classifiers and their structure, and Mika et al. (1999) proposes a non-linear classifier based on Fisher's discriminant. Many challenging problems arise when classifying over a large number of classes.

Table of Contents

INTRODUCTION
CHAPTER 1 RELATED WORKS
1.1 Local Image Features
1.2 Automatic Image Classification
1.2.1 Kernel Density Estimation
1.2.2 K-nearest Neighbors
1.3 Visualization
1.3.1 Group-wise Visualization
1.3.2 Image-wise Visualization
1.4 Hypothesis Testing
1.4.1 p-value
CHAPTER 2 METHODOLOGY
2.1 Local Feature Format
2.2 Generative Model
2.3 Distance Metric
2.4 Classification
2.5 Parameter Estimation
2.5.1 p-value
2.5.2 False Discovery Rate
2.6 Visualization
CHAPTER 3 EXPERIMENTS
3.1 Data and Features
3.2 Classification
3.3 Visualization
3.3.1 Group-wise Visualization
3.3.2 Image-wise Visualization
CONCLUSION 
