Comparing the similarity of objects is a faculty characteristic of the human condition, performed seamlessly and without effort. Comparison of objects begins with identification of features that are used, at some level, for measuring similarity. Quantifying similarity inevitably leads to the question of nearness, i.e. in terms of characteristic features, how near are pairs or sets of objects? It was only recently, relatively speaking, that this idea of nearness was first explored mathematically. Frigyes Riesz first published a paper in 1908 on the nearness of two sets, initiating the mathematical study of proximity spaces and the eventual discovery of descriptively near sets. Inspired by this human ability to make feature-based comparisons, the focus of my research is a formal and systematic process for considering and comparing neighbourhoods of points in the context of near sets, rough sets, open sets, and general topology in solving practical image analysis problems.

Descriptive topological and proximity spaces provide a formal framework for assessing nearness (or apartness) in practical applications, where nearness is quantified via metrics defined between an object and a set or between sets of objects. In particular, patterns of interest can be extracted for intelligent systems when considering a set as the union of two classes between which the system must make some decision or determination, in a manner similar to that of a human performing the same task. Thus, the focus of this work is the implementation of computational proximity-based frameworks for practical applications. Further, these tasks are often computationally intensive and are excellent candidates for execution on Graphics Processing Units (see, e.g., GPU Computing).
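As a minimal illustration of quantifying nearness between sets of objects, the sketch below computes a simple gap functional: the smallest distance between the descriptions of objects drawn from two sets. The Euclidean metric and the trivial feature extractor (each object is taken as its own feature vector) are hypothetical choices for illustration, not the specific metrics used in this work.

```python
import numpy as np

def description(obj):
    # Hypothetical feature extractor: here each object is already
    # its own feature vector (e.g., colour or texture measurements).
    return np.asarray(obj, dtype=float)

def descriptive_gap(A, B):
    """Smallest Euclidean distance between the descriptions of
    objects drawn from sets A and B (a simple gap functional)."""
    return min(float(np.linalg.norm(description(a) - description(b)))
               for a in A for b in B)

A = [(0.1, 0.2), (0.3, 0.1)]
B = [(0.1, 0.2), (0.9, 0.8)]
gap = descriptive_gap(A, B)
# gap is 0.0 here: A and B share a description, so they are
# descriptively near even though they are distinct point sets
```

A gap of zero corresponds to the sets' description sets intersecting; a small positive gap still indicates descriptive nearness up to a chosen tolerance.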

My interest lies in quantifying the similarity of sets of objects, based on their characteristic attributes and features, in a manner similar to humans performing the same task. As is well known, in 2012 Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) -- by a significant margin -- using a deep convolutional neural network trained on graphics processing units (GPUs). Their work demonstrated that the combination of GPUs, large labelled ``big data'' datasets, and deep neural networks could solve a real-world image classification problem. This event, and the developments that followed, raised the following question: had the problem of quantifying the similarity of sets of objects been effectively solved by these networks? The answer is no, and my desire to answer this question forms the foundation of my work. Human behaviour is much richer than the ability to classify objects: we have a powerful, inherent ability to make judgements on the similarity of groups of objects, which we exercise seamlessly and unconsciously many times a day. There is thus a strong need to combine theoretical frameworks for quantifying similarity (along with their applications) with the current exciting developments in the field of machine learning. To that end, I aim to develop theoretical and computational frameworks for the synthesis of human perception of the similarity of sets of objects, where no a priori knowledge of the objects (in the form of pre-defined classes) is present. The mathematical foundation of this work is descriptive topology and descriptive proximity spaces, which formalize relationships between objects, sets of objects, and collections of these sets based on features that characterize intrinsic object attributes.

Deep artificial neural networks not only provide a rich source for comparison, but also offer a great opportunity to augment descriptive proximity theory (or vice versa) to advance the ability of automated systems to quantify the similarity of sets of objects. As a result, the majority of my recent industrial collaborations have focused on the application of deep artificial neural networks. Industrial collaborations provide opportunities to deepen experience, build expertise, and supply a source of problems and datasets that will allow my new methods to be compared and contrasted with established approaches such as machine learning algorithms. This line of work will also provide opportunities to test hybrid approaches combining machine learning and computational proximity.

The problem considered in this work is one of establishing a theoretical framework on which to build applications that produce results similar to those of a human performing the same task. While this work can be applied to any problem that can be formulated in terms of objects with associated feature vectors, the focus is on finding and discerning patterns and similarities within single images (image analysis), and between sets of images (content-based image retrieval).

J. F. Peters introduced the concept of near sets, which are disjoint sets containing objects with similar descriptions. Similarity is determined quantitatively via some description of the objects. Near set theory provides a formal basis for identifying, comparing, and measuring resemblance of objects based on their descriptions, i.e. based on the features that describe the objects. The discovery of near sets begins with identifying feature vectors for describing and discerning affinities between sample objects. Objects that have, in some degree, affinities in their features are considered perceptually near each other. Groups of these objects, extracted from the disjoint sets, provide information and reveal patterns of interest.
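The idea above can be sketched directly: given two disjoint sets of objects and a probe function mapping each object to a feature value, objects whose descriptions differ by less than a tolerance are perceptually near. The probe function `phi`, the tolerance `eps`, and the toy grey-level objects below are hypothetical choices for illustration.

```python
import numpy as np

def near_pairs(X, Y, phi, eps):
    """Return pairs (x, y) from disjoint sets X and Y whose feature
    descriptions lie within eps of each other (perceptually near)."""
    pairs = []
    for x in X:
        for y in Y:
            if np.linalg.norm(phi(x) - phi(y)) < eps:
                pairs.append((x, y))
    return pairs

# Toy objects described by a single hypothetical feature: mean grey level
phi = lambda o: np.array([o["grey"]])
X = [{"id": "x1", "grey": 0.10}, {"id": "x2", "grey": 0.90}]
Y = [{"id": "y1", "grey": 0.12}, {"id": "y2", "grey": 0.50}]
pairs = near_pairs(X, Y, phi, eps=0.05)
# x1 pairs with y1 (|0.10 - 0.12| < 0.05), so X and Y contain
# objects that are perceptually near; X and Y are near sets
```

A non-empty result witnesses that the two disjoint sets are near in the descriptive sense, and the extracted pairs are exactly the groups of objects that reveal patterns of interest.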

Rough sets were introduced by Z. Pawlak during the early 1980s. Briefly, a set X is considered a rough set if X cannot be reproduced by the union of cells in a partition, where the partition is defined by an equivalence relation on object attributes, called the indiscernibility relation. Much work has been reported in the use of rough sets in image analysis. The focus here is on disjoint visual rough sets.
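The rough-set construction above can be sketched concretely: partition a universe into equivalence classes under an indiscernibility relation, then form the lower approximation (cells entirely inside X) and upper approximation (cells meeting X). The attribute function and toy universe below are hypothetical choices for illustration.

```python
def approximations(universe, X, attr):
    """Lower and upper approximations of X under the indiscernibility
    relation induced by the attribute function attr."""
    # Partition the universe into equivalence classes: objects with
    # the same attribute value are indiscernible.
    classes = {}
    for obj in universe:
        classes.setdefault(attr(obj), set()).add(obj)
    lower, upper = set(), set()
    for cell in classes.values():
        if cell <= X:      # cell lies entirely inside X
            lower |= cell
        if cell & X:       # cell meets X
            upper |= cell
    return lower, upper

universe = {1, 2, 3, 4, 5, 6}
attr = lambda o: o % 3       # hypothetical attribute: remainder mod 3
X = {1, 2, 4}
lower, upper = approximations(universe, X, attr)
# Cells are {3,6}, {1,4}, {2,5}; lower = {1,4}, upper = {1,2,4,5}.
# Since lower != upper, X cannot be written exactly as a union of
# cells, i.e. X is a rough set.
```

When the two approximations coincide, X is exactly a union of cells and hence crisp rather than rough; the boundary region `upper - lower` measures the uncertainty in describing X by the chosen attributes.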