Dr. Hamed Sari-Sarraf, my image processing professor at Texas Tech, described pattern recognition and image processing as the two sub-disciplines that make up computer vision. He said that image processing converts images into useful data (e.g., removing noise, highlighting regions of interest), whereas pattern recognition uses those processed images to make a judgment about the data (e.g., identifying a face, classifying an object).
According to a former co-worker, Don Waagen, pattern recognition breaks down into four basic parts, from lowest-level processing to highest: sensing, segmentation, feature extraction, and classification. Sensing converts images, sounds, x-rays, etc. into a signal. Segmentation isolates sensed objects from uninteresting signal, i.e., noise. Feature extraction measures useful properties of objects, e.g., the width of a face image or the length of a bridge; basically, a feature is any piece of information about an object that can distinguish it from other objects. Classification assigns objects to a category.
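To make those four stages concrete, here is a toy sketch in Python on a 1-D "signal." Everything here (the function names, the noise floor, the length-based rule) is my own illustrative invention, not anything Hamed or Don prescribed:

```python
def sense(raw):
    """Sensing: convert raw measurements into a numeric signal."""
    return [float(x) for x in raw]

def segment(signal, noise_floor=1.0):
    """Segmentation: keep contiguous runs of samples above the noise floor,
    discarding everything else as noise."""
    objects, current = [], []
    for sample in signal:
        if sample > noise_floor:
            current.append(sample)
        elif current:
            objects.append(current)
            current = []
    if current:
        objects.append(current)
    return objects

def extract_features(obj):
    """Feature extraction: measure properties of one segmented object."""
    return {"length": len(obj), "peak": max(obj)}

def classify(features):
    """Classification: assign a category with a simple hand-made rule."""
    return "large" if features["length"] >= 3 else "small"

signal = sense([0, 2, 3, 2, 0, 0, 5, 0])
objects = segment(signal)
labels = [classify(extract_features(obj)) for obj in objects]
print(labels)  # the 3-sample run is "large", the 1-sample run is "small"
```

In a real system each stage is far richer (segmentation alone is a research field), but the data flow, raw signal in, category label out, is the same.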
Standing on the shoulders of Hamed and Don, I view pattern recognition as the feature extraction and classification of data; I consider sensing and segmentation closer to image processing. But, as with anything, the lines between these topics are blurry, and I will discuss some image processing topics; however, this blog will mainly focus on pattern recognition.