Computer Vision (Basics of Image Processing) :-

In this article we are going to discuss the basics of image processing in a more application-oriented way.

1) What is an Image? How is a Digital Image formed?

An image is a combination of “pixels”, which are simply numbers in the range 0-255, where 0 is black and 255 is white.

An image is represented in the form of a matrix of values from 0 to 255, where each numeric value represents the intensity of the image at that point.

Note :- A still (static) image is also called a “photograph”, while a moving image is called a “video”; a video is a 3D image in motion, where the third dimension is time.
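For intuition, here is a minimal sketch (in Python with NumPy) of a tiny grayscale image represented as a matrix of intensities; the size and values are made up purely for illustration:

    import numpy as np

    # A tiny 3x3 grayscale "image": every entry is a pixel intensity in 0-255.
    img = np.array([[  0, 128, 255],
                    [ 64, 200,  32],
                    [255,   0, 100]], dtype=np.uint8)

    print(img.shape)   # (3, 3) -> height x width
    print(img[0, 2])   # 255 -> the white pixel in the top-right corner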

Two types of Images :-

a) Raster images :- The pixels are stored in computer memory as a raster image or raster map. These are created by digital cameras, scanners, and coordinate-measuring instruments such as radars and seismographic profiling. Examples: BMP, TIFF, GIF, and JPEG files.

b) Vector images :- Produced from mathematical geometry (vectors). A vector has both magnitude and direction; examples include asteroid/space imagery, computer gaming systems, and images produced by air-defence systems.

2) Why do we need to process images? :-

We process images so that humans can understand them, machines can understand them, a degraded image can be restored, and finally for entertainment purposes, i.e. visual effects.

3) What is Digital Image Processing and what are its applications? :-

The mathematical processing of digital images, or the retrieval of information from them, using algorithms is called Digital Image Processing (DIP).

The applications of DIP are in the fields of :-

=================================

a) Autonomous navigation vehicles.

b) RADAR (Radio Detection and Ranging).

c) Meteorology.

d) Seismology.

e) Remote Sensing.

f) Astronomy.

g) Radiology.

h) Ultrasonic Imaging.

i) Microscopy.

j) Robotic surveillance imaging by drones.

k) Digital library.

l) Face recognition.

m) Holograms.

4) What are the various Image Enhancement Techniques? (in Brief) :-

Image enhancement is the process of adjusting digital images so that the results are more suitable for display or for further image analysis.

For example, you can remove noise, sharpen or brighten an image, making it easier to identify key features.

The various image enhancement techniques are :-

======================================

a) Image Denoising :- Removing noise from an image using various filters such as spatial filters, linear filters, mean filters, adaptive filters, etc. (a minimal sketch using two of the techniques in this list appears after the list).

b) Image Deblurring :- Removing blur or unclearness from an image, e.g. using the Wiener filter or the Lucy-Richardson algorithm.

c) Image Inpainting :- Reconstructing lost or deteriorated parts of an image.

d) Edge Detection :- Detecting only the edges of an image by observing the transitions from a dark to a light area, or vice versa.

e) Image Segmentation :- Partitioning a digital image into multiple segments for a simpler representation.

f) Image Compression :- Transforming an image to a smaller size (lower dimensions) to reduce the cost of storage and transfer.
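As mentioned in item a), here is a minimal sketch of two of these techniques (denoising and compression), assuming OpenCV (cv2) is installed; the file names and filter size are illustrative only:

    import cv2  # assumes OpenCV (opencv-python) is installed; file names are placeholders

    img = cv2.imread("noisy_photo.jpg")

    # a) Denoising: a 5x5 median filter removes salt-and-pepper style noise.
    clean = cv2.medianBlur(img, 5)

    # f) Compression: re-encode as JPEG at quality 60 to shrink the file size.
    cv2.imwrite("photo_small.jpg", clean, [cv2.IMWRITE_JPEG_QUALITY, 60])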

5) What are the sources of errors and image defects?

In the pipeline below, there is always the possibility of introducing errors or deformities into the image.

Image -> Camera -> 3D construction of image -> ADC (analog-to-digital conversion) -> Reconstruction of image -> Display.

The various image defects that can be introduced are :-

==========================================

a) Low contrast error :- 

==================

Contrast is the difference between the blacks and the whites.  

High contrast means the blacks are really dark and the whites are really bright and vice-versa.

b) Wrong colors error :-

==================

When you upload the image, the colors in the uploaded image might look different from the original one.

c) Noise error :-

============

Image noise is a random variation of brightness or color information in images. It can be produced by the sensor and circuitry of a scanner or digital camera.

d) Blur :-

=======

When the image is not sharply visible; a very common form is the “Gaussian blur”.

e) Non uniform lighting :-

====================

If the illumination is uneven, for example when a scene is lit by both a tube-light and a bulb, light of different wavelengths interferes. This is called “non-uniform lighting”.

6) How can we improve low image contrast?

Brightness correction can improve overall contrast in an image. There are two main reasons for poor image contrast.

a) Limited range of sensor’s sensitivity.

b) A bad sensor transfer function, i.e. a bad conversion from light energy to pixel brightness.

The contrast can be improved by using the following techniques :-

=============================

  1. Point Operators :- The basic idea is to map the lowest brightness value in the image to total black and the highest to total white, stretching the intensities over the full range.
  2. Contrast Enhancement techniques :- Using histogram analysis, e.g. histogram equalization (a minimal sketch of both appears below).
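A minimal sketch of both techniques, assuming OpenCV and NumPy are available; the file name is a placeholder:

    import numpy as np
    import cv2  # assumes OpenCV is installed

    img = cv2.imread("low_contrast.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

    # 1. Point operator: linearly stretch intensities so the darkest pixel
    #    becomes 0 (black) and the brightest becomes 255 (white).
    f = img.astype(np.float32)
    lo, hi = f.min(), f.max()
    stretched = ((f - lo) * 255.0 / max(hi - lo, 1.0)).astype(np.uint8)

    # 2. Histogram analysis: equalize the intensity histogram.
    equalized = cv2.equalizeHist(img)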

7) How can we treat noisy images?

Given a camera and a static scene, the easiest way to reduce noise is to capture several images and average them; this works because the noise is random and independent.
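A minimal sketch of this frame-averaging idea, assuming OpenCV is available and five aligned shots of the same static scene exist (the file names are placeholders):

    import numpy as np
    import cv2  # assumes OpenCV is installed

    # Average several photos of the same static scene: the random, independent
    # noise cancels out while the true signal stays the same.
    frames = [cv2.imread(f"shot_{i}.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
              for i in range(5)]
    denoised = (sum(frames) / len(frames)).astype(np.uint8)
    cv2.imwrite("denoised.png", denoised)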

But usually we are presented with only one image. In this case, we replace each pixel x_ij with a weighted average of its local neighbourhood.

This is given as:

y_ij = f([x_kl]),  where x_kl belongs to neighbour(x_ij)

These WEIGHTS are jointly called the FILTER KERNEL. The simplest case is EQUAL WEIGHTS, and this particular filter is called the BOX FILTER.
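A minimal sketch of a box filter written directly as an equal-weight neighbourhood average (pure NumPy; the kernel size is illustrative):

    import numpy as np

    def box_filter(img, k=3):
        """Replace each pixel with the equal-weight average of its k x k neighbourhood."""
        pad = k // 2
        padded = np.pad(img.astype(np.float32), pad, mode="edge")
        out = np.zeros_like(img, dtype=np.float32)
        # Sum shifted copies of the image; the equal weights 1/k^2 form the box kernel.
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return (out / (k * k)).astype(np.uint8)

Each output pixel is simply the mean of its k x k neighbours, i.e. a kernel whose weights are all 1/k².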

8) How are Filter Kernels used for Image Processing ?

a) In image processing, a kernel is a convolution matrix. The kernel matrix is convolved with the image matrix to achieve effects such as blurring, sharpening, embossing, and edge detection.

b) The simplest box filter is the IDENTITY FILTER. Applying this filter to an image results in no change to the image, but by moving the 1 from the center of the kernel to another position, the resulting image is shifted by one pixel.

c) Generally, any filter kernel with positive weights that sum to 1 is an image-smoothing filter.
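A minimal sketch of points b) and c), assuming OpenCV's filter2D (which applies a kernel to an image); the input file name is a placeholder:

    import numpy as np
    import cv2  # assumes OpenCV (opencv-python) is installed

    img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

    identity = np.zeros((3, 3), np.float32)
    identity[1, 1] = 1.0                      # 1 at the centre: output == input
    shift = np.zeros((3, 3), np.float32)
    shift[1, 0] = 1.0                         # 1 moved off-centre: shifts the image by one pixel
    box = np.ones((3, 3), np.float32) / 9.0   # positive weights summing to 1: smoothing

    same   = cv2.filter2D(img, -1, identity)
    moved  = cv2.filter2D(img, -1, shift)
    smooth = cv2.filter2D(img, -1, box)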

9) What is Edge Detection?

An edge is the intersection of surfaces, the boundary between two different quantities, or a discontinuity; in other words, edges are points of rapid change in image intensity.

Such points can be identified by considering the first derivative of the image intensity; edges correspond to local maxima or local minima of the derivative.

The main idea/algorithm is that when we go from black to white there is a sudden change in intensity; by detecting this change we are able to detect the edge.
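A minimal sketch of gradient-based edge detection using Sobel derivatives, assuming OpenCV is available; the threshold value is an arbitrary choice for illustration:

    import numpy as np
    import cv2  # assumes OpenCV is installed

    img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

    # First derivatives of intensity in x and y (Sobel operators).
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)

    # Large gradient magnitude = rapid change in intensity = likely edge.
    magnitude = np.sqrt(gx**2 + gy**2)
    edges = (magnitude > 100).astype(np.uint8) * 255  # threshold chosen arbitrarily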

10) What is a Canny Edge Detector?

It is a type of edge detection.

It has two main steps :-

1) Non maximum suppression :- 

=========================

During this step, edges that are several pixels wide are thinned down to a single-pixel width.

The algorithm goes through all the points in the gradient intensity matrix and keeps only the pixels whose gradient magnitude is a local maximum along the gradient direction.

2) Linking Edge pixels together to form continuous boundaries :-

===============================================================

This is done by assuming a point is an edge point, then constructing the tangent to the edge curve (which is normal to the gradient at that point) and using it to predict the next points.

For linking edge points, we use “HYSTERESIS” thresholding: pixels with gradient above the high threshold are accepted as edges, pixels below the low threshold are rejected, and pixels in between are kept only if they are connected to a pixel above the high threshold.
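A minimal sketch using OpenCV's built-in Canny detector, with the low/high hysteresis thresholds chosen arbitrarily for illustration:

    import cv2  # assumes OpenCV is installed

    img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

    # Canny: gradient computation, non-maximum suppression, then hysteresis.
    # Pixels above 200 are strong edges; pixels between 100 and 200 are kept
    # only if they connect to a strong edge; pixels below 100 are discarded.
    edges = cv2.Canny(img, 100, 200)
    cv2.imwrite("edges.png", edges)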

Note :- 

=======

a) A change in intensity is not the only source of edges. Changes in color or texture also give us visible edges in images, but such changes cannot be detected by the image gradient or the Canny operator.

b) This is a pixel classification problem and we can use machine learning/ Deep learning techniques for this. 

c) But for many computer vision tasks the CANNY EDGE DETECTOR proves to be sufficient.

d) It is often used as a feature extractor, producing features which are later used for image recognition.

Keep Learning and sharing with Grouply.org

All the best!

Satyajit Das
