    Computer Vision :- Basic Image Processing:
    ==========================================
    In this article we discuss the basics of image processing in a more application-oriented way.

    1) What is an image, and how is a digital image formed?
    =============================================
    An image is a combination of “pixels”, which are nothing but numbers in the range 0-255, where 0 represents black and 255 represents white.
    An image is represented as a matrix of values (0-255); each value represents the intensity of the image at that point.

    Note:- A still image is called a “photograph”, and a moving image is typically called a “video”; a video is 3D, where the third dimension is time.
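
    As a quick illustration, here is a minimal Python sketch (assuming OpenCV and NumPy are installed, and “photo.jpg” is a hypothetical local file) showing that a grayscale image really is a matrix of 0-255 intensities:

        import cv2

        # Load as a 2-D matrix of 8-bit intensities (0 = black, 255 = white)
        img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
        print(img.shape)   # (height, width)
        print(img.dtype)   # uint8, i.e. values in 0..255
        print(img[0, 0])   # intensity of the top-left pixel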

    Two types of images :-
    =====================
    a) Raster images :- The pixels are stored in computer memory as a raster image or raster map. These are created by digital cameras, scanners, coordinate-measuring machines, radar, seismographic profiling, etc. Examples: BMP, TIFF, GIF, and JPEG files.
    b) Vector images :- Generated from mathematical geometry (vectors). A vector has both magnitude and direction. Examples include asteroid/space imagery, computer gaming systems, and images produced by air-defence systems.

    2) Why do we need to process images?
    =======================================
    We process images so that humans can understand them, so that machines can understand them, to transform an image and recover lost information, and finally for entertainment purposes, i.e. visual effects.

    3) What is Digital Image Processing, and what are its applications?
    ===========================================================
    The mathematical processing (to extract information) of digital images using algorithms is called DIP.
    It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing.

    DIP has applications in the following fields :-
    =========================================

    a) Autonomous navigational vehicles.
    b) RADAR (Radio Detection and Ranging).
    c) Meteorology.
    d) Seismology.
    e) Remote Sensing.
    f) Astronomy.
    g) Radiology.
    h) Ultrasonic Imaging.
    i) Microscopy.
    j) Robotics Surveillance, images by Drones.
    k) Digital library.
    l) Face recognition.
    m) Holograms.

    4) What are the various image enhancement techniques, in brief?
    ==============================================================
    Image enhancement is the process of adjusting digital images so that the results are more suitable for display or further image analysis.
    For example, you can remove noise, sharpen, or brighten an image, making it easier to identify key features.
    The various image enhancement techniques are (a short sketch of a) and b) follows this list):-
    =========================================
    a) Image denoising :- Removing the noise from the image using various filters such as spatial filters, linear filters, mean filters, adaptive filters, etc.
    b) Image deblurring :- Removing the blur from an image, e.g. with the Wiener filter or the Lucy-Richardson algorithm.
    c) Image inpainting :- Reconstructing lost or deteriorated parts of an image.
    d) Edge detection :- Detecting only the edges of an image by observing the transition from a dark to a light area, or vice versa.
    e) Image segmentation :- Partitioning the digital image into multiple segments for a simpler representation.
    f) Image compression :- Transforming the image to a smaller size to reduce the cost of storage and transfer.
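
    A minimal sketch of a) and b) in Python (assuming OpenCV; “noisy.jpg” is a hypothetical input, and simple unsharp masking stands in for the heavier Wiener / Lucy-Richardson deblurring methods):

        import cv2

        img = cv2.imread("noisy.jpg", cv2.IMREAD_GRAYSCALE)

        # a) Denoising: a 5x5 median filter is robust to salt-and-pepper noise
        denoised = cv2.medianBlur(img, 5)

        # b) A simple sharpening step (unsharp masking): subtract a blurred copy
        blurred = cv2.GaussianBlur(denoised, (9, 9), 0)
        sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

        cv2.imwrite("enhanced.jpg", sharpened)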

    5) Where are the sources of errors and image defects?
    ===================================================

    At each stage of the pipeline below, there is a possibility of introducing errors or defects into the image.

    Image -> Camera -> 3D construction of image -> ADC(Analog to Digital conversion) -> Reconstruction of Image -> Display.

    The various image defects that can be introduced are:-
    ===================================================

    a) Low contrast error :-
    ======================
    Contrast is the difference between the blacks and the whites.
    High contrast means the blacks are really dark and the whites are really bright; low contrast means the difference between them is small.

    b) Wrong colors error :-
    ======================
    When you upload an image, the colors in the uploaded image might look different from those of the original.

    c) Noise error :-
    ============
    Image noise is random variation of brightness or color information in images. It can be produced by the sensor and circuitry of a scanner or digital camera.

    d) Blur :-
    ===========
    Blur is when the image is not clearly visible; it is often modelled as a “Gaussian blur”.

    e) Non uniform lighting :-
    ========================
    If the illumination is uneven, for example a tube-light and a bulb together, then different wavelengths of light interfere. This is called “non-uniform lighting”.

    6) How can we improve low image contrast?
    =========================================
    Brightness correction can improve the overall contrast of an image. There are two main reasons for poor image contrast:
    a) Limited range of sensor sensitivity.
    b) A bad sensor transfer function, i.e. a bad conversion from light energy to pixel brightness.

    7) How do we evaluate the tone transfer in an image?
    ====================================================
    We use brightness histograms to evaluate the tone transfer in an image. A brightness histogram is a chart of the brightness distribution in an image.
    On the horizontal axis, brightness varies from black (0) to white (255); the vertical axis shows the number of pixels at each brightness value.
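
    A minimal sketch of computing such a histogram (assuming NumPy and OpenCV; “photo.jpg” is a hypothetical file):

        import cv2
        import numpy as np

        img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

        # One bin per brightness level 0..255
        hist = np.bincount(img.ravel(), minlength=256)

        for level in (0, 128, 255):
            print(f"pixels with brightness {level}: {hist[level]}")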

    8) How do we improve the contrast in an image?
    =============================================
    The most basic technique is point operators. These operators map an input pixel value to an output pixel value, and all pixels are processed independently and identically.
    The point operator function is written as f^{-1} because we recover the true brightness from the measured brightness.

    The most basic operation is linear correction, which is given as:

    f^{-1}(y) = (y - y_min) * (255 - 0) / (y_max - y_min)    ... (1)
    The idea is to map the lowest brightness value in the image to pure black (0) and the highest to pure white (255).
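
    A minimal sketch of equation (1) in Python (assuming NumPy; the function name is mine):

        import numpy as np

        def linear_correction(img):
            """Stretch brightness so y_min maps to 0 and y_max maps to 255."""
            y = img.astype(np.float32)
            y_min, y_max = y.min(), y.max()
            if y_max == y_min:          # flat image: nothing to stretch
                return img
            stretched = (y - y_min) * (255.0 - 0.0) / (y_max - y_min)
            return stretched.astype(np.uint8)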

    9) What can we use when linear correction can’t be applied?
    ==========================================================
    Gamma transformation is often used for contrast enhancement; it is the most basic type of non-linear correction.

    This is given as:

    y = c * x^gamma    ... (2)

    By varying the parameter gamma we control the shape of the correction curve: with intensities normalized to [0, 1], gamma < 1 brightens dark regions and gamma > 1 darkens them.
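
    A minimal sketch of equation (2) (assuming NumPy, with intensities normalized to [0, 1] before applying the power):

        import numpy as np

        def gamma_transform(img, c=1.0, gamma=0.5):
            """y = c * x^gamma: gamma < 1 brightens, gamma > 1 darkens."""
            x = img.astype(np.float32) / 255.0   # normalize to [0, 1]
            y = c * np.power(x, gamma)
            return np.clip(y * 255.0, 0, 255).astype(np.uint8)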

    10) How can we treat noisy images?
    ==================================

    Given a camera and a static scene, the easiest way to reduce noise is to capture several images and average them; this works because the noise is random and independent across shots.
    But usually we are presented with only one image. In that case we replace each pixel x_ij with a weighted average of its local neighbourhood.
    This is given as:

    y_ij = f([x_kl]),  x_kl in neighbourhood(x_ij)    ... (3)

    These weights are jointly named the FILTER KERNEL, and the simplest case is equal weights; this particular filter is called the BOX FILTER.
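
    A minimal sketch of both ideas (assuming OpenCV and NumPy; the file names are hypothetical):

        import cv2
        import numpy as np

        img = cv2.imread("noisy.jpg", cv2.IMREAD_GRAYSCALE)

        # Box filter: a kernel of equal weights that sum to 1
        k = 5
        box_kernel = np.ones((k, k), np.float32) / (k * k)
        smoothed = cv2.filter2D(img, -1, box_kernel)

        # With several shots of a static scene, averaging is even simpler:
        # frames = [...]                      # list of aligned grayscale frames
        # mean = np.mean(frames, axis=0).astype(np.uint8)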

    11) How are Filter Kernels used for Image Processing?
    =====================================================
    a) In image processing, a kernel is a convolution matrix. The kernel matrix is convolved with the image matrix to perform operations such as blurring, sharpening, embossing, and edge detection.
    b) The simplest kernel is the IDENTITY FILTER. Applying this filter to an image results in no change to the image (see the sketch after this list).
    But by moving the 1 in the centre of the kernel to another position, the resulting image will be shifted by one pixel.
    c) Generally, any filter kernel with positive weights that sum to 1 will be an image smoothing filter.
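
    A minimal sketch of b) (assuming OpenCV and NumPy):

        import cv2
        import numpy as np

        img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

        # Identity kernel: output equals input
        identity = np.zeros((3, 3), np.float32)
        identity[1, 1] = 1.0

        # Moving the 1 off-centre shifts the image by one pixel
        shift = np.zeros((3, 3), np.float32)
        shift[1, 0] = 1.0

        same = cv2.filter2D(img, -1, identity)     # unchanged
        shifted = cv2.filter2D(img, -1, shift)     # shifted by one pixel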

    12) What is Edge Detection?
    ===========================
    An edge is the intersection of two surfaces, or a discontinuity between two different quantities; in other words, edges are points of rapid change in image intensity.
    Such points can be identified by considering the first derivative of the image intensity: edges correspond to local extrema (maxima and minima) of the derivative.
    The basic idea of edge detection is that when we go from black to white there is a sudden change in the image; by looking for such changes we can detect the edge.
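
    A minimal sketch of derivative-based edge detection (assuming OpenCV; the Sobel operator approximates the first derivative, and the threshold of 100 is an arbitrary illustrative choice):

        import cv2
        import numpy as np

        img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

        # First derivatives of intensity in x and y (Sobel operator)
        gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)

        # A large gradient magnitude marks a rapid intensity change, i.e. an edge
        magnitude = cv2.magnitude(gx, gy)
        edges = (magnitude > 100).astype(np.uint8) * 255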

    13) What is a Canny Edge Detector?
    ==================================
    It is a classic, widely used edge detector.
    It has two key steps :-

    1) Non-maximum suppression :-
    ============================
    During this step, edges that are several pixels wide are thinned down to a single-pixel width.
    The algorithm goes through all the points of the gradient intensity matrix and keeps only the pixels whose value is a maximum along the gradient direction.

    2) Linking Edge pixels together to form continuous boundaries :-
    ===============================================================
    This is done by assuming a point is an edge point, then constructing the tangent to the edge curve (which is normal to the gradient at that point) and using this tangent to predict the next points.
    For linking edge points we use “HYSTERESIS”, which is just two thresholds: pixels above the high threshold are accepted as edges, pixels below the low threshold are rejected, and pixels in between are kept only if they are connected to a pixel above the high threshold.
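
    In practice both steps are packaged in OpenCV’s cv2.Canny; a minimal sketch (the thresholds 100 and 200 are illustrative):

        import cv2

        img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

        # 100 and 200 are the low/high hysteresis thresholds
        edges = cv2.Canny(img, 100, 200)
        cv2.imwrite("edges.jpg", edges)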

    Note:-
    =======
    a) Change in intensity is not the only source of edges. Changes in color or texture also give visible edges in images, but such changes cannot be detected by the image gradient or the Canny operator.
    b) That is a pixel classification problem, and we can use machine learning / deep learning techniques for it.
    c) But for many computer vision tasks the CANNY EDGE DETECTOR proves to be sufficient.
    d) It is often used as a feature extractor, producing features which are later used for image recognition.

    Thanks
