
What is Edge Detection – An Introduction

  • What is Edge Detection?
  • Methods of Edge Detection
  • Drawbacks of applying edge computation
  • Techniques to overcome the drawbacks of edge computation

What is Edge Detection?

Edge detection is an image processing technique used to identify points in a digital image with discontinuities, that is, sharp changes in the image brightness. These points where the image brightness varies sharply are called the edges (or boundaries) of the image.

Contributed by: Satyalakshmi

It is one of the basic steps in image processing, pattern recognition in images, and computer vision. When we process very high-resolution digital images, convolution techniques come to our rescue. Let us understand the convolution operation (commonly represented using *) with an example-

For this example, we use a 3*3 Prewitt filter. When the filter sits over a 3*3 patch (a11 to a33) of the given 6*6 input image, the corresponding output pixel is (a11*1) + (a12*0) + (a13*(-1)) + (a21*1) + (a22*0) + (a23*(-1)) + (a31*1) + (a32*0) + (a33*(-1)). We repeat the convolution horizontally and then vertically across the image to obtain the output image.
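
To make the operation concrete, here is a minimal NumPy sketch of this "valid" (no-padding) convolution; the 6*6 input values and the vertical-edge Prewitt kernel are purely illustrative.

```python
import numpy as np

# Prewitt filter that responds to vertical edges (the kernel used in the example above)
prewitt_x = np.array([[1, 0, -1],
                      [1, 0, -1],
                      [1, 0, -1]], dtype=float)

def convolve_valid(image, kernel):
    """Slide the kernel over the image with no padding and sum the element-wise products."""
    n, r = image.shape[0], kernel.shape[0]
    out = np.zeros((n - r + 1, n - r + 1))        # output is (n-r+1)*(n-r+1)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + r, j:j + r]       # the 3*3 window a11..a33
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)  # illustrative 6*6 input
print(convolve_valid(image, prewitt_x).shape)     # (4, 4)
```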

We would continue the above procedure to get the processed image after edge-detection. But, in the real world, we deal with very high-resolution images for Artificial Intelligence applications. Hence we opt for an algorithm to perform the convolutions, and even use Deep Learning to decide on the best values of the filter. 


Methods of Edge Detection

There are various methods, and the following are some of the most commonly used methods-

  • Prewitt edge detection
  • Sobel edge detection
  • Laplacian edge detection
  • Canny edge detection

Prewitt Edge Detection

This is a commonly used edge detector, mostly used to detect the horizontal and vertical edges in images. The following are the Prewitt edge detection filters-
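
The two standard Prewitt kernels, and one way they might be applied with OpenCV, are sketched below; the file name input.jpg is a placeholder for any image on disk.

```python
import cv2
import numpy as np

# Prewitt kernels: one responds to vertical edges, the other to horizontal edges
prewitt_x = np.array([[1, 0, -1],
                      [1, 0, -1],
                      [1, 0, -1]], dtype=np.float32)
prewitt_y = np.array([[ 1,  1,  1],
                      [ 0,  0,  0],
                      [-1, -1, -1]], dtype=np.float32)

img = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)    # placeholder path
edges_x = cv2.filter2D(img, -1, prewitt_x)              # vertical edges
edges_y = cv2.filter2D(img, -1, prewitt_y)              # horizontal edges
edges = cv2.addWeighted(edges_x, 0.5, edges_y, 0.5, 0)  # simple combination of both responses
```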

Sobel Edge Detection

This method uses a filter that gives more weight to the centre of the filter. It is one of the most commonly used edge detectors: the weighted kernel smooths noise while differentiating, so it produces the edge response in a single pass. The following are the filters used in this method-

The following shows the before and after images of applying Sobel edge detection-
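
A minimal OpenCV sketch of Sobel edge detection follows; the kernel sizes and the placeholder file name input.jpg are assumptions for illustration.

```python
import cv2

img = cv2.imread('input.jpg', cv2.IMREAD_GRAYSCALE)     # placeholder path
blurred = cv2.GaussianBlur(img, (3, 3), 0)               # optional extra smoothing

# cv2.Sobel applies the Sobel kernel, which weights the centre row/column more heavily
sobel_x = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)  # vertical edges
sobel_y = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)  # horizontal edges
magnitude = cv2.magnitude(sobel_x, sobel_y)              # combined edge strength
```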

Laplacian Edge Detection

The Laplacian edge detector differs from the previously discussed detectors. It uses only one filter (also called a kernel) and computes second-order derivatives in a single pass, which makes it sensitive to noise. To reduce this sensitivity, Gaussian smoothing is performed on the image before the Laplacian filter is applied.

The above are some of the commonly used Laplacian edge detector filters that are small in size. The following shows the original minion image and the final image after applying Gaussian smoothing (GaussianBlur() method of cv2) followed by Laplacian detection-
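
Since the article names the GaussianBlur() and Laplacian operators of cv2, a minimal sketch along those lines could look like this; the kernel sizes and the file name minion.jpg are placeholders.

```python
import cv2

img = cv2.imread('minion.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder path

# Smooth first: the second-order derivative amplifies noise
blurred = cv2.GaussianBlur(img, (3, 3), 0)
laplacian = cv2.Laplacian(blurred, cv2.CV_64F, ksize=3)
laplacian = cv2.convertScaleAbs(laplacian)              # convert back to 8-bit for display
```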

Canny Edge Detection

This is one of the most commonly used methods; it is highly effective, though more complex than many other methods. It is a multi-stage algorithm used to detect/identify a wide range of edges. Its stages are:

  • Convert the image to grayscale
  • Reduce noise – since derivative-based edge detection is sensitive to noise, the image is smoothed first.
  • Calculate the gradient – helps identify the edge intensity and direction.
  • Non-maximum suppression – to thin the edges of the image.
  • Double threshold –  to identify the strong, weak and irrelevant pixels in the images.
  • Hysteresis edge tracking – helps convert the weak pixels into strong ones only if they have a strong pixel around them.

The following are the original minion image and the image after applying this method.
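
In OpenCV, most of these stages are wrapped in a single call; the following is a minimal sketch, with the blur kernel, the two hysteresis thresholds, and the file name minion.jpg chosen purely for illustration.

```python
import cv2

img = cv2.imread('minion.jpg', cv2.IMREAD_GRAYSCALE)   # placeholder path
blurred = cv2.GaussianBlur(img, (5, 5), 0)              # noise-reduction step

# threshold1/threshold2 are the double-threshold bounds used during hysteresis tracking
edges = cv2.Canny(blurred, threshold1=100, threshold2=200)
```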

Drawbacks of applying edge computation

  • The size of the output image shrinks.

As seen in the example above, applying a 3*3 filter to a 6*6 input image produces an output image of only 4*4. In general, if the input image is n*n and the filter is r*r, the output image size will be (n-r+1)*(n-r+1).

  • Loss of a lot of valuable information, especially from the edges of the input image.

Because the output image is considerably smaller than the original input image (as discussed above), information towards the borders of the input is lost: the filter passes over the outer pixels of the input far fewer times than it does over the pixels in the middle of the image.

Techniques to overcome the drawbacks of edge computation

To prevent this shrinkage and the resulting loss of information, the input image is usually "padded" (extra border pixels are added) before edge detection is applied, as sketched below.
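
A minimal NumPy sketch of zero-padding, assuming a 6*6 input and a 3*3 filter as in the earlier example; with a border of (r-1)/2 = 1 pixel on each side, the convolution output regains the original 6*6 size.

```python
import numpy as np

image = np.ones((6, 6))           # illustrative 6*6 input
pad = 1                           # (r - 1) / 2 for a 3*3 filter
padded = np.pad(image, pad_width=pad, mode='constant', constant_values=0)

print(padded.shape)               # (8, 8): a 3*3 filter now yields a 6*6 output again
```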

This brings us to the end of the blog. We hope that you enjoyed it and were able to gain some valuable insights. If you wish to learn more such concepts, do check out Great Learning Academy, where you will have access to a number of free courses in emerging technologies such as Artificial Intelligence, Data Science, Cybersecurity, and more.

