Wednesday, February 2, 2022

[SOLVED] Python analyzing a specific area in an image

Issue

This is a conveyor belt and my area of interest is the orange strip:

Image of the conveyor belt: https://i.stack.imgur.com/ofOsN.jpg

I have a camera system that takes pictures of the conveyor belt with its products on it, around 500-2000 images per production run. What I need to ensure is that the orange strip of the conveyor belt is always clear of any objects and not obstructed, so that production runs smoothly. There's more context to this, but just know that in the picture I need the orange strip to be orange, and if it's not, that means it's obstructed.

So I need a program that can read the images and analyze both of the orange strips in the picture, so that when it detects anything obstructing a strip, it sends an error message. What I have in mind is that the program will have rectangles at the top and bottom of the image where the orange strips are, and when a strip is obstructed, an error message will be shown in the image.

I have no clue what this process is called, but currently I'm looking at Hough transform, template matching and colour detection. The issue I'm facing is that since the orange strip is continuous, the code reads numerous duplicates in the image which overlap each other, and the entire image ends up coloured.

What I'd like is to have it analyze only a rectangle at the top and bottom strip, about 2/3 of the image's length from the centre. Please help, and if anything is unclear, don't hesitate to ask for the details you need.

Edited: added a picture of an example of an obstruction. The white dots could be dust or other impurities.

This is an example of an obstruction: example of obstruction

This is my progress so far: progress segmentation

I would like to analyze the area at the red rectangle only if possible.

This is the code that I'm using:

import cv2
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib import colors
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('sample02.jpg')              # Read image
img = cv2.resize(img, (672, 672))             # Resize image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep only near-black pixels (zero hue/saturation, low brightness):
# the dark bars of the belt
lower_gray = np.array([0, 0, 0], np.uint8)
upper_gray = np.array([0, 0, 45], np.uint8)
mask_gray = cv2.inRange(hsv, lower_gray, upper_gray)
img_res = cv2.bitwise_and(img, img, mask=mask_gray)

cv2.imshow('detected', mask_gray)
cv2.imwrite('5.jpg', mask_gray)
cv2.waitKey(0)
cv2.destroyAllWindows()

Finally able to isolate the two bars and detect foreign matter

I'm able to isolate the bars, and the foreign matter/obstruction shows up clearly.

This is the code that I used:

# importing cv2
import cv2
import numpy as np

# Reading an image in default mode
img = cv2.imread('f06.bmp')                   # Read image
img = cv2.resize(img, (672, 672))             # Resize image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_gray = np.array([0, 0, 0], np.uint8)
upper_gray = np.array([0, 0, 45], np.uint8)
mask_gray = cv2.inRange(hsv, lower_gray, upper_gray)

# Black out the regions I'm not interested in, leaving only the two
# bar areas visible in the mask. Each entry is the top-left and
# bottom-right corner of a filled rectangle.
rectangles = [
    ((0, 0),   (672, 200)),   # 1) top
    ((0, 240), (672, 478)),   # 2) middle
    ((0, 515), (672, 672)),   # 3) bottom
    ((0, 180), (159, 672)),   # 4) left
    ((485, 0), (672, 672)),   # 5) right
]
for start_point, end_point in rectangles:
    # color 0 (black), thickness -1 (filled); cv2.rectangle() draws
    # in place on mask_gray, so no copies need to be combined
    cv2.rectangle(mask_gray, start_point, end_point, 0, -1)

image = mask_gray

# Displaying the image
cv2.imshow('test', image)
cv2.imwrite('almost02.jpg', image)
cv2.waitKey(0)
cv2.destroyAllWindows()

It's a little rigorous and long because I manually inserted rectangles over the areas I'm not interested in; even with the mask there was still a lot of noise, so this is the best I could come up with.

It's able to count the white pixels now!

The code i used to count the white pixels:

import cv2
import numpy as np

img = cv2.imread('almost05.jpg', cv2.IMREAD_GRAYSCALE)
n_white_pix = np.sum(img == 255)   # count the white (255) pixels in the mask
print('Number of white pixels:', n_white_pix)

I tested with pictures that are good and bad. The good ones have higher white pixel counts, ranging from 437,000-439,000, and the bad ones have fewer, at 435,000-436,000.

I'll have to play around with the dilation or other parameters to make the difference more apparent, so as to avoid false positives.
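Based on the counts above, a cut-off in the gap between the good and bad ranges can act as the pass/fail test. A minimal sketch, assuming a threshold of 436,500 (the helper name and the toy mask below are illustrative, not from the real data):

```python
import numpy as np

# Hypothetical cut-off in the gap between the good (437k-439k)
# and bad (435k-436k) white pixel counts reported above
THRESHOLD = 436500

def is_clear(mask):
    """mask: 8-bit single-channel image where the bars are white (255)."""
    white = int(np.sum(mask == 255))
    return white >= THRESHOLD

# Toy example: a 672x672 mask that is almost entirely white
mask = np.full((672, 672), 255, dtype=np.uint8)
mask[0:20, 0:100] = 0            # a small black patch
print(is_clear(mask))            # the count is well above the cut-off
```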


Solution

I would start testing with the simplest possible options first, then add other features on top if required.

Here are a couple of ideas to test:

1. colour detection

  • Use cv2.inRange() to segment the two orange bars
  • Use cv2.countNonZero() to count the number of orange pixels before any occlusions
  • If there is an occlusion there will be fewer orange pixels: you could use a threshold value with a bit of tolerance and test

2. connected components with stats

  • Use cv2.inRange() to segment the two orange bars
  • Use cv2.connectedComponentsWithStats() to isolate the orange bars
  • Use the stats (width, height, area) to check that there are indeed two orange bars with the expected width/height ratio (if that doesn't hold, or the orange bars break into more than two connected components, potentially something is in the way)

You could do something similar with findContours() (and you could even fit ellipses/rectangles, get angles and other metrics), but bear in mind it will be a little slower on a Raspberry Pi. Connected components is a bit more limited, working with pixels only, but it's faster and hopefully good enough.

Here's a super basic example:

#!/usr/bin/env python
import cv2
import numpy as np

# test image (4 corners, a ring and a + at the centre)
src = np.array([
    [255,255,  0,  0,  0,  0,  0,255,255],
    [255,  0,  0,255,255,255,  0,  0,255],
    [  0,  0,255,  0,  0,  0,255,  0,  0],
    [  0,255,  0,  0,255,  0,  0,255,  0],
    [  0,255,  0,255,255,255,  0,255,  0],
    [  0,255,  0,  0,255,  0,  0,255,  0],
    [  0,  0,255,  0,  0,  0,255,  0,  0],
    [255,  0,  0,255,255,255,  0,  0,255],
    [255,255,  0,  0,  0,  0,  0,255,255]
    ],dtype="uint8")

# connectivity type: 4 (N,E,S,W) or 8 (N,NE,E,SE,S,SW,W,NW)
connectivity = 8
# compute connected components with stats
(labelCount,labels,stats,centroids) = cv2.connectedComponentsWithStats(src, connectivity=connectivity)
# number of labels
print("total number of labels",labelCount)
# labelled image
print("labels")
print(labels)
#  stats
print("stats")
print(stats)
# centroid matrix
print("centroids")
print(centroids)

# visualise, 42 = (255 // (labelCount-1))
labelsVis = (np.ones((9,9),dtype="uint8") * labels * 42).astype('uint8')
cv2.imshow('labels',labelsVis)
cv2.waitKey(0)

# just connected components (no stats), 4 connectivity
(labelCount,labels) = cv2.connectedComponents(src, connectivity=4)
print("labels")
print(labels)

3. Use a ROI and image differencing / background subtraction

Given a static camera and lighting, if you're simply interested in whether something goes outside the bounds, you could do something like:

  • mask the central region of the conveyor belt, leaving only the top and bottom areas (including the orange bands) visible, and store (in memory or on disk) an image with no objects on the orange bands
  • in the main processing loop, take the absolute difference between the new masked camera frame and the one stored previously
  • any objects in the way will show up white on a black background: you can use countNonZero() / connectedComponentsWithStats() / findContours() / etc. to set up the threshold condition

OpenCV comes with a pretty decent Background Subtractor. It might be worth a try, though it will be more computationally expensive and you'll need to tweak the parameters (history size, rate at which it forgets the background, etc.) to best suit your project.

You're on the right track. Personally I wouldn't bother with template matching for this task (I'd save it for detecting icons on a desktop, or a very stable scenario where the template will always look the same in the target image when present). I haven't had enough good experiences with HoughLines: you could try it, find the right parameters, and hope the line detection remains consistent.



Answered By - George Profenza
Answer Checked By - Cary Denson (WPSolving Admin)