In this week’s exercise, the task was to implement background subtraction.
The base code for reading the video was copied from the previous exercise; the only significant difference is the use of a still image (the starting frame of the video) that is subtracted from each subsequent frame.
Basic algorithm for how the program works:
First, the program takes the starting frame of the video as model_frame, which serves as the reference for the background. A drawback of this approach is that if an object is present in model_frame, then once that object is no longer in the current frame it still shows up in the subtraction result as if it were there, a ghost left over from the background. Next, each new frame is fetched and compared to model_frame, and their differences are written into the output frame. For each pixel, the greyscale intensity of the current frame is compared against a threshold band around the model: if it is above model_intensity + model_intensity * 0.5 or below model_intensity – model_intensity * 0.5 (that is, more than 50% brighter or darker than the background), the pixel in output_frame is set to 255; otherwise it is set to 0. Finally, to make the foreground content of output_frame more visible, the differences are dilated.
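The steps above can be sketched in NumPy as follows. The names model_frame and output_frame and the ±50% intensity band come from the description; the 3×3 structuring element and the single dilation pass are assumptions, since the write-up does not specify them.

```python
import numpy as np

def subtract_background(model_frame, frame, ratio=0.5, dilate_iters=1):
    """Sketch of the background subtraction described above.

    model_frame, frame: 2-D uint8 greyscale arrays of equal shape.
    A pixel is foreground (255) when its intensity falls outside
    model_intensity +/- model_intensity * ratio; otherwise 0.
    The 3x3 kernel and iteration count are assumed, not given in the text.
    """
    model = model_frame.astype(np.int32)
    cur = frame.astype(np.int32)
    lo = model - model * ratio          # model_intensity - 50%
    hi = model + model * ratio          # model_intensity + 50%
    output_frame = np.where((cur < lo) | (cur > hi), 255, 0).astype(np.uint8)
    # Dilate with a 3x3 structuring element to thicken the foreground blobs.
    for _ in range(dilate_iters):
        padded = np.pad(output_frame, 1, mode="constant")
        output_frame = np.max(
            [padded[i:i + output_frame.shape[0], j:j + output_frame.shape[1]]
             for i in range(3) for j in range(3)], axis=0)
    return output_frame
```

With this rule a pixel that merely matches the background stays 0, while a sufficiently brighter or darker pixel becomes 255 and is then grown by the dilation.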
The background image looks like this.
while the subtracted image from a certain frame looks like this.
One of the cons of background subtraction shows up when the lighting changes. In our case this happened because a groupmate from another subject entered the video without knowing it.
She looks like pop art.
This is the link to the video: