

Human Tracking with a Background Subtractor in Python

Posted in Human Tracking

    Michael gave me an existing piece of C++ code, written by someone else and improved by him, that performs human tracking; my job is to port this code to Python for our own uses. The C++ version worked "scarily well" according to Michael, and since most of the operations his code uses exist in Python, I hope that my version will function equally well. One thing to note is that the camera Michael used was much higher quality and more sensitive than my webcam. Essentially, what I've got going on right now is taking an image such as this from a camera feed: 

 

    Applying Canny edge detection to find the edges of objects in both the background and foreground, plus other morphological transforms to clean up the image (this is the part where my code differs from Michael's, because of the camera quality):

 

    And using a background subtraction algorithm that learns from the camera feed what the background looks like, leaving behind only the foreground (in this case, moving objects such as me waving my hands or moving my rolling desk chair around). The low image quality is a result of this being a screenshot of a video of me moving, not of the program's quality.

 

 Now that we've gotten an image and cleaned it up quite a bit, we can start looking for objects. An object we want to track has to be large enough to be something other than noise but smaller than the entire screen (the actual maximum is 3/2 the screen size). So the primary way we decide whether an object is worth tracking is by its area. An object's area can be found using its moments. Image moments are hard to explain simply because the technical definition involves so much calculus; I think the best way to think about them without getting too complex is as an average of pixel intensities. The function cv2.moments() returns a dictionary of these values, which can then be used to calculate other helpful things like area and center of mass.

  So what we do here is find the moments of each contour (a connected outline) in the image. If the area of that contour is within the range defined above, we calculate the object's top-left coordinates using its height, width, and center of mass, all of which can be derived from the moments dictionary. From there, we can draw a rectangle covering the object's area, allowing us to track it. Here's what this process looks like when implemented: 

 

  This clearly requires some fine-tuning. Right now, the resulting frames are not picking up a single object (me in my desk chair) but rather individual movements of my arms and head. The fix for this is to add another condition for finding an object: not only must the selected contour have an area within the range mentioned above, it must also be greater than the area of the last object found, which is stored in the variable refArea. This variable is initialized at zero and changes as more objects are found. With this new condition, the result now looks like this: 
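The refArea condition boils down to a running maximum over the in-range contours. A hypothetical sketch, reduced to plain contour areas (the bounds are assumed values):

```python
def pick_largest(contour_areas, min_area=500, max_area=40000):
    """Return the index of the largest in-range contour, or None."""
    ref_area = 0          # initialized at zero, grows as objects are found
    best = None
    for i, area in enumerate(contour_areas):
        # The contour must be in range AND bigger than the last object found.
        if min_area < area < max_area and area > ref_area:
            ref_area = area
            best = i
    return best
```

Small fragments (arms, head) lose out to the biggest blob in range, so only one object wins per pass.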

 

  Now the camera picks up more of my image, but it is still very inaccurate and does not follow me continuously frame by frame. This program needs even more tweaking. One minor fix is resetting the reference area for each frame; this way, an object moving away from the camera can still be tracked despite its area decreasing. Another simple adjustment was increasing the lower bound on object area, so that fewer small random movements are interpreted incorrectly. An example of this would be the webcam picking up the logo on my T-shirt as something different from my overall shape. However, the resulting video still did not show the program tracking objects continuously for more than a few frames at a time. 
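Both tweaks can be sketched in one hypothetical frame loop: refArea resets every frame rather than persisting across the run, and the raised lower bound filters out things like the shirt logo. The specific bounds and the frames-as-area-lists representation are assumptions for illustration.

```python
MIN_AREA, MAX_AREA = 2000, 40000   # assumed values; lower bound raised

def track(frames):
    """For each frame (a list of contour areas), pick the object to track."""
    picks = []
    for areas in frames:
        ref_area = 0               # reset per frame, not once per run,
        best = None                # so a receding object stays tracked
        for i, area in enumerate(areas):
            if MIN_AREA < area < MAX_AREA and area > ref_area:
                ref_area, best = area, i
        picks.append(best)
    return picks
```

With the per-frame reset, an object whose area shrinks from 9000 to 5000 pixels across two frames is still picked in both, instead of being rejected for being smaller than the previous find.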

    In other news, I'm going to have this project and its continuation added to my class schedule for the fall as EE360, which is listed as "Special Problems in Electrical Engineering" but really just means "Undergrad Research for ECE Credit." What's cool about this is that the class can actually count toward my BSEE degree as an academic enrichment class. It's a great opportunity for me to have a major-sequence class under my belt despite being a lowly sophomore. It also means that every class I take next semester will be STEM-related, which is going to be a lot of fun, though probably very difficult.