I created my own version of the code and began testing how accurate and responsive the model was across different kinds of movement. I found that it struggled with video feeds that had cluttered backgrounds, and beyond that, the segmentation itself wasn’t precise enough.
For my arms and hands, it segmented a loose general region rather than a tight boundary. Even though Blobby will ultimately render only a ‘blob’ version of the body contour, I want the underlying model to be accurate so that the blobs track the user’s movements as expected.
My takeaway was that this model wasn’t what I was looking for, but I was glad to have gotten hands-on experience with body segmentation on a live video feed.