week 12 · apr 23, 2025

This week, I added a generative tile pattern and investigated the jitter effect I have been seeing. The generative pattern is relatively simple: the goal is to ensure that no two neighboring canvases display the same webcam (diagonal neighbors are fine). The pattern starts at the top-left canvas and assigns it a random webcam, then works its way from left to right and top to bottom. For each canvas, it checks the webcams assigned to the canvas above and the canvas to the left, and randomly picks a webcam different from both. It also keeps track of how many canvases each webcam is assigned to, and weights the random choice so that every webcam has a good chance of being represented. I want to iterate on the algorithm by adding rotations, so that some canvases show blobs upside down or sideways.
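The assignment pass described above can be sketched roughly like this. This is a minimal illustration, not the project's actual code; the function name, grid representation, and the inverse-count weighting scheme are my own assumptions about how the balancing might work:

```python
import random

def assign_webcams(rows, cols, num_cams):
    """Assign a webcam index to each canvas in a grid so that no canvas
    shares a webcam with the canvas above it or to its left
    (diagonal neighbors may match). Assumes num_cams >= 3 so a valid
    candidate always exists."""
    grid = [[None] * cols for _ in range(rows)]
    counts = [0] * num_cams  # canvases assigned to each webcam so far

    for r in range(rows):
        for c in range(cols):
            # Webcams used by the top and left neighbors are forbidden.
            forbidden = set()
            if r > 0:
                forbidden.add(grid[r - 1][c])
            if c > 0:
                forbidden.add(grid[r][c - 1])
            candidates = [i for i in range(num_cams) if i not in forbidden]
            # Weight under-represented webcams more heavily so every
            # webcam has a good chance of appearing (hypothetical scheme).
            weights = [1.0 / (1 + counts[i]) for i in candidates]
            choice = random.choices(candidates, weights=weights)[0]
            grid[r][c] = choice
            counts[choice] += 1
    return grid
```

Note that with only two webcams the greedy pass can paint itself into a corner (the top and left neighbors may forbid both), so three or more sources are needed for this simple version to always succeed.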

For the jitter problem, I used the testing pipeline I created last week. My goal was to better understand how the current blobbification algorithm transforms the webcam feed. I found that tiny changes in the segmentation mask produced by BlazePose lead to big differences in the resulting blob. For example, if the head region of the segmentation mask shifts slightly, the approxPolyDP function might choose completely different points to represent the top of the polygon, leading to a different final blob. I found that by including more points in the polygon, I could lessen the intensity of this effect. However, if I include too many points, the resulting blob hugs the original silhouette too closely, defeating the purpose of the algorithm entirely. I didn't end up making any lasting changes to the algorithm, but I now understand its behavior well enough to make adjustments as necessary.
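The trade-off here comes from the simplification tolerance: a looser tolerance keeps fewer polygon points (blobbier, but jittery), while a tighter one keeps more (stable, but close to the raw silhouette). The sketch below uses a pure-Python Ramer-Douglas-Peucker routine as an illustrative stand-in for OpenCV's `cv2.approxPolyDP`, which implements the same idea; the contour and epsilon values are made up for demonstration:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification, the algorithm
    behind cv2.approxPolyDP. Larger epsilon discards more points."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        # Perpendicular distance from points[i] to the endpoint chord.
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = math.hypot(x2 - x1, y2 - y1) or 1e-9
        d = num / den
        if d > dmax:
            dmax, idx = d, i
    if dmax > epsilon:
        # Farthest point survives; simplify each side recursively.
        left = rdp(points[: idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# A toy "silhouette top" with small bumps, like mask noise near the head.
contour = [(0, 0), (1, 0.1), (2, 0), (3, 0.1), (4, 0)]
coarse = rdp(contour, 0.5)   # loose tolerance: bumps vanish entirely
fine = rdp(contour, 0.05)    # tight tolerance: bumps are preserved
```

With the loose tolerance the whole contour collapses to its two endpoints, so a sub-tolerance wobble in the mask can flip which points get chosen; with the tight tolerance the output tracks the input closely and stays stable, but the blob stops looking abstract.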
©Aditi Gupta
New York University
Integrated Design & Media (IDM) Graduate Thesis