“Iteration X: a canvasUnbound” Prototype

*To have panels that disappear when triggered
*To have that disappearance reveal an underlying theme/movie
*To use Oculus Quest footage as the reveal movie

*Everything worked in my office, but when I moved to the basement I had to add a few more features to make it work. I hope the version it ended up at will be able to travel with only slight modifications. *It is very difficult to create an interactive system without a body in the space to test.
*The Oculus Quest doesn't work without light. I did get it working with directional light, but then you couldn't see the projection. So in the final video I opted to just use a movie; knowing that it did work is good enough for me at this point, and when/if I'm able to use directional light that doesn't affect the projection, we can try it again. The upside of this is that I can interact with the system more: if I'm painting in VR, I can't see when and if I make the panels go away, or where I need to dance to make that happen.

Moving forward
I’m excited to be presenting this in Spring 2021 on a grander scale. I will take some time to solidify a dance score with more repetition and a longer escalation that allows people to see the connections (if they are looking for them).

I may use the VR headset, but I am also enjoying the live reveal of a new white score specific to the space in which I am performing.

I will change/flip the panels to trigger on the same side as myself (the performer). If the space is large enough, I think it will be easy to see what I am triggering when the panels are on my side. In my basement, I chose to trigger the opposite side because my shadow covered the whole image when I triggered the same side of the stage.

The 20 videos projection-mapped to a template I made in Photoshop.

The test panel –> I used these Projector actors to show the whole image that the Orbbec was seeing, and the slice of the frame I was using to trigger the NDI Tracker. I used the Picture Player to project my template for the projection mapping above.
The actors above start from the NDI Tracker’s depth video -> Chroma Key (turns the tracked data into a color, in this case red) -> HSL Adjust (changes red to white) -> Zoomer (zooms the edges of the frame to the exact area of space I want to track) -> IDlab Effect Horizontal Tilt Shift (let me stretch the body so it would cover the whole sliver we are tracking, whether upstage or downstage) -> Luminance Key (I actually don’t think it does anything now, but we used it earlier to trim the top (closest) and bottom (farthest) edges of the space I wanted to track) -> then to the Panner seen below.
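The chain above can be sketched in plain Python to show the idea behind it: mask out the body from the depth image, crop to the sliver of stage being tracked, and reduce that sliver to a single brightness number. All function names here are hypothetical, the real work is done by Isadora actors, and the chroma-key and HSL steps are collapsed into one binary mask for simplicity.

```python
# Toy sketch of the depth-to-brightness chain, assuming a tiny
# grayscale frame represented as a list of rows of 0-255 values.

def body_mask(frame, threshold=10):
    # Chroma Key + HSL Adjust, simplified: pixels where the depth
    # camera saw a body become white (255), everything else black.
    return [[255 if px > threshold else 0 for px in row] for row in frame]

def zoom_to_region(frame, left, right):
    # Zoomer, simplified: crop each row to the sliver of stage we track.
    return [row[left:right] for row in frame]

def brightness(frame):
    # Calc Brightness, simplified: average pixel value on a 0-100 scale,
    # which is what feeds the Limit Scale Value trigger.
    flat = [px for row in frame for px in row]
    return 100 * sum(flat) / (255 * len(flat))
```

When a body fills the tracked sliver, `brightness` rises toward 100; an empty sliver stays near 0, which is why a simple numeric window can act as the trigger.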
This is almost the whole patch. Continued from above, the Panner let me isolate the exact vertical space I wanted to track -> Calc Brightness made that brighter -> the Limit Scale Value actor created the trigger when the value fell between the two numbers requested (45–55). Here you see four of the 20 progressions to the projection-mapped panels, which are all triggered the same way; once I move the Panner / Calc Brightness / Limit Scale Value into a user actor, it will be easy to adjust for multiple spaces.
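The trigger step itself is just a numeric window test. A minimal sketch, with a hypothetical function name standing in for the Limit Scale Value actor:

```python
def limit_scale_trigger(brightness, low=45.0, high=55.0):
    """Fire a trigger only when the Calc Brightness output falls
    inside the requested window (45-55 in my patch)."""
    return low <= brightness <= high

# A body partially filling the tracked sliver lands in the window
# and fires; an empty or fully covered sliver does not.
```

This is also why moving these values into a user actor matters: `low` and `high` become two inputs to retune per venue instead of 20 separate actors to edit.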
Here is my user actor that counts how many times each panel is triggered. It is attached to the inactive/red lines in the image above. The range from Calc Brightness (the same range that goes into the Limit Scale Value) comes to this input; each trigger advances the counter, and at 10 the Inside Range actor spits out a trigger that turns off the active parameter on that panel’s Projector. As I move through my score, this actor deletes the panels one by one to reveal an underlying movie.
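The logic of that counting user actor can be sketched as a small class (names hypothetical; in the patch this is a counter feeding an Inside Range actor that flips the Projector’s active parameter):

```python
class PanelCounter:
    """Sketch of the per-panel user actor: count triggers and
    deactivate the panel's Projector after the 10th hit."""

    def __init__(self, limit=10):
        self.limit = limit
        self.count = 0
        self.active = True  # mirrors the Projector's active parameter

    def trigger(self):
        # Each time the dancer fires this panel's trigger...
        if not self.active:
            return
        self.count += 1
        if self.count >= self.limit:
            # ...the 10th hit turns the panel off for good,
            # revealing the movie underneath.
            self.active = False
```

One instance per panel, so each of the 20 panels keeps its own count and disappears independently as the score progresses.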
I recorded both 10 seconds of stationary and 10 seconds of slowly moving video in the Oculus Quest (I wasn’t sure which would look better) and cut them into short clips in DaVinci Resolve. I chose to use only the moving clips.

I converted all the movies to the HAP codec, and it cut my 450% load in Isadora to 140%. This decision was prompted not by crashing but by freezing whenever I clicked through tabs.

After some research on HAP, I found a command-line method using ffmpeg. My partner helped me do a batch on all my videos at the same time with the addition shown above.
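A batch like that can be sketched in Python around ffmpeg’s HAP encoder (`-c:v hap`). The paths and output naming here are hypothetical, and my actual flags may have differed:

```python
import subprocess
from pathlib import Path

def hap_command(src: Path, dst_dir: Path) -> list:
    """Build the ffmpeg command to convert one movie to HAP.
    -c:v hap selects ffmpeg's HAP encoder."""
    dst = dst_dir / (src.stem + "_hap.mov")
    return ["ffmpeg", "-i", str(src), "-c:v", "hap", str(dst)]

def batch_convert(src_dir: Path, dst_dir: Path):
    # Run the conversion on every .mov in the source folder.
    dst_dir.mkdir(exist_ok=True)
    for src in sorted(src_dir.glob("*.mov")):
        subprocess.run(hap_command(src, dst_dir), check=True)
```

HAP trades larger files for GPU-friendly decoding, which is why the Isadora load dropped so dramatically.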