Dear Synthesis Embodied Learning group
Here’s the video + transcript of the meeting:
Topic: Synthesis Embodied Learning Spaces
Date: May 15, 2020 01:48 PM Arizona
Zoom Meeting Recording
Access Password: SynthesisB21!
GDrive: Embodied Learning
@Ivan and folks interested in tracking freehand gesture: for the freehand part of Garrett's Diagrammatic, let’s consider
GESTURE FOLLOWER: http://rapidmix.goldsmithsdigital.com/features/gesture-follower/
The Gesture Follower allows for real-time comparison between a gesture performed live and a set of prerecorded examples. The implementation can be seen as a hybrid between DTW (Dynamic Time Warping) and HMMs (Hidden Markov Models). The Gesture Follower corresponds to a different interaction paradigm motivated by applications in expressive visuals and sound control: the system outputs parameters characterising the performed gesture “continuously” (i.e. on a fine temporal grain).
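To make the matching idea concrete, here is a minimal Python/NumPy sketch of plain DTW comparison between a live gesture trace and a set of prerecorded templates. This is not the Gesture Follower itself (which hybridizes DTW with HMMs and reports alignment continuously while the gesture unfolds); the function names, labels, and array shapes below are my own assumptions for illustration.

```python
# Illustrative sketch only: classic DTW matching of a completed live gesture
# trace against prerecorded templates. The real Gesture Follower blends DTW
# with HMMs and outputs parameters continuously during the gesture; this
# sketch only compares finished traces. Names and shapes are assumptions.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic Time Warping distance between two gesture traces.

    a, b: arrays of shape (n_samples, n_dims), e.g. (x, y) points over time.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local point-to-point cost
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m] / (n + m)  # length-normalized alignment cost

def closest_template(live: np.ndarray, templates: dict) -> tuple:
    """Return (label, distance) of the prerecorded template closest to `live`."""
    scores = {label: dtw_distance(live, t) for label, t in templates.items()}
    best = min(scores, key=scores.get)
    return best, scores[best]

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 50)[:, None]
    templates = {
        "swipe_right": np.hstack([t, np.zeros_like(t)]),  # horizontal stroke
        "circle": np.hstack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]),
    }
    # A noisy horizontal stroke sampled at a different rate than the template.
    live = np.hstack([np.linspace(0.0, 1.0, 37)[:, None],
                      0.05 * np.random.randn(37, 1)])
    print(closest_template(live, templates))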
Omar is interested in AR markup of public space; he is looking at Microsoft’s AR toolkit and has built his own back end with a Google Earth interface. He is just finishing his MS at NYU.
Connor is our in-house expert and a principal author of the SC kit.
Xin Wei
___________________________________________
Professor Sha Xin Wei | skype: shaxinwei | mobile: +1-650-815-9962 | asu.zoom.us/my/shaxinwei
________________________________________________