
Below you’ll find my research involving technologies such as AR, VR, projection mapping, 3D scanning, and motion capture.

Google Glanceable AR

Collaboration between computer scientists and artists at Virginia Tech and Georgia Tech. The project, funded through a grant from Google, investigated content management for always-on augmented reality interfaces. My roles in the project were to investigate ways of making AR unobtrusive for daily use and to design mockups for content placement in 3D space and content summoning. In addition, I investigated ways of maintaining clear visibility of text in AR regardless of the real-world background, kitbashed virtual environments for testing the project in VR, and ran user studies on the optimal depth and gaze angle at which to place content.
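
One common way to keep text legible over an arbitrary background is the same logic used in WCAG contrast checking: sample the real-world pixels behind a label, estimate their luminance, and flip the text between black and white depending on which gives the higher contrast ratio. Below is a minimal Python sketch of that idea; it is not the study’s actual implementation, and the helper names are hypothetical.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an sRGB triple with channels in [0, 1]."""
    def channel(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(l1, l2):
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

def pick_text_color(background_pixels):
    """background_pixels: list of (r, g, b) tuples sampled behind the label."""
    avg = [sum(p[i] for p in background_pixels) / len(background_pixels)
           for i in range(3)]
    bg_lum = relative_luminance(avg)
    # Compare contrast against pure white (luminance 1.0) and black (0.0).
    vs_white = contrast_ratio(1.0, bg_lum)
    vs_black = contrast_ratio(0.0, bg_lum)
    return (1.0, 1.0, 1.0) if vs_white >= vs_black else (0.0, 0.0, 0.0)
```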

Facebook VR

Collaboration between Facebook and computer scientists and artists at Virginia Tech. The goal was to create high-fidelity models of three rooms with detailed furniture objects for use in VR, along with procedural programming to reposition the objects for different scenarios. My role in the project was to oversee a group of graduate students in 3D scanning staged real rooms, capturing objects with photogrammetry, retopologizing and texturing them, and creating a virtual set of each room at exact real-world scale, with every object positioned and oriented to match its real-life counterpart. I also contributed to each step of this process myself and created online tutorials to assist the other students working on the project in these roles.
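
As a rough illustration of the procedural repositioning step, a script can drive each scanned object to a stored position and rotation per scenario. A minimal Maya Python sketch, with hypothetical object and scenario names; the project’s actual tooling was more involved.

```python
import maya.cmds as cmds

# Each scenario maps an object name to a (translation, Y-rotation) slot.
SCENARIOS = {
    "reading_nook": {"armchair_01": ((120, 0, 40), 90),
                     "floor_lamp_01": ((150, 0, 55), 0)},
    "movie_night":  {"armchair_01": ((60, 0, 200), 180),
                     "floor_lamp_01": ((20, 0, 180), 0)},
}

def apply_scenario(name):
    """Move and orient each object to its slot in the chosen layout."""
    for obj, ((tx, ty, tz), ry) in SCENARIOS[name].items():
        if cmds.objExists(obj):
            cmds.xform(obj, worldSpace=True, translation=(tx, ty, tz))
            cmds.setAttr(obj + ".rotateY", ry)

apply_scenario("movie_night")
```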

Smithsonian ACCelerate Projection Mapping

Collaboration between Virginia Tech’s Institute for Creativity, Arts, and Technology (ICAT) and the Smithsonian’s National Museum of American History. The 3D projection mapping piece was showcased above the entrance of the museum during the ACCelerate festival on October 12th and 13th, 2017, using two 20,000-lumen projectors. My role in the project was to create the 4-minute 3D animation projected onto the side of the museum.
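
At its core, mapping flat content onto architecture from a projector’s point of view is a homography: warp each frame so its corners land on measured features of the facade. A Python sketch of that idea with OpenCV, using made-up corner coordinates; a real show is typically aligned with dedicated projection-mapping software rather than a one-off script like this.

```python
import cv2
import numpy as np

frame = cv2.imread("animation_frame.png")  # hypothetical rendered frame
h, w = frame.shape[:2]

# Corners of the source frame.
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
# Where those corners should land in projector space (measured on site).
dst = np.float32([[105, 80], [1810, 60], [1835, 1010], [90, 1040]])

H = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(frame, H, (1920, 1080))  # projector resolution
cv2.imwrite("warped_frame.png", warped)
```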

Dig Hill 80

Collaboration among an international, transdisciplinary team of archaeologists, historians, and artists. The goal was to excavate and digitally preserve a World War I trench fortress found near the village of Wijtschate, Belgium. My role in the project was to convert point-cloud and handheld 3D scan data into virtual 3D models, retopologize and render the models, and composite the renders into 360° VR video of the battle site for an immersive educational experience. I also created online tutorials to assist the other students working on the project in these roles. This research was presented in London, New Zealand, and Japan.
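
For converting point clouds into meshes, Poisson surface reconstruction is a standard technique; below is a minimal Python sketch using the Open3D library. The file names are placeholders, and the project’s actual pipeline used its own mix of tools.

```python
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("trench_scan.ply")  # hypothetical scan file
pcd.estimate_normals()                            # Poisson needs normals

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Drop the lowest-density vertices, which tend to be hallucinated surface.
d = np.asarray(densities)
mesh.remove_vertices_by_mask(d < np.quantile(d, 0.05))

o3d.io.write_triangle_mesh("trench_mesh.obj", mesh)
```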

Here is the Kickstarter page that funded the project.

Vet Med VR Dog

Research project creating a detailed 3D model of a dog with multiple high-accuracy anatomical layers, for use in multiple courses in the Virginia Tech veterinary curriculum via the web, virtual reality (VR), and augmented reality (AR). My roles in the project were to retopologize high-density scan data of the dog’s bones and organs for VR viewing in the Unity game engine, and to program an AR app that displayed the 3D data alongside educational text covering different parts of each bone and organ, for use as a teaching tool in veterinary classes.
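
One common way to bring high-density scan meshes down to a VR-friendly polycount is an automated decimation pass, often used alongside manual retopology. A Maya Python sketch of that kind of pass; the mesh names and reduction target are illustrative, not the project’s actual settings.

```python
import maya.cmds as cmds

def reduce_for_vr(mesh, keep_percent=10):
    """Remove triangles until roughly keep_percent of the original remain."""
    cmds.polyReduce(mesh,
                    percentage=100 - keep_percent,
                    keepQuadsWeight=0.5)

# Hypothetical scan meshes in the scene.
for scan in ("femur_scan", "liver_scan"):
    if cmds.objExists(scan):
        reduce_for_vr(scan, keep_percent=10)
```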

Blacksburg 16 Squares

Research project to develop an online resource and an on-site augmented reality and virtual reality experience of the original, historic 16 squares of Blacksburg, Virginia. My roles in the project were to photograph the buildings of the 16 squares, 3D model and texture them, 3D print the models, and projection map the details back onto the prints.
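
One practical detail in a pipeline like this is printing every building at a consistent architectural scale and verifying the meshes are watertight before slicing. A hypothetical Python sketch using the trimesh library; the scale factor and file names are made up.

```python
import trimesh

SCALE = 1.0 / 500.0  # real-world meters to 1:500 print scale

mesh = trimesh.load("building_01.obj")   # hypothetical building model
mesh.apply_scale(SCALE * 1000.0)         # slicers expect millimeters

# Non-watertight meshes usually fail or misprint, so check before export.
print("watertight:", mesh.is_watertight)
mesh.export("building_01_print.stl")
```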

Motion Capture Sports Simulation

Research project collecting motion capture data of soccer and football players for use in studying sports injuries and providing virtual training simulations. My initial role in the project was to establish a motion capture pipeline for students and faculty to use around campus and to create a step-by-step PDF guide so anyone could follow in my footsteps. My secondary role was to capture and clean up motion capture data of the players, to provide convincing animations for the AI in the simulations.
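
A routine part of mocap cleanup is filling short gaps where markers were occluded by interpolating their trajectories. A minimal pandas sketch of that step on made-up data; real cleanup also deals with swapped markers and filtering.

```python
import numpy as np
import pandas as pd

# Hypothetical marker trajectory (one row per frame) with NaNs where the
# marker was occluded.
traj = pd.DataFrame({
    "x": [0.00, 0.10, np.nan, np.nan, 0.40, 0.55],
    "y": [1.00, 1.00, np.nan, np.nan, 1.20, 1.25],
    "z": [0.50, 0.60, np.nan, np.nan, 0.90, 1.00],
})

# Linearly interpolate gaps up to 10 frames long; leading and trailing
# NaNs are left alone since there is nothing to anchor them to.
filled = traj.interpolate(method="linear", limit=10, limit_area="inside")
print(filled)
```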

Transformation Rig

Research project developing a rig that could blend between bipedal and quadrupedal motion capture data. A unique mocap marker plan was created for the dog, mimicking the naming conventions of the standard Qualisys human marker set. A human and a dog were recorded separately with Qualisys, and the captured data points were then cleaned and processed in Maya. Joints were manually created and parented to both sets of data, and the influence of the parenting was keyframed to create the blend. This research was presented at Virginia Tech’s ICAT Day 2015.
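
A minimal Maya Python sketch of that blend mechanism, assuming each rig joint is parent-constrained to matching locators driven by the human and dog data; all node names here are hypothetical stand-ins for the real rig.

```python
import maya.cmds as cmds

def blend_joint(joint, human_src, dog_src, start, end):
    """Constrain a joint to both sources and keyframe the weight crossfade."""
    con = cmds.parentConstraint(human_src, dog_src, joint,
                                maintainOffset=True)[0]
    w_human, w_dog = cmds.parentConstraint(con, query=True,
                                           weightAliasList=True)
    # Fully human-driven at 'start', fully dog-driven at 'end'.
    cmds.setKeyframe(con, attribute=w_human, time=start, value=1)
    cmds.setKeyframe(con, attribute=w_dog,   time=start, value=0)
    cmds.setKeyframe(con, attribute=w_human, time=end,   value=0)
    cmds.setKeyframe(con, attribute=w_dog,   time=end,   value=1)

blend_joint("spine_01_jnt", "human_spine_loc", "dog_spine_loc", 1, 120)
```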

VR Skywalk

An exploration of virtual reality and how it affects interaction with the surrounding real space. My project centered on an immersive skywalk between two buildings at realistic scope and scale. I designed and constructed my own tightrope set, along with a procedurally generated city built in Maya. The city was then exported into Unity and set up to work with the Oculus Rift.
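
A toy Maya Python sketch in the spirit of that procedural city: a grid of cubes with randomized heights. The dimensions and spacing are arbitrary, not the values used in the piece.

```python
import random
import maya.cmds as cmds

random.seed(42)
for row in range(10):
    for col in range(10):
        height = random.uniform(5, 60)  # randomized building height
        bldg = cmds.polyCube(width=8, depth=8, height=height,
                             name="bldg_{}_{}".format(row, col))[0]
        # polyCube pivots at its center, so lift by half the height.
        cmds.move(row * 12, height / 2.0, col * 12, bldg)
```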
