A room-scale tracking system built with SystemModeler.
Features include synchronized shutters on 4 to 8 cameras with IRED arrays surrounding each camera lens for tracking spherical retroreflective "markers" in 3D.
The goal is an optical 3D measurement system similar to Vicon, Simi, Motion Analysis, or Qualisys, or even an electromagnetic 6DoF system like Polhemus.
Even some advice on where to start would help: are there model libraries for an image chip, a lens, or a complete camera?
The ray tracing methodology is a fairly simple concept that has been in use for 40 years.
This sounds like a really interesting project! I am also new to SystemModeler, so I am really just brainstorming here.
There has been work on camera models in Modelica. You would need to know the physical coordinates of all of the trackers and run them through the camera model for each camera. Also, I would think computing occlusions would be difficult, but if you are familiar with ray-tracing strategies, I suppose that would work.
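On the occlusion point: because the markers are spheres, a line-of-sight test from a camera to a marker reduces to a ray-sphere intersection, which is much cheaper than full ray tracing. Here is a minimal sketch, assuming a hypothetical setup where the only occluders are the other markers themselves (a real scene would also need occluders for the body or rigid objects carrying the markers):

```python
import numpy as np

def ray_sphere_occluded(cam_pos, marker_pos, occluders, radius):
    """Return True if the segment camera -> marker passes through any
    occluding sphere. `occluders` is an iterable of sphere centers, all
    assumed to share the same `radius` (an illustrative simplification)."""
    d = marker_pos - cam_pos
    seg_len = np.linalg.norm(d)
    d = d / seg_len                       # unit ray direction
    for c in occluders:
        oc = c - cam_pos
        t = float(np.dot(oc, d))          # closest approach along the ray
        if 0.0 < t < seg_len:             # occluder lies between camera and marker
            miss2 = np.dot(oc, oc) - t * t  # squared ray-to-center distance
            if miss2 < radius ** 2:
                return True
    return False
```

A visibility pass would run this for every (camera, marker) pair each frame, with the occluder list being all other markers.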
After you can model how the tracker coordinates would turn into pixel measurements in the CMOS chips, you could use blob detection to extract the finalized xy pixel-space coordinates of the trackers visible to each camera. Otherwise, you could just assume the tracker has a tiny size and project it through the camera equation and round to an integer to get the center pixel value.
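The "tiny tracker" shortcut in the last sentence is just the standard pinhole projection followed by rounding. A sketch, with an illustrative (made-up) 640x480 camera intrinsic matrix:

```python
import numpy as np

def project_to_pixel(K, R, t, X):
    """Project a 3D marker X (world frame) to an integer pixel via the
    pinhole model x ~ K [R | t] X. Returns None if the point is behind
    the camera."""
    Xc = R @ X + t                 # world -> camera frame
    if Xc[2] <= 0:
        return None
    uvw = K @ Xc
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    return int(round(u)), int(round(v))

# Hypothetical intrinsics: 800 px focal length, principal point at image center
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
print(project_to_pixel(K, R, t, np.array([0.1, 0.0, 2.0])))  # -> (360, 240)
```

With a modeled blob-detection front end you would instead get sub-pixel centroids, which is what the commercial systems rely on for accuracy.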
Once you have that, you can actually implement triangulation of the locations of each tracker. This step is fuzzy to me, as I am not familiar enough with mocap to know how trackers are typically corresponded between multiple camera views. Perhaps you will need to run some numeric minimization process, similar to what is done in visual odometry algorithms where features need to be corresponded from one image to the next.
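Once the correspondence problem is solved, the triangulation step itself is standard: given a matched pixel in two calibrated views, the linear (DLT) method recovers the 3D point by least squares. A self-contained sketch, assuming you already have the 3x4 projection matrices from calibration:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one corresponded marker from two
    views. P1, P2 are 3x4 projection matrices (K [R | t]); uv1, uv2 are
    the matched pixel coordinates. Returns the 3D point in world frame."""
    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]    # dehomogenize
```

With more than two cameras seeing the same marker you just stack two rows per view into `A`; the SVD solution is then the least-squares intersection of all the rays.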
As far as cameras go, you could use FLIR cameras (very good frame rates & resolution available, multi-camera synchronization, but very expensive). Whatever you go with, you probably want to use global shutter so you do not have to worry about rolling pixel updates from the CMOS. You could use a micro-controller to control IR-LED rings for the cameras and trigger the shutter at the right time. You could integrate this into the camera model similar to how the authors of the linked paper model the shutter as a physical process.
I would love to know if you have had any more thoughts about this project in the past couple of days!
Thank you for your reply. I am getting down to work on bringing in a Collada file (.dae) with the 6DoF pose of each camera, so I do not have to reinvent the calibration process.
I am very interested in synchronization of measurements (a synchronous shutter) and in powering the IREDs. I want to do a sensitivity analysis of how much power an IRED needs to be visible to a specific image chip, and across what distances between IRED and lens.
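As a starting point for that sensitivity analysis, a first-order model treats the IRED as a point source, so the irradiance at the lens falls off with the inverse square of distance. A sketch with purely illustrative numbers (substitute radiant intensity and detection-floor values from your IRED and image-chip datasheets; note that for a retroreflective marker the light makes a round trip, so the real falloff is steeper than this direct-path model):

```python
import math

def irradiance_at_lens(intensity_w_sr, distance_m):
    """Inverse-square irradiance (W/m^2) at the lens from a point-source
    IRED with radiant intensity `intensity_w_sr` (W/sr)."""
    return intensity_w_sr / distance_m ** 2

def max_visible_range(intensity_w_sr, min_irradiance):
    """Distance (m) at which irradiance drops to the chip's detection floor."""
    return math.sqrt(intensity_w_sr / min_irradiance)

# Illustrative values only: 0.05 W/sr source, 1 uW/m^2 detection floor
print(irradiance_at_lens(0.05, 2.0))   # W/m^2 at 2 m
print(max_visible_range(0.05, 1e-6))   # max range in m
```

Sweeping `intensity_w_sr` (i.e. drive current) against `distance_m` gives the sensitivity surface you describe; lens aperture, exposure time, and the chip's quantum efficiency at the IRED wavelength would refine the detection floor.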
I will be running a number of experiments on what camera sets do and don't do.
If this sounds like a project you want to work on PM me.