At this point, your Virtual Camera needs to have its placement determined, and the process for doing so is quite different depending on whether the Virtual Camera is mixing a Video Input feed into the scene or is ‘Free’. Either way, it’s worth launching MixCast’s Configuration Experience now to get a better sense of what your camera can see. To do so, click the Experience icon in the Status window and select the appropriate menu option.
Virtual Cameras that aren’t compositing with a physical Video Input feed aren’t bound by any physical limitations, and are therefore far less restricted when it comes to deciding where the camera is positioned or how it moves.
Free cameras can of course be stationary, but they can also be set to follow the motion of a Tracked Object, and they offer more controls than a Tracked Video Input since their motion doesn’t need to match a physical object exactly. For a full breakdown of MixCast’s Virtual Camera placement options, check out this page.
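MixCast handles follow behavior internally, but the underlying idea is simple motion smoothing: each frame, the camera moves a fraction of the way toward the Tracked Object rather than snapping to it, which damps out tracking jitter. A minimal Python sketch of the concept (the function name and smoothing model are illustrative assumptions, not MixCast code):

```python
import math

def follow_step(cam_pos, target_pos, smoothing, dt):
    """Move the camera a fraction of the way toward the target this frame.

    Exponential smoothing: higher `smoothing` tracks the target more
    tightly, lower values give floatier, more damped motion.
    (Illustrative only -- not MixCast's actual follow implementation.)
    """
    # Frame-rate-independent interpolation factor in [0, 1].
    t = 1.0 - math.exp(-smoothing * dt)
    return tuple(c + (g - c) * t for c, g in zip(cam_pos, target_pos))

# Example: ease the camera toward a tracked headset over one second.
cam = (0.0, 0.0, 0.0)
target = (1.0, 1.6, 2.0)
for _ in range(60):                       # 60 frames at 60 fps
    cam = follow_step(cam, target, smoothing=5.0, dt=1.0 / 60.0)
# cam is now ~99.3% of the way to the target, with no sudden snapping
```

Because the interpolation factor depends on `dt`, the motion feels the same regardless of frame rate, which is why this style of smoothing is the common choice for follow cameras.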
The Configuration experience lets you review your output and adjust the placement of the cameras intuitively, but if you’d prefer to jump into another compatible experience to experiment further and begin generating content, you can do so now!
Virtual Cameras that combine the virtual scene with the physical one align themselves with the Video Input being mixed in. This means before we can see fully realized mixed reality video output, we need to determine the location of the Video Input rather than the Virtual Camera. To make this process as straightforward as possible, you can run MixCast’s Quick Align process through the VR Configuration app.
With the MixCast Configuration experience open, you should see your Video Input’s feed displayed on the desktop as well as the application’s UI (If you can’t see the feed, ensure that the correct Video Input is selected in the Preferences Window and that it’s connected and working on its own). To start the Quick Align process, click the button in the top right.
You should now put on your VR headset and perform the alignment calibration by following the instructions shown within the experience. This involves holding your VR controller so that it lines up with the marker displayed over the Video Input feed. The first sample is taken right at the position of the camera’s lens; for the following steps, step back and hold the controller so it appears near one of the corners of the view.
During the verification phase, hold up your controllers to compare the virtually rendered geometry with the real controllers. You can repeat the process until you achieve the precision you need.
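Conceptually, each sample gives the alignment process a correspondence between a real-world controller position and a known point in the camera’s image: the lens sample fixes the camera’s position, and opposite-corner samples define rays through the image diagonal, from which a forward direction and field of view follow. A simplified Python sketch of that geometry (an illustration of the idea only, assuming a centered principal point — not MixCast’s actual solver):

```python
import math

def vsub(a, b): return tuple(x - y for x, y in zip(a, b))
def vdot(a, b): return sum(x * y for x, y in zip(a, b))
def vnorm(v):
    n = math.sqrt(vdot(v, v))
    return tuple(x / n for x in v)

def estimate_camera(lens_sample, top_left_sample, bottom_right_sample):
    """Recover camera position, forward direction, and diagonal FOV from
    three controller samples (simplified two-corner illustration)."""
    position = lens_sample                    # controller held at the lens
    ray_tl = vnorm(vsub(top_left_sample, lens_sample))
    ray_br = vnorm(vsub(bottom_right_sample, lens_sample))
    # With a centered principal point, opposite-corner rays are symmetric
    # about the optical axis, so their bisector is the forward direction.
    forward = vnorm(tuple(a + b for a, b in zip(ray_tl, ray_br)))
    # The angle between the two rays spans the image diagonal.
    cos_d = max(-1.0, min(1.0, vdot(ray_tl, ray_br)))
    return position, forward, math.degrees(math.acos(cos_d))

# Synthetic check: camera at the origin looking down +z.
pos, fwd, fov = estimate_camera(
    lens_sample=(0.0, 0.0, 0.0),
    top_left_sample=(-0.5, 0.5, 1.0),
    bottom_right_sample=(0.5, -0.5, 1.0),
)
```

This is why the process asks you to step back for the corner samples: rays measured further from the lens pin down the camera’s orientation and field of view far more precisely than samples taken close to it.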
You may find that the images overlap correctly when held still, but that the virtual or physical controllers lag behind their counterparts when in motion. This is caused by latency between the camera and the computer, and can be compensated for by adjusting the Delay Compensation parameter of the Video Input in the Preferences Window.
Adjust this value until your virtual and physical controllers move in sync.
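Under the hood, delay compensation amounts to holding one stream in a short buffer so the tracking data lines up in time with the late-arriving video frames. A toy Python sketch of the idea (MixCast exposes this only as the single Delay Compensation value; the class below is hypothetical):

```python
from collections import deque

class DelayCompensator:
    """Delay tracking poses by a fixed number of frames so virtual renders
    line up with a camera feed that arrives late. Conceptual sketch only --
    not MixCast's internal pipeline.
    """
    def __init__(self, delay_frames):
        # Pre-fill so the first few frames still return something.
        self._buffer = deque([None] * delay_frames)

    def push(self, pose):
        """Store the newest pose; return the pose from delay_frames ago."""
        self._buffer.append(pose)
        return self._buffer.popleft()

comp = DelayCompensator(delay_frames=3)
out = [comp.push(pose) for pose in range(6)]   # poses 0..5, one per frame
# out == [None, None, None, 0, 1, 2]: rendering uses 3-frame-old poses
```

Tuning Delay Compensation is effectively choosing the buffer depth: too small and the virtual controllers lead the feed, too large and they trail it.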
If you intend to move your Video Input while it’s active, or to avoid re-calibrating alignment after moving it, you’ll need to rigidly affix a Tracked Object to it. Before running Quick Align, make sure the Tracked Object is active and tracking, then configure it as the device tracking the Video Input in the Preferences window.
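The reason the Tracked Object must be rigidly affixed is that calibration records a single fixed offset — the camera’s pose expressed in the Tracked Object’s local frame — and then re-applies that offset to the object’s current pose every frame. A Python sketch of that rigid-body math (all helper names are hypothetical; MixCast handles this internally):

```python
# Poses are (position, rotation): a 3-vector and a 3x3 row-major matrix.

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def mat_mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def transpose(m):
    return tuple(tuple(m[j][i] for j in range(3)) for i in range(3))

def compose(parent, child):
    """Apply `child` within `parent`'s frame (rigid-body composition)."""
    (p_pos, p_rot), (c_pos, c_rot) = parent, child
    pos = tuple(a + b for a, b in zip(p_pos, mat_vec(p_rot, c_pos)))
    return (pos, mat_mul(p_rot, c_rot))

def inverse(pose):
    pos, rot = pose
    rot_t = transpose(rot)            # a rotation's inverse is its transpose
    return (tuple(-x for x in mat_vec(rot_t, pos)), rot_t)

IDENTITY = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))

# Calibration: record the camera's pose in the tracker's local frame once.
tracker_at_align = ((1.0, 0.0, 0.0), IDENTITY)
camera_at_align = ((1.0, 0.0, 0.2), IDENTITY)   # lens 0.2 m along +z
offset = compose(inverse(tracker_at_align), camera_at_align)

# Every frame afterwards: re-derive the camera pose from the moving tracker.
ROT_Z_90 = ((0.0, -1.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
tracker_now = ((2.0, 1.0, 0.0), ROT_Z_90)
camera_now = compose(tracker_now, offset)       # camera follows the tracker
```

If the mount flexes, the recorded offset no longer matches reality and the composite drifts as the rig moves — hence the emphasis on a rigid attachment.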
Once your Video Input’s alignment and tracking are sorted, the only remaining task is to configure its background removal. Nearly there!