MIXCAST SDK FOR UNREAL
Note on Versioning: This update to the MixCast SDK for Unreal brings it to near parity with the MixCast SDK for Unity. As a result, we’ve increased the version number to match that of the rest of MixCast for clarity. Thank you to all those who’ve been using MixCast with Unreal so far; there’s lots more to come 🙂
Until this update, the Unreal SDK could only discard objects/pixels behind the user’s approximate depth (and therefore outside the ‘foreground’) by using the Far Clip Plane value of the engine’s Camera component, which also required enabling Unreal’s “global clip plane” option in the project.
Starting with the 2.3.3 version of the MixCast SDK for Unreal, a new option called “Per-Pixel Foreground Clipping” allows the foreground to be clipped based on Z-testing instead of the far clip plane, which brings a number of benefits.
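The core of a Z-testing approach like this can be sketched in plain, illustrative C++ (this is a conceptual outline, not MixCast’s actual implementation; all names are assumptions):

```cpp
// Conceptual per-pixel foreground test: instead of moving the camera's
// far clip plane to the player's depth, each pixel's scene depth is
// compared against the player's approximate depth from the camera, and
// only pixels at or in front of that depth are kept as 'foreground'.
inline bool IsForegroundPixel(float PixelSceneDepth, float PlayerDepth)
{
    // Keep the pixel only if it lies at or in front of the player.
    return PixelSceneDepth <= PlayerDepth;
}
```

Because the comparison happens per pixel rather than per camera, it doesn’t require the project-wide “global clip plane” setting that the old approach depended on.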
Unfortunately, this mode doesn’t support translucency, since Unreal offers no control over the alpha channel in the base pass. As a substitute for the alpha channel, foreground objects are isolated using a key color approach, which requires a magic color (one unlikely to appear during normal rendering) to be specified and which cannot represent partial transparency. Objects using translucent shaders are therefore automatically hidden in MixCast’s foreground render in this mode.
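The key color approach can be illustrated with a small sketch (names and the tolerance value are assumptions for illustration, not MixCast’s actual shader code):

```cpp
#include <cmath>

// Minimal color type for the sketch (stand-in for an engine color type).
struct LinearColor { float R, G, B; };

// The foreground pass is cleared to a "magic" key color unlikely to
// occur during rendering; any pixel still matching that color afterwards
// is treated as background when the layers are composited.
inline bool MatchesKeyColor(const LinearColor& Pixel,
                            const LinearColor& KeyColor,
                            float Tolerance = 0.001f)
{
    return std::fabs(Pixel.R - KeyColor.R) <= Tolerance
        && std::fabs(Pixel.G - KeyColor.G) <= Tolerance
        && std::fabs(Pixel.B - KeyColor.B) <= Tolerance;
}

// The derived foreground alpha is binary: matching pixels are discarded
// (alpha 0), everything else is fully opaque (alpha 1) -- which is why
// partial transparency can't be represented in this mode.
inline float ForegroundAlpha(const LinearColor& Pixel,
                             const LinearColor& KeyColor)
{
    return MatchesKeyColor(Pixel, KeyColor) ? 0.0f : 1.0f;
}
```

The binary alpha in the last function is the root of the translucency limitation: a pixel either matches the key color or it doesn’t, so there is no way to encode an in-between blend factor.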
Note that if your project already uses MixCast’s “Force Additive Blending” option, the transparency restrictions described above for Per-Pixel Foreground Clipping don’t apply, since Force Additive Blending already causes MixCast to ignore the alpha channel.
Until this update, the Unreal SDK couldn’t display MixCast feeds in VR natively through the SDK; they could only be presented via the OpenVR overlay API (limited to a single overlay, with no depth sorting, etc.).
Starting with the 2.3.3 version of the MixCast SDK for Unreal, full In-VR display support is provided, meaning that end-users can monitor as many MixCast outputs as they’ve configured without taking their headset off. These can be toggled off at runtime through the MixCast status window as with MixCast SDK for Unity and SteamVR SDK titles.
The SDK can now check for available updates to the plugin when you start the UE4 Editor and notify you if a newer version is available!
This feature allows developers to incorporate run-time pose information from the Unreal app itself into the positioning logic of the MixCast cameras rendering the scene. It is initially intended only for titles working with MixCast on custom functionality (contact us for more information!). This page in the documentation describes the technical steps to follow when you’re ready.
If your experience treats its final render as an additive layer regardless of the values your materials write to the alpha channel (ex: an AR experience), you can now enable the “Force Additive Blending” checkbox in your Project’s MixCast settings to have MixCast blend the digital foreground with the physical scene using additive blending (ignoring alpha) as well.
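The difference between standard alpha compositing and the additive blending described above can be shown with a short sketch (illustrative only; names are assumptions, and the real compositing happens on the GPU):

```cpp
#include <algorithm>

// Minimal color type for the sketch (stand-in for an engine color type).
struct LinearColor { float R, G, B; };

// Standard alpha-over compositing: the foreground's alpha decides how
// much of the physical background shows through.
inline LinearColor AlphaBlend(const LinearColor& Fg, float FgAlpha,
                              const LinearColor& Bg)
{
    const float InvA = 1.0f - FgAlpha;
    return { Fg.R * FgAlpha + Bg.R * InvA,
             Fg.G * FgAlpha + Bg.G * InvA,
             Fg.B * FgAlpha + Bg.B * InvA };
}

// Additive blending: alpha is ignored entirely and the foreground simply
// adds light on top of the physical scene (clamped to the valid range),
// which is the behavior an AR-style additive layer expects.
inline LinearColor AdditiveBlend(const LinearColor& Fg, const LinearColor& Bg)
{
    return { std::min(Fg.R + Bg.R, 1.0f),
             std::min(Fg.G + Bg.G, 1.0f),
             std::min(Fg.B + Bg.B, 1.0f) };
}
```

Since the additive path never reads alpha, whatever values your materials write to the alpha channel have no effect on the composite, which is exactly why the alpha-channel restrictions elsewhere in these notes don’t apply when the option is enabled.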
The MixCast SDK for Unreal now provides more control over which actors are rendered to the user’s view and to the MixCast cameras. Since MixCast Cameras can take a number of forms (first-person/third-person, virtual/mixed reality, etc), additional filtering is offered for even more specificity. Check out the documentation page on this feature for more details!
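One common way to implement this kind of per-camera filtering is a trait bitmask; the sketch below is hypothetical (the trait names and functions are assumptions for illustration, not the SDK’s actual API):

```cpp
#include <cstdint>

// Hypothetical traits describing the forms a MixCast camera can take.
enum ECameraTraits : uint32_t
{
    FirstPerson  = 1u << 0,
    ThirdPerson  = 1u << 1,
    MixedReality = 1u << 2,
    VirtualOnly  = 1u << 3,
};

// An actor renders for a camera when the camera has none of the actor's
// excluded traits and (if any are required) at least one required trait.
inline bool ShouldRenderForCamera(uint32_t CameraTraits,
                                  uint32_t RequiredTraits,
                                  uint32_t ExcludedTraits)
{
    if ((CameraTraits & ExcludedTraits) != 0)
        return false;
    if (RequiredTraits == 0)
        return true; // No requirements: render for every camera.
    return (CameraTraits & RequiredTraits) != 0;
}
```

A bitmask keeps the per-actor, per-camera check to a couple of bitwise operations, which matters when several MixCast cameras each render the scene every frame.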
The Unreal SDK now supports capturing from multiple cameras, as well as rendering at 1080p and above regardless of monitor dimensions, bringing it in line with the Unity SDK.
Your experience can now send events to and receive commands from the MixCast Client to power advanced recording functionality and tailor experiences for demonstration or location-based usage. Contact us if you’re interested in learning more about MixCast Moments!
MixCast SDK for Unreal now available!