There are a few areas where mixed reality rendering can diverge from your existing implementation. Not all of these will pose problems in your project, and none are insurmountable if encountered, but they are worth noting:

Transparency: In most projects, how shaders/materials interact with the alpha channel of the render target doesn't matter, so they may produce inconsistent or unexpected results when mixed reality composition reads that buffer to determine which areas of the foreground to draw over the user. Thankfully, this can almost always be corrected (with a few exceptions, such as multiply blend modes or very elaborate shader logic). Information on how to resolve this can be found here.
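To make the transparency issue concrete, here is a minimal, purely illustrative sketch (not MixCast's actual code; all names are hypothetical) of why the render target's alpha channel matters. A standard "over" blend accumulates coverage into destination alpha, so the compositor can tell where foreground content was drawn; a multiply blend darkens color but writes nothing to alpha, so the compositor sees no foreground there at all:

```python
# Hypothetical model of alpha-based mixed reality composition.

def blend_normal(dst, src):
    """Standard 'over' blend: writes color AND accumulates coverage
    into the destination alpha channel."""
    rgb = tuple(s * src[3] + d * (1 - src[3]) for s, d in zip(src[:3], dst[:3]))
    a = src[3] + dst[3] * (1 - src[3])
    return (*rgb, a)

def blend_multiply(dst, src):
    """Multiply blend: darkens color but leaves the destination
    alpha untouched, so no coverage is recorded."""
    rgb = tuple(s * d for s, d in zip(src[:3], dst[:3]))
    return (*rgb, dst[3])

def composite_over_camera(render_px, camera_px):
    """The compositor uses the render target's alpha to decide how much
    of the rendered pixel to draw over the camera feed."""
    a = render_px[3]
    return tuple(f * a + c * (1 - a) for f, c in zip(render_px[:3], camera_px))

clear = (0.0, 0.0, 0.0, 0.0)   # render target cleared to transparent
obj = (1.0, 0.5, 0.2, 1.0)     # an opaque foreground object
camera = (0.1, 0.1, 0.1)       # a camera-feed pixel

seen = composite_over_camera(blend_normal(clear, obj), camera)
missed = composite_over_camera(blend_multiply(clear, obj), camera)
# 'seen' shows the object's color; 'missed' is just the camera feed,
# because the multiply blend never wrote alpha into the render target.
```

This is why shaders that don't write meaningful alpha render fine on a normal display (where only RGB is shown) yet disappear or bleed in the mixed reality output.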

Post Processing Effects: To apply to mixed reality rendering, effects must be configured on a template MixCast camera, as described here. Depth of Field is generally the only effect that can be negatively impacted, depending on its implementation.
