
Nerd rant ahead:

I think mono inputs are easier to control, but you can of course find stereo input workarounds by splitting the L/R channels and spacing them apart on the DearVR XY interface. The coolest way is of course multi-miking your performance with different distances from the instrument, then recreating the setup in DearVR to get some sweet 3D richness and texture. 
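If it helps, the L/R split part is trivial to do offline before you ever touch the plugin. A minimal sketch assuming the pysoundfile library and a hypothetical "performance_stereo.wav"; each mono stem would then feed its own DearVR instance, spaced apart on the XY pad:

```python
import soundfile as sf  # assumption: pysoundfile is installed

# Hypothetical input: a stereo performance to split into two mono
# stems, one per DearVR instance.
data, rate = sf.read("performance_stereo.wav")  # shape: (frames, 2)

# Write each channel as its own mono file; position them apart
# on the DearVR XY interface afterwards.
sf.write("performance_L.wav", data[:, 0], rate)
sf.write("performance_R.wav", data[:, 1], rate)
```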

Having a reverb on the master after spatializing everything won't have the same physical effect as dialing it in for each individual element using DearVR. Picture a virtual space: elements further away take longer to reach your ears (reminder: they won't reach both ears at the same time), while elements closer to your ears have a longer gap before each bounce comes back. On a regular reverb those would be the pre-delay and decay/delay times you'd have to dial in for each element - but DearVR goes further and EQs and delays that reverb signal to account for the physical head and ears. Those cues can get messed up by basically any processing you apply after. That's what I meant in the previous reply about the "crunching" post-processing can do to your signal.
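To put rough numbers on that: with sound travelling about 343 m/s, both the direct delay and the gap before the first bounce fall straight out of the geometry. A minimal sketch using a mirror-image source off a wall behind the performer; the room depth and distances are made-up values:

```python
SPEED_OF_SOUND = 343.0  # m/s at ~20 °C

def direct_delay_ms(distance_m: float) -> float:
    """Time for the dry signal to travel from source to listener."""
    return distance_m / SPEED_OF_SOUND * 1000.0

def predelay_ms(distance_m: float, wall_m: float) -> float:
    """Gap between the direct sound and the first reflection.

    Listener at the origin, source at distance_m, reflecting wall at
    wall_m behind the source. The mirror-image source sits at
    2*wall_m - distance_m, so the bounce arrives
    2*(wall_m - distance_m)/c after the direct sound.
    """
    return 2.0 * (wall_m - distance_m) / SPEED_OF_SOUND * 1000.0

for d in (0.5, 2.0, 6.0):  # made-up source distances, 8 m deep room
    print(f"{d:4.1f} m away: direct {direct_delay_ms(d):5.2f} ms, "
          f"pre-delay {predelay_ms(d, 8.0):5.2f} ms")
```

Running it shows exactly the effect described: the 0.5 m source gets a ~44 ms pre-delay while the 6 m source gets ~12 ms, i.e. closer elements take longer to "come back" relative to the dry signal.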

TL;DR - If it sounds good, it doesn't matter - but if you really want the listener to feel like they are physically "there," despite the fragility of the illusion, it's gotta be purely in the DearVR ecosystem, including using their reverb models.

So, I generally do try to physically simulate the stereo signal to some degree with a delay on the reverb send for my tracks. I didn't end up carefully setting up that delay here, though, due to time.

The "closer" something is, the higher the delay I use.

Generally I've been using Neoverb with DearVR's built-in reverb turned off, manually setting the delay and wetness to what sounds good.
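Roughly, the starting points I dial in look like this - a sketch of the rule of thumb, not anything Neoverb actually exposes, and the 8 m room depth is a made-up value:

```python
SPEED_OF_SOUND = 343.0  # m/s

def reverb_settings(distance_m: float, room_m: float = 8.0):
    """Rough starting points for a manually dialed reverb send.

    Closer sources get a longer pre-delay (bigger gap before the
    first bounce) and less wet signal; distant sources the opposite.
    Rules of thumb only, to be adjusted by ear.
    """
    predelay_ms = max(2.0 * (room_m - distance_m) / SPEED_OF_SOUND * 1000.0, 0.0)
    wet = min(distance_m / room_m, 1.0)  # more distance, more room sound
    return predelay_ms, wet

for d in (0.5, 3.0, 7.0):
    pd, wet = reverb_settings(d)
    print(f"{d} m: pre-delay ≈ {pd:.0f} ms, wet ≈ {wet:.0%}")
```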

I think the most important things for the illusion are the initial dry signal to each ear being timed right, plus the HRTF-style filtering that simulates the shape of the head and ear reflecting the sound (something UE5 actually has built in).
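For the timing half, the classic Woodworth spherical-head approximation gives the interaural time difference. A sketch assuming a typical 8.75 cm head radius; the ear-shape half is HRTF convolution, which I won't try to reproduce here:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, an assumed typical head radius

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth's spherical-head approximation of the interaural
    time difference for a far-field source at azimuth_deg
    (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d}°: ITD ≈ {itd_seconds(az) * 1e6:4.0f} µs")
```

At 90° this lands around 650 µs, which is the ballpark of the largest delays a human head produces - tiny, but it's exactly the cue post-processing can smear.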

The reverb is a wetter, more complex signal that arrives from too many directions for the ear to easily tell a real space from an artificial one, so the illusion of the sound source's physical location can still be largely preserved.

If you were to EQ, you'd want to place the EQ before DearVR and adjust it with DearVR already in the chain. Mostly saying this as a conscious note to self for next time I'm mixing, to solidify that point.

What I was curious about is what you're doing after the DearVR chain in post: what those things are, what their benefit is, and whether they're necessary or could be moved to a different part of the chain so as not to screw up the illusion.


To your curiosity about my approach: I do quite a bit of context-dependent post-processing on spatialized signals during both the mixing and mastering stages. Off the top of my head:

  • Shaping/cleaning transients on percussive elements so they cut through the mix better
  • Bringing air back to vocals (usually a high shelf somewhere around the 10 kHz range; see the sketch after this list)
  • Just... so much low-end processing. I'm always messing with things below 150 Hz
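For the "air" shelf specifically, here's a minimal sketch of an RBJ Audio EQ Cookbook high shelf, assuming numpy/scipy; the 10 kHz corner and +3 dB boost are made-up starting values, not my actual settings:

```python
import numpy as np
from scipy.signal import lfilter

def high_shelf(x, fs, f0=10_000.0, gain_db=3.0, slope=1.0):
    """RBJ-cookbook high-shelf biquad: boosts everything above f0."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    cosw = np.cos(w0)
    alpha = np.sin(w0) / 2.0 * np.sqrt((A + 1 / A) * (1 / slope - 1) + 2)
    b = np.array([
        A * ((A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
        -2 * A * ((A - 1) + (A + 1) * cosw),
        A * ((A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha),
    ])
    a = np.array([
        (A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
        2 * ((A - 1) - (A + 1) * cosw),
        (A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha,
    ])
    return lfilter(b / a[0], a / a[0], x)

# Usage: one second of noise at 48 kHz, boosted ~3 dB above 10 kHz.
fs = 48_000
noise = np.random.default_rng(0).standard_normal(fs)
brighter = high_shelf(noise, fs)
```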

All of this will in some way ruin the illusion, but it often leads to a mix I enjoy listening to more. I tend to sacrifice realism for a more modern sound, which is probably why both Fox and I find some of the frequency shaping a little annoying.

Your process is probably closer to a purer, more realistic sound. I'd be interested to hear sometime why you prefer Neoverb over, say, a traditional convolution reverb.

If I missed addressing something in this thread, hit me up on Discord. I'm happy to chat about this topic anytime.