Nerd rant ahead:
I think mono inputs are easier to control, but you can of course find stereo input workarounds by splitting the L/R channels and spacing them apart on the DearVR XY interface. The coolest approach, of course, is multi-miking your performance at different distances from the instrument, then recreating that setup in DearVR to get some sweet 3D richness and texture.
Having a reverb on the master after spatializing everything won't have the same physical effect as dialing it in for each individual element with DearVR. Picture a virtual space: sounds from elements further away take longer to reach your ears (reminder: they won't reach both ears at the same time either), while elements closer to you have a longer gap between the direct sound and its first bounce off the walls. On a regular reverb those would be the pre-delay and decay/delay times you'd have to dial in per element - but DearVR goes further and EQs and delays that reverb signal to account for the physical head and ears, and those cues can get messed up by basically any processing you apply after. That's what I meant in the previous reply about the "crunching" post-processing can do to your signal.
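To make the distance/timing point concrete, here's a back-of-the-envelope sketch (not DearVR's actual algorithm - the geometry, head radius, and single-wall room are all simplifying assumptions of mine) showing why closer sources get a longer pre-delay and why the two ears hear a source at different times:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C
HEAD_RADIUS = 0.0875    # m, a common average used in spherical-head models

def direct_delay_ms(distance_m):
    """Time for the direct sound to travel from source to listener."""
    return distance_m / SPEED_OF_SOUND * 1000.0

def pre_delay_ms(distance_m, wall_m=10.0):
    """Gap between the direct sound and its first bounce, assuming the
    source sits between the listener and a wall wall_m away.
    Reflected path: source -> wall -> listener = (wall_m - d) + wall_m."""
    reflected_path = (wall_m - distance_m) + wall_m
    return (reflected_path - distance_m) / SPEED_OF_SOUND * 1000.0

def itd_ms(azimuth_rad):
    """Interaural time difference via the Woodworth spherical-head
    approximation: the far ear hears the source later than the near ear."""
    return HEAD_RADIUS / SPEED_OF_SOUND * (math.sin(azimuth_rad) + azimuth_rad) * 1000.0

# A close source leaves a much bigger gap before the first reflection
# than a far one - exactly the pre-delay you'd have to dial in per element.
print(round(pre_delay_ms(1.0), 1), "ms pre-delay at 1 m")
print(round(pre_delay_ms(8.0), 1), "ms pre-delay at 8 m")
# A source fully to one side reaches the far ear roughly 0.66 ms late -
# one of the cues a master-bus reverb can't recreate per element.
print(round(itd_ms(math.pi / 2), 2), "ms ITD at 90 degrees")
```

The exact numbers depend on the room, but the trend is the point: pre-delay shrinks as the element moves away, and the sub-millisecond ITD is fragile enough that post-processing can easily smear it.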
TL;DR - If it sounds good, it doesn't matter - but if you really want the listener to feel like they are physically "there" despite the fragility of the illusion, it's gotta be purely in the DearVR ecosystem, including using their reverb models.