Thanks for the reply, Jimmy! This is very interesting. I don't believe we have direct access to the projection matrix, so for a first pass I think hard-coding will be appropriate, though there are changes underway to expose more of the raw Oculus parameters to UE's vanilla engine.
My understanding is that the DK2 has a horizontal FOV of 90° (110° diagonal) at 960×1080 per eye. From your calculation, that means the slope is tan(45°) = 1, so the crop should keep 25% of the image horizontally. A 4× zoom feels intuitively too high, but I'll have to look at what the projection matrix spits out (in an engine fork) to compare.
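To sanity-check the arithmetic, here's a minimal sketch; the 90° FOV and 25% crop figures are taken from the discussion above, not measured:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    // Assumed DK2 per-eye horizontal FOV, per the discussion above.
    const double hFovDeg = 90.0;
    // Half-angle slope: tan(90/2 deg) = tan(45 deg) = 1.
    const double slope = std::tan(hFovDeg * 0.5 * pi / 180.0);
    // If the crop keeps 25% of the image horizontally, the implied
    // horizontal zoom is the reciprocal of the kept fraction: 4x.
    const double keptFraction = 0.25;
    const double zoom = 1.0 / keptFraction;
    std::printf("slope = %.3f, zoom = %.1fx\n", slope, zoom);
    return 0;
}
```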
Regarding the image center: to feed custom stereo-disparate images in UE, the only current approach I'm aware of is a screen-position split in a post-process (thanks to opamp from the Oculus forum for this suggestion), which means we are dealing with two centers, at 0.25 for the left image and 0.75 for the right.
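For reference, the remap amounts to something like the following. In UE this lives in a post-process material; the C++ here is just a sketch to make the mapping explicit:

```cpp
struct Float2 { float x, y; };

// Map a full-render-target UV to the corresponding per-eye UV,
// assuming a horizontal screen-position split with eye centers at
// x = 0.25 (left) and x = 0.75 (right), as described above.
Float2 ToEyeUV(Float2 screenUV) {
    const bool rightEye = screenUV.x >= 0.5f;
    const float center = rightEye ? 0.75f : 0.25f;
    // Re-center the half-width region and rescale it to [0, 1].
    Float2 eyeUV;
    eyeUV.x = (screenUV.x - (center - 0.25f)) * 2.0f;
    eyeUV.y = screenUV.y;
    return eyeUV;
}
```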
The distortion and grid image posted above doesn't even take that into account: it simply feeds the right-eye distortion map UVs directly into a grid pattern, and it still produces this bias. I suspect this has something to do with the way the distortion map is calculated in my plugin. I couldn't get 32-bit images to work properly, so the map is downsampled to 8 bits. I expected the only effect of that to be a slight 'stepping' distortion in the lines (visible if you zoom in on the diagonal lines inside the distortion area). Could it be that when I downsampled from float to uint8, my range was incorrectly interpolated?
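For context, here's a minimal sketch of what I believe the conversion should be (the function names are mine, not the plugin's):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Quantize a UV component in [0, 1] to 8 bits. Truncating instead
// of rounding, or scaling by 256 instead of 255, skews the map.
uint8_t EncodeUV8(float v) {
    v = std::clamp(v, 0.0f, 1.0f);
    return static_cast<uint8_t>(std::lround(v * 255.0f));
}

// Decode back to float with the matching 255 divisor. A mismatched
// divisor here would compress the range and bias every sample.
float DecodeUV8(uint8_t v) {
    return static_cast<float>(v) / 255.0f;
}
```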
Here's a non-cropped image example with boundaries culled:
The image above is obtained by:
1) Distortion map: discard values below 0 or above 1 on each axis (flagging them in the red channel), and store the rest as uint8s in the green and blue channels of a 32-bit (4-channel) texture. This is the right-side image. (See the sketch after step 2.)
2) Use the distortion map's G and B channels (flipping the green axis due to UE's texture orientation) as the UVs into the distortion map's paired raw image. This is the left-side image.
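A sketch of the two steps as I understand them; the channel assignment (V in green, U in blue) is my assumption, since only the flip on green is stated above:

```cpp
#include <cmath>
#include <cstdint>

struct RGBA8 { uint8_t r, g, b, a; };

// Step 1, per texel: flag out-of-range UVs in red, pack valid UVs
// into green/blue. The green-axis flip happens at sample time.
RGBA8 EncodeTexel(float u, float v) {
    RGBA8 t{0, 0, 0, 255};
    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f) {
        t.r = 255; // culled: falls outside the source image boundary
        return t;
    }
    t.g = static_cast<uint8_t>(std::lround(v * 255.0f));
    t.b = static_cast<uint8_t>(std::lround(u * 255.0f));
    return t;
}

// Step 2, per pixel: read the packed UVs back and flip the green
// axis for UE's orientation before sampling the paired raw image.
bool DecodeTexel(RGBA8 t, float& u, float& v) {
    if (t.r > 0) return false; // culled texel, discard
    u = t.b / 255.0f;
    v = 1.0f - t.g / 255.0f; // flipped green axis
    return true;
}
```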
If the distortion map were passed through a regular grid pattern, you would get the pattern from the first post.
Zoomed by 200%, a grid image would look like the following:
which is still biased toward the bottom-left corner.
This means the distortion map I obtain is already biased, so perhaps something is going wrong in the conversion. Is it simply the 8-bit downsampling, or is my image range incorrect? Do I need to add an offset to the raw values?
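One candidate for such an offset-like bias, assuming the map is written with a 255 scale but read back with a 256 divisor (or vice versa): the mismatch shrinks every UV uniformly toward 0, which would drag the whole grid toward one corner. A quick comparison:

```cpp
#include <cstdio>

int main() {
    // If the map is encoded as round(uv * 255) but decoded as
    // byte / 256 (a common mismatch), every sample shrinks by ~0.4%,
    // a uniform pull toward (0, 0) -- i.e. toward one corner.
    const int samples[] = {0, 64, 128, 191, 255};
    for (int v : samples) {
        std::printf("byte %3d -> /255: %.4f   /256: %.4f\n",
                    v, v / 255.0f, v / 256.0f);
    }
    return 0;
}
```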
Perhaps a better way for me to calibrate would be an example of what a grid pattern should look like through the distortion map, along with a walkthrough of the steps you took to get that image.
Either way, thanks in advance for any help!