If your one RAW file is correctly exposed, that should mean the histogram is OK and no information is clipped, shouldn't it?
So when you take that ONE file, develop it at 1.5 stops below normal exposure and save it as a TIFF, go back, develop it at normal exposure and save a second TIFF, then go back once more, develop it at 1.5 stops above normal exposure and save that as your third TIFF from the same file, no information can really have been lost. It was all there in the original file the whole time, wasn't it?
So if you then merge all three exposures derived from that one file, using whatever program you have for the job, all the information should still be there, stacked, so that in the subsequent tone-mapping step you can create an image with a much higher apparent dynamic range than your camera could ever have produced in a single straight-out shot?
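Just to make the idea concrete, here is a toy numpy sketch of the process described above: one "RAW" image is re-developed at -1.5 / 0 / +1.5 stops (each copy clipped the way a TIFF would be), and the three copies are then recombined with a simple mid-tone-favouring weight. The weighting scheme here is a made-up stand-in, not the actual algorithm of any real HDR program (real software typically does something like Mertens exposure fusion), so treat it as an illustration only.

```python
import numpy as np

def fake_exposures(linear, stops=(-1.5, 0.0, 1.5)):
    """Simulate saving one RAW at several EV offsets: scale the linear
    data by 2^stops, then clip to [0, 1] as a TIFF export would."""
    return [np.clip(linear * 2.0 ** s, 0.0, 1.0) for s in stops]

def naive_fusion(exposures):
    """Very rough exposure fusion: weight each pixel by how far it sits
    from pure black/white, then take the weighted average. A toy
    stand-in for what an HDR-merge program actually does."""
    stack = np.stack(exposures)
    weights = 1.0 - np.abs(stack - 0.5) * 2.0  # mid-tones count most
    weights += 1e-6                            # avoid divide-by-zero
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)

# A toy "scene": deep shadow, shadow, mid-tone, bright highlight.
scene = np.array([0.02, 0.10, 0.50, 0.90])
exposures = fake_exposures(scene)
fused = naive_fusion(exposures)
```

Note that the fused result never contains more than what was in `scene` to begin with; the merge just re-weights it, which is exactly why this is pseudo-HDR rather than true HDR.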
I know this is not "true HDR".
It's just that, with my flimsy, cheap tripod, this is the only way: the tripod is so bad that it lets the camera sink by a fraction of a millimetre between bracketed exposures (OK, I admit I had the zoom lens on AND bracketed manually), so that in the end the photos were no longer mergeable.
So for the time being, unless I finally learn how to make the camera auto-bracket (I know the theory; I've just never felt the urge to put that theoretical knowledge into practice), I only play around with single RAW files.
Examples can be found in the Graphics Programs and Photo Gallery Forum on this very TPF!
