And if you take those different images, which, as you've just demonstrated for yourself, between them contain a greater range than any single image, and combine them appropriately, voila! A tone-mapped HDR.
Can I just check something with this? Take the image, open it in Lightroom, crank recovery to 100, crank fill light up to bring up the shadows (OK, the image now looks like arse due to the lack of contrast), but does the resulting JPEG now match the dynamic range of the RAW, since this is effectively dynamic range compression?
I don't use Lightroom, but I think the answer is that you don't need to do anything to the JPG to bring it up to the same range as the RAW, provided it was originally mapped for maximum range and you extracted both ends of the RAW's range when creating the JPG source components.
Let's make up a hypothetical world where the camera has a dynamic range of 10 bands and people are used to looking at images with a range of 6 bands. (The usual caveat applies: we're pretending everything is linear to keep it simple.)
Now, when the RAW is developed, the converter will select 6 contiguous bands and map those to the JPG.
By default these will be the central 6 bands, and there may be detail-free shadows and highlights.
If we use exposure compensation we can force the selection of a lower or higher set of six bands, and thus we can create a pair of JPGs that, between them, contain all 10 bands the camera recorded.
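If it helps to see that band-selection step as code, here's a quick Python sketch. The band numbers, the 6-band window and the shift amounts are just the made-up values from the hypothetical world above, not anything a real raw converter does.

```python
# Toy model of the hypothetical 10-band camera. All numbers here are the
# made-up values from the example, purely for illustration.

RAW_BANDS = list(range(10))   # the 10 bands the sensor recorded, darkest first
JPG_WIDTH = 6                 # viewers are used to a 6-band image

def develop(raw_bands, shift=0):
    """Select a contiguous window of 6 bands from the RAW.

    shift = 0 gives the default central window; a negative shift takes the
    darker bands (shadow detail), a positive shift the brighter ones.
    """
    start = (len(raw_bands) - JPG_WIDTH) // 2 + shift
    start = max(0, min(start, len(raw_bands) - JPG_WIDTH))   # stay inside the RAW
    return raw_bands[start:start + JPG_WIDTH]

default_jpg   = develop(RAW_BANDS)        # bands 2-7: shadows and highlights lost
shadow_jpg    = develop(RAW_BANDS, -2)    # bands 0-5: keeps the shadow detail
highlight_jpg = develop(RAW_BANDS, +2)    # bands 4-9: keeps the highlight detail

print(default_jpg)                                    # [2, 3, 4, 5, 6, 7]
print(sorted(set(shadow_jpg) | set(highlight_jpg)))   # all 10 bands between the pair
```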
When these two JPGs are imported into suitable 'HDR' software, it will find the 'common ground', normalise the JPGs so this common ground matches, and internally create an image that now has all ten bands the camera recorded. It can then tone map that image and display the result.
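And here's a matching sketch of the merge step. The half-band scene, the clipping model and the way the 'common ground' is detected are simplified stand-ins for what real HDR software actually does; the point is just the normalise-and-combine idea.

```python
# Toy merge of the two JPGs from the previous sketch. Everything here is a
# simplified stand-in, just to show normalise-and-merge on the 10-band example.

scene = [b / 2 for b in range(20)]      # luminances 0.0 .. 9.5, half-band steps

def render_jpg(scene, window_start, width=6):
    """Map luminances inside a 6-band window to JPG levels 0..6, clipping
    everything outside to pure black or pure white."""
    return [min(max(lum - window_start, 0), width) for lum in scene]

shadow_jpg    = render_jpg(scene, 0)    # luminances 0-6 kept, highlights blow out
highlight_jpg = render_jpg(scene, 4)    # luminances 4-10 kept, shadows block up

# 'Common ground': pixels neither black- nor white-clipped in either JPG,
# i.e. scene content both renderings recorded properly.
common = [i for i in range(len(scene))
          if 0 < shadow_jpg[i] < 6 and 0 < highlight_jpg[i] < 6]

# Normalise: over the common ground the two JPGs differ by a constant
# offset (the exposure difference), 4 bands in this example.
offset = shadow_jpg[common[0]] - highlight_jpg[common[0]]

# Internal merged image: trust the shadow JPG where it isn't clipped and
# lift the highlight JPG by the offset everywhere else.
merged = [shadow_jpg[i] if shadow_jpg[i] < 6 else highlight_jpg[i] + offset
          for i in range(len(scene))]

assert merged == scene                  # all ten bands of the RAW recovered
```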