Bit depth and lossiness are mostly unrelated concepts. Bit depth is how many steps of lightness can be recorded. In 14 bit, there are 16,384 steps; in 12 bit, there are 4096 steps; in a jpeg, there are 256. Your eye can distinguish slightly more than 256, but jpegs are strategically optimized to cater to your eyes, to the point where they are basically as good as you can see. They were specifically designed to be as good as you can see.
Thus, 4096 or 16,384 values are useless UNLESS you need to stretch the pixels during editing. The most common examples are color correction and recovering an overexposure, where you take a tiny sliver of the whitest pixels and stretch them out over a larger range. Having 4096 values means the top 5 white steps of the jpeg correspond to 80 steps in the RAW, so when you stretch them out, you won't see 5 big bands of lightness with sharp borders; you'll see a much smoother gradient of up to 80 steps. But usually you wouldn't stretch far enough to actually use all 80 "backup" steps in your final jpeg, so 12 bit is fine. Only in extreme situations would you need to expand those 5 steps into more than 80 output steps, in which case you would benefit from 14 bit, which gives you 320 steps across that same sliver.
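The arithmetic above can be sketched in a few lines. This is just a back-of-the-envelope calculator, not anything from a real imaging pipeline; the "top 5 steps" sliver is the hypothetical overexposed highlight region from the example.

```python
# How many tonal steps exist at each bit depth, and how many RAW
# steps cover the same range as the top 5 jpeg steps.

def steps(bit_depth):
    """Total tonal steps at a given bit depth (2^bits)."""
    return 2 ** bit_depth

def raw_steps_in_jpeg_sliver(raw_bits, jpeg_steps=5, jpeg_bits=8):
    """RAW steps spanning the same lightness range as `jpeg_steps`
    steps of an 8-bit jpeg (hypothetical example values)."""
    return steps(raw_bits) * jpeg_steps // steps(jpeg_bits)

print(steps(8))                       # 256
print(steps(12))                      # 4096
print(steps(14))                      # 16384
print(raw_steps_in_jpeg_sliver(12))   # 80
print(raw_steps_in_jpeg_sliver(14))   # 320
```

So the same 2% sliver of highlights holds 80 recoverable steps in a 12-bit RAW and 320 in a 14-bit one.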
Realize that 80 steps is almost 1/3 of the dynamic range of your entire jpeg, pulled from just 2% of it. That dramatic an overexposure (multiple stops' worth) is very rare, and you would almost never need more data than this, unless you have abnormal needs (like some crazy surreal style). In almost every situation, you would be better off getting 1 extra FPS if that's an option, instead of 2 extra bits of depth. Even if you don't plan on needing burst, the chance that you might need it is higher than the near-certainty that you will not need 14 bits of depth, unless you know ahead of time that you need it.
All of the above assumes non-lossy data.
Lossiness is different. Lossy jpeg compression doesn't reduce bit depth; it compresses by making neighboring pixels the same value when they were originally slightly different values. This can be done at any bit depth, although all camera jpegs are 8 bit. RAWs can also be lossy depending on your brand, and they do it in a variety of ways, but probably never by reducing bit depth.
The difference is that if you stretch a lossy 16 bit RAW far enough, you won't get banding, but you WILL start to see subtle jpeg-style artifacts that were too minute to show before the stretch. That only happens if you're stretching near the limits of the bit depth, though.
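To see why the extra steps matter when stretching, here's a minimal sketch of a linear stretch, mapping a highlight sliver onto the full 8-bit output range. The specific values (top 5 steps of an 8-bit source vs. the matching 80 steps of a 12-bit source) are the example numbers from earlier, not anything standard.

```python
# Sketch: linearly stretch a narrow source range onto 0..255.
# An 8-bit source sliver yields a few hard bands; a 12-bit source
# covering the same tonal range yields a much finer gradient.

def stretch(values, src_lo, src_hi, out_max=255):
    """Linearly map [src_lo, src_hi] onto [0, out_max]."""
    span = src_hi - src_lo
    return [round((v - src_lo) * out_max / span) for v in values]

# 8-bit source: only 5 steps to work with -> 5 bands ~64 levels apart
print(stretch([251, 252, 253, 254, 255], 251, 255))
# [0, 64, 128, 191, 255]

# 12-bit source: the same tonal range has 80 steps -> steps ~3 apart
fine = stretch(list(range(4016, 4096)), 4016, 4095)
print(fine[:6])  # [0, 3, 6, 10, 13, 16]
```

Five output levels spaced ~64 apart are exactly the "big bands with sharp borders" described above; 80 levels spaced ~3 apart read as a smooth gradient.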
So the question is basically: are you more worried about the look of banding, or of jpeg artifacts? Either one is only relevant if you do a ton of correction in your RAW conversion. Several stops' worth.