> The difference only becomes visible in a final print, and then only if using a high-end printer.
> If you're using a commercial printer and you aren't paying $150.00 for an 8x10, then there's no reason to send them a 16-bit file.
Absolutely none of this discourse is true.
Even if a printer accepts a 16-bit file, there is no reason to print at that depth: no subtractive color space will be capable of reproducing the full 16-bit gamut, and I have a hard time believing the eye can distinguish 1/65,536 of a shade, let alone a resulting color shift, though I could be wrong.
It may be possible (and I repeat, possible) that some printers use 16-bit files in their colorspace conversions, though as far as I understand it, this is done by the color management system (CMS), which is perfectly capable of accepting 16-bit files. So unless there are printers using some proprietary CMS scheme, the *printer* has nothing to do with it. Though I have no idea why a printer manufacturer would build its own color management system when the ICC has already developed an industry standard.
And again, as far as I know the CMS is capable of handling 16-bit (the same rationale as photo printers supposedly "printing in RGB" [they don't]), so yes, there is a theoretical advantage I suppose, but only if the profiles mismatch tremendously.
The one thing that was correct here is that printing likely isn't going to matter. The advantage of 16-bit is, and always has been, editing. Take, for example, adding contrast to the shadows of an image. With an 8-bit image there are 128 shades per channel below middle grey, and only 64 shades below the point halfway between middle grey and absolute black. Pivot a contrast curve in the middle of that range and you're left with only 32 shades on either side: one set to push, the other to pull.
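To put rough numbers on that, here's a minimal sketch of those per-channel counts (Python, with an illustrative helper name of my own; it assumes simple integer levels with middle grey at half of full scale):

```python
def shadow_region_counts(bits):
    max_levels = 2 ** bits              # total shades per channel
    below_grey = max_levels // 2        # shades below middle grey (128 in 8-bit)
    below_quarter = below_grey // 2     # shades below the point halfway to black (64)
    per_side = below_quarter // 2       # shades on either side of a pivot in that range (32)
    return max_levels, below_grey, below_quarter, per_side

for bits in (8, 16):
    total, grey, quarter, side = shadow_region_counts(bits)
    print(f"{bits}-bit: {total} levels, {grey} below middle grey, "
          f"{quarter} below the quarter point, {side} per side of the pivot")
# 8-bit:  256 levels, 128 below middle grey, 64 below the quarter point, 32 per side
# 16-bit: 65536 levels, 32768 below middle grey, 16384 below the quarter point, 8192 per side
```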
As you make these kinds of compressions, shades are naturally truncated, which is what gives the impression of increased contrast. Remember, all edits are destructive. So you start out with only 32 shades, steps that certainly *are* distinguishable by eye, and when you increase the contrast over this region you are truncating tones on an already limited palette.
By comparison, a 16-bit image has 16,384 tones across those same two regions (8,192 on either side of the pivot), which gives much more room to work with before posterizing, which is what truncation taken to the extreme looks like.
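Here's a hedged sketch of that truncation, not anyone's actual workflow: it quantizes a smooth shadow gradient to a working bit depth, applies a simple 2x contrast stretch pivoted in the deep shadows (the curve, the pivot point and the function name are all illustrative assumptions), and counts how many distinct 8-bit display tones survive at the end:

```python
import numpy as np

def edit_then_display(working_bits, contrast=2.0):
    """Quantize a smooth shadow gradient to `working_bits`, steepen it around a
    pivot in the deep shadows, and count the distinct 8-bit display tones left."""
    max_val = 2 ** working_bits - 1
    # A smooth ramp covering the deepest quarter of the tonal range.
    ramp = np.linspace(0.0, 0.25, 10_000)
    # Quantize to the chosen working depth (the "edit in 8-bit" decision).
    working = np.round(ramp * max_val) / max_val
    # Simple contrast stretch pivoted at 1/8 of full scale: push above, pull below.
    pivot = 0.125
    edited = np.clip(pivot + (working - pivot) * contrast, 0.0, 1.0)
    # Store the edit back at the working depth, as an editor would.
    edited = np.round(edited * max_val) / max_val
    # Final 8-bit output, i.e. what a screen or typical print pipeline receives.
    display = np.round(edited * 255).astype(int)
    return np.unique(display).size

for bits in (8, 16):
    print(f"{bits}-bit workflow -> {edit_then_display(bits)} distinct display tones")
```

Run it and the 8-bit workflow should end up with roughly half as many distinct output tones over the same range (something like 49 vs 97), which is exactly the comb-toothed histogram you see when shadows start to posterize.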
But even this isn't 100% accurate, since most cameras record information at 10, 12 or 14 bits. When you choose to edit in 8-bit, you are deleting information from your file that could be used to make edits, and the difference between 10 bits and 8 bits may not sound like much, but it's 25% (two extra bits out of eight, and four times as many tonal levels per channel), which, depending on how you work and look at things, can be pretty significant.
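For reference, a quick back-of-the-envelope of the level counts behind those bit depths (plain Python, nothing more than arithmetic):

```python
# Levels per channel at the common capture and editing bit depths.
for bits in (8, 10, 12, 14, 16):
    print(f"{bits:>2}-bit: {2 ** bits:>6} levels per channel")
# Dropping a 10-bit capture into an 8-bit working file discards 2 of its 10 bits
# and 3 out of every 4 tonal levels per channel.
```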