To sort of expand on what Big Mike said, pretend that your monitor is 8-bit, and that 8-bit means you can have colors ranging from 0 through 9.
Your 8-bit image has lots of dark colors but almost no saturated parts, so in reality the pixels only range from about 0 through 5. In Levels, you adjust 'em so they span the whole range, 0 through 9. But all that did was move the pixels of brightness 5 to 9, those of brightness 4 to 7, 3 to 5, 2 to 3, and keep 1 and 0 the same. So now you have brightnesses of 0, 1, 3, 5, 7, and 9 in your image. You're completely missing 2, 4, 6, and 8, and those gaps show up as brightness "steps" in any kind of gradient.
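If you want to see that play out, here's a quick Python sketch of the toy 0-to-9 example (I'm assuming the stretch just floors the result, which is what gives exactly those values):

```python
# Toy "8-bit" space: 10 levels, 0 through 9. The image only uses 0 through 5.
original = range(0, 6)

# Stretch 0..5 onto 0..9 (multiply by 9/5 and floor).
stretched = sorted({(x * 9) // 5 for x in original})
print(stretched)                                   # [0, 1, 3, 5, 7, 9]

# The levels that no pixel can land on anymore -- the "steps" in a gradient.
missing = [v for v in range(10) if v not in stretched]
print(missing)                                     # [2, 4, 6, 8]
```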
Now let's say you have a 16-bit image, which supports brightnesses from 0 through 99. Your monitor will effectively scale this down from 0 to 9 for display purposes, even though the image has more information.
But say it's the same image, so the brightness only goes from 0 through 50. You adjust the levels to stretch it to span 0 through 99. So you've moved 50 -> 99, 49 -> 97, 48 -> 95, and so on. When you look at the histogram after this, there will still be noticeable gaps at 98, 96, 94, and so on, but because the histogram is squeezed onto the same display scale as the 8-bit one, the gaps look much narrower.
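Same thing in code for the toy 0-to-99 version (same floor-style stretch assumed):

```python
# Toy "16-bit" space: 100 levels, 0 through 99. The image only uses 0 through 50.
stretched16 = sorted({(x * 99) // 50 for x in range(0, 51)})
print(stretched16[:5], stretched16[-3:])           # [0, 1, 3, 5, 7] ... [95, 97, 99]

# The gaps are still there (2, 4, ..., 98), there are just a lot more levels
# in between them, so they look much narrower on the histogram.
gaps = [v for v in range(100) if v not in stretched16]
print(len(stretched16), len(gaps))                 # 51 levels kept, 49 gaps
```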
And then when you convert the final 16-bit image to 8-bit, it basically divides all the pixel brightnesses by 10. So because you had brightness information at 71, 73, 75, ... 99, in 8-bit you'll now have brightness information at 7, 8, and 9. You will NOT be missing brightness 8, which you would have been if you'd originally done everything in 8-bit.
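And the last step, again as a sketch (rebuilding the stretched toy 16-bit values the same way, then dividing by 10 for the conversion):

```python
# Rebuild the stretched toy "16-bit" values, then convert down to the 10-level space.
stretched16 = sorted({(x * 99) // 50 for x in range(0, 51)})   # 0, 1, 3, 5, ..., 99
converted = sorted({v // 10 for v in stretched16})             # "convert to 8-bit": divide by 10
print(converted)           # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -- every level present, no gaps
```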
Make sense?