The danger of it comes from repetition. If the trip through the compressor and back were perfectly exact, then I'd say there is no risk, just the file size to think about. The fact is, though, ghosting happens. (Ghosting is my nickname for the small rounding errors that creep in: decoding a JPEG and re-encoding it is not an exact round trip, so the result can drift a little each time.)
Imagine this:
You are asked to paint a picture.
Now you are asked to paint the same picture again. It will replace the first, so it must be identical to it. Pretty difficult, isn't it?
Granted, people have a much harder time doing this than machines, which are built for the purpose of repetitive tasks.
Machines are great at repeating themselves, but like us they are not always perfect, and that is where the ghosting I mentioned sneaks in. So if you have a Jpeg { Jpeg [ File ] } sandwich, what can happen is artifacting that was not present originally: the second compression is rounding values that were already rounded once, and those errors stack. This doesn't have to happen, but it may, hence the "risk" of multiple compressions. Honestly, will an amateur even notice? No way. I'd be amazed if anyone without a trained eye could spot it. So it isn't the end of the world, just a "best practice" sort of mindset.
Artifacts in JPEGs can be anything from banding, to a single pixel turning the wrong color, to entire color shifts.
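If you'd rather see the drift than take my word for it, here's a minimal sketch, assuming Pillow and NumPy are installed and that "input.jpg" stands in for any JPEG you have lying around; the quality value of 85 is just an arbitrary example setting:

```python
# A minimal sketch of generation loss, assuming Pillow and NumPy are installed
# and that "input.jpg" is a placeholder for any JPEG you have on hand.
from PIL import Image
import numpy as np

original = Image.open("input.jpg").convert("RGB")
reference = np.asarray(original, dtype=np.int16)

current = original
for generation in range(1, 11):
    # Decode -> re-encode at the same (arbitrary) quality, ten times over.
    current.save("resaved.jpg", quality=85)
    current = Image.open("resaved.jpg").convert("RGB")

    # How far have the pixels drifted from the very first decode?
    drift = np.abs(np.asarray(current, dtype=np.int16) - reference)
    print(f"generation {generation}: mean drift {drift.mean():.3f}, max {drift.max()}")
```

With most images the drift grows for the first few generations and then levels off; it rarely becomes obvious to the eye, which is exactly the "best practice, not catastrophe" point above.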
I do apologize, as I was speaking out of turn: Photoshop does let you re-apply JPEG compression as many times as you want. When I said "This is the difference between professional software and freeware..." I started saying one thing and ended with another. What I was trying to say is that programs like Photoshop offer high-quality compression that shouldn't tempt you to re-apply compression, because doing so does not increase quality, it only puts it at risk. What I actually said couldn't be further from the truth, since you can save files over and over again and stack multiple JPEG compressions (but again, that buys you nothing; once compression is applied, re-saving with the same settings won't make the file meaningfully smaller, and it can even grow a little).
Any change in file size comes from the encoder running a second time over data that has already been through it once. The potential quality loss comes from that second pass rounding values that were already rounded the first time around.
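You can check the file-size side of that claim yourself. This is only a sketch, reusing the same hypothetical "input.jpg" and the arbitrary quality of 85 from above:

```python
# Re-save a JPEG a few times and watch the file size; after the first
# re-encode it tends to settle rather than keep shrinking. "input.jpg" and
# the quality value are placeholders, not magic numbers.
import os
from PIL import Image

path = "input.jpg"
for generation in range(1, 6):
    img = Image.open(path).convert("RGB")
    path = f"generation_{generation}.jpg"
    img.save(path, quality=85)
    print(f"{path}: {os.path.getsize(path)} bytes")
```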
I think what is hard to understand is what compression actually does. It isn't random: JPEG chops the image into 8x8 pixel blocks, turns each block into frequency coefficients, divides them by a quantization table chosen by the quality setting, and rounds the results. That rounding is what shrinks the file, and it is also where data gets thrown away for good. Given the same settings and the same input, the compressor does the same thing every time, which is why a second pass can't shrink the file much further.
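To make the rounding concrete, here's a toy sketch of just the quantization step; the four "coefficients" and the step size of 10 are invented for the demo, standing in for a real 8x8 block and its quantization table:

```python
# Toy quantization: divide, round, multiply back. The rounding is the lossy
# part. These coefficients and the step of 10 are made up for illustration.
import numpy as np

coefficients = np.array([123.0, 47.0, 15.0, 4.0])
step = 10.0

quantized = np.round(coefficients / step)   # data is thrown away right here
restored = quantized * step                 # what the decoder reconstructs

print(restored)                          # [120.  50.  20.   0.] -- no longer the originals
print(np.round(restored / step) * step)  # quantizing again with the same step is stable,
                                         # but any resize, crop, or color-space rounding
                                         # in between re-rounds and adds fresh error
```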
I've got another good example! Just thought of it now.

Ok, stay with me on this one; it may be a bit of a jump, but I think it works:
Think of a compressor as a puzzle maker! It takes the file and cuts it into a puzzle that must be re-assembled by another program later. Say a quality 12 compression breaks an image into 100 puzzle pieces. If you add another quality 12 compression, it doesn't know a first one ever happened: it re-assembles the pieces back into the full picture and then cuts a brand-new puzzle from it, the same exact way, just like a puzzle is unique and can only be assembled one way. This is where the risk comes in, because if the re-assembly or the re-cutting is off by even a little in the decimal places, the new pieces don't quite match the old ones, and that mismatch can show up as all kinds of artifacts.
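Here's one more hedged sketch in the same spirit, again leaning on the hypothetical "input.jpg" and quality 85: it counts how many pixels change on the first re-save versus the second. Usually the second pass changes far fewer pixels, because the puzzle is being cut the same way again, but it isn't guaranteed to change none at all, and that leftover wiggle is the risk.

```python
# Compare how many pixels the first and second re-save actually change.
import numpy as np
from PIL import Image

gen0 = Image.open("input.jpg").convert("RGB")
gen0.save("gen1.jpg", quality=85)
gen1 = Image.open("gen1.jpg").convert("RGB")
gen1.save("gen2.jpg", quality=85)
gen2 = Image.open("gen2.jpg").convert("RGB")

a, b, c = (np.asarray(im, dtype=np.int16) for im in (gen0, gen1, gen2))
print("pixels changed by 1st re-save:", int(np.count_nonzero(np.any(a != b, axis=-1))))
print("pixels changed by 2nd re-save:", int(np.count_nonzero(np.any(b != c, axis=-1))))
```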