For those familiar with computers: on the old ones' complement machines, negative zero is represented as all bits set to 1, the first (high-order) bit being the sign bit. Since it's negative, the absolute value is the complement of the stored bits (all 1s become 0s, and all 0s become 1s), but negative. So it's really a (negative) 00000000000000000000.... And the square root of a number is whatever, multiplied by itself, gives back the original number (2 is the square root of 4, etc). Zero times zero is, of course, zero. So the square root of negative zero is zero, then set negative again, i.e., back to all 1 bits (FFFF to us mainframers). And dividing any nonzero number by itself is, by mathematical definition, one (1); zero divided by zero, strictly speaking, is undefined. So, the answer to your question is really negative (that old 1 bit up front!) zero.
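For the curious: modern hardware still carries a negative zero in floating point (IEEE 754: sign bit set, all other bits 0), and it behaves much as described above. Here's a minimal C sketch, assuming a typical 64-bit double and a standard C library, that dumps the bit pattern and shows sqrt(-0.0) coming back as -0.0, while zero over zero of any sign is NaN rather than 1:

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        double nz = -0.0;                 /* IEEE 754 negative zero */
        uint64_t bits;
        memcpy(&bits, &nz, sizeof bits);  /* view the raw bit pattern */

        /* Only the sign bit is set: 8000000000000000 */
        printf("-0.0 bits   : %016llx\n", (unsigned long long)bits);

        /* IEEE 754 defines sqrt(-0.0) as -0.0, sign preserved */
        printf("sqrt(-0.0)  : %g (signbit=%d)\n",
               sqrt(nz), signbit(sqrt(nz)) != 0);

        /* -0.0 compares equal to +0.0, so it rarely bites in practice */
        printf("-0.0 == 0.0 : %d\n", nz == 0.0);

        /* 0/0, whatever the signs, is NaN, not 1 */
        printf("-0.0 / -0.0 : %g\n", nz / nz);
        return 0;
    }

Compile with something like cc demo.c -lm; the exact way printf renders -0 and nan can vary a little between C libraries.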
And now you know why computers use 2s complement for arithmetic...it eliminates the need for negative zero and all the hassles it causes. As a result, in 16 bits, negative one is represented as 1111111111111111 (that same old FFFF), negative two as 1111111111111110, and there simply is no negative zero: the only zero is 0000000000000000. Go figure!
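If you want to see those patterns for yourself, here's a small C sketch (16-bit int16_t, with a hypothetical print_bits helper of my own) that dumps the two's complement encodings:

    #include <stdint.h>
    #include <stdio.h>

    /* Print a 16-bit value as binary, most significant bit first. */
    static void print_bits(int16_t v) {
        uint16_t u = (uint16_t)v;
        for (int i = 15; i >= 0; i--)
            putchar(((u >> i) & 1) ? '1' : '0');
        putchar('\n');
    }

    int main(void) {
        print_bits(0);    /* 0000000000000000 : the one and only zero */
        print_bits(1);    /* 0000000000000001 */
        print_bits(-1);   /* 1111111111111111 : negative one is all 1 bits */
        print_bits(-2);   /* 1111111111111110 */
        return 0;
    }

Notice that the all-1s pattern, which meant negative zero on the ones' complement machines, is exactly the pattern two's complement reuses for negative one, so there's nothing left over to play the negative-zero role.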