I would like to try to make reliable size measurements of physical objects by imaging.
If I take a photo with a digital camera (like a "smartphone"), can I count on the pixels representing the same angle of view in the horizontal and vertical directions? For example, if I take a photo of a square straight on, perpendicular to its center, will it measure as many pixels horizontally as vertically in the image the camera produces?
I've made a test and the difference is less than 2%, which is within my margin of error for measuring the actual physical object and the angles involved. I used a 1600x1200 setting on a mobile phone, and the measured field of view in inches was indeed very close to the 1600/1200 = 1.333 width-to-height proportion.
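For reference, this is roughly the check I am doing, written out as a small Python sketch. The square size and pixel extents below are made-up placeholder numbers, not my actual measurements:

```python
# Sketch of the aspect-ratio check described above.
# All numbers are hypothetical placeholders, not real measurements.

square_side_in = 10.0       # physical side length of the square, in inches
width_px = 1180.0           # measured horizontal extent of the square, in pixels
height_px = 1168.0          # measured vertical extent of the square, in pixels

ppi_horizontal = width_px / square_side_in   # pixels per inch, left-right
ppi_vertical = height_px / square_side_in    # pixels per inch, up-down

# If each pixel covers the same angle in both directions, this ratio should be ~1.
pixel_aspect = ppi_horizontal / ppi_vertical
deviation_pct = abs(pixel_aspect - 1.0) * 100

print(f"pixel aspect ratio: {pixel_aspect:.4f} ({deviation_pct:.2f}% deviation)")
```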
However, it strikes me that this might be an artefact introduced by the JPG format I used to view the image, or even earlier, in the camera's internal automatic image processing; perhaps the software rescales things to make the picture look better. Might I get different proportions if I use another image format?
And just out of curiosity, as a follow-up:
If I had the rawest form of the image, say YUV or Bayer data, would it still be true that the number of pixels per radian horizontally is the same as the number of pixels per radian vertically? Are the optics and sensor physically designed that way, or is it handled in software afterwards?
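To make the question concrete, here is the kind of back-of-the-envelope calculation I have in mind, using assumed (not measured) smartphone-typical values for the focal length and pixel pitch:

```python
# Rough estimate of "pixels per radian" near the image center.
# The focal length and pixel pitch are assumptions, not the specs of any
# particular phone.

focal_length_mm = 4.2        # assumed lens focal length
pixel_pitch_um_x = 1.4       # assumed sensor pixel pitch, horizontal
pixel_pitch_um_y = 1.4       # assumed sensor pixel pitch, vertical

# Small-angle approximation: one pixel subtends ~ (pitch / focal length) radians,
# so pixels per radian is ~ focal length / pitch.
px_per_rad_x = focal_length_mm / (pixel_pitch_um_x * 1e-3)
px_per_rad_y = focal_length_mm / (pixel_pitch_um_y * 1e-3)

print(f"horizontal: {px_per_rad_x:.0f} px/rad")
print(f"vertical:   {px_per_rad_y:.0f} px/rad")
# With physically square pixels (equal pitch in x and y) the two match;
# any difference would have to come from the optics or later resampling.
```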
Thanx y'all!