I edit photos to try to match my memory/feelings/impressions of a scene. A camera sensor just captures numbers. That only roughly approximates seeing, in which scenes are assembled in our minds so as to appear to have infinite depth of field and super-high contrast ratios. Plus, they are in 3D! And everything is colored by our emotional reaction.

You can create an image that closely maps the captured numbers, or you can create an image that tries, in two dimensions, with a limited gamut, and so on, to replicate what you saw when you took the photo. Either can be considered "accurate" in one sense and "inaccurate" in another. Consider the simple act of applying sharpening to sensor data: we are leaving the path of numerical absoluteness and entering the path of reproducing our vision. Heck, even setting a white point has less to do with capturing "reality" than with imitating our visual system. Editing out undesired elements might seem to cross the line, but our brains do this all the time.

I recently spent about two days on one photo of a waterfall in Iceland. It required adjusting the exposure levels of four areas, and I had to do a ton of clean-up to make the transitions appear realistic when viewed in a large print. I showed the finished product to my wife and she didn't bat an eye; the scene looked just the way she remembered.
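The sharpening mentioned above is a good illustration of "leaving the numbers behind": the classic approach, unsharp masking, doesn't recover detail at all. It blurs a copy of the image and adds back a scaled difference, exaggerating local contrast at edges because that is what our visual system responds to. Here is a minimal numpy sketch of the idea, not any particular editor's implementation; the simple box blur and the `amount` parameter are my own simplifications for illustration.

```python
import numpy as np

def box_blur(img, radius=1):
    """Blur a 2-D float image with a simple box filter (edge-padded)."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.0, radius=1):
    """Sharpen by adding back the difference between the image and a blurred copy."""
    blurred = box_blur(img, radius)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# A soft edge from dark to light: sharpening steepens the transition,
# darkening the dark side and brightening the light side of the edge.
vals = np.array([0.0, 0.0, 0.0, 0.25, 0.75, 1.0, 1.0, 1.0])
img = np.tile(vals, (8, 1))
sharp = unsharp_mask(img, amount=1.5)
```

Nothing in `sharp` corresponds to light that hit the sensor; the overshoot on either side of the edge is pure invention, yet the result looks more like what we remember seeing.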