Following a rabbit hole to see how far it will go... what is "real"?

NGH

Not positive this is the right place for this; I considered 'The Coffee House', but that seems a little too scary a place for a newbie like me.

I pondered this week, while eating my turkey wrap, about the most recent raft of camera-phone technology, and about how 'real' things have to be to be accepted. When you take a picture using these latest advances, the 'intelligence' in the device takes over: as well as optimizing the exposure settings to get the best results, it will manipulate the data (according to themes or other settings) so that the resulting image is 'better'. Better, perhaps, than the original scene was? This has been a trend for a while; even my five-year-old camera can combine the data from its two lenses to let me manipulate focus and depth of field after the fact.
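
As an aside, that two-camera trick is less magic than it sounds: the second lens helps estimate per-pixel depth, and the blur is then painted in by software. Here is a toy sketch of the idea in Python with OpenCV; the file names, the single focus value, and the crude two-layer blend are my own simplifications, not any phone's actual pipeline:

```python
import cv2
import numpy as np

# Load an image and a per-pixel depth map (0 = near, 255 = far).
# "photo.jpg" and "depth.png" are placeholder file names.
image = cv2.imread("photo.jpg")
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

focus = 0.2      # depth value we want sharp (e.g. the subject)
strength = 25    # blur kernel size for out-of-focus regions (must be odd)

# Precompute a heavily blurred copy, then blend per pixel: the farther
# a pixel's depth is from the focus plane, the more of the blurred copy
# it receives - a crude synthetic shallow depth of field.
blurred = cv2.GaussianBlur(image, (strength, strength), 0)
weight = np.clip(np.abs(depth - focus) * 2.0, 0.0, 1.0)[..., None]
shallow_dof = (image * (1.0 - weight) + blurred * weight).astype(np.uint8)

cv2.imwrite("shallow_dof.jpg", shallow_dof)
```

Because the depth map is stored alongside the image, the `focus` value can be changed after the shot, which is exactly the refocusing trick being advertised.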

Does this matter? The device is only doing what it can to make the best picture; whether it is an 'accurate' representation of what the lens saw is secondary, right? As long as it makes a great picture. For most people this isn't about learning a skill, it is just about capturing a scene to share.
This is, in many ways, no different from 'proper' photographers manipulating images in Lightroom or even a darkroom: it is acceptable to make the most of what was captured to create the best image. And (as long as you are honest about what you have done) it is also now acceptable to combine images to produce something removed from what was actually seen, taking the best elements from various shots.

There has been plenty of debate about whether this is still photography.

Thinking about this, I took a slight tangent: surely the technology already exists for a device that knows where it is (via GPS) to skim all the similar images on the internet when the shutter is pressed and, on top of the data it just captured from the scene, use that skimmed data to further enhance the new image? This could (hypothetically) be done without infringing copyright, absorbing data from a multitude of variations to produce something new and unique. Still acceptable? Surely it's not that different from what could be achieved in, say, Photoshop?
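
If you'll forgive a sketch of my own tangent: the simplest version of "use other images of the scene to enhance mine" is aligning the look-alikes to the new frame and averaging them, which suppresses noise. A toy version in Python with OpenCV; the file names are hypothetical, and a real device would do something far cleverer than a straight average:

```python
import cv2
import numpy as np

def align(reference, target):
    """Warp `reference` onto `target` using ORB matches and a RANSAC
    homography - a crude stand-in for whatever a real device would do."""
    gray_r = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    gray_t = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(gray_r, None)
    k2, d2 = orb.detectAndCompute(gray_t, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = target.shape[:2]
    return cv2.warpPerspective(reference, H, (w, h))

# "my_shot.jpg" is the freshly captured frame; the "web_*.jpg" files
# stand in for the hypothetical skimmed look-alikes of the same scene.
shot = cv2.imread("my_shot.jpg")
refs = [align(cv2.imread(name), shot) for name in ("web_1.jpg", "web_2.jpg")]

# Naive 'enhancement': average the aligned stack with the original.
# Averaging suppresses sensor noise; a real system would do something
# smarter (detail transfer, tone matching), but the principle is the same.
layers = [shot.astype(np.float32)] + [r.astype(np.float32) for r in refs]
cv2.imwrite("enhanced.jpg", np.mean(layers, axis=0).astype(np.uint8))
```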

Taking it a step further, does such a device even need to be at the scene at all? A user could simply request a picture: "make me an image of the Grand Canyon from the west on a spring morning". The device scrapes all the images it can find and, through amalgamation and the user's preferences, makes a new and previously unseen image of the Grand Canyon.
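
(For what it's worth, that second idea is more or less what today's text-to-image models do, minus the GPS: they absorb millions of scraped photos at training time rather than scraping per request. A minimal sketch using the open-source diffusers library, with the model choice and prompt purely illustrative:)

```python
import torch
from diffusers import StableDiffusionPipeline

# The model has already digested a vast corpus of scraped images during
# training; no fetching happens at request time. Requires a CUDA GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("the Grand Canyon from the west on a spring morning").images[0]
image.save("grand_canyon_never_taken.jpg")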

With time and the appropriate intelligence, both hypothetical ideas could work and produce amazing pictures. Would they be accepted as 'photographs' by the population at large (us purists excluded)?

At what point does a line get drawn? ...If at all
 
Is this even real life? Are we in the Matrix? Who cares?
 
Nigel, I think you're on to something here. You should delve into coding and develop the features you've described. I think you'll become the next Jobs or Gates.
 
When you take a picture using these latest advances, the 'intelligence' in the device takes over: as well as optimizing the exposure settings to get the best results, it will manipulate the data (according to themes or other settings) so that the resulting image is 'better'. Better, perhaps, than the original scene was? This has been a trend for a while...

This discussion, in various iterations, has been going on since the second half of the 19th century. You're a bit late to the party, but that's OK, because I don't know that it has been settled yet.

How photography evolved from science to art
 
I don't think I'm asking whether photography is art, or even whether photo manipulation is photography (or art); I thought I had steered away from that. I was just wondering, if the technology allowed someone to make pictures from an algorithm instead of from what was in front of them, whether there is a line beyond which it isn't a photograph (beyond the literal definition of 'light drawing'). It's just a fun muse for an idle mind; I'm not trying to start a serious debate.
 
Nigel, I think you're on to something here. You should delve into coding and develop the features you've described. I think you'll become the next Jobs or Gates.

Thanks, I will get my people right on it - best get that patent in first :D
 
Let's not waste scarce brain cells debating how long a piece of string is. Let's focus instead upon the classics, such as "which is heavier, a pound of feathers or a pound of gold?", or my favorite, "how many angels could dance on the head of a pin?"

Computational photography is what you are referring to, and it is an emerging field.
 
Pound? I thought those were banned and only kilograms were allowed :D

Yes, it is computational photography, with a twist.
 
Well, that technology already sort of exists: it's called photogrammetry, and it is a way of making 3D models from photographs or videos. Currently it only takes information from one camera at a time, but technology can be quite amazing, and we don't know where it will take us in the future. I'm pretty sure the output would still be a 3D model, but eventually they'll probably get to photorealistic rendering.
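
For anyone curious about the mechanics, here's a bare-bones two-view sketch in Python with OpenCV of the geometry photogrammetry builds on: match features between two photos, recover the relative camera pose, and triangulate a sparse 3D point cloud. The file names and the guessed focal length are placeholders; real pipelines do this across hundreds of images and then densify and texture the result.

```python
import cv2
import numpy as np

# Two photos of the same scene from slightly different positions.
img1 = cv2.imread("view_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_2.jpg", cv2.IMREAD_GRAYSCALE)

# Match features between the views.
orb = cv2.ORB_create(3000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# Assume a simple pinhole intrinsic matrix (focal length guessed).
h, w = img1.shape
K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)

# Recover the relative camera pose, then triangulate matched points
# into a sparse 3D point cloud - the seed of a photogrammetric model.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
points4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (points4d[:3] / points4d[3]).T  # homogeneous -> Euclidean
print(f"Triangulated {len(points3d)} 3D points from two photos")
```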
 
PS - the human portraits are freakishly real. Or should I say "real"? There is a cat version as well. Not so real. By any definition.
 
Wow, those artificially-created human portraits do indeed look exceptionally real.
 
They do indeed. It reminds me of something a friend said back when I started working in computers. He said, "if you set a computer to randomly populate pixels on a screen and left it for an infinite amount of time, at some point it would show a perfect image of Margaret Thatcher." I guess you can tell how long ago that was... and I guess it wasn't so much of a joke after all.
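
The back-of-the-envelope numbers on that joke, for fun: any single random fill of the screen is astronomically unlikely to match, but the probability of never matching shrinks toward zero as the attempts pile up. A quick sketch, assuming a tiny 100×100 one-bit screen just to keep the exponent readable:

```python
import math

# A 100x100 screen of 1-bit pixels: every pixel must land right,
# so the chance per random fill is p = 2 ** -10000.
pixels = 100 * 100
log10_p = -pixels * math.log10(2)
print(f"Chance per attempt: about 1e{log10_p:.0f}")  # about 1e-3010

# The chance of *never* matching in n attempts is (1 - p) ** n, which
# tends to 0 as n grows - so given literally infinite time, the perfect
# Thatcher portrait does appear.
```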
 
