
View Full Version : What are colours?



parallaxicality
2004-Dec-05, 11:56 AM
I was thinking about the nature of light this morning, because an ad for a new documentary on light was on the television, and a question popped into my head: why is red red? Why is blue blue? Are colours inherent properties of matter and energy, or are they merely products of our own perception?

Evan
2004-Dec-05, 12:16 PM
At first glance this sounds like a trivial question, but it isn't. Red is red because we call it red. But, there are good reasons why our vision encompasses this particular band of frequencies.

The difference between deep red and violet light comprises a single octave. That is to say, the frequency of deep violet is roughly twice that of deep red. So, why do we see these particular colors (the names are arbitrary, the frequencies are not)? It so happens that the chemical reactions that occur in our eyes are dependent on the energy levels of the photons. It takes a certain amount of energy to bump electrons to higher orbits and create a higher energy state in an atom.

Below the red range of color this is hard to achieve, as the energy of the photon is too low. At the violet end of the spectrum we begin to encounter too much energy: electrons receive so much that they are stripped from their atoms entirely. This is called ionizing radiation. It isn't healthy for biological constructs.

So, it is no coincidence that we see the range of colours that we do. It is determined by the energy states that are available from the photons when they interact with organic molecules, without destroying them.
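The energies involved can be sketched with a quick calculation of E = hc/λ. The 700 nm and 400 nm endpoints below are rough conventions for deep red and violet, not exact boundaries:

```python
# Photon energy at the edges of the visible band, E = h*c / wavelength.
# 700 nm and 400 nm are approximate endpoints for deep red and violet.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of one photon of the given wavelength, in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

red = photon_energy_ev(700)     # roughly 1.8 eV
violet = photon_energy_ev(400)  # roughly 3.1 eV, close to twice the red value
```

A couple of eV is exactly the scale of electronic transitions in organic molecules, which is the reason given above for the visible band sitting where it does.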

The appearance of colour may be produced by a number of mechanisms. It is not simple.

I have a tutorial that may be of interest and is related to this subject, especially for anyone that wishes to maintain colour accuracy from input to output on a computer. It may be found here. (http://vts.bc.ca/color.htm)

Without delving into the ways that the brain and eye may be fooled, the colour of a substance may be produced by which photons it absorbs and which it doesn't. This is the way that the colour of most common materials is produced. A red shirt contains dye molecules whose structure lets them absorb photons across most of the visible spectrum; red photons carry the wrong energy to be absorbed, so they are reflected or scattered back to the eye.


In a related phenomenon we have the colour produced by selective transmission. This is caused by the size of the particles that make up the substance. A good example is smoke. The particles in smoke are of a size that approaches the wavelength of blue light, so they tend to reflect blue photons. A red photon has a wavelength about twice as long and cannot be reflected efficiently by such a particle: the "mirror" is too small. So the red photon "ignores" the particle and passes it by. That is why we see smoke as bluish by reflected light but reddish by transmitted light.

Another mechanism for creating the appearance of colour is interference. By creating reflective layers with a spacing matched to the wavelengths of certain photons, different colours may be produced. This is readily apparent in an oil film floating on water. As the thickness of the film varies, so do the photons that fit it: if a photon's wavelength fits the thickness of the film it will be reflected; if not, it will be transmitted.
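The oil-film case can be sketched numerically. Assuming oil with refractive index about 1.47 floating on lower-index water, only the reflection at the top surface picks up a half-wave phase shift, so strong reflection requires 2nt = (m + 1/2)λ. The index and thickness values here are purely illustrative:

```python
# Thin-film interference sketch: which visible wavelengths reflect
# strongly from an oil film of a given thickness. Assumes oil with
# n = 1.47 on water (lower index), so only the top-surface reflection
# is phase-shifted and constructive reflection requires
# 2 * n * t = (m + 1/2) * wavelength.
N_OIL = 1.47  # illustrative refractive index

def reflected_wavelengths_nm(thickness_nm, lo=400, hi=700):
    """Visible wavelengths (nm) strongly reflected by a film this thick."""
    out = []
    m = 0
    while True:
        lam = 2 * N_OIL * thickness_nm / (m + 0.5)
        if lam < lo:
            break
        if lam <= hi:
            out.append(round(lam))
        m += 1
    return out

# A 250 nm film reflects strongly near 490 nm (blue-green), while a
# thinner 100 nm film reflects near 588 nm (yellow) instead -- which is
# why a film of varying thickness shows bands of different colours.
```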

There are other mechanisms as well. For a complete treatment of this subject I refer you to the book QED by Richard Feynman.

eburacum45
2004-Dec-05, 01:12 PM
An interesting related question is what causes the specific subjective experience associated with each colour? This is a difficult question, and cannot be resolved yet with our current understanding of neurology; but two phenomena give some insight into the subjective experience of colour.

One is colourblindness; people with various kinds of colourblindness still see colours, but are unable to distinguish between various shades. What is interesting is that colourblind people are often unaware of this inability until they are tested; as far as their internal experience goes, they can see a full range of colours. This internal experience of stimuli is sometimes called the 'qualia'; there is a long-running debate in philosophy about whether one person's 'qualia' are similar or identical to another's, given the same stimuli.

This philosophical question is impossible to settle at the moment, and may never be fully resolved; but the fact of different perception of colours by colourblind persons implies some real differences in qualia.

Another indicator of different perceptions of the same stimuli can be obtained by examining the neurological phenomenon of synaesthesia; this is the unusual phenomenon (extremely pronounced in some few people) of involuntary association between different sensory inputs.
Some people who experience synaesthesia make involuntary associations between colours and the other senses; however, it is noticeable that these associations show very little correlation from person to person, although some small correlations have been suggested:
http://home.comcast.net/%7Esean.day/Trends2004.htm

The lack of a strong correlation suggests to me that most people have dissimilar experiences of colour and other stimuli, but a weak correlation may mean that qualia are broadly similar from person to person.

JohnD
2004-Dec-05, 02:35 PM
All,
Indeed, colour blindness is very common - in men, 1 in 10-20; in women it is much less common. Dalton, who first proposed modern atomic theory, was colour blind. Many of you reading this will be colour blind to some extent. If you go to this set of lecture slides http://www.cs.stir.ac.uk/courses/31N5/web2004/handouts/colour1_colour.pdf from Stirling University, you can check your vision using the test charts therein.

Evan, thanks for the explanation of why these wavelengths. I believe that animals can see a different span of wavelengths - for instance, insects see UV - but presume that this is only a small difference in frequency?
John

TrAI
2004-Dec-05, 04:39 PM
Hmmm... There is the old question of whether all people see colors the same way. I would guess that the mapping of the different frequencies to the perceived colors is something that develops as the brain does, so that the exact mapping may differ (the same frequency may look different to two people), but there are similarities in the genetics that make the possible color range similar in all.

There is nothing, as far as I know, that suggests that a certain frequency of light is more suited to be mapped to a certain perceived color.

I do remember reading that the receptors in the eye are sensitive from a little way into the IR range up into the UV range, but the UV light is blocked by the cornea (IIRC). Some people going through eye surgery had volunteered for the experiment, and as I recall it, they said that the UV light had looked light brown or something like that, but sadly I do not remember where I read this.

Evan
2004-Dec-05, 09:01 PM
JohnD,

You are correct. Bees in particular can "see" in ultraviolet. They have an extended spectrum sensitivity compared to us, but not by much. Long ultraviolet is not all that damaging as it is just barely ionizing and in a short lived creature like a bee that is not an important issue compared to finding food. For us excess long term exposure to UV can and does result in blindness. That is why the Inuit invented snow goggles (http://www.ic.sunysb.edu/Clubs/socia/grasstech/inuitgog.html).

Joe Durnavich
2004-Dec-06, 12:21 AM
This philosophical question is impossible to settle at the moment, and may never be fully resolved

If the question seems so mysterious and if it seems like it may never be resolved, perhaps the problem is with the question.

Why do we want to think that color is an "experience"? I am never sure what others mean when they speak about "the experience of red" and sometimes suggest that we each may have our own "experience of red". Color blindness is sometimes offered as an example, but what is it about a person not being able to tell two objects apart on the basis of color that suggests that color is some sort of "experience" for this person? Likewise, synesthetes are those who group things together in ways most of us don't. What serves as direct evidence here that color is an "experience"?

TrAI
2004-Dec-06, 02:36 AM
This philosophical question is impossible to settle at the moment, and may never be fully resolved

If the question seems so mysterious and if it seems like it may never be resolved, perhaps the problem is with the question.

Why do we want to think that color is an "experience"? I am never sure what others mean when they speak about "the experience of red" and sometimes suggest that we each may have our own "experience of red". Color blindness is sometimes offered as an example, but what is it about a person not being able to tell two objects apart on the basis of color that suggests that color is some sort of "experience" for this person? Likewise, synesthetes are those who group things together in ways most of us don't. What serves as direct evidence here that color is an "experience"?

Hmmm, I guess it depends on what one means by experience. If you are thinking of how you perceive the color, that is, in my view, something that exists only inside the brain and mind. There is nothing about a certain frequency of light that makes it more suited than any other to be what you think of as red; there is no red before the brain has processed the information the eyes give it. Color is only a way to make differences in frequency easily visible to you (as a personality/consciousness). It is this way with all the senses, really: all of them are processed and presented to your consciousness, and the way you experience them is just a representation of the information (which in itself is just an electro-chemical representation of stimuli), a sort of abstraction from the processes used to sense the stimuli and transfer the information.

Without this abstraction it would be necessary for people to consciously process all sensory data, something that would be rather tedious, really…

Joe Durnavich
2004-Dec-06, 05:26 AM
TrAI, if I understand you correctly, you draw a line at the eyeballs, with the world on one side and the brain on the other, and then want to search for "red" on either side of that line. You argue that red cannot be something out there in the world such as the frequency of light or anything that occurs before the brain has processed sensory stimuli. Therefore, by process of elimination, red must be something inside the brain.

This still doesn't point me to this thing you folks call an "experience of red". I have a red mousepad on my desk here. When I call it "my red mousepad", or when I think of "the red mousepad on my right as opposed to the black mousepad on my left", the matter of color does not seem mysterious or puzzling in the least.

But when you want me to consider "red" as some sort of thing, a thing separate and distinct from the mousepad, such as an "experience" or as a frequency of light for that matter, then the matter of color all of a sudden becomes puzzling and I feel like there is something there that we need to get to the bottom of, but never will.

I have to wonder if philosophy has mistakenly reified colors into entities of some sort, and then when it fails to find them, it charges scientists with the task of hunting them down.

Ut
2004-Dec-06, 05:33 AM
Heh. Yeah, I've had this conversation.

What it boils down to is that what my brain considers to be "red" and what your brain considers to be "red" may not necessarily be the same. But it doesn't matter, because it's still red. Red is defined by a wavelength of light. As is blue, and green. The others, too. It becomes rather difficult to philosophize about definitions like that.

What you see may be subjective. What is seen is not.

Joe Durnavich
2004-Dec-06, 05:34 AM
Without this abstraction it would be necessary for people to consciously process all sensory data, something that would be rather tedious, really…

Does the brain have to then go on and process this abstraction? Or is the abstraction "red" at this point and no further processing is needed?

TrAI
2004-Dec-06, 12:26 PM
TrAI, if I understand you correctly, you draw a line at the eyeballs, with the world on one side and the brain on the other, and then want to search for "red" on either side of that line. You argue that red cannot be something out there in the world such as the frequency of light or anything that occurs before the brain has processed sensory stimuli. Therefore, by process of elimination, red must be something inside the brain.

Well, photons have one thing that can be changed, and that is the frequency. Now, a bunch of photons in the frequency range labeled as red enters the eye and strikes the retina (of course these are not the same photons that entered the eye, but that is another discussion really).

In the retina we have a few different color-sensitive cells. The idea is that they contain chemicals that break down when hit by light, and the more light that hits them the faster this happens. They also contain a pigment so that only certain ranges of frequencies can strike the chemicals. The breakdown triggers the nerve signals. The signal is then further processed by other cells to reduce the needed bandwidth and to improve the detection of things like edges.

Anyway, the signal reaching the brain does not contain any information about the frequency itself; that is, the signal is not in relation to the frequency of the light that hit the retina. The brain gets the relative intensity of the light registered by the three different types of color-sensitive cells. Now, this is a lot of information, and it would be hard for the conscious mind to process, so the brain has an area dedicated to processing this data into a more understandable form. It is a bit like how many man-made devices contain application-specific circuits: though the data could be processed fully in software on a general processor, having circuits that are optimized to handle a certain type of data is much faster and means that the CPU has more time for high-level functions.

The upshot of this is that what we experience when seeing is not the world, but a representation of it, and what we experience as color is a way to represent the intensity of the stimulation of the different light-sensitive cells.
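The description above can be sketched as a toy model: the signal sent toward the brain is just three relative intensities, not the light's frequency. The Gaussian sensitivity curves and peak wavelengths below are rough illustrative stand-ins, not real cone data:

```python
import math

# Illustrative peak sensitivities for the three cone types (nm);
# real cone curves are broader and asymmetric.
CONES = {"L": 565.0, "M": 540.0, "S": 445.0}
WIDTH = 45.0  # assumed curve width, nm

def cone_response(wavelength_nm):
    """The triplet of relative stimulations the brain actually receives."""
    return {name: math.exp(-((wavelength_nm - peak) / WIDTH) ** 2)
            for name, peak in CONES.items()}

# Different wavelengths yield different triplets, but the wavelength
# itself is never transmitted -- only these three numbers are.
red_light = cone_response(650)    # L cone responds strongest
green_light = cone_response(530)  # M cone responds strongest
```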



This still doesn't point me to this thing you folks call an "experience of red". I have a red mousepad on my desk here. When I call it "my red mousepad", or when I think of "the red mousepad on my right as opposed to the black mousepad on my left", the matter of color does not seem mysterious or puzzling in the least.

That it is hard to see the difference between the concept of light frequency, the name label we have given a frequency range, and the way we think of and experience it is a mark of how well this system (the eyes/brain) works, I guess.


But when you want me to consider "red" as some sort of thing, a thing separate and distinct from the mousepad, such as an "experience" or as a frequency of light for that matter, then the matter of color all of a sudden becomes puzzling and I feel like there is something there that we need to get to the bottom of, but never will.

I have to wonder if philosophy has mistakenly reified colors into entities of some sort, and then when it fails to find them, it charges scientists with the task of hunting them down.

What I am trying to say is that color is not an object or entity in itself; it is a way to represent information about the world to the part of the brain where the consciousness resides. The light that reflects off the mouse pad is a separate and distinct thing from the mouse pad itself, though its photons have been exchanged for others by the properties of the atoms in the surface. Likewise, the signals from the eye are a separate thing from the photons that hit the retina, but they are a representation of the relative intensities the light had. The frequency is gone, but we do have the signals from the different color-sensitive cells, so we can still recognize that there was something different between the light reflected from the two pads. It is this that the processing done in the brain captures by mapping the different relative intensities to what we perceive as colors.


Does the brain have to then go on and process this abstraction? Or is the abstraction "red" at this point and no further processing is needed?

It is sent to a different part of the brain; the transfer and receiving is a sort of process in itself. The thing with abstraction is that you need no knowledge about how the hardware (or wetware, if you prefer) in your brain or body works. You do not have any understanding of how the brain represents different things except what you experience; you do not feel the electro-chemical nature of the signals transferring this, or even of the ones that represent your personality itself. You do not have to bother with knowing the exact sequence of signals used to address and control your hands when typing at the keyboard; it is enough to want to move your hands. And there are many levels: I don't have to think of where the keys are to type, or even look at them. In fact, when I do consciously try to remember where the different keys are, I am likely to end up making more mistakes than if I just focus on what I want to write. It is fascinating that the brain can not only learn to abstract me from the tedious tasks of controlling my own body, but also from the operation of things external to it.

George
2004-Dec-06, 01:54 PM
Here's a few related questions...

In astronomy...

What are the advantages of false color?

What are the advantages of "true" color?

Joe Durnavich
2004-Dec-06, 04:26 PM
What I am trying to say is that color is not an object or entity in itself; it is a way to represent information about the world to the part of the brain where the consciousness resides.

The relation of representation, though, is a relation between two objects or entities. The physical ink markings on a letter, for example, represent my physical mailbox. Now, if there is such a thing as "an experience of red", and if it is to be found in the brain, and if it is a representation, then I presume it must be in the form of a neural state or states. Let's say we ask a neurobiologist to find the neural state in a particular person that constitutes the representation and the "experience of red". We talk about computer states in terms of numbers, so let's use that convention to describe neural states as well. Let's say that our neurobiologist informs us he has located the red representation and he hands us a table of numbers that describe a particular neural state. For convenience, let's say the numbers are (255, 0, 0) with the understanding the real neural state would be described by a larger table of numbers.

So, we have the physical mousepad on one side with particular wavelengths of light reflecting off it, and on the other side, after a great deal of brain processing, we have a final result of all that processing: (255, 0, 0).

Where does "redness" enter the picture? Is the "experience of red" nothing more than that triplet (or whatever)?

...but it is a representation of the relative intensities it had, the frequency is gone, but we do have the signal from the different color sensitive cells, so we can still recognize that there was something different between the light reflected from the two pads,...

So the thing out there in the world, the mousepad, does not have the red. How then can the internal representation be a representation of red? Using the address analogy, the address on the envelope represents my mailbox, but my mailbox exists. You seem to be suggesting a form of representation, a representation of the color red in this case, where one side of the relation doesn't exist.

It [the abstraction] is sent to a different part of the brain, the transfer and receiving is a sort of process in it. The thing with abstraction is that you need no knowledge about how the hardware(or wetware if you prefer) in your brain or body works,

I can certainly see where a representation like (255, 0, 0) (or whatever) is much simpler to deal with than all those brain processes you describe. What I am after, though, is this "experience of red" you guys speak of. From your explanation, it sounded like it was in the abstraction you describe. But now it sounds like this abstraction has to be transferred to another part of the brain. Are you suggesting that in transferring (255, 0, 0) to a special small cluster of neurons that we then have "the experience of red"?

TrAI
2004-Dec-06, 05:34 PM
The relation of representation, though, is a relation between two objects or entities. The physical ink markings on a letter, for example, represent my physical mailbox. Now, if there is such a thing as "an experience of red", and if it is to be found in the brain, and if it is a representation, then I presume it must be in the form of a neural state or states. Let's say we ask a neurobiologist to find the neural state in a particular person that constitutes the representation and the "experience of red". We talk about computer states in terms of numbers, so let's use that convention to describe neural states as well. Let's say that our neurobiologist informs us he has located the red representation and he hands us a table of numbers that describe a particular neural state. For convenience, let's say the numbers are (255, 0, 0) with the understanding the real neural state would be described by a larger table of numbers.

So, we have the physical mouse pad on one side with particular wavelengths of light reflecting off it, and on the other side after a great deal of brain processing, we have a final result of all that processing: (255, 0, 0).

Where does "redness" enter the picture? Is the "experience of red" nothing more than that triplet (or whatever)?

Let's try to look at it from another angle. Say you have hurt your hand in some way. This means that you experience pain. Now, when is it that the signals from the nerves in your hand become pain? Pain is a way that your brain informs you that you have an injury; it knows where the signals come from, and it knows how your hand is positioned in relation to the body (this may be counted as a sense in itself, by the way), so it can overlay the pain on that location in how you experience the world. If you want a computer analogy, the hardware layer (the brain and nervous system) interfaces to the operating system and its drivers (these would be like the unconscious processes in the brain), which provide an abstraction layer between hardware and applications (the conscious mind would be like an application).


So the thing out there in the world, the mousepad, does not have the red. How then can the internal representation be a representation of red? Using the address analogy, the address on the envelope represents my mailbox, but my mailbox exists. You seem to be suggesting a form of representation, a representation of the color red in this case, where one side of the relation doesn't exist.

Hmmm... Perhaps the problem is the two different ways we must use "red". When someone says "I have a red mouse pad" they are saying they have a mouse pad that reflects light with a wavelength inside a specific part of the visible spectrum; red is a label we have made. But when you look at the mouse pad you can see that it looks different than something else, for example the mouse. You have learned that the way it looks is called red, so you might think "yes, it is a red mouse pad"; so the word can also mean the way the color looks to you. The difference may seem subtle, but let's say you are sick, or have eaten something bad, and so see a flying red mouse mat. You experienced this; the attributes the mouse mat had were what you would think of as red; but to everyone else there was no such mouse mat present. So, when you look around you, you are seeing a representation of the information received by the eyes, not the information itself.



I can certainly see where a representation like (255, 0, 0) (or whatever) is much simpler to deal with than all those brain processes you describe. What I am after, though, is this "experience of red" you guys speak of. From your explanation, it sounded like it was in the abstraction you describe. But now it sounds like this abstraction has to be transferred to another part of the brain. Are you suggesting that in transferring (255, 0, 0) to a special small cluster of neurons that we then have "the experience of red"?

Well, I was talking about the transfer of the abstracted data, not the abstraction itself. As for when the information becomes an experience... I guess it is first when it is introduced to your consciousness (so it must be transferred to the parts of the brain where the processes that constitute your consciousness reside)...

It cannot really be called an experience before that, as the word kind of implies it is the way information is represented to you, and so it must reach you before it can become that...

TrAI
2004-Dec-06, 05:39 PM
Here's a few related questions...

In astronomy...

What are the advantages of false color?

What are the advantages of "true" color?

Well, they are different ways to represent the data so that it may be easier to understand, I guess... Sometimes you are looking for subtle differences, or at a representation of a part of the spectrum we cannot see, and then false color is a good choice. If you are trying to show how something might look to a human you would use (approximate) true color.

John Dlugosz
2004-Dec-06, 09:18 PM
Are colours inherent properties of matter and energy, or are they merely products of our own perception?

Both.

Consider two green objects. One of them reflects green light and absorbs all others. The other object reflects light at several peak frequencies on either side of green. Because of the sensitivity curves of the sensors in our eyes, they look the same color to us!

But a spectrometer will spot the difference. In fact, given the raw spectral data, it takes quite a bit of math and known tables representing the eye's sensitivities in order to determine that the particular spectral curve would look green.
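The two-green-objects point can be sketched numerically: given any three fixed sensitivity curves, one can solve for a physically different spectrum that produces exactly the same three cone signals (a metamer). The Gaussian "cone" curves below are invented for illustration; only the principle carries over:

```python
import math

PEAKS = {"L": 565.0, "M": 540.0, "S": 445.0}  # rough cone peaks, nm (assumed)
WIDTH = 60.0                                   # assumed curve width, nm

def sens(cone, wl):
    """Illustrative Gaussian cone sensitivity (not real physiology)."""
    return math.exp(-((wl - PEAKS[cone]) / WIDTH) ** 2)

def cone_signal(spectrum):
    """Integrate a {wavelength: power} spectrum against each cone curve."""
    return [sum(p * sens(c, wl) for wl, p in spectrum.items())
            for c in "LMS"]

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Target: a broad three-line spectrum. Solve for powers at three *other*
# wavelengths that give the same cone triplet -- a metamer.
target = cone_signal({480.0: 1.0, 550.0: 1.0, 610.0: 1.0})
lines = [460.0, 530.0, 620.0]
A = [[sens(c, wl) for wl in lines] for c in "LMS"]
weights = solve3(A, target)
metamer = dict(zip(lines, weights))

# Physically different spectra, identical cone triplets:
assert all(abs(a - b) < 1e-9 for a, b in zip(cone_signal(metamer), target))
```

With these made-up curves the solved powers happen to come out positive; had we demanded a metamer of a single pure spectral line instead, one power would go negative, echoing the familiar fact that no three real primaries can match every pure hue.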

An alien with a different implementation in his eyes might agree that the first object is green, but state that the second was not the same color at all. He might perceive the separated peaks as a distinct thing in itself rather than having them fool his sensors' overlapping response curves.

We do that ourselves with the "color" we call magenta. It's not in the rainbow!

The pigments and dyes we use are designed for the perception they give us. Many spectral curves map to the same perceptual color. An alien might not find any meaning to the fact that my shirt is sea-foam green, neither caring that it looks green to humans (that is, looks the same as a particular pure frequency of light) nor agreeing that it is the same color as sea foam. So physically the color is meaningless to someone with other than human eyes.

On the other hand, a DVD pit is supposed to reflect light of a particular frequency. Plant leaves have a spectral curve that exists because it does something useful for the plant. So if color matters for technical reasons, it is inherent. The alien could construct a DVD reader from specifications, or deduce the environment under which chlorophyll evolved.

But we use the words "blue", "green", "red", etc. to refer both to a narrow frequency band and to any of a large number of spectra that give the same stimulation; that is not precise and not useful for technical descriptions.

01101001
2004-Dec-07, 12:49 AM
Consider two green objects.
There is this Wikipedia article (http://en.wikipedia.org/wiki/Green) on green to consider:


The English language makes a distinction between blue and green, but some languages, such as Vietnamese or Tarahumara usually do not use separate words for green and refer to that colour using either a word that can also refer to yellow or to blue.

Joe Durnavich
2004-Dec-07, 01:12 AM
Lets try to look at it from another angle, say you have hurt your hand in some way. This means that you experience pain. Now, when is it the signals from the nerves in your hand becomes pain?

It is right here where I think the problem starts. You posit two entities: nerve signals on one side and "pain" on the other. The nerve signals I am clear on. I am not so sure what you mean by them "becoming pain". I can understand how a boy becomes a man, say, because we can compare the boy to the man. But I am not so sure on what nerve signals transform into to become this "pain" you speak of.

Considering pains or color as entities of some sort reminds me of the following quote from Ludwig Wittgenstein's Philosophical Investigations, #283:



Couldn't I imagine having frightful pains and turning to stone while they lasted? Well, how do I know, if I shut my eyes, whether I have not turned into stone? And if that has happened, in what sense will the stone have the pains? In what sense will they be ascribable to the stone? And why need the pain have a bearer at all here?!


It seems to me that this illustrates how most people want to talk about colors and pains: as distinct entities that come into being somehow, or that are end results of some series of processing steps or computations.

If you want a computer analogy, the hardware layer(the brain and nervous system) interfaces to the operating system and its drivers(this would be like the unconscious processes in the brain) that provides an abstraction layer between hardware and applications(the conscious mind would be like an application).

So what kind of thing is the color red (or a pain, if you prefer)? Is it like the RGB triplet (255, 0, 0)?

Hmmm... Perhaps the problem is the two different ways we must use "red". When someone says "I have a red mouse pad" they are saying they have a mouse pad that reflects light with a wavelength inside a specific part of the visible spectrum, red is a label we have made.

Keep in mind, though, that children are not taught about wavelengths of light when they learn about colors. They are shown objects like, say, apples, cherries, and fire trucks and are taught that those are red. Then they learn to identify other objects as belonging to this group. So, when I said "I have a red mouse pad", it would be more accurate to say I meant that I cannot tell it apart from the set of objects that serve as a reference for red. If there were doubts about what I meant, after all, I just may get a color chart and point to the square marked "red".

Notice in this approach to color, there is no "experience of red", or "red sensation", or whatever you want to call it. There is just me, the world, and my dealings with the world--in this case, grouping objects together. What color I judge something as--in other words, what group of reference objects I cannot tell it apart from--is a function of the object, lighting conditions, the dyes in the photoreceptors in my eyes, my brain functions, my skill in distinguishing colors, etc. Everything both inside and out has a role. There is no final result, a "red", that comes into being.

The difference may seem subtle, but lets say you are sick or have eat something bad and so see a flying red mouse mat.

If I am sick, though, then my ability to judge things may also be impaired. If I do say I see a flying mouse pad, then it doesn't necessarily mean there is a flying mouse pad inside me, but simply that at the present time I cannot tell it apart from the reference objects or events for flying things. There can be many reasons why I make that (mis)classification. It does not have to be that I correctly perceived an inner flying mouse pad.

You experienced this: the attributes the mouse mat had were what you would think of as red, but to everyone else there was no mouse mat present at all. So, when you look around, you are seeing a representation of the information received by the eyes, not the information itself.

So far your argument has always been of this indirect form: the "red" is not out there in the world, so by process of elimination it must be inside of me. The unstated premise here is that colors (and pains and hallucination) are things. If the color red is not a thing, then there is no need to wonder where it resides.

Well, I was talking about the transfer of the abstracted data, not the abstraction itself. As for when the information becomes an experience... I guess it is first when it is introduced to your consciousness (so it must be transferred to the parts of the brain where the processes that make up your consciousness reside)...

Notice how the representational model pushes consciousness farther and farther inward, making the remaining set of neurons that comprise it ever so special. This is why I asked if we become conscious of red when the (255, 0, 0) gets written into a particular small cluster of neurons. (It is like saying a computer becomes conscious of the outside world when one of the bitmaps sampled from its camera gets written to the RAM address locations 0x100000 to 0x1FFFFF. Those are very special memory cells indeed!)

George
2004-Dec-07, 01:20 AM
Here's a few related questions...

In astronomy...

What are the advantages of false color?

What are the advantages of "true" color?
... If you are trying to show how something might look to a human you would use (approximate) true color.
I cannot think of much scientific value for "true" color. Of course, it is nice to have true color for public presentations, and I suppose it is helpful to know the true color for quick reference: green vs. blue nebulae differ in composition. The Sun is quite stable in the visible portion of the spectrum. However, would knowing its "true" color be usable as a reference somehow? I suppose it could, as it applies to stars of its class, at least.

Joe Durnavich
2004-Dec-07, 01:32 AM
Consider two green objects. One of them reflects green light and absorbs all others. The other object reflects light at several peak frequencies on either side of green. Because of the sensitivity curves of the sensors in our eyes, they look the same color to us!

I like to think of perception as an ability to tell things apart. And what we cannot tell apart, we judge the same.

Because of the limited number of different pigments in the eye and because of the overlapping nature of the spectral responses, we lack the ability to tell the spectral green and the mixed green apart. We judge them the same, then, and say they are the same color.
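The overlap can even be demonstrated numerically. Here is a toy sketch; the cone "sensitivity curves" below are invented integers, rigged so the spectral difference shown is exactly invisible, but the same situation arises with real, measured L/M/S curves, because three cone types cannot pin down a spectrum of more than three bands:

```python
# Toy cone sensitivity curves sampled at four wavelength bands, in
# arbitrary integer units. Invented numbers, chosen so that the
# "difference" spectrum below produces zero response in every cone.
CONES = {
    "L": [2, 5, 4, 7],
    "M": [6, 9, 5, 8],
    "S": [8, 5, 4, 1],
}

def cone_response(spectrum):
    """All the eye reports about a light: one number per cone type."""
    return {name: sum(s * c for s, c in zip(spectrum, curve))
            for name, curve in CONES.items()}

spectrum_a = [5, 5, 5, 5]            # one light
difference = [1, -1, -1, 1]          # a change no cone can register
spectrum_b = [a + d for a, d in zip(spectrum_a, difference)]

# Physically different lights, identical cone responses: a metamer pair.
print(cone_response(spectrum_a) == cone_response(spectrum_b))  # True
```

Two lights related this way are called metamers: we judge them the same because, as far as the cones are concerned, they are the same.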

TrAI
2004-Dec-07, 04:14 AM
...

So what kind of thing is the color red (or a pain, if you prefer)? Is it like the RGB triplet (255, 0, 0)?

OK. Say you have a computer running a piece of image-processing software, and you open a picture. What happens (simplified, of course) is that the software asks the OS to open the file; the OS doesn't care what the file is or what it contains, because to it the file is just a data unit. The OS runs on the CPU, and the CPU does not know what a file is; it knows commands and data. It has no concept of whether the bytes it pushes around are part of a letter to your friend or the picture from your holiday; they are just numbers to it. The OS simply has the CPU send some bytes over the bus to a specific address (the device the image is on, but the CPU doesn't know this; to it the address is just a number to put on the address bus). The circuitry interfacing the CPU to the bus system has no concept of numbers or commands; to it the data is just a stream of bits to be converted to the appropriate voltage levels for transfer over the bus.

The bits are received by the storage unit, whose processor sees that the bytes they represent are a request to transmit the contents of certain addresses on its storage hardware. To simplify, on the return trip each layer sees:

bus - voltage levels = bits.
CPU - bytes = commands, numbers, addresses.
OS - a data unit, a file.
App - a picture.

This is a simple layered model (we could add more layers, but it is just an attempt at showing the concept). The application has no idea how the data is handled at the lower levels, but it doesn't need to; it knows how to talk to the layer beneath it (this is abstraction). In this model the OS has no concept of what the file is, but it knows how to get the CPU to request it, and how to hand off the contents to the application.
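For the programmers here, that layered model can be caricatured in a few lines. Everything below (the class names, the toy "image format", the addresses) is invented purely to show each layer understanding only its own vocabulary:

```python
# A toy version of the layered model: each layer only understands its
# own vocabulary and delegates downward. All names here are invented.
class Bus:
    """Lowest layer: just moves raw bytes; no idea what they mean."""
    def __init__(self, storage):
        self._storage = storage
    def transfer(self, address):
        return self._storage[address]

class OS:
    """Knows about files, not pictures."""
    def __init__(self, bus, file_table):
        self._bus, self._files = bus, file_table
    def open_file(self, name):
        return self._bus.transfer(self._files[name])

class ImageApp:
    """The only layer that interprets the bytes as a picture."""
    def __init__(self, os_):
        self._os = os_
    def open_picture(self, name):
        data = self._os.open_file(name)
        width, height = data[0], data[1]       # toy "image format"
        return {"width": width, "height": height, "pixels": data[2:]}

storage = {0x10: bytes([2, 2, 9, 9, 9, 9])}    # a 2x2 "image" at 0x10
app = ImageApp(OS(Bus(storage), {"holiday.img": 0x10}))
print(app.open_picture("holiday.img")["width"])  # 2
```

The bus moves voltage levels, the OS moves a file, and only the application ever sees "a picture", which is the point of the analogy.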

The difference between the nerve signals carrying the information of an injury and the way you experience the pain is similar to the difference between the bus layer and the application layer: the nerves relaying the information to your brain have no concept of how it will be interpreted at higher layers. So, though the injury is in your hand, the pain is all in your head...



Notice in this approach to color, there is no "experience of red", or "red sensation", or whatever you want to call it. There is just me, the world, and my dealings with the world--in this case, grouping objects together. What color I judge something as--in other words, what group of reference objects I cannot tell it apart from--is a function of the object, lighting conditions, the dyes in the photoreceptors in my eyes, my brain functions, my skill in distinguishing colors, etc. Everything both inside and out has a role. There is no final result, a "red", that comes into being.

How the world appears to you is the final result, the "experience of the world". You do not see the frequencies of light, you do not see the nerve signals, and you do not have to translate some weird line coding and compression used by the eye to transfer the information. To you as a consciousness, all this is abstracted away. You get a ready-assembled set of data that is a three-dimensional image of your surroundings. This is no more the real world than a photo is the object (or the reflected light) it is a chemical representation of.



If I am sick, though, then my ability to judge things may also be impaired. If I do say I see a flying mouse pad, then it doesn't necessarily mean there is a flying mouse pad inside me, but simply that at the present time I cannot tell it apart from the reference objects or events for flying things. There can be many reasons why I make that (mis)classification. It does not have to be that I correctly perceived an inner flying mouse pad.

I did not mean that there was a flying mouse pad inside you. I mean that remembering some object does not mean you have a copy of the object inside you, but your memory does contain information on the attributes of the object, so that when you recall it, the parts of your brain handling memories can recreate a representation of it for you. The hallucination might have been something like this: the sickness might have scrambled the workings of the brain, so that some jumble of memories was interpreted as something just seen. But it was something you experienced as real (though you could deduce that it could not have been so, especially after you got better).


So far your argument has always been of this indirect form: the "red" is not out there in the world, so by process of elimination it must be inside of me. The unstated premise here is that colors (and pains and hallucination) are things. If the color red is not a thing, then there is no need to wonder where it resides.

Well, I am not thinking of it as "things". A picture on a computer is not a physical thing in itself. If you open the hard drive there will not be a picture on one of the platters; even if you could see the fluctuations of the magnetic fields on the platter, you would not see them as a picture. If you could see the data the fluctuations represent, you would still not see a picture; it is not a picture before it is loaded into the application. In the same way, "the experience of ..." is not a physical thing, entity, or object. It is represented by physical signals in the brain, of course, but the physical-layer view of it is no good for understanding the whole.



Notice how the representational model pushes consciousness farther and farther inward, making the remaining set of neurons that comprise it ever so special. This is why I asked if we become conscious of red when the (255, 0, 0) gets written into a particular small cluster of neurons. (It is like saying a computer becomes conscious of the outside world when one of the bitmaps sampled from its camera gets written to the RAM address locations 0x100000 to 0x1FFFFF. Those are very special memory cells indeed!)

Well, the brain is full of specialized parts. Many of them do their work independently of the consciousness, but there is of course intercommunication between many parts; it would be a waste to have a separate visual system for memories when you can just use part of the system used for vision to visualize them. We do know that certain parts of the brain do certain things, though it is adaptable to a certain extent.

I guess there is more that could be written, but I am just too tired to think or write any more now, so I think I will end here.

Joe Durnavich
2004-Dec-07, 04:14 PM
The difference between the nerve signals carrying the information of an injury and the way you experience the pain is similar to the difference between the bus layer and the application layer: the nerves relaying the information to your brain have no concept of how it will be interpreted at higher layers. So, though the injury is in your hand, the pain is all in your head...

I am a programmer myself so I understand the advantages of abstraction, layering, and designing to interfaces. In your prior posts, you seemed to be stressing that the color red is a representation, that it is the end result of a series of processing steps that reduce the massive array of complex sensory data into a much easier to deal with entity. You really stressed the representation as the important thing. But are you here saying that the application is where the red is? If so, then is it your view that the color red is not a representation, but a complex application? In other words, it is not the (255,0,0) that is the important thing, but what is done with it?

How the world appears to you is the final result, the "experience of the world". You do not see the frequencies of light, you do not see the nerve signals, and you do not have to translate some weird line coding and compression used by the eye to transfer the information. To you as a consciousness, all this is abstracted away. You get a ready-assembled set of data that is a three-dimensional image of your surroundings. This is no more the real world than a photo is the object (or the reflected light) it is a chemical representation of.

Why would my brain waste its time and energy producing a 3D model, only to have to spend additional resources rediscriminating the model it has built? You stress the production of the representation, but you leave unanswered how the representation is "perceived" by the brain. Does it have to build yet another model of the model?

Consider this notion: the world can serve as its own model. Let's say I give you a large, but fixed, sum of money and ask you to build a conscious computer. Furthermore, let me add a constraint that we are going to drop it into a jungle and that it must be mobile and compete for its energy resources.

You first try to build it the representational way, with a massive memory bank. You have your cameras scan the world and build a 3D model of it. The problem is, you now have to do something with the model. So you add on more processors and more memory for applications and write the programs that scan the model's data, make decisions, and control the system's motor functions. Now the system is too big and heavy to move and has a voracious appetite for energy that is impossible to supply.

So, I suggest: "Listen. Why don't you let the world serve as its own model? Instead of building an internal 3D model and scanning it, let's speed up the camera processors and have the system scan the environment as it needs information from it. You will have to tightly integrate the optical system with the motor system so that the system can efficiently direct the camera and system movements. You can have the system bob the camera's eye like a bird bobs its head, or if you prefer, have the camera dart about like the saccades of a human eye. You can have the system move about the environment to increase its 3D vision by taking advantage of the parallax and depth-of-field information that will come in. This way you don't need all the memory, extra processors, and all that equipment to cool it. Your resource usage will go way down too."

Don't you think this is a better design? Instead of modeling the world in our computer, we cleverly make the world part of the computer system. The world, in effect, becomes our massive memory bank that happens to contain all the information about the world.

You may object that information has to be in the form of electrical impulses, but I would point out in that case that information is simply "differences". Anywhere there is a (non-random) difference, there is information. This can be, for example, the optical difference between the sky and the ground, which our system can use to orient itself upright. Or it might be the thermal differences between hot sunlight and cool shade, the red apple against the green forest background, or the predator robot growing larger in our camera's field of view.
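As a toy illustration of differences-as-information, here is a sketch (with invented brightness values) that finds the sky/ground boundary in a single column of samples simply by locating the largest neighbouring difference:

```python
# "Information is differences": locate the horizon in one column of
# brightness samples. The sample values are invented for illustration.
column = [240, 235, 238, 236, 90, 85, 88, 80]   # bright sky, dark ground

def horizon_index(samples):
    """Index of the first 'ground' sample: the biggest neighbour jump."""
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return diffs.index(max(diffs)) + 1

print(horizon_index(column))  # 4
```

Nothing here needed to know what "sky" or "ground" is; the non-random difference alone carries enough information to orient by.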

To be sure, there is a relation between much of the data running around the CPUs and the world, and it is fair to call these representations. But the representations themselves are just dead, static entities. The important thing is the overall way the system acts for its survival in the environment. Representations are just small pawns in a much grander game that involves both the machine and its environment. They are not the important thing. It is what is achieved in the environment that matters.

Perceiving is an achievement of the individual, not an experience in the theatre of consciousness -- JJ Gibson

The central point made here is clear: the proper subject of perception is not the brain, but rather the whole embodied animal interacting with its environment. -- Evan Thompson

eburacum45
2004-Dec-07, 07:42 PM
This is kind of interesting;
reconstructed images from a cat's brain.
http://exn.ca/stories/1999/11/10/56.asp

Joe Durnavich
2004-Dec-08, 02:39 AM
Thanks. That is pretty cool. I knew they have monitored individual neurons in the past, but this is the first I have heard about monitoring an array of neurons.

The paragraph near the end illustrates two different views on consciousness:


Could you ever find cells further removed from the eye (deeper in the brain) and record from them to create an even truer version of what the cat sees? Maybe, but remember that as the visual signal penetrates further into the brain, it is likely split, fed back, modified and who knows what. Eventually you'd probably just throw up your hands in despair.

The first view is that "consciousness" or "the mind" is deep inside the brain, somewhere downstream of the LGN in this case. Notice how they wondered if one would find a truer version of what the cat sees the deeper one goes in the brain. Descartes located the mind in the pineal gland, presumably because it was centrally located in the brain and seemed to be the ideal place to dump and assemble all the processed sensory data into a cohesive whole. Sometimes this is called the "Central Observer Model", because it is as though there is a centrally located observer inside the brain that does all our observing for us.

The other view touched on here doesn't expect to find a nicely built up model of the world anywhere in the brain ready to be deposited in "the consciousness" and instead expects to find what scientists tend to find: that the sensory information eventually gets distilled, split, and scattered all over the place.

Joe The Dude
2004-Dec-08, 09:03 AM
This philosophical question is impossible to settle at the moment, and may never be fully resolved.

If the question seems so mysterious and if it seems like it may never be resolved, perhaps the problem is with the question.

Why do we want to think that color is an "experience"? I am never sure what others mean when they speak about "the experience of red" and sometimes suggest that we each may have our own "experience of red". Color blindness is sometimes offered as an example, but what is it about a person not being able to tell two objects apart on the basis of color that suggests that color is some sort of "experience" for this person? Likewise, synesthetes are those who group things together in ways most of us don't. What serves as direct evidence here that color is an "experience"?

Color can be an experience, it just depends on how you look at it (pun intended).

The first thing that came to mind about ‘experiencing’ a color takes me back to a summer job at Toys R Us I had while a teenager.

Being the new guy, they stuck me with putting up stock on the Barbie aisle for the first week.

At first I had no problems with that, but the dizziness (never a good thing on those tall TRU ladders) and queasiness I ‘experienced’ while working on that aisle quickly grew worse.

After the third day of 'experiencing' all of that Pink, I had to quit as the manager flat out refused to allow a different assignment.

Perhaps this is the kind of thing that is alluded to when people talk about 'experiencing' a color, and how your mileage may vary.

TrAI
2004-Dec-09, 02:21 AM
I am a programmer myself so I understand the advantages of abstraction, layering, and designing to interfaces. In your prior posts, you seemed to be stressing that the color red is a representation, that it is the end result of a series of processing steps that reduce the massive array of complex sensory data into a much easier to deal with entity. You really stressed the representation as the important thing. But are you here saying that the application is where the red is? If so, then is it your view that the color red is not a representation, but a complex application? In other words, it is not the (255,0,0) that is the important thing, but what is done with it?

I was thinking of the application as being like the consciousness, and that the data from the senses needs to be interpreted into a form it can understand easily, without having to contain the routines for processing it. But I guess the brain would be a bit more like a computer with an extreme number (what was it, about 100 000 000 000 neurons?) of small specialized processors interconnected in an application-specific way, so that each subsystem of this processing network is good at the specific tasks it has grown to do.

You must excuse me if my use of the word "representation" is confusing, English is not my primary language, so I might be using the words in a way not entirely correct for the context, it is not intentional if I do...


Why would my brain waste its time and energy producing a 3D model, only to have to spend additional resources rediscriminating the model it has built? You stress the production of the representation, but you leave unanswered how the representation is "perceived" by the brain. Does it have to build yet another model of the model?

Well, the processing is not linear; many things can be done at the same time. Among the things done are detecting movement and edges, mapping out/correcting images for defects like blind spots, and judging distances (stereo measurements and perspective analysis seem to be part of it). There is probably a lot more, but the end result is integrated into how we experience the world. I'll come back to this a bit later in the post, since the thing about models seems to have bearing there too.




(...)

Don't you think this is a better design? Instead of modeling the world in our computer, we cleverly make the world part of the computer system. The world, in effect, becomes our massive memory bank that happens to contain all the information about the world.

Well, the version that makes a model doesn't have to use that much memory; the model does not need to be very detailed outside the focus of attention. Since you are using a large network of small processing devices (the brain contains a massive number of neurons), you can use the registers, and perhaps even the latency in a delay-line-like fashion, to store the data; it doesn't need to be committed to medium- or long-term storage (RAM and storage devices) during or perhaps even after processing. I guess the model would be like a simple 3D scene, a lot of lossy objects and placements; it gives a feeling of how things are placed in relation to you, and details are filled in as you move to do something. I can, for example, walk with my eyes closed and not bump into anything, because I have a feeling of where the objects around me are; pick up something without looking; or predict the movement of things outside my field of vision (very useful when crossing roads ;)). This would be hard or impossible without creating a sort of map of my surroundings. But we do the real-time thing too; for instance, when picking up things it is a good idea to correct movements on the go.

Saccades are done to prevent depletion of the light-sensitive chemicals in the photoreceptor cells; they really are not necessary for situational awareness... The blurred images created by rapid eye movements like this are automagically dropped during processing (the process is called saccadic omission). The small movements of the eyes are also used to improve detail: several small, rapid passes are made over the place you are focusing on, and the resulting "frames" are integrated into one with higher resolution.
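That last idea -- several slightly offset passes combining into a finer picture -- can be shown with a toy one-dimensional example. This is only a sketch with invented values, not a model of the real process:

```python
# Two low-resolution scans of the same scene, offset by half a sample,
# interleaved into one scan at double the resolution. Values invented.
scene = [3, 7, 2, 9, 4, 1, 8, 5]          # the "true" fine-grained detail

pass_even = scene[0::2]                   # first pass: positions 0, 2, 4, 6
pass_odd = scene[1::2]                    # eye shifted: positions 1, 3, 5, 7

# Interleave the two coarse passes to recover the fine detail.
combined = [v for pair in zip(pass_even, pass_odd) for v in pair]
print(combined == scene)  # True
```

Each individual pass only has half the detail, but the jitter between passes lets the combination carry more information than either one alone.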




You may object that information has to be in the form of electrical impulses, but I would point out in that case that information is simply "differences". Anywhere there is a (non-random) difference, there is information. This can be, for example, the optical difference between the sky and the ground, which our system can use to orient itself upright. Or it might be the thermal differences between hot sunlight and cool shade, the red apple against the green forest background, or the predator robot growing larger in our camera's field of view.

I would expect you need multi-frame memory for movement detection (knowing the predator is coming towards you, for example). The data needed for a simple model used for awareness of one's surroundings can probably take less memory than one of those frames. The brain might focus on differences in edges, though (it seems the visual system tends towards handling images as shapes, not just a bunch of pixels; quite a few optical illusions are based on peculiarities of this shape handling), at least in initial detection of movement; a map of edges would be easier to process and use less memory than a full frame... Anyway, you would probably have to predict the movement of the predator too, and that would be creating a model of sorts and integrating it with your awareness of your surroundings.
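The brute-force multi-frame approach is easy to sketch: keep one previous frame and flag the cells that changed. A toy version, with invented brightness grids:

```python
# Two-frame differencing motion detector: memory cost is exactly one
# previous frame. The tiny brightness grids below are invented.
frame1 = [
    [10, 10, 10, 10],
    [10, 90, 10, 10],
    [10, 10, 10, 10],
]
frame2 = [
    [10, 10, 10, 10],
    [10, 10, 90, 10],   # the bright blob moved one cell to the right
    [10, 10, 10, 10],
]

def moving_cells(prev, curr, threshold=30):
    """Cells whose brightness changed by more than the threshold."""
    return [(r, c)
            for r, row in enumerate(curr)
            for c, v in enumerate(row)
            if abs(v - prev[r][c]) > threshold]

print(moving_cells(frame1, frame2))  # [(1, 1), (1, 2)]
```

An edge map would indeed shrink the stored "frame" considerably, since only the flagged boundaries would need to be kept.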

It is hard, though, to see what is consciously done and what is not. The thing is, for many things we do, we do not have to think about or focus on the sequence needed to do them. I guess it is like scripting of sorts: the unconscious parts of our brain can do the things we want with only a few conscious commands.

Joe Durnavich
2004-Dec-09, 05:37 AM
You must excuse me if my use of the word "representation" is confusing, English is not my primary language, so I might be using the words in a way not entirely correct for the context, it is not intentional if I do...

In these discussions, I always try to offer concrete examples like the (255,0,0) triplet to make sure that is the kind of thing the other person had in mind. However, if you think perception is a process of producing some final end product, perhaps "the world as it looks to you", then you are arguing for the traditional representational model.

Among the things done are detecting movement and edges, mapping out/correcting images for defects like blind spots,

In a non-representational model, you don't necessarily have to waste resources correcting for blind spots. Think of the references for "solid colored object" such as a blank piece of white paper, a wall painted a single color, a cloudless daytime sky, etc. There is no information for discontinuity available. When you look at a white sheet of paper with a black dot on it and position it such that the dot falls in your blind spot, there is no longer information about the discontinuity specified. It matches your references for "solid colored objects", so you say the paper now looks solid white.

Saying what something looks like is like a game of matching it to reference objects. You make the match with the information you have on hand. When you have limited information, you are more likely to misclassify because the criteria you are matching on can match a greater number of reference objects. (Many optical illusions result from reducing the information available to perceive.)

and judging distances (stereo measurements and perspective analysis seem to be part of it). There is probably a lot more, but the end result is integrated into how we experience the world.

Is there really any need to re-integrate the data? The brain has gone to great effort to tear that data apart. Once the brain has discriminated a feature, it seems it would make more sense for it to just get on with business than to paint us a pretty picture.

I guess the model would be like a simple 3D scene, a lot of lossy objects and placements; it gives a feeling of how things are placed in relation to you, and details are filled in as you move to do something.

Information about how objects are placed in relation to you is already available in the array of optical information impinging on the eyes. More on the optic array below.

I can, for example, walk with my eyes closed and not bump into anything, because I have a feeling of where the objects around me are,

I wonder how well one would do if one's hearing was blocked. I can walk around too with my eyes closed and other than smashing my toes into the damn coffee table, I do rather well. However, in my case, I seem to hear the walls closing in on me because the sound in the environment changes as I get closer to them.

pick up something without looking, or predict the movement of things outside my field of vision (very useful when crossing roads). This would be hard or impossible without creating a sort of map of my surroundings.

I don't mean to eliminate all memory. This was more of an exercise to think ecologically about perception--trying to think of clever ways to efficiently pick up information from the environment. Of course, you don't necessarily need memory in the computer sense for some of the tasks you describe. There, of course, doesn't have to be a database of movies of objects in motion stored in the brain. A neural circuit can become tuned to a variety of patterns of motion, for example. Likewise, there doesn't have to be a map of your surroundings stored anywhere.

But we do the real-time thing too; for instance, when picking up things it is a good idea to correct movements on the go.

One skill I think we take for granted is the ability to move our hand and grasp an object. You don't realize it until you put on lenses that shift your view, but there is a feedback loop comprised of your motor system and your visual system such that you visually guide your hand to the object. The lenses cause you to miss the object, but you quickly adapt after a little practice. (You are screwed up again when you take the glasses off until you adapt again.)

Saccades are done to prevent depletion of the light-sensitive chemicals in the photoreceptor cells; they really are not necessary for situational awareness...

I think you are right about that. We do scan the environment a lot, though, and we often are moving about for a closer look, etc. Perception is not just a process of sitting back and letting the world impress itself on some inner "consciousness". Perception is making one's self a part of the environment and actively exploring it.

The blurred images created by rapid eye movements like this are automagically dropped during processing (the process is called saccadic omission).

We do ignore the blurs, but interestingly, I thought I read somewhere that tests showed that the subject was able to identify words that were directed into the eye and tracked with the eye movements. In other words, the subject was still perceiving during the saccades. I'll have to track that down though. Perhaps I remember that wrong.

The small movements of the eyes are also used to improve detail: several small, rapid passes are made over the place you are focusing on, and the resulting "frames" are integrated into one with higher resolution.

There is that "integration" again! Those cat images, though, seemed to show the visual system discarding a lot of the information, with a preference to keeping the edge information. I would suspect that the visual system would be more concerned with change than with building up a static model of the environment.

I would expect you need multi-frame memory for movement detection (knowing the predator is coming towards you, for example).

Perhaps with a silicon based computer, you are stuck comparing frames, but as an exercise, think about how evolution might have been forced to solve the problem. Once visual systems started to evolve, it seems that detecting the motion of advancing predators had to be a high priority, demanding a solution that required minimal brain power. If you can find the book at a library, I really recommend JJ Gibson's The Senses Considered as Perceptual Systems. Gibson treats perception in terms of the information available from the environment (what he calls affordances), and argues that animals can pick up much of this information directly. (That is, they don't process sensory data to extract or build up the information.)

For vision, Gibson describes the ambient optic array, or the array of optical information converging on an animal's position. For a picture on the wall falling to the floor, Gibson says:


As the object moves through the air it progressively covers and uncovers the physical texture of the wall behind it. In terms of optical texture, there occurs a wiping-out at the leading border, an unwiping at the trailing border, and a shearing of texture at the lateral borders of the figure in the array. These aspects of transformation involve a rupturing of the continuity of texture, a sort of topological breakage...

You could capture frames and compare them, but I suspect there are ways for neighboring neurons in an individual neural "frame buffer" to become tuned to respond to this wiping and shearing of optical texture. Then it is a matter of those circuits, or additional ones, being tuned to rates of texture flow, direction of flow, etc.
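One classic circuit of roughly this kind is the Reichardt correlator: it multiplies one receptor's signal by a delayed copy of its neighbour's signal, so the direction of motion falls out of the sign of the output, with no frame buffer anywhere. A toy version (all values invented):

```python
# A toy Reichardt-style correlator for two neighbouring receptors.
# Direction of motion is the sign of the correlation between one
# receptor's current signal and the other's delayed signal.
def reichardt(left_now, left_prev, right_now, right_prev):
    """Positive = left-to-right motion, negative = right-to-left."""
    return right_now * left_prev - left_now * right_prev

# A bright edge passing left to right: the left receptor fired on the
# previous step, the right receptor fires now.
print(reichardt(left_now=0, left_prev=1, right_now=1, right_prev=0))   # 1
# The same edge moving right to left gives the opposite sign.
print(reichardt(left_now=1, left_prev=0, right_now=0, right_prev=1))   # -1
```

Note the circuit still needs a delay element (the "prev" inputs), but that is a single delayed signal per receptor, not a stored image.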

The data needed for a simple model used for awareness of one's surroundings can probably take less memory than one of those frames. The brain might focus on differences in edges, though (it seems the visual system tends towards handling images as shapes, not just a bunch of pixels; quite a few optical illusions are based on peculiarities of this shape handling), at least in initial detection of movement; a map of edges would be easier to process and use less memory than a full frame...

Yes, the visual system does seem keen on edges, and the cat article posted here supports that. We don't see objects in motion in great detail, so there is no need for high resolution frame buffers in the matter of motion detection.
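On the economy of edges: a crude sketch of how much less data an edge map carries than a full frame. This is just a toy threshold detector, not a claim about how the visual system actually finds edges:

```python
def edge_map(img, threshold=30):
    """Mark pixels where the brightness difference to the pixel on the
    right or below exceeds a threshold -- a crude edge detector."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            dx = abs(img[y][x + 1] - img[y][x])
            dy = abs(img[y + 1][x] - img[y][x])
            if max(dx, dy) > threshold:
                edges[y][x] = 1
    return edges

# A dark square on a light background: only the boundary survives,
# a handful of pixels instead of the whole frame.
img = [[200] * 6 for _ in range(6)]
for y in range(2, 4):
    for x in range(2, 4):
        img[y][x] = 50
edges = edge_map(img)
```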

Anyway, you would probably have to predict the movement of the predator too; that would mean creating a model of sorts and integrating it with your awareness of your surroundings.

Or you could have neural circuits tuned to particular rates and patterns of texture flow. I don't understand neural circuits that well, but as I understand it, there doesn't have to be a model stored in them; they just are "trained" to respond to inputs in particular ways.

Gibson describes the ambient optic array for a flying bird. I'm not going to play nice and am going to post the caption for a graphic without the graphic:


When a bird moves parallel to the earth, the texture of the lower hemisphere of the optic array flows under its eyes in the manner shown. The flow is centrifugal ahead and centripetal behind--i.e., there are focuses of expansion and contraction at the two poles of the line of locomotion. The greatest velocity of backward flow corresponds to the nearest bit of the earth and the other velocities decrease outward from this perpendicular in all directions, vanishing at the horizon. The vectors in this diagram represent angular velocities. The flow pattern contains a great deal of information.

If we were to build our own bird, then rather than comparing frame buffers to detect motion, I suppose the system could key on particular patterns of texture movement across the field of view to detect, say, how fast the bird is approaching the ground. And again, preferably using circuits tuned (or "taught") to respond to the flow patterns. It may be possible to detect the information we are interested in more directly, instead of processing and analyzing data to extract it.
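The geometry behind Gibson's flow diagram can be worked out for the simplest case. For a bird at height h moving at ground speed v over flat ground, a ground point at along-track distance x from the spot directly below sweeps past at angular rate v*h/(h^2 + x^2): fastest straight down, vanishing toward the horizon, just as the caption says. A quick sketch (flat ground and arbitrary numbers assumed):

```python
def flow_rate(v, h, x):
    """Angular velocity (rad/s) of the line of sight to a ground point
    at along-track distance x (m) from the point directly below the
    bird. v = ground speed (m/s), h = height (m); flat ground assumed."""
    return v * h / (h**2 + x**2)

# Bird at 10 m height flying at 10 m/s: the texture directly below
# streams past fastest (1 rad/s here), and the flow rate falls off
# toward the horizon.
rates = [flow_rate(10.0, 10.0, x) for x in (0.0, 10.0, 100.0, 1000.0)]
```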

TrAI
2004-Dec-09, 04:59 PM
In these discussions, I always try to offer concrete examples like the (255,0,0) triplet to make sure that is the kind of thing the other person had in mind. However, if you think perception is a process of producing some final end product, perhaps "the world as it looks to you", then you are arguing for the traditional representational model.

Hmmm... The triplets are a representation of something; all measurements are a representation of some sort of information. But one triplet alone is of limited use (though I guess it is better than nothing); we have to have many of them to get an image that can have recognizable shapes, for instance. I guess the philosophy I lean towards is a variation on the representational model, in that I think the sensory data goes through processing before it is presented to the consciousness, as opposed to the more direct view Gibson seems to have favored.


In a non-representational model, you don't necessarily have to waste resources correcting for blind spots. Think of the references for "solid colored object" such as a blank piece of white paper, a wall painted a single color, a cloudless daytime sky, etc. There is no information for discontinuity available. When you look at a white sheet of paper with a black dot on it and position it such that the dot falls in your blind spot, there is no longer information about the discontinuity specified. It matches your references for "solid colored objects", so you say the paper now looks solid white.

It is not that hard: if the image looks solid, the defects are corrected by filling in the area with a solid color or texture approximated from the surrounding area. This needs some level of processing, though correction for blind spots like the one formed by the connection of the optic nerve is not necessarily a complex thing when done by adapted hardware.


Is there really any need to re-integrate the data? The brain has gone to great effort to tear that data apart. Once the brain has discriminated a feature, it seems it would make more sense for it to just get on with business than to paint us a pretty picture.

The problem is, IMHO, that having a stereo pair will not give you 3D unless you do some processing on those two 2D images to integrate (I use it in the meaning "to put two or more of something together to make one new whole", if there is any question about that) them into one 3D representation. Using stereo pairs to make 3D images like anaglyphs, for parallel/cross-eyed viewing and so on, only works because the brain at some level does the processing. That is one of the problems I see with the idea of directly sending visual information to the consciousness: the stereo pairs would have to be analyzed and integrated consciously.

I think there exists software that can make 3D representations from stereo pairs; I believe the MER teams use something like that to make maps/models of the landscape around the rovers.
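The core relationship such stereo software relies on is simple: for two parallel pinhole cameras, depth is focal length times baseline divided by disparity (the pixel shift of a feature between the two images). A sketch with made-up numbers, not actual rover camera parameters:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: focal length of 1000 pixels, 30 cm baseline.
# A root tangle whose image shifts 60 pixels between the two cameras
# would be about 5 m away; a larger shift means a closer object.
z = depth_from_disparity(1000.0, 0.30, 60.0)
```

The hard part in practice is finding which pixel in the left image matches which in the right; once you have the disparities, the depth map (the "set of numbers that tell how far the obstacles are") falls out of this one formula.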


Information about how objects are placed in relation to you is already available in the array of optical information impinging on the eyes. More on the optic array below.

Well, I feel that stereo pairs do not magically give 3D vision; the brain must in some way process those two images so that we get the 3D effect. It is the same with perspective: the brain must be able to recognize the signs of perspective and use them to understand the surroundings.

Do you see what I am thinking about here? We do not usually think about this; to us, stereo vision and/or perspective and/or lighting/shadowing can give a "feeling" (it may be an illusion, like stereo photo pairs or perspective in pictures) of space or depth, but this is because the brain can utilize these cues to create this understanding of what your eyes see. If the brain did not have the capability to process this information already, no amount of tricks would make us see 3D. And that, of course, means that our robot would need to process the pairs if it was to use them for something useful, like measuring the distance to that tangle of roots in front of it and finding an alternative path around. Indeed, these measurements would imply that a simple model or map of the surroundings is made, even if it is only a set of numbers that tell how far the obstacles are from the machine.


There is that "integration" again! Those cat images, though, seemed to show the visual system discarding a lot of the information, with a preference to keeping the edge information. I would suspect that the visual system would be more concerned with change than with building up a static model of the environment.

Yes, integration: you have to put the pictures together to get any improvement from micro-scanning with a sensor array. It is not incompatible with change; you still feed the images through at the same speed, you only improve resolution at subsequent updates. Anyway, I expect that change is detected by separate cells; the structure of the brain does allow for parallel processing.

As for the cat images, well, I do not know; they are made from data not that far into the brain. I don't know if cats even do this trick; it may not be a good thing for them, unless they are reading or something... But a similar system could be used to improve the contrast and brightness of slow-moving things. That would be more useful for a cat, since it often likes to hunt in the dark.


Perhaps with a silicon based computer, you are stuck comparing frames, but as an exercise, think about how evolution might have been forced to solve the problem.

Yes, computers are rather limited, at least the way we build and use them at present; neurons can adapt and learn by themselves, and there would be evolutionary advantages to having quick detection of movement. Of course, the brain can probably process movements and still images as separate things, so that the circuit for movement detection has much less latency. Movement and sudden changes in pattern/lighting do tend to draw the eyes; under some circumstances it seems almost like a reflex.
Anyway, I guess it is time to round off this post now.

umop ap!sdn
2004-Dec-10, 08:13 PM
Wow, I go away for a while and this thread happens! :lol:


A red shirt contains molecules that are of a size to match the wavelength of red photons. They are then reflected but no other photons are of the correct electromagnetic dimension so they pass through these molecules and are absorbed.

Not quite. :) A red shirt contains dye, which is a complex organic molecule that absorbs light within a specific range of wavelengths. All organic compounds absorb somewhere in the spectrum, typically in the shortwave UV range. Alternating single and double bonds (conjugation) between carbon atoms and single bonds to nitrogen or oxygen atoms have the effect of lowering the energy levels, causing the molecule to absorb at longer wavelengths. Red light is not energetic enough to excite a dye that absorbs in the green and blue regions, therefore it is transmitted by the dye and scattered by the fabric.
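The "not energetic enough" point can be made concrete with the photon energy formula E = hc/lambda (standard physical constants; 650 nm and 450 nm are just representative red and blue wavelengths):

```python
# Photon energy E = h*c/lambda, expressed in electron-volts.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

def photon_energy_eV(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9) / eV

red = photon_energy_eV(650)    # roughly 1.9 eV
blue = photon_energy_eV(450)   # roughly 2.8 eV
# A dye whose absorption transition needs ~2.5 eV will absorb blue
# photons but let red photons pass through to be scattered by the fabric.
```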

By far the most common reason for something that is not self-luminous to have color is absorption, rainbows and oil slicks notwithstanding. :D


I do remember reading that the receptors in the eye are sensitive a little bit into the IR range and up into the UV range, but the UV light is blocked by the cornea (IIRC). Some people going through eye surgery had volunteered for the experiment, and as I recall it, they said that the UV light had looked light brown or something like that, but sadly I do not remember where I read this.

I'll tell ya from firsthand experience you can see the light from an 880nm infrared LED if you're looking right at the emitter chip. Whether it is safe to blast your eye with that much IR is another matter. :D Point is, you're right - the boundary between "red" and "infrared" is a very very fuzzy line.

UV light below about 380nm is blocked by the lens, not the cornea. Patients who have had their lenses removed (aphakia) because of cataracts or other reasons, and replaced with glass lenses have reported seeing UV as a grayish violet color, down to about 300nm.

Bees, like most insects, see into the UV but lack the sensitivity to red light that humans have. In fact, many animals that have adapted to seeing shorter wavelengths (mice, penguins, white-tailed deer) also lose some sensitivity at the red end of the spectrum. The reason for this is, the receptor pigment (opsin) molecules have a second "beta" band at shorter wavelengths than their peak sensitivity. It just so happens that the beta band for the human red cone pigment is centered right about where the lens cuts off at the violet/UV end. IIRC, stimulating the beta band tends to damage the photoreceptor.

It's not really correct to say that bees see a wider spectrum than we do, just a somewhat different range.

Many reptiles, birds, and fish have a four color visual system: red, green, blue, and ultraviolet. They are able to achieve this through "screening pigments" on their photoreceptors, protecting the opsins underneath. Because of this, many freshwater fish can even see farther into the IR than humans can while retaining their UV sensitivity.

I've been over the color debate on another board before, and really didn't want to get involved here, but should point out that the reason why colorblind people don't realize anything's amiss is because they have nothing to compare it to. They have never experienced the missing qualia and therefore literally don't realize anything is absent.

*whew* Any questions? :D

~ umop ap!sdn, resident self taught color vision expert.

Joe Durnavich
2004-Dec-11, 03:12 AM
I think there exists software that can make 3D representations from stereo pairs, I believe the MER-teams uses something like that to make maps/models of the landscape around the rovers.

...

Well, I feel that stereo pairs do not magically give 3D vision, the brain must in some way process those two images so that we get the 3D effect. That is the same with perspective, the brain must be able to recognize the signs of perspective and use them to understand the surroundings.

Do you see what I am thinking about here? We do not usually think about this, to us stereo vision and/or perspective and/or lighting/shadowing can give a "feeling" (it may be an illusion, like stereo photo pairs or perspective in pictures) of space or depth, but this is because the brain can utilize these cues to create this understanding of what your eyes see.

I know what you mean about this "feeling" of 3D. I went to school in downtown Chicago to learn computer programming and after many hours staring at a flat computer screen, stepping outside into a canyon of skyscrapers gave me a stunning feeling of "depth".

If the brain produced a model, though, what would it analyze in the model to determine that it had depth? This sounds a bit like someone saying to us, "I cannot see the village in 3D. I first must build a miniature replica of it and then look at the replica."

Is it the case that the model contains depth information that the real scene doesn't?

Joe Durnavich
2004-Dec-11, 03:23 AM
It is not that hard [filling in the blind spot]...

I have had people suggest to me that the brain also "fills in" the gaps between the photoreceptors in the retina (otherwise our visual field would be pixelated). Someone even suggested to me that the brain fills in the gaps between atoms in objects, which is why some surfaces appear "solid colored" to us. I give these people credit, though. They recognized the implications of the representational model and were willing to stick it out.

George
2004-Dec-11, 03:43 AM
*whew* Any questions? :D
Yes. (Are you rested? :) )

I am curious about the eye's behavior at high intensities. Specifically, the Sun emits light of great intensity in blue and green, less in yellow, even less in orange, etc. It is my assumption that the eye will see the Sun as "white" primarily due to flux levels greater than each color cone's upper threshold. Astronauts, as I understand, report the Sun to look extremely bright and white. :o If the intensity is diminished (e.g. strobe), then it might look.....(I'm working on that on another thread) :)

Is this idea correct? I have not been able to google-up anything. [Although I did find some lower threshold data.]

Joe Durnavich
2004-Dec-11, 04:33 AM
You may want to see what you can find out about the Bezold-Brücke phenomenon. As the light level increases, the colors of things tend to shift from the reds and greens up to the yellows and blues.

Joe Durnavich
2004-Dec-11, 05:13 PM
the reason why colorblind people don't realize anything's amiss is because they have nothing to compare it to. They have never experienced the missing qualia and therefore literally don't realize anything is absent.

Well, I don't know about the qualia, or how we could show that is really the case, but in common cases color blind people can often distinguish the colors if the light level is increased or the objects are increased in size. Matching objects in terms of color is a skill, and as with perception in general, we do all sorts of things to make out what we see, such as moving around to get a closer look or to allow more light to fall on the object.

Color blind people often do not find out they are color blind until an eye doctor tests them with Ishihara color plates and they discover there are some plates in which they cannot make out the number. (I never tested as color blind, but I recently came across one plate I cannot see the number in, but others can. Damn! Interestingly, I can trace the number with my finger, but I don't see a number like I do with other plates.) If color blind people don't realize anything is amiss, it is because the lack of that color matching skill never created a problem for them in life.

I am 44 years old and over the last few years my eyes have begun the inevitable decline where the lenses stiffen and are not able to focus as well on objects that are close by. I was expecting this to happen, but I expected that my near-field vision would become distinctly blurry, as if a camera lens were out of focus, because, well, that is exactly what would happen in my eyeballs. Instead, my impression, and my complaint to my eye doctor, is that it takes me more effort to read. It takes me longer to make out the words on the page. I grumbled to myself that everybody was now printing in smaller fonts. I had trouble getting enough light on the page. I thought the lights in my house were dimmer than normal.

So, my impression is not that some sharp qualia have been replaced by blurry qualia, or however that is supposed to work, but that my ability to read small-sized text (OK, normal-sized text!) has declined and that I cannot attribute that to anything different in myself.

umop ap!sdn
2004-Dec-12, 06:17 PM
It is my assumption that the eye will see the Sun as "white" primarily due to flux levels greater than each color cones upper threshold. Astronauts, as I understand, report the Sun to look extremely bright and white. :o If the intensity is diminished (e.g. strobe), then it might look.....

What you have here sounds to me like a good idea for determining the Sun's true color, and personally I'd be very interested in your results! :) It is true about the flux levels; in fact there are 3 wavelengths where two of the cone types' response curves "meet", the so-called invariant hues. What your idea needs, though, is a neutral white to compare to. That's the difficult part, because it would seem to have to be another light source, and unless it was spectrally flat, it would have to be color balanced against a spectrally flat reference. Whew! :o


Well, I don't know about the qualia, or how we could show that is really the case, but in the common cases color blind people often can distinguish the colors if the light level is increased or the objects are increased in size.

Well, color perception isn't as exact as dividing the spectrum cleanly into 3 regions and measuring each one. A protanope or deuteranope could still distinguish red from green, say, because no matter which longwave opsin is present, it is more sensitive to the green, so the red will look darker. Blue cones continue to respond into the green region as well; because of that, and because most green objects tend to reflect some degree of blue-green light, one can expect the green to also be less saturated. Greater illumination and greater apparent size mean more information (more photons or more photoreceptors) is available to the brain for processing.

Another factor is mesopic color perception. In intermediate light conditions, rods and cones function simultaneously. There is some evidence (I lost the cite though) that we have cells that specifically function only in mesopic light conditions, receiving signals from both rods and cones. In any case, a dichromat can under the right conditions function as a trichromat, experiencing reds and greens (or blues and yellows, in the case of a tritanope) as distinct colors, but presumably without the qualia that people with "normal" color vision experience.

And I suspect chromatic aberration probably plays a role as well.


(I never tested as color blind, but I recently came across one plate I cannot see the number in, but others can. Damn! Interestingly I can trace the number with my finger, but I don't see a number like I do with other plates.)

FWIW, several years ago I showed a friend the online Ishihara plates, and while I could see all of the numbers faintly, he said he saw them very vividly. :)


I expected that my near-field vision would become distinctly blurry as if a camera lens were out of focus because, well, that is exactly what would happen in my eyeballs. Instead, my impression, and my complaint to my eye doctor, is that is takes me more effort to read. It takes me longer to make out the words on the page. I grumbled to myself that everybody was now printing in smaller fonts. I had trouble getting enough light on the page. I thought the lights in my house were dimmer than normal.

Not having had this problem myself, it sounds like because of the stiffening of the lens, more effort is needed to focus up close. The eye tries (reflexively?) to adapt itself to focus at any distance, but if you hold something really close to your eye (for me, six inches seems to be about the limit), it will be blurry no matter what.

George
2004-Dec-13, 12:56 AM
It is my assumption that the eye will see the Sun as "white" primarily due to flux levels greater than each color cones upper threshold. Astronauts, as I understand, report the Sun to look extremely bright and white. :o If the intensity is diminished (e.g. strobe), then it might look.....

What you have here sounds to me like a good idea for determining the Sun's true color, and personally I'd be very interested in your results! :)
You will likely enjoy this thread... Suns color (http://www.badastronomy.com/phpBB/viewtopic.php?t=9583)
Color theory with computer rendering reveals a pinkish peach color from one site listed, which started this thread... girl star (http://www.badastronomy.com/phpBB/viewtopic.php?t=9807&highlight=girl+star) :)
Just for the record, the BA is to blame (partially) :) . He pointed out in his book the Sun is not yellow. Much more here (http://www.badastronomy.com/phpBB/viewtopic.php?t=8123)
The latter thread revealed the Sun is white as viewed at various intensity levels (thanks to a high tech, :wink: , strobe of about 19 cents)


... What your idea needs though is a neutral white to compare to. That's the difficult part, because it would seem to have to be another light source, and unless it was spectrally flat, it would have to be color balanced against a spectrally flat reference. Whew! :o
I have recently obtained several prisms. A Kodak gray card, so I'm told, reflects all colors evenly. I plan to use it with various color backgrounds to help yield a fair assessment. Does this make sense to you? My hope is that the true color is not marginal when atmospheric corrections are made. Your comments on the ideas within the other threads will be welcome.

zebo-the-fat
2004-Dec-13, 07:58 PM
I expected that my near-field vision would become distinctly blurry as if a camera lens were out of focus because, well, that is exactly what would happen in my eyeballs. Instead, my impression, and my complaint to my eye doctor, is that is takes me more effort to read. It takes me longer to make out the words on the page. I grumbled to myself that everybody was now printing in smaller fonts. I had trouble getting enough light on the page. I thought the lights in my house were dimmer than normal.

I had the same problem; I first realised that I could not read the road numbers or fine detail on a map unless I turned the lights up higher than usual. Damn! :(

umop ap!sdn
2004-Dec-13, 08:35 PM
A Kodak gray card, so I'm told, reflects all colors evenly. I plan to use it with various color backgrounds to help yield a fair assesment. Does this make sense to you?

Not really. What do you plan to use to illuminate the card with? Is it your intention to compare the spectrally flat surface of the card with the Sun, or to use the card to make the Sun's light less bright so that it can be compared with another reference? :-k


Your comments on the ideas within the other threads will be welcome.

I reread them. :) Your prism setup and strobe both sound doable, if you have a reference that is flat across the spectrum.

As I write this, it is about 15 minutes past Noon on another gorgeous sunny day in Nevada [/gloat]. Looking outside my window, the scene looks very bright and generally... white. There is a pale blue sky with some brownish tint near the horizon... some white and off-white buildings... a gray brick wall that looks bluish because it is shaded... brownish gray asphalt. Across the street is a white car with sunlight glinting off the front.

The side of the car that faces my direction appears somewhat bluish, on account of it being shaded, whereas the highlight where the hood is illuminated looks less bluish (= more yellowish). My eyes are picking up a wealth of different colors, but my brain is assembling them into an impression that the car is very very white. Guess it conforms very closely to the average color across all the "pixels" in that scene. Must be all that titanium dioxide. :lol:

The glint itself is too bright to see what color it is. Therefore it looks white. I can try making my eyes go blurry to spread the light out a little, but then chromatic aberration stymies my attempt, resulting in a yellowish blur with a blue border. :-? OK, what about using my hand as a strobe, by making a small slit between 2 fingers and moving it back and forth? Oddly enough, this results in everything taking on a bluish tint. :o

Seems our blue cones have a different temporal response to flashes of light than red or green. Blue cones take longer to adjust to a change in light level, which is the basis behind Benham's disk (http://www.exploratorium.edu/snacks/benhams_disk.html). So, my crude hand strobe is too slow for this purpose. :( Moving my eye back and forth past the glint and noting the color of the streak also gives a violetish tinge, probably for the same reason.

It never seemed to me that the Sun was yellow at midday. IMO, what's probably happening is it's inherently pale bluish-greenish but the atmosphere's scattering evens it out to white. If it has any color to it whatsoever, we're probably looking at only a few percent saturation. However, if this curve (http://hyperphysics.phy-astr.gsu.edu/hbase/vision/solirrad.html) (was it you who originally cited it? I forget) is accurate, that Earth's surface curve looks like a recipe for a pale yellow. :-k

So I guess while you're trying to figure out why the Sun looks yellow when the atmosphere's Rayleigh scattering isn't enough to account for it, I'm trying to figure out why it looks white when that spectrum suggests it should look yellow. 8-[ #-o
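For what it's worth, the arithmetic of going from a spectrum curve to a hue can be sketched: weight the spectrum by three response curves and normalize. The sketch below uses crude Gaussian stand-ins for the CIE color matching functions (not the real tables!) and Planck blackbody spectra, so the numbers are only qualitative, but it does show a cooler source coming out redder:

```python
import math

def gauss(lam, mu, sigma):
    return math.exp(-0.5 * ((lam - mu) / sigma) ** 2)

def cmf(lam):
    """Very rough Gaussian stand-ins for the CIE x, y, z curves."""
    x = gauss(lam, 600, 40) + 0.35 * gauss(lam, 447, 20)
    y = gauss(lam, 555, 50)
    z = 1.7 * gauss(lam, 445, 25)
    return x, y, z

def planck(lam_nm, T):
    """Relative blackbody spectral radiance at temperature T (kelvin)."""
    lam = lam_nm * 1e-9
    hc_over_k = 0.0143878  # h*c/k in m*K
    return lam**-5 / (math.exp(hc_over_k / (lam * T)) - 1.0)

def chromaticity(T):
    """Integrate the spectrum against the stand-in curves; return (x, y)."""
    X = Y = Z = 0.0
    for lam in range(380, 781, 5):
        s = planck(lam, T)
        xb, yb, zb = cmf(lam)
        X += s * xb
        Y += s * yb
        Z += s * zb
    total = X + Y + Z
    return X / total, Y / total

x_cool, _ = chromaticity(3000.0)   # incandescent-like source
x_sun, _ = chromaticity(5800.0)    # roughly solar surface temperature
# x_cool comes out larger: the cooler source is redder.
```

With the real CIE tables and a measured surface-level solar spectrum (like the hyperphysics curve), the same integration would tell you whether that curve really is "a recipe for a pale yellow".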

(edited for clarity and to fix usage of terminology)

George
2004-Dec-13, 09:53 PM
A Kodak gray card, so I'm told, reflects all colors evenly. I plan to use it with various color backgrounds to help yield a fair assesment. Does this make sense to you?

Not really. What do you plan to use to illuminate the card with? Is it your intention to compare the spectrally flat surface of the card with the Sun, or to use the card to make the Sun's light less bright so that it can be compared with another reference? :-k
Just needed a surface to illuminate that would have no color bias. [I assume it is cheap and easy to purchase locally.]


OK, what about using my hand as a strobe, by making a small slit between 2 fingers and moving it back and forth? Oddly enough, this results in everything taking on a bluish tint. :o

Seems our blue cones have a different temporal response to flashes of light than red or green. Blue cones take longer to adjust to a change in light level, which is the basis behind...
You are coming up to speed quickly. :)
A strobe was made. Most of the drama begins... here (http://www.badastronomy.com/phpBB/viewtopic.php?p=185507&highlight=peachy+pink#185507)


It never seemed to me that the Sun was yellow at midday. IMO, what's probably happening is it's inherently pale bluish-greenish but the atmosphere's scattering evens it out to white. If it has any color to it whatsoever, we're probably looking at only a few percent saturation. However, if this curve (http://hyperphysics.phy-astr.gsu.edu/hbase/vision/solirrad.html) (was it you who originally cited it? I forget) is accurate, that Earth's surface curve looks like a recipe for a pale yellow. :-k
I would think the surprising flatness of the plot would be a slam dunk for white.

umop ap!sdn
2004-Dec-16, 12:51 AM
The Sun is the brightest light source we have access to, so our eyes have no external reference frame with which to compare it. It is close enough to white as it is that we tend to (or at least I tend to) see it as such when there is no comparison. It's the same reason the Moon looks white in the night sky. If we had a perfectly spectrally flat light source, we could put it side by side with the Sun and say "the Sun is such-and-such a color."

Meanwhile, how about we just define white as the color of the Sun at midday from sea level at 40°N on June 21? :lol:

George
2004-Dec-16, 02:17 PM
The Sun is the brightest light source we have access to, so our eyes have no external reference frame with which to compare it. It is close enough to white as it is that we tend to (or at least I tend to) see it as such when there is no comparison. It's the same reason the Moon looks white in the night sky. If we had a perfectly spectrally flat light source, we could put it side by side with the Sun and say "the Sun is such-and-such a color."
That makes sense. Wouldn't the essentially flat spectrum of the Sun, viewed from air's bottom, suffice? I plan to have a darkened tent to help observe the recombined adjusted spectrum result, yet still allow enough "flat spectrum" normal sunlight so I won't knock the scope and tent down. [This is yet another reason I plan to do this during the daytime. :wink: :) gig'em]

I am also considering using the S.A.D. to see if intensity is still an issue.


Meanwhile, how about we just define white as the color of the Sun at midday from sea level at 40°N on June 21? :lol:
Don't forget temp., humidity, dust, etc. We would want the white to be right, right?

umop ap!sdn
2004-Dec-17, 12:06 AM
Wouldn't the flat, essentially, spectrum of the Sun, viewed from air's bottom, suffice? I plan to have a darkened tent to help observe the recombined adjusted spectrum result, yet, still allow enough "flat spectrum" normal sunlight so I won't knock the scope and tent down.

You'd lose a little accuracy, but I suppose the result would be close enough. :)



Meanwhile, how about we just define white as the color of the Sun at midday from sea level at 40°N on June 21? :lol:
Don't forget temp., humidity, dust, etc. We would want the white to be right, right?

Oh yeah... forgot about those. #-o