"Perception reduction" is a common phrase amongst followers of Ayn Rand. Their argument is that, ultimately, conceptual knowledge must be justified by connecting it to human perceptions. Sometimes they misspeak and say "sensory reduction", which is very different. What's the difference between a sense and a percept? It's the difference between knowing "button 2513 is down" and knowing "some object is moving toward me at 5 m/s". In other words, perception requires an intelligent agent such as a person, a circuit, or an ant. Mere sensory data does not. A rock receives the same sensory input as an eye. It's important to note that humans are not rocks, and due to our complexity, perception is the base-level interaction we have with reality. Even for babies. (Something Rand did not believe; the science of the matter (perceptual psychology) has been done since her death in 1982. Alas, figures in history whose work did not contain hard mathematics and formal logic are destined to be shown wrong sooner or later on more or less all accounts. The degree of wrongness is varying, of course, and is often proportional to how mathematical an idea is without being pure math.)
Like much of Randian language, "perception reduction" is a vague phrase whose interpretation can shift. Is the Christian who insists they perceived God's touch and God's light thereby justified in their belief in God? The brain is, after all, a complex piece of machinery: for mysterious reasons we can suddenly feel cold, or warm, even when we are nowhere near a source or sink of heat.
In other words, many of our "perceptions" are made up. Many are filtered. Did you know there is a disorder (somatoparaphrenia) in which a patient can look at their own arm, perceive it as an arm, yet not perceive it as their arm? They fiercely deny it's theirs. And when a jet of cold water is sprayed into their ear, for a moment they recognize the error and agree that, yes, their arm is their arm. Time passes, and they deny it again. This is just one small example of the fun stupidities our brains are capable of, and a rare one at that, unlike the cognitive biases that are universal among humans. Our brains are corrupted.
Many of our "senses" are corrupted too. Do our eyes respond to photons at ultraviolet wavelengths? No. (So how did we learn about ultraviolet light at all?) At higher levels of organization there are deviations everywhere from a hypothetical "ideal". Our brains are corrupted, our consciousnesses are corrupted, our models of reality are corrupted. And being part of reality ourselves, there are hard limits to what we can perceive.
Is this a problem? Yes, but not a crippling one. Errors can cancel each other out, or they can be negligible for certain purposes. Evolution does not design ideal creatures, yet it designed us well enough that we can comprehend it.
The magical thing about intelligence is that, even while it's in error, it can detect its own errors and (sometimes) compensate for them. Of course it gets tricky when stepping up a level of meta. What if our error detection systems have errors? They do! We can detect them! It's confusing unless you've studied Control Systems and are familiar with the concept of feedback. Recursive approaches get rid of a lot of meta-level confusion.
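The feedback idea from control systems can be made concrete with a toy simulation of my own devising (a sketch, not anything quoted from the control-theory literature): a system whose every action is wrong by a constant bias can still hit its target exactly, provided it feeds its accumulated error back into its next action.

```python
def simulate(controller, target=10.0, bias=-0.5, steps=500):
    """Drive `state` toward `target` with an actuator that is
    systematically wrong by `bias` on every single step."""
    state, integral = 0.0, 0.0
    for _ in range(steps):
        error = target - state   # the system detects its own error...
        integral += error        # ...and remembers its past errors
        state += controller(error, integral) + bias
    return state

# Reacting only to the current error leaves a permanent offset:
proportional = lambda error, integral: 0.1 * error

# Feeding back the accumulated error cancels the bias entirely:
with_feedback = lambda error, integral: 0.1 * error + 0.01 * integral
```

With these (made-up) numbers, `simulate(proportional)` settles around 5, permanently off by 5, while `simulate(with_feedback)` converges to the target 10: the error-in-the-error-detector problem is handled by the same recursive trick, not by an infinite tower of separate correctors.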
What's the "perception reduction" argument supposed to be an argument for, anyway? It's to stop an infinite regress problem. You claim to know something, I ask an infinite number of "why"s and "how"s. Eventually, Rand followers claim, you will arrive at perceiving something with your senses. True, I'll admit, I know my shoe is untied because photons from the sun have hit the shoe and some photons have been scattered off the shoe toward my pupil which is struck by some of those photons which activates an electrical response in my eye and up through my brain, where more electrochemical processes happen eventually leading to me saying "I perceive my shoe to be untied." In another era, you might have the physics wrong, but the story should more-or-less be the same: something "out there" interacts with your eye and makes something happen "in here". But is this the correct stopping point?
On the contrary! It's just the beginning! What you perceive is the start of inference, and you have to look at how your inference engine works to find out whether its process can be justified. Your brain can't not run inference algorithms on the raw data it receives; that's built into how the brain is designed.
Rand wasn't aware of Ray Solomonoff. Unfortunate, for he had a mathematical model of inference. It's not how human inference works, but it represents an "ideal", and we can find parts of the brain whose job ends up approximating that ideal. We can model bee flights as solutions to differential equations; those equations are the "ideal", and the bee's brain acts as an approximate solver for it. For hard problems in NP we can find things in nature that approximately solve them, sometimes using the same algorithms (up to computational equivalence, anyway) that we've independently come up with on computers. And so on.
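Solomonoff's idea can be gestured at with a toy. Assume a deliberately silly "universal machine" of my own invention (not Solomonoff's actual construction): a program is a bitstring that outputs itself repeated forever. Weight every program by 2^-length, keep the ones consistent with what's been observed, and predict the next symbol by total surviving weight:

```python
from itertools import product

def toy_machine(program, n):
    """Toy 'universal machine': program p outputs p repeated forever."""
    return "".join(program[i % len(program)] for i in range(n))

def predict_next(observed, max_len=10):
    """Solomonoff-style prediction: weight each program by 2^-length,
    keep those whose output matches the observed prefix, and tally
    the weight behind each possible next symbol."""
    weights = {"0": 0.0, "1": 0.0}
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            p = "".join(bits)
            out = toy_machine(p, len(observed) + 1)
            if out[:len(observed)] == observed:
                weights[out[-1]] += 2.0 ** -length
    total = sum(weights.values())
    return {b: w / total for b, w in weights.items()}
```

For the observed string `"010101"`, 92% of the surviving weight predicts `"0"` next: the short repeating programs dominate, which is exactly the simplicity bias doing the predictive work.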
Ultimately, beliefs have to be accurate. If they aren't accurate, there is no point in trying to justify them. What does it mean for a belief to be accurate? It means the belief must constrain our anticipated experiences in some way such that we're never surprised. If we're surprised, that indicates our knowledge is wrong. It is not enough that a belief can somehow be linked to an experience or a perception; it must constrain our expected experiences so that we aren't surprised. It must pay rent.
Sometimes we have two competing beliefs that equally explain our past experiences and even make forward predictions that leave us unsurprised whichever one we believe (i.e. their predictions come true). An informal version of Occam's Razor suggests we pick the "simplest" one, i.e. the one that constrains our anticipations the most. Why? Here we run into the infinite regress problem again: Occam's Razor can't be justified by anything other than an appeal to Occam's Razor. One might suppose that brains which did not, as part of their architecture, somehow approximate Occam's Razor would not last long in an evolutionarily competitive environment. The regress stops, then, by saying "it is what it is". I do think recursive justification has a bottom, but it's not "perception reduction".
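The "constrains our anticipations the most" point falls out of Bayes' theorem directly. A minimal sketch (coin-flip hypotheses and numbers of my own choosing, purely for illustration): a rigid hypothesis that bets everything on one outcome gains on a loose hypothesis that spreads its probability around, every time the rigid prediction comes true.

```python
def posterior_rigid(n_heads, prior_rigid=0.5):
    """Posterior that the 'always heads' hypothesis is true after
    n_heads heads in a row, against a fair-coin alternative."""
    like_rigid = 1.0 ** n_heads   # rigid hypothesis: heads, guaranteed
    like_loose = 0.5 ** n_heads   # loose hypothesis: anything goes
    num = prior_rigid * like_rigid
    return num / (num + (1.0 - prior_rigid) * like_loose)
```

Both hypotheses "explain" a run of heads, but after ten heads the rigid one sits at posterior 1024/1025 (about 0.999): constraining anticipation is automatically rewarded, not rewarded by fiat.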
How do we know about ultraviolet light?
The human story can be told as a progression of ever more sophisticated methods of poking something, more than once, and inferring a model that can predict the future. At some point humans had poked everything they could poke directly; then they poked things that killed them, and the ones who learned to poke with a stick instead survived. Eventually those humans died too, from things such as radiation poisoning, and the survivors had to develop very sophisticated sticks to gather the data needed to accurately infer things about radioactive phenomena.
When you look at your outstretched finger and see it straight, and know it is straight, and never see a counterexample, you are justified in believing it is straight. Then one day you put it partway into a stream and see it as crooked. You see it as crooked, you feel it as straight, and you are surprised. This means your model is wrong; indeed, it is incomplete. What's missing? The knowledge of what "seeing" actually is. It took us a long time to figure it out, but now that we know about photons and the refraction of light, simultaneously seeing our finger bent while feeling it straight is no longer surprising. It is this reduction to an accurate, constrained, prediction-generating model that leaves us unsurprised which I think is the important basis of justification. The fact that we, as beings in reality, must gather data about reality through very narrow inputs to our brains in order to generate such models is something of a side note. It's unimportant where the data comes from. Our brains will crunch on whatever data is present, whether that data is "faithful" to reality or not, and by measuring surprise against future data we can determine whether our beliefs were justified.
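The finger-in-the-stream case really is this quantitative. A small sketch (refractive indices are standard textbook approximations; everything else is my framing): once "seeing" is modeled as light rays bending at the air-water boundary per Snell's law, the crooked finger becomes a prediction rather than a surprise.

```python
import math

N_AIR, N_WATER = 1.000, 1.333  # approximate refractive indices

def refraction_angle(incidence_deg, n_from=N_AIR, n_to=N_WATER):
    """Angle (degrees) of the transmitted ray, from Snell's law:
    n1 * sin(theta1) = n2 * sin(theta2)."""
    s = n_from / n_to * math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(s))

# A ray hitting the water at 45 degrees continues at a shallower
# angle (about 32 degrees), so the submerged part of the finger
# appears displaced from the part above the surface: "crooked".
bend = 45.0 - refraction_angle(45.0)
```

The model predicts roughly 13 degrees of apparent bend at that geometry; the surprise is gone because the anticipation is now constrained correctly.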
That tense, by the way, is on purpose. Many beliefs were only justified in the past, with inferior information. Can we really say a belief is justified? Such vague phrasing! When I say my belief is justified, I mean that I expect it to remain accurate in the future, and I will be surprised if it does not. Since I can't know the future for certain, I can't take a justified piece of knowledge as absolute, inalterable truth. Bayes' Theorem captures this just fine. A belief may appear justified right now, with current information; with tomorrow's information we may perform a Bayesian update and find the belief no longer justified. Indeed, if God stopped the Earth from revolving around the sun or rotating on its axis, as he has done before according to the Bible, I would no longer consider my belief that no God-like entity is or was present in our light-cone of the universe to be "justified".
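That kind of demotion is just Bayes' theorem in one line. A minimal sketch (the probabilities are invented for illustration, not drawn from anything):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from Bayes' theorem:
    P(H|E) = P(E|H) P(H) / (P(E|H) P(H) + P(E|~H) P(~H))."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1.0 - prior) * p_e_given_not_h)

# Yesterday the belief looked justified: prior 0.9. Today's observation
# is unlikely if the belief is true (0.05) but likely if false (0.8):
downgraded = bayes_update(0.9, 0.05, 0.8)
```

With those made-up numbers the posterior drops to about 0.36: the belief was justified yesterday, with yesterday's information, and is not justified today.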
Posted on 2013-02-11 by Jach