What would perfect knowledge look like? Would we recognize if we saw it? Would it be of any use?
Imagining success, or the end result, before launching an exploration is often helpful; so is defining an exploration’s scope. Three-minute eggs can be made by boiling water in a small saucepan. It is not necessary to boil an ocean.
This article’s scope is strictly limited to correlations between mental constructs and physical items, both natural (such as mountains and quarks) and man-made (such as televisions and forks). It assumes such things “truly” exist (at least for a while) even if humans vanish. Neither moral nor sociological “truths” are addressed.
In one respect, rationalists are spot-on. With the possible exception of certain domains of mathematics and logic, human perceptions offer only faint shadows and distorted glimmers of reality. Unlike Plato’s forms, there is little pure or idealized about human perceptions of mountains and eating utensils.
The solid comfortable chair in which we sit is, in reality, an ephemeral potpourri of widely separated particles stitched together by invisible forces. As of late 2009, the particle that may be responsible for our chair’s “heft” or mass, the Higgs boson, remains a conceptual construct not yet experimentally detected.
Let’s name this level of reality “particle reality” and assume that it represents reality as it “truly” is. Assume also that humans develop the ability to “see” particle reality, perhaps with an enabling technology, and cognitively process it. By way of illustration, consider an analogy: how digital cameras create crude reality constructs.
Light, in the form of photons, is reflected from objects we wish to photograph. A camera’s lens directs the reflected light onto a sensor containing millions of “photon traps,” or photosites. Each photosite corresponds to a pixel – a 12 megapixel (12MP) camera has 12 million photosites. Sensing and processing color involves several steps.
A red, green, or blue filter (the familiar RGB primary colors) located in front of each photosite ensures that only one of the three colors reaches a particular photosite. Intensity of the mono-color light is recorded in the form of an electrical current. Within limits, stronger light produces stronger current.
Cameras are designed to cater to human perceptions of reality rather than aim for the most accurate possible rendering of reality. Note, for example, that the pattern or mosaic of individual filters contains twice as many green filters as red or blue to compensate for the human eye’s greater sensitivity to green.
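The filter mosaic described above can be sketched in code. The snippet below builds a hypothetical 4x4 tile of the common “RGGB” Bayer arrangement (the layout varies by sensor; this one is assumed for illustration) and confirms the two-to-one green bias mentioned in the text:

```python
# A minimal sketch of a Bayer filter mosaic (a hypothetical 4x4 tile).
# In the common "RGGB" arrangement, each 2x2 block holds one red,
# one blue, and two green filters, matching the eye's green bias.

def bayer_pattern(rows, cols):
    """Return the filter color ('R', 'G', or 'B') for each photosite."""
    tile = [['R', 'G'],
            ['G', 'B']]
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

pattern = bayer_pattern(4, 4)
counts = {color: sum(row.count(color) for row in pattern)
          for color in "RGB"}
# Half of all photosites are green; a quarter each are red and blue.
print(counts)  # {'R': 4, 'G': 8, 'B': 4}
```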
At this point, the camera has captured and recorded millions of dots of red, green, or blue. It has also recorded the relative strength of the monocolor light that reached each dot. But, there are no yellows, violets, grays, or whites.
The "full" color range of our visible spectrum is computed by a processor in the camera, a process called demosaicing. The contents of the photosites surrounding each photosite are analyzed. If the photosites surrounding a particular site are all at 100% strength, demosaicing logic may encode the site as white (255, 255, 255 in RGB terms). Different color strengths in surrounding sites produce different results.
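The neighborhood-averaging idea can be made concrete with a toy sketch. This is a simplified bilinear-style interpolation, not any camera’s actual pipeline (real demosaicing algorithms are far more sophisticated; see note 3):

```python
# A toy sketch of demosaicing (bilinear flavor), not a real camera
# pipeline. Each photosite records one color; the missing colors are
# estimated by averaging the neighbors that did record them.

def demosaic_center(mosaic, colors):
    """Infer a full RGB value for the center of a 3x3 mosaic.

    mosaic: 3x3 grid of recorded intensities (0-255)
    colors: 3x3 grid of filter colors ('R', 'G', 'B') per photosite
    """
    result = {}
    for channel in "RGB":
        samples = [mosaic[r][c]
                   for r in range(3) for c in range(3)
                   if colors[r][c] == channel]
        result[channel] = round(sum(samples) / len(samples))
    return (result['R'], result['G'], result['B'])

colors = [['R', 'G', 'R'],
          ['G', 'B', 'G'],
          ['R', 'G', 'R']]

# If every surrounding photosite is at full strength, the center
# comes out white, as described in the text.
full = [[255] * 3 for _ in range(3)]
print(demosaic_center(full, colors))  # (255, 255, 255)
```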
Though helpful, the analogy has at least two shortcomings.
First, only an image or perception of an object’s surface is formed – most cameras create flat, distorted 2D renderings of 3D surfaces. Not even 3D cameras perceive what lies beneath the surface. A camera perceives one-inch-thick ice on a pond the same way it perceives ice three feet thick.
A second shortcoming is that several hard truths prevent us from using reflected photons in any “particle reality” sensor. One of these hard truths involves sensing; another involves conceptualizing.
Sensing Roughly speaking, photons are “too large” to show detail about subatomic particles. An analogy would be using very fast film (say ISO 3200) to photograph fine detail on a dense circuit design. The subatomic particles we wish to sense are so small they would be tossed about if any known particle were “reflected” off them. A different type of sensor would be needed: perhaps (a highly speculative "perhaps") a reimagined gravimeter that detects minute perturbations made by our target particles on certain forces.4
Conceptualizing Locations of particles cannot be known in the way humans have come to think of location – only probabilities of potential locations can be known. A different way of conceptualizing where the little guys are is needed. For thousands of years, humans thought of locations as fixed points. To deal with subatomic particles, scientists had to reinvent how to conceptualize location. Currently they think of location as a “cloud” and compute it in terms of a probability cloud.
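The shift from point to cloud can be illustrated with a sketch. This is not real quantum mechanics (an actual wavefunction is far richer); it simply models a particle’s position as a one-dimensional Gaussian “cloud,” where the only meaningful question is the probability of finding the particle in a region, never its exact point location:

```python
# An illustrative sketch, not real quantum mechanics: a particle's
# location modeled as a 1D Gaussian probability cloud. We can ask for
# the probability of finding it in an interval, but never a fixed point.
import math

def probability_in_interval(mean, sigma, lo, hi):
    """P(lo <= x <= hi) for a Gaussian cloud centered at `mean`."""
    def cdf(x):
        return 0.5 * (1 + math.erf((x - mean) / (sigma * math.sqrt(2))))
    return cdf(hi) - cdf(lo)

# About 68% of the cloud lies within one sigma of the center.
p = probability_in_interval(mean=0.0, sigma=1.0, lo=-1.0, hi=1.0)
print(round(p, 3))  # 0.683
```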
Conceptually “friendlier” ways of perceiving “location clouds” may evolve in the same way that WW II model ships on maps were replaced by 1950s trackballs and electronic location displays, which in turn were replaced when mice and iPad touch screens opened portholes into a multitude of location applications.
Several additional issues contribute to conceptual gaps.
Invisible wavelengths of “light” such as infrared and ultraviolet are ignored. Only visible light is measured and recorded. RGB has no direct correlation to visible wavelengths; it's merely an approximate encoding of how human brains interpret light.5 Additional information is lost when RAW data from the sensor is converted to lossy formats such as JPEG.
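One kind of conversion loss is easy to demonstrate: squeezing the sensor’s wide dynamic range into fewer bits. The sketch below assumes hypothetical 12-bit raw readings; real JPEG loss is more involved (chroma subsampling, DCT quantization), which this does not model:

```python
# A toy illustration of information loss: squeezing 12-bit sensor
# readings (0-4095) into 8 bits (0-255). Distinct raw values collapse
# to the same 8-bit value and cannot be recovered afterward.

def to_8bit(raw12):
    return raw12 >> 4  # drop the 4 least-significant bits

raw_values = [1000, 1005, 1010, 1015]   # four distinct 12-bit readings
converted = [to_8bit(v) for v in raw_values]
print(converted)  # [62, 62, 63, 63] -- four values collapse to two
```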
Even if an imaginary “particle camera” were able to create 3D images of probability clouds for each and every particle in our subject, the image is “true” only at one instant in time. Accurate perception of reality requires perception of a continuum of such instants – analog tracks of probability clouds for untold trillions of particles.
Language also gets in the way of understanding. Words such as color, violet, light, and particle are linguistic signs with semantic baggage. They may help humans go about their lives, but they can widen the gap between reality and perception.
3. Some (very expensive) cameras use three sensors. Incoming light is “split” so that all three colors are captured for each pixel (each color in a different photosite). When only one color is captured per pixel, as described in the text, RGB coding must be inferred by interpolating the other two colors from neighboring pixels. The literature on this technique is extensive. See Chang, Lanlan and Tan, Yap-Peng, “Effective Use of Spatial and Spectral Correlations for Color Filter Array Demosaicking,” IEEE Transactions on Consumer Electronics, Vol. 50, No. 1, February 2004, pp. 355-365; and Ramanath, Rajeev and Snyder, Wesley E., “Adaptive Demosaicking,” Journal of Electronic Imaging, 12(4), October 2003, pp. 633-642.
4. The force perturbation analogy may be useful as a thought experiment, but it is unlikely to be practical. The standard model explains forces as resulting from matter particles exchanging messenger particles (also called force-mediating particles). The effect of exchanging messenger particles is equivalent to a force influencing the particles that exchange them. See Greene, Brian, The Elegant Universe, New York: W. W. Norton & Company, 2003, pp. 123-125.
5. Think of RGB as an element of a type of user interface.