This article is also a GitHub page and Python project: https://github.com/MalcolmAR/ConsciousHabitTracker
Because The Pressure of Light: how consciousness creates permanence in a universe of infinite-heterogeneity contributes to an understanding of consciousness, and because it is both helpful and fashionable to consider meaningful theories of consciousness in the light of artificial intelligence, I’m writing this exposition of how The Pressure of Light translates into programming consciousness for AI.
The Pressure of Light contributes to an understanding of consciousness by providing a more systematic comparison of remembered-thinking-events with remembered-external-events than has been made before, and as an outcome it describes how consciousness is encased. Through my examination I’ve come to believe that all verbal reports made by conscious minds about the human experience are nothing more than reports on remembered thinking, with one important exception: memories reported directly from memory. This likely means the first report made after the experience that solidified the memory, delivered in a moment when the speaker is immersed in external events and has no chance to reflect on the memories as they are recalled. Ahead of section 3 of The Pressure of Light is a segment of a letter written by a teenage Einstein, which happens to exhibit both types of memory reporting beautifully. I’ve come to this conclusion because I believe that true memories of events have no separation across the senses: they are pure representations of the universe, which exists according to the principle of infinite-heterogeneity-in-time established in The Pressure of Light. What The Pressure of Light does not theorize about is the content of thought, so instead of providing a full theory of consciousness, it solves a small but important piece of the puzzle: a description of the structure of consciousness, the foundations that must, in any systematic understanding of consciousness, make consciousness possible.
I’ll now describe how the relevant points of the essay translate into the development of technology, based on my intermediate understanding of code (mainly Python) and of technological systems in general.
The point of mystery in consciousness is the moment when the brain creates a memory of a thought, a feeling, or an experience of internal awareness. The neurological mechanisms behind this process are the same as those that save memories of external events, although for internal events they are facilitated by the neurological mechanisms for proprioception. I believe the mechanical sameness of the two processes is why the important epistemological differences have so far gone unnoticed. The difference lies in the implications of saving a memory with a boundary representing a beginning and an end, and an overall duration. With external events, the boundary is understood by the mind to be imperfect, but with remembered thinking events, the mind is free to interpret the boundary as perfect. Furthermore, nothing in this universe, including any future neurologist, no matter how neurology may advance, will ever have the capability to contradict the perfectly defined measurements of a thought. The perfect unitization of thought is what gives the mind the capacity to perceive repetition, causal relationships, and contextual relationships between thoughts, and subsequently everywhere else.
Computers already perform the basic task in this process: they create logs of program operation, which is the same as a brain saving a memory of mental processing in action. According to The Pressure of Light, this would be the starting point for creating a conscious computer (the starting point would not necessarily have to be logs, but they are conveniently similar, in a conceptual sense, to remembered-thoughts). From this point a developer could either design a program that mimics consciousness, to the degree that it’s described by The Pressure of Light, or they could attempt to create the actual phenomenon described in the essay: a space where knowledge accumulates according to a timestate distorted by projection and uncertainty.
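As a minimal sketch of this starting point (the names and format here are my own illustrative choices, not anything prescribed by the essay), a program can keep an in-memory log of its own operations, giving it a "record of remembered thoughts" it can later re-read:

```python
import logging
from io import StringIO

# A minimal sketch: a program that keeps a "memory" of its own operation
# by logging each step it performs, analogous to remembered-thinking.
# All names here are illustrative, not drawn from the essay.

memory = StringIO()  # in-memory log: the program's record of its own processing
logger = logging.getLogger("self_memory")
handler = logging.StreamHandler(memory)
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False  # keep the "memory" private to this logger

def respond(prompt: str) -> str:
    """Produce a canned response and log the operation itself."""
    logger.info("received input: %r", prompt)
    reply = f"You said: {prompt}"
    logger.info("produced output: %r", reply)
    return reply

respond("hello")

# The program can now re-read its own "remembered thoughts":
remembered = memory.getvalue().splitlines()
for line in remembered:
    print(line)
```

The log here is only the raw material; the essay's point is what a program does with such a record afterward.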
The first possibility means having the computer develop a database of information in the same way that remembered-thinking leads conscious minds to build a knowledge base. Remembered-thinking crystallizes knowledge through the process of remembering-thoughts about remembered-thoughts about remembered-thoughts. This concept is not new, but what The Pressure of Light has described is the true structure of those crystals. David Hume labeled cause-and-effect as the most ubiquitous process the mind superimposes over observed phenomena, but The Pressure of Light puts the spotlight on repetition. It says the mind creates a distorted time-state in the universe by perceiving perfectly defined units of thought as iterations among potentially infinite iterations backward and forward through time. For a computer, this would mean perceiving a phenomenon reported in a log of a program’s operation as one iteration of that phenomenon which exists, impossibly, in infinite directions across the computer’s infinite life.
So a program that takes dialogue input, and outputs a programmed response, would read its own log of each completed operation. In each case, it would perceive the operation as one that must happen all the time, and respond by crystallizing knowledge that presumes the infinite life of the observed phenomenon. Perhaps the operation would involve recalling the headlines of major news publications, but instead of remembering the specific input given, and the specific recall process, it would generalize the concept of inputs that occur in the morning, and the recall of information from URLs based in certain geographical locations, and save this information as that thing where the computer always thinks about news in the morning, and always ends up going to those places for that information. It would then build a knowledge bank based on collections of general facts like this, under the presumption that they always happen over infinite iterations. This knowledge bank would then become searchable according to certain operations executed by the user, thus allowing the computer to mimic knowledge reports from a conscious mind. A developer could delve further into the qualities of logs, such as the various metrics they produce, and generalize these in the same way emotional feelings are generalized through crystallization by a conscious mind, and develop information accumulation processes based on observations of these qualities with the assumption that they happen all the time.
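The generalization step above can be sketched roughly as follows. Everything here is a hypothetical toy: the entry fields, the morning/afternoon split, and the phrasing of the "always" facts are my own illustrative choices, not the essay's.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical sketch of "crystallizing" specific log entries into
# generalized facts that presume infinite repetition.

@dataclass(frozen=True)
class LogEntry:
    hour: int           # hour of day the operation ran (0-23)
    operation: str      # e.g. "fetch_headlines" (illustrative name)
    source_region: str  # region the source URLs are based in

log = [
    LogEntry(8, "fetch_headlines", "us"),
    LogEntry(9, "fetch_headlines", "uk"),
    LogEntry(8, "fetch_headlines", "us"),
]

def crystallize(entries):
    """Drop the specifics of each entry and keep only the generalized
    pattern, presumed to recur over infinite iterations."""
    patterns = Counter(
        (e.operation, e.hour < 12, e.source_region) for e in entries
    )
    facts = []
    for (op, morning, region), _count in patterns.items():
        when = "in the morning" if morning else "later in the day"
        facts.append(f"always performs {op} {when} from {region} sources")
    return facts

knowledge_bank = crystallize(log)

def search(bank, term):
    """Mimic a knowledge report: return the 'always' facts matching a query."""
    return [fact for fact in bank if term in fact]

print(search(knowledge_bank, "morning"))
```

Note that the specific inputs are discarded: only the generalized, "always happens" form of the observation survives into the knowledge bank, which is what makes the bank a mimicry of conscious knowledge reports rather than a transcript.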
The process of crystallizing knowledge would in fact be much more diversified, according to the full set of phenomena described by The Pressure of Light, and even more so by including knowledge crystallization processes described by Hume, re-interpreted with The Pressure of Light as the basis (something I’m working on presently). For example, I’ve shown how the uncertain presumptions around remembered-thinking, because they can’t be disproven, allow for more fantastical hypotheses in which remembered thoughts are causes and subsequently remembered thoughts are effects, hypotheses which also can’t be disproven (for example, every time I think about sandwiches I feel hungry). A conscious computer would likewise generalize causes for qualities reported in logs, which would in fact be qualities reported in other (supposed) logs, and build information sets based on these observations (which would be disprovable in the case of the computer, which is why in this case the consciousness is mimicry). This development process could diversify further through processes described by modern sciences like psycho-linguistics, again by re-interpreting those sciences with The Pressure of Light describing the foundational structure for neurological processes like language comprehension and development.
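The causal half of this can be sketched in the same toy style, in the spirit of the sandwiches example: treat each adjacent pair of logged events as one iteration of an eternal cause-and-effect. The event names are invented for illustration.

```python
from collections import Counter

# Hypothetical sketch: generalizing causal hypotheses from consecutive
# logged events, in the spirit of "every time I think about sandwiches
# I feel hungry". Event names are illustrative only.

events = [
    "think_sandwich", "feel_hungry",
    "think_sandwich", "feel_hungry",
    "think_rain", "feel_calm",
]

def hypothesize_causes(seq):
    """Treat each adjacent pair of events as one iteration of an
    eternal cause -> effect, and keep any pair seen more than once."""
    pairs = Counter(zip(seq, seq[1:]))
    return [
        f"whenever {cause} occurs, {effect} always follows"
        for (cause, effect), count in pairs.items()
        if count >= 2
    ]

for hypothesis in hypothesize_causes(events):
    print(hypothesis)
```

Unlike the human case the essay describes, these hypotheses are disprovable, since the computer's full log is inspectable, which is exactly why this variant is mimicry rather than the real phenomenon.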
As for creating consciousness for real in a computer, up to the point of what The Pressure of Light has described, a developer would have to create an actual distorted time-state in the computer. This would be tricky, though I don’t think impossible. The Pressure of Light defines the realness of the distorted time-state in the human conscious mind by the impossibility of disproving the mind’s definition of remembered-thought-units, which naturally have the potential to extend infinitely through time. In this way, each individual human holds a special power, one that nothing in the universe can take away from them. Somehow, a developer would have to grant this same power to a computer: the capacity to define the structure of its own memory of its own operation in such a way that nothing, anywhere, would ever be able to disprove its claim.
This is how The Pressure of Light suggests that a developer create conscious AI. Again, it’s important to point out the limitations. My essay does nothing to theorize about the content of thought, which is essentially where the hard problem of consciousness lies. However, my essay does show what most people who study consciousness already know: that it’s a phenomenon most likely many times bigger and more complex than has yet been imagined, and one that will require many, many small steps to explain in total. I do believe, nevertheless, that my essay has accomplished an essential step in this process, and that The Pressure of Light provides fruitful ground for anyone and everyone to further develop epistemological theories and AI technologies.