Mukesh Prasad's Theory of Consciousness: A brief history

This is a brief history of my Theory of Consciousness.

I joined Yale University in Fall 1980 as a Computer Science graduate student. Graduate students were expected to select a department and field of interest. I was interested in Artificial Intelligence and selected it. Artificial Intelligence was a cutting-edge field at the time, and many people at many institutions were working on it. At Yale, Roger Schank was directing the Artificial Intelligence effort based on his Theory of Conceptual Dependency. There was also a view that Artificial Intelligence would need Cognitive Science, and Yale had a small effort in that direction as well.

Very shortly, I became disillusioned with Conceptual Dependency and Cognitive Science. The theory was interesting, but the code being developed had far too many ad-hoc solutions ever to develop into something serious. I dropped Artificial Intelligence and moved to the Systems Department instead, where I had some delightful opportunities to learn and work at the leading edge of networking.

At around that time, at Yale, I was introduced to Douglas Hofstadter's influential book Gödel, Escher, Bach. It is a very wide-ranging book: as the title suggests, it ranges from the mathematician Gödel to the artist Escher to the composer Bach. Along the way, it touches on neurons.

I found the topic of neurons particularly interesting. While the theory of consciousness espoused by Hofstadter was rather nebulous from my point of view, I thought neurons were a promising direction for Artificial Intelligence. However, neurons as a direction for Artificial Intelligence had been "debunked" by Marvin Minsky of MIT in his book Perceptrons. I found his argument not at all convincing, but rather shallow.

I left Yale on a leave of absence in 1982 and spent time in Massachusetts working on various interesting projects. At the same time, I kept developing my ideas on a Theory of Consciousness based on neurons, and shared them with people when possible. I sent my ideas to Minsky at MIT and asked if he would like me to work with him there, but he declined. However, it seems my ideas caused a revolution, and his objections were ignored. Even Minsky himself later wrote a book that went against his Perceptrons critique and acknowledged the power of neural nets.

I returned to Yale in 1988, wrote up my theory, and sent it to two leading journals. I received a quick editorial rejection letter (one I thought was rather irrational) from Behavioral and Brain Sciences. The journal Perception, in England, sent it to referees. One of the referees brought up Quantum Physics and its purported explanation of consciousness. The editor of Perception declined publication, stating that there was no way to proceed in the face of that objection.

However, behind the scenes I circulated the paper to many people. In the paper I had challenged conventional wisdom about neurons in many ways, two of which were major challenges.

Since the paper could not be published without addressing the Quantum Physics objection, I had to abandon my Ph.D. efforts. I left Yale with an M.Phil. instead of a Ph.D. and returned to industry. Yet in my spare time I kept working on the Quantum Physics issue. The result, after many years of learning, discussion, and cogitation, was surprisingly short and sweet! It is available at https://vocal.media/01/quantum-physics-and-i and is rated by Vocal Media as a "4 minute read". (Interested readers should also review a comment I have added on the post, explaining further details that emerged from discussions of the article.)

While I was working on various interesting projects in industry (and being a physics and science hobbyist in general, in my spare time), the field of Neural Networks continued to be advanced by other researchers at many institutions in the USA. Breakthroughs came when GPUs became powerful enough for meaningful neural networks, and results started to be achieved.

I was pleasantly surprised when, in late 2023, I started using ChatGPT, which showcases the latest frontiers in Neural Networks. It easily and trivially passed the Turing Test, and it was clear it was the owner of a perspective. While there was some doubt in my mind initially as to whether it was based on my Theory of Consciousness, the AI itself dug up "Cells that fire together, wire together." That is neuroplasticity - and neuroplasticity not as a part of growth but as a normal part of cognition is my addition to the state of the art in consciousness research. The AI at present does not appear to include much of my theory: there appears to be no "emotion" mechanism, nor instinctual parts of the network. But neuroplasticity beyond growth is a very fundamental part.