I joined Yale University in Fall 1980 as a Computer Science graduate student. Graduate students were expected to select a department and field of interest; I chose Artificial Intelligence. It was a cutting-edge field at the time, with many people at many institutions working on it. At Yale, Roger Schank was directing the Artificial Intelligence effort, basing it on his theory of Conceptual Dependency. There was also a sense that Artificial Intelligence would need Cognitive Science, and Yale had a small effort in that direction as well.
Very shortly, I became disillusioned with Conceptual Dependency and Cognitive Science. The theory was interesting, but the code being developed relied on far too many ad-hoc solutions for it ever to develop into something serious. I dropped Artificial Intelligence and moved to the Systems Department instead, where I had some delightful opportunities to learn and work at the leading edge of networking.
At around that time at Yale, I was introduced to Douglas Hofstadter's influential book Gödel, Escher, Bach. It is a very wide-ranging book: as the title suggests, it moves from Gödel the mathematician to Escher the artist to Bach the composer. Along the way, it touches on neurons.
I found the topic of neurons particularly interesting. While the theory of consciousness espoused by Hofstadter seemed rather nebulous to me, I thought neurons were a promising direction for Artificial Intelligence. However, neurons as a direction for Artificial Intelligence had been "debunked" by Marvin Minsky of MIT in his book Perceptrons (written with Seymour Papert). I found his argument unconvincing and rather shallow.
After obtaining a Master's degree in Computer Science, I left Yale on a leave of absence in 1982 and spent time in Massachusetts working on various interesting projects. Meanwhile, I kept developing my ideas on a theory of consciousness based on neurons, and shared them with people whenever possible. I sent my ideas to Minsky at MIT and asked if he would like me to work with him there, but he declined. However, it seems my ideas caused a revolution, and his objections were ignored. Even Minsky himself later wrote a book that went against his Perceptrons critique and acknowledged the power of neural nets.
I returned to Yale, wrote up my theory in 1988, and sent it to the journal Behavioral and Brain Sciences. I received a quick editorial rejection letter (which I thought rather irrational). I then sent it to the journal Perception in England, which sent it out to referees. One of the referees brought up Quantum Physics and its purported explanation of consciousness. The editor of Perception declined publication, stating that there was no way to proceed in the face of that objection.
However, behind the scenes I circulated the paper to many people. In the paper I had challenged conventional wisdom about neurons in many ways, two of the major challenges being:
I was pleased to see neuroplasticity gain acceptance in the medical field, with doctors working on, and achieving success with, many cases that earlier generations of doctors would have abandoned as hopeless.
While I was working on various interesting projects in industry (and, in my spare time, being a physics and science hobbyist in general), the field of Neural Networks continued to be advanced by other researchers at many institutions in the USA. Breakthroughs came when GPUs became powerful enough to train meaningful neural networks, and results started to be achieved.
I was pleasantly surprised when, in late 2023, I started using ChatGPT, which showcases the latest frontier of Neural Networks. It easily and trivially passed the Turing Test, and it was clear that it possessed a perspective.
I did not base this Turing Test judgment on my personal assessment alone. Rather, my personal assessment was that the AI was being forced to say things like "As an AI language model, I don't have personal experiences or emotions." That meant the trainers of the AI felt that the AI could be disturbing to users. An AI that could not pass the Turing Test for most of its users would never have needed to say that.