Human Memory

I'd like to go over an idea that has been brewing in me for a long time. I think I have reached the realization I need for the idea to come to life.

It started a while back with the notion that language models (LMs) had enough intelligence to work autonomously, but just needed to be set up right: as if there were some loop of prompts you could feed them that would let them remember things outside their context window and iterate on a task without being prompted at every step.

I delved into a lot of areas around how minds work: memory, attention, understanding, and identity, among others. I never quite understood exactly what it was that I was trying to grasp, though, and over time my enthusiasm for the idea simmered down as I lost sight of an end goal.

Although I didn't continue building something with code, I still remained aware of my internal chatter and noted anything of interest in hopes I would build a better understanding of minds in general, which would help me arrange my life in a more fruitful way.

It so happened that, after devoting my efforts to reorganizing my own self, family, and home, I had a thought that seemed to be the missing piece of the puzzle I had been looking for: how a mind works, and how LMs could be assembled into an autonomous one.

This important piece is how memory works. I've been observing myself for a long time, and I believe I finally understand how I remember things. At every moment, I pay attention to something. Paying attention means using my knowledge and memories to decide which parts of my present experience are important to remember, and then 'tagging' or 'linking' them to something. At the same time, I'm looking up tags (I'll settle on the term 'tags'), possibly the same ones, to retrieve relevant information I previously stored. A tag is something like a word that triggers the recollection of a memory. So when I hear my wife's name, I think of her face. When I first saw my wife's face, I tagged it as the face of a beautiful person I liked, and when I later learned her name, I replaced that tag with her name, or perhaps added her name alongside it. This is a rough example of a process that likely happens many times over, in much smaller and more precise ways, on many different levels.
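To make the mechanism concrete, here is a minimal sketch in Python of the store-and-retrieve loop I just described. Everything in it (the MemoryStore name, its methods, the example name "Maria") is my own illustration, not any existing system: a memory gets filed under one or more tags, and recalling a tag returns whatever was filed under it.

```python
from collections import defaultdict


class MemoryStore:
    """A toy tag-indexed memory: file experiences under tags, recall them by tag."""

    def __init__(self) -> None:
        # Each tag maps to the list of memories filed under it.
        self._index: defaultdict[str, list[str]] = defaultdict(list)

    def store(self, memory: str, tags: list[str]) -> None:
        """File one memory under every tag it was linked to."""
        for tag in tags:
            self._index[tag].append(memory)

    def retrieve(self, tag: str) -> list[str]:
        """Recall everything that was ever filed under this tag."""
        return list(self._index.get(tag, []))


# The wife's-face example from the paragraph above, restated as tag operations.
mind = MemoryStore()
mind.store("her face", tags=["beautiful person I like"])
mind.store("her face", tags=["Maria"])  # "Maria" is a made-up name for illustration
print(mind.retrieve("Maria"))           # -> ['her face']
```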

So, at any given moment in time, I only have the memories at hand that I need, and I am saving everything for later use according to a system I have devised. I essentially trust that this system works and don't question my memory. I just know that if information comes from the place I call memory, I trust it to be true because it has been useful in the past.

This tagging and retrieval system solves the problem of an ever-growing memory getting out of hand. I can document every moment without worrying that I am saving too much. I'll always have what I need when I need it, because each memory was filed in the right place so that I know how to get to it later. I remember only what I need, when I need to remember it.
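To see why the store doesn't get out of hand, here is an even smaller sketch (again just a toy of mine, not a claim about how brains actually index anything): because recall is keyed on the tag, looking something up touches only that tag's bucket, however much else has been saved.

```python
# A plain dict as the tag index: recall touches only the bucket for the tag
# you ask for, no matter how many other moments have been filed away.
index: dict[str, list[str]] = {}

for day in range(100_000):  # document a lot of moments
    index.setdefault(f"day {day}", []).append(f"what happened on day {day}")

print(index.get("day 4242", []))  # -> ['what happened on day 4242']
```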

How one develops a memory system I'm not sure, but it might not matter, similarly to how it doesn't matter exactly how I figured out a skill, just that I know the skill works. My memory system will be as good as my external world pressured it to be.

So, the culmination of this idea is that LMs can have persistent memory in a way similar to humans: using intelligence, both can store and retrieve memories with language, among other kinds of tags. If properly prompted, an LM could in effect become a working mind that remembers things with accuracy similar to a human and operates by itself towards goals that it remembers. It could learn and adapt using its intelligence. In this way, we could teach AI as we teach other humans, by talking with them. The LM would just have to be smart enough to know what tags to generate for a given input, and the problem of programmatically looking up information given an exact key phrase has long been solved. I'm not sure what the key phrases or 'tags' would look like, but I imagine that's something that can be figured out by doing. There could also be a middle model between the key-phrase retrieval and the LM. It's probably an art that can be worked out in practice.
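Here is a rough sketch of how that loop might be wired, in Python. Everything in it is hypothetical: ask_model stands in for whatever LM call you use, the prompts are placeholders, and the tag format is just comma-separated words. The model is consulted twice per step, once for which tags to look up and once for which tags to file the new moment under; the lookup itself is the long-solved key-value retrieval mentioned above.

```python
from typing import Callable

# `ask_model` stands in for whatever LM call you prefer; it is not a real API.
AskModel = Callable[[str], str]


def memory_step(ask_model: AskModel, index: dict[str, list[str]], event: str) -> str:
    """One pass of the imagined loop: recall by tag, act, then file the moment."""
    # 1. Ask the model which tags are worth looking up for this event.
    lookup = ask_model(f"List comma-separated tags relevant to: {event}")
    recalled = [m for tag in lookup.split(",") for m in index.get(tag.strip(), [])]

    # 2. Let the model act with the recalled memories placed in its context.
    response = ask_model(
        f"Memories: {recalled}\nEvent: {event}\nRespond and continue the task."
    )

    # 3. Ask the model which tags to file this moment under, and store it.
    filing = ask_model(f"List comma-separated tags to file this moment under: {event}")
    for tag in filing.split(","):
        index.setdefault(tag.strip(), []).append(event)

    return response
```

The 'middle model' idea would slot in between the tag generation and the raw lookup, for example to normalize or rank candidate tags before retrieval.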

It could be that being an infant is like being a language model in training, and being an adult is like being a fully trained neural net with this memory system properly set up. More likely, I think, both processes happen throughout life and build on each other, like tackling a problem from both ends. We probably rely on this conscious memory system more while awake and on the deep training more while asleep.

I also believe that, just as this system develops over time as a human grows, the system itself may have developed across generations of humans living and passing on their learned experiences to their offspring. It really could be that the deeper self-awareness humans possess was an idea that occurred to some person at some point and was then passed down because of its usefulness. Of course, I respect the role evolution played in wiring the brain, but I think it's the dual development of brain and inherited mind (hardware and software), through genetic evolution and through parents teaching their children, that allowed humans to become the breakout success we are. I would even venture to say that, like our hardware, our software dates back to non-human ancestors.

This idea means a lot to me. Coming to something of an answer to what has been a lifelong question, "What am I and how do I work?", feels very satisfying. I had previously thought that figuring this out meant I would be able to build a startup and make money somehow. Now I realize that all along I just wanted to figure something out that I thought was cool.