
Are LLMs just a big Memex?

What is a Memex?

A Memex is a hypothetical device for interacting with microform documents, introduced in Vannevar Bush's 1945 article "As We May Think". Its name combines memory with a second word, variously given as index or extender.

From the Memex Wikipedia page:

Bush envisioned the memex as a device in which individuals would compress and store all of their books, records, and communications, "mechanized so that it may be consulted with exceeding speed and flexibility". The individual was supposed to use the memex as an automatic personal filing system, making the memex "an enlarged intimate supplement to his memory"

Basically, that's what I'm trying to do by building a second brain.

My inner child giggles at the fact that this is called Meme(x), and I love learning about it in this era, because it makes the perfect desecrating analogy for what LLMs actually are. Truth be told, I'd love to have a peek into the future and see which meme will condense and desecrate this capital-inflated hype around LLMs.

LLMs and the marketing of AI

As Daniel Stenberg put it in this article, "The I in LLM stands for intelligence". You need only a glance at the principles behind LLMs, models constructed to predict the most probable next word, to realise how misleading and reductive calling this AI is. AI is many other things already out there, and they can't be put in the LLM box.
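To make the "predict the next most probable word" point concrete, here is a toy sketch, not a real LLM, just a bigram counter over a tiny made-up corpus that always greedily emits the most frequent successor it has seen. The corpus and function names are illustrative assumptions of mine:

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): like an LLM at its core, this model
# only ever picks the most probable next word given what came before.
corpus = "the memex stores books the memex stores records the memex links records".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Greedy decoding: return the most frequent successor seen in training.
    return following[word].most_common(1)[0][0]

print(predict_next("memex"))  # prints "stores"
```

Real LLMs condition on a long context with a neural network instead of a one-word lookup table, but the objective is the same shape: rank the possible next tokens and emit a likely one.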

Yes, it's impressive that we tricked rocks into talking. Yes, we are in a historic moment, like many others this century. And yes, we are going to go through change. But aren't we constantly doing that? Considering just the last decade, I can name at least one event that drastically changed human habits, and I dare say human life. And it started as a way to show off to other students.

LLMs have already had an incredible impact, but on little more than the future of search. We have created a sophisticated Memex, and nothing else.

The biggest concern I have about LLMs is the impact they will have on the slow-moving education system. If we don't understand these tools for what they are, the number of functional illiterates will just skyrocket. This has been a central issue throughout the last decade: dealing with false, misleading or non-factual information.

Can't wait for that meme, so we can all laugh at this.

Also, do LLMs really pass the Turing test?

I mean... don't we all start noticing when a text is LLM-generated? There are certain keywords, a certain style, the flow of the sentences: an ensemble of things gives it away. With just a handful of interactions and one year of exposure, I'm confident I can distinguish GPT-generated text. I bet this is easily mappable. If a human can reliably identify a text as "written by an AI", the model has failed the Turing test. And I strongly believe that the more LLMs are trained, the more difficulty they will have diversifying their output: every round of training just reinforces the bias. The math doesn't add up (I feel the urge to add: at the moment).
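The "easily mappable" intuition above can be sketched in a few lines: if LLM prose leans on telltale keywords, even a crude frequency heuristic can flag it. The word list and threshold here are my own illustrative assumptions, not a real or reliable detector:

```python
# Hypothetical telltale words often associated with LLM prose (an assumption,
# not an empirically validated list).
TELLTALE = {"delve", "tapestry", "furthermore", "moreover", "landscape", "crucial"}

def looks_generated(text, threshold=0.02):
    # Flag text when the share of telltale words exceeds the threshold.
    words = text.lower().split()
    if not words:
        return False
    hits = sum(1 for w in words if w.strip(".,;:!?") in TELLTALE)
    return hits / len(words) >= threshold

print(looks_generated("Furthermore, we delve into the crucial landscape of ideas."))  # True
print(looks_generated("I went to the shop and bought some bread."))  # False
```

A serious detector would train a classifier on labelled corpora rather than hand-pick words, but the point stands: a stable, repetitive style is exactly the kind of signal a simple model can learn.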
