LLMs are Humans

Last updated: May 23, 2025

Most engineers overcomplicate AI application development. I get it. It's fun to design complex systems to achieve an outcome. But LLMs have changed drastically in the last two years. For the vast majority of business use cases, you don't need some complex GraphRAG algo to design an AI system.

Treat the LLM like a human. Instead of jumping to the newest VectorDB to design your retrieval system, ask how a human would accomplish the same task.

I'll give an example. Say you're an analyst who has to comb through 100 financial documents to understand the EBITDA of a business you're evaluating. You'd read through each document, pull the relevant information into a consolidated document, then analyze the data.

Today, most AI engineers would immediately jump to VectorDBs, chunk the data, and rely on embeddings to find ALL of the relevant information. This method practically guarantees an incomplete result, because embeddings are lossy1.

Now assume the LLM is a human. Have thousands of analysts (lightweight LLMs) read through every document in the dataset in parallel. Each analyst (an LLM) pulls the relevant information into its own set of notes. Finally, the partner (a heavy LLM) generates the final answer from all of the gathered context.
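
Concretely, this is just map-reduce over your documents. Here's a minimal sketch assuming the OpenAI Python SDK; the prompt, model names, and worker count are all placeholders to swap for whatever you actually use:

```python
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def call_llm(model: str, prompt: str) -> str:
    """One LLM call; swap this out for whatever client you use."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

EXTRACT_PROMPT = (
    "You are an analyst. Extract every fact relevant to the question below "
    "from this document, quoting figures exactly.\n\n"
    "Question: {question}\n\nDocument:\n{document}"
)

def analyze(question: str, documents: list[str]) -> str:
    # Map: the "analysts" (a lightweight model) read every document in
    # parallel, each writing up the relevant information as notes.
    with ThreadPoolExecutor(max_workers=32) as pool:
        notes = list(pool.map(
            lambda doc: call_llm(
                "gpt-4o-mini",  # placeholder lightweight model
                EXTRACT_PROMPT.format(question=question, document=doc),
            ),
            documents,
        ))

    # Reduce: the "partner" (a heavy model) reads the consolidated notes
    # and generates the final answer.
    consolidated = "\n\n".join(notes)
    return call_llm(
        "gpt-4o",  # placeholder heavy model
        f"Using only the notes below, answer: {question}\n\nNotes:\n{consolidated}",
    )
```

If the consolidated notes outgrow the heavy model's context window, the same pattern nests: reduce the notes in batches first, then hand the batch summaries to the partner.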

Sure, the latency and cost are higher, but isn't that worth it for those valuable tasks? Next time you catch yourself overcomplicating an AI application, imagine how you'd solve the same problem if you were given access to infinite humans.

As LLMs get better, the premise only gets truer: LLMs are humans.

1 Embeddings are a numerical representation of the underlying data. We can use this representation to find similar information; however, embeddings don't capture intent. Imagine asking "why is my code bad" and trying to use the numerical representation of the underlying code to find the "bad code".
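
To make "lossy" concrete, here's a toy sketch with made-up 3-d vectors (real embeddings have hundreds of dimensions, but the geometry is the same):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity search ranks chunks by geometric closeness of their vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical vectors: the query asks about quality; the chunk just *is* code.
query = np.array([0.9, 0.1, 0.1])  # "why is my code bad"
chunk = np.array([0.1, 0.9, 0.2])  # the offending code itself
print(cosine_similarity(query, chunk))  # ~0.24: ranked low, likely missed
```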