Previously, I wrote a post about how a lot of companies are looking to add "AI" to their products, when what they are really looking for is backend developers to integrate GPT-like features.
But let's talk a little bit more about LLMs (large language models) and GPT (generative pre-trained transformer) - yes, we are talking about ChatGPT.
GPT is like a very smart language robot. Just as we learn a language by reading and listening, GPT learns by analyzing billions of sentences from the internet. It's not just memorizing words; it's learning how they connect to each other and what they mean in different contexts.
Imagine you're at a party and you overhear someone say "It's raining cats and dogs." You understand they're talking about heavy rain, not pets falling from the sky. This is because you've learned the meanings and relationships between words and phrases. GPT does something similar, but on a much larger scale, which allows it to generate human-like text based on what it's learned.
Let's boil this down to one sentence: GPT understands the meaning of words and relationships between them.
Semantics is the branch of linguistics and logic concerned with meaning.
So, the meaning of things is what we are talking about here.
As mentioned earlier, GPT builds a model of semantic relationships. But think of most of the search engines you've ever interacted with... Most of them are pretty dumb.
Generally, search engines range from simple exact-keyword matching to fancier full-text search with stemming, synonyms, and relevance ranking. But at best they are matching the things you type in, not really understanding what you want. A search engine does not know what you mean.
But now, that's no longer necessarily true.
There are various GPT models, but what we want to talk about now is a special type of output these models can produce, called "vector embeddings". Generally you use GPT models to output text. But you can also use them to output vectors.
These outputs are sets of numbers that represent the input. Instead of running the model in its normal predictive mode to generate text, you are generating a numerical representation of the input in a high-dimensional space.
Hold up, what? Yeah, it's a bunch of numbers spread across thousands of dimensions. So if you have a model with 1536 dimensions, it's going to output 1536 numbers when you input any random sentence of text.
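Here's a minimal sketch of what that looks like in code, assuming OpenAI's Python client and their text-embedding-ada-002 embedding model (which happens to output 1536-dimensional vectors) - swap in whatever model you actually use:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the embedding model for a vector representation of a sentence.
response = client.embeddings.create(
    model="text-embedding-ada-002",
    input="It's raining cats and dogs.",
)

vector = response.data[0].embedding
print(len(vector))  # 1536 numbers
print(vector[:5])   # the first few, e.g. [-0.012, 0.031, ...]
```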
Think of a GPT model like someone at a party meeting new people. For each new person (or word), it makes a mental note (a list of numbers) to remember their characteristics. As it learns more about them, it updates these notes. These lists of numbers are what we call "vector embeddings." They help the model understand each word's meaning and its relationship with other words.
Simply put: you can compare these vectors to other vectors.
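The standard way to compare two embeddings is cosine similarity. Here's a quick sketch with NumPy; the tiny 3-dimensional vectors are made up purely for illustration:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: closer to 1.0 = more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional vectors standing in for real 1536-dimensional embeddings.
rain = np.array([0.9, 0.1, 0.0])
storm = np.array([0.8, 0.2, 0.1])
pets = np.array([0.0, 0.9, 0.4])

print(cosine_similarity(rain, storm))  # high score: related meanings
print(cosine_similarity(rain, pets))   # low score: unrelated meanings
```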
Why is that good?
Because if you vectorize all of your data and also vectorize a search query, you can search semantically. Meaning you can search not only the keywords, but the underlying meaning of the words.
Example: You have a database of 10,000 resumes. You are searching for a candidate who has some crypto experience. Except the resumes never say "crypto" - in fact, a lot of people in the industry think this word is vague and avoid using it.
But you don't know that. Still, you can search "crypto" and pull back resumes with concepts related to it - you might get people who mention specific blockchains, or technologies like ZK proofs or MEV programming. Nobody said the word "crypto", but you can relate all of these together semantically through vectors.
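Putting those pieces together, a toy version of that resume search might look like this - the resume snippets are made up, and embed() wraps the same embedding call shown earlier:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    """Turn a piece of text into a 1536-dimensional embedding vector."""
    response = client.embeddings.create(
        model="text-embedding-ada-002",
        input=text,
    )
    return np.array(response.data[0].embedding)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up resume snippets -- note that none of them say "crypto".
resumes = [
    "Built MEV searcher bots and on-chain arbitrage tooling for Ethereum.",
    "Implemented ZK proof circuits for a privacy-focused blockchain.",
    "Ten years of enterprise Java and Oracle database administration.",
]

resume_vectors = [embed(r) for r in resumes]
query_vector = embed("crypto experience")

# Rank resumes by how close their meaning is to the query.
ranked = sorted(
    zip(resumes, resume_vectors),
    key=lambda pair: cosine_similarity(query_vector, pair[1]),
    reverse=True,
)
for text, vec in ranked:
    print(f"{cosine_similarity(query_vector, vec):.3f}  {text}")
```

The blockchain-heavy snippets should land at the top even though the literal word "crypto" never appears in them.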
So obviously, if you convert all of your data to vectors, you need to put them somewhere. And furthermore, you need to be able to search those vectors efficiently.
How do you search 1536 numbers against a database of a million other sets of 1536 numbers?
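The naive answer is brute force: compare the query against every stored vector, which is just one big matrix operation. A sketch of that, with random stand-in data:

```python
import numpy as np

# 100,000 stored embeddings (random stand-ins; real systems hold millions).
database = np.random.randn(100_000, 1536).astype(np.float32)
database /= np.linalg.norm(database, axis=1, keepdims=True)

query = np.random.randn(1536).astype(np.float32)
query /= np.linalg.norm(query)

# Brute force: cosine similarity against every row, then take the top 5.
scores = database @ query
top_5 = np.argsort(scores)[-5:][::-1]
print(top_5, scores[top_5])
```

This works, but every query scans every row. Vector databases sidestep the full scan with approximate nearest neighbor indexes (structures like HNSW or IVF), which is how they stay fast at millions of vectors.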
You use a vector database. It's purpose-built for this task. Here are some of the biggest ones:

- Pinecone
- Weaviate
- Milvus
- Qdrant
- Chroma
- pgvector (a Postgres extension)
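As a rough sketch of what working with one looks like, here's the general shape of an upsert-and-query flow using Pinecone's Python client - the API key, environment, index name, and placeholder vectors are all illustrative, and exact API details vary by client version:

```python
import pinecone

# Illustrative setup -- the key, environment, and index name are placeholders.
pinecone.init(api_key="YOUR_API_KEY", environment="us-east1-gcp")

# One-time setup: size the index to match your embedding model.
pinecone.create_index("resumes", dimension=1536, metric="cosine")
index = pinecone.Index("resumes")

# Stand-ins for real embeddings produced by an embed() step like the one above.
resume_vector = [0.0] * 1536
query_vector = [0.0] * 1536

# Store each embedding under an ID so matches can be traced back to records.
index.upsert(vectors=[("resume-1", resume_vector)])

# The database returns the stored vectors nearest to the query.
results = index.query(vector=query_vector, top_k=5)
for match in results["matches"]:
    print(match["id"], match["score"])
```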
Aside from the LLMs themselves, these vector databases are some of the hottest technologies right now, because models like GPT-3 let you implement amazing semantic search features on top of them.
As you can imagine, basically anyone that has a search feature in their application will probably get some advantage from switching from simple keyword search to semantic search.
The oldest companies with the largest databases and most complex search engines will probably be the last to switch, because it takes real work to convert your data to embeddings in a cost-effective way. It's also not necessarily fast or cheap to run a large vector database.
But I think you can imagine already how much more useful semantic search will be for you with recruiting tools alone.