I recently learned that semantic search embeddings mostly represent topics and concepts, but they don’t handle negation or emotion very well.
For example, if you search for “paintings of winter landscapes but without sun and trees,” you’ll still get results full of sun and trees. That’s because embeddings capture the presence of concepts like “tree” or “landscape,” but not logical relationships like “without” or “not.”
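Here’s a rough sketch of what I mean, assuming sentence-transformers is installed (`pip install sentence-transformers`); the model name and phrases are just illustrative stand-ins:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "paintings of winter landscapes but without sun and trees"
docs = [
    "a winter landscape painting of snowy trees under a bright sun",
    "a bare, empty winter field under an overcast sky",
]

query_emb = model.encode(query, convert_to_tensor=True)
doc_embs = model.encode(docs, convert_to_tensor=True)

# The sunny-trees painting tends to score as high as (or higher than)
# the one that actually matches the exclusion, because the embedding
# registers "sun" and "trees" as relevant topics, not as things the
# query ruled out.
for doc, score in zip(docs, util.cos_sim(query_emb, doc_embs)[0]):
    print(f"{score.item():.3f}  {doc}")
```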
Similarly, embeddings aren’t great at capturing how something feels. They can tell that “sad poem” and “happy poem” are different, but mainly because the surface words differ, not because they model emotional tone.
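Same idea for tone (again just a sketch, with phrases I made up):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sad = model.encode("a heartbreaking poem about grief", convert_to_tensor=True)
happy = model.encode("a joyful poem about falling in love", convert_to_tensor=True)
manual = model.encode("installation manual for a dishwasher", convert_to_tensor=True)

# The two poems typically land far closer to each other than either does
# to the manual: the shared topic ("poem") dominates, and the opposite
# moods barely register.
print(util.cos_sim(sad, happy).item())   # relatively high
print(util.cos_sim(sad, manual).item())  # much lower
```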
This happens because most embedding models (like OpenAI’s or sentence-transformers) are trained to group things by semantic similarity, not logical meaning or sentiment. Negation, polarity, and affect aren’t explicitly represented in the vector space.
Might be common knowledge to some, but it was a cool TIL moment for me: embeddings are great at capturing what something is about, but not how it feels or what it excludes.