Embeddings convert unstructured data into fixed-size vectors (typically 384–1536 dimensions) using a neural model trained on a similarity objective. Two pieces of text with similar meaning end up at nearby points; unrelated text lands far apart. The vector space is the medium for semantic search, recommendation, deduplication, and clustering.
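"Nearby" and "far apart" are usually measured with cosine similarity. A minimal sketch with toy 3-dimensional vectors (real models emit 384–1536 dimensions; the example vectors here are made up for illustration):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 = same direction,
    # 0.0 = orthogonal, -1.0 = opposite.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": similar products point in similar directions.
running_shoes  = [0.90, 0.10, 0.20]
trail_sneakers = [0.85, 0.15, 0.25]
coffee_maker   = [0.10, 0.90, 0.40]

print(cosine_similarity(running_shoes, trail_sneakers))  # high, near 1.0
print(cosine_similarity(running_shoes, coffee_maker))    # much lower
```

Most vector databases compute this distance for you; the point is that search reduces to "find the vectors with the highest cosine similarity to the query vector."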
For ecommerce, you typically embed: product titles + descriptions + attributes (one vector per item), category names, search queries, and sometimes images. Using the same model for queries and products is the rule: vectors from different models live in different spaces, so distances between them are meaningless.
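A sketch of the "one vector per item" step: flattening title, description, and attributes into a single string before embedding. The field names and the separator are assumptions, not a standard; the commented-out model call stands in for whichever embedding model you use on both sides.

```python
def build_product_text(product: dict) -> str:
    # Flatten title, description, and attributes into the one string
    # that gets embedded (one vector per item).
    attrs = " ".join(f"{k}: {v}" for k, v in product.get("attributes", {}).items())
    parts = [product.get("title", ""), product.get("description", ""), attrs]
    return " | ".join(p for p in parts if p)

product = {
    "title": "Trail Running Shoes",
    "description": "Lightweight shoes with aggressive grip.",
    "attributes": {"color": "blue", "size": "42"},
}
doc_text = build_product_text(product)
print(doc_text)

# The critical invariant: the SAME model encodes both sides.
# product_vec = model.encode(doc_text)                    # hypothetical model
# query_vec   = model.encode("lightweight trail shoes")   # same model object
```

Keeping the flattening logic in one function also matters: if indexing and query-time code build product text differently, you quietly get a weaker index.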
Operational headache: when you upgrade your embedding model, the entire catalog must be re-embedded before any queries run through the new model, because new-model query vectors can't be meaningfully compared against old-model product vectors. Many teams version their indexes and keep both running during the cutover.
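A minimal sketch of that versioned cutover, with the model version baked into the index name so old and new indexes coexist. The class and function names are hypothetical stand-ins for your vector store's API:

```python
MODEL_VERSION = "v2"
INDEX_NAME = f"products-{MODEL_VERSION}"  # e.g. "products-v2"

class InMemoryIndex:
    # Stand-in for a real vector index (Pinecone, pgvector, FAISS, ...).
    def __init__(self):
        self.vectors = {}

    def upsert(self, item_id, vector):
        self.vectors[item_id] = vector

def reindex_catalog(products, embed_fn, index):
    # Backfill: re-embed every item with the new model BEFORE routing
    # any query traffic to the new index.
    for product in products:
        index.upsert(product["id"], embed_fn(product["text"]))

old_index = InMemoryIndex()  # built with the v1 model, still serving traffic
new_index = InMemoryIndex()  # being backfilled with the v2 model

reindex_catalog(
    [{"id": "sku-1", "text": "trail running shoes"}],
    embed_fn=lambda text: [0.1, 0.2, 0.3],  # stand-in for the real model
    index=new_index,
)
# Only after the backfill completes do you flip query traffic to new_index,
# then retire old_index.
```

The payoff of keeping both indexes live is a safe rollback path: if the new model regresses on real queries, you flip traffic back without a second re-embedding run.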