Milvus
Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
This notebook shows how to use functionality related to the Milvus vector database.
Setup
You'll need to install langchain-milvus with pip install -qU langchain-milvus to use this integration.
%pip install -qU langchain_milvus
Note: you may need to restart the kernel to use updated packages.
Credentials
No credentials are needed to use the Milvus vector store.
Initialization
pip install -qU langchain-openai
import getpass
import os
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
Milvus Lite
The easiest way to prototype is to use Milvus Lite, where everything is stored in a local vector database file. Only the FLAT index type can be used with Milvus Lite.
from langchain_milvus import Milvus
URI = "./milvus_example.db"
vector_store = Milvus(
    embedding_function=embeddings,
    connection_args={"uri": URI},
    index_params={"index_type": "FLAT", "metric_type": "L2"},
)
Milvus Standalone
If you have a large amount of data (e.g., more than a million vectors), we recommend setting up a more performant Milvus server on Docker or Kubernetes.
Milvus Standalone also supports a wider range of index types, which can improve retrieval performance.
To launch the Docker container, run:
!curl -sfL https://raw.githubusercontent.com/milvus-io/milvus/master/scripts/standalone_embed.sh -o standalone_embed.sh
!bash standalone_embed.sh start
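When you are done with the server, you can stop or remove it with the same script (per the subcommands documented by the Milvus standalone script; delete also removes the stored data):
!bash standalone_embed.sh stop
!bash standalone_embed.sh delete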
Here we create a Milvus database:
from pymilvus import Collection, MilvusException, connections, db, utility
conn = connections.connect(host="127.0.0.1", port=19530)
# Check if the database exists
db_name = "milvus_demo"
try:
    existing_databases = db.list_database()
    if db_name in existing_databases:
        print(f"Database '{db_name}' already exists.")
        # Use the database context
        db.using_database(db_name)
        # Drop all collections in the database
        collections = utility.list_collections()
        for collection_name in collections:
            collection = Collection(name=collection_name)
            collection.drop()
            print(f"Collection '{collection_name}' has been dropped.")
        db.drop_database(db_name)
        print(f"Database '{db_name}' has been deleted.")
    else:
        print(f"Database '{db_name}' does not exist.")
    database = db.create_database(db_name)
    print(f"Database '{db_name}' created successfully.")
except MilvusException as e:
    print(f"An error occurred: {e}")
Database 'milvus_demo' does not exist.
Database 'milvus_demo' created successfully.
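As a quick sanity check, you can list the databases on the server over the same connection:
print(db.list_database())  # expected to include 'milvus_demo'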
Note the change in the URI below. Once the instance is initialized, navigate to http://127.0.0.1:9091/webui to view the local web UI.
Here is an example of how to create your vector store instance with the Milvus database service:
from langchain_milvus import Milvus

URI = "http://localhost:19530"

vectorstore = Milvus(
    embedding_function=embeddings,
    connection_args={"uri": URI, "token": "root:Milvus", "db_name": "milvus_demo"},
    index_params={"index_type": "FLAT", "metric_type": "L2"},
    consistency_level="Strong",
    drop_old=False,  # set to True to drop the existing collection with this name, if any
)
If you want to use Zilliz Cloud, the fully managed cloud service for Milvus, adjust the uri and token, which correspond to the Public Endpoint and API key in Zilliz Cloud.
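For example, a minimal sketch (the endpoint and token below are placeholders for your cluster's Public Endpoint and API key):
from langchain_milvus import Milvus

ZILLIZ_CLOUD_URI = "https://<your-cluster-endpoint>.zillizcloud.com"  # placeholder
ZILLIZ_CLOUD_API_KEY = "<your-api-key>"  # placeholder

vectorstore = Milvus(
    embedding_function=embeddings,
    connection_args={"uri": ZILLIZ_CLOUD_URI, "token": ZILLIZ_CLOUD_API_KEY},
)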
Compartmentalize the data with Milvus Collections
You can store unrelated documents in different collections within the same Milvus instance.
Here's how you can create a new collection:
from langchain_core.documents import Document
vector_store_saved = Milvus.from_documents(
    [Document(page_content="foo!")],
    embeddings,
    collection_name="langchain_example",
    connection_args={"uri": URI},
)
And here is how you retrieve that stored collection:
vector_store_loaded = Milvus(
    embeddings,
    connection_args={"uri": URI},
    collection_name="langchain_example",
)
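Both instances point to the same Milvus collection, so querying the loaded store should return the document saved above (a quick sanity check):
results = vector_store_loaded.similarity_search("foo", k=1)
print(results[0].page_content)  # expected output: foo!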
Manage vector store
Once you have created your vector store, you can interact with it by adding and deleting items.
Add items to vector store
You can add items to the vector store by using the add_documents function.
from uuid import uuid4
from langchain_core.documents import Document
document_1 = Document(
    page_content="I had chocolate chip pancakes and scrambled eggs for breakfast this morning.",
    metadata={"source": "tweet"},
)

document_2 = Document(
    page_content="The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.",
    metadata={"source": "news"},
)

document_3 = Document(
    page_content="Building an exciting new project with LangChain - come check it out!",
    metadata={"source": "tweet"},
)

document_4 = Document(
    page_content="Robbers broke into the city bank and stole $1 million in cash.",
    metadata={"source": "news"},
)

document_5 = Document(
    page_content="Wow! That was an amazing movie. I can't wait to see it again.",
    metadata={"source": "tweet"},
)

document_6 = Document(
    page_content="Is the new iPhone worth the price? Read this review to find out.",
    metadata={"source": "website"},
)

document_7 = Document(
    page_content="The top 10 soccer players in the world right now.",
    metadata={"source": "website"},
)

document_8 = Document(
    page_content="LangGraph is the best framework for building stateful, agentic applications!",
    metadata={"source": "tweet"},
)

document_9 = Document(
    page_content="The stock market is down 500 points today due to fears of a recession.",
    metadata={"source": "news"},
)

document_10 = Document(
    page_content="I have a bad feeling I am going to get deleted :(",
    metadata={"source": "tweet"},
)

documents = [
    document_1,
    document_2,
    document_3,
    document_4,
    document_5,
    document_6,
    document_7,
    document_8,
    document_9,
    document_10,
]
uuids = [str(uuid4()) for _ in range(len(documents))]
vector_store.add_documents(documents=documents, ids=uuids)
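If you have raw strings rather than Document objects, you can use the add_texts method instead, passing metadata and ids alongside the texts (a minimal sketch):
texts = ["Milvus also accepts raw text inserts.", "Each text gets its own metadata dict."]
vector_store.add_texts(
    texts=texts,
    metadatas=[{"source": "note"}, {"source": "note"}],
    ids=[str(uuid4()) for _ in texts],
)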
Delete items from vector store
vector_store.delete(ids=[uuids[-1]])
(insert count: 0, delete count: 1, upsert count: 0, timestamp: 0, success count: 0, err count: 0, cost: 0)
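Besides deleting by ids, delete also accepts a Milvus boolean expression, which removes every matching entity (a sketch using the expr parameter):
# Delete all documents whose metadata field `source` equals "website"
vector_store.delete(expr='source == "website"')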
Query vector store
Once your vector store has been created and the relevant documents have been added, you will most likely wish to query it while running your chain or agent.
Query directly
Similarity search
Performing a simple similarity search with filtering on metadata can be done as follows:
results = vector_store.similarity_search(
    "LangChain provides abstractions to make working with LLMs easy",
    k=2,
    expr='source == "tweet"',
)
for res in results:
    print(f"* {res.page_content} [{res.metadata}]")
* Building an exciting new project with LangChain - come check it out! [{'pk': '9905001c-a4a3-455e-ab94-72d0ed11b476', 'source': 'tweet'}]
* LangGraph is the best framework for building stateful, agentic applications! [{'pk': '1206d237-ee3a-484f-baf2-b5ac38eeb314', 'source': 'tweet'}]
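Milvus boolean expressions support more than equality checks. For example, you can filter over several metadata values with the in operator (a sketch using the same expr syntax):
results = vector_store.similarity_search(
    "recent financial news",
    k=2,
    expr='source in ["news", "website"]',
)
for res in results:
    print(f"* {res.page_content} [{res.metadata}]")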
Similarity search with score
You can also search with score:
results = vector_store.similarity_search_with_score(
    "Will it be hot tomorrow?", k=1, expr='source == "news"'
)
for res, score in results:
    print(f"* [SIM={score:3f}] {res.page_content} [{res.metadata}]")
* [SIM=21192.628906] bar [{'pk': '2', 'source': 'https://example.com'}]
For a full list of all the search options available when using the Milvus vector store, you can visit the API reference.
Query by turning into retriever
You can also transform the vector store into a retriever for easier usage in your chains.
retriever = vector_store.as_retriever(search_type="mmr", search_kwargs={"k": 1})
retriever.invoke("Stealing from the bank is a crime", filter={"source": "news"})
[Document(metadata={'pk': 'eacc7256-d7fa-4036-b1f7-83d7a4bee0c5', 'source': 'news'}, page_content='Robbers broke into the city bank and stole $1 million in cash.')]
Hybrid Search
The most common hybrid search scenario is the dense + sparse hybrid search, where candidates are retrieved using both semantic vector similarity and precise keyword matching. Results from these methods are merged, reranked, and passed to an LLM to generate the final answer. This approach balances precision and semantic understanding, making it highly effective for diverse query scenarios.
Full-text search
Since Milvus 2.5, full-text search is natively supported through the Sparse-BM25 approach, by representing the BM25 algorithm as sparse vectors. Milvus accepts raw text as input and automatically converts it into sparse vectors stored in a specified field, eliminating the need for manual sparse embedding generation.
For full-text search, the Milvus VectorStore accepts a builtin_function parameter. Through this parameter, you can pass in an instance of BM25BuiltInFunction. This is different from semantic search, which usually passes dense embeddings to the VectorStore.
Here is a simple example of hybrid search in Milvus, using OpenAI dense embeddings for semantic search and BM25 for full-text search:
from langchain_milvus import Milvus, BM25BuiltInFunction
from langchain_openai import OpenAIEmbeddings
vectorstore = Milvus.from_documents(
    documents=documents,
    embedding=OpenAIEmbeddings(),
    builtin_function=BM25BuiltInFunction(),
    # `dense` is for OpenAI embeddings, `sparse` is the output field of the BM25 function
    vector_field=["dense", "sparse"],
    connection_args={
        "uri": URI,
    },
    consistency_level="Strong",
    drop_old=True,
)
- When you use BM25BuiltInFunction, please note that full-text search is available in Milvus Standalone and Milvus Distributed, but not in Milvus Lite, although it is on the roadmap for future inclusion. It will also be available in Zilliz Cloud (fully managed Milvus) soon. Please reach out to support@zilliz.com for more information.
In the code above, we define an instance of BM25BuiltInFunction and pass it to the Milvus object. BM25BuiltInFunction is a lightweight wrapper class for Function in Milvus. We can use it with OpenAIEmbeddings to initialize a dense + sparse hybrid search Milvus vector store instance.
BM25BuiltInFunction does not require the client to pass a corpus or perform training; everything is processed automatically on the Milvus server's end, so users do not need to manage any vocabulary or corpus. In addition, users can customize the analyzer to implement custom text processing for BM25.
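For example, here is a sketch of customizing the analyzer, assuming BM25BuiltInFunction accepts an analyzer_params argument as described in the Milvus analyzer documentation:
# Hypothetical analyzer configuration: a standard tokenizer plus a lowercase filter
analyzer_params = {
    "tokenizer": "standard",
    "filter": ["lowercase"],
}
bm25_function = BM25BuiltInFunction(analyzer_params=analyzer_params)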
Rerank the candidates
After the first stage of retrieval, we need to rerank the candidates to get better results. You can refer to the Reranking documentation for more information.
Here is an example of weighted reranking:
query = "What are the novels Lila has written and what are their contents?"
vectorstore.similarity_search(
    query, k=1, ranker_type="weighted", ranker_params={"weights": [0.6, 0.4]}
)
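Milvus also supports Reciprocal Rank Fusion (RRF) reranking; a sketch, assuming the same ranker_type/ranker_params interface:
vectorstore.similarity_search(
    query, k=1, ranker_type="rrf", ranker_params={"k": 100}
)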
For more information about full-text search and hybrid search, please refer to the Using Full-Text Search with LangChain and Milvus and Hybrid Retrieval with LangChain and Milvus guides.
Usage for retrieval-augmented generation
For guides on how to use this vector store for retrieval-augmented generation (RAG), see the RAG tutorials and retrieval how-to guides in the LangChain documentation.
Per-User Retrieval
When building a retrieval app, you often have to build it with multiple users in mind. This means that you may be storing data not just for one user, but for many different users, and they should not be able to see each other’s data.
Milvus recommends using partition_key to implement multi-tenancy. Here is an example:
The partition key feature is not available in Milvus Lite; if you want to use it, you need to start a Milvus server, as mentioned above.
from langchain_core.documents import Document
docs = [
    Document(page_content="i worked at kensho", metadata={"namespace": "harrison"}),
    Document(page_content="i worked at facebook", metadata={"namespace": "ankush"}),
]
vectorstore = Milvus.from_documents(
    docs,
    embeddings,
    connection_args={"uri": URI},
    drop_old=True,
    partition_key_field="namespace",  # Use the "namespace" field as the partition key
)
To conduct a search using the partition key, you should include either of the following in the boolean expression of the search request:
search_kwargs={"expr": '<partition_key> == "xxxx"'}
search_kwargs={"expr": '<partition_key> == in ["xxx", "xxx"]'}
Replace <partition_key> with the name of the field that is designated as the partition key.
Milvus routes the search to the partition that corresponds to the specified partition key, filters entities by that key, and searches among the filtered entities.
# This will only get documents for Ankush
vectorstore.as_retriever(search_kwargs={"expr": 'namespace == "ankush"'}).invoke(
    "where did i work?"
)
[Document(page_content='i worked at facebook', metadata={'namespace': 'ankush'})]
# This will only get documents for Harrison
vectorstore.as_retriever(search_kwargs={"expr": 'namespace == "harrison"'}).invoke(
    "where did i work?"
)
[Document(page_content='i worked at kensho', metadata={'namespace': 'harrison'})]
API reference
For detailed documentation of all Milvus vector store features and configurations head to the API reference: https://python.langchain.com/api_reference/milvus/vectorstores/langchain_milvus.vectorstores.milvus.Milvus.html
Related
- Vector store conceptual guide
- Vector store how-to guides