I am following the official documentation scripts for the semantic cache. Given the following code:

from redisvl.extensions.llmcache import SemanticCache
llmcache = SemanticCache(
    name="llmcache",                     # underlying search index name
    redis_url="redis://localhost:6379",  # redis connection url string
    distance_threshold=0.2               # semantic cache distance threshold
)
llmcache.store(
    prompt="What is the capital city of France?",
    response="Paris",
    metadata={"city": "Paris", "country": "france"}
)
question = "What is the capital city of France?"
llmcache.check(prompt=question)[0]['response']

I have these questions:

1 - In llmcache.store(), can I store a custom vector for the prompt directly rather than the prompt text? That custom vector could be generated by any embedding model, such as sentence-transformers (e.g. MiniLM-L12-v2), OpenAI embeddings, Hugging Face embeddings, etc. (See the sketch after this list for what I am trying to do.)
2 - Is there any embedding-length limitation when storing with llmcache.store()? Can I use a vector of any length?
3 - In llmcache.check(), can I pass the vector (from sentence-transformers, MiniLM-L12-v2, OpenAI embeddings, Hugging Face embeddings, etc.) directly for semantic matching, rather than the query text?
4 - Inside llmcache.check(), which distance measure is used to find semantic similarity? Is it cosine similarity or something else? Do we have the ability to configure it?
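
For context, here is a minimal sketch of what I would like to do. The vectorizer= constructor argument, the vector= keyword on store()/check(), and HFTextVectorizer with its embed() method are my reading of the redisvl docs and may not be exact, so please treat those names as assumptions rather than confirmed API:

from redisvl.extensions.llmcache import SemanticCache
from redisvl.utils.vectorize import HFTextVectorizer

# Assumption: the cache accepts a custom vectorizer, so the embedding model
# (and therefore the vector dimension) is under my control.
vectorizer = HFTextVectorizer(model="sentence-transformers/all-MiniLM-L12-v2")

llmcache = SemanticCache(
    name="llmcache",
    redis_url="redis://localhost:6379",
    distance_threshold=0.2,
    vectorizer=vectorizer,
)

prompt = "What is the capital city of France?"
vector = vectorizer.embed(prompt)  # assumed: returns the embedding as a list of floats

# Assumption: store() and check() accept a precomputed vector directly,
# bypassing the cache's internal embedding step.
llmcache.store(prompt=prompt, response="Paris", vector=vector)
hits = llmcache.check(vector=vector, num_results=1)
if hits:
    print(hits[0]["response"])

My understanding is that the vector length would have to match the dimension the underlying index was created with, and that the index uses cosine distance by default, but I would like confirmation on both points.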

Comment From: sundb

It seems you have come to the wrong place; please report this in the SemanticCache project.