Describe the issue

I am encountering an error while trying to instantiate the EmbeddingModel using the ONNX model intfloat/multilingual-e5-large. The error message is as follows:

Failed to instantiate [org.springframework.ai.embedding.EmbeddingModel]: Factory method 'embeddingClient' threw exception with message: data did not match any variant of untagged enum PreTokenizerWrapper at line 69 column 3
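For context, the "untagged enum PreTokenizerWrapper" message comes from deserializing tokenizer.json (the "line 69 column 3" refers to a position inside that file, not your Java code). One known cause is a version mismatch: a tokenizer.json written by a newer HuggingFace tokenizers release can contain fields the older native tokenizer bundled with Spring AI cannot parse. A small sketch (the helper name and path are illustrative, not part of any library) for checking which pre_tokenizer variant your export declares:

```python
import json

def describe_pre_tokenizer(tokenizer_path):
    """Report the pre_tokenizer variant(s) declared in a tokenizer.json."""
    with open(tokenizer_path, encoding="utf-8") as f:
        config = json.load(f)
    pre = config.get("pre_tokenizer")
    if pre is None:
        return None
    # A pre_tokenizer is either one object with a "type" field or a
    # "Sequence" wrapping a list of them.
    if pre.get("type") == "Sequence":
        return [p.get("type") for p in pre.get("pretokenizers", [])]
    return pre.get("type")

# Example (path is a placeholder):
# print(describe_pre_tokenizer("onnx-output-folder/tokenizer.json"))
```

If the variant (or one of its fields) is something the deserializer does not recognize, that points to the tokenizers version used during export rather than the Spring configuration.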

I am unsure why this error is occurring. Below are the relevant details and configuration.

Configuration

Model: intfloat/multilingual-e5-large
Format: ONNX
Spring Configuration: Using TransformersEmbeddingModel from Spring AI

To reproduce

Export the intfloat/multilingual-e5-large model to ONNX format.
Configure Spring AI to use the exported ONNX model and tokenizer.
Attempt to instantiate the EmbeddingModel.

Urgency

High

Could someone provide guidance on resolving this issue or point out what might be wrong with the configuration?

Platform

Linux

OS Version

Ubuntu 20.04

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

pip install onnxruntime

ONNX Runtime API

Python

Architecture

X86

Execution Provider

Default CPU

Execution Provider Library Version

N/A

Comment From: tzolov

@zelhaddioui have you followed the https://docs.spring.io/spring-ai/reference/api/embeddings/onnx.html guidelines to build and use ONNX models?

Comment From: zelhaddioui

Yes, it doesn't work with this model, but it works with sentence-transformers/distiluse-base-multilingual-cased-v2

Comment From: tzolov

Following the instructions there, you first need to export the ONNX model:

python3 -m venv venv
source ./venv/bin/activate
(venv) pip install --upgrade pip
(venv) pip install optimum onnx onnxruntime sentence-transformers
(venv) optimum-cli export onnx --model intfloat/multilingual-e5-large onnx-output-folder
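As a quick sanity check after the export, you can verify the folder contains the artifacts Spring AI needs and whether the >2GB external-data case applies (a hedged sketch; check_export is an illustrative helper, not part of Spring AI or optimum):

```python
from pathlib import Path

# Files Spring AI always needs from the export folder.
REQUIRED = ["model.onnx", "tokenizer.json"]

def check_export(folder):
    """Return (missing required files, whether a model.onnx_data sidecar exists)."""
    folder = Path(folder)
    missing = [name for name in REQUIRED if not (folder / name).exists()]
    # Models over 2GB are split; the weights land in model.onnx_data.
    needs_external_data = (folder / "model.onnx_data").exists()
    return missing, needs_external_data

# Example (path is a placeholder):
# print(check_export("onnx-output-folder"))
```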

The above will produce a folder with contents like this:

drwxr-xr-x@  9 christiantzolov  staff   288B Jul 19 09:44 .
drwxr-xr-x  13 christiantzolov  staff   416B Jul 19 09:44 ..
-rw-r--r--@  1 christiantzolov  staff   754B Jul 19 09:44 config.json
-rw-r--r--@  1 christiantzolov  staff   670K Jul 19 09:44 model.onnx
-rw-r--r--@  1 christiantzolov  staff   2.1G Jul 19 09:44 model.onnx_data
-rw-r--r--@  1 christiantzolov  staff   4.8M Jul 19 09:44 sentencepiece.bpe.model
-rw-r--r--@  1 christiantzolov  staff   964B Jul 19 09:44 special_tokens_map.json
-rw-r--r--@  1 christiantzolov  staff    16M Jul 19 09:44 tokenizer.json
-rw-r--r--@  1 christiantzolov  staff   1.1K Jul 19 09:44 tokenizer_config.json

You are interested in model.onnx and tokenizer.json, and because the model is larger than 2GB you will also need the additional model.onnx_data file.

As the documentation explains:

A model that is larger than 2GB is serialized in two files: model.onnx and model.onnx_data. The model.onnx_data is called External Data and is expected to be in the same directory where you run your project. Currently the only workaround is to copy the large model.onnx_data into the folder you run your Boot application from, or to create a soft link there.
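That copy-or-soft-link workaround can be sketched as follows (link_external_data is an illustrative helper, not a Spring AI API; paths are placeholders):

```python
import os
from pathlib import Path

def link_external_data(export_folder, run_dir="."):
    """Make model.onnx_data visible in the run directory via a soft link."""
    src = Path(export_folder) / "model.onnx_data"
    dst = Path(run_dir) / "model.onnx_data"
    # Only create the link if nothing is there yet (a real copy also works).
    if not dst.exists():
        os.symlink(src.resolve(), dst)
    return dst

# Example (path is a placeholder):
# link_external_data("onnx-output-folder")
```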

Configure your application.properties:

spring.ai.embedding.transformer.onnx.model-uri=file:/<your local path to>/onnx-output-folder/model.onnx
spring.ai.embedding.transformer.tokenizer.uri=file:/<your local path to>/onnx-output-folder/tokenizer.json

Additionally, copy the model.onnx_data file (or create a soft link) to the location where you are going to run your project:

ln -s <your local path to>/onnx-output-folder/model.onnx_data ./model.onnx_data

Then build and run:

./mvnw clean install -DskipTests
java -jar ./target/onnx-demo-0.0.1-SNAPSHOT.jar

Following those instructions, I've successfully used the intfloat/multilingual-e5-large model.