This is a demo of a port of the MiniLM model to dfdx that can run in WASM in the browser. MiniLM is a good model for semantic retrieval tasks, and because it is quite lightweight it is well suited to client-side use.
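
For a sense of what the model does, here is a small semantic-retrieval example using the Python `sentence-transformers` package. This is illustrative only: it uses the upstream checkpoint rather than the dfdx/WASM port, and the model name assumes the common `all-MiniLM-L6-v2` variant.

```python
# Illustrative semantic retrieval with MiniLM via sentence-transformers.
# Not the dfdx port; the checkpoint name is an assumption (the common
# all-MiniLM-L6-v2 variant).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
docs = ["The cat sat on the mat.", "Rust compiles to WebAssembly."]
query = "How do I run Rust in the browser?"

# With normalize_embeddings=True the vectors are unit length, so a dot
# product is cosine similarity.
doc_emb = model.encode(docs, normalize_embeddings=True)
query_emb = model.encode(query, normalize_embeddings=True)
scores = doc_emb @ query_emb
print(docs[int(scores.argmax())])  # -> "Rust compiles to WebAssembly."
```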

In order to run it locally you'll need to download some model files from the Hugging Face Hub. This can be done from the `python` directory by running:

```sh
poetry install
poetry run python minilm/load_model.py
```

The `poetry install` step installs the dependencies for this; the script then creates `model.safetensors` in the top-level `site` directory.
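
For reference, here is a minimal sketch of what a script like `minilm/load_model.py` would need to do, assuming it pulls a MiniLM checkpoint from the Hub with `transformers` and re-saves the weights in safetensors format for the Rust side to load (the actual script may differ):

```python
# Sketch only: fetch MiniLM weights and write them out as safetensors.
# The checkpoint name and output path are assumptions based on the README.
from pathlib import Path

from safetensors.torch import save_file
from transformers import AutoModel

model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
# safetensors requires contiguous tensors, so copy defensively.
state = {k: v.contiguous() for k, v in model.state_dict().items()}
out = Path("site") / "model.safetensors"
save_file(state, str(out))
print(f"wrote {out}")
```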
The install step will create a virtual environment, whose location can be determined by running:

```sh
poetry run which python
```

The location of this virtual environment is needed for the tests below.
Now to start the site up, go to the `site` directory and run:

```sh
trunk serve --open --release
```

To run the tests you will need to run:

```sh
PYO3_PYTHON=$VIRTUAL_ENV_PATH/bin/python PYTHONPATH=$VIRTUAL_ENV_PATH/lib/python3.11/site-packages cargo test -F pyo3 embeddings
```

where `VIRTUAL_ENV_PATH` is the path of the Poetry-generated virtual environment (the output of `poetry run which python` with the trailing `/bin/python` removed).