When loading a GGUF model, special token ids aren't validated to be in range. This can lead to index errors later on when they're used to look up tokens, etc.
Here's an example: https://huggingface.co/Undi95/Mistral-11B-OmniMix/blob/main/config.json
That config sets a special token id at or beyond the model's vocab size of 32,000, so the id is out of bounds. Currently, trying to load that model just crashes.
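A minimal sketch of the kind of range check the loader could perform at load time, so the failure is a clear error instead of a crash later. The function and parameter names here (`validate_special_token_ids`, `vocab_size`) are illustrative, not the loader's actual API:

```python
def validate_special_token_ids(special_token_ids: dict, vocab_size: int) -> None:
    """Reject out-of-range special token ids up front with a clear error.

    Hypothetical helper: names and structure are assumptions, not the real API.
    """
    for name, token_id in special_token_ids.items():
        # None means the special token is unset, which is fine.
        if token_id is not None and not (0 <= token_id < vocab_size):
            raise ValueError(
                f"special token {name}={token_id} is out of range "
                f"for vocab size {vocab_size}"
            )

# In range: ids 0..31999 are valid for a vocab size of 32,000.
validate_special_token_ids({"bos_token_id": 1, "eos_token_id": 2}, 32000)

# Out of range, like the linked config: id 32000 with vocab size 32,000.
try:
    validate_special_token_ids({"eos_token_id": 32000}, 32000)
except ValueError as e:
    print(e)
```

The check runs once during model loading, so the cost is negligible compared to silently carrying a bad id into tokenization or detokenization.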