Releases: run-llama/llama_index
v0.13.3.post1
Release Notes
v0.13.3
Release Notes
[2025-08-22]
llama-index-core [0.13.3]
- fix: add timeouts on image `.get()` requests (#19723)
- fix: fix StreamingAgentChatResponse message-loss bug (#19674)
- fix: Fixing crashing when retrieving from empty vector store index (#19706)
- fix: Calling ContextChatEngine with a QueryBundle (instead of a string) (#19714)
- fix: Fix faithfulness evaluate crash when no images provided (#19686)
llama-index-embeddings-heroku [0.1.0]
- feat: Adds support for HerokuEmbeddings (#19685)
llama-index-embeddings-ollama [0.8.2]
- feat: enhance OllamaEmbedding with instruction support (#19721)
llama-index-llms-anthropic [0.8.5]
- fix: Fix prompt caching with CachePoint (#19711)
llama-index-llms-openai [0.5.4]
- feat: add gpt-5-chat-latest model support (#19687)
llama-index-llms-sagemaker-endpoint [0.4.1]
- fix: fix the constructor so `region_name` is not read from kwargs before it is popped, and fix the assignment to super (#19705)
llama-index-llms-upstage [0.6.2]
- chore: remove deprecated model (solar-pro) (#19704)
llama-index-readers-confluence [0.4.1]
- fix: Support concurrent use of multiple ConfluenceReader instances (#19698)
llama-index-vector-stores-chroma [0.5.1]
- fix: fix `get_nodes()` with empty node ids (#19711)
llama-index-vector-stores-qdrant [0.8.1]
- feat: support qdrant sharding (#19652)
llama-index-vector-stores-tencentvectordb [0.4.1]
- fix: Resolve AttributeError in CollectionParams.filter_fields access (#19695)
v0.13.2.post1
Release Notes
- docs fixes
v0.13.2
Release Notes
[2025-08-14]
llama-index-core [0.13.2]
- feat: allow streaming to be disabled in agents (#19668)
- fix: respect the value of NLTK_DATA env var if present (#19664)
- fix: Order preservation and batch fetching of non-cached embeddings in `a/get_text_embedding_batch()` (#19536)
llama-index-embeddings-ollama [0.8.1]
llama-index-graph-rag-cognee [0.3.0]
- fix: Update and fix cognee integration (#19650)
llama-index-llms-anthropic [0.8.4]
- fix: Error in Anthropic extended thinking with tool use (#19642)
- chore: bump the context window for Claude 4 Sonnet to 1 million tokens (#19649)
llama-index-llms-bedrock-converse [0.8.2]
- feat: add openai-oss models to BedrockConverse (#19653)
llama-index-llms-ollama [0.7.1]
- fix: fix ollama role response detection (#19671)
llama-index-llms-openai [0.5.3]
- fix: AzureOpenAI streaming token usage (#19633)
llama-index-readers-file [0.5.1]
- feat: enhance PowerPoint reader with comprehensive content extraction (#19478)
llama-index-retrievers-bm25 [0.6.3]
- fix: fix persist+load for bm25 (#19657)
llama-index-retrievers-superlinked [0.1.0]
- feat: add Superlinked retriever integration (#19636)
llama-index-tools-mcp [0.4.0]
- feat: Handlers for custom types and pydantic models in tools (#19601)
llama-index-vector-stores-clickhouse [0.6.0]
- chore: Updates to ClickHouse integration based on new vector search capabilities in ClickHouse (#19647)
llama-index-vector-stores-postgres [0.6.3]
- fix: Add other special characters in `ts_query` normalization (#19637)
v0.13.1
Release Notes
[2025-08-08]
llama-index-core [0.13.1]
- fix: safer token counting in messages (#19599)
- fix: Fix Document truncation in `FunctionTool._parse_tool_output` (#19585)
- feat: Enabled partially formatted system prompt for ReAct agent (#19598)
llama-index-embeddings-ollama [0.8.0]
- fix: use /embed instead of /embeddings for ollama (#19622)
llama-index-embeddings-voyageai [0.4.1]
- feat: Add support for voyage context embeddings (#19590)
llama-index-graph-stores-kuzu [0.9.0]
- feat: Update Kuzu graph store integration to latest SDK (#19603)
llama-index-indices-managed-llama-cloud [0.9.1]
- chore: deprecate llama-index-indices-managed-llama-cloud in favor of llama-cloud-services (#19608)
llama-index-llms-anthropic [0.8.2]
- feat: anthropic citation update to non-beta support (#19624)
- feat: add support for opus 4.1 (#19593)
llama-index-llms-heroku [0.1.0]
- feat: heroku llm integration (#19576)
llama-index-llms-nvidia [0.4.1]
- feat: add support for gpt-oss NIM (#19618)
llama-index-llms-oci-genai [0.6.1]
- chore: update list of supported LLMs for OCI integration (#19604)
llama-index-llms-openai [0.5.2]
llama-index-llms-upstage [0.6.1]
- fix: Fix reasoning_effort parameter ineffective and Add new custom parameters (#19619)
llama-index-postprocessor-presidio [0.5.0]
- feat: Support presidio entities (#19584)
llama-index-retrievers-bm25 [0.6.2]
- fix: BM25 Retriever: allow `top_k` values greater than the number of nodes added (#19577)
- feat: Add metadata filtering support to BM25 Retriever and update documentation (#19586)
llama-index-tools-aws-bedrock-agentcore [0.1.0]
- feat: Bedrock AgentCore browser and code interpreter toolspecs (#19559)
llama-index-vector-stores-baiduvectordb [0.6.0]
- fix: fix filter operators and add stores_text support (#19591)
- feat: add wait logic for critical operations (#19587)
llama-index-vector-stores-postgres [0.6.2]
v0.13.0.post3
Release Notes
v0.13.0.post2
Release Notes
v0.13.0.post1
Release Notes
v0.13.0
Release Notes
NOTE: All packages have been bumped to handle the latest llama-index-core version.
llama-index-core [0.13.0]
- breaking: removed deprecated agent classes, including `FunctionCallingAgent`, the older `ReActAgent` implementation, `AgentRunner`, all step workers, `StructuredAgentPlanner`, `OpenAIAgent`, and more. All users should migrate to the new workflow-based agents: `FunctionAgent`, `CodeActAgent`, `ReActAgent`, and `AgentWorkflow` (#19529)
- breaking: removed the deprecated `QueryPipeline` class and all associated code (#19554)
- breaking: changed the default `index.as_chat_engine()` to return a `CondensePlusContextChatEngine`. Agent-based chat engines (the previous default) have been removed; if you need an agent, use the agent classes mentioned above. (#19529)
- fix: Update BaseDocumentStore to not return Nones in result (#19513)
- fix: Fix FunctionTool param doc parsing and signature mutation; update tests (#19532)
- fix: Handle empty prompt in MockLLM.stream_complete (#19521)
llama-index-embeddings-mixedbreadai [0.5.0]
- feat: Update mixedbread embeddings and rerank for latest sdk (#19519)
llama-index-instrumentation [0.4.0]
- fix: let wrapped exceptions bubble up (#19566)
llama-index-llms-google-genai [0.3.0]
- feat: Add Thought Summaries and signatures for Gemini (#19505)
llama-index-llms-nvidia [0.4.0]
- feat: add support for kimi-k2-instruct (#19525)
llama-index-llms-upstage [0.6.0]
- feat: add new upstage model (solar-pro2) (#19526)
llama-index-postprocessor-mixedbreadai-rerank [0.5.0]
- feat: Update mixedbread embeddings and rerank for latest sdk (#19519)
llama-index-readers-github [0.8.0]
- feat: Github Reader enhancements for file filtering and custom processing (#19543)
llama-index-readers-s3 [0.5.0]
- feat: add support for `region_name` via `client_kwargs` in S3Reader (#19546)
llama-index-tools-valyu [0.4.0]
- feat: Update Valyu sdk to latest version (#19538)
llama-index-voice-agents-gemini-live [0.2.0]
- feat(beta): adding first implementation of gemini live (#19489)
llama-index-vector-stores-astradb [0.5.0]
- feat: astradb get nodes + delete nodes support (#19544)
llama-index-vector-stores-milvus [0.9.0]
- feat: Add support for specifying partition_names in Milvus search configuration (#19555)
llama-index-vector-stores-s3 [0.2.0]
- fix: reduce some metadata keys from S3VectorStore to save space (#19550)
llama-index-vector-stores-postgres [0.6.0]
- feat: Add support for ANY/ALL postgres operators (#19553)