
Commit ab5d38a

chore(deps): update container image docker.io/localai/localai to v2.19.1 by renovate (#24152)
This PR contains the following updates: | Package | Update | Change | |---|---|---| | [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.17.1-aio-cpu` -> `v2.19.1-aio-cpu` | | [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.17.1-aio-gpu-nvidia-cuda-11` -> `v2.19.1-aio-gpu-nvidia-cuda-11` | | [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.17.1-aio-gpu-nvidia-cuda-12` -> `v2.19.1-aio-gpu-nvidia-cuda-12` | | [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.17.1-cublas-cuda11-ffmpeg-core` -> `v2.19.1-cublas-cuda11-ffmpeg-core` | | [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.17.1-cublas-cuda11-core` -> `v2.19.1-cublas-cuda11-core` | | [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.17.1-cublas-cuda12-ffmpeg-core` -> `v2.19.1-cublas-cuda12-ffmpeg-core` | | [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.17.1-cublas-cuda12-core` -> `v2.19.1-cublas-cuda12-core` | | [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.17.1-ffmpeg-core` -> `v2.19.1-ffmpeg-core` | | [docker.io/localai/localai](https://github.com/mudler/LocalAI) | minor | `v2.17.1` -> `v2.19.1` | --- > [!WARNING] > Some dependencies could not be looked up. Check the Dependency Dashboard for more information. --- ### Release Notes <details> <summary>mudler/LocalAI (docker.io/localai/localai)</summary> ### [`v2.19.1`](https://github.com/mudler/LocalAI/releases/tag/v2.19.1) [Compare Source](https://github.com/mudler/LocalAI/compare/v2.19.0...v2.19.1) ![local-ai-release-219-shadow](https://github.com/user-attachments/assets/c5d7c930-656f-410d-aab9-455a466925fe) ##### LocalAI 2.19.1 is out! :mega: ##### TLDR; Summary spotlight - 🖧 Federated Instances via P2P: LocalAI now supports federated instances with P2P, offering both load-balanced and non-load-balanced options. - 🎛️ P2P Dashboard: A new dashboard to guide and assist in setting up P2P instances with auto-discovery using shared tokens. - 🔊 TTS Integration: Text-to-Speech (TTS) is now included in the binary releases. - 🛠️ Enhanced Installer: The installer script now supports setting up federated instances. - 📥 Model Pulling: Models can now be pulled directly via URL. - 🖼️ WebUI Enhancements: Visual improvements and cleanups to the WebUI and model lists. - 🧠 llama-cpp Backend: The llama-cpp (grpc) backend now supports embedding ( https://localai.io/features/embeddings/#llamacpp-embeddings ) - ⚙️ Tool Support: Small enhancements to tools with disabled grammars. ##### 🖧 LocalAI Federation and AI swarms <p align="center"> <img src="https://github.com/user-attachments/assets/17b39f8a-fc41-47d9-b846-b3a88307813b"/> </p> LocalAI is revolutionizing the future of distributed AI workloads by making it simpler and more accessible. No more complex setups, Docker or Kubernetes configurations – LocalAI allows you to create your own AI cluster with minimal friction. By auto-discovering and sharing work or weights of the LLM model across your existing devices, LocalAI aims to scale both horizontally and vertically with ease. ##### How it works? Starting LocalAI with `--p2p` generates a shared token for connecting multiple instances: and that's all you need to create AI clusters, eliminating the need for intricate network setups. 
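To make the flags above concrete, here is a minimal two-node sketch, assuming a Docker Compose deployment: the image tag and the `--p2p`/`--federated` flags come from this update and the release notes, while the service names, the published port, the `TOKEN` variable, and the exact flag wiring into the container entrypoint are assumptions for illustration only.

```yaml
# Illustrative sketch, not part of this commit: two LocalAI containers joined
# into a federated cluster using the flags described in the release notes.
services:
  localai-node-a:
    image: docker.io/localai/localai:v2.19.1-aio-cpu
    command: ["--p2p", "--federated"]   # assumption: flags are forwarded to the local-ai entrypoint
    ports:
      - "8080:8080"                     # OpenAI-compatible API served by the federated endpoint
  localai-node-b:
    image: docker.io/localai/localai:v2.19.1-aio-cpu
    command: ["--p2p", "--federated"]
    environment:
      - TOKEN=${LOCALAI_P2P_TOKEN}      # assumption: the shared token printed by node A, copied here
```

With the shared token in place the instances auto-discover each other (per the release notes, even across different networks), so no further network configuration is needed; the WebUI "Swarm" tab shows equivalent one-liners for bare-metal installs.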
Simply navigate to the "Swarm" section in the WebUI and follow the on-screen instructions. For fully shared instances, initiate LocalAI with `--p2p --federated` and adhere to the Swarm section's guidance. This feature, while still experimental, offers a tech preview quality experience. ##### Federated LocalAI Launch multiple LocalAI instances and cluster them together to share requests across the cluster. The "Swarm" tab in the WebUI provides one-liner instructions on connecting various LocalAI instances using a shared token. Instances will auto-discover each other, even across different networks. ![346663124-1d2324fd-8b55-4fa2-9856-721a467969c2](https://github.com/user-attachments/assets/19ebd44a-20ff-412c-b92f-cfb8efbe4b21) Check out a demonstration video: [Watch now](https://www.youtube.com/watch?v=pH8Bv\_\_9cnA) ##### LocalAI P2P Workers Distribute weights across nodes by starting multiple LocalAI workers, currently available only on the llama.cpp backend, with plans to expand to other backends soon. ![346663124-1d2324fd-8b55-4fa2-9856-721a467969c2](https://github.com/user-attachments/assets/b8cadddf-a467-49cf-a1ed-8850de95366d) Check out a demonstration video: [Watch now](https://www.youtube.com/watch?v=ePH8PGqMSpo) ##### What's Changed ##### Bug fixes :bug: - fix: make sure the GNUMake jobserver is passed to cmake for the llama.cpp build by [@&#8203;cryptk](https://github.com/cryptk) in [https://github.com/mudler/LocalAI/pull/2697](https://github.com/mudler/LocalAI/pull/2697) - Using exec when starting a backend instead of spawning a new process by [@&#8203;a17t](https://github.com/a17t) in [https://github.com/mudler/LocalAI/pull/2720](https://github.com/mudler/LocalAI/pull/2720) - fix(cuda): downgrade default version from 12.5 to 12.4 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2707](https://github.com/mudler/LocalAI/pull/2707) - fix: Lora loading by [@&#8203;vaaale](https://github.com/vaaale) in [https://github.com/mudler/LocalAI/pull/2893](https://github.com/mudler/LocalAI/pull/2893) - fix: short-circuit when nodes aren't detected by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2909](https://github.com/mudler/LocalAI/pull/2909) - fix: do not list txt files as potential models by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2910](https://github.com/mudler/LocalAI/pull/2910) ##### 🖧 P2P area - feat(p2p): Federation and AI swarms by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2723](https://github.com/mudler/LocalAI/pull/2723) - feat(p2p): allow to disable DHT and use only LAN by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2751](https://github.com/mudler/LocalAI/pull/2751) ##### Exciting New Features 🎉 - Allows to remove a backend from the list by [@&#8203;mauromorales](https://github.com/mauromorales) in [https://github.com/mudler/LocalAI/pull/2721](https://github.com/mudler/LocalAI/pull/2721) - ci(Makefile): adds tts in binary releases by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2695](https://github.com/mudler/LocalAI/pull/2695) - feat: HF `/scan` endpoint by [@&#8203;dave-gray101](https://github.com/dave-gray101) in 
[https://github.com/mudler/LocalAI/pull/2566](https://github.com/mudler/LocalAI/pull/2566) - feat(model-list): be consistent, skip known files from listing by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2760](https://github.com/mudler/LocalAI/pull/2760) - feat(models): pull models from urls by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2750](https://github.com/mudler/LocalAI/pull/2750) - feat(webui): show also models without a config in the welcome page by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2772](https://github.com/mudler/LocalAI/pull/2772) - feat(install.sh): support federated install by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2752](https://github.com/mudler/LocalAI/pull/2752) - feat(llama.cpp): support embeddings endpoints by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2871](https://github.com/mudler/LocalAI/pull/2871) - feat(functions): parse broken JSON when we parse the raw results, use dynamic rules for grammar keys by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2912](https://github.com/mudler/LocalAI/pull/2912) - feat(federation): add load balanced option by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2915](https://github.com/mudler/LocalAI/pull/2915) ##### 🧠 Models - models(gallery): :arrow_up: update checksum by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2701](https://github.com/mudler/LocalAI/pull/2701) - models(gallery): add l3-8b-everything-cot by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2705](https://github.com/mudler/LocalAI/pull/2705) - models(gallery): add hercules-5.0-qwen2-7b by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2708](https://github.com/mudler/LocalAI/pull/2708) - models(gallery): add llama3-8b-darkidol-2.2-uncensored-1048k-iq-imatrix by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2710](https://github.com/mudler/LocalAI/pull/2710) - models(gallery): add llama-3-llamilitary by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2711](https://github.com/mudler/LocalAI/pull/2711) - models(gallery): add tess-v2.5-gemma-2-27b-alpha by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2712](https://github.com/mudler/LocalAI/pull/2712) - models(gallery): add arcee-agent by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2713](https://github.com/mudler/LocalAI/pull/2713) - models(gallery): add gemma2-daybreak by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2714](https://github.com/mudler/LocalAI/pull/2714) - models(gallery): add L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2715](https://github.com/mudler/LocalAI/pull/2715) - models(gallery): add qwen2-7b-instruct-v0.8 by 
[@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2717](https://github.com/mudler/LocalAI/pull/2717) - models(gallery): add internlm2\_5-7b-chat-1m by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2719](https://github.com/mudler/LocalAI/pull/2719) - models(gallery): add gemma-2-9b-it-sppo-iter3 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2722](https://github.com/mudler/LocalAI/pull/2722) - models(gallery): add llama-3\_8b_unaligned_alpha by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2727](https://github.com/mudler/LocalAI/pull/2727) - models(gallery): add l3-8b-lunaris-v1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2729](https://github.com/mudler/LocalAI/pull/2729) - models(gallery): add llama-3\_8b_unaligned_alpha_rp_soup-i1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2734](https://github.com/mudler/LocalAI/pull/2734) - models(gallery): add hathor_respawn-l3-8b-v0.8 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2738](https://github.com/mudler/LocalAI/pull/2738) - models(gallery): add llama3-8b-instruct-replete-adapted by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2739](https://github.com/mudler/LocalAI/pull/2739) - models(gallery): add llama-3-perky-pat-instruct-8b by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2740](https://github.com/mudler/LocalAI/pull/2740) - models(gallery): add l3-uncen-merger-omelette-rp-v0.2-8b by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2741](https://github.com/mudler/LocalAI/pull/2741) - models(gallery): add nymph\_8b-i1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2742](https://github.com/mudler/LocalAI/pull/2742) - models(gallery): add smegmma-9b-v1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2743](https://github.com/mudler/LocalAI/pull/2743) - models(gallery): add hathor_tahsin-l3-8b-v0.85 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2762](https://github.com/mudler/LocalAI/pull/2762) - models(gallery): add replete-coder-instruct-8b-merged by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2782](https://github.com/mudler/LocalAI/pull/2782) - models(gallery): add arliai-llama-3-8b-formax-v1.0 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2783](https://github.com/mudler/LocalAI/pull/2783) - models(gallery): add smegmma-deluxe-9b-v1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2784](https://github.com/mudler/LocalAI/pull/2784) - models(gallery): add l3-ms-astoria-8b by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2785](https://github.com/mudler/LocalAI/pull/2785) - models(gallery): add halomaidrp-v1.33-15b-l3-i1 by [@&#8203;mudler](https://github.com/mudler) in 
[https://github.com/mudler/LocalAI/pull/2786](https://github.com/mudler/LocalAI/pull/2786) - models(gallery): add llama-3-patronus-lynx-70b-instruct by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2788](https://github.com/mudler/LocalAI/pull/2788) - models(gallery): add llamax3 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2849](https://github.com/mudler/LocalAI/pull/2849) - models(gallery): add arliai-llama-3-8b-dolfin-v0.5 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2852](https://github.com/mudler/LocalAI/pull/2852) - models(gallery): add tiger-gemma-9b-v1-i1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2853](https://github.com/mudler/LocalAI/pull/2853) - feat: models(gallery): add deepseek-v2-lite by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2658](https://github.com/mudler/LocalAI/pull/2658) - models(gallery): :arrow_up: update checksum by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2860](https://github.com/mudler/LocalAI/pull/2860) - models(gallery): add phi-3.1-mini-4k-instruct by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2863](https://github.com/mudler/LocalAI/pull/2863) - models(gallery): :arrow_up: update checksum by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2887](https://github.com/mudler/LocalAI/pull/2887) - models(gallery): add ezo model series (llama3, gemma) by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2891](https://github.com/mudler/LocalAI/pull/2891) - models(gallery): add l3-8b-niitama-v1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2895](https://github.com/mudler/LocalAI/pull/2895) - models(gallery): add mathstral-7b-v0.1-imat by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2901](https://github.com/mudler/LocalAI/pull/2901) - models(gallery): add MythicalMaid/EtherealMaid 15b by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2902](https://github.com/mudler/LocalAI/pull/2902) - models(gallery): add flammenai/Mahou-1.3d-mistral-7B by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2903](https://github.com/mudler/LocalAI/pull/2903) - models(gallery): add big-tiger-gemma-27b-v1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2918](https://github.com/mudler/LocalAI/pull/2918) - models(gallery): add phillama-3.8b-v0.1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2920](https://github.com/mudler/LocalAI/pull/2920) - models(gallery): add qwen2-wukong-7b by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2921](https://github.com/mudler/LocalAI/pull/2921) - models(gallery): add einstein-v4-7b by [@&#8203;mudler](https://github.com/mudler) in 
[https://github.com/mudler/LocalAI/pull/2922](https://github.com/mudler/LocalAI/pull/2922) - models(gallery): add gemma-2b-translation-v0.150 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2923](https://github.com/mudler/LocalAI/pull/2923) - models(gallery): add emo-2b by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2924](https://github.com/mudler/LocalAI/pull/2924) - models(gallery): add celestev1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2925](https://github.com/mudler/LocalAI/pull/2925) ##### 📖 Documentation and examples - :arrow_up: Update docs version mudler/LocalAI by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2699](https://github.com/mudler/LocalAI/pull/2699) - examples(gha): add example on how to run LocalAI in Github actions by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2716](https://github.com/mudler/LocalAI/pull/2716) - docs(swagger): enhance coverage of APIs by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2753](https://github.com/mudler/LocalAI/pull/2753) - docs(swagger): comment LocalAI gallery endpoints and rerankers by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2854](https://github.com/mudler/LocalAI/pull/2854) - docs: add a note on benchmarks by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2857](https://github.com/mudler/LocalAI/pull/2857) - docs(swagger): cover p2p endpoints by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2862](https://github.com/mudler/LocalAI/pull/2862) - ci: use github action by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2899](https://github.com/mudler/LocalAI/pull/2899) - docs: update try-it-out.md by [@&#8203;eltociear](https://github.com/eltociear) in [https://github.com/mudler/LocalAI/pull/2906](https://github.com/mudler/LocalAI/pull/2906) - docs(swagger): core more localai/openai endpoints by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2904](https://github.com/mudler/LocalAI/pull/2904) - docs: more swagger, update docs by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2907](https://github.com/mudler/LocalAI/pull/2907) - feat(swagger): update swagger by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2916](https://github.com/mudler/LocalAI/pull/2916) ##### 👒 Dependencies - :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2700](https://github.com/mudler/LocalAI/pull/2700) - :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2704](https://github.com/mudler/LocalAI/pull/2704) - deps(whisper.cpp): update to latest commit by [@&#8203;mudler](https://github.com/mudler) in 
[https://github.com/mudler/LocalAI/pull/2709](https://github.com/mudler/LocalAI/pull/2709) - :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2718](https://github.com/mudler/LocalAI/pull/2718) - :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2725](https://github.com/mudler/LocalAI/pull/2725) - :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2736](https://github.com/mudler/LocalAI/pull/2736) - :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2744](https://github.com/mudler/LocalAI/pull/2744) - :arrow_up: Update ggerganov/whisper.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2746](https://github.com/mudler/LocalAI/pull/2746) - :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2747](https://github.com/mudler/LocalAI/pull/2747) - :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2755](https://github.com/mudler/LocalAI/pull/2755) - :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2767](https://github.com/mudler/LocalAI/pull/2767) - :arrow_up: Update ggerganov/whisper.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2756](https://github.com/mudler/LocalAI/pull/2756) - :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2774](https://github.com/mudler/LocalAI/pull/2774) - chore(deps): Update Dependencies by [@&#8203;reneleonhardt](https://github.com/reneleonhardt) in [https://github.com/mudler/LocalAI/pull/2538](https://github.com/mudler/LocalAI/pull/2538) - chore(deps): Bump dependabot/fetch-metadata from 2.1.0 to 2.2.0 by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2791](https://github.com/mudler/LocalAI/pull/2791) - chore(deps): Bump llama-index from 0.9.48 to 0.10.55 in /examples/chainlit by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2795](https://github.com/mudler/LocalAI/pull/2795) - chore(deps): Bump openai from 1.33.0 to 1.35.13 in /examples/functions by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2793](https://github.com/mudler/LocalAI/pull/2793) - chore(deps): Bump nginx from 1.a.b.c to 1.27.0 in /examples/k8sgpt by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2790](https://github.com/mudler/LocalAI/pull/2790) - chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/coqui by [@&#8203;dependabot](https://github.com/dependabot) in 
[https://github.com/mudler/LocalAI/pull/2798](https://github.com/mudler/LocalAI/pull/2798) - chore(deps): Bump inflect from 7.0.0 to 7.3.1 in /backend/python/openvoice by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2796](https://github.com/mudler/LocalAI/pull/2796) - chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/parler-tts by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2797](https://github.com/mudler/LocalAI/pull/2797) - chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/petals by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2799](https://github.com/mudler/LocalAI/pull/2799) - chore(deps): Bump causal-conv1d from 1.2.0.post2 to 1.4.0 in /backend/python/mamba by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2792](https://github.com/mudler/LocalAI/pull/2792) - chore(deps): Bump docs/themes/hugo-theme-relearn from `c25bc2a` to `1b2e139` by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2801](https://github.com/mudler/LocalAI/pull/2801) - chore(deps): Bump tenacity from 8.3.0 to 8.5.0 in /examples/langchain/langchainpy-localai-example by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2803](https://github.com/mudler/LocalAI/pull/2803) - chore(deps): Bump openai from 1.33.0 to 1.35.13 in /examples/langchain-chroma by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2794](https://github.com/mudler/LocalAI/pull/2794) - chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/bark by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2805](https://github.com/mudler/LocalAI/pull/2805) - chore(deps): Bump streamlit from 1.30.0 to 1.36.0 in /examples/streamlit-bot by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2804](https://github.com/mudler/LocalAI/pull/2804) - chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/diffusers by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2807](https://github.com/mudler/LocalAI/pull/2807) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/exllama2 by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2809](https://github.com/mudler/LocalAI/pull/2809) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/common/template by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2802](https://github.com/mudler/LocalAI/pull/2802) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/autogptq by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2800](https://github.com/mudler/LocalAI/pull/2800) - chore(deps): Bump weaviate-client from 4.6.4 to 4.6.5 in /examples/chainlit by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2811](https://github.com/mudler/LocalAI/pull/2811) - 
chore(deps): Bump gradio from 4.36.1 to 4.37.1 in /backend/python/openvoice in the pip group by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2815](https://github.com/mudler/LocalAI/pull/2815) - chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/vall-e-x by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2812](https://github.com/mudler/LocalAI/pull/2812) - chore(deps): Bump certifi from 2024.6.2 to 2024.7.4 in /examples/langchain/langchainpy-localai-example by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2814](https://github.com/mudler/LocalAI/pull/2814) - chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/transformers by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2817](https://github.com/mudler/LocalAI/pull/2817) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/sentencetransformers by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2813](https://github.com/mudler/LocalAI/pull/2813) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/rerankers by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2819](https://github.com/mudler/LocalAI/pull/2819) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/parler-tts by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2818](https://github.com/mudler/LocalAI/pull/2818) - chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/vllm by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2820](https://github.com/mudler/LocalAI/pull/2820) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/coqui by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2825](https://github.com/mudler/LocalAI/pull/2825) - chore(deps): Bump faster-whisper from 0.9.0 to 1.0.3 in /backend/python/openvoice by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2829](https://github.com/mudler/LocalAI/pull/2829) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/exllama by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2841](https://github.com/mudler/LocalAI/pull/2841) - chore(deps): Bump scipy from 1.13.0 to 1.14.0 in /backend/python/transformers-musicgen by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2842](https://github.com/mudler/LocalAI/pull/2842) - chore: :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2846](https://github.com/mudler/LocalAI/pull/2846) - chore(deps): Bump langchain from 0.2.3 to 0.2.7 in /examples/functions by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2806](https://github.com/mudler/LocalAI/pull/2806) - chore(deps): Bump mamba-ssm from 1.2.0.post1 to 2.2.2 in /backend/python/mamba by 
[@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2821](https://github.com/mudler/LocalAI/pull/2821) - chore(deps): Bump pydantic from 2.7.3 to 2.8.2 in /examples/langchain/langchainpy-localai-example by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2832](https://github.com/mudler/LocalAI/pull/2832) - chore(deps): Bump langchain from 0.2.3 to 0.2.7 in /examples/langchain-chroma by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2822](https://github.com/mudler/LocalAI/pull/2822) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/bark by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2831](https://github.com/mudler/LocalAI/pull/2831) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/diffusers by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2833](https://github.com/mudler/LocalAI/pull/2833) - chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/autogptq by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2816](https://github.com/mudler/LocalAI/pull/2816) - chore(deps): Bump gradio from 4.36.1 to 4.38.1 in /backend/python/openvoice by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2840](https://github.com/mudler/LocalAI/pull/2840) - chore(deps): Bump the pip group across 1 directory with 2 updates by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2848](https://github.com/mudler/LocalAI/pull/2848) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/transformers by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2837](https://github.com/mudler/LocalAI/pull/2837) - chore(deps): Bump sentence-transformers from 2.5.1 to 3.0.1 in /backend/python/sentencetransformers by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2826](https://github.com/mudler/LocalAI/pull/2826) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/vall-e-x by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2830](https://github.com/mudler/LocalAI/pull/2830) - chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/rerankers by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2834](https://github.com/mudler/LocalAI/pull/2834) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/vllm by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2839](https://github.com/mudler/LocalAI/pull/2839) - chore(deps): Bump librosa from 0.9.1 to 0.10.2.post1 in /backend/python/openvoice by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2836](https://github.com/mudler/LocalAI/pull/2836) - chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/transformers-musicgen by [@&#8203;dependabot](https://github.com/dependabot) in 
[https://github.com/mudler/LocalAI/pull/2843](https://github.com/mudler/LocalAI/pull/2843) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/mamba by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2808](https://github.com/mudler/LocalAI/pull/2808) - chore(deps): Bump llama-index from 0.10.43 to 0.10.55 in /examples/langchain-chroma by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2810](https://github.com/mudler/LocalAI/pull/2810) - chore(deps): Bump langchain from 0.2.3 to 0.2.7 in /examples/langchain/langchainpy-localai-example by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2824](https://github.com/mudler/LocalAI/pull/2824) - chore(deps): Bump numpy from 1.26.4 to 2.0.0 in /backend/python/openvoice by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2823](https://github.com/mudler/LocalAI/pull/2823) - chore(deps): Bump grpcio from 1.64.0 to 1.64.1 in /backend/python/transformers-musicgen by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2844](https://github.com/mudler/LocalAI/pull/2844) - build(deps): bump docker/build-push-action from 5 to 6 by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2592](https://github.com/mudler/LocalAI/pull/2592) - chore(deps): Bump chromadb from 0.5.0 to 0.5.4 in /examples/langchain-chroma by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2828](https://github.com/mudler/LocalAI/pull/2828) - chore(deps): Bump torch from 2.2.0 to 2.3.1 in /backend/python/mamba by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2835](https://github.com/mudler/LocalAI/pull/2835) - chore: :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2851](https://github.com/mudler/LocalAI/pull/2851) - chore(deps): Bump setuptools from 69.5.1 to 70.3.0 in /backend/python/sentencetransformers by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2838](https://github.com/mudler/LocalAI/pull/2838) - chore: update edgevpn dependency by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2855](https://github.com/mudler/LocalAI/pull/2855) - chore: :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2859](https://github.com/mudler/LocalAI/pull/2859) - chore(deps): Bump langchain from 0.2.7 to 0.2.8 in /examples/functions by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2873](https://github.com/mudler/LocalAI/pull/2873) - chore(deps): Bump langchain from 0.2.7 to 0.2.8 in /examples/langchain/langchainpy-localai-example by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2874](https://github.com/mudler/LocalAI/pull/2874) - chore(deps): Bump numexpr from 2.10.0 to 2.10.1 in /examples/langchain/langchainpy-localai-example by 
[@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2877](https://github.com/mudler/LocalAI/pull/2877) - chore: :arrow_up: Update ggerganov/whisper.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2885](https://github.com/mudler/LocalAI/pull/2885) - chore: :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2886](https://github.com/mudler/LocalAI/pull/2886) - chore(deps): Bump debugpy from 1.8.1 to 1.8.2 in /examples/langchain/langchainpy-localai-example by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2878](https://github.com/mudler/LocalAI/pull/2878) - chore(deps): Bump langchain-community from 0.2.5 to 0.2.7 in /examples/langchain/langchainpy-localai-example by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2875](https://github.com/mudler/LocalAI/pull/2875) - chore(deps): Bump langchain from 0.2.7 to 0.2.8 in /examples/langchain-chroma by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2872](https://github.com/mudler/LocalAI/pull/2872) - chore(deps): Bump openai from 1.33.0 to 1.35.13 in /examples/langchain/langchainpy-localai-example by [@&#8203;dependabot](https://github.com/dependabot) in [https://github.com/mudler/LocalAI/pull/2876](https://github.com/mudler/LocalAI/pull/2876) - chore: :arrow_up: Update ggerganov/whisper.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2898](https://github.com/mudler/LocalAI/pull/2898) - chore: :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2897](https://github.com/mudler/LocalAI/pull/2897) - chore: :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2905](https://github.com/mudler/LocalAI/pull/2905) - chore: :arrow_up: Update ggerganov/llama.cpp by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2917](https://github.com/mudler/LocalAI/pull/2917) ##### Other Changes - ci: add pipelines for discord notifications by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2703](https://github.com/mudler/LocalAI/pull/2703) - ci(arm64): fix gRPC build by adding googletest to CMakefile by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2754](https://github.com/mudler/LocalAI/pull/2754) - fix: arm builds via disabling abseil tests by [@&#8203;dave-gray101](https://github.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/2758](https://github.com/mudler/LocalAI/pull/2758) - ci(grpc): disable ABSEIL tests by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2759](https://github.com/mudler/LocalAI/pull/2759) - ci(deps): add libgmock-dev by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2761](https://github.com/mudler/LocalAI/pull/2761) - fix abseil test 
issue \[attempt 3] by [@&#8203;dave-gray101](https://github.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/2769](https://github.com/mudler/LocalAI/pull/2769) - feat(swagger): update swagger by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2766](https://github.com/mudler/LocalAI/pull/2766) - ci: Do not test the full matrix on PRs by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2771](https://github.com/mudler/LocalAI/pull/2771) - Git fetch specific branch instead of full tree during build by [@&#8203;LoricOSC](https://github.com/LoricOSC) in [https://github.com/mudler/LocalAI/pull/2748](https://github.com/mudler/LocalAI/pull/2748) - fix(ci): small fixups to checksum_checker.sh by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2776](https://github.com/mudler/LocalAI/pull/2776) - fix(ci): fixup correct path for check_and_update.py by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2777](https://github.com/mudler/LocalAI/pull/2777) - fixes to `check_and_update.py` script by [@&#8203;dave-gray101](https://github.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/2778](https://github.com/mudler/LocalAI/pull/2778) - Update remaining git clones to git fetch by [@&#8203;LoricOSC](https://github.com/LoricOSC) in [https://github.com/mudler/LocalAI/pull/2779](https://github.com/mudler/LocalAI/pull/2779) - feat(scripts): add scripts to help adding new models to the gallery by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2789](https://github.com/mudler/LocalAI/pull/2789) - build: speedup `git submodule update` with `--single-branch` by [@&#8203;dave-gray101](https://github.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/2847](https://github.com/mudler/LocalAI/pull/2847) - Revert "chore(deps): Bump inflect from 7.0.0 to 7.3.1 in /backend/python/openvoice" by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2856](https://github.com/mudler/LocalAI/pull/2856) - Revert "chore(deps): Bump librosa from 0.9.1 to 0.10.2.post1 in /backend/python/openvoice" by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2861](https://github.com/mudler/LocalAI/pull/2861) - feat(swagger): update swagger by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2858](https://github.com/mudler/LocalAI/pull/2858) - Revert "chore(deps): Bump numpy from 1.26.4 to 2.0.0 in /backend/python/openvoice" by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2868](https://github.com/mudler/LocalAI/pull/2868) - feat(swagger): update swagger by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2884](https://github.com/mudler/LocalAI/pull/2884) - fix: update grpcio version to match version used in builds by [@&#8203;cryptk](https://github.com/cryptk) in [https://github.com/mudler/LocalAI/pull/2888](https://github.com/mudler/LocalAI/pull/2888) - fix: cleanup indentation and remove duplicate dockerfile stanza by 
[@&#8203;cryptk](https://github.com/cryptk) in [https://github.com/mudler/LocalAI/pull/2889](https://github.com/mudler/LocalAI/pull/2889) - ci: add workflow to comment new Opened PRs by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2892](https://github.com/mudler/LocalAI/pull/2892) - build: fix go.mod - don't import ourself by [@&#8203;dave-gray101](https://github.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/2896](https://github.com/mudler/LocalAI/pull/2896) - feat(swagger): update swagger by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2908](https://github.com/mudler/LocalAI/pull/2908) - refactor: move federated server logic to its own service by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2914](https://github.com/mudler/LocalAI/pull/2914) - refactor: groundwork - add pkg/concurrency and the associated test file by [@&#8203;dave-gray101](https://github.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/2745](https://github.com/mudler/LocalAI/pull/2745) ##### New Contributors - [@&#8203;a17t](https://github.com/a17t) made their first contribution in [https://github.com/mudler/LocalAI/pull/2720](https://github.com/mudler/LocalAI/pull/2720) - [@&#8203;LoricOSC](https://github.com/LoricOSC) made their first contribution in [https://github.com/mudler/LocalAI/pull/2748](https://github.com/mudler/LocalAI/pull/2748) - [@&#8203;vaaale](https://github.com/vaaale) made their first contribution in [https://github.com/mudler/LocalAI/pull/2893](https://github.com/mudler/LocalAI/pull/2893) **Full Changelog**: https://github.com/mudler/LocalAI/compare/v2.18.1...v2.19.0 ### [`v2.19.0`](https://github.com/mudler/LocalAI/releases/tag/v2.19.0) [Compare Source](https://github.com/mudler/LocalAI/compare/v2.18.1...v2.19.0) ![local-ai-release-219-shadow](https://github.com/user-attachments/assets/c5d7c930-656f-410d-aab9-455a466925fe) ##### LocalAI 2.19.0 is out! :mega: ##### TLDR; Summary spotlight - 🖧 Federated Instances via P2P: LocalAI now supports federated instances with P2P, offering both load-balanced and non-load-balanced options. - 🎛️ P2P Dashboard: A new dashboard to guide and assist in setting up P2P instances with auto-discovery using shared tokens. - 🔊 TTS Integration: Text-to-Speech (TTS) is now included in the binary releases. - 🛠️ Enhanced Installer: The installer script now supports setting up federated instances. - 📥 Model Pulling: Models can now be pulled directly via URL. - 🖼️ WebUI Enhancements: Visual improvements and cleanups to the WebUI and model lists. - 🧠 llama-cpp Backend: The llama-cpp (grpc) backend now supports embedding ( https://localai.io/features/embeddings/#llamacpp-embeddings ) - ⚙️ Tool Support: Small enhancements to tools with disabled grammars. ##### 🖧 LocalAI Federation and AI swarms <p align="center"> <img src="https://github.com/user-attachments/assets/17b39f8a-fc41-47d9-b846-b3a88307813b"/> </p> LocalAI is revolutionizing the future of distributed AI workloads by making it simpler and more accessible. No more complex setups, Docker or Kubernetes configurations – LocalAI allows you to create your own AI cluster with minimal friction. 
By auto-discovering and sharing work or weights of the LLM model across your existing devices, LocalAI aims to scale both horizontally and vertically with ease. ##### How it works? Starting LocalAI with `--p2p` generates a shared token for connecting multiple instances: and that's all you need to create AI clusters, eliminating the need for intricate network setups. Simply navigate to the "Swarm" section in the WebUI and follow the on-screen instructions. For fully shared instances, initiate LocalAI with `--p2p --federated` and adhere to the Swarm section's guidance. This feature, while still experimental, offers a tech preview quality experience. ##### Federated LocalAI Launch multiple LocalAI instances and cluster them together to share requests across the cluster. The "Swarm" tab in the WebUI provides one-liner instructions on connecting various LocalAI instances using a shared token. Instances will auto-discover each other, even across different networks. ![346663124-1d2324fd-8b55-4fa2-9856-721a467969c2](https://github.com/user-attachments/assets/19ebd44a-20ff-412c-b92f-cfb8efbe4b21) Check out a demonstration video: [Watch now](https://www.youtube.com/watch?v=pH8Bv\_\_9cnA) ##### LocalAI P2P Workers Distribute weights across nodes by starting multiple LocalAI workers, currently available only on the llama.cpp backend, with plans to expand to other backends soon. ![346663124-1d2324fd-8b55-4fa2-9856-721a467969c2](https://github.com/user-attachments/assets/b8cadddf-a467-49cf-a1ed-8850de95366d) Check out a demonstration video: [Watch now](https://www.youtube.com/watch?v=ePH8PGqMSpo) ##### What's Changed ##### Bug fixes :bug: - fix: make sure the GNUMake jobserver is passed to cmake for the llama.cpp build by [@&#8203;cryptk](https://github.com/cryptk) in [https://github.com/mudler/LocalAI/pull/2697](https://github.com/mudler/LocalAI/pull/2697) - Using exec when starting a backend instead of spawning a new process by [@&#8203;a17t](https://github.com/a17t) in [https://github.com/mudler/LocalAI/pull/2720](https://github.com/mudler/LocalAI/pull/2720) - fix(cuda): downgrade default version from 12.5 to 12.4 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2707](https://github.com/mudler/LocalAI/pull/2707) - fix: Lora loading by [@&#8203;vaaale](https://github.com/vaaale) in [https://github.com/mudler/LocalAI/pull/2893](https://github.com/mudler/LocalAI/pull/2893) - fix: short-circuit when nodes aren't detected by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2909](https://github.com/mudler/LocalAI/pull/2909) - fix: do not list txt files as potential models by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2910](https://github.com/mudler/LocalAI/pull/2910) ##### 🖧 P2P area - feat(p2p): Federation and AI swarms by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2723](https://github.com/mudler/LocalAI/pull/2723) - feat(p2p): allow to disable DHT and use only LAN by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2751](https://github.com/mudler/LocalAI/pull/2751) ##### Exciting New Features 🎉 - Allows to remove a backend from the list by [@&#8203;mauromorales](https://github.com/mauromorales) in 
[https://github.com/mudler/LocalAI/pull/2721](https://github.com/mudler/LocalAI/pull/2721)
- ci(Makefile): adds tts in binary releases by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2695](https://github.com/mudler/LocalAI/pull/2695)
- feat: HF `/scan` endpoint by [@&#8203;dave-gray101](https://github.com/dave-gray101) in [https://github.com/mudler/LocalAI/pull/2566](https://github.com/mudler/LocalAI/pull/2566)
- feat(model-list): be consistent, skip known files from listing by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2760](https://github.com/mudler/LocalAI/pull/2760)
- feat(models): pull models from urls by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2750](https://github.com/mudler/LocalAI/pull/2750)
- feat(webui): show also models without a config in the welcome page by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2772](https://github.com/mudler/LocalAI/pull/2772)
- feat(install.sh): support federated install by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2752](https://github.com/mudler/LocalAI/pull/2752)
- feat(llama.cpp): support embeddings endpoints by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2871](https://github.com/mudler/LocalAI/pull/2871)
- feat(functions): parse broken JSON when we parse the raw results, use dynamic rules for grammar keys by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2912](https://github.com/mudler/LocalAI/pull/2912)
- feat(federation): add load balanced option by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2915](https://github.com/mudler/LocalAI/pull/2915)

##### 🧠 Models

- models(gallery): :arrow_up: update checksum by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2701](https://github.com/mudler/LocalAI/pull/2701)
- models(gallery): add l3-8b-everything-cot by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2705](https://github.com/mudler/LocalAI/pull/2705)
- models(gallery): add hercules-5.0-qwen2-7b by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2708](https://github.com/mudler/LocalAI/pull/2708)
- models(gallery): add llama3-8b-darkidol-2.2-uncensored-1048k-iq-imatrix by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2710](https://github.com/mudler/LocalAI/pull/2710)
- models(gallery): add llama-3-llamilitary by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2711](https://github.com/mudler/LocalAI/pull/2711)
- models(gallery): add tess-v2.5-gemma-2-27b-alpha by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2712](https://github.com/mudler/LocalAI/pull/2712)
- models(gallery): add arcee-agent by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2713](https://github.com/mudler/LocalAI/pull/2713)
- models(gallery): add gemma2-daybreak by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2714](https://github.com/mudler/LocalAI/pull/2714)
- models(gallery): add L3-Stheno-Maid-Blackroot-Grand-HORROR-16B-GGUF by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2715](https://github.com/mudler/LocalAI/pull/2715)
- models(gallery): add qwen2-7b-instruct-v0.8 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2717](https://github.com/mudler/LocalAI/pull/2717)
- models(gallery): add internlm2\_5-7b-chat-1m by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2719](https://github.com/mudler/LocalAI/pull/2719)
- models(gallery): add gemma-2-9b-it-sppo-iter3 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2722](https://github.com/mudler/LocalAI/pull/2722)
- models(gallery): add llama-3\_8b_unaligned_alpha by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2727](https://github.com/mudler/LocalAI/pull/2727)
- models(gallery): add l3-8b-lunaris-v1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2729](https://github.com/mudler/LocalAI/pull/2729)
- models(gallery): add llama-3\_8b_unaligned_alpha_rp_soup-i1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2734](https://github.com/mudler/LocalAI/pull/2734)
- models(gallery): add hathor_respawn-l3-8b-v0.8 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2738](https://github.com/mudler/LocalAI/pull/2738)
- models(gallery): add llama3-8b-instruct-replete-adapted by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2739](https://github.com/mudler/LocalAI/pull/2739)
- models(gallery): add llama-3-perky-pat-instruct-8b by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2740](https://github.com/mudler/LocalAI/pull/2740)
- models(gallery): add l3-uncen-merger-omelette-rp-v0.2-8b by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2741](https://github.com/mudler/LocalAI/pull/2741)
- models(gallery): add nymph\_8b-i1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2742](https://github.com/mudler/LocalAI/pull/2742)
- models(gallery): add smegmma-9b-v1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2743](https://github.com/mudler/LocalAI/pull/2743)
- models(gallery): add hathor_tahsin-l3-8b-v0.85 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2762](https://github.com/mudler/LocalAI/pull/2762)
- models(gallery): add replete-coder-instruct-8b-merged by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2782](https://github.com/mudler/LocalAI/pull/2782)
- models(gallery): add arliai-llama-3-8b-formax-v1.0 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2783](https://github.com/mudler/LocalAI/pull/2783)
- models(gallery): add smegmma-deluxe-9b-v1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2784](https://github.com/mudler/LocalAI/pull/2784)
- models(gallery): add l3-ms-astoria-8b by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2785](https://github.com/mudler/LocalAI/pull/2785)
- models(gallery): add halomaidrp-v1.33-15b-l3-i1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2786](https://github.com/mudler/LocalAI/pull/2786)
- models(gallery): add llama-3-patronus-lynx-70b-instruct by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2788](https://github.com/mudler/LocalAI/pull/2788)
- models(gallery): add llamax3 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2849](https://github.com/mudler/LocalAI/pull/2849)
- models(gallery): add arliai-llama-3-8b-dolfin-v0.5 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2852](https://github.com/mudler/LocalAI/pull/2852)
- models(gallery): add tiger-gemma-9b-v1-i1 by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2853](https://github.com/mudler/LocalAI/pull/2853)
- feat: models(gallery): add deepseek-v2-lite by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2658](https://github.com/mudler/LocalAI/pull/2658)
- models(gallery): :arrow_up: update checksum by [@&#8203;localai-bot](https://github.com/localai-bot) in [https://github.com/mudler/LocalAI/pull/2860](https://github.com/mudler/LocalAI/pull/2860)
- models(gallery): add phi-3.1-mini-4k-instruct by [@&#8203;mudler](https://github.com/mudler) in [https://github.com/mudler/LocalAI/pull/2863](https://github.com/mudl

</details>

---

### Configuration

📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these updates again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [Renovate Bot](https://github.com/renovatebot/renovate).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzNy40NDAuNiIsInVwZGF0ZWRJblZlciI6IjM3LjQ0MC42IiwidGFyZ2V0QnJhbmNoIjoibWFzdGVyIiwibGFiZWxzIjpbImF1dG9tZXJnZSIsInVwZGF0ZS9kb2NrZXIvZ2VuZXJhbC9ub24tbWFqb3IiXX0=-->
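
For orientation, below is a minimal sketch of running one of the updated images outside the Helm chart, using Docker Compose. Only the image reference is taken from the update table in this PR; the Compose layout, the 8080 port mapping (LocalAI's commonly documented default), and the volume path are illustrative assumptions.

```yaml
# Hypothetical docker-compose.yaml for the bumped AIO CPU image.
# Assumptions: LocalAI listens on 8080 and keeps models under /build/models.
services:
  local-ai:
    image: docker.io/localai/localai:v2.19.1-aio-cpu
    ports:
      - "8080:8080"
    volumes:
      - ./models:/build/models
```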
1 parent b964014 · commit ab5d38a

2 files changed: 11 additions & 11 deletions


charts/stable/local-ai/Chart.yaml

Lines changed: 2 additions & 2 deletions

@@ -6,7 +6,7 @@ annotations:
   truecharts.org/min_helm_version: "3.11"
   truecharts.org/train: stable
 apiVersion: v2
-appVersion: 2.17.1
+appVersion: 2.19.1
 dependencies:
   - name: common
     version: 24.1.5
@@ -33,4 +33,4 @@ sources:
   - https://github.com/truecharts/charts/tree/master/charts/stable/local-ai
   - https://hub.docker.com/r/localai/localai
 type: application
-version: 11.10.13
+version: 11.11.0
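
To put the chart bump in context, a downstream Chart.yaml could pin the new release roughly as sketched below. Only the chart name and the 11.11.0 version come from the diff above; the repository URL is an assumed TrueCharts Helm repository location, not something this commit defines.

```yaml
# Hypothetical consumer Chart.yaml fragment pinning the bumped chart release.
dependencies:
  - name: local-ai
    version: 11.11.0                             # chart version from this commit
    repository: https://charts.truecharts.org    # assumed repository URL
```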

charts/stable/local-ai/values.yaml

Lines changed: 9 additions & 9 deletions

@@ -1,39 +1,39 @@
 image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.17.1@sha256:0b2ea102e40903eee881272468bc8e81ea9603f4c718e4de99740ed1553daa7f
+  tag: v2.19.1@sha256:a41034c890b260478411826df20deefb9afddb58692100c3e74210b1954ac973
 ffmpegImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.17.1-ffmpeg-core@sha256:1228933ed913618754724b314257359f547860205a7e39f262d27deaa2b2bd35
+  tag: v2.19.1-ffmpeg-core@sha256:c63afdd8ede2b29b8a8fa15dc3b5370d33c28deab45d3334ff07064fe0682662
 cublasCuda12Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.17.1-cublas-cuda12-core@sha256:275af7b66c4dd290e079bfe8b05005317f69d38fe209b2206aab3f21be56eae4
+  tag: v2.19.1-cublas-cuda12-core@sha256:89d0680ec8690fe53cbe4f1608fd514277a425e6dd5e69978d9ac9ee8d9f9d1e
 cublasCuda12FfmpegImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.17.1-cublas-cuda12-ffmpeg-core@sha256:b08a7fe4d92aaf346704ab515d5fe7b0dae021cba1a02df0c57cdda884059b58
+  tag: v2.19.1-cublas-cuda12-ffmpeg-core@sha256:fefb32ece781e700a43771c4b0108a3af0d5a7c0d5c1849c17d7f265e9408b3f
 cublasCuda11Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.17.1-cublas-cuda11-core@sha256:19b48b55407dad78b56435ab19a87fe55ad09b5338df35a06261868fba04a2ce
+  tag: v2.19.1-cublas-cuda11-core@sha256:7499e2e610e7e3870185fc1337f02b837e20a52cba85b9088e666e1c37690f88
 cublasCuda11FfmpegImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.17.1-cublas-cuda11-ffmpeg-core@sha256:a3249d2e366407be99fef8b5fa699cc6607bb40d7a4bfcfe80ffaebcb15a8e11
+  tag: v2.19.1-cublas-cuda11-ffmpeg-core@sha256:3b868811ca8ee589eb4ea86bcabf0e991794a57a2aa1aa25ab6c023e4ba1811d
 allInOneCuda12Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.17.1-aio-gpu-nvidia-cuda-12@sha256:4d8b84a627791ea96194c3499374a2df492aeed698db4751ec427666293ddc9c
+  tag: v2.19.1-aio-gpu-nvidia-cuda-12@sha256:07fc7e18e871b711f42688a6c98fdf588f5a8711a754b44c1afbda663cc2b35d
 allInOneCuda11Image:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.17.1-aio-gpu-nvidia-cuda-11@sha256:68c1aced181282bd39ece7f3ad4eae6b0140f672c1c8c4119f694f352730e8b4
+  tag: v2.19.1-aio-gpu-nvidia-cuda-11@sha256:78c99ca29bf1cccd3586b54ae8fe863a44f46e3e5b27e8e2e0d9b18e20e990dc
 allInOneCpuImage:
   repository: docker.io/localai/localai
   pullPolicy: IfNotPresent
-  tag: v2.17.1-aio-cpu@sha256:a98a793f17dc01c4cf20b1b22155f80dfdd81e58f14e8e714101741ae0976b35
+  tag: v2.19.1-aio-cpu@sha256:f28abab3ab6a04a7e569cb824eb1e012312eeeab8cfaad4a69ef1ffe8910199c
 securityContext:
   container:
     runAsNonRoot: false
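
As a usage note, a downstream values override that pins the default image to the new release could look like the sketch below. The keys and the tag/digest mirror the values.yaml diff above; how the override is applied is left to the usual Helm workflow.

```yaml
# Hypothetical user values.yaml override, reusing only keys present in the diff above.
image:
  repository: docker.io/localai/localai
  pullPolicy: IfNotPresent
  tag: v2.19.1@sha256:a41034c890b260478411826df20deefb9afddb58692100c3e74210b1954ac973
```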

0 commit comments
