
πŸ€– Text-based AI LLM Guide

πŸ’‘ LLM Software

| Resource | Category | Link |
| --- | --- | --- |
| TheBloke on HuggingFace | 🌐 Download Large Language Models | Link |
| oobabooga's text-generation-webui | πŸ€– Text-based AI/LLM Software | Link |
| ggerganov's llamacpp | πŸ€– Text-based AI/LLM Software | Link |
| LM Studio | πŸ€– Text-based AI/LLM Software | Link |
| lostruin's koboldcpp | πŸ€– Text-based AI/LLM Software | Link |
| nomic-ai's gpt4all | πŸ€– Text-based AI/LLM Software | Link |
| koboldai's koboldclient | πŸ€– Text-based AI/LLM Software | Link |
| TavernAI | πŸ€– Text-based AI/LLM Software | Link |
| SillyTavern | πŸ€– Text-based AI/LLM Software | Link |
| Replicate | πŸ€– Text-based AI/LLM Software | Link |

πŸ”‹ Compute Resources

Do you have a GPU or a capable CPU? Try running or self-hosting GGUF and/or GPTQ models at home! If you're short on power, you can find extra compute below.

⚑ Power

| Resource | Category | Link |
| --- | --- | --- |
| db0's AI Horde | Free AI Horde Compute | Link |
| Petals | Free Distributed AI Compute | Link |
| Runpod.io | Rent-a-Server / GPU | Link |
| Vast.ai | Rent-a-Server / GPU | Link |

🌐 Download

| Resource | Category | Link |
| --- | --- | --- |
| TheBloke | Download Large Language Models | Link |

⏳ Install

| Resource | Category | Link |
| --- | --- | --- |
| How-to-Install | oobabooga's text-generation-webui | Link |

πŸ€– Text Generation Resources

These platforms empower you to run text-generating large language models (LLMs) on your personal servers, desktops, or laptops.

Each application offers different benefits and features. If you're not sure where to start, oobabooga serves as a safe launching point. As you grow familiar with ooba, consider exploring other platforms to see which suits you best (and runs best on your machine).

Features and user experience may differ from one application to the next.

Download Text-Generation Models


text-generation-webui

text-generation-webui - a community-favorite Gradio web UI by oobabooga designed to run almost any free, open-source large language model downloaded from HuggingFace, including (but not limited to) LLaMA, llama.cpp, GPT-J, Pythia, OPT, and many others. Its goal is to become the AUTOMATIC1111/stable-diffusion-webui of text generation, and it is highly compatible with many model formats.


llama.cpp

Plain C/C++ implementation without dependencies. Apple silicon first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks. CUDA, Metal and OpenCL GPU backend support.
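
If you'd rather drive a GGUF model from a script than from the command line, the llama-cpp-python bindings wrap the same library. A minimal sketch, assuming you've already downloaded a GGUF file (the path and sampling settings below are placeholders, not recommendations):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load any GGUF model you have on disk (placeholder path).
llm = Llama(model_path="./models/your-model.Q4_K_M.gguf", n_ctx=2048)

# One-shot completion; tune max_tokens and stop sequences to taste.
result = llm(
    "Q: What is a quantized model? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(result["choices"][0]["text"])
```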


LM Studio

Discover, download, and run local LLMs entirely offline. Use models through the in-app Chat UI or an OpenAI-compatible local server, with any compatible model files from HuggingFace repositories.
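
Because the local server speaks the OpenAI API, most existing OpenAI client code only needs a base-URL change to talk to it. A rough sketch in Python - the port and model name are assumptions, so check the server tab in the app for your actual values:

```python
# pip install openai
from openai import OpenAI

# Point the standard OpenAI client at the local LM Studio server.
# Port 1234 and the model name are assumptions; LM Studio serves whichever model is loaded.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "Explain what a GGUF file is in one sentence."}],
)
print(response.choices[0].message.content)
```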


Exllama

A standalone Python/C++/CUDA implementation of Llama for use with 4-bit GPTQ weights, designed to be fast and memory-efficient on modern GPUs.


gpt4all

Open-source assistant-style large language models that run locally on your CPU. GPT4All is an ecosystem to train and deploy powerful and customized large language models that run locally on consumer-grade processors.
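
GPT4All also ships Python bindings for running the same models from code. A minimal sketch - the model filename is a placeholder for any model in the GPT4All catalog, which the library downloads on first use:

```python
# pip install gpt4all
from gpt4all import GPT4All

# Placeholder model name; swap in any model from the GPT4All catalog.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")

with model.chat_session():
    reply = model.generate("What are the trade-offs of CPU-only inference?", max_tokens=128)
    print(reply)
```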


TavernAI

The original project that SillyTavern was forked from. This chat interface offers very similar functionality, but less cross-client compatibility with other chat and API interfaces than SillyTavern.


SillyTavern

Developer-friendly, Multi-API (KoboldAI/CPP, Horde, NovelAI, Ooba, OpenAI+proxies, Poe, WindowAI(Claude!)), Horde SD, System TTS, WorldInfo (lorebooks), customizable UI, auto-translate, and more prompt options than you'd ever want or need. Optional Extras server for more SD/TTS options + ChromaDB/Summarize. Based on a fork of TavernAI 1.2.8.


koboldcpp

A self-contained distributable from Concedo that exposes llama.cpp function bindings, allowing it to be used via a simulated Kobold API endpoint. What does that mean? You get llama.cpp with a fancy UI, persistent stories, editing tools, save formats, memory, world info, author's note, characters, scenarios, and everything Kobold and Kobold Lite have to offer - all in a tiny package around 20 MB in size, excluding model weights.
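
Because koboldcpp emulates the Kobold API, front ends like SillyTavern can connect to it over HTTP, and you can hit the endpoint directly as well. A rough sketch, assuming koboldcpp is running locally on its default port (adjust the URL if you launched it differently):

```python
# pip install requests
import requests

# Port 5001 and these sampling values are assumptions based on koboldcpp defaults.
url = "http://localhost:5001/api/v1/generate"
payload = {
    "prompt": "Once upon a time,",
    "max_length": 80,
    "temperature": 0.7,
}

resp = requests.post(url, json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```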


KoboldAI-Client

This is a browser-based front-end for AI-assisted writing with multiple local & remote AI models. It offers the standard array of tools, including Memory, Author's Note, World Info, Save & Load, adjustable AI settings, formatting options, and the ability to import existing AI Dungeon adventures. You can also turn on Adventure mode and play the game like AI Dungeon Unleashed.


h2ogpt

h2oGPT is a large language model (LLM) fine-tuning framework and chatbot UI with document question-answering capabilities. Documents help ground LLMs against hallucinations by providing them context relevant to the instruction. h2oGPT is a fully permissive Apache V2 open-source project for 100% private and secure use of LLMs and document embeddings for document question-answering.


πŸ‘©β€πŸš€ AI Communities πŸ‘¨β€πŸš€

Reddit

Fediverse

HYPERION (Coming Soon!)

- πŸŽ“ Enroll: πŸ’« HyperTech Academy
- β˜„οΈ Apply: πŸ”¬ Hyperion Technologies
- πŸ•ΉοΈ Play: β˜„οΈ HYPERION

✍️ Contribute to FOSAI β–² XYZ

First, clone the repo to your device and then run `pnpm i` in your terminal of choice to install the dependencies.

Then, run `pnpm dev` to start the development server and visit `localhost:3000`.

From here, you should be able to see the 'pages' folder, which contains all of the webpage content you see here (editable in simple markdown).