Using Chinda LLM 4B with Ollama - Complete User Guide
🎯 Introduction
Chinda LLM 4B is an open-source Thai language model developed by the iApp Technology team. Built on the Qwen3-4B architecture, it can reason and respond in Thai with high accuracy.
Ollama is a command-line tool that makes it easy to run large language models locally on your own computer with minimal setup.
🚀 Step 1: Installing Ollama
Download and Install Ollama

For macOS and Linux:
curl -fsSL https://ollama.com/install.sh | sh
For Windows:
- Go to https://ollama.com/download
- Download the Windows installer
- Run the installer and follow the installation wizard
Verify Installation
After installation, open your terminal or command prompt and run:
ollama --version
You should see the Ollama version information if the installation was successful.
$ ollama --version
ollama version is 0.9.0
🔍 Step 2: Download Chinda LLM 4B Model
Downloading the Model
Once Ollama is installed, you can download Chinda LLM 4B with a simple command:
ollama pull iapp/chinda-qwen3-4b
The model is approximately 2.5GB in size and will take some time to download depending on your internet speed. You'll see a progress bar showing the download status.
$ ollama pull iapp/chinda-qwen3-4b
pulling manifest
pulling f2c299c8384c: 100% ▕██████████████████▏ 2.5 GB
pulling 62fbfd9ed093: 100% ▕██████████████████▏ 182 B
pulling 70a7c2ca54f5: 100% ▕██████████████████▏ 159 B
pulling c79654219fbe: 100% ▕██████████████████▏ 74 B
pulling e8fb2837968f: 100% ▕██████████████████▏ 487 B
verifying sha256 digest
writing manifest
success
Verify Model Download
To check if the model was downloaded successfully:
ollama list
You should see iapp/chinda-qwen3-4b:latest in the list of available models.
$ ollama list
NAME                          ID              SIZE      MODIFIED
iapp/chinda-qwen3-4b:latest   f66773e50693    2.5 GB    35 seconds ago
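If you are scripting your setup, the same check can be done against Ollama's REST API: a GET request to /api/tags returns the locally installed models as JSON. Below is a minimal sketch; the helper name model_installed is our own, and only the final lines require a running Ollama server:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def model_installed(tags_response: dict, name: str) -> bool:
    """Return True if `name` appears in a parsed /api/tags response."""
    return any(m.get("name") == name for m in tags_response.get("models", []))

if __name__ == "__main__":
    # Requires a running Ollama server on localhost:11434.
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
        tags = json.load(resp)
    print(model_installed(tags, "iapp/chinda-qwen3-4b:latest"))
```

The parsing helper is kept separate from the network call so it can be reused (and tested) without a live server.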
⚙️ Step 3: Basic Usage
Start Chatting with Chinda LLM
To start an interactive chat session with Chinda LLM 4B:
ollama run iapp/chinda-qwen3-4b
This opens an interactive session where you can type questions in Thai and receive responses immediately.
Example Conversation
ollama run iapp/chinda-qwen3-4b
>>> สวัสดีครับ ช่วยอธิบายเกี่ยวกับปัญญาประดิษฐ์ให้ฟังหน่อย
# Chinda LLM will respond in Thai explaining artificial intelligence
>>> /bye # Type this to exit the chat
Single Question Mode
If you want to ask a single question without entering chat mode, pass the prompt as an argument:
ollama run iapp/chinda-qwen3-4b "อธิบายเกี่ยวกับปัญญาประดิษฐ์ให้ฟังหน่อย"
(The example prompt asks: "Please explain artificial intelligence.")
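Single-question mode is convenient for scripting. As one possible approach, the invocation can be wrapped from Python via subprocess; the helper names below are our own, and running the wrapper assumes ollama is on your PATH:

```python
import subprocess

MODEL = "iapp/chinda-qwen3-4b"

def build_command(prompt: str, model: str = MODEL) -> list:
    """Build the argv for a one-shot `ollama run` invocation."""
    return ["ollama", "run", model, prompt]

def ask_chinda(prompt: str) -> str:
    """Run the model once and return its answer (requires Ollama installed)."""
    result = subprocess.run(
        build_command(prompt), capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(ask_chinda("อธิบายเกี่ยวกับปัญญาประดิษฐ์ให้ฟังหน่อย"))
```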
🌐 Step 4: API Server Usage
Starting the Ollama API Server
For developers who want to integrate Chinda LLM into their applications:
ollama serve
This starts the Ollama API server at http://localhost:11434. Note that on most installations Ollama already runs as a background service, so the server may already be listening.
Using the API with curl
Basic API Call:
curl http://localhost:11434/api/generate -d '{
  "model": "iapp/chinda-qwen3-4b",
  "prompt": "สวัสดีครับ",
  "stream": false
}'
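The same request can be issued from Python using only the standard library. This sketch mirrors the curl call above; build_payload and generate are our own helper names, and the final call assumes the server is running locally:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def build_payload(prompt: str, model: str = "iapp/chinda-qwen3-4b") -> bytes:
    """JSON body for /api/generate with streaming disabled."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def generate(prompt: str) -> str:
    """POST to /api/generate and return the model's text (server must be running)."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(generate("สวัสดีครับ"))
```

With "stream": false the server returns one JSON object whose response field holds the full answer; with streaming enabled it instead returns one JSON object per line as tokens arrive.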
Chat API Call:
curl http://localhost:11434/api/chat -d '{
  "model": "iapp/chinda-qwen3-4b",
  "messages": [
    {
      "role": "user",
      "content": "