This repository contains minimal code and resources for running inference with the Kokoro-82M text-to-speech model, using ONNX Runtime with optimized ONNX weights.
Supported languages: en-us and en-gb.

Clone the repository:
```bash
git clone https://github.com/yakhyo/kokoro-82m.git
cd kokoro-82m
```
Install dependencies:
```bash
pip install -r requirements.txt
```
Install espeak for text-to-speech functionality:
Linux:
```bash
apt-get install espeak -y
```
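After installing, you can verify that the `espeak` binary is on your `PATH` (text phonemization depends on it). A minimal Python check:

```python
import shutil

# Look up the espeak executable on PATH; returns None if it is not installed.
espeak_path = shutil.which("espeak")

if espeak_path is None:
    print("espeak not found -- install it before running inference")
else:
    print(f"espeak found at {espeak_path}")
```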
```bash
docker build -t kokoro-docker . && docker run --rm -p 7860:7860 kokoro-docker
```
What this does:

- Builds a Docker image tagged `kokoro-docker`.
- Maps port 7860 in the container to port 7860 on the host.
- Removes the container automatically when it exits (`--rm`).

Access the app at http://localhost:7860 once it is running.
| Filename | Description | Size |
|---|---|---|
| `kokoro-quant.onnx` | Mixed-precision model (faster) | 169MB |
| `kokoro-v0_19.onnx` | Original model | 330MB |
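To make the tradeoff in the table concrete, here is a small hypothetical helper (not part of the repository) that selects a weights file depending on whether speed or full precision matters more:

```python
# Hypothetical helper illustrating the speed/size tradeoff from the table above.
MODELS = {
    "kokoro-quant.onnx": {"description": "Mixed-precision model (faster)", "size_mb": 169},
    "kokoro-v0_19.onnx": {"description": "Original model", "size_mb": 330},
}

def pick_model(prefer_speed: bool = True) -> str:
    """Return the quantized model for speed, the original for full precision."""
    return "kokoro-quant.onnx" if prefer_speed else "kokoro-v0_19.onnx"

print(pick_model(prefer_speed=True))   # kokoro-quant.onnx
```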
Run inference using the provided Jupyter notebook, or specify the input text and model weights in `inference.py` and run:
```bash
python inference.py
```
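Under the hood, `inference.py` loads the ONNX weights into an ONNX Runtime session. A minimal sketch of that step (the function name and default path here are illustrative, not the repository's actual API):

```python
import os

def load_session(model_path: str = "kokoro-quant.onnx"):
    """Create an ONNX Runtime session for the given weights, or return None
    if the weights file is missing. The import is deferred so this sketch
    can be loaded even before onnxruntime or the weights are available."""
    if not os.path.exists(model_path):
        return None
    import onnxruntime as ort
    return ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

session = load_session()
```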
Run the command below to start the Gradio app:

```bash
python app.py
```
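`app.py` wraps inference in a Gradio interface. The sketch below shows the general shape of such an app; the placeholder `synthesize` function and component choices are assumptions, and the actual `app.py` may differ:

```python
import importlib.util

def synthesize(text: str):
    """Placeholder for the real TTS call; app.py would run the ONNX model
    here and return (sample_rate, audio_array)."""
    return None

def build_demo():
    # Deferred import so this sketch can be read without gradio installed.
    import gradio as gr
    return gr.Interface(fn=synthesize, inputs="text", outputs="audio")

demo = None
if importlib.util.find_spec("gradio") is not None:
    demo = build_demo()
    # demo.launch(server_name="0.0.0.0", server_port=7860)
    # then open http://localhost:7860
```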
This project is licensed under the MIT License. The model weights are licensed under the Apache 2.0 License.