Tokumei-no-hito | 7 months ago | on: Chatterbox TTS
Thanks for sharing. Are some local models better than others? Can small models work well, or do you want 8B+?
vunderba | 7 months ago
So in my experience smaller models tend to produce worse results, BUT I actually got really good transcription cleanup with chain-of-thought (CoT) models like Qwen, even quantized down to 8b.
dragonwriter | 7 months ago
I think the 8B+ question was about parameter count (8 billion+ parameters), not quantization level (8 bits per weight).
vunderba | 7 months ago
Yeah, I should have been more specific: Qwen 8B at a 5_K_M quant worked very well.
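
For anyone curious what that cleanup step can look like in practice, here is a minimal sketch using llama-cpp-python with a quantized Qwen GGUF. The model filename, prompt, and sample input are illustrative placeholders, not anything vunderba confirmed using:

    # Minimal sketch: transcript cleanup with a quantized local model via llama-cpp-python.
    # Assumes a GGUF quant of a Qwen instruct model is on disk; path and prompt are placeholders.
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen-8b-q5_k_m.gguf",  # placeholder filename for an ~8B Q5_K_M quant
        n_ctx=8192,                        # enough context for a chunk of transcript
        verbose=False,
    )

    def clean_transcript(raw_text: str) -> str:
        """Ask the model to fix punctuation, casing, and obvious ASR errors without rewriting content."""
        resp = llm.create_chat_completion(
            messages=[
                {"role": "system",
                 "content": "You clean up speech-to-text transcripts: fix punctuation, "
                            "capitalization, and obvious transcription errors. Do not add "
                            "or remove information."},
                {"role": "user", "content": raw_text},
            ],
            temperature=0.2,  # keep it conservative; this is cleanup, not rewriting
        )
        return resp["choices"][0]["message"]["content"]

    print(clean_transcript("so uh yeah the meeting is at three pee em on tuesday i think"))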