GaggiX on Dec 11, 2023 | on: Mixtral of experts
That doesn't explain how we know that GPT-4 is a sparse MoE model with X experts of Y size and using Z of them during inference.
Laaas on Dec 11, 2023
IIRC it was leaked, or confirmed by accident, or something like that.
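[For readers unfamiliar with the terminology in the parent comment, here is a minimal sketch of sparse top-k mixture-of-experts routing. Everything in it is illustrative: the expert count of 8 and k=2 follow Mixtral's published configuration, not any confirmed GPT-4 figures, and the toy linear "experts" stand in for full FFN blocks.]

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(token, experts, router_weights, k=2):
    """Route one token through the top-k of N experts.

    Only k expert networks actually run per token, which is why a
    sparse MoE's inference cost tracks k * expert_size rather than
    N * expert_size (the "using Z of them during inference" point).
    """
    logits = router_weights @ token              # one routing score per expert
    topk = np.argsort(logits)[-k:]               # indices of the k highest-scoring experts
    gates = softmax(logits[topk])                # renormalize gates over the selected experts
    return sum(g * experts[i](token) for g, i in zip(gates, topk))

# Toy example: 8 experts, 2 active per token (Mixtral's shape; hypothetical here).
d = 16
rng = np.random.default_rng(0)
experts = [(lambda W: (lambda x: W @ x))(rng.normal(size=(d, d))) for _ in range(8)]
router = rng.normal(size=(8, d))
out = moe_forward(rng.normal(size=d), experts, router, k=2)
print(out.shape)  # (16,)
```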