Hydra-MoE: A new class of Open-Source Mixture of Experts
A Skunkworks Project
Skunkworks OSS introduces Hydra-MoE, a Mixture of Experts (MoE) architecture that uses LoRA/QLoRA experts to scale and augment the performance of base language models. The central aim of this research is to transform any base language model into a lightweight, efficient MoE framework built from swappable QLoRA expert adapters, achieving performance that rivals state-of-the-art models while running on commodity consumer hardware.
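To make the idea concrete, below is a minimal sketch of the general pattern the paragraph describes: a frozen base layer augmented with several low-rank (LoRA-style) expert adapters, with a small gating network routing each input to an expert. The class names, shapes, and the top-1 routing rule are illustrative assumptions, not the project's actual implementation.

```python
# Illustrative sketch only: frozen base layer + gated LoRA-style experts.
# All names and the top-1 routing choice are assumptions for this example.
import torch
import torch.nn as nn


class LoRAExpert(nn.Module):
    """A single low-rank adapter: delta(x) = (x A^T B^T) * (alpha / r)."""

    def __init__(self, d_model: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_model) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_model, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model) -> low-rank update of the same shape
        return (x @ self.A.T @ self.B.T) * self.scale


class MoELoRALayer(nn.Module):
    """Frozen base projection plus a gated mixture of LoRA experts."""

    def __init__(self, d_model: int, num_experts: int = 4):
        super().__init__()
        self.base = nn.Linear(d_model, d_model)
        self.base.weight.requires_grad_(False)  # base model stays frozen
        self.base.bias.requires_grad_(False)
        self.experts = nn.ModuleList(LoRAExpert(d_model) for _ in range(num_experts))
        self.gate = nn.Linear(d_model, num_experts)  # router over experts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)   # (batch, num_experts)
        # Top-1 routing: each input is served by its single best expert.
        top_w, top_idx = weights.max(dim=-1)
        out = self.base(x)
        delta = torch.zeros_like(out)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                delta[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out + delta


if __name__ == "__main__":
    layer = MoELoRALayer(d_model=64, num_experts=4)
    y = layer(torch.randn(8, 64))
    print(y.shape)  # torch.Size([8, 64])
```

Because only the adapters and the gate carry trainable parameters, experts can be trained, stored, and swapped independently of the frozen base model, which is what keeps the approach lightweight enough for consumer hardware.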


Core Team

Far El (@Far__El)
Prateek Yadav (@Prateeky2806)
Alpay Ariyak (@AlpayAriyak)
Artem Yatsenko (@Sumo43_)
Harrison Kinsley (@Sentdex)
Nisten (@Nisten)
Yaroslav Shipilov (@TheSlavant)
Teknium (@Teknium1)
And many more contributors!