ClearView

Task-Routed Mixture of Experts

Research/Academic • NLP/LLM

About The Project

Built a modular mixture-of-experts (MoE) pipeline in which a TaskClassifier routes each input to a specialized expert model: BART-large-cnn for summarization and BERT-base for sentiment analysis. Because only the relevant expert runs per input, the system stays efficient and scalable, and new experts can be added without retraining the entire stack. The design includes hooks for future multi-modal experts (vision/audio).
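The routing idea above can be sketched as follows. This is an illustrative Python skeleton, not the project's actual code: the stub classifier and toy experts stand in for the learned TaskClassifier and the BART-large-cnn / BERT-base experts, and all names here are assumptions.

```python
from typing import Callable, Dict


class TaskClassifier:
    """Stub router. In the real pipeline this is a learned classifier."""

    def predict(self, text: str) -> str:
        # Toy heuristic standing in for the trained model:
        # long inputs -> summarization, short inputs -> sentiment.
        return "summarization" if len(text.split()) > 20 else "sentiment"


class MoEPipeline:
    """Dispatches each input to exactly one expert chosen by the classifier."""

    def __init__(self, classifier: TaskClassifier,
                 experts: Dict[str, Callable[[str], str]]):
        self.classifier = classifier
        # New experts (e.g. future vision/audio models) can be registered
        # here without touching or retraining the existing ones.
        self.experts = experts

    def __call__(self, text: str) -> str:
        task = self.classifier.predict(text)
        return self.experts[task](text)


# Toy experts standing in for BART-large-cnn and BERT-base.
experts = {
    "summarization": lambda t: t.split(".")[0] + ".",  # first sentence only
    "sentiment": lambda t: "positive" if "good" in t.lower() else "negative",
}

pipe = MoEPipeline(TaskClassifier(), experts)
```

Because the experts are plain callables behind a string key, swapping a stub for a real Hugging Face model only changes the registry entry, not the routing logic.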

Achievements

  • 98.95% task-routing accuracy

  • ~30% ROUGE-1 improvement on summarization (BART-large-cnn)

  • 92.1% accuracy on SST-2 sentiment (BERT-base)

  • Extensible design for multi-modal (image) experts

Links

sunny.sunho.park@gmail.com

unsplash.com/@reddfrancisco