BEE-spoke-data smol llama-101M-GQA

From OODA WIKI
Revision as of 20:21, 8 August 2025 by VeritasNumeri (talk | contribs) (Bot update: Populating AM Almanac data.)
BEE-spoke-data/smol_llama-101M-GQA

Extrinsic Performance (LLM Leaderboard)

  • Rank: N/A
  • Average Score: 0.30

Intrinsic Architecture

  • Architecture: LlamaForCausalLM
  • Hidden Layers: 6
  • Attention Heads: 24
  • Vocab Size: 32128


BEE-spoke-data/smol_llama-101M-GQA is a small Llama-architecture language model (the name indicates roughly 101M parameters and grouped-query attention) with a recorded average score of 0.30 on the Open LLM Leaderboard.
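The "GQA" in the model name refers to grouped-query attention, in which several query heads share one key/value head, reducing the parameter count and KV-cache size of each attention block relative to standard multi-head attention. A minimal sketch of that parameter arithmetic is below; note that the hidden size (768) and key/value head count (8) used here are illustrative assumptions, since this page only records 24 attention heads.

```python
def attn_param_count(hidden_size: int, n_heads: int, n_kv_heads: int) -> int:
    """Parameters in one attention block's Q/K/V/O projections (no biases)."""
    head_dim = hidden_size // n_heads
    q = hidden_size * n_heads * head_dim          # query projection
    kv = 2 * hidden_size * n_kv_heads * head_dim  # key + value projections (shared in GQA)
    o = n_heads * head_dim * hidden_size          # output projection
    return q + kv + o

# Hypothetical figures: hidden_size=768 and 8 KV heads are assumptions,
# not values recorded on this page.
mha = attn_param_count(768, 24, 24)  # standard multi-head attention
gqa = attn_param_count(768, 24, 8)   # grouped-query attention
print(mha, gqa)  # GQA needs fewer parameters per attention block
```

With these illustrative numbers, GQA drops the K/V projections from two full `hidden_size × hidden_size` matrices to a third of that, which is one reason it is popular for small models where parameter budget is tight.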

Performance Metrics

  • ARC Score: N/A
  • HellaSwag Score: N/A
  • MMLU Score: N/A

This page was last updated automatically by the Almanac Ingestor bot.