Do Children's Intelligence Tests Pose Challenges for MLLMs? KidGym: A 2D Grid-Based Reasoning Benchmark for MLLMs

Abstract

Multimodal Large Language Models (MLLMs) combine the linguistic strengths of LLMs with the ability to process multimodal data, enabling them to address a broader range of tasks. This progression parallels the developmental trajectory in children from language acquisition to visual perception. Inspired by the Wechsler Intelligence Scales, we introduce KidGym, a comprehensive 2D grid-based benchmark for assessing five essential capabilities of MLLMs: execution, perception reasoning, learning, memory, and planning. The benchmark comprises 12 unique tasks, each targeting at least one core capability and specifically designed to gauge MLLMs' adaptability and developmental potential, mirroring the stages of children's cognitive growth. Our tasks also span diverse scenarios and objects with randomly generated layouts, enabling a more accurate and robust evaluation of MLLM capabilities. KidGym is fully user-customizable and extensible, allowing researchers to create new evaluation scenarios and adjust difficulty levels to keep pace with the rapidly growing MLLM community. By evaluating state-of-the-art MLLMs on KidGym, we obtain significant insights into their capabilities and reveal important limitations of current models.
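To make the customizability and random layout generation described above concrete, the following is a minimal, hypothetical sketch (not the actual KidGym API; all names and parameters are assumptions) of how a user-configurable 2D grid task with a seeded random layout might look:

```python
# Hypothetical sketch only; KidGym's real interface may differ.
import random
from dataclasses import dataclass


@dataclass
class GridTaskConfig:
    grid_size: int = 6                       # difficulty knob: larger grid = harder
    objects: tuple = ("key", "door", "gem")  # scenario objects to scatter on the grid
    seed: int | None = None                  # fix the seed for reproducible layouts


def generate_layout(cfg: GridTaskConfig) -> list[list[str]]:
    """Place each object at a distinct random cell; '.' marks an empty cell."""
    rng = random.Random(cfg.seed)
    grid = [["." for _ in range(cfg.grid_size)] for _ in range(cfg.grid_size)]
    cells = rng.sample(
        [(r, c) for r in range(cfg.grid_size) for c in range(cfg.grid_size)],
        k=len(cfg.objects),
    )
    for obj, (r, c) in zip(cfg.objects, cells):
        grid[r][c] = obj[0].upper()          # one-letter marker per object
    return grid


if __name__ == "__main__":
    layout = generate_layout(GridTaskConfig(grid_size=5, seed=0))
    print("\n".join(" ".join(row) for row in layout))
```

Under this assumed design, a researcher would adjust difficulty by changing `grid_size` or the object set, and swap in different scenarios by extending the configuration, in line with the extensibility the benchmark describes.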

Results

BibTeX