The Limitations and Failure Cases of DreamLLM: How Far Can it Go? | HackerNoon
Briefly

DreamLLM represents a significant step toward adaptable, creative, and foundational Multimodal Large Language Models (MLLMs), yet it faces notable limitations in model scale and data quality.
The DreamLLM experiments are conducted primarily with 7B-parameter models. While these yield impressive results, they leave open how much larger models could improve the generation of more complex outputs.
Training data quality is paramount. Noise, such as commercial advertisements in datasets like MMC4, can degrade the MLLM's performance, calling for more meticulous data curation.
Prompt sensitivity remains a challenge: MLLMs depend on well-crafted human prompts and can perform suboptimally without appropriate guidance, highlighting the need for improved interaction techniques.