Visual Language Model and Task Planning
Dnr: NAISS 2025/22-933
Type: NAISS Small Compute
Principal Investigator: Paolo Forte
Affiliation: Örebro universitet
Start Date: 2025-06-23
End Date: 2025-10-01
Primary Classification: 10210: Artificial Intelligence

Abstract

Large Language Models (LLMs) excel at generating contextually relevant text but lack reliable logical reasoning: they rely on statistical patterns rather than logical inference, which makes them unreliable for structured decision-making. Integrating LLMs with task planning can address this limitation by combining their natural language understanding with the precise, goal-oriented reasoning of planners. We propose ViPlanH, a hybrid system that leverages Vision Language Models (VLMs) to extract high-level semantic information from visual and textual inputs while delegating logical reasoning to classical planners. ViPlanH uses the VLM to generate syntactically correct and semantically meaningful HDDL (Hierarchical Domain Definition Language) problem files from an image, the domain file, and a natural language instruction; a task planner then turns each problem file into an executable plan. The entire process is embedded within a behavior tree framework, enhancing efficiency, reactivity, replanning, modularity, and flexibility.
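To make the pipeline concrete, here is a minimal Python sketch of the flow described above: a VLM produces an HDDL problem file from a scene image, the domain file, and an instruction; a task planner turns it into a plan; and the plan is executed from a behavior tree. Everything here is illustrative, not the project's actual code: `query_vlm` and `run_planner` are hypothetical stubs with canned outputs, the HDDL fragment is an invented example, and py_trees is used only as one possible behavior-tree framework (the abstract does not name one).

```python
# Illustrative sketch of a ViPlanH-style pipeline. The VLM and the planner
# are stubbed with canned outputs; only the wiring between the stages is
# meant to reflect the architecture described in the abstract.
import py_trees


def query_vlm(image_path: str, domain_hddl: str, instruction: str) -> str:
    """Stub for the VLM call (assumption): a real implementation would
    prompt a Vision Language Model with the scene image, the fixed HDDL
    domain, and the instruction, asking for a valid HDDL problem file."""
    return """(define (problem fetch-cup)
  (:domain kitchen)
  (:objects cup1 - cup table1 - location)
  (:htn :subtasks (and (deliver cup1 table1)))
  (:init (at cup1 counter)))"""  # invented example problem file


def run_planner(domain_hddl: str, problem_hddl: str) -> list[str]:
    """Stub for the task planner (assumption): a real implementation would
    hand both files to an HTN solver and parse the plan it returns."""
    return ["pick cup1", "move counter table1", "place cup1 table1"]


class PlanAction(py_trees.behaviour.Behaviour):
    """Behavior-tree leaf that stands in for executing one planned action."""

    def update(self) -> py_trees.common.Status:
        print(f"executing: {self.name}")
        return py_trees.common.Status.SUCCESS


def build_tree(plan: list[str]) -> py_trees.trees.BehaviourTree:
    # A memoried Sequence ticks the actions in order and resumes where it
    # left off; on failure, a replanning cycle could swap in a new subtree.
    root = py_trees.composites.Sequence(name="plan", memory=True)
    root.add_children([PlanAction(name=action) for action in plan])
    return py_trees.trees.BehaviourTree(root)


domain = "(define (domain kitchen) ...)"  # hand-written domain file (elided)
problem = query_vlm("scene.png", domain, "bring the cup to the table")
tree = build_tree(run_planner(domain, problem))
for _ in range(3):
    tree.tick()
```

One design point this sketch highlights: keeping the domain file hand-written and verifiable while asking the VLM only for the problem file limits the model's job to grounding objects and goals, which is where its visual-semantic strengths lie.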