Transparency in HRI refers to making the current state of a robot or intelligent agent understandable to a human user. Equipping robots with transparency mechanisms improves both the quality of the interaction and the user experience.
Explanations are an effective way to make a robot's decision making transparent. We introduce a framework that attaches natural language labels to regions of the robot's continuous state space and uses them to automatically generate local explanations of the robot's policy.
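The abstract does not specify how the framework is implemented; as a minimal illustrative sketch only, one could represent labeled regions as axis-aligned boxes in the state space and explain an action by listing the labels of the regions that contain the current state. The names `LabeledRegion` and `explain` below are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class LabeledRegion:
    """A region of the continuous state space with a natural language label."""
    label: str            # e.g. "close to the obstacle"
    low: Sequence[float]  # lower bound per state dimension
    high: Sequence[float] # upper bound per state dimension

    def contains(self, state: Sequence[float]) -> bool:
        # Axis-aligned box membership test.
        return all(lo <= s <= hi for s, lo, hi in zip(state, self.low, self.high))

def explain(state: Sequence[float], action_name: str,
            regions: List[LabeledRegion]) -> str:
    """Generate a local explanation of the policy's action in `state`."""
    labels = [r.label for r in regions if r.contains(state)]
    if not labels:
        return f"I chose to {action_name}."
    return f"I chose to {action_name} because I am {' and '.join(labels)}."

# Example usage with a 2-D position state (values are illustrative):
regions = [
    LabeledRegion("close to the charging station", low=(0.0, 0.0), high=(1.0, 1.0)),
    LabeledRegion("in the corridor", low=(0.0, 0.0), high=(5.0, 1.5)),
]
print(explain((0.5, 0.8), "slow down", regions))
# -> I chose to slow down because I am close to the charging station and in the corridor.
```

Such a scheme yields local explanations in the sense that each explanation depends only on the region labels matching the state at hand, not on the policy's global structure.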
We conducted a pilot study investigating how the generated explanations helped users understand and reproduce a robot's policy in a debugging scenario.