Aaquib Tabrez

University of Colorado Boulder

H-index: 5

United States

About Aaquib Tabrez

Aaquib Tabrez is a distinguished researcher at the University of Colorado Boulder, with an h-index of 5 overall and a recent h-index of 4 (since 2020). He specializes in Explainable AI, Human-Robot Interaction, Reinforcement Learning, Robotics, and Augmented Reality.

His recent articles reflect a diverse array of research interests and contributions to the field:

Autonomous Policy Explanations for Effective Human-Machine Teaming

Autonomous Justification for Enabling Explainable Decision Support in Human-Robot Teaming

Effective Human-Machine Teaming through Communicative Autonomous Agents that Explain, Coach, and Convince

Descriptive and prescriptive visual guidance to improve shared situational awareness in human-robot teaming

Augmented reality-based explainable AI strategies for establishing appropriate reliance and trust in human-robot teaming

Mediating Trust and Influence in Human-Robot Interaction via Explainable AI

One-shot Policy Elicitation via Semantic Reward Manipulation

Interactive Constrained Learning from Demonstration Using Visual Robot Behavior Counterfactuals

Aaquib Tabrez Information

University

University of Colorado Boulder

Position

PhD Student

Citations (all)

209

Citations (since 2020)

196

Cited By

36

h-index (all)

5

h-index (since 2020)

4

i10-index (all)

4

i10-index (since 2020)

4

University Profile Page

University of Colorado Boulder

Aaquib Tabrez Skills & Research Interests

Explainable AI

Human-Robot Interaction

Reinforcement Learning

Robotics

Augmented Reality

Top articles of Aaquib Tabrez

Autonomous Policy Explanations for Effective Human-Machine Teaming

Authors

Aaquib Tabrez

Journal

Proceedings of the AAAI Conference on Artificial Intelligence

Published Date

2024/3/24

Policy explanation, a process for describing the behavior of an autonomous system, plays a crucial role in effectively conveying an agent's decision-making rationale to human collaborators and is essential for safe real-world deployments. It becomes even more critical in effective human-robot teaming, where good communication allows teams to adapt and improvise successfully during uncertain situations by enabling value alignment within the teams. This thesis proposal focuses on improving human-machine teaming by developing novel human-centered explainable AI (xAI) techniques that empower autonomous agents to communicate their capabilities and limitations via multiple modalities, teach and influence human teammates' behavior as decision-support systems, and effectively build and manage trust in HRI systems.

Autonomous Justification for Enabling Explainable Decision Support in Human-Robot Teaming

Authors

Matthew B Luebbers, Aaquib Tabrez, Kyler Ruvane, Bradley Hayes

Journal

Proceedings of Robotics: Science and Systems, Daegu, Republic of Korea. https://doi.org/10.15607/RSS

Published Date

2023

Justification is an important facet of policy explanation, a process for describing the behavior of an autonomous system. In human-robot collaboration, an autonomous agent can attempt to justify distinctly important decisions by offering explanations as to why those decisions are right or reasonable, leveraging a snapshot of its internal reasoning to do so. Without sufficient insight into a robot's decision-making process, it becomes challenging for users to trust or comply with those important decisions, especially when they are viewed as confusing or contrary to the user's expectations (e.g., when decisions change as new information is introduced to the agent's decision-making process). In this work we characterize the benefits of justification within the context of decision-support during human-robot teaming (i.e., agents giving recommendations to human teammates). We introduce a formal framework using value of information theory to strategically time justifications during periods of misaligned expectations for greater effect. We also characterize four different types of counterfactual justification derived from established explainable AI literature and evaluate them against each other in a human-subjects study involving a collaborative, partially observable search task. Based on our findings, we present takeaways on the effective use of different types of justifications in human-robot teaming scenarios, to improve user compliance and decision-making by strategically influencing human teammate thinking patterns. Finally, we present an augmented reality system incorporating these findings into a real-world decision-support system for human-robot teaming.
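
To make the value-of-information idea concrete, here is a minimal Python sketch of when an agent might choose to offer a justification: it compares the human's expected decision quality with and without the explanation against a communication cost. All names (`utility`, `prior`, `posterior`, `comm_cost`) and the specific criterion are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of value-of-information-based justification timing.
# `utility` scores each action the human might take; the agent predicts
# the human's action distribution before and after hearing a justification.

def expected_utility(action_dist, utility):
    """Expected utility of the human's next action under a predicted distribution."""
    return sum(p * utility[a] for a, p in action_dist.items())

def should_justify(prior_dist, posterior_dist, utility, comm_cost):
    """Justify only when the expected gain in the human's decision
    quality outweighs the cost of interrupting them."""
    value_of_information = (expected_utility(posterior_dist, utility)
                            - expected_utility(prior_dist, utility))
    return value_of_information > comm_cost

# Example: the human is likely to search the wrong region unless the
# robot justifies its recommendation.
utility = {"region_a": 1.0, "region_b": 0.2}
prior = {"region_a": 0.3, "region_b": 0.7}      # predicted choice, no justification
posterior = {"region_a": 0.9, "region_b": 0.1}  # predicted choice after justification
print(should_justify(prior, posterior, utility, comm_cost=0.1))  # -> True
```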

Effective Human-Machine Teaming through Communicative Autonomous Agents that Explain, Coach, and Convince

Authors

Aaquib Tabrez

Published Date

2023/5/30

Effective communication is essential for human-robot collaboration to improve task efficiency, fluency, and safety. Good communication between teammates provides shared situational awareness, allowing them to adapt and improvise successfully during uncertain situations, and helps identify and remedy any potential misunderstandings in the case of incongruous mental models. This doctoral proposal focuses on improving human-agent communication by leveraging explainable AI techniques to empower autonomous agents to 1) communicate insights into their capabilities and limitations to a human collaborator, 2) coach and influence human teammates’ behavior during joint task execution, and 3) successfully convince and mediate trust in human-robot interactions.

Descriptive and prescriptive visual guidance to improve shared situational awareness in human-robot teaming

Authors

Aaquib Tabrez, Matthew B Luebbers, Bradley Hayes

Published Date

2022/5/9

In collaborative tasks involving human and robotic teammates, live communication between agents has potential to substantially improve task efficiency and fluency. Effective communication provides essential situational awareness to adapt successfully during uncertain situations and encourage informed decision-making. In contrast, poor communication can lead to incongruous mental models resulting in mistrust and failures. In this work, we first introduce characterizations of and generative algorithms for two complementary modalities of visual guidance: prescriptive guidance (visualizing recommended actions), and descriptive guidance (visualizing state space information to aid in decision-making). Robots can communicate this guidance to human teammates via augmented reality (AR) interfaces, facilitating synchronization of notions of environmental uncertainty and offering more collaborative and interpretable recommendations. We also introduce a min-entropy multi-agent collaborative planning algorithm for uncertain environments, informing the generation of these proactive visual recommendations for more informed human decision-making. We illustrate the effectiveness of our algorithm and compare these different modalities of AR-based guidance in a human subjects study involving a collaborative, partially observable search task. Finally, we synthesize our findings into actionable insights informing the use of prescriptive and descriptive visual guidance.
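
As a rough illustration of min-entropy planning in a partially observable search, the sketch below picks the search action that minimizes the expected entropy of the belief over the target's location. The perfect-sensor observation model and the `belief` representation are simplifying assumptions, not the authors' algorithm.

```python
# Simplified min-entropy search step: choose the cell to search next so
# that the expected post-observation entropy of the target belief is
# minimized. Assumes a perfect sensor; `belief` maps cells to probabilities.
import math

def entropy(belief):
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def belief_if_not_found(belief, searched):
    """Bayes update when the searched cell turns up empty."""
    new = {c: (0.0 if c == searched else p) for c, p in belief.items()}
    z = sum(new.values()) or 1.0
    return {c: p / z for c, p in new.items()}

def min_entropy_action(belief):
    """Pick the search target minimizing expected belief entropy.
    If the target is found, the belief collapses (entropy 0)."""
    def expected_entropy(cell):
        p_found = belief[cell]
        return (1 - p_found) * entropy(belief_if_not_found(belief, cell))
    return min(belief, key=expected_entropy)

belief = {"A": 0.5, "B": 0.3, "C": 0.2}
print(min_entropy_action(belief))  # -> "A"
```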

Augmented reality-based explainable AI strategies for establishing appropriate reliance and trust in human-robot teaming

Authors

Matthew B Luebbers, Aaquib Tabrez, Bradley Hayes

Published Date

2022/2/18

In human-robot teaming, live and effective communication is of critical importance for maintaining coordination and improving task fluency, especially in uncertain environments. Poor communication between teammates can foster doubt and misunderstanding, and lead to task failures. In previous work, we explored the idea of visually communicating notions of environmental uncertainty alongside robot-generated suggestions through augmented reality (AR) interfaces in a human-robot teaming setting. We introduced two complementary modalities of visual guidance: prescriptive guidance (visualizing recommended actions), and descriptive guidance (visualizing state space information to aid in decision-making), along with an algorithm to generate and utilize these modalities in partially-observable multi-agent collaborative tasks. We compared these modalities in a human subjects study, where we showed the ability of this combined guidance to improve trust, interpretability, performance, and human teammate independence. In this new work, we synthesize key takeaways from that study, leveraging them to describe remaining open challenges for live communication for human-robot teaming under uncertainty, and propose a set of approaches to address them via a collection of explainable AI techniques such as visual counterfactual explanations, predictable and explicable planning, and robot-generated justifications.

Mediating Trust and Influence in Human-Robot Interaction via Explainable AI

Authors

Aaquib Tabrez, Bradley Hayes

Published Date

2022

For robots to effectively collaborate with humans in high-stakes applications (e.g., autonomous driving), insights into these autonomous systems' capabilities and their limitations are required [26, 18, 21]. Our work leverages explainable AI (xAI) techniques to provide those insights, enabling more fluent teaming and agent-to-human communication. While most recent literature in robotics focuses on enabling robots to adapt to their human teammates (e.g., imitation learning) [1, 19], in this work we focus on the converse, empowering autonomous agents to manipulate and adapt their teammates' behavior during joint task execution. One critical aspect of safe and effective collaboration between teammates is maintaining awareness of the collaborator's mental model, enabling agents to reason about what their teammate is likely to do or need [6]. While people are quite skillful at this task, robots lack this intuition and capability. As described in our survey on mental modeling techniques in human-robot teaming [22], researchers have leveraged xAI for knowledge sharing and expectation matching to achieve fluent collaboration and improve shared awareness [3]. Explanations enhance transparency and functionally help synchronize expectations between human and robot teammates [2]. With this in mind, we pursue two research themes at the intersection of xAI and human-robot interaction: RT1: Formulate and operationalize a framework for explainable robot coaching within human-robot teaming scenarios to improve shared awareness. RT2: Characterize and generate semantic and visual modalities for robot explanation …

One-shot Policy Elicitation via Semantic Reward Manipulation

Authors

Aaquib Tabrez, Ryan Leonard, Bradley Hayes

Journal

arXiv preprint arXiv:2101.01860

Published Date

2021/1/6

Synchronizing expectations and knowledge about the state of the world is an essential capability for effective collaboration. For robots to effectively collaborate with humans and other autonomous agents, it is critical that they be able to generate intelligible explanations to reconcile differences between their understanding of the world and that of their collaborators. In this work we present Single-shot Policy Explanation for Augmenting Rewards (SPEAR), a novel sequential optimization algorithm that uses semantic explanations derived from combinations of planning predicates to augment agents' reward functions, driving their policies to exhibit more optimal behavior. We provide an experimental validation of our algorithm's policy manipulation capabilities in two practically grounded applications and conclude with a performance analysis of SPEAR on domains of increasingly complex state space and predicate counts. We demonstrate that our method makes substantial improvements over the state-of-the-art in terms of runtime and addressable problem size, enabling an agent to leverage its own expertise to communicate actionable information to improve another's performance.
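
The following toy example illustrates the general mechanism of predicate-based reward augmentation that SPEAR builds on: a semantic statement such as "avoid wet floors" becomes a predicate-conditioned term added to the base reward, shifting the induced policy. The predicate, reward values, and state encoding are hypothetical, and SPEAR's sequential optimization over predicate combinations is not shown.

```python
# Toy predicate-based reward augmentation (illustrative, not SPEAR itself):
# a semantic explanation such as "avoid wet floors" is compiled into a
# predicate-conditioned penalty added to the base reward.

def wet_floor(state):
    """Hypothetical planning predicate over states."""
    return state.get("surface") == "wet"

def augment_reward(base_reward, predicate, delta):
    """Return a reward function shifted by `delta` wherever `predicate`
    holds, steering the learned policy away from (or toward) those states."""
    def reward(state, action):
        return base_reward(state, action) + (delta if predicate(state) else 0.0)
    return reward

base = lambda state, action: -1.0                 # uniform step cost
augmented = augment_reward(base, wet_floor, -10.0)
print(augmented({"surface": "wet"}, "forward"))   # -> -11.0
print(augmented({"surface": "dry"}, "forward"))   # -> -1.0
```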

Interactive Constrained Learning from Demonstration Using Visual Robot Behavior Counterfactuals

Authors

Carl Mueller, Aaquib Tabrez, Bradley Hayes

Published Date

2021

Collaborative robots continue to depend on substantial robot programming expertise to be useful to end-consumers and small to mid-sized enterprises. Robot skill learning techniques, like Concept Constrained Learning from Demonstration, allow a robot to learn robust skills from non-expert users. This method combines traditional Robot Learning from Demonstration data with constraints to enable the communication of richer skill-pertinent information as task-specific behavior restrictions. This approach is integrated into a visual interactive system called Augmented Reality for Constrained Learning from Demonstration (ARC-LfD). This interactive system enables users to iteratively program robot skills through demonstration and constraint application in situ using augmented reality. However, as constraints and acquired skills grow in number, users might not have a deep understanding of the capabilities of the robot for any given learned skill. This paper proposes an extension to the ARC-LfD system that provides 'what-if' visualizations called Robot Behavior Counterfactuals (RBCs). RBCs serve to explain the effects of alternative constraint usage, as well as the effects constraints have on the potential for skill success, particularly when adapting skills to altered environments. ARC-LfD will also be extended with visuals called Behavioral Verification Indicators that aid users in understanding where and why a potential model will fail or succeed. This proposed system will be evaluated with a human-subjects study to test for objective and subjective measures of belief in robot capability.
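
A minimal sketch of the 'what-if' reasoning behind Robot Behavior Counterfactuals: re-evaluate a learned trajectory under an alternative constraint set and report where the skill would first fail. The waypoint format and the two constraints are invented for illustration and do not reflect the ARC-LfD implementation.

```python
# Toy "what-if" evaluation of a learned trajectory under alternative
# constraint sets; waypoints and constraints are invented for illustration.

def height_constraint(wp):
    return wp["z"] >= 0.10          # keep the gripper above the table plane

def upright_constraint(wp):
    return abs(wp["tilt"]) <= 0.2   # keep the held cup nearly level

def first_violation(trajectory, constraints):
    """Index of the first waypoint violating any constraint, else None."""
    for i, wp in enumerate(trajectory):
        if not all(check(wp) for check in constraints):
            return i
    return None

trajectory = [{"z": 0.30, "tilt": 0.0}, {"z": 0.05, "tilt": 0.1}]
print(first_violation(trajectory, [upright_constraint]))                      # -> None
print(first_violation(trajectory, [upright_constraint, height_constraint]))  # -> 1
```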

Asking the Right Questions: Facilitating Semantic Constraint Specification for Robot Skill Learning and Repair

Authors

Aaquib Tabrez, Jack Kawell, Bradley Hayes

Published Date

2021

Developments in human-robot teaming have given rise to significant interest in training methods that enable collaborative agents to safely and successfully execute tasks alongside human teammates. While effective, many existing methods are brittle to changes in the environment and do not account for the preferences of human collaborators. This brittleness is typically due to the complexity of deployment environments and the unique personal preferences of human teammates. These complications lead to behavior that can cause task failure or user discomfort. In this work, we introduce Plan Augmentation and Repair through SEmantic Constraints (PARSEC): a novel algorithm that utilizes a semantic hierarchy to enable novice users to quickly and effectively select constraints using natural language that correct faulty behavior or adapt skills to their preferences. We show through a case study that our algorithm …
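
To suggest how a semantic hierarchy might map a novice's word to a concrete constraint, here is a toy depth-first lookup over a small hand-built hierarchy. The hierarchy contents, node fields, and constraint names are assumptions for illustration, not PARSEC's data structures.

```python
# Toy semantic hierarchy lookup: resolve a novice user's word to the most
# specific constraint node that matches it (contents are invented).

HIERARCHY = {
    "orientation": {
        "constraint": "maintain_pose",
        "children": {
            "upright": {"constraint": "keep_end_effector_level"},
            "tilted": {"constraint": "allow_tilt_within_15_degrees"},
        },
    },
    "height": {
        "constraint": "stay_in_height_band",
        "children": {
            "over_table": {"constraint": "keep_z_above_table_plane"},
        },
    },
}

def resolve(term, hierarchy=HIERARCHY):
    """Depth-first search for the node named `term`; returns its constraint."""
    for name, node in hierarchy.items():
        if name == term:
            return node["constraint"]
        match = resolve(term, node.get("children", {}))
        if match is not None:
            return match
    return None

print(resolve("upright"))  # -> keep_end_effector_level
```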

Emerging Autonomy Solutions for Human and Robotic Deep Space Exploration

Authors

Matthew B Luebbers, Christine T Chang, Aaquib Tabrez, Jordan Dixon, Bradley Hayes

Published Date

2021

The space community has traditionally been extremely cautious about the usage of autonomy for high stakes applications. Autonomy deployed in control systems has often been deterministic and verifiable, contrasting with modern, non-deterministic learning or interaction-based techniques. This is justifiable, as the cost of mission failure is exceptionally high. On the other hand, deterministic models inherently restrict the designs of deep space missions and preclude many exciting concepts. In this work, we briefly describe the challenges faced in deploying autonomy in deep space, and present two case studies to showcase where more advanced models or human-robot techniques would be useful. We also discuss a collection of emerging technologies that could be leveraged to hedge the risks of autonomy deployment for future deep space applications.

Solutions for Socially Intelligent HRI in Real-World Scenarios (SSIR-HRI)

Authors

Karen Tatarian, Sera Buyukgoz, Marine Chamoux, Aaquib Tabrez, Bradley Hayes, Mohamed Chetouani

Published Date

2021/3/8

Today it seems even more evident that social robots will play an increasingly integral role in real-world scenarios and will need to participate in the full richness of human society. Central to the success of robots as socially intelligent agents is ensuring effective interactions between humans and robots. To achieve that goal, researchers and engineers from both industry and academia need to come together to share ideas, trials, failures, and successes. This workshop aims to create a bridge between industry and academia, building a community to tackle the current and future challenges of socially intelligent human-robot interaction in real-world scenarios by finding solutions for them.

Automated failure-mode clustering and labeling for informed car-to-driver handover in autonomous vehicles

Authors

Aaquib Tabrez, Matthew B Luebbers, Bradley Hayes

Journal

arXiv preprint arXiv:2005.04439

Published Date

2020/5/9

The car-to-driver handover is a critically important component of safe autonomous vehicle operation when the vehicle is unable to safely proceed on its own. Current implementations of this handover in automobiles take the form of a generic alarm indicating an imminent transfer of control back to the human driver. However, certain levels of vehicle autonomy may allow the driver to engage in other, non-driving related tasks prior to a handover, leading to substantial difficulty in quickly regaining situational awareness. This delay in re-orientation could potentially lead to life-threatening failures unless mitigating steps are taken. Explainable AI has been shown to improve fluency and teamwork in human-robot collaboration scenarios. Therefore, we hypothesize that by utilizing autonomous explanation, these car-to-driver handovers can be performed more safely and reliably. The rationale is, by providing the driver with additional situational knowledge, they will more rapidly focus on the relevant parts of the driving environment. Towards this end, we propose an algorithmic failure-mode identification and explanation approach to enable informed handovers from vehicle to driver. Furthermore, we propose a set of human-subjects driving-simulator studies to determine the appropriate form of explanation during handovers, as well as validate our framework.
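
As a hedged sketch of what automated failure-mode clustering and labeling could look like, the snippet below clusters a few hypothetical failure-state feature vectors with k-means and attaches a driver-facing message per cluster based on its centroid. Feature names, thresholds, and messages are all invented; the paper's actual pipeline is not specified here.

```python
# Toy failure-mode clustering: group failure-state feature vectors with
# k-means, then attach a driver-facing message per cluster by inspecting
# its centroid. Features and messages are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Columns: [sensor_confidence, visibility, lane_certainty]
failure_states = np.array([
    [0.20, 0.90, 0.85],   # degraded sensors, good visibility
    [0.25, 0.85, 0.90],
    [0.90, 0.15, 0.30],   # healthy sensors, poor visibility
    [0.85, 0.20, 0.25],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(failure_states)

def label_for(centroid):
    """Pick a handover message from the cluster's dominant deficiency."""
    if centroid[0] < 0.5:
        return "Sensor degradation detected; please take control."
    return "Low visibility ahead; please take control."

for state, cluster in zip(failure_states, kmeans.labels_):
    print(cluster, label_for(kmeans.cluster_centers_[cluster]))
```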

A survey of mental modeling techniques in human–robot teaming

Authors

Aaquib Tabrez, Matthew B Luebbers, Bradley Hayes

Published Date

2020/12

Purpose of Review: As robots become increasingly prevalent and capable, the complexity of roles and responsibilities assigned to them as well as our expectations for them will increase in kind. For these autonomous systems to operate safely and efficiently in human-populated environments, they will need to cooperate and coordinate with human teammates. Mental models provide a formal mechanism for achieving fluent and effective teamwork during human–robot interaction by enabling awareness between teammates and allowing for coordinated action.

Recent Findings: Much recent research in human–robot interaction has made use of standardized and formalized mental modeling techniques to great effect, allowing for a wider breadth of scenarios in which a robotic agent can act as an effective and trustworthy teammate …

Aaquib Tabrez FAQs

What is Aaquib Tabrez's h-index at University of Colorado Boulder?

Aaquib Tabrez's h-index is 5 overall and 4 since 2020.

What are Aaquib Tabrez's top articles?

The top articles of Aaquib Tabrez at University of Colorado Boulder are:

Autonomous Policy Explanations for Effective Human-Machine Teaming

Autonomous Justification for Enabling Explainable Decision Support in Human-Robot Teaming

Effective Human-Machine Teaming through Communicative Autonomous Agents that Explain, Coach, and Convince

Descriptive and prescriptive visual guidance to improve shared situational awareness in human-robot teaming

Augmented reality-based explainable AI strategies for establishing appropriate reliance and trust in human-robot teaming

Mediating Trust and Influence in Human-Robot Interaction via Explainable AI

One-shot Policy Elicitation via Semantic Reward Manipulation

Interactive Constrained Learning from Demonstration Using Visual Robot Behavior Counterfactuals

...

What are Aaquib Tabrez's research interests?

The research interests of Aaquib Tabrez are: Explainable AI, Human-Robot Interaction, Reinforcement Learning, Robotics, Augmented Reality

What is Aaquib Tabrez's total number of citations?

Aaquib Tabrez has 209 citations in total.

What are the co-authors of Aaquib Tabrez?

The co-authors of Aaquib Tabrez are Bradley Hayes, Matthew B Luebbers, Carl L Mueller, Shivendra Agrawal, and Jack Kawell.

Co-Authors

Bradley Hayes, University of Colorado Boulder (h-index: 17)

Matthew B Luebbers, University of Colorado Boulder (h-index: 5)

Carl L Mueller, University of Colorado Boulder (h-index: 4)

Shivendra Agrawal, University of Colorado Boulder (h-index: 2)

Jack Kawell, University of Colorado Boulder (h-index: 1)
