From Black Box to Clear Logic: Building Explainable AI Copilots with ReasonX

Anna Alexandra Grigoryan
7 min read · Nov 3, 2024


As AI continues to advance, Explainable AI (XAI) has become a critical component in deploying intelligent systems responsibly. Many of today’s cutting-edge solutions use LLMs to power multi-agent assistants, or Copilots, that handle complex tasks autonomously. Despite their utility, these systems often face a significant challenge: ensuring users understand how decisions are made. This is where ReasonX comes into play: a tool specifically designed to address transparency by providing contrastive explanations in AI systems. In this post, we’ll explore what ReasonX is, how it differentiates itself from other XAI solutions, and how it fits into multi-agent Copilot systems.

The Innovation of ReasonX in XAI

ReasonX is an innovative framework proposed to address the limitations of traditional XAI methods, particularly in complex and interactive systems. Unlike static XAI techniques, ReasonX allows users to query explanations interactively, apply background knowledge, and receive contrastive feedback on decisions. As explored in the paper, here’s what sets ReasonX apart:

Key Features of ReasonX

1. Declarative and Interactive Reasoning: Unlike conventional XAI approaches that generate a one-time explanation, ReasonX supports a dynamic and user-driven exploration of decisions. Users can specify scenarios or hypothetical changes, receiving targeted answers on how those changes impact outcomes.

2. Contrastive Explanations: Central to ReasonX is the concept of contrastive explanations, which answer “why not” questions. For example, in a credit decision scenario, ReasonX can explain why a user was denied credit and what minimal changes could lead to approval, providing a “what could be different” perspective.

3. Background Knowledge Integration: ReasonX allows users to input real-world constraints and context, making explanations more realistic. This might include setting constraints on age, income, or other variables that are immutable or legally bounded, thus generating explanations grounded in the user’s specific environment.

4. Dual Architecture for Flexibility and Power: ReasonX operates in two layers: a Python-based front-end for data manipulation and integration with ML tools, and a Prolog-based backend that performs declarative reasoning using Constraint Logic Programming (CLP). This combination allows ReasonX to handle both symbolic logic and numerical constraints efficiently.

Imagine ReasonX explaining a credit rejection. The system identifies that the applicant’s income is below a required threshold and suggests two possible paths for approval: increasing income or decreasing debt.
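ReasonX derives such suggestions from CLP constraints over the model’s decision logic. To make the idea concrete, here is a minimal, standalone Python sketch of a contrastive explanation; the approval rule, threshold, and feature names are hypothetical and not part of ReasonX.

# Illustrative sketch only: ReasonX expresses this reasoning as CLP constraints.
# The approval rule, threshold, and feature names below are hypothetical.

applicant = {"income": 42_000, "debt": 18_000}
APPROVAL_THRESHOLD = 30_000

def approved(income, debt):
    # Hypothetical policy: income minus debt must clear a fixed threshold.
    return income - debt >= APPROVAL_THRESHOLD

def contrastive_explanation(applicant):
    income, debt = applicant["income"], applicant["debt"]
    if approved(income, debt):
        return "Approved: no change needed."
    shortfall = APPROVAL_THRESHOLD - (income - debt)
    # Two minimal single-feature changes that would flip the decision.
    return (f"Denied. Either increase income by {shortfall} "
            f"or decrease debt by {shortfall} to be approved.")

print(contrastive_explanation(applicant))

ReasonX performs this search over the model’s actual constraints and any background knowledge the user supplies, rather than a hand-written rule, but the “what could be different” shape of the answer is the same.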

Introducing ReasonX into Multi-Agent LLM-Based Copilots

Multi-agent systems are ideal for complex, dynamic tasks that require collaboration across specialized agents. For instance, a multi-agent Copilot for project management might include:

Data Retrieval Agent: Fetches relevant documents and information.

Analysis Agent: Processes data to identify trends or insights.

Task Management Agent: Prioritizes and schedules tasks.

Quality Control Agent: Ensures outputs meet accuracy standards.

These agents frequently interact, depend on each other’s outputs, and sometimes even conflict in objectives. Integrating ReasonX into this environment brings transparency by explaining dependencies between agents and resolving conflicts through a structured reasoning process.

How ReasonX Enhances a Multi-Agent Copilot

  1. Decision Dependencies

In multi-agent systems, decisions made by one agent can influence others. For example, if a Data Retrieval Agent provides outdated information, a Summarization Agent might generate an inaccurate report. ReasonX can map these dependencies, showing how each agent’s output affects others and clarifying the reasoning behind specific system actions.

Let’s say the Analysis Agent identifies potential delays in project timelines. The Scheduling Agent then reschedules tasks based on this analysis. If a user questions the change, ReasonX can explain that it was influenced by the delay detected by the Analysis Agent, providing a clear causal link between agents.
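Concretely, the dependency structure can be thought of as a small graph. The sketch below is a plain Python illustration of that idea; the agent names and mapping are invented here, not taken from ReasonX:

# Illustrative dependency map between agents (names and structure are hypothetical).
dependencies = {
    "scheduling_agent": ["analysis_agent"],
    "analysis_agent": ["data_retrieval_agent"],
    "data_retrieval_agent": [],
}

def explain_decision(agent, dependencies):
    # Trace an agent's upstream influences to build a simple causal explanation.
    chain, frontier = [], list(dependencies.get(agent, []))
    while frontier:
        upstream = frontier.pop(0)
        chain.append(upstream)
        frontier.extend(dependencies.get(upstream, []))
    if not chain:
        return f"{agent} acted on its own inputs."
    return f"{agent}'s decision was influenced by: {', '.join(chain)}."

print(explain_decision("scheduling_agent", dependencies))
# -> scheduling_agent's decision was influenced by: analysis_agent, data_retrieval_agent.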

2. Conflict Resolution

In multi-agent systems, conflicts often arise due to competing goals. For instance, a Speed Optimization Agent may prioritize fast responses, while an Accuracy Agent requires more time for thorough analysis. ReasonX detects these conflicts by analyzing each agent’s constraints and suggesting resolutions, such as balancing speed and accuracy based on priority.

In an autonomous vehicle Copilot, a Navigation Agent might suggest the fastest route, while a Safety Agent favors a route with fewer accident-prone areas. ReasonX could identify the conflict and propose a middle ground, such as a slightly longer but safer route, enhancing system coherence.
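As a toy illustration of how such a middle ground might be scored, consider the sketch below; the routes, risk numbers, and weights are made up and stand in for ReasonX’s actual CLP-based resolution:

# Hypothetical trade-off between a Navigation Agent (speed) and a Safety Agent (risk).
routes = [
    {"name": "fastest", "minutes": 22, "risk": 0.30},
    {"name": "balanced", "minutes": 25, "risk": 0.12},
    {"name": "safest", "minutes": 34, "risk": 0.05},
]

def resolve(routes, weight_speed=0.5, weight_safety=0.5):
    # Lower is better for both travel time and risk; normalize time and combine.
    max_minutes = max(r["minutes"] for r in routes)
    return min(
        routes,
        key=lambda r: weight_speed * (r["minutes"] / max_minutes) + weight_safety * r["risk"],
    )

print(resolve(routes)["name"])  # the compromise route wins under equal weights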

Practical Examples of Constraints in Multi-Agent Systems

Constraints define the operational boundaries within which each agent works. They ensure agents adhere to performance, resource, and ethical standards, enhancing both control and interpretability.

Types of Constraints for Multi-Agent Systems:

1. Accuracy Constraints

Accuracy Thresholds: Certain agents, like Prediction Agents, may require minimum accuracy levels. For instance, a constraint could ensure predictions are only used if they reach 85% accuracy.

Confidence Levels: For agents that provide probabilistic outputs, ReasonX can restrict decisions based on a minimum confidence level. If a Data Retrieval Agent fetches information with low confidence, ReasonX might limit its influence on critical downstream agents.

2. Performance and Latency Constraints

Latency Limits: Agents in real-time applications need strict latency constraints. For example, a Chatbot Agent might need to respond within 1 second. ReasonX could enforce this by blocking any operation that exceeds the time limit, ensuring prompt responses.

Resource Constraints: ReasonX can ensure that agents don’t overuse computational resources. For instance, the Data Processing Agent might be limited to 2GB of memory, avoiding resource monopolization in systems with limited computational capacity.

3. Agent-Specific Constraints

Immutability Constraints: Some features may be fixed. In customer service, for example, a demographics agent might treat “location” as immutable due to privacy regulations.

Feature Range Constraints: Certain variables might have restricted ranges. For a Market Analysis Agent, ReasonX could enforce a rule to ignore price fluctuations below 1% to focus on significant changes only.

4. Goal-Oriented Constraints

Task Prioritization: When agents prioritize different goals, ReasonX can ensure the system respects critical objectives. If a Summary Agent and Quality Control Agent operate together, ReasonX could prioritize Quality Control outputs to maintain standards.

Output Constraints: ReasonX can enforce constraints on output form. For example, a summarization agent might be required to produce outputs under 200 words, ensuring concise and readable responses.
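Taken together, these constraint types can be collected into a single per-agent schema before being handed to the reasoning layer. The dataclass below is one hypothetical way to encode them in Python; the field names are illustrative and not part of ReasonX:

from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical schema collecting the constraint types described above.
@dataclass
class AgentConstraints:
    accuracy_threshold: Optional[float] = None      # minimum accuracy, e.g. 0.85
    min_confidence: Optional[float] = None          # minimum confidence for probabilistic outputs
    max_latency_s: Optional[float] = None           # latency limit in seconds
    max_memory_gb: Optional[float] = None           # resource ceiling
    immutable_features: List[str] = field(default_factory=list)  # e.g. ["location"]
    max_output_words: Optional[int] = None          # output-form constraint
    priority: str = "medium"

constraints = {
    "prediction_agent": AgentConstraints(accuracy_threshold=0.85),
    "chatbot_agent": AgentConstraints(max_latency_s=1.0, priority="high"),
    "data_processing_agent": AgentConstraints(max_memory_gb=2.0),
    "summarization_agent": AgentConstraints(max_output_words=200, priority="low"),
}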

Building Explainable Multi-Agent Systems with Constraint Logic Programming (CLP)

Step 1: Defining Constraints in Python

Let’s start by defining constraints in Python to specify agent requirements. We’ll use a simple multi-agent system where each agent has performance targets, latency limits, and accuracy thresholds.

# Define constraints in Python
agents = {
    'prediction_agent': {'accuracy_threshold': 0.85, 'priority': 'medium'},
    'chatbot_agent': {'max_latency': 1.0, 'priority': 'high'},
    'summarization_agent': {'max_length': 200, 'priority': 'low'}
}

The agents dictionary specifies constraints for each agent:

Prediction Agent requires an accuracy threshold of 85%.

Chatbot Agent has a latency constraint of 1 second.

Summarization Agent should generate outputs of 200 words or fewer.

Step 2: Converting Constraints to Prolog (CLP) Format

In Prolog, we’ll define each agent’s constraints as facts and then write rules over them. Prolog’s syntax is straightforward: for instance, X > Y states that X should be greater than Y. To implement ReasonX, we’ll need to translate each Python constraint into Prolog facts and rules.

% Define constraints for Prediction Agent
accuracy_threshold(prediction_agent, 0.85).
priority(prediction_agent, medium).

% Define constraints for Chatbot Agent
max_latency(chatbot_agent, 1.0).
priority(chatbot_agent, high).

% Define constraints for Summarization Agent
max_length(summarization_agent, 200).
priority(summarization_agent, low).

Each line here is a fact in Prolog, representing specific constraints for each agent.
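This translation can be automated rather than written by hand. Below is a minimal sketch that generates these facts from the agents dictionary defined in Step 1; the key-to-predicate mapping is an assumption made for illustration, not part of ReasonX:

# Minimal sketch: generate Prolog facts from the `agents` dictionary in Step 1.
# The key-to-predicate mapping is an illustrative assumption.
PREDICATE_FOR_KEY = {
    "accuracy_threshold": "accuracy_threshold",
    "max_latency": "max_latency",
    "max_length": "max_length",
    "priority": "priority",
}

def to_prolog_facts(agents):
    facts = []
    for agent, constraints in agents.items():
        for key, value in constraints.items():
            facts.append(f"{PREDICATE_FOR_KEY[key]}({agent}, {value}).")
    return "\n".join(facts)

# In practice these facts could be written into the file consulted in Step 4,
# alongside the rules defined in Step 3.
print(to_prolog_facts(agents))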

Step 3: Writing CLP Rules to Enforce Dependencies and Resolve Conflicts

Now that we have defined constraints, we can create rules in Prolog to enforce them and to detect dependencies or conflicts. For example, a rule can specify that if two agents have overlapping resources or conflicting priorities, ReasonX should identify and flag these conflicts.

Let’s define rules to detect:

1. High-priority conflicts: Detect if multiple agents have the same high priority.

2. Latency constraints: Ensure no agent exceeds its specified latency limit.

% Rule to check for high-priority conflicts
conflict(Agent1, Agent2) :-
    priority(Agent1, high),
    priority(Agent2, high),
    Agent1 \= Agent2.

% Rule to check that an agent respects its latency constraint
check_latency(Agent) :-
    max_latency(Agent, MaxLatency),
    measured_latency(Agent, Latency),  % measured at runtime and asserted as a fact
    Latency =< MaxLatency.

The conflict/2 rule identifies if two agents (Agent1 and Agent2) both have high priority, which may indicate a conflict.

The check_latency/1 rule verifies that the latency constraint for an agent is respected. The agent’s actual latency is read from a measured_latency/2 fact, which a real-world system would assert at runtime from its monitoring data.

Step 4: Setting Up a Prolog Engine in Python

To integrate Prolog constraints with Python, we’ll use a Prolog engine in Python. SWI-Prolog is a popular choice, and with the pyswip library, we can query Prolog facts and rules directly from Python.

from pyswip import Prolog

# Initialize Prolog
prolog = Prolog()

# Load Prolog rules and facts from file
prolog.consult("reasonx_rules.pl")

# Assert a measured latency for the Chatbot Agent (simulated here; in a real
# system this value would come from runtime monitoring)
prolog.assertz("measured_latency(chatbot_agent, 0.8)")

# Query to detect high-priority conflicts
conflicts = list(prolog.query("conflict(Agent1, Agent2)"))
print("Conflicts:", conflicts)

# Check latency constraints for each agent that has one
latency_checks = [f"check_latency({agent})" for agent in agents if "max_latency" in agents[agent]]
for query in latency_checks:
    result = list(prolog.query(query))
    print(f"{query}:", "respected" if result else "violated")

Step 5: Generating Explanations Based on Constraints

ReasonX’s strength lies in generating actionable explanations for constraint violations or dependencies. For each constraint checked, ReasonX can output an explanation of the result and suggest potential changes.

Here’s an additional Prolog rule for generating explanations:

% Rule to build a human-readable explanation for high-priority conflicts
explanation_conflict_resolution(Agent1, Agent2, Message) :-
    conflict(Agent1, Agent2),
    format(atom(Message),
        "Conflict detected between ~w and ~w. Consider lowering priority for one agent.",
        [Agent1, Agent2]).

The explanation_conflict_resolution/3 rule binds a human-readable message to the Message variable whenever two agents have a high-priority conflict. Because the message is returned as a binding rather than printed inside Prolog, it can be retrieved from Python as an explanation and added to the back-end server’s logs.

# Retrieve explanations for conflict resolution
explanations = list(prolog.query("explanation_conflict_resolution(Agent1, Agent2, Message)"))
for explanation in explanations:
    print("Explanation:", explanation["Message"])

Wrapping Up

ReasonX stands out as a powerful tool for bringing transparency and reliability to multi-agent LLM-based Copilots. By enabling engineers to set constraints, understand dependencies, and resolve conflicts, ReasonX turns complex, opaque AI systems into interpretable and trustworthy tools. As AI-driven assistants continue to evolve, approaches like ReasonX will be essential for bridging the gap between technical complexity and explainability, letting engineers add complexity to their multi-agent systems with confidence.
