Research

Research in reinforcement learning, swarm robotics, and AI for education at UTRGV's MARS Lab

MAPPO in Swarm Robotics for the Foraging Problem

Multi-Agent Proximal Policy Optimization (MAPPO) applied to swarm robotics for solving the foraging problem using Webots simulation.

Overview

This research explores the application of MAPPO, a state-of-the-art multi-agent reinforcement learning algorithm, to coordinate swarm robots in solving the foraging problem. The work is conducted at UTRGV's MARS Lab using Webots as the simulation environment.

Research Focus

The foraging problem in swarm robotics involves coordinating multiple autonomous agents to efficiently search for, collect, and transport resources in a shared environment. This research investigates how MAPPO can enable emergent cooperative behaviors in robot swarms without centralized control.
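The search-collect-transport cycle above can be made concrete with a toy grid-world version of the task. The class below is a hypothetical sketch, not the actual simulation used in this research: agents earn a small reward for picking up a resource and a larger one for delivering it to a nest, which is the typical reward shaping for foraging benchmarks.

```python
import random

class ForagingGrid:
    """Toy foraging task: agents pick up scattered resources and
    return them to a nest at (0, 0). A hypothetical stand-in for
    the full physics-based arena, for illustration only."""

    def __init__(self, size=8, n_resources=5, seed=0):
        rng = random.Random(seed)
        self.size = size
        self.nest = (0, 0)
        # Scatter resources, never on the nest itself.
        self.resources = {(rng.randrange(size), rng.randrange(size))
                          for _ in range(n_resources)} - {self.nest}
        self.carrying = {}  # agent id -> currently carrying?

    def step(self, agent, pos):
        """Return the reward for `agent` arriving at grid cell `pos`."""
        if not self.carrying.get(agent) and pos in self.resources:
            self.resources.remove(pos)
            self.carrying[agent] = True
            return 0.5   # picked up a resource
        if self.carrying.get(agent) and pos == self.nest:
            self.carrying[agent] = False
            return 1.0   # delivered it to the nest
        return 0.0
```

Splitting the reward between pickup and delivery encourages the full transport cycle rather than hoarding, which is one common design choice in foraging environments.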

Key Components

Multi-Agent System

  • Decentralized control architecture
  • Agent-to-agent communication protocols
  • Scalable coordination strategies

MAPPO Algorithm

  • Proximal Policy Optimization adapted for multi-agent scenarios
  • Centralized training with decentralized execution (CTDE)
  • Shared value function approximation
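The bullets above can be sketched in code. PPO's clipped surrogate objective is unchanged in MAPPO; what changes is where the advantage estimates come from — a centralized critic that sees the joint state during training, while each policy acts only on local observations at execution time (CTDE). The minimal sketch below shows the per-sample clipped objective and how per-agent samples are averaged into one policy loss; function names and the sample format are illustrative, not the lab's actual implementation.

```python
import math

def clipped_surrogate(logp_new, logp_old, advantage, eps=0.2):
    """PPO clipped surrogate for one (agent, timestep) sample.
    The ratio compares the updated policy to the one that
    collected the data; clipping limits the update size."""
    ratio = math.exp(logp_new - logp_old)
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps) * advantage
    return min(unclipped, clipped)

def mappo_policy_loss(samples, eps=0.2):
    """Average the surrogate over all agents' samples. In MAPPO
    the advantages are computed from a shared centralized value
    function (CTDE), but each agent's policy conditions only on
    its own local observation. `samples` is a list of
    (logp_new, logp_old, advantage) tuples."""
    total = 0.0
    for logp_new, logp_old, adv in samples:
        total += clipped_surrogate(logp_new, logp_old, adv, eps)
    return -total / len(samples)  # negated: optimizers minimize
```

With a positive advantage, a ratio of e ≈ 2.72 is clipped down to 1 + eps = 1.2, which is exactly the mechanism that keeps multi-agent updates stable.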

Simulation Environment

  • Webots physics-based simulation
  • Realistic robot dynamics and sensor models
  • Configurable foraging scenarios
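In Webots, each robot runs a controller program built around a sense-compute-act loop driven by `Robot.step`. The sketch below mirrors the shape of Webots' Python controller API but substitutes a local stub class so it runs without the simulator; in a real controller, `StubRobot` would be `controller.Robot` and the loop body would read sensors and set wheel velocities.

```python
class StubRobot:
    """Minimal stand-in for Webots' controller.Robot so the loop
    below runs standalone; the real class has the same shape."""

    def __init__(self, steps=3):
        self._remaining = steps

    def getBasicTimeStep(self):
        return 32.0  # milliseconds, a common Webots world default

    def step(self, timestep):
        # Webots advances the simulation and returns -1 on shutdown.
        self._remaining -= 1
        return 0 if self._remaining >= 0 else -1

def run_controller(robot):
    """The sense-compute-act loop each forager runs. Decentralized
    execution: the controller sees only this robot's own sensors."""
    timestep = int(robot.getBasicTimeStep())
    ticks = 0
    while robot.step(timestep) != -1:
        # Here: read sensors, query the trained policy for an
        # action, and set actuator (wheel) velocities.
        ticks += 1
    return ticks
```

Because every robot runs this loop independently, the same trained policy can be deployed to an arbitrary number of foragers, which is what makes the CTDE setup scalable.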

Current Status

Active research at UTRGV's MARS Lab, conducted as part of graduate studies in Computer Science with a focus on reinforcement learning and swarm robotics.


Trust in AI-Generated and AI-Assisted Emails

Investigating dispositional and situational trust in AI-only, AI-assisted, and human-written emails to understand trust dynamics in human-AI communication.

Overview

This research examines how people perceive and trust emails generated by AI systems compared to human-written communications. The study explores the nuanced differences between AI-only emails, AI-assisted emails, and purely human-written emails, focusing on trust formation and decision-making.

Research Questions

Primary Focus

  • How does trust differ across AI-only, AI-assisted, and human-written emails?
  • What role does disclosure play in trust formation?
  • How do dispositional trust and situational trust interact in AI communication contexts?

Trust Dimensions

  • Dispositional Trust: An individual's general tendency to trust AI systems
  • Situational Trust: Context-specific trust based on the particular email or scenario

Methodology

Email Categories

  1. AI-Only Emails: Fully generated by AI without human intervention
  2. AI-Assisted Emails: Human-written with AI enhancement or editing
  3. Human Emails: Entirely written by humans without AI assistance

Disclosure Conditions

  • Disclosed AI involvement vs. undisclosed AI usage
  • Impact of transparency on trust levels
  • User preferences for disclosure
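Crossing the three email categories with the disclosure conditions yields a 3 × 2 factorial structure. The snippet below enumerates those cells; it is a hypothetical sketch of the design space, not the study's actual condition assignment (in particular, disclosure may not apply to the purely human-written category).

```python
from itertools import product

# Hypothetical factor levels for the email trust study:
# authorship (AI-only / AI-assisted / human) crossed with
# whether AI involvement is disclosed to the reader.
AUTHORSHIP = ["ai_only", "ai_assisted", "human"]
DISCLOSURE = ["disclosed", "undisclosed"]

def study_conditions():
    """Enumerate every authorship x disclosure cell of the design."""
    return [
        {"authorship": a, "disclosure": d}
        for a, d in product(AUTHORSHIP, DISCLOSURE)
    ]
```

Enumerating the cells this way makes it easy to balance participant assignment across conditions and to check that every cell is covered.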

Implications

This research has significant implications for:

  • Organizational communication policies
  • AI transparency guidelines
  • Human-AI collaboration frameworks
  • Trust-building in digital communications

Current Status

Active research project at UTRGV's MARS Lab, investigating the intersection of human-computer interaction and trust in AI systems.
