Development of an Autonomous Multilingual Guidance Robot: Integrating Distributed AI with ROS 2 Navigation

Location

SU-215

Start Date

1-5-2026 11:20 AM

Department

Computer Science

Abstract

Autonomous service robots are increasingly used in public and institutional environments to assist users with navigation and information access. Many existing systems are constrained by the limited computational resources of embedded hardware and often provide minimal multilingual interaction capabilities, reducing accessibility and scalability in environments such as universities, hospitals, and museums. This project investigates how distributed artificial intelligence and robotic navigation frameworks can be integrated to develop an autonomous indoor guidance robot capable of multilingual interaction and reliable navigation. The study aims to determine whether a distributed computing architecture can support real-time speech recognition and translation while maintaining responsive autonomous movement. The system uses the Robot Operating System 2 (ROS 2) and LiDAR (Light Detection and Ranging)–based Simultaneous Localization and Mapping (SLAM) to construct and maintain maps of indoor academic environments. Grid-based path planning algorithms, including A*, enable navigation to predefined destinations across the first floor of a campus building. To address computational limitations of embedded platforms, the architecture offloads computationally intensive speech recognition and translation tasks to a networked host. Spanish is implemented as the initial supported language, with additional languages planned for integration. In addition to navigation and language interaction, the system design considers operational factors such as battery capacity, wireless network reliability, and system connectivity to ensure sustained autonomous operation in indoor environments. Current work focuses on integrating navigation and multilingual interaction subsystems and evaluating system performance in an academic environment. Effectiveness will be assessed through navigation success rates across first-floor destinations and user feedback on interaction quality.
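The grid-based planning described above can be illustrated with a minimal A* sketch on a toy occupancy grid. The grid layout, 4-connected neighborhood, and Manhattan heuristic here are illustrative assumptions, not the project's actual map representation or planner configuration.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None
    if no path exists.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # (f = g + h, g, cell)
    came_from = {}
    g = {start: 0}

    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            # Reconstruct the path by walking parent links backward
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue  # stale heap entry; a cheaper route was found later
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = cost + 1  # uniform step cost on the grid
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

# Toy occupancy grid standing in for a mapped hallway segment
grid = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = astar(grid, (0, 0), (2, 3))
```

In a ROS 2 deployment, a planner like this would operate on the occupancy grid produced by the SLAM subsystem rather than a hand-written matrix.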
This work contributes a distributed architecture that integrates multilingual speech interaction with ROS 2–based navigation, enabling embedded robots to perform computationally intensive language processing while maintaining real-time mobility. The research aims to demonstrate a scalable approach for multilingual robotic guidance in public-facing environments.
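Offloading speech recognition and translation to a networked host requires some wire format between the robot and the host. The length-prefixed JSON framing below is a hypothetical sketch of such a protocol; the message type, field names, and language code are illustrative assumptions, not the project's actual interface.

```python
import json
import struct

def encode_msg(msg: dict) -> bytes:
    """Serialize a message as a 4-byte big-endian length header + JSON body."""
    body = json.dumps(msg).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def decode_msg(data: bytes) -> tuple[dict, bytes]:
    """Parse one framed message; return (message, remaining bytes)."""
    (length,) = struct.unpack(">I", data[:4])
    body = data[4:4 + length]
    return json.loads(body.decode("utf-8")), data[4 + length:]

# Hypothetical request the robot might send to the networked host:
# raw audio (base64-encoded) plus the target interaction language.
request = encode_msg({
    "type": "transcribe_translate",  # illustrative message type
    "lang": "es",                    # Spanish, the initial supported language
    "audio_b64": "...",              # placeholder for encoded audio
})
msg, rest = decode_msg(request)
```

Length-prefixed framing keeps message boundaries intact over a TCP stream even when the wireless link delivers data in irregular chunks, which matters given the network-reliability concerns the design already accounts for.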

Faculty Sponsor

Malek Abunaemeh

End Date

1-5-2026 11:40 AM
