Engineering students at Texas A&M University developed an AI-powered robotic dog that sees, remembers, and plans routes autonomously, targeting search-and-rescue and disaster response missions.
Researchers at Texas A&M University have developed an AI-powered robotic dog designed to operate in complex, unpredictable environments using memory-driven navigation and human-like decision-making. Built by graduate engineering students, the robot is capable of seeing, remembering where it has been, and responding dynamically to new situations. The system is aimed primarily at search-and-rescue and disaster response missions, where conditions are often chaotic and GPS signals are unavailable.
Unlike conventional robotic systems that rely on pre-mapped environments or simple obstacle avoidance, the robotic dog integrates vision, memory, and language-based reasoning. It understands voice commands, analyzes camera input in real time, and plans routes autonomously. The developers say this combination allows the robot to behave more like a human responder than a traditional machine.
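The team's code is not published in this article, so the following is only a rough illustration of what a single perceive-reason-act step of that kind could look like. The model interface, prompt wording, and action format below are assumptions for the sketch, not details from the project.

```python
from dataclasses import dataclass


# Hypothetical stand-in for the onboard multimodal model; the team's actual
# model, prompt format, and action vocabulary are not described in the article.
class MultimodalModel:
    def generate(self, prompt: str, image_bytes: bytes) -> str:
        raise NotImplementedError  # replace with a real on-device model


@dataclass
class NavAction:
    kind: str        # e.g. "move", "turn", "stop"
    argument: float  # distance in meters or angle in degrees


def decide_step(model: MultimodalModel, voice_command: str, frame: bytes) -> NavAction:
    """Combine a spoken command and the current camera frame into one decision."""
    prompt = (
        "You are a quadruped robot navigating a disaster site.\n"
        f"Operator command: {voice_command}\n"
        "Reply with one action as '<kind> <value>'."
    )
    reply = model.generate(prompt, frame)  # e.g. "move 1.5"
    kind, value = reply.split(maxsplit=1)
    return NavAction(kind=kind, argument=float(value))
```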
At the core of the system is a memory-driven navigation architecture powered by a custom multimodal large language model. The model interprets visual data captured by onboard cameras and combines it with stored environmental memory to make navigation decisions. This enables the robot to recall previously traveled paths and reuse them, improving efficiency and reducing redundant exploration.
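The article does not spell out how that environmental memory is represented, so the minimal sketch below only illustrates the reuse idea with a made-up, dictionary-backed store; the real system presumably keeps much richer visual and spatial context.

```python
from dataclasses import dataclass, field


@dataclass
class PathMemory:
    """Toy memory of previously traveled routes, keyed by goal description."""
    routes: dict[str, list[tuple[float, float]]] = field(default_factory=dict)

    def remember(self, goal: str, waypoints: list[tuple[float, float]]) -> None:
        self.routes[goal] = waypoints

    def recall(self, goal: str) -> list[tuple[float, float]] | None:
        # Reuse a stored route when the goal matches, avoiding redundant exploration.
        return self.routes.get(goal)


def plan_route(memory: PathMemory, goal: str, explore) -> list[tuple[float, float]]:
    known = memory.recall(goal)
    if known is not None:
        return known               # previously traveled path, reused as-is
    waypoints = explore(goal)      # fall back to fresh exploration or planning
    memory.remember(goal, waypoints)
    return waypoints
```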
A hybrid control structure allows the robot to balance reactive behavior with high-level planning. It can quickly respond to immediate hazards, such as avoiding collisions, while simultaneously reasoning about longer-term navigation goals. According to the research team, this mirrors how humans navigate unfamiliar spaces by combining instinctive reactions with deliberate planning.
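As a hedged illustration of that split (the article gives neither control rates nor safety thresholds, so the constants below are assumptions), a two-tier loop might keep a fast reactive check separate from slower, model-driven replanning:

```python
import time

SAFE_DISTANCE_M = 0.5   # assumed obstacle threshold; not from the article
REPLAN_PERIOD_S = 2.0   # assumed replanning interval; not from the article


def reactive_layer(range_reading_m: float) -> str | None:
    """Fast, instinct-like check run every cycle: stop when an obstacle is close."""
    if range_reading_m < SAFE_DISTANCE_M:
        return "stop"
    return None


def control_loop(read_range, replan, execute):
    """Hybrid loop: reactive safety every iteration, deliberative replanning periodically."""
    plan: list[str] = []
    last_replan = 0.0
    while True:
        override = reactive_layer(read_range())
        if override:
            execute(override)          # immediate hazard response always wins
        else:
            now = time.monotonic()
            if not plan or now - last_replan > REPLAN_PERIOD_S:
                plan = replan()        # slower, model-driven route reasoning
                last_replan = now
            if plan:
                execute(plan.pop(0))
        time.sleep(0.05)               # ~20 Hz reactive cycle (assumed)
```

Keeping the reactive check independent of the planner means a slow or failed planning call never delays collision avoidance, which is one common way such a hybrid structure is realized.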
“Some academic and commercial systems have integrated language or vision models into robotics,” said Sandun Vitharana, an engineering technology master’s student involved in the project. “However, we haven’t seen an approach that leverages MLLM-based memory navigation in the structured way we describe, especially with custom pseudocode guiding decision logic.”
The robot’s navigation system was designed specifically for unstructured and unpredictable environments, such as disaster zones or remote areas. Traditional autonomous navigation methods often struggle in these conditions due to changing layouts, debris, and limited visibility.
The project was led by Vitharana and Sanjaya Mallikarachchi, an interdisciplinary engineering doctoral student, with guidance from Texas A&M faculty. With support from the National Science Foundation, the team explored how multimodal AI models could be deployed at the edge rather than relying on cloud-based processing.
“Moving forward, this kind of control structure will likely become a common standard for human-like robots,” Mallikarachchi said.
Beyond search-and-rescue operations, the researchers see broader potential applications for the technology. The robot’s ability to navigate large, complex spaces could make it useful in hospitals, warehouses, and other industrial facilities. Its memory-based system may also assist people with visual impairments, conduct reconnaissance in hazardous areas, or support exploration tasks where human access is limited.
Dr. Isuru Godage, an assistant professor in the Department of Engineering Technology and Industrial Distribution, emphasized the importance of deploying advanced AI directly on robotic platforms. “The core of our vision is deploying MLLM at the edge, which gives our robotic dog the immediate, high-level situational awareness previously impossible,” Godage said. “Our goal is to ensure this technology is not just a tool, but a truly first responder-ready system for unmapped environments.”
The robot was recently demonstrated at the 22nd International Conference on Ubiquitous Robots, where the team presented experimental results and system design details. The work highlights how advances in multimodal AI are beginning to reshape autonomous robotics, moving systems closer to adaptive, human-like behavior in real-world conditions.