The Autonomous Mobile Robot Group

Department of Computer Science
Northwestern University


Introduction

The state of the art in robotic architectures is the multi-tiered approach, e.g. (Arkin and Balch 97), (Firby et al. 98), (Connell 92). Such systems combine the expressive power of symbolic systems with the significant advantages of a behavior-based approach. However, tiered architectures face the model-coherency problem. Symbolic systems reason over a world model, a database of logical assertions that must be updated from the sensory subsystems. Unlike the behavior-based layer, however, the symbolic system is not tightly coupled to the perceptual system. In general, the programmer is responsible for adding rules to the domain model to ensure that epistemic updates are performed appropriately.

Extending this issue to multi-robot systems is straightforward. As mentioned above, the world model must be updated frequently to reflect the real world. If an agent is participating in a team, the source of these updates can be either the agent's own sensory systems or a communiqué from another agent. Some agents can observe events that others cannot, so they must somehow inform the other team members of them. Failing to do so will eventually cause the team members to diverge in their understanding of the current situation. For a team to behave coherently, all members need a uniform understanding of the situation, i.e. a consistent world model. Under this viewpoint, the other agents in the team are essentially additional sensors for the current agent.

This leads back to the model-coherency issue, but now the epistemic rules must reside on both the "sensor" and the current reasoning system. Suppose Agent1 observes an event, e.g. begins to track an intruder. Agent1 now has to keep the rest of the team informed of the intruder's position, so its reasoning system must contain rules indicating that this is a significant team event and that the team must be updated continually by communication. On the other end, the other agents must contain rules that update their world models from the communication buffer, which acts as another "sensor". As in the single-agent case, placing these special rules in the domain theory is usually the programmer's responsibility. This situation leads to expensive and complicated communication protocols, since significant events have to be tagged as such and transmitted in a timely and reliable manner. It is perhaps for this reason that few multi-robot systems use a tiered approach.
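The model-coherency problem above can be made concrete with a small sketch. All names here (WorldModel, TEAM_SIGNIFICANT, the update handlers) are illustrative, not identifiers from any actual tiered architecture; the point is that the programmer must hand-write both the rule that tags an event as team-significant and the rule that folds a communiqué back into the world model.

```python
# Illustrative sketch of the model-coherency problem: a symbolic world
# model updated both from local sensors and from teammates' communiqués.
# All names are hypothetical.

class WorldModel:
    """A world model as a database of logical assertions."""
    def __init__(self):
        self.assertions = set()

    def assert_fact(self, fact):
        self.assertions.add(fact)

# Hand-written epistemic rule: the programmer must decide which events
# are team-significant and therefore must be rebroadcast.
TEAM_SIGNIFICANT = {"tracking-intruder"}

def on_local_observation(model, fact, outbox):
    """Update from the agent's own sensors; queue significant events."""
    model.assert_fact(fact)
    if fact[0] in TEAM_SIGNIFICANT:
        outbox.append(fact)          # must be transmitted to the team

def on_communique(model, fact):
    """Update from the communication buffer, acting as another sensor."""
    model.assert_fact(fact)

agent1, agent2, outbox = WorldModel(), WorldModel(), []
on_local_observation(agent1, ("tracking-intruder", "pos-3-4"), outbox)
for fact in outbox:                  # simulated transmission
    on_communique(agent2, fact)
assert agent2.assertions == agent1.assertions   # models stay coherent
```

Note that if either hand-written rule is missing, the two models silently diverge; nothing in the architecture itself enforces coherency.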

HIVEMind

I am currently working with Ian Horswill on extending the role-passing architecture (Horswill 98) to a multi-robot platform. In addition to allowing fast inference, the role-passing representation allows for very compact storage of a robot’s knowledge base.  Since each predicate can be stored in a single machine word, the entire knowledge base of a typical contemporary robot can fit in a single UDP packet.  It is therefore practical for robots to share information by periodically broadcasting the entire knowledge base of every robot.  By doing so, we provide each robot with transparent access to the sensory data and inferences of every other robot, a kind of “group mind.”
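The compactness claim can be sketched as follows. The predicate names and the exact word layout here are assumptions for illustration; the actual role-passing representation is described in (Horswill 98). The idea is simply that each predicate's truth values over role bindings fit in one machine word, so the whole knowledge base serializes to a few bytes.

```python
import struct

# Sketch (assumed layout): each predicate's truth values over roles are
# stored as one 32-bit word, one bit per role binding, so the whole
# knowledge base packs into a few words -- well under one UDP datagram.

PREDICATES = ("see-object", "goal-near", "visited")

kb = {
    "see-object": 0b00000101,    # bit i set => see-object holds for role i
    "goal-near":  0b00000001,
    "visited":    0b00011010,
}

def pack_kb(kb):
    """Serialize the knowledge base as fixed-order 32-bit words."""
    return struct.pack("!%dI" % len(PREDICATES),
                       *(kb[p] for p in PREDICATES))

def unpack_kb(payload):
    """Recover the predicate bit-vectors from a received packet."""
    words = struct.unpack("!%dI" % len(PREDICATES), payload)
    return dict(zip(PREDICATES, words))

payload = pack_kb(kb)
assert len(payload) == 12            # 3 predicates x 4 bytes each
assert unpack_kb(payload) == kb
```

At this size, broadcasting the full knowledge base every cycle costs less than negotiating which facts to send.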

We have developed HIVEMind (Highly-Interconnected VErbose Mind), an architecture for cooperative robot teams based on knowledge-base broadcast.  Its simplified communication and coordination model allows the team to efficiently maintain a shared situational awareness with hard real-time guarantees; when a team member detects a contingency, other members immediately share its awareness and respond in O(1) time.

The above figure shows a HIVEMind configuration for a two-robot team.  Each team member has its own inference network that is driven both by its own sensory system and by the sensory data of the other team members.   The entire HIVEMind can be considered a single, parallel control network whose components happen to be distributed between the different robot bodies being controlled.  Wires crossing between bodies are simulated using the RF broadcast mechanism, so that each member of the team is “connected” to every other member in a web-like structure of virtual wires. 
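A "virtual wire" crossing bodies amounts to OR-ing a robot's own bit-vector for a predicate with the bits most recently broadcast by every other body. The following minimal sketch (with illustrative names) shows the merge that feeds each robot's inference network:

```python
# Sketch of a virtual wire: the input to a robot's inference network for
# one predicate is the OR of its own sensor-derived bits and the bits in
# the last packet received from each teammate. Names are illustrative.

def merged_input(local_bits, remote_snapshots):
    """Combine one predicate's bit-vector across all team members."""
    bits = local_bits
    for snapshot in remote_snapshots:    # last broadcast from each body
        bits |= snapshot
    return bits

# Robot A sees the object near landmark 2; robot B near landmark 5.
a_sees, b_sees = 0b000100, 0b100000
# From A's point of view, B's broadcast completes the picture:
assert merged_input(a_sees, [b_sees]) == 0b100100
```

Because the merge is a bitwise OR over already-received packets, a contingency detected by one member is visible to all others on the next cycle, which is the source of the O(1) response time.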

It may seem inefficient for each robot to have its own separate copy of the inference network.  However, to have a single robot perform each inference and share the results would require much more complicated coordination protocols (Cohen and Levesque 91) analogous to the multi-phase commit protocols used in distributed database systems.  Since communication bandwidth is a scarce resource and inference in our system is essentially free, it is more efficient for HIVEMind robots to perform redundant computation.

SEEKERs

Systematic Environment Exploring tracKERs (SEEKERs) is a multi-robot project that implements the HIVEMind architecture on physical robots. The goal of the project is to build robotic teams that can track objects in an indoor environment. Team members divide the environment amongst themselves and search it systematically. There is an extra "virtual" team member: a command console controlled by a human user. The human user is responsible for setting the system's variables appropriately, i.e. binding the roles to appropriate values, so the team can perform its task. There are two levels of development:

  • Level 1 -- Finding a static object in a known environment

  • Level 2 -- Tracking a human in a known environment

Level 1 has been completed successfully and is described in (Khoo and Horswill 01). Our robotic team of Kineso and Ergasio (pictured above) can locate a brightly colored object in an office environment. The robots use a topological (i.e. landmark-based) map of the office and divide the search space amongst themselves. The task can be accomplished by sharing the role bindings for each color, the bit-vector for the goal(near(X)) predicate, the bit-vector for the see-object(X) predicate, a location(X) function, which gives the two nearest landmarks (if known) to any role X, and a bit-vector specifying the set of landmarks that the robot has personally visited.
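The shared state for the Level-1 task can be sketched as a small record per robot; the field names below are assumptions for illustration, not the original identifiers. Dividing the search space then reduces to masking out every landmark any team member has already visited:

```python
# Illustrative sketch of the per-robot state shared in the Level-1 task.
# Field names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class SharedState:
    role_colors: dict = field(default_factory=dict)  # role -> color binding
    goal_near: int = 0     # bit-vector for the goal(near(X)) predicate
    see_object: int = 0    # bit-vector for the see-object(X) predicate
    location: dict = field(default_factory=dict)     # role -> two nearest landmarks
    visited: int = 0       # bit-vector of landmarks this robot has visited

def unsearched(all_landmarks_mask, team_states):
    """Landmarks that no team member has visited yet."""
    searched = 0
    for state in team_states:
        searched |= state.visited
    return all_landmarks_mask & ~searched

kineso  = SharedState(visited=0b0011)
ergasio = SharedState(visited=0b0100)
assert unsearched(0b1111, [kineso, ergasio]) == 0b1000
```

Since every robot receives every other robot's visited-landmark vector each broadcast cycle, each one can compute the remaining search space locally with no negotiation.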

Figure 1: Two robots setting out on search task
Figure 2: One of the robots locating the desired object

The Command Console for the HIVEMind team is based on the Cerebus project (Horswill et al. 00). It provides a natural language interface for the human user and accepts commands such as "find green ball" or "find red ball". The desired color is bound to the patient role and transmitted to the members of the team. The console appears to the other team members as just another robot, albeit one that does no physical work. The Command Console also provides status information in display windows based on the broadcast knowledge it receives from other team members. Using this interface, the human commander can inject new information into the team, as well as receive data about the current state of the "group mind".
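A minimal sketch of the console's command handling, under the assumption that parsing reduces to extracting a known color word (the function and vocabulary below are hypothetical, not Cerebus code): the resulting role binding is then broadcast exactly like any other piece of the knowledge base.

```python
# Hypothetical sketch: a phrase like "find green ball" binds the color
# to the patient role, which is then broadcast to the team.

KNOWN_COLORS = {"red", "green", "blue"}

def parse_command(text):
    """Return a role binding for a 'find <color> <object>' command."""
    words = text.lower().split()
    if words and words[0] == "find":
        color = next((w for w in words[1:] if w in KNOWN_COLORS), None)
        if color is not None:
            return {"patient": color}    # role binding to broadcast
    return None                          # not a recognized command

assert parse_command("find green ball") == {"patient": "green"}
assert parse_command("wander") is None
```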

Figure 3: Screenshot of SEEKERs status display.
Thin yellow lines -- unsearched corridors
Gray lines -- searched corridors
Thick green line -- location of desired object

Current task list:

  • Modifying SEEKERs to accept reified plans

  • Having SEEKERs perform multiple different tasks (not just search) to demonstrate the viability of reified plans

  • Developing an algorithm for finding chokepoints

Publications

A. Khoo and I. Horswill (2001) HIVEMind: Grounding Inference in Cooperative Activity. AAAI 2001 Fall Symposium on Anchoring Symbols to Sensor Data in Single and Multiple Robot Systems. [Zipped Postscript]

References

R.C. Arkin and T.R. Balch (1997) AuRA: Principles and practice in review. Journal of Experimental and Theoretical Artificial Intelligence, 9(2).

P. Bonasso, R.J. Firby, E. Gat, and D. Kortenkamp (1997). Experiences with an Architecture for Intelligent Reactive Agents. In Journal of Theoretical and Experimental Artificial Intelligence, special issue on software architectures for physical agents, Hexmoor, Horswill and Kortenkamp, eds., 9:2-3. Taylor and Francis, Ltd.

J.H. Connell (1992) SSS: A hybrid architecture applied to robot navigation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 92), pages 2719-2724, Nice, France. IEEE Press, New York, NY.

I. Horswill (1998) Grounding Mundane Inference in Perception. In Autonomous Robots, 5, pp.63-77.

I. Horswill, R. Zubek, A. Khoo, C. Le, and S. Nicholson (2000). The Cerebus Project. In Proceedings of the 2000 AAAI Fall Symposium on Parallel Cognition and Embodied Agents.


