
Intelligent Robot Guidance in Fixed External Camera Network for Navigation in Crowded and Narrow Passages


Autonomous indoor service robots navigate to specific areas through the same passages used by people. These robots are equipped with visual sensors and laser- or sonar-based range-estimation sensors to avoid collisions with obstacles, people, and other moving robots. However, these sensors have a limited range and are often installed at a low height (usually near the robot base), which limits the detection of far-off obstacles.

In addition, these sensors face forward, so the robot is often 'blind' to objects (e.g., people and other robots) moving behind it, which increases the chance of collision. In places like warehouses, the passages are often narrow, which can cause deadlocks. We propose to use a network of external cameras fixed on the ceiling (e.g., surveillance cameras) to guide the robots by informing them about moving obstacles behind them and in far-off regions. This gives the robot a 'bird's-eye view' of the navigation space, enabling it to make decisions in real time and avoid obstacles efficiently.

The camera sensor network is also able to notify the robots about moving obstacles around blind turns. A mutex-based resource-sharing scheme in the camera sensor network is proposed, which allows multiple robots to intelligently share narrow passages through which only one robot or person can pass at a given time. Experimental results in simulated and real scenarios show that the proposed method is effective for robot navigation in crowded and narrow passages.
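The mutex-based sharing idea can be sketched in a few lines: a passage is guarded by a lock, and a robot may enter only if no other robot currently holds it. This is a minimal illustration; the class and method names below are assumptions, not the paper's implementation.

```python
import threading

class NarrowPassage:
    """Illustrative mutex guard for a narrow passage (names are assumptions)."""
    def __init__(self, passage_id):
        self.passage_id = passage_id
        self._lock = threading.Lock()
        self.occupant = None

    def try_enter(self, robot_id):
        # Grant access only if no other robot currently holds the passage.
        if self._lock.acquire(blocking=False):
            self.occupant = robot_id
            return True
        return False

    def leave(self, robot_id):
        # Release the passage only by the robot that holds it.
        if self.occupant == robot_id:
            self.occupant = None
            self._lock.release()

passage = NarrowPassage("P1")
assert passage.try_enter("robot_A") is True
assert passage.try_enter("robot_B") is False  # passage busy
passage.leave("robot_A")
assert passage.try_enter("robot_B") is True
```

In the paper's setting this arbitration would live in the camera sensor network rather than in a single process; the sketch only shows the mutual-exclusion core.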


In this section we describe the architecture of the system and define the terms used in the rest of the paper. It is convenient to represent an environment comprising cameras, processing boards, pathways, and the direction of flow of people in the passages as a node map, which is essentially a directed graph with nodes and links.
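Such a node map can be stored as a simple adjacency list, where each link carries the allowed direction of flow. The node names and layout below are illustrative, not the paper's actual map.

```python
# Minimal directed-graph node map (structure assumed, not from the paper).
# Keys are nodes (camera/processing locations); values are outgoing links.
node_map = {
    "N1": ["N2"],          # one-way passage from N1 to N2
    "N2": ["N1", "N3"],    # bidirectional with N1, one-way to N3
    "N3": [],              # dead end
}

def reachable(node_map, start):
    """Return all nodes reachable from `start` following link directions."""
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(node_map.get(n, []))
    return seen

assert reachable(node_map, "N1") == {"N1", "N2", "N3"}
assert reachable(node_map, "N3") == {"N3"}
```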


The resource allocator maintains a database of the power required by the robots to perform various tasks. Each task is given a unique ID (Ti) and a task priority (TP). Tasks related to security, such as surveillance and patrolling, are given high priority, whereas tasks such as cleaning are given lower priority, as summarized in Table 1. A robot records its battery status before starting a task and again when the task is finished.
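A minimal sketch of such a task database follows; the task names mirror the examples above, but the specific IDs, priority values, and battery readings are assumptions for illustration.

```python
# Illustrative task database; IDs and priority values are assumptions.
# Higher TP means higher priority, matching the security-first ordering above.
tasks = {
    "T1": {"name": "surveillance", "TP": 3},
    "T2": {"name": "patrolling",   "TP": 3},
    "T3": {"name": "cleaning",     "TP": 1},
}

def battery_used(before, after):
    """Power consumed by a task, from battery readings before and after it."""
    return before - after

# Security tasks outrank cleaning in this scheme.
highest = max(tasks, key=lambda t: tasks[t]["TP"])
assert tasks[highest]["TP"] == 3
assert battery_used(95.0, 88.5) == 6.5
```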


Figure 3. Flowchart of path allocation to robots


Figure 3 shows the flowchart of narrow-path allocation. Each camera node has image-processing modules to detect motion, extract blobs, and match templates. If a robot is detected and there is a request from the robot, a handshake takes place between the robot and the camera node. To handle message-loss scenarios, the node tries to receive the message several times. A sample request message is shown in Listing 1.
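Listing 1 itself is not reproduced on this page. The sketch below illustrates one way the retried receive could work, using a hypothetical JSON request format; the field names and retry count are assumptions, not the paper's protocol.

```python
import json

# Hypothetical request format; the actual fields of Listing 1 are not shown here.
request = json.dumps({"robot_id": "R1", "type": "PATH_REQUEST", "node": "N2"})

def receive_with_retries(recv_fn, retries=3):
    """Call recv_fn up to `retries` times to tolerate lost messages."""
    for _ in range(retries):
        data = recv_fn()
        if data is not None:
            return data
    return None  # all attempts failed

# Simulated flaky channel: the first attempt yields nothing, the second succeeds.
attempts = iter([None, request.encode()])
msg = receive_with_retries(lambda: next(attempts, None))
assert msg is not None and json.loads(msg)["type"] == "PATH_REQUEST"
```

In practice `recv_fn` would wrap a socket receive with a timeout; decoupling the retry loop from the transport keeps the sketch testable.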


Figure 4. Experiment setup. (a) Node comprising of a Raspberry Pi board with camera; (b) ‘T’ shaped experimental passage; (c) Graph representation


In our implementation, a node consisted of a Raspberry Pi board, which features a 700 MHz low-power ARM11 processor with 512 MB of SDRAM, and an attached webcam. The board has a 10/100BaseT Ethernet socket but no Wi-Fi capability, so we used an external Wi-Fi adapter. Each node was assigned a unique IP address (as shown in Figure 4c) and could communicate with the other nodes and with the robot in its vicinity.


Our results show that robots can benefit from the external sensor networks in which they operate. Vision is a powerful source of information, and most public places such as hospitals, universities, and airports already have large networks of surveillance cameras installed. Therefore, there is no need to install new infrastructure, which brings cost benefits.

The main contribution of the proposed idea is that robots operating in the sensor network are no longer limited to the specifications of their onboard sensors. Rather, they can access rich information from the sensor network for better navigation. The proposed idea is not limited to vision sensors: robots can access a wide range of relevant information from different types of sensors to perform their tasks more efficiently.

Source: Hokkaido University
Authors: Abhijeet Ravankar | Ankit Ravankar | Yukinori Kobayashi | Takanori Emaru

