Research Objective
To develop a cost-effective stereo vision system for autonomous indoor robot navigation that avoids obstacles and maps the environment, fusing the vision output with ultrasound sensor data.
Research Findings
The paper successfully implements a cost-effective stereo vision system for indoor robot navigation, achieving reliable obstacle avoidance and 3D environment mapping. Fusion with ultrasound sensors enhances robustness. Future work should focus on extending disparity range, optimizing algorithms for dedicated processors, and adapting for outdoor use with GPS.
Research Limitations
Stereo vision fails in low-texture environments (e.g., plain walls or glass surfaces) and degrades under poor illumination. The approach is processor-intensive, so it requires adequate hardware, and handling the resulting point cloud data can be challenging. Dynamic obstacles may not be fully detected by vision alone, so the system relies on the ultrasound sensors to interrupt motion when such obstacles appear.
1: Experimental Design and Method Selection:
The study uses a stereo vision system with two webcams for depth perception, combined with ultrasound and infrared sensors for obstacle avoidance and navigation. Algorithms include stereo calibration, block matching for disparity calculation, and PID control for robot movement.
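As a concrete illustration of the block-matching step, a minimal OpenCV sketch is given below; it assumes rectified input frames, and the window size, disparity range, focal length, and baseline are illustrative values rather than the paper's reported settings.

```cpp
// Minimal sketch: block-matching disparity from a rectified stereo pair.
// numDisparities, blockSize, focal length, and baseline are assumed values.
#include <opencv2/opencv.hpp>

cv::Mat disparityFromPair(const cv::Mat& leftBGR, const cv::Mat& rightBGR)
{
    cv::Mat leftGray, rightGray;
    cv::cvtColor(leftBGR,  leftGray,  cv::COLOR_BGR2GRAY);
    cv::cvtColor(rightBGR, rightGray, cv::COLOR_BGR2GRAY);

    // numDisparities must be a multiple of 16; blockSize must be odd.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);

    cv::Mat disp16;                           // fixed-point disparities, scaled by 16
    bm->compute(leftGray, rightGray, disp16);

    cv::Mat disp;
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);
    return disp;                              // disparity in pixels (CV_32F)
}

// Depth from disparity via Z = f * B / d, with focal length f in pixels and
// baseline B in metres taken from the stereo calibration (values assumed here).
float depthFromDisparity(float disparityPx, float focalPx = 700.0f, float baselineM = 0.06f)
{
    return disparityPx > 0.0f ? focalPx * baselineM / disparityPx : 0.0f;
}
```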
2: Sample Selection and Data Sources:
The robot navigates unknown indoor environments; data comes from stereo image pairs, ultrasound sensors, and odometric feedback.
3: List of Experimental Equipment and Materials:
Includes two CMOS webcams, Arduino boards, ultrasound sensors, infrared range finders, a digital compass, motors with encoders, and a PC for processing.
4: Experimental Procedures and Operational Workflow:
Calibrate the stereo rig using a chessboard pattern, capture image pairs, compute disparity maps, segment obstacles based on depth, fuse the result with ultrasound sensor data and odometric feedback for navigation decisions, and perform 3D reconstruction of the environment.
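The chessboard calibration and rectification step could be sketched as follows; the board geometry (9x6 inner corners, 25 mm squares) and the per-camera-calibrate-then-fix-intrinsics workflow are assumptions for illustration, not details taken from the paper.

```cpp
// Minimal sketch: chessboard-based stereo calibration and rectification maps.
// Board geometry and flags are assumptions, not the paper's exact settings.
#include <opencv2/opencv.hpp>
#include <vector>

void calibrateStereoRig(const std::vector<cv::Mat>& leftImgs,
                        const std::vector<cv::Mat>& rightImgs,
                        cv::Mat& map1x, cv::Mat& map1y,
                        cv::Mat& map2x, cv::Mat& map2y, cv::Mat& Q)
{
    const cv::Size boardSize(9, 6);        // inner corners per row/column (assumed)
    const float squareSize = 0.025f;       // chessboard square edge in metres (assumed)
    const cv::Size imageSize = leftImgs.front().size();

    // 3D corner positions of one board view (Z = 0 plane).
    std::vector<cv::Point3f> board;
    for (int y = 0; y < boardSize.height; ++y)
        for (int x = 0; x < boardSize.width; ++x)
            board.emplace_back(x * squareSize, y * squareSize, 0.0f);

    std::vector<std::vector<cv::Point3f>> objPts;
    std::vector<std::vector<cv::Point2f>> imgPtsL, imgPtsR;
    for (size_t i = 0; i < leftImgs.size(); ++i) {
        std::vector<cv::Point2f> cL, cR;
        bool okL = cv::findChessboardCorners(leftImgs[i], boardSize, cL);
        bool okR = cv::findChessboardCorners(rightImgs[i], boardSize, cR);
        if (okL && okR) {                  // keep only pairs where both views see the board
            imgPtsL.push_back(cL);
            imgPtsR.push_back(cR);
            objPts.push_back(board);
        }
    }

    // Calibrate each camera, then solve for the rig geometry with intrinsics fixed.
    cv::Mat K1, D1, K2, D2, R, T, E, F;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objPts, imgPtsL, imageSize, K1, D1, rvecs, tvecs);
    cv::calibrateCamera(objPts, imgPtsR, imageSize, K2, D2, rvecs, tvecs);
    cv::stereoCalibrate(objPts, imgPtsL, imgPtsR, K1, D1, K2, D2,
                        imageSize, R, T, E, F, cv::CALIB_FIX_INTRINSIC);

    // Rectification: Q reprojects disparities to 3D; the maps rectify each new frame.
    cv::Mat R1, R2, P1, P2;
    cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T, R1, R2, P1, P2, Q);
    cv::initUndistortRectifyMap(K1, D1, R1, P1, imageSize, CV_32FC1, map1x, map1y);
    cv::initUndistortRectifyMap(K2, D2, R2, P2, imageSize, CV_32FC1, map2x, map2y);
}
```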
5: Data Analysis Methods:
Image processing uses OpenCV, point-cloud filtering uses PCL, and obstacle avoidance and mapping use custom algorithms; performance metrics include per-frame processing time and navigation accuracy.
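A minimal sketch of the kind of PCL filtering mentioned above, assuming a voxel-grid downsample followed by statistical outlier removal; the leaf size and outlier thresholds are illustrative assumptions.

```cpp
// Minimal sketch: PCL point-cloud clean-up before mapping.
// Leaf size and outlier thresholds are illustrative assumptions.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/statistical_outlier_removal.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
filterCloud(const pcl::PointCloud<pcl::PointXYZ>::Ptr& raw)
{
    // Voxel-grid downsampling (1 cm cells) keeps the cloud small enough
    // for a low-power onboard PC.
    pcl::PointCloud<pcl::PointXYZ>::Ptr down(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(raw);
    grid.setLeafSize(0.01f, 0.01f, 0.01f);
    grid.filter(*down);

    // Statistical outlier removal suppresses speckle caused by disparity errors.
    pcl::PointCloud<pcl::PointXYZ>::Ptr clean(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
    sor.setInputCloud(down);
    sor.setMeanK(50);
    sor.setStddevMulThresh(1.0);
    sor.filter(*clean);
    return clean;
}
```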
- CMOS web camera (640x480 resolution, USB 2.0 UVC interface): captures the stereo image pairs used for depth perception in the vision system.
- Arduino board (ATmega328 microcontroller; Arduino): serves as the embedded system core for sensor data collection and motor control.
- Ultrasound sensor: used for obstacle detection and for fusion with the vision data during navigation.
- Infrared range finder: monitors vertical depth to avoid falls from elevated surfaces.
- Digital compass module (three-axis): provides the heading direction for odometric feedback and PID control (a heading-control sketch follows this list).
- Geared motor (45 RPM): powers the robot's wheels.
- Optical encoder (400 pulses per revolution): tracks the distance traveled by counting wheel rotations.
- Motor driver: drives the motors based on signals from the Arduino.
- USB to USART serial converter module: provides communication between the Arduino boards and the onboard PC.
- Intel Atom processor board (1.6 GHz; Intel): runs the stereo vision algorithms that produce the navigation decisions.
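A minimal sketch of a compass-driven PID heading controller of the kind referenced in the equipment list; the gains, output range, and angle wrap-around handling are assumptions for illustration, not the paper's controller parameters.

```cpp
// Minimal sketch: PID heading control using the compass reading as feedback.
// Gains, output range, and loop timing are assumed, not the paper's values.
#include <algorithm>

struct HeadingPid {
    double kp = 2.0, ki = 0.1, kd = 0.5;    // assumed gains
    double integral = 0.0, prevError = 0.0;

    // targetDeg/headingDeg in degrees, dt in seconds; returns a differential
    // steering correction to be split across the left and right wheel motors.
    double update(double targetDeg, double headingDeg, double dt)
    {
        double error = targetDeg - headingDeg;
        while (error > 180.0)  error -= 360.0;   // wrap so the robot turns the short way
        while (error < -180.0) error += 360.0;

        integral += error * dt;
        const double derivative = (error - prevError) / dt;
        prevError = error;

        const double out = kp * error + ki * integral + kd * derivative;
        return std::max(-255.0, std::min(255.0, out));  // bounded for an 8-bit PWM driver
    }
};
```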