FXB-Z04001 ROS Intelligent Driving Car Development Platform
About
The platform is built for research and development on the mainstream ROS system, using LIDAR, ultrasonic radar, a depth camera, and other mainstream autonomous-driving sensors as the environment perception system. A Raspberry Pi main controller serves as the decision and control system, while the vehicle chassis and an STM32 main control board form the execution layer. Together they realize multi-sensor perception fusion and intelligent driving control, including LIDAR mapping and navigation, visual mapping and navigation, multi-point cruise, LIDAR following, depth-vision following, visual line patrol, ultrasonic radar obstacle avoidance, autonomous navigation and obstacle avoidance, APP image transmission, PS2 wireless gamepad control, voice summoning and control, voice navigation, and sound source localization, enabling the car to perform low-speed autonomous driving and deliver the core teaching functions of automatic driving. It is suitable for theoretical research and hands-on debugging practice in intelligent connected vehicle technology programs at secondary and higher vocational colleges, general universities, and training institutions.

Feature
1. Intelligent driving car body structure: the body is a metal frame with a sprayed finish; the steering system uses the mainstream Ackermann steering structure; the body control system is driven by a mainstream STM32 main control board; the motors are DC motors with AB quadrature encoders under PID speed control (a minimal PID sketch follows this feature list).
2. Bottom-level control system: receives control commands from the ROS system, APP, PS2 gamepad, CAN, serial port, voice module, and other upper-level controllers to drive vehicle execution, including acceleration, deceleration, braking, and steering, while returning the current speed, steering angle, mileage, and position. It automatically controls the throttle, brake, steering, turn signals, gears, etc. (a hypothetical command-frame sketch follows this feature list).
3. Vehicle data collection system: collects data related to autonomous driving and vehicle operation and returns it to the display screen and the APP for display.
4. Voice recognition system: intelligently recognizes voice commands and generates the underlying control commands, enabling voice autonomous navigation, sound source localization, and voice summoning and control.
5. Emergency stop system: a manual button can be pressed in an emergency to exit automatic driving mode and trigger an emergency stop.
6. Vision processing: the depth camera, composed of a binocular (stereo) camera and an RGB camera, captures the obstacle environment in front of the car in real time. Deep-learning processing of this data realizes visual mapping and navigation, depth-vision tracking, RGB visual line patrol, visual target tracking, traffic light and pedestrian recognition, and obstacle avoidance. The camera's mounting height and angle can be adjusted dynamically.
7. LIDAR processing: the LIDAR perception system scans surrounding obstacles through 360 degrees and, from the scanned data, automatically generates a LIDAR point cloud map and a 2D navigation map on the computer, while controlling the car to perform LIDAR multi-point positioning and navigation, dynamic obstacle avoidance, and other automatic driving functions (a waypoint-cruise sketch follows this feature list).
8. Positioning: the chassis controller integrates a nine-axis attitude sensor that collects the car's current attitude and acceleration changes in real time and transmits them to the ROS system for processing and to the APP for display, enabling accurate positioning of the car.
9. Ultrasonic radar obstacle avoidance: ultrasonic radar sensors integrated at the front and rear of the car detect surrounding obstacles in real time; when approaching an obstacle, the car performs active braking, steering, and obstacle avoidance according to its current motion state (a braking-rule sketch follows this feature list).
10. Decision planning: the car detects surrounding obstacles through its environment perception sensors; after analysis by the upper-layer algorithm, it compares the planned trajectory against each obstacle's behavior (moving away, approaching, crossing), makes a decision for each obstacle (ignore, go around, stop), and integrates these decisions to produce the preview distance and speed required for speed planning.
11. Human-computer interaction display: the car connects to a mobile phone APP over Bluetooth or Wi-Fi to display the steering angle, running speed, battery level, PID parameter tuning, camera video, and other information in real time, and the car can be controlled through the APP. Control modes include gravity sensing, joystick, button, and speed control. The throttle and brake commands sent by the control module, together with the values actually executed, are displayed as curves.
12. Provides the complete ROS intelligent car development source code, controller schematics, detailed interface communication protocols, development documentation and videos, and an intelligent car practical training guide.
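The PID speed loop from feature 1 actually runs as C firmware on the STM32; the Python sketch below is only a minimal illustration of the same encoder-feedback idea. The tick count and wheel circumference are assumptions, not the platform's real constants.

```python
# Minimal sketch of the encoder-feedback PID speed loop described in
# feature 1. The real loop runs in C on the STM32; TICKS_PER_REV and
# WHEEL_CIRCUMFERENCE_M are illustrative assumptions.

TICKS_PER_REV = 1560           # assumed AB-encoder ticks per wheel turn
WHEEL_CIRCUMFERENCE_M = 0.565  # assumed for a 180 mm wheel (pi * d)

class SpeedPID(object):
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target_mps, measured_mps, dt):
        """Return a motor command (PWM duty clamped to [-1, 1])."""
        error = target_mps - measured_mps
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-1.0, min(1.0, out))

def ticks_to_speed(delta_ticks, dt):
    """Convert encoder ticks counted over dt seconds into m/s."""
    revs = delta_ticks / float(TICKS_PER_REV)
    return revs * WHEEL_CIRCUMFERENCE_M / dt
```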
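Feature 2's serial link carries framed commands, and the platform ships its own documented protocol; the frame layout below (header byte, field order, checksum) is therefore a purely hypothetical illustration of how such a command might be packed.

```python
# Hypothetical packing of a chassis command frame for the serial link in
# feature 2. The header byte, field layout, and checksum are assumptions;
# the real protocol is in the platform's interface documentation.
import struct

FRAME_HEADER = 0xAA  # assumed start-of-frame marker

def pack_drive_command(speed_mmps, steering_deg_x10):
    """Build a frame: header, int16 speed (mm/s), int16 steering (0.1 deg),
    plus a simple modulo-256 checksum over header and payload."""
    payload = struct.pack('<hh', speed_mmps, steering_deg_x10)
    checksum = (FRAME_HEADER + sum(bytearray(payload))) & 0xFF
    return struct.pack('<B', FRAME_HEADER) + payload + struct.pack('<B', checksum)

# Usage: 500 mm/s forward with a 5.0 degree steering angle
# (the serial device path is also an assumption).
# import serial
# port = serial.Serial('/dev/ttyUSB0', 115200)
# port.write(pack_drive_command(500, 50))
```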
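For the multi-point cruise in feature 7, a standard ROS Melodic navigation stack exposes a move_base action. Assuming the car follows that convention (the waypoint coordinates here are examples), a cruise node can look like this:

```python
#!/usr/bin/env python
# Sketch of the multi-point cruise behaviour from feature 7, assuming the
# car's navigation stack exposes the standard ROS move_base action.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

WAYPOINTS = [(1.0, 0.0), (1.0, 1.5), (0.0, 1.5)]  # example map coordinates

def cruise():
    rospy.init_node('multi_point_cruise')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    for x, y in WAYPOINTS:
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0  # face along +x
        client.send_goal(goal)
        client.wait_for_result()
        rospy.loginfo('Reached waypoint (%.1f, %.1f)', x, y)

if __name__ == '__main__':
    cruise()
```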
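Features 9 and 10 describe distance-based braking decisions. The sketch below shows one way such a guard could be written in ROS; the topic names and the slow-down threshold (anything beyond the spec's 0.5 m braking distance) are assumptions, and the shipped code may structure this differently.

```python
#!/usr/bin/env python
# Sketch of the ultrasonic active-braking rule from features 9-10: scale
# the commanded speed down as the nearest obstacle approaches, and stop
# inside the braking distance. Topic names are assumptions.
import rospy
from sensor_msgs.msg import Range
from geometry_msgs.msg import Twist

STOP_DIST = 0.5  # m, matches the spec's braking distance
SLOW_DIST = 1.5  # m, assumed distance at which braking begins

class UltrasonicGuard(object):
    def __init__(self):
        self.nearest = float('inf')
        rospy.Subscriber('/ultrasonic/front', Range, self.on_range)
        rospy.Subscriber('/cmd_vel_raw', Twist, self.on_cmd)
        self.pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

    def on_range(self, msg):
        self.nearest = msg.range

    def on_cmd(self, cmd):
        # Per feature 10: ignore far obstacles, slow for approaching
        # ones, stop inside the braking distance.
        if self.nearest <= STOP_DIST:
            scale = 0.0
        elif self.nearest < SLOW_DIST:
            scale = (self.nearest - STOP_DIST) / (SLOW_DIST - STOP_DIST)
        else:
            scale = 1.0
        cmd.linear.x *= scale
        self.pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('ultrasonic_guard')
    UltrasonicGuard()
    rospy.spin()
```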

Technical Parameter
1. Body chassis part
Body structure: Aluminum alloy body
Steering structure: Ackermann electronically controlled steering
Braking method: motor encoder braking
Battery: 24 V 20 Ah
Motor: 100 W DC motor with AB encoder
Charger: portable fully intelligent charger, automatic power-off when fully charged
Charging input voltage: 220 V
Size (mm): not less than 465 × 365 × 125 (length × width × height)
Braking distance: ≤ 0.5 m
Vehicle weight: ≥ 20 kg
Maximum load: ≤ 45 kg
Maximum driving speed: ≤ 12 km/h
Wheel size: 180 mm rubber wheels
Electronic control modes: mobile phone APP, PS2 wireless gamepad, CAN, serial port, voice, ROS
Low-level main control chip: STM32F103VET6
 
 
2. Autopilot ROS control part
Hardware platform: Raspberry Pi 4B
CPU: quad-core ARM Cortex-A72
GPU: Broadcom VideoCore VI (32-bit)
OS: Ubuntu 18.04 + ROS Melodic
Memory: 4 GB
USB: 2 × USB 3.0 + 2 × USB 2.0
Number of GPIO pins: 40
Rated power: 15 W
Input voltage: 5 V
 
 
3. Environmental sensing part
3.1. LIDAR:
Brand: Silan
Measuring range: 0.15–12 m radius
Scanning angle: 0–360°
Single measurement time: 0.25 ms
Scanning frequency: 10 Hz
Measuring frequency: 8000 Hz
Dimensions: φ76 mm × 41 mm
Weight: 190 g
3.2. Depth camera:
Brand: LeSee
RGB resolution: 1080p
Depth resolution: 1280 × 1024
Depth field of view: 58.4° × 45.5°
Visible range: 0.6–8 m
Product size: 165 × 40 × 30 mm
Interface type: USB 2.0
Input voltage: 5 V

Basic Configuration
1 body chassis (Ackermann steering mechanism), 2 DC gear motors, 1 steering servo, 1 STM32 main board, 1 set of 24 V power batteries (with battery manager), 1 Raspberry Pi ROS main board, 1 LIDAR, 1 depth camera, 1 voice control module, 1 ultrasonic radar host, 4 ultrasonic radar probes, 1 PS2 wireless remote control, 1 set of connecting harnesses, 1 USB flash drive (with complete development data), 1 Bluetooth module, 1 CAN analyzer, 1 24 V charger.