FXB-Z04001 ROS Intelligent Driving Car Development Platform
i. About the Product
The platform is built on the current mainstream ROS system for research and development. Mainstream autonomous-driving sensors, including LIDAR, ultrasonic radar, and a depth camera, form the environment perception system; a Jetson Nano master controller serves as the decision and control system; and the body chassis with its STM32 main control board forms the execution layer. Together they realize multi-sensor perception fusion and intelligent driving control, such as LIDAR mapping and navigation, visual mapping and navigation, multi-point cruise, LIDAR following, depth-vision following, visual inspection, ultrasonic obstacle avoidance, autonomous navigation with obstacle avoidance, APP image transmission, PS2 wireless gamepad control, voice summoning and control, and voice navigation with sound source localization, so that the car achieves low-speed automated driving and delivers the core teaching functions of autonomous driving. It is suitable for the teaching needs of theoretical study and hands-on debugging of intelligent connected vehicle technology in secondary and higher vocational technical colleges, general universities, and training institutions.

ii. Features

1. Intelligent driving car body structure: the car body uses a spray-coated metal frame; the steering system uses the mainstream Ackermann steering geometry; the body control system uses the mainstream STM32 main control board; and motor control uses PID speed control of DC motors with AB (quadrature) encoders, as sketched below.
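
To illustrate the PID speed-control idea, here is a minimal Python sketch. The gains, the 100 Hz loop rate, and the duty-cycle output range are illustrative assumptions; the real loop runs in the STM32 firmware.

    # Minimal PID speed-loop sketch (illustrative; the production loop
    # runs on the STM32 main control board, not in Python).
    class SpeedPID(object):
        def __init__(self, kp=1.2, ki=0.4, kd=0.02, dt=0.01):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, target_mps, measured_mps):
            """Return a motor duty cycle in -1.0..1.0 from the speed error."""
            error = target_mps - measured_mps
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            out = (self.kp * error + self.ki * self.integral
                   + self.kd * derivative)
            return max(-1.0, min(1.0, out))  # clamp to a valid duty range

    # Called at 100 Hz with the wheel speed derived from AB encoder counts:
    pid = SpeedPID()
    duty = pid.update(target_mps=0.5, measured_mps=0.42)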

2. Bottom-layer control system: receives control commands from upper-level systems such as the ROS system, APP, PS2 gamepad, CAN bus, serial port, and voice module, and executes them on the vehicle (acceleration, deceleration, braking, steering, and so on). It returns the current vehicle speed, steering angle, mileage, and position, and automatically controls the throttle, brake, steering, turn signals, gears, etc. A command sketch follows.
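
A minimal sketch of how an upper-level ROS node might stream commands to the chassis. It uses the conventional /cmd_vel topic and geometry_msgs/Twist message; the platform's actual topic names may differ, so treat them as assumptions.

    #!/usr/bin/env python
    # Sketch: stream velocity/steering commands to the chassis from ROS.
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node("chassis_command_demo")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(20)  # 20 Hz command stream

    cmd = Twist()
    cmd.linear.x = 0.3   # forward speed in m/s (default cruise is 0.5 m/s)
    cmd.angular.z = 0.2  # yaw rate; the STM32 maps this to Ackermann steering

    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()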

3. Vehicle data collection system: collects data related to automated driving and vehicle operation and sends it back to the display screen and APP for display.

4. Voice recognition system: intelligently recognizes voice commands and generates the underlying control commands, enabling voice-based autonomous navigation, sound source localization, and voice summoning and control, as sketched below.
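
A minimal sketch of the keyword-to-command mapping idea. The command vocabulary and the publish_cmd helper are hypothetical placeholders; the shipped voice module performs the actual recognition.

    # Sketch: translate recognized phrases into low-level motion commands.
    VOICE_COMMANDS = {
        "forward": (0.3, 0.0),   # (linear m/s, angular rad/s)
        "back":    (-0.3, 0.0),
        "left":    (0.2, 0.5),
        "right":   (0.2, -0.5),
        "stop":    (0.0, 0.0),
    }

    def handle_voice(text, publish_cmd):
        """Look up a recognized phrase and forward it as a motion command."""
        motion = VOICE_COMMANDS.get(text.strip().lower())
        if motion is not None:
            publish_cmd(*motion)  # e.g. forward to the /cmd_vel publisher

    handle_voice("stop", publish_cmd=lambda v, w: None)  # usage example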

5. Vision processing: the depth camera, which combines a binocular (stereo) camera and an RGB camera, captures the obstacle environment in front of the car in real time; deep learning algorithms then process the images to realize visual mapping and navigation, depth-vision tracking, RGB line patrol, visual target tracking, and traffic light and pedestrian recognition with obstacle avoidance. The camera's mounting height and angle can be adjusted dynamically. A line-patrol sketch follows.
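
A minimal OpenCV sketch of the RGB line-patrol step: threshold the line color, find its centroid in the near field, and report a normalized steering offset. The HSV bounds and camera index are illustrative assumptions, not the platform's shipped values.

    # Sketch: estimate the line's horizontal offset for a steering controller.
    import cv2
    import numpy as np

    def line_offset(frame_bgr):
        """Return the line's normalized offset in -1..1, or None if lost."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array([20, 100, 100]),
                           np.array([35, 255, 255]))  # assumed line color
        roi = mask[int(mask.shape[0] * 0.7):, :]  # near field only
        m = cv2.moments(roi)
        if m["m00"] == 0:
            return None  # line lost
        cx = m["m10"] / m["m00"]
        half = roi.shape[1] / 2.0
        return (cx - half) / half

    cap = cv2.VideoCapture(0)  # assumed camera index
    ok, frame = cap.read()
    if ok:
        offset = line_offset(frame)  # feed into a steering P-controller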

6. LIDAR processing: the LIDAR perception system scans surrounding obstacles through 360 degrees; from the scan data, the computer automatically generates a LIDAR point cloud map and a 2D navigation map, while controlling the car to perform LIDAR multi-point positioning and navigation, dynamic obstacle avoidance, and other automated driving functions. A multi-point cruise sketch follows.
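
A minimal sketch of multi-point cruise on the standard ROS navigation stack, sending successive goals to move_base via actionlib. The waypoints and the map frame name are example values.

    #!/usr/bin/env python
    # Sketch: cruise through a list of waypoints with move_base.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node("multi_point_cruise")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    waypoints = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.0, 0.0)]  # x, y (m)

    for x, y in waypoints:
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = "map"
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0  # keep heading simple
        client.send_goal(goal)
        client.wait_for_result()  # block until this waypoint is reached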

7. Positioning: the chassis controller integrates a nine-axis attitude sensor (IMU) that collects the car's current attitude and acceleration changes in real time and transmits them back to the ROS system for processing and to the APP for display, achieving accurate positioning of the car. A reader sketch follows.
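
A minimal sketch of reading the IMU on the ROS side through the standard sensor_msgs/Imu message; the /imu topic name is an assumption.

    #!/usr/bin/env python
    # Sketch: log orientation and acceleration from the nine-axis IMU.
    import rospy
    from sensor_msgs.msg import Imu

    def on_imu(msg):
        # Orientation quaternion and linear acceleration as reported upstream
        rospy.loginfo("qz=%.3f ax=%.2f",
                      msg.orientation.z, msg.linear_acceleration.x)

    rospy.init_node("imu_listener")
    rospy.Subscriber("/imu", Imu, on_imu)
    rospy.spin()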

8. Decision planning: the car detects surrounding obstacles through its environment perception sensors; the upper-layer algorithm then compares each obstacle's relationship to the planned trajectory (moving away, approaching, crossing), makes a decision for each obstacle (ignore, go around, stop), and fuses these decisions into the preview distance and speed required for speed planning, as sketched below.
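
A minimal sketch of that per-obstacle decision rule and the fusion step. All thresholds are illustrative assumptions; the platform's actual planner is not published here.

    # Sketch: classify obstacles and fuse decisions into a speed plan.
    def decide(distance_m, lateral_offset_m, lane_half_width_m=0.3):
        """Return 'ignore', 'go_around', or 'stop' for one obstacle."""
        if abs(lateral_offset_m) > lane_half_width_m:
            return "ignore"      # clear of the planned path
        if distance_m > 1.5:
            return "go_around"   # room to replan around it
        return "stop"            # too close: brake

    def plan_speed(decisions, distances, cruise_mps=0.5):
        """Fuse per-obstacle decisions into (preview distance, speed)."""
        stops = [d for dec, d in zip(decisions, distances) if dec == "stop"]
        if stops:
            return min(stops), 0.0
        if any(dec == "go_around" for dec in decisions):
            return min(distances), cruise_mps * 0.5  # slow while avoiding
        return float("inf"), cruise_mps

    obs = [(2.0, 0.1), (1.0, 0.8)]  # (distance m, lateral offset m)
    decs = [decide(d, off) for d, off in obs]
    preview, speed = plan_speed(decs, [d for d, _ in obs])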

9. Human-computer interaction interface display

Via a Bluetooth or WiFi connection, the mobile phone APP displays in real time the car's steering angle, running speed, battery level, PID parameter settings, camera video feed, and other information, and the APP can also control the car's movement. Control modes include gravity-sensing control, joystick control, key control, and speed control. The throttle/braking commands sent by the control module, together with the values actually executed, are displayed as a curve graph.

10. Complete ROS smart car development source code is provided, together with controller schematics, detailed interface communication protocols, development documentation and videos, and a practical training guide for the smart car.

iii. Technical Parameters

1. Body chassis part

Body structure: Aluminum alloy body

Steering structure: Ackermann electronically controlled steering

Braking method: Motor encoder braking

Battery: 24V 20Ah

Motor: 100W DC motor with AB (quadrature) encoder

Charger: Portable fully intelligent charger, automatic power-off when fully charged

Charging input voltage: 220V

Size (mm): not less than 435 × 365 × 405 (length × width × height)

Braking distance: ≤0.5m

Vehicle weight: ≥10kg

Vehicle load: ≤22kg

Maximum driving speed: up to 1.3m/s, default 0.5m/s

Wheel size: 125mm rubber wheel

Electronic control mode: mobile APP, model airplane wireless remote control, CAN, serial port, voice, ROS

Communication interfaces:

MicroUSB × 2

CH340 USB-TTL serial port × 1

CP2102 USB-TTL serial port × 1

CAN interface × 1

TTL serial port × 1

Model airplane remote control interface × 1

SWD online debugging interface × 1

Underlying main control chip: STM32F103VET6

 

2. Autopilot ROS control part

Hardware platform: Jetson Nano B01

CPU: 64-bit ARM Cortex-A57 @ 1.43GHz

GPU: 128-core Maxwell

OS: Ubuntu 18.04 + ROS Melodic

Memory: 4GB 64-bit LPDDR4, 25.6GB/s

Storage: 64GB microSD

USB: 4 × USB 3.0 + 1 × USB 2.0 Micro-B

Serial interfaces: GPIO, I²C, I²S, SPI, UART

GPIO pin count: 40

Rated power: 15W

Input voltage: 5V

Camera interface: 1 × MIPI CSI-2 (DPHY lanes)

Video output: HDMI 2.0 and eDP 1.4

 

3. Environmental sensing part

3.1 LIDAR:

Measuring range: 0.15~12m radius

Scanning angle: 0~360 degrees

Baud rate: 115200 bps

Single scan time: 0.25 milliseconds

Scanning frequency: 10Hz

Measuring frequency: 8000Hz

Interface type: USB 2.0

Supply voltage: 5V DC

Dimensions: 76mm × 41mm

Weight: 190g

Working temperature range: 0~40°C

3.2 Depth camera:

RGB resolution: 1080P

Depth resolution: 1280 × 1024

Depth field of view: 164.85 × 30 × 48.25mm

Visible range: 0.6m~8m

Product size: 165 × 40 × 30mm

Interface type: USB 2.0

Input voltage: 5V

3.3 Ultrasonic radar:

Detection blind zone: 0.25m

Measurement range: 0.25~4.5m

Measurement angle: ≈60 degrees

Baud rate: 9600 bps

Single scan time: 300 milliseconds

Measurement frequency: 4Hz

Interface type: RS-485 to USB 2.0

Working voltage: 9-36V DC

Average working current: ≤ 35mA

Dimensions: 96.5mm × 50mm × 31.5mm

Working temperature range: 0~40°C

iv. Basic Configuration

One body chassis (Ackermann steering mechanism), two DC gear motors, one steering servo, one STM32 main board, one 24V power battery (with battery manager), one ROS main control board (Jetson Nano), one LIDAR, one depth camera, one voice control module, one ultrasonic radar host, four ultrasonic radar probes, one PS2 wireless remote control, one set of connecting harness, one USB flash drive (containing the complete development materials), one Bluetooth module, one CAN analyzer, and one 24V charger.