Robot Sensors & Perception Hardware

Sensors are a robot's connection to the physical world. Without them, a robot is blind and deaf — it cannot react to its environment at all. Choosing the right sensor for the right task is one of the most important hardware decisions you'll make. This guide walks through every major sensor type, explaining what it measures, how it works, and where it's used.

1. Cameras — The Information-Rich Sensor

Cameras give the most information per unit cost of any robot sensor. Modern deep learning makes cameras incredibly powerful — but they also require the most computation to process.

RGB cameras

A standard color camera. Cheap, light, and ubiquitous — a Raspberry Pi Camera Module costs $25 and captures 1080p at 30fps. The downside: a 2D image has no depth information. A camera can't tell you whether an obstacle is 1 meter or 10 meters away without extra processing (stereo vision or depth estimation models).

Depth cameras (RGB-D)

Cameras that produce both a color image and a per-pixel depth map. The Intel RealSense D435 uses active IR stereo (a projected infrared pattern plus two IR cameras) to compute depth up to ~10 meters. The result is a 3D point cloud you can use for obstacle avoidance, 3D mapping, and object grasping. Cost: $100–300.
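Turning a depth map into a point cloud is just the pinhole camera model applied per pixel. A minimal sketch in Python (the intrinsics fx, fy, cx, cy below are made-up example values, not any real camera's calibration):

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project one depth pixel (u, v) with depth depth_m (meters)
    into a 3D point in the camera frame, using the pinhole model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A pixel at the principal point lies straight along the optical axis:
print(deproject(320, 240, 1.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
# → (0.0, 0.0, 1.5)
```

Run this over every pixel of the depth image and you have the full point cloud.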

Stereo cameras

Two cameras spaced apart like human eyes. Software computes depth from the disparity (difference in position) of the same object in both images. Works outdoors where structured light cameras fail (sunlight washes out the IR pattern). Used extensively in autonomous vehicles and outdoor mobile robots.
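The geometry behind stereo depth is one line of math: depth = focal length × baseline / disparity. A quick sketch (the 700-pixel focal length and 12 cm baseline are illustrative numbers):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity: z = f * B / d.
    Larger disparity means a closer object; zero disparity
    means the point is effectively at infinity."""
    if disparity_px <= 0:
        return float("inf")
    return focal_px * baseline_m / disparity_px

print(stereo_depth(42, focal_px=700.0, baseline_m=0.12))  # → 2.0 (meters)
```

Note how depth resolution degrades with distance: at long range, a one-pixel change in disparity corresponds to a large jump in depth, which is why stereo is most accurate up close.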

2. LiDAR — Precise 3D Mapping

LiDAR (Light Detection and Ranging) fires pulses of laser light and measures how long each pulse takes to return. The result is a highly accurate 3D point cloud of the environment.

2D vs. 3D LiDAR

A 2D LiDAR (like the popular RPLidar A1, ~$100) spins a single laser beam in a horizontal plane, producing a 2D slice of the surroundings at ~10 meters range. This is enough for indoor mobile robots and SLAM. A 3D LiDAR (like Velodyne or Ouster) has multiple laser channels stacked vertically, producing a full 3D point cloud. These are used in autonomous vehicles and cost $500–$3,000+.
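Each scan from a 2D LiDAR arrives as a list of ranges at evenly spaced angles. Converting it to Cartesian points is straightforward; this sketch mirrors the field names of a ROS LaserScan message:

```python
import math

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert a 2D LiDAR scan to (x, y) points in the sensor frame."""
    points = []
    for i, r in enumerate(ranges):
        if math.isinf(r) or math.isnan(r):
            continue  # this beam got no return
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Three beams at 0°, 90°, and 180°, each hitting a wall 2 m away:
pts = scan_to_points([2.0, 2.0, 2.0], angle_min=0.0,
                     angle_increment=math.pi / 2)
```

Feed these points into a SLAM package and you have a map.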

Why LiDAR is precise

LiDAR measures distance with centimeter-level accuracy, regardless of lighting conditions. It works in complete darkness, direct sunlight, and rain (though heavy rain degrades performance). Cameras struggle with all of these scenarios. This reliability is why autonomous vehicles use both cameras AND LiDAR — they complement each other.

3. Ultrasonic Sensors — The Budget Distance Meter

An ultrasonic sensor emits a pulse of sound above the frequency humans can hear (typically 40kHz) and measures how long the echo takes to return. Distance = (time × speed of sound) / 2.
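That formula in code (343 m/s is the speed of sound in air at roughly 20 °C; it shifts with temperature, which is one source of error):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def echo_to_distance(echo_time_s):
    """One-way distance (meters) from a round-trip echo time."""
    return echo_time_s * SPEED_OF_SOUND / 2

# An echo arriving after 5.8 ms means an obstacle roughly 1 m away:
print(echo_to_distance(0.0058))
```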

HC-SR04 — the robotics staple

This $2 sensor works with any microcontroller, measures 2cm to 4 meters, and is accurate to about 3mm. Readings get noisy at oblique angles, and soft or sharply angled surfaces scatter or absorb the sound, but it's perfect for "is there an obstacle in front of me?" decisions in low-cost robots.

Limitations to know

Ultrasonic sensors have a beam angle of 15–30° — they can't tell you exactly where within that cone the obstacle is. They also have a dead zone of about 2cm (too close to distinguish the outgoing and incoming pulse). For precision mapping, use LiDAR. For simple obstacle detection, ultrasonic is fine.
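One cheap defense against noisy readings: take several samples and keep the median, which discards the occasional spike from a scattered echo. A sketch (`read_fn` stands in for whatever function returns one raw reading from your sensor):

```python
import statistics

def filtered_range(read_fn, samples=5):
    """Median of several raw readings; rejects isolated spikes."""
    return statistics.median(read_fn() for _ in range(samples))

# Simulated sensor: a steady ~0.5 m obstacle plus one bogus echo.
readings = iter([0.51, 0.49, 3.80, 0.50, 0.52])
print(filtered_range(lambda: next(readings)))  # → 0.51
```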

4. IMU — The Balance Organ

An Inertial Measurement Unit (IMU) combines an accelerometer and gyroscope in a single chip. It tells the robot how it's accelerating and rotating in 3D space, which is essential for balance and orientation tracking.

Accelerometer

Measures linear acceleration along three axes. At rest, it measures gravity (9.8 m/s² downward), which tells you the tilt angle. During motion, it measures dynamic acceleration. Used for detecting falls, impacts, and computing tilt in slow-moving systems.
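Here's the tilt computation in code, valid only while the robot is static so gravity dominates the reading (the sign conventions below are one common choice, not universal):

```python
import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll (degrees) from a static accelerometer
    reading in m/s^2. Meaningless during dynamic motion."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Flat and level: all of gravity is on the z axis, so both angles are ~0°.
pitch, roll = tilt_from_accel(0.0, 0.0, 9.8)
```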

Gyroscope

Measures angular velocity (how fast the robot is rotating). Integrating the gyroscope reading over time gives you rotation angle — but this integration accumulates drift error over time. That's why IMU-based orientation must be fused with other sensors (accelerometer, magnetometer, or LiDAR) to stay accurate.
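You can see the drift problem in a few lines. Below, a stationary robot (true angular velocity zero) with a small constant gyro bias, a typical imperfection in cheap MEMS gyros, drifts 0.6 rad (about 34°) in one minute:

```python
def integrate_gyro(rates, dt, bias=0.0):
    """Integrate angular velocity (rad/s) into a heading angle (rad).
    Any constant bias grows linearly into drift."""
    angle = 0.0
    for omega in rates:
        angle += (omega + bias) * dt
    return angle

# 60 seconds of readings at 100 Hz from a motionless robot:
drift = integrate_gyro([0.0] * 6000, dt=0.01, bias=0.01)
print(drift)  # ~0.6 rad of pure drift, despite zero true rotation
```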

Sensor fusion — the key insight

No single sensor gives perfect orientation. The accelerometer is noisy during motion; the gyroscope drifts over time. Combining them with a Kalman filter or Mahony filter gives stable, accurate orientation estimates. The MPU-6050 ($3) and the BNO055 ($35, with built-in sensor fusion) are the two most popular IMUs in robotics.
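A complementary filter is the simplest way to see the fusion idea in action (simpler than the Kalman or Mahony filters above, but same intuition: gyro for fast changes, accelerometer for the long-term average). The 0.98 blend factor is a typical illustrative value:

```python
def complementary_step(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter update: integrate the gyro for responsiveness,
    then pull gently toward the accelerometer's absolute angle."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Start with a deliberately wrong 10° estimate; with the robot still,
# the accelerometer slowly corrects it toward the true 0°.
angle = 10.0
for _ in range(500):
    angle = complementary_step(angle, gyro_rate=0.0, accel_angle=0.0, dt=0.01)
print(angle)  # has decayed to nearly 0°
```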

5. Force & Torque Sensors — The Sense of Touch

Force/torque (F/T) sensors measure the forces and moments acting at a robot's wrist or gripper fingertips. They give the robot a sense of touch.

Why robots need to feel

Without force sensing, a robot arm gripping an object either crushes it (too much force) or drops it (too little). With force sensing, the controller can regulate grip force in real time — firm enough to hold, gentle enough not to break. This is essential for handling eggs, fruit, or delicate electronics.
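The control loop behind this is often just proportional regulation around a target force. A toy sketch (the 2 N target and 0.5 gain are made-up numbers; a real gripper controller also handles contact detection and actuator limits):

```python
def grip_adjustment(force_measured, force_target=2.0, gain=0.5):
    """Proportional force regulation: command an adjustment
    proportional to the error between target and measured force."""
    return gain * (force_target - force_measured)

# Simulate closing the loop: the grip force converges on the 2 N target
# instead of overshooting into crushing territory.
force = 0.0
for _ in range(50):
    force += grip_adjustment(force)
print(force)
```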

Applications

Surgical robots use F/T sensors to detect tissue resistance and prevent excessive force on delicate anatomy. Industrial cobots use them for compliant motion — if a human pushes the arm, it yields rather than fighting back. Legged robots use foot F/T sensors to detect ground contact and adjust gait.

Frequently Asked Questions

What sensors should a beginner buy first?

An HC-SR04 ultrasonic sensor and an MPU-6050 IMU. Together they cost under $10, teach the core concepts of distance sensing and orientation measurement, and have excellent Arduino library support. Add a Raspberry Pi Camera Module when you're ready for vision.

Why do self-driving cars use both cameras and LiDAR?

Cameras are excellent at classification (reading signs, identifying pedestrians) but struggle with distance. LiDAR is excellent at precise 3D geometry but gives no color or texture information. Together they cover each other's weaknesses — this redundancy is also a safety requirement for road vehicles.

What is sensor fusion?

Combining data from multiple sensors to get a better estimate than any single sensor could provide. The Kalman filter is the most common algorithm. Example: fuse GPS (slow, accurate globally) + IMU (fast, drifts over time) to get smooth, accurate robot positioning in real time.
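For intuition, here's the scalar (1-D) version of a Kalman filter update. Real GPS + IMU fusion runs a multi-dimensional version of exactly this predict/update cycle; the noise values below are illustrative:

```python
def kalman_1d(x, p, z, r, q=0.01):
    """One predict/update cycle of a 1-D Kalman filter.
    x, p: state estimate and its variance
    z, r: measurement and its variance
    q:    process noise added each prediction step"""
    p = p + q                # predict: uncertainty grows over time
    k = p / (p + r)          # Kalman gain: how much to trust z
    x = x + k * (z - x)      # update: move toward the measurement
    p = (1 - k) * p          # update: uncertainty shrinks
    return x, p

# Fuse noisy position fixes scattered around a true position of 5.0 m:
x, p = 0.0, 1.0
for z in [5.2, 4.9, 5.1, 5.0, 4.8, 5.1]:
    x, p = kalman_1d(x, p, z, r=0.25)
print(x, p)  # estimate near 5.0, variance well below the initial 1.0
```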

How do I connect sensors to ROS?

Most sensors have community-maintained ROS 2 driver packages. For example, `rplidar_ros` for RPLidar, `realsense2_camera` for Intel RealSense, and `imu_tools` for IMU filtering. Install the driver, launch the node, and the sensor data appears as a ROS topic that your other nodes can subscribe to.
