Every day we see the world in three dimensions: our brains identify people, animals and objects and judge how big and how far away they are. But the images captured by regular cameras are two-dimensional; geometric information such as size, volume and distance is lost, because these cameras cannot function like human eyes.
How do we perceive depth?
Before we discuss the features and functions of 3D cameras, let's briefly review how we humans see the world in three dimensions. Humans and most animals have two eyes to capture images, an arrangement known as binocular vision. Because our eyes sit on either side of our head, each eye views an object from a slightly different angle as the image is projected onto its retina. This angle difference produces horizontal shifts between the images captured by the left and right eyes, known as binocular disparities. When the brain processes these disparities, we perceive depth.
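The same geometry works for cameras. Once the disparity d of a point is known, together with the camera's focal length f (expressed in pixels) and the baseline B (the distance between the two viewpoints), depth follows from similar triangles: Z = f·B / d. A minimal sketch of this relationship (the function name and the numbers are illustrative, not taken from any particular camera):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (in metres) of a point from its stereo disparity.

    focal_px:     focal length expressed in pixels
    baseline_m:   distance between the two viewpoints, in metres
    disparity_px: horizontal shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive; zero would mean infinite depth")
    return focal_px * baseline_m / disparity_px

# Example: with a 700 px focal length, a 6 cm baseline and a 35 px
# disparity, the point lies 1.2 m from the camera.
print(depth_from_disparity(700, 0.06, 35))  # → 1.2
```

Note that depth is inversely proportional to disparity: nearby objects shift a lot between the two views, while distant objects barely shift at all, which is why stereo depth accuracy degrades with distance.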
What are 3D cameras?
Although the binocular disparity that enables us to perceive three-dimensional scenes is a natural phenomenon, it can also be simulated and applied to machines such as cameras. The simplest type of 3D camera is therefore based on binocular stereo vision: two lenses capture the scene from slightly offset positions, and the disparity between the two images is converted into depth.
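A stereo camera finds that disparity by matching each small patch of one image against horizontally shifted patches of the other and keeping the best-scoring shift. A toy one-row sketch using the sum of absolute differences (SAD) as the matching score; the data and function name are made up for illustration, and real pipelines match 2D windows on rectified image pairs:

```python
def best_disparity(left, right, x, block=1, max_disp=5):
    """Return the shift d (in pixels) that best aligns the window
    around pixel x of the left row with the right row, scored by the
    sum of absolute differences (SAD)."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        # Skip shifts whose window would fall outside either row.
        if x - d - block < 0 or x + block >= len(left):
            continue
        cost = sum(abs(left[x + i] - right[x - d + i])
                   for i in range(-block, block + 1))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# A bright feature at index 4 of the left row appears at index 1 of
# the right row, i.e. a disparity of 3 pixels (made-up data).
left  = [0, 0, 0, 9, 5, 9, 0, 0, 0, 0]
right = [9, 5, 9, 0, 0, 0, 0, 0, 0, 0]
print(best_disparity(left, right, 4))  # → 3
```

Repeating this search for every pixel yields a disparity map, which the depth formula above turns into a depth map.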
As 3D imaging technologies mature, 3D cameras will unleash enormous potential. Obvious examples include the emerging IoT systems in our homes, immersive AR/VR games and more secure face recognition. With “eyes” to perceive the world as we do, machines and computers will better understand their surroundings and even interact with us intelligently.
Revopoint’s Acusense 3D depth camera