
Gábor Kátai-Urbán, Ferenc Koszna and Zoltán Megyesi
Department of Information Technologies, Faculty of Mechanical Engineering and Automation, Kecskemét College, Hungary
Corresponding author e-mail: katai-urban.gabor@gamf.kefo.hu

 

An omnidirectional camera is an optical device that captures images with a 360-degree field of view. We developed a 3D camera system using multiple omnidirectional cameras, to be used as a car-mounted camera system. In this article we discuss the design, construction and calibration of omnidirectional cameras. We present the camera model we use, which allows an omnidirectional camera to be treated as a perspective camera, and a distortion model suitable for correcting the precision problems of the assembly process.

 

Introduction

Observing the space around a moving car is a challenging problem, which often requires untraditional equipment. Existing systems often use RADAR or LIDAR technologies to scan the surroundings, but visual sensors still have some benefits. Apart from being considerably cheaper, the visual signal can hold more detail and can be processed rapidly with current computer technology. The most important advantage is that the color properties of objects provide more information for identification than the 3D shape alone. Road signs, lane border marks, or pedestrian crossings can only be recognized from the visual signal. Also, detecting and tracking moving objects is more feasible using images of the objects than from a limited 3D point set alone.

These advantages more than compensate for the obvious drawback of visual signals, namely that there are driving conditions in which they are mostly missing, e.g. at night or in fog. However, these conditions are also inconvenient for human drivers, who need to change their driving style and are forced to increase their visual range, which would in turn benefit a visual assistant system. As a result, many existing driver assistant systems utilize cameras, mostly to observe the space in front of the car [10].

Our goal is to address the problem of observing the space around a car with a 360-degree field of view using visual methods. This has been done before using omnidirectional cameras [9] or large field of view cameras [8].

Our novelty is that we apply multiple omnidirectional cameras that can provide stereo (and in most directions also multi-view) image data in every direction. From this multi-view data it is possible to reconstruct 3D properties while still having access to the color properties of the objects, thus combining the advantage of RADAR or LIDAR systems with large field of view visual sensors. Our system contains three omnidirectional cameras on top of the car and two conventional cameras observing the front of the car in greater detail. Figure 1 shows a frame from a 200-frame recorded video sequence.

 


Figure 1. A frame from a 200-frame sequence

 

The future goal is to develop a 3D camera system that can provide data for many practical research fields: recognizing road signs, lane borders, and pedestrian crossings; predicting the motion of moving objects around the car; detecting pedestrians in danger zones; reducing blind spots; and reconstructing the 3D scene around the car. Many of the above fields are based on 3D multi-view reconstruction, which in turn requires that the camera system be calibrated. In this article we address the problem of calibration, with special focus on the calibration of omnidirectional cameras. In Section 2 we discuss the camera models used, and in Section 3 we present a calibration method suitable for the selected models. In Section 4 we summarize the calibration of the 3D camera system and report the measured calibration errors. In Section 5 we conclude and discuss future work.

Camera models

We would like to use our camera system for reconstruction purposes. For this, the mapping of the recorded light rays of the 3D scene to the camera images has to be known. This mapping is based on the projection model of the camera system, with some unknown parameters that need to be determined during a calibration process.

The camera projection model is often divided into two parts.

The intrinsic camera parameters describe the projection of the 3D objects, as viewed from the camera, to the 2D image [5]. In the case of a normal perspective camera (see Figure 2a), the projection of a 3D world point X = [x_w, y_w, z_w]^T to the homogeneous image point u_i = [u, v, 1]^T on the image plane can be described with the following linear equation:

$$ s\,\mathbf{u}_i = \mathbf{K}\,[\mathbf{R}_c \mid \mathbf{t}_c]\begin{bmatrix}\mathbf{X} \\ 1\end{bmatrix} \qquad (1) $$

where s is a scaling factor, K is the intrinsic camera calibration matrix, and R_c, t_c are the extrinsic calibration parameters. The 3×3 matrix K holds the focal length, aspect ratio and principal point of the camera, as described in [5].
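As a minimal numeric sketch of equation (1) (the intrinsic values and the pose below are assumed examples, not the parameters of our cameras):

```python
import numpy as np

def project_point(K, R, t, X):
    """Project a 3D world point X to pixel coordinates using equation (1)."""
    x_cam = R @ np.asarray(X, float) + t      # world -> camera coordinates
    u_h = K @ x_cam                           # homogeneous image point s*[u, v, 1]^T
    return u_h[:2] / u_h[2]                   # divide out the scale factor s

# Assumed example intrinsics: 800 px focal length, principal point (640, 360)
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)                 # camera placed at the world origin
print(project_point(K, R, t, [0.5, 0.2, 4.0]))  # a point 4 m in front of the camera
```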

In the case of an omnidirectional camera the projection cannot be described with a linear equation, and additional parameters are needed.

Our omnidirectional camera consists of a hyperboloid mirror and a normal perspective camera (see Figure 2b) [2]. The mirror surface can be described with the equation:

 

$$ \frac{z^2}{a^2} - \frac{x^2 + y^2}{b^2} = 1 \qquad (2) $$

where a and b are the main parameters of the hyperbola.

Suppose that we are observing a scene point X with an omnidirectional camera. The ray that comes from X in the direction of the mirror center (O_m) is reflected by the mirror at the point X_m = [x_m, y_m, z_m]^T. The reflected ray passes through the external focal point of the hyperboloid (O_c). If we place a normal perspective camera at O_c, it projects the ray to a point on its image plane.

 


Figure 2. (a) Perspective camera model;
(b) omnidirectional camera with hyperbolic mirror

 

The projection of the internal camera (O_c) (from X_m to u_i) can be described by equation (1). However, the mapping of the point X to the point X_m cannot be described by a linear equation. To formalize the projection, the following equation (3) has to be solved:

 

$$ \left(\frac{d_z^2}{a^2} - \frac{d_x^2 + d_y^2}{b^2}\right)\lambda^2 + \frac{2e\,d_z}{a^2}\,\lambda + \frac{b^2}{a^2} = 0 \qquad (3) $$

where a and b describe the shape of the mirror as in (2), e² = a² + b² is the squared focal distance of the hyperboloid, and d = [d_x, d_y, d_z]^T = X − O_m is the direction of the incoming ray (coordinates taken in the frame of the hyperboloid, with its symmetry center at the origin and O_m = [0, 0, e]^T). Using the two solutions λ_{1,2}, the intersection points with the mirror surface are computed as

$$ X_m = O_m + \lambda_{1,2}\,(X - O_m), $$

of which the physically valid one lies on the actual mirror sheet.

In summary, the intrinsic parameters of a hyperboloid omnidirectional camera are the intrinsic parameters of the internal perspective camera, the distance between the camera and the mirror, and the shape of the mirror (a, b).
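The forward projection onto the mirror can be sketched in code as follows. This is only an illustrative implementation under the conventions assumed in the reconstruction of equations (2)-(3) above (hyperboloid centered at the origin, O_m = (0, 0, e), O_c = (0, 0, −e)); the actual system may use a different frame.

```python
import numpy as np

def mirror_point(X, a, b):
    """Intersect the ray from scene point X towards O_m with the hyperboloid
    mirror z^2/a^2 - (x^2+y^2)/b^2 = 1 (equation (2)).  Assumed frame:
    hyperboloid centre at the origin, O_m = (0, 0, e), e^2 = a^2 + b^2."""
    e = np.sqrt(a * a + b * b)
    Om = np.array([0.0, 0.0, e])
    d = np.asarray(X, float) - Om                 # ray direction from O_m to X
    # quadratic (3) in lambda, from substituting O_m + lambda*d into (2)
    A = d[2] ** 2 / a ** 2 - (d[0] ** 2 + d[1] ** 2) / b ** 2
    B = 2.0 * e * d[2] / a ** 2
    C = b * b / (a * a)
    lam = np.roots([A, B, C])                     # the lambda_1,2 solutions
    lam = lam[np.isreal(lam)].real
    lam = lam[lam > 0]                            # keep hits in front of O_m
    return Om + lam.min() * d                     # first hit: the mirror point X_m

# X_m would then be projected by the internal camera at O_c via equation (1).
```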

The extrinsic camera parameters (t, R) describe the camera position and orientation in the world coordinate system (see Figure 2a, b). These parameters are essential in multi-camera systems [4].

The projection center of a normal perspective camera is the focal point O_c. The position (t, a 3×1 vector) and the orientation (R, a 3×3 rotation matrix) hold all the remaining parameters of the projection described in equation (1).

In the case of an omnidirectional camera we chose the center of the mirror, O_m, as the projection center, since all light rays recorded by the camera pass through that point. With this model, the extrinsic parameters of the omnidirectional camera are identical to those of the perspective camera: a position vector t and a rotation matrix R.

Calibration

During the camera calibration process, the projection parameters used in equations (1) and (3) are estimated. Usually the process consists of two main steps.

The intrinsic calibration step is well known in the case of perspective cameras. A known 2D structure is moved in front of the camera, and from the detected structure, equations can be formulated to constrain the parameters of the matrix K.

Zhang's method [7] is a popular solution to this problem. It estimates the intrinsic camera parameters using images of a black and white planar chessboard pattern (see Figure 3). Taking pictures of the pattern from 30-50 different views leads to a good calibration result.
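OpenCV ships an implementation of Zhang's method; below is a minimal sketch of this intrinsic step (the pattern geometry and the file path are placeholders, not our actual setup):

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                    # inner corners of the board (assumed size)
square = 25.0                       # square edge in mm (assumed value)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for fname in glob.glob('calib/*.png'):          # the 30-50 views of the pattern
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print('RMS reprojection error (px):', rms)
print('Intrinsic matrix K:\n', K)
```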

 


Figure 3. Intrinsic calibration with chessboard pattern

 

In the case of an omnidirectional camera, the model contains an inner conventional perspective camera and a hyperboloid mirror [1].

The inner camera can be calibrated like a normal perspective camera. This step is performed before the mirror and the camera are assembled. The parameters of the hyperboloid mirror can be measured during or after manufacturing using non-visual means (see Section 4 for an alternative). Together with the intrinsic parameters of the inner camera, the parameters (a, b) of the hyperboloid are all that is needed to perform a projection from the 3D world. If the internal camera is at the focal point and its image is rectilinear, then according to Section 2, for every ray passing through the internal camera center there is a corresponding ray passing through the focal point of the hyperboloid mirror. If the internal camera is calibrated (i.e. for each image point we know the light ray passing through the camera center), these corresponding external rays can be calculated using (3), and no other calibration is needed.
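This inverse mapping (image point of the calibrated inner camera to external scene ray) can be sketched as below, reusing the frame convention assumed in the projection sketch of Section 2; the in-house implementation may differ in details:

```python
import numpy as np

def pixel_to_external_ray(u, v, K, a, b):
    """Map an internal-camera pixel to the external light ray through O_m.
    Assumed frame: hyperboloid centre at origin, O_c = (0,0,-e), O_m = (0,0,e),
    inner camera axis-aligned with the mirror axis."""
    e = np.sqrt(a * a + b * b)
    Oc, Om = np.array([0.0, 0.0, -e]), np.array([0.0, 0.0, e])
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray of the inner camera
    if d[2] < 0:
        d = -d                                     # orient towards the mirror
    # intersect O_c + lambda*d with the hyperboloid (2)
    A = d[2] ** 2 / a ** 2 - (d[0] ** 2 + d[1] ** 2) / b ** 2
    B = -2.0 * e * d[2] / a ** 2
    C = b * b / (a * a)
    lam = np.roots([A, B, C])
    lam = lam[np.isreal(lam)].real
    Xm = next(Oc + l * d for l in sorted(lam) if l > 0 and (Oc + l * d)[2] > 0)
    r = Xm - Om                                    # external scene ray direction
    return Om, r / np.linalg.norm(r)               # ray: origin O_m, unit direction
```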

It must be noted, however, that to use our model the following assumptions must hold:

1. The shape of the mirror must be perfectly hyperboloid.
2. The inner camera must be positioned exactly at the external focal point.

These presumptions must be considered during camera assembly, and we address these issues in Section 4.

The extrinsic calibration step is performed by detecting the 2D image points of corresponding known 3D world points. The position (t) and orientation (R) of the cameras relative to a fixed point (usually selected as one of the camera centers) are estimated by solving a least squares problem [4].

It is important that the calibrating 3D world points be visible from multiple cameras, and there must be enough 3D points to overdetermine all camera extrinsic parameters.
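A synthetic sanity check of this pose step for a perspective camera is sketched below (all values assumed; OpenCV's solvePnP performs the least squares pose estimation):

```python
import cv2
import numpy as np

K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])   # assumed intrinsics
world_pts = np.array([[0, 0, 0], [500, 0, 0], [0, 500, 0],
                      [0, 0, 500], [500, 500, 0], [500, 0, 500]], np.float64)  # mm

# fabricate detections by projecting with a known ground-truth pose ...
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([100.0, 50.0, 2000.0])
image_pts, _ = cv2.projectPoints(world_pts, rvec_true, tvec_true, K, None)

# ... then recover that pose from the 3D-2D correspondences
ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)          # orientation R; tvec is the position t
print(np.allclose(rvec.ravel(), rvec_true, atol=1e-6), tvec.ravel())
```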

Since the goal of this calibration step is the same for both types of cameras, we would like to use the same estimation method for both. Unfortunately, the estimation method exploits the fact that the 3D points were projected onto a 2D image plane by a linear projection matrix, and the omnidirectional camera images are not images of a perspective camera. To address this problem, we form a virtual perspective camera at the center of the mirror (O_m) (see Figure 4) and project the mirror points (X_m) to a virtual plane (points u_v).

 


Figure 4. Forming a virtual perspective camera

The intrinsic parameters of the virtual camera can be selected arbitrarily (e.g. the identity matrix is suitable for the matrix K). Projecting all detected 2D points to the virtual plane, the standard estimation method becomes applicable.
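A sketch of this virtual-plane mapping (same assumed mirror frame as in the earlier sketches; the virtual camera is taken to look along the −z axis, away from the mirror):

```python
import numpy as np

def to_virtual_plane(Xm, a, b):
    """Project a mirror point X_m onto the image plane of the virtual
    perspective camera placed at O_m (assumed frame: O_m = (0, 0, e))."""
    e = np.sqrt(a * a + b * b)
    p = np.asarray(Xm, float) - np.array([0.0, 0.0, e])   # O_m becomes the origin
    return np.array([p[0], p[1]]) / -p[2]                 # pinhole division, K = I

# The resulting u_v points can be fed to the same solvePnP step as above,
# passing the identity matrix as the intrinsic matrix.
```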

Application and evaluation

In Section 1 we introduced a five-camera, car-mounted camera system using two normal and three omnidirectional cameras. Our intention was to calibrate this system to a fixed world coordinate system.

As a first step, we fixed the focus of all optics and performed intrinsic calibration for all cameras as described in Section 3. After this, we fixed the positions of the mirrors and the normal cameras in the camera system. The relative positioning of the internal cameras of the omnidirectional cameras was done as the final step of the assembly.

To complete the internal calibration of the omnidirectional cameras, we have to determine the parameters of the hyperbola and confirm that the two assumptions of our model (described in Section 3) hold. The parameters (a, b) were measured, and assumption 1 (the mirror is indeed hyperboloid) was verified, using a Mitutoyo contour measuring device.

Assumption 2 (the camera is at the external focal point of the hyperboloid) was ensured during assembly using a software tool, described below.

We exploited the following facts:

1. The image of the camera optics is visible around the principal point of the camera only if the camera axis is locally perpendicular to the mirror surface.
2. The mirror was manufactured such that the hyperboloid was cut off by a plane perpendicular to the rotation axis, making the border of the mirror a perfect circle.
3. Under perspective projection, the image of a circle can only be a circle concentric with the principal point if the image plane is parallel to the circle and the camera axis passes through the circle center.
4. Knowing both the diameter of the mirror and the calibration data of the internal camera, the image of the mirror border must have a known size on the image.

Based on these facts we developed a software tool that locates the image of the camera optics and detects the mirror border. By verifying the circularity and the size of the border, we can ensure that the camera is at the external focal point of the hyperboloid (see Figure 5); the check is sketched below.
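A rough sketch of this check (the image path, the Hough parameters, and the expected border radius are assumed example values, not the tool's actual configuration):

```python
import cv2
import numpy as np

img = cv2.imread('assembly_view.png', cv2.IMREAD_GRAYSCALE)   # placeholder path
blur = cv2.medianBlur(img, 5)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=1000,
                           param1=100, param2=50, minRadius=400, maxRadius=600)
assert circles is not None, 'mirror border not found'
cx, cy, r = circles[0, 0]                       # strongest circle: the mirror border

u0, v0 = 640.0, 360.0        # principal point from the internal calibration (assumed)
r_expected = 512.0           # predicted border radius from facts 3 and 4 (assumed)
centred = np.hypot(cx - u0, cy - v0) < 2.0      # concentric with the principal point?
sized = abs(r - r_expected) < 2.0               # border of the predicted size?
print('camera in the external focal point:', bool(centred and sized))
```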

 


Figure 5. Camera positioning tool used during the assembly process

 

With the camera system assembled and intrinsically calibrated, we could perform the external calibration process.

We selected an external reference coordinate system: the coordinate system of an industrial ABB robot. We used a light source, positioned by the robot, to mark 3D points in the robot's own world coordinate system. These points were then detected on the camera images (see Figure 6).

 


Figure 6. A light source held by an industrial robot

 

Using the known 3D coordinates in the robot coordinate system and the detected image points on all camera images, the external calibration method described in Section 3 could be applied.

To verify the method of Section 3, we need to measure the accuracy of the calibration of the camera system. There are several ways to do this, but the most informative is to reconstruct the 2D calibration points by triangulation and measure the Euclidean distance of each reconstructed point from its corresponding 3D point. The advantage of this method is that the result is obtained in millimeters, and as such is more meaningful. The drawback is that the calibration data is applied multiple times, and the triangulation error is also incorporated into the measured error. Nevertheless, in practice this estimates the calibration error reliably. We obtained the following results using 50 calibration points for both the normal and the omnidirectional cameras.
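This evaluation can be sketched as follows for one camera pair (projection matrices and detections as inputs; for the omnidirectional cameras the detections are first mapped to the virtual plane as in Section 3):

```python
import cv2
import numpy as np

def calibration_error_mm(P1, P2, pts1, pts2, world_pts):
    """Triangulate corresponding 2D detections from two calibrated views and
    return the mean Euclidean distance to the known 3D points (in mm).
    P1, P2: 3x4 matrices K[R|t];  pts1, pts2: 2xN arrays;  world_pts: Nx3."""
    Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous points
    X = (Xh[:3] / Xh[3]).T                           # N x 3 Euclidean points
    return np.linalg.norm(X - world_pts, axis=1).mean()
```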

As can be seen in Table 1, the normal cameras were calibrated with close to 1 mm accuracy, while the error of the omnidirectional cameras is measured in centimeters. This error can be attributed mostly to the limited resolution of the active regions of the omnidirectional cameras, the inaccuracies of the assembly process, the short baseline, and the non-point-like image of the calibration light blob.

 


Table 1. Calibration results

 

Since the camera system was approximately 4 meters away from the robot, we can extrapolate that at the interesting 10-20 m range the error will grow to 5-10 cm or more, due to the unfavorable baseline-to-distance ratio. This accuracy is presumably enough for detecting static objects, but for moving cars, where the relative movement of the cars between frames can be below 1 m, its adequacy requires additional evaluation.
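This extrapolation is consistent with the standard stereo triangulation error model (a textbook relation quoted here for reference, not a result of this work):

$$ \sigma_Z \approx \frac{Z^2}{B\,f}\,\sigma_d, $$

where Z is the object distance, B the baseline, f the focal length in pixels, and σ_d the detection/matching error in pixels; for a fixed baseline the depth error grows quadratically with the distance.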

Conclusion and future work

In this article we introduced the setup of a car-mounted 3D camera system using multiple omnidirectional cameras. We presented the camera models of both perspective and omnidirectional cameras and provided intrinsic and extrinsic calibration methods for both. We demonstrated the calibration steps on real cameras and measured the calibration accuracy. While the current accuracy is limited by several factors, the method is applicable to a range of applications, mostly the detection of static objects.

As future work, we need to determine the minimum accuracy required for moving objects, in order to judge the adequacy of these calibration results.

To improve the accuracy, we need to refine the model to consider both inner camera misalignment and the errors of the mirror surface. Along with the refined model, we need to find a calibration method that can measure these new parameters reliably.

Acknowledgement

This research is supported by TÁMOP-4.2.2.C-11/1/KONV-2012-0012: "Smarter Transport" – IT for co-operative transport systems. The project is supported by the Hungarian Government and co-financed by the European Social Fund.

References

[1] C. Geyer and K. Daniilidis, "A unifying theory for central panoramic systems and practical applications," in European Conference on Computer Vision (ECCV), pp. 445–461, June 2000.
[2] S. Baker and S. Nayar, "A theory of single-viewpoint catadioptric image formation," International Journal of Computer Vision (IJCV), vol. 35, no. 2, pp. 175–196, 1999.
[3] Z. Zivkovic and O. Booij, "How did we built our hyperbolic mirror omnidirectional camera – practical issues and basic geometry," Technical Report IAS-UVA-05-04, Informatics Institute, University of Amsterdam, 2005.
[4] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, second edition, 2004.
[5] M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision, Thomson-Engineering, 2007.
[6] T. Sato, T. Pajdla, and N. Yokoya, "Epipolar geometry estimation for wide-baseline omnidirectional street view images," in IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 56–63, November 2011.
[7] Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, November 2000.
[8] S. Li, "Binocular spherical stereo," IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 4, pp. 589–600, December 2008.
[9] T. Gandhi and M. Trivedi, "Parametric ego-motion estimation for vehicle surround analysis using an omnidirectional camera," Machine Vision and Applications, vol. 16, no. 2, pp. 85–95, 2005.
[10] Z. Kim, "Robust lane detection and tracking in challenging scenarios," IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 1, pp. 16–26, March 2008.