Euclidean Camera Calibration Using GPS Side Information

Gerald Dalley

Introduction

Consider the problem of wide-area surveillance, such as traffic monitoring and activity classification around critical assets (e.g. an embassy, a troop base, or critical infrastructure such as oil depots, port facilities, and airfield tarmacs). We want to monitor the flow of movement in such a setting using a large number of cameras, typically with non-overlapping fields of view. To coordinate the observations from these distributed cameras, we need to know the relative locations of their fields of view (e.g. what portion of the earth's surface each camera sees). In some instances, one can carefully site and calibrate the cameras to manually obtain a mapping from camera pixel coordinates to latitude/longitude coordinates on the earth's surface. However, in many cases cameras must be rapidly deployed and may not remain in place for long periods of time. Additionally, even carefully calibrated cameras tend to move after being bumped or rattled by large passing vehicles; new cameras tend to be added to systems over time, and older cameras fail. Outfitting the camera itself with a global positioning system (GPS) receiver is also a suboptimal solution. First, many cameras are mounted on the sides of large structures such as buildings that block and/or distort GPS readings. Second, a GPS receiver located at the camera indicates where the camera is, not where the ground plane that it views is located.

GPS Side Information

For our project, we assume that we have an installed network of cameras and at least one object moving through the surveillance area that is instrumented with a GPS receiver. Note that we do not have correspondence between the instrumented objects and camera observations, i.e. when we see an object pass through a camera, we do not know to which, if any, instrumented object it corresponds. Under this setup, we know the latitude and longitude of each instrumented object at each point in time. We denote this data as the set G = {(x_i^v, y_i^v, t_i^v)}, where v indexes the vehicle and i indexes the time samples for a given vehicle. We may estimate the spatial distribution of instrumented traffic as p_GPS(x, y) ∝ Σ_{v,i} δ(x − x_i^v) δ(y − y_i^v), where δ is the Kronecker delta function. The image below shows this data for the real traffic network we tested.
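As a minimal sketch of this density estimate (not the project's actual code), the snippet below accumulates GPS samples into a normalized 2D histogram over a ground-plane grid. The data layout (`gps_tracks` as a per-vehicle array of (x, y, t) samples) and the grid parameters are illustrative assumptions, not part of the original system.

```python
import numpy as np

def gps_density(gps_tracks, x_edges, y_edges):
    """Normalized histogram of GPS positions over a regular (x, y) grid.

    gps_tracks: dict mapping vehicle id v -> array of shape (N_v, 3)
                holding (x, y, t) samples for that vehicle (assumed layout).
    x_edges, y_edges: bin edges defining the ground-plane grid.
    """
    counts = np.zeros((len(x_edges) - 1, len(y_edges) - 1))
    for samples in gps_tracks.values():
        # Accumulate every (x, y) sample from this vehicle into the grid.
        h, _, _ = np.histogram2d(samples[:, 0], samples[:, 1],
                                 bins=[x_edges, y_edges])
        counts += h
    total = counts.sum()
    return counts / total if total > 0 else counts  # empirical p_GPS(x, y)
```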
We separately have access to the recorded in-camera tracking data that reports when vehicles enter and exit each camera's field of view: T_c = {t_j^c}, where c indexes the camera and j indexes the report times for that camera. We seek to correlate the spatio-temporal GPS data with the camera report times. To do so, we wish to estimate p(x, y | c), the probability that a vehicle is at location (x, y) given that it is seen at the edge of camera c's field of view. Although we cannot quite estimate p(x, y | c) directly, we can approximate it with the mixture distribution q(x, y | c) = (1 − ε) p(x, y | c) + ε p_bg(x, y), where ε is a hidden factor that indicates how well q(x, y | c) approximates p(x, y | c) and p_bg(x, y) is a background term contributed by instrumented vehicles that did not cause the report. For our current implementation, we assume that ε is small and do not attempt to remove the background term. To test this algorithm, we use a dataset consisting of five cameras, five instrumented vehicles following scripted behavior, and approximately 17 unplanned vehicles that passed through the cameras during the data collection period. In the figure below, we show q(x, y | c) as green and black (bold) dots, composited over all cameras. We threshold q(x, y | c) and consider high values to be candidate entry/exit locations for the cameras; these are shown in the figure as large dark black dots. For each camera, we find the bounding square of a fixed size that contains the largest number of high values, considering only the high values generated for the particular camera in question. These bounding squares are our estimated camera locations and are shown in red in the figure. Overlaid as well are dark red trapezoids indicating the manually drawn approximate ground-truth fields of view of the cameras. Note that with this data we are able to correctly identify the camera locations.
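The sketch below illustrates the two steps just described under stated assumptions; it is not the project's implementation. Step one accumulates the GPS positions of all instrumented vehicles near each camera report time (which, because correspondence is unknown, mixes the true distribution with background clutter). Step two thresholds the result and searches for the fixed-size square containing the most high values. All names (`report_times`, `gps_tracks`), the time tolerance, the threshold, and the square size are hypothetical.

```python
import numpy as np

def camera_density(report_times, gps_tracks, x_edges, y_edges, t_tol=1.0):
    """Histogram of GPS positions observed within t_tol seconds of any
    entry/exit report from one camera. Every instrumented vehicle contributes,
    since we do not know which vehicle caused each report."""
    counts = np.zeros((len(x_edges) - 1, len(y_edges) - 1))
    for samples in gps_tracks.values():          # samples: (N, 3) array of (x, y, t)
        for t_report in report_times:
            near = np.abs(samples[:, 2] - t_report) <= t_tol
            if near.any():
                h, _, _ = np.histogram2d(samples[near, 0], samples[near, 1],
                                         bins=[x_edges, y_edges])
                counts += h
    total = counts.sum()
    return counts / total if total > 0 else counts

def locate_camera(density, square_bins, threshold):
    """Return the (row, col) corner of the fixed-size square (in grid bins)
    that captures the largest number of above-threshold density values."""
    high = (density >= threshold).astype(float)
    best, best_rc = -1.0, (0, 0)
    rows, cols = high.shape
    for r in range(rows - square_bins + 1):
        for c in range(cols - square_bins + 1):
            score = high[r:r + square_bins, c:c + square_bins].sum()
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc
```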
Future Work

We have several areas where we are working on improving and extending these preliminary results.