The Placement of Digitized Objects in a Point Cloud as a Photogrammetric Technique.
When an investigator is asked to reconstruct an incident, there may be limited data available to perform the analysis. The scarcity of evidence, specifically evidence of the positions of the subject objects relative to permanent, fixed points, can make it challenging to accurately reconstruct the incident. The physical evidence that has been documented therefore becomes more valuable, and more heavily scrutinized. In certain situations, an investigator may be limited to photographs taken after a vehicle or other object has already been moved from its post-incident position. Other times, video of the subject incident is available. In these cases, photogrammetric methods are useful and valid approaches for determining the locations of moving or stationary objects relative to permanent, fixed points within the photographs or video.
Previously, receipt of video of an incident was an unexpected but beneficial addition to the materials received. With the proliferation and widespread use of surveillance cameras, dash cameras, and cellular phones capable of capturing high-quality video, video is becoming a common part of the discovery package received by investigators. Rather than settling questions about the incident, the receipt of video often prompts further inquiries about the specifics of the event in question. Thus, an investigator is often tasked with answering questions about what cannot be seen in the video; the perspective that can be viewed provides the basis to answer these inquiries.
The receipt of surveillance video as part of an investigation typically provides an additional benefit over other video sources. Surveillance cameras are often mounted as fixed objects in a specific location relative to the accident scene. Thus, the physical location of the surveillance camera relative to the incident in question is unlikely to change over time. Although verification of this permanence can often be confirmed through Google Maps Streetview perspectives, verification of a surveillance camera's location and orientation can best be confirmed through a physical inspection of the site.
In one sense, the receipt of surveillance video can simply be considered a set of "photographs" for the reconstructionist to utilize during their analysis. As such, photogrammetric techniques can be utilized to take measurements of the objects observed in the surveillance video [1, 2, 3]. The aforementioned relative permanence of the surveillance camera location, however, eliminates one variable that needs to be accounted for as part of a photogrammetric analysis of post-incident photographs: the physical location of the camera itself.
As video and high-quality surveillance cameras have become more common, the ability to three-dimensionally digitize the area the camera perspective has captured has become significantly easier. The advent and widespread adoption of three-dimensional laser scanning has fundamentally changed an investigator's ability to quickly and accurately digitize incident locations. A comprehensive, colorized, three-dimensional digitization of the incident location, which would have seemed nearly impossible to cost-effectively accomplish previously, can now be performed by a single operator in a matter of minutes. Myriad peer-reviewed studies have been published that use laser-based techniques as the "control", concluding that the quality of the data acquired from three-dimensional laser scanners is high and the measurements taken from the scans are precise [4, 5]. Coupled with the availability and prevalence of surveillance video, the technological advances of three-dimensional laser scanning provide an investigator powerful tools to analyze incidents.
Previous research has investigated the integration of three-dimensional laser scanning and photogrammetric techniques. Coleman staged a collision on both planar and curved surfaces to investigate the accuracy and efficiency of various photogrammetric techniques, concluding that the slowest methodology employed was the use of PhotoModeler software [6]. Techniques that were more efficient, such as photograph rectification of point cloud data, failed to provide a compelling visual for any object that had more than two dimensions, i.e. a motorcycle helmet compared to a skid mark [6]. Carter investigated the use of point clouds from conventional laser scanners and unmanned aerial vehicles to camera-match scene evidence [7]. Although specific times for the methodologies were not provided, the techniques required the use of more than one software program to process and evaluate the data [7].
This research aims to outline and validate the methodology for utilizing surveillance video from an incident to place vehicles, or any other object, quickly and efficiently into a registered point cloud generated from three-dimensional laser scans of an incident site utilizing only one software package. The potential benefit of this technique is an efficient and time-saving methodology compared to other photogrammetric techniques. For example, the authors have found that this technique can take less than half of the time that a conventional photogrammetry project in a software package such as PhotoModeler can take. In addition, a compelling and geometrically-accurate three-dimensional visual of the evidence at issue has been created that can be presented to a jury at trial.
Another benefit of the methodology is that it utilizes only one piece of software. The photogrammetric technique in this research can be likened to on-site photogrammetry or reverse camera projection [8], with the important distinction that the on-scene work associated with those techniques now takes place in a three-dimensional computer environment. After the objects are placed in the three-dimensional environment, measurements can then be taken to fixed reference points in the environment, such as crosswalks or utility poles, whether those points are observed in the surveillance video or lie outside the video's field of view.
While there are other three-dimensional laser scanners and software packages available, this research utilized and evaluated FARO three-dimensional laser scanners and SCENE software. As one of the technique's main benefits is the time savings compared to other photogrammetric techniques, the timeframe to acquire spatial information utilizing this methodology will be quantified.
To validate the technique, a series of six hypothetical scenarios were devised based on real-world cases the researchers have investigated. In each scenario, a vehicle or vehicles were driven into a position relative to a simulated scene attribute such as a painted line simulating a crosswalk or a simulated centerline. Two surveillance cameras captured the motion involved with the placement of each vehicle in the six staged scenarios. The site used for the testing was the rear parking lot of the Lingohocken Fire Company in Wycombe, Pennsylvania. A 2013 MINI Countryman and a 1997 E-One Fire-Rescue Apparatus (herein known as the Fire Truck) were utilized.
For each of the six scenarios, two researchers who were uninvolved with the validation of the technique took longitudinal and lateral physical measurements of scene attributes. The measurements included distances from the vehicles to fixed objects within the scene, or measurements between the two vehicles, taken with two 25-foot steel measuring tapes. These control measurements were withheld from the two researchers who were evaluating the technique, and included both measurements that could and could not be directly observed in the surveillance video. These 18 physical longitudinal and lateral measurements served as the basis for evaluating the accuracy of the technique. Although two different surveillance cameras recorded each scenario, only one perspective was utilized per scenario to simulate the receipt of one video during an investigation (Table 1).
After the scenarios were staged and the physical measurements were taken, the surveillance video was secured from the Lingohocken Fire Company security system. The two researchers utilized three-dimensional laser scanners to create geometrically accurate three-dimensional point clouds of the incident site and vehicles. A FARO Focus3D X330 laser scanner was used to scan the incident site at 11 different locations in the vicinity of the two surveillance cameras. After the scanning was completed, FARO SCENE software was used to register the scans of the incident site, recreating the three-dimensional environment that the surveillance cameras' video captured (Figure 1).
Similarly, using both a FARO Focus3D S120 and a FARO Focus3D X330 laser scanner, separate scans of the MINI and Fire Truck were performed. Each vehicle was individually compiled in the SCENE software as a separate project. Using clipping boxes, both the MINI and Fire Truck were saved and exported individually as separate .e57 files (Figure 2a and 2b).
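Conceptually, a clipping box is an axis-aligned spatial filter over the point cloud: only points whose coordinates fall inside the box survive. The sketch below illustrates the underlying operation in Python with NumPy; the `clip_box` helper and sample coordinates are hypothetical and do not represent FARO's file formats or APIs.

```python
import numpy as np

def clip_box(points, box_min, box_max):
    """Return only the points inside an axis-aligned clipping box.

    points: (N, 3) array of x, y, z scan coordinates.
    box_min, box_max: 3-element corners of the box.
    """
    points = np.asarray(points, dtype=float)
    lo = np.asarray(box_min, dtype=float)
    hi = np.asarray(box_max, dtype=float)
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

# Example: isolate a vehicle-sized region from a small synthetic cloud.
cloud = np.array([[0.5, 0.5, 0.5],   # inside the box
                  [5.0, 5.0, 5.0],   # outside
                  [1.0, 2.0, 0.1]])  # inside
vehicle = clip_box(cloud, [0, 0, 0], [2, 3, 2])
print(len(vehicle))  # 2
```

The clipped subset can then be written out on its own (e.g., as an .e57 file) so the vehicle travels as a self-contained object.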
Depending on the scenario, either one or both of the vehicles were imported as .e57 files into the three-dimensional point cloud of the incident site for analysis. After the appropriate vehicle or vehicles were imported into the incident site, the vehicles were moved, rotated, and adjusted to match the available surveillance perspective for the given scenario. It is important to note that this research utilized still frames of the static objects observed within the available surveillance perspective. The research focused on quantifying the positions of the vehicles at final rest, not determining the positions of moving objects over time. Fixed reference objects that were visible in the still frames from the surveillance video relative to the vehicles, such as painted lines, crack seals, utility poles, and curbs (in Scenario 6, for example), were used to place the vehicles in the SCENE software so that they best matched the locations observed in the surveillance video by visual approximation. Clipping boxes were utilized throughout the workflow to assist in the best-fit placement of the vehicles relative to the locations of unique fixed reference objects (Figure 3a through 3c).
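Moving and rotating an imported vehicle cloud amounts to applying a rigid-body transformation. The minimal sketch below assumes a yaw-only rotation (a reasonable simplification for a vehicle at rest on a flat lot, which rotates mainly about the vertical axis); the `place_object` helper is hypothetical and is not a SCENE function.

```python
import numpy as np

def place_object(points, yaw_deg, translation):
    """Rigidly move an object's point cloud: rotate about the vertical
    (z) axis by yaw_deg degrees, then translate by the given offset."""
    t = np.radians(yaw_deg)
    c, s = np.cos(t), np.sin(t)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    # Row-vector convention: p' = p @ R.T + translation
    return np.asarray(points, float) @ R.T + np.asarray(translation, float)

# Rotate a point 90 degrees about z, then shift it 2 units along x.
moved = place_object([[1.0, 0.0, 0.0]], 90.0, [2.0, 0.0, 0.0])
print(moved)  # approximately [[2., 1., 0.]]
```

Iterating over small adjustments to the yaw and translation until the cloud lines up with the fixed reference objects mirrors the best-fit placement described above.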
The two researchers independently compiled the three-dimensional incident site and placed the vehicles for each scenario without the input or influence of the other researcher. This was purposely done to demonstrate and validate whether different individuals' methodologies can produce similar results.
To illustrate the workflow, Scenario 1, the Fire Truck outside of the crosswalk, will be analyzed to document the placement of one vehicle within the three-dimensional environment. Figures from all six scenarios are presented in Appendix A. The Fire Truck was driven into a position outside of a simulated marked crosswalk in Scenario 1 (Figure 4).
The position of the Fire Truck and crosswalk were selected so that the surveillance camera could not directly observe the front of the Fire Truck nor the crosswalk line. A screen capture still image of the Fire Truck at its point of rest in the surveillance video was taken for comparison purposes with the three-dimensional workflow. The unobserved simulated crosswalk is in front of the Fire Truck and to the right of the available perspective in Scenario 1 (Figure 5).
Utilizing the fixed reference points observed within the available surveillance video, the digitized .e57 file of the Fire Truck was placed into the three-dimensional project point cloud of the incident site. The Fire Truck was moved within the three-dimensional project point cloud to match the position observed in the surveillance video until a best-fit was achieved.
Using the 3D camera tool within SCENE, the perspective of the surveillance camera was matched by visual approximation with the SCENE software to demonstrate the substantial similarity of the placement of the Fire Truck within the scene. This perspective-matching was achieved by placing a virtual camera in the SCENE software in the same location as the scanned surveillance Camera 6. Surveillance Camera 6 could be located precisely in the SCENE software because it was scanned in as part of the registered point cloud (Figure 6).
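Matching a virtual camera to a scanned surveillance camera relies on standard pinhole projection: points in the registered cloud are translated into the camera frame, rotated, and divided by depth. The sketch below illustrates the geometry only; the `project` helper, the focal length in pixels, and the principal point are illustrative assumptions, not SCENE internals.

```python
import numpy as np

def project(points_world, cam_pos, R, focal_px, principal):
    """Project world points into a virtual camera image (pinhole model).

    R rotates world-frame vectors into the camera frame (z axis forward);
    focal_px is the focal length in pixels; principal is the image center.
    """
    p = (np.asarray(points_world, float) - np.asarray(cam_pos, float)) @ np.asarray(R, float).T
    u = focal_px * p[:, 0] / p[:, 2] + principal[0]
    v = focal_px * p[:, 1] / p[:, 2] + principal[1]
    return np.stack([u, v], axis=1)

# A point 10 units straight ahead of the camera lands at the principal point.
print(project([[0.0, 0.0, 10.0]], [0.0, 0.0, 0.0], np.eye(3), 1000.0, (640.0, 360.0)))
```

Because Camera 6 itself was captured in the registered point cloud, its position (`cam_pos` here) is known rather than solved for, which is what makes this perspective match straightforward.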
Finally, the researchers were given a list of the physical measurements for each scenario that needed to be measured within the SCENE software. For Scenario 1, the front bumper of the Fire Truck to the outside of the painted crosswalk line was measured for the longitudinal control measurement. All measurements within the project point cloud utilized the scan point or object measuring tool within the SCENE software. For the lateral control measurement, the left side of the Fire Truck was measured to the center of a traffic cone that was placed to the left of the Fire Truck. This cone could not be observed in the available surveillance perspective (Figure 7).
For the testing, a longitudinal measurement was defined as any measurement taken utilizing a vector which was parallel to the vehicle centerline. A lateral measurement was defined as any measurement taken with a vector which was perpendicular to the vehicle centerline.
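These definitions correspond to decomposing a measurement vector into components parallel and perpendicular to the vehicle centerline direction. A short illustration (the `decompose` helper is hypothetical):

```python
import numpy as np

def decompose(measurement_vec, centerline_dir):
    """Split a measurement vector into longitudinal (parallel to the
    vehicle centerline) and lateral (perpendicular) components."""
    d = np.asarray(centerline_dir, float)
    d = d / np.linalg.norm(d)            # unit vector along the centerline
    v = np.asarray(measurement_vec, float)
    longitudinal = float(np.dot(v, d))   # signed projection onto centerline
    lateral = float(np.linalg.norm(v - longitudinal * d))
    return longitudinal, lateral

# A 3-4-5 example: centerline along x, measurement vector (3, 4, 0).
print(decompose([3.0, 4.0, 0.0], [1.0, 0.0, 0.0]))  # (3.0, 4.0)
```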
Each scenario was completed using the methodology outlined above. Figures 8 through 11 illustrate placing two vehicles within the digitized three-dimensional environment for Scenario 5, two vehicles traveling in the same direction.
Eighteen measurements were taken for this study which were then divided up into two categories: (1) longitudinal and (2) lateral measurements. Each of the two researchers used techniques independent of the other researcher's influence to determine the experimental value of the measurements taken using the FARO SCENE software.
Measurements were taken inside of a 450.0" x 419.2" area of the simulated scene. Figure 12 illustrates the area where the measurements were taken, i.e. the measurement box, with respect to the geometric locations of the two surveillance cameras utilized in the research. The furthest measurement from Camera 6 was Scenario 2, Measurement Number 3, which was 656.1 inches away from the camera. Likewise, the furthest measurement from Camera 5 was Scenario 6, Measurement Number 18, which was 452.6 inches away from the camera.
Table 2 displays the tabulated data of the thirty-six experimental values obtained by the two researchers using the SCENE software and the eighteen reference measurements obtained using conventional measuring techniques. Longitudinal measurements are blue, and lateral measurements are orange. The 'Ref. Actual' column lists the control measurements taken, to the nearest quarter of an inch, with the 25-foot steel tapes by the researchers uninvolved with the validation of the technique. The 'SH' and 'GML' columns list the measurements, delta, and percent error from this research utilizing the proposed technique. The 'Measurement Distance' column reports the furthest distance, in inches, from the surveillance camera to the termination of the measurement. Finally, the 'Visible' column reports whether all (Y), part (P), or none (N) of the measurement's beginning or end points could be viewed in the surveillance perspective. Further details are included in Appendix B.
The longitudinal and lateral measurements generated from the two researchers were compared to the actual reference values and analyzed in terms of percent error and root-mean-square-error using the following equations:
% Error = (|Actual_t - Experimental_t| / Actual_t) x 100

RMSE = sqrt( (1/n) * SUM_{t=1..n} (Actual_t - Experimental_t)^2 )

where Experimental_t is the set of experimental data which corresponds with Actual_t, the set of data retrieved during the physical examination of the scene and vehicles.
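Both error metrics can be computed directly from the paired actual and experimental values. A minimal sketch; the helper names and sample values are illustrative, not the study's data:

```python
import math

def percent_error(actual, experimental):
    """Percent error of one experimental measurement against its control."""
    return abs(actual - experimental) / actual * 100.0

def rmse(actual, experimental):
    """Root-mean-square error over paired lists of measurements."""
    n = len(actual)
    return math.sqrt(sum((a - e) ** 2 for a, e in zip(actual, experimental)) / n)

# Illustrative values only:
print(round(percent_error(86.75, 90.0), 1))        # 3.7
print(round(rmse([10.0, 20.0], [13.0, 16.0]), 2))  # 3.54
```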
Tables 3 and 4 display the average percent error and the root-mean-square error of the data, respectively.
The percent error of the measurements was also reported in distance-based subsets based on the camera-to-measurement distance (Table 5).
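Grouping percent errors into camera-distance bins, as in Table 5, is a simple bucketing operation. A sketch; the `bin_by_distance` helper, the bin edges, and the sample records are illustrative, not the study's data:

```python
def bin_by_distance(records, edges):
    """Average the percent error of (distance, pct_error) records within
    consecutive half-open distance bins defined by edges (in inches)."""
    bins = {(lo, hi): [] for lo, hi in zip(edges[:-1], edges[1:])}
    for dist, err in records:
        for lo, hi in bins:
            if lo <= dist < hi:
                bins[(lo, hi)].append(err)
                break
    return {f"{lo}-{hi} in": sum(v) / len(v) if v else None
            for (lo, hi), v in bins.items()}

# Illustrative records: (distance from camera in inches, percent error)
records = [(100.0, 2.0), (650.0, 6.0), (700.0, 8.0)]
print(bin_by_distance(records, [0, 600, 1200]))
```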
Likewise, the percent error of the measurements was also reported by sorting the data based on each measurement's visibility. Measurements were coded 'Yes' when both the beginning and end points of the measurement were visible, 'Partial' when only one of the termination points was visible, and 'No' when neither point was visible in the surveillance camera perspective (Table 6).
Lastly, the time required to perform the technique was quantified. On average, approximately two to three hours was required from the initial processing of the scans after they were acquired to the final placement and measurement of the vehicles relative to fixed objects. One software package, FARO SCENE, was utilized in the workflow (Table 7).
With an average error of 8.7 percent, the results of this research demonstrated a sufficient and acceptable accuracy for validating this technique in the accident reconstruction and scientific communities [9, 10, 11, 12, 13]. The results of this research confirmed that the technique can accurately place objects in their three-dimensional environment based on a review of surveillance video to an average lateral and longitudinal root-mean-square-error of less than six inches.
Importantly, this research quantified the accuracy and efficiency of using this technique to measure the distance between objects that cannot be seen in the provided surveillance video. Measurements like these, such as the distance between the front of the Fire Truck and a marked crosswalk in Scenarios 1 and 2, represent a novel application of this technique. With an average longitudinal error of 11.9 percent, an average lateral error of 5.6 percent, and an average overall error of 8.7 percent, these known rates of error can guide the investigator when attempting to quantify the potential error of similar analyses. These findings expand the area that investigators can accurately measure to outside the field of vision of the surveillance video even when an object is only partially visible in the acquired perspective.
While each investigator may take differing nuanced approaches in the workflow, their results may be equally accurate. This is an important confirmation for the technique, as there are often multiple paths to arrive at the same destination. Individual researchers achieving accurate and consistent results was as important a part of this research as the validation of the technique itself. Specifically, the two researchers had similar average longitudinal percent errors and longitudinal root-mean-square-errors.
The average lateral measurements were more than twice as accurate as the average longitudinal measurements. The accuracy of the technique is dependent on the quantity, quality, and location of fixed reference objects that can be observed in the camera perspective. In this research, where an objective was to measure and quantify the error of longitudinal distances of unseen objects a significant distance from the camera, increased uncertainty in the longitudinal direction was expected and observed. This was expected as there are often more unique fixed reference points in the lateral direction (i.e. lane lines parallel to a vehicle centerline), compared to the longitudinal direction (i.e. crosswalks).
Although the deltas for Scenario 4, Measurement 7, were only 4.3 and 9.6 inches for researchers SH and GML, respectively, the percent errors associated with the measurements were large. This does not invalidate the technique; rather, the relatively small control measurement of 14 inches magnified otherwise normal deltas, especially when compared to the average root-mean-square-error. Careful consideration of the error associated with the technique is needed when applying this methodology to measurements of small magnitude.
As expected, the average error of the measurements increased as the measurement distance from the surveillance camera increased. That is, measurements that were taken closer to the camera were more precise than measurements that were taken further from the camera's physical location. For measurement distances of zero to 600 inches (50 feet) from the surveillance camera, the average error was only 4.6 percent. Furthermore, the precision of measurements increased when both the beginning and end points of the measurement were visible.
Cameras that view the ground plane from a steeper angle, i.e. surveillance cameras that are positioned higher off the ground, capture fixed reference objects that are more easily distinguished, which increases the accuracy of the technique. The investigator needs to take the angle of incidence into account.
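The angle of incidence to a ground point can be estimated from the camera's mounting height and the horizontal distance to the point. A minimal sketch; the `incidence_angle_deg` helper and the sample heights are hypothetical:

```python
import math

def incidence_angle_deg(camera_height, horizontal_distance):
    """Angle between the camera-to-ground-point sight line and the ground
    plane; higher mounting heights yield steeper (larger) angles."""
    return math.degrees(math.atan2(camera_height, horizontal_distance))

# A camera mounted 20 ft up viewing a point 50 ft away horizontally
# sees the ground at a steeper angle than one mounted 8 ft up.
print(round(incidence_angle_deg(20.0, 50.0), 1))  # 21.8
print(round(incidence_angle_deg(8.0, 50.0), 1))   # 9.1
```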
Utilizing an analog control methodology to verify the accuracy of the technique, i.e. two steel measuring tapes, was a decision made based on the expedience of acquiring the control points via this method. As the Fire Truck and parking lot of Lingohocken Fire Company were out of service while testing was conducted, the expedient return of the apparatus and parking spaces back into service was a functional limitation of the research. While the measuring tapes provided a less precise comparison than a methodology like scanning-in-place, the speed of the acquisition of these control points utilizing measuring tapes was a tangible benefit. It is possible that future studies that employ this methodology while utilizing more accurate control measurements may achieve more accurate results with a lower range of error.
All control measurements were taken parallel or perpendicular to the longitudinal axis of the respective vehicles. Control measurements were taken between the vehicles in Scenarios 5 and 6, in order to simulate a "who crossed the centerline" scenario, and to fixed objects within the simulated scene in all scenarios. Taking advantage of fixed objects is not only a common photogrammetric technique; it also ensured a sufficiently accurate baseline reference setup.
While other photogrammetric techniques may yield a higher degree of precision, the time they require may be as much as two to three times that of this technique. Furthermore, those methodologies may involve managing multiple pieces of software to construct the photogrammetric solution. The efficiency of this technique compared to a conventional photogrammetric solution is one of its main benefits. In addition, compelling and scaled three-dimensional visuals with a known rate of error can be created from the technique.
Careful consideration needs to be taken when the surveillance perspective involves soft targets, i.e. foliage, native vegetation, etc., that can potentially change with time and season. Although these objects certainly do not preclude application of this technique, they must be dealt with on a case-by-case basis.
Each scene is unique. While this study presents a specific error range for evidence placement, there are many factors to consider that may either positively or negatively affect the results. These factors can include the proximity of the camera or video camera to the evidence to be placed from video, the number of unique landmarks that can be used as a reference and their proximity to evidence to be placed from video, the number of cameras capturing the event, the resolution of video frames, the angle of incidence, and lens distortion and pixel aspect ratio correction.
The purpose of this research was to evaluate the technique when only one surveillance perspective was received by an investigator. The receipt of more than one perspective of the same area, like receiving both Camera 5 and Camera 6 for each scenario, would only enhance the accuracy of the technique.
While this research focused on the static positions of objects at their points of final rest utilizing still images from the surveillance video, this does not preclude the application of this technique to quantifying objects in motion to determine position and velocity using a frame-by-frame analysis. Potential uncertainties associated with acquiring still frames from video, such as motion blur, must be accounted for when applying this technique in such a way.
Although this research focused on video, the same methodologies apply to quantifying photographs using the same principles. The screen captures and corresponding still images from the video represent one frame from the video, which is essentially the same as a photograph. The only additional step would be to solve for the camera location, using previously-published photogrammetric techniques.
1. The results of this research validated this photogrammetric technique as a sufficiently accurate tool for placing objects into a three-dimensional scene and measuring distances that can and cannot be seen in the available surveillance video perspective.
2. The average percent error associated with this technique was 8.7 percent. The average root-mean-square-error was 5.5 inches.
3. While each investigator may take differing nuanced approaches in the three-dimensional workflow, their results may be equally accurate.
4. Although this research focused on video, the same principles apply to quantifying object locations in photographs.
5. This photogrammetric technique can be two to three times faster than conventional photogrammetric solutions.
Shawn F. Harrington
2288 Second Street Pike
Penns Park, Pennsylvania 18943
(215) 598 9750
(800) 700 4944
The authors would like to thank the Lingohocken Fire Company for making this research possible. They would also like to thank Chris Ferrone for his thoughtful insight into this research, and Donald Eisentraut and Jennifer Shultz for their many contributions. Finally, the effective presentation of this research would not have been possible without the diligent and thorough peer-review by the SAE manuscript review committee.
[1.] Breen, K.C. and Anderson, C.E., "The Application of Photogrammetry to Accident Reconstruction," SAE Technical Paper 861422, 1986, doi:10.4271/861422.
[2.] Tumbas, N.S., Kinney, J.R., and Smith, G.C., "Photogrammetry and Accident Reconstruction: Experimental Results," SAE Technical Paper 940925, 1994, doi:10.4271/940925.
[3.] Neale, W.T.C., Fenton, S., McFadden, S., Rose, N.A. et al., "A Video Tracking Photogrammetry Technique to Survey Roadways for Accident Reconstruction," SAE Technical Paper 2004-01-1221, 2004, doi:10.4271/2004-01-1221.
[4.] Callahan, M.A., LeBlanc, B., Vreeland, R., Bretting, G. et al., "Close-Range Photogrammetry with Laser Scan Point Clouds," SAE Technical Paper 2012-01-0607, 2012, doi:10.4271/2012-01-0607.
[5.] Tandy, D.F., Coleman, C., Colborn, J., Hoover, T. et al., "Benefits and Methodology for Dimensioning a Vehicle Using a 3D Scanner for Accident Reconstruction Purposes," SAE Technical Paper 2012-01-0617, 2012, doi:10.4271/2012-01-0617.
[6.] Coleman, C., Tandy, D., Colborn, J., Ault, N. et al., "Applying Camera Matching Methods to Laser Scanned Three Dimensional Scene Data with Comparisons to Other Methods," SAE Technical Paper 2015-01-1416, 2015, doi:10.4271/2015-01-1416.
[7.] Carter, N., Hashemian, A., Rose, N., Neale, W. et al., "Evaluation of the Accuracy of Image Based Scanning as a Basis for Photogrammetric Reconstruction of Physical Evidence," SAE Technical Paper 2016-01-1467, 2016, doi:10.4271/2016-01-1467.
[8.] Smith, G. and Allsop, D., "A Case Comparison of Single-Image Photogrammetry Methods," SAE Technical Paper 890737, 1989, doi:10.4271/890737.
[9.] Smith, R.A. and Toga, J.T., "Accuracy and Sensitivity of CRASH," SAE Technical Paper 821169, 1982, doi:10.4271/821169.
[10.] Pepe, M.D., Sobek, J.S., and Zimmerman, D.A., "Accuracy of Three-Dimensional Photogrammetry as Established by Controlled Field Tests," SAE Technical Paper 930662, 1993, doi:10.4271/930662.
[11.] Bartlett, W., Wright, W., Masory, O., Brach, R. et al., "Evaluating the Uncertainty in Various Measurement Tasks Common to Accident Reconstruction," SAE Technical Paper 2002-01-0546, 2002, doi:10.4271/2002-01-0546.
[12.] Rucoba, R., Duran, A., Carr, L., Erdeljac, D. et al., "A Three-Dimensional Crush Measurement Methodology Using Two-Dimensional Photographs," SAE Technical Paper 2008-01-0163, 2008, doi:10.4271/2008-01-0163.
[13.] Randles, B., Jones, B., Welcher, J., Szabo, T. et al., "The Accuracy of Photogrammetry vs. Hands-On Measurement Techniques Used in Accident Reconstruction," SAE Technical Paper 2010-01-0065, 2010, doi:10.4271/2010-01-0065.
TABLE B.1 Description of control points and their visibility utilized in the research.

Scenario 1 - Fire Truck Out of Crosswalk
  Meas. 1 (Long., 86 3/4 in): Center of Fire Truck front bumper to crosswalk; measurement along longitudinal axis of Fire Truck. Distance from camera: 615.4 in. Visible: No.
  Meas. 2 (Lat., 115 1/4 in): Left front door of Fire Truck to center of cone; measurement perpendicular to longitudinal axis of truck. Distance from camera: 562.7 in. Visible: No.

Scenario 2 - Fire Truck Inside Crosswalk
  Meas. 3 (Long., 38 in): Center of Fire Truck front bumper to crosswalk; measurement along longitudinal axis of Fire Truck. Distance from camera: 656.1 in. Visible: No.
  Meas. 4 (Lat., 117 3/4 in): Left side of Fire Truck to center of cone; measurement perpendicular to longitudinal axis of truck. Distance from camera: 562.7 in. Visible: No.

Scenario 3 - MINI Out of Crosswalk
  Meas. 5 (Long., 92 1/2 in): Center front of MINI to crosswalk line; measurement along longitudinal axis of MINI. Distance from camera: 615.4 in. Visible: No.
  Meas. 6 (Lat., 113 3/4 in): Left front tire of MINI to center of cone; measurement perpendicular to longitudinal axis of MINI. Distance from camera: 562.7 in. Visible: Partial (cone visible).

Scenario 4 - MINI Inside Crosswalk
  Meas. 7 (Long., 14 in): Center front of MINI to crosswalk line; measurement along longitudinal axis of MINI. Distance from camera: 633.4 in. Visible: No.
  Meas. 8 (Lat., 114 1/4 in): Left rear tire of MINI to center of cone; measurement perpendicular to longitudinal axis of MINI. Distance from camera: 562.7 in. Visible: Partial (cone visible).

Scenario 5 - Same Direction (Fire Truck and MINI)
  Meas. 9 (Lat., 51 1/4 in): Distance between the two vehicles, from left front door of MINI to right rear door of Fire Truck. Distance from camera: 383.3 in. Visible: Partial (right rear door of Fire Truck visible).
  Meas. 10 (Lat., 49 in): Distance between the two vehicles, from left rear wheel (center hub) of MINI to right front of underside slide-out compartment on Fire Truck. Distance from camera: 438.7 in. Visible: Partial (right underside compartment visible).
  Meas. 11 (Lat., 16 3/4 in): Right rear wheel of MINI to forward edge of the parking block. Distance from camera: 424.1 in. Visible: Partial (right rear wheel visible).
  Meas. 12 (Lat., 68 1/2 in): Right rear door handle of MINI to outside stucco wall of Station 35. Distance from camera: 395.9 in. Visible: Partial (right rear door handle of MINI visible).
  Meas. 13 (Long., 148 3/4 in): Center-left of Fire Truck's front bumper to center of three yellow bollards. Distance from camera: 311.8 in. Visible: Yes.
  Meas. 14 (Long., 100 1/2 in): Center front bumper of MINI to yellow parking line adjacent to the three yellow bollards. Distance from camera: 351.2 in. Visible: Yes.

Scenario 6 - Opposite Direction (Fire Truck and MINI)
  Meas. 15 (Lat., 44 in): Left front A-pillar door seam of MINI (above the 'ALL 4' emblem) to center of white stripe on left front door of Fire Truck. Distance from camera: 285.8 in. Visible: Partial (left front door of Fire Truck visible).
  Meas. 16 (Lat., 16 1/2 in): Center of MINI's right rear wheel to concrete pad located in front of Station 35 rear doors. Distance from camera: 205.0 in. Visible: Yes.
  Meas. 17 (Long., 78 1/4 in): Center-left front bumper of Fire Truck to front of the second parking block. Distance from camera: 312.2 in. Visible: Yes.
  Meas. 18 (Long., 174 1/4 in): Center front bumper of MINI to wooden beams bordering the grass area beyond the LECK green dumpster. Distance from camera: 452.6 in. Visible: Partial (wooden beam visible).
Shawn Harrington and Gabriel Lebak, ARCCA, Inc.
Received: 05 Aug 2017
Revised: 10 Jul 2018
Accepted: 15 Jul 2018
e-Available: 08 Aug 2018
Harrington, S. and Lebak, G., "The Placement of Digitized Objects in a Point Cloud as a Photogrammetric Technique," SAE Int. J. Trans. Safety 6(2):87-105, 2018, doi:10.4271/09-06-02-0007.
TABLE 1 Description of simulated scenarios.

| Scenario | Camera # | # of Measurements | Description of Simulated Scenario |
|---|---|---|---|
| 1 | 6 | 2 | Fire Truck positioned outside of painted marked crosswalk |
| 2 | 6 | 2 | Fire Truck positioned inside of painted marked crosswalk |
| 3 | 6 | 2 | MINI positioned outside of painted marked crosswalk |
| 4 | 6 | 2 | MINI positioned inside of painted marked crosswalk |
| 5 | 5 | 6 | Fire Truck and MINI positioned parallel to one another in the same direction |
| 6 | 5 | 4 | Fire Truck and MINI positioned parallel to one another in the opposite direction |

TABLE 2 Master results from testing; units are inches.

| Scenario (Meas. #) | Type of Meas. | Ref. Actual (in) | SH (in) | GML (in) | SH Delta (in) | GML Delta (in) | SH % Error | GML % Error |
|---|---|---|---|---|---|---|---|---|
| 1 (1) | Long | 86 3/4 | 78.3 | 85.8 | -8.5 | -0.9 | 9.8 | 1.1 |
| 1 (2) | Lat | 115 1/4 | 106.1 | 107.8 | -9.1 | -7.5 | 7.9 | 6.5 |
| 2 (3) | Long | 38 | 49.0 | 45.9 | 11.0 | 7.9 | 28.8 | 20.7 |
| 2 (4) | Lat | 117 3/4 | 103.3 | 117.5 | -14.5 | -0.3 | 12.3 | 0.2 |
| 3 (5) | Long | 92 1/2 | 86.8 | 83.5 | -5.7 | -9.0 | 6.2 | 9.7 |
| 3 (6) | Lat | 113 3/4 | 110.8 | 109.7 | -3.0 | -4.1 | 2.6 | 3.6 |
| 4 (7) | Long | 14 | 18.3 | 23.6 | 4.3 | 9.6 | 30.7 | 68.4 |
| 4 (8) | Lat | 114 1/4 | 113.8 | 105.3 | -0.5 | -8.9 | 0.4 | 7.8 |
| 5 (9) | Lat | 51 1/4 | 56.5 | 51.0 | 5.2 | -0.3 | 10.1 | 0.5 |
| 5 (10) | Lat | 49 | 52.1 | 46.4 | 3.1 | -2.6 | 6.3 | 5.3 |
| 5 (11) | Lat | 16 3/4 | 19.2 | 16.5 | 2.5 | -0.3 | 14.9 | 1.6 |
| 5 (12) | Lat | 68 1/2 | 71.4 | 66.8 | 2.9 | -1.7 | 4.2 | 2.5 |
| 5 (13) | Long | 148 3/4 | 153.1 | 148.2 | 4.4 | -0.5 | 3.0 | 0.3 |
| 5 (14) | Long | 100 1/2 | 103.0 | 99.0 | 2.5 | -1.5 | 2.4 | 1.5 |
| 6 (15) | Lat | 44 | 51.4 | 43.6 | 7.4 | -0.4 | 16.8 | 0.8 |
| 6 (16) | Lat | 16 1/2 | 16.3 | 17.5 | -0.2 | 1.0 | 1.5 | 5.9 |
| 6 (17) | Long | 78 1/4 | 76.5 | 78.9 | -1.7 | 0.6 | 2.2 | 0.8 |
| 6 (18) | Long | 174 1/4 | 169.4 | 171.7 | -4.9 | -2.6 | 2.8 | 1.5 |

| Scenario (Meas. #) | Type of Meas. | Dist. from Camera (in) | Visible? (Y = yes, P = partial, N = no) |
|---|---|---|---|
| 1 (1) | Long | 615.4 | N |
| 1 (2) | Lat | 562.7 | N |
| 2 (3) | Long | 656.1 | N |
| 2 (4) | Lat | 562.7 | N |
| 3 (5) | Long | 615.4 | N |
| 3 (6) | Lat | 562.7 | P |
| 4 (7) | Long | 633.4 | N |
| 4 (8) | Lat | 562.7 | P |
| 5 (9) | Lat | 383.3 | P |
| 5 (10) | Lat | 438.7 | P |
| 5 (11) | Lat | 424.1 | P |
| 5 (12) | Lat | 395.9 | P |
| 5 (13) | Long | 311.8 | Y |
| 5 (14) | Long | 351.2 | Y |
| 6 (15) | Lat | 285.8 | P |
| 6 (16) | Lat | 205.0 | Y |
| 6 (17) | Long | 312.2 | Y |
| 6 (18) | Long | 452.6 | P |

TABLE 3 Average percent error from testing.

| Quantity | % Error |
|---|---|
| SH longitudinal average | 10.7 |
| GML longitudinal average | 13.0 |
| Longitudinal average | 11.9 |
| SH lateral average | 7.7 |
| GML lateral average | 3.5 |
| Lateral average | 5.6 |
| Overall average | 8.7 |

TABLE 4 Root-mean-square error from testing.

| Quantity | RMSE (in) |
|---|---|
| SH longitudinal | 6.1 |
| GML longitudinal | 5.5 |
| Longitudinal average | 5.8 |
| SH lateral | 6.4 |
| GML lateral | 4.0 |
| Lateral average | 5.2 |
| Overall average | 5.5 |

TABLE 5 Percent error sorted by measurement distance from the surveillance camera.

| Subset | n | SH (%) | GML (%) | Average (%) |
|---|---|---|---|---|
| 200-400 in | 7 | 5.7 | 1.8 | 3.8 |
| 400-600 in | 7 | 6.7 | 3.8 | 5.3 |
| 600+ in | 4 | 18.9 | 24.9 | 21.9 |

TABLE 6 Percent error sorted by the visibility of the measurement when viewed from the surveillance perspective.

| Subset | n | SH (%) | GML (%) | Average (%) |
|---|---|---|---|---|
| Yes | 4 | 2.3 | 2.2 | 2.2 |
| Partial | 8 | 7.3 | 3.0 | 5.1 |
| No | 6 | 16.0 | 17.8 | 16.9 |

TABLE 7 Approximate time for the three-dimensional photogrammetric workflow.

| Task | Time |
|---|---|
| Scanning of Object(s) | 0.5-2 hours |
| Scanning of Scene | 0.5-1.5 hours |
| Processing of Scans | 0.5-2 hours |
| Placement of Object(s) | 0.5-1 hour |
| Measurements | 15 minutes |
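Tables 3 and 4 summarize accuracy as an average percent error and a root-mean-square error (RMSE) over the deltas between each analyst's photogrammetric estimate and the tape-measured reference value. The paper does not include code; the following is a minimal Python sketch of how these two statistics are computed, using a few reference/estimate pairs taken from Table 2 (the function and variable names are illustrative, not from the paper):

```python
def percent_error(actual, measured):
    """Unsigned percent error of one photogrammetric measurement."""
    return abs(measured - actual) / actual * 100.0

def rmse(pairs):
    """Root-mean-square error over (actual, measured) pairs; same units as inputs."""
    return (sum((m - a) ** 2 for a, m in pairs) / len(pairs)) ** 0.5

# Scenario 1, Measurement 1 (longitudinal), analyst SH:
# reference value 86 3/4 in, photogrammetric estimate 78.3 in.
delta = 78.3 - 86.75                          # about -8.5 in, as listed in Table 2
print(round(percent_error(86.75, 78.3), 1))   # about 9.7 %

# RMSE pools the signed errors of several measurements into a single figure:
sh_long = [(86.75, 78.3), (38.0, 49.0), (92.5, 86.8)]  # SH longitudinal, Meas. 1, 3, 5
print(round(rmse(sh_long), 1))
```

Small differences between values computed this way and the tabulated statistics can arise from rounding of the intermediate estimates reported in Table 2.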