Lowest floor elevation estimation via street view images
Street view imagery such as Google Street View is widely used in people’s daily lives. Many studies have detected and mapped objects such as traffic signs and sidewalks for urban built-up environment analysis. While mapping objects in the horizontal dimension is common in those studies, automated vertical measurement over large areas remains underexploited. Vertical information from street view imagery can benefit a variety of studies. One notable application is estimating the lowest floor elevation, which is critical for building flood vulnerability assessment and insurance premium calculation. In this article, we explore vertical measurement in street view imagery using the principle of tacheometric surveying. In a case study of lowest floor elevation estimation from Google Street View images, we trained a neural network (YOLOv5) for door detection and used the fixed height of doors to estimate door elevation. The depth maps of Google Street View were utilized to traverse the elevation from the roadway surface to target objects. The results suggest an average elevation error of 0.218 m. The proposed pipeline offers a novel approach to automated elevation estimation from street view imagery and is expected to benefit future terrain-related studies over large areas.
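The tacheometric idea behind this abstract can be sketched with a pinhole-camera model: a detected door's pixel height, together with its known real-world height, yields the camera-to-door distance, and the door bottom's vertical offset from the image center then yields its elevation relative to the camera. The function name, argument names, and the 2.03 m standard door height below are illustrative assumptions, not details from the paper.

```python
def door_elevation(door_top_px, door_bottom_px, image_center_y,
                   focal_length_px, camera_elevation_m, door_height_m=2.03):
    """Estimate the elevation of a door's bottom edge (proxy for floor level).

    Pixel y-coordinates increase downward, as in standard image coordinates.
    camera_elevation_m is the camera's elevation, e.g. traversed from the
    roadway surface via the street view depth map.
    """
    # Similar triangles: distance = focal_length * real_height / pixel_height
    pixel_height = door_bottom_px - door_top_px
    distance_m = focal_length_px * door_height_m / pixel_height
    # Vertical drop of the door bottom below the optical axis
    offset_px = door_bottom_px - image_center_y
    drop_m = distance_m * offset_px / focal_length_px
    return camera_elevation_m - drop_m
```

For example, a door spanning pixels 400–900 in a 1000 px focal-length image, seen from a camera 2.5 m above datum, would be placed about 4.06 m away with its bottom roughly 0.88 m above datum.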
Mapping signalized intersections using street view imagery
We propose a six-step geoprocessing model to generate an intersection feature layer from road networks and utilize up to three of the nearest street view images (SVIs) to capture the streetscape at each intersection. Then, a deep learning-based image segmentation model is adopted to recognize traffic light-related pixels in each SVI. Last, we design a post-processing step that generates new features characterizing the SVIs’ segmentation results at each intersection and builds a decision tree model to determine the traffic control type. This study can directly benefit transportation agencies by providing a ready-to-use smart audit for large-scale mapping of signalized intersections.
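The post-processing and classification step described above might look like the following sketch: per-intersection features summarize the segmentation output of up to three SVIs, and a shallow decision-tree-style rule assigns the traffic control type. The feature names and thresholds are assumptions for illustration, not values from the study.

```python
def traffic_control_type(features):
    """Classify an intersection from SVI segmentation-derived features.

    features: dict with
      'n_svi_with_signal'      - number of SVIs (of up to 3) containing
                                 traffic light-related pixels
      'max_signal_pixel_ratio' - largest share of traffic-light pixels
                                 observed in any single SVI
    """
    # Signal pixels visible from multiple viewpoints: strong evidence.
    if features['n_svi_with_signal'] >= 2:
        return 'signalized'
    # A single view still counts if the detection is not negligible.
    if (features['n_svi_with_signal'] == 1
            and features['max_signal_pixel_ratio'] > 0.001):
        return 'signalized'
    return 'unsignalized'
```

In practice such thresholds would be learned by fitting a decision tree to labeled intersections rather than set by hand.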
Local motion simulation using deep reinforcement learning
In this article, we propose a local motion simulation method integrating optimal reciprocal collision avoidance (ORCA) and deep reinforcement learning (DRL), referred to as ORCA-DRL. The main idea of ORCA-DRL is to perform local collision avoidance via ORCA while smoothing the trajectory via DRL. We use a deep neural network (DNN) as the state-to-action mapping function, where the state information is detected by virtual visual sensors and the action space comprises two continuous dimensions: speed and direction. To improve data utilization and speed up training, we use the proximal policy optimization (PPO) algorithm within the actor–critic (AC) framework to update the DNN parameters.
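The state-to-action mapping described above can be sketched as a small fully connected network whose two outputs are squashed into the continuous speed and direction ranges. Layer sizes, sensor count, and action bounds below are illustrative assumptions; the paper's actual DNN architecture and PPO training loop are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

class PolicyNet:
    """Toy state-to-action mapping: virtual sensor readings -> (speed, turn)."""

    def __init__(self, n_sensors=16, hidden=32,
                 max_speed=1.5, max_turn=np.pi / 4):
        # Randomly initialized weights; PPO would update these via AC training.
        self.w1 = rng.normal(0.0, 0.1, (n_sensors, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 2))
        self.b2 = np.zeros(2)
        self.max_speed = max_speed
        self.max_turn = max_turn

    def act(self, state):
        h = np.tanh(state @ self.w1 + self.b1)
        raw = np.tanh(h @ self.w2 + self.b2)          # each output in [-1, 1]
        speed = (raw[0] + 1.0) / 2.0 * self.max_speed  # map to [0, max_speed]
        direction = raw[1] * self.max_turn             # map to [-max_turn, max_turn]
        return speed, direction
```

The tanh squashing guarantees the sampled actions stay inside the continuous bounds, which is one common way to handle box-constrained action spaces in actor–critic methods.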
Energy Harvesting Scenario using Photovoltaic Solar Panels
This study designed a solar energy harvesting scenario for the Georgia Tech campus. Multi-criteria scenarios and GIS techniques were used to select suitable rooftop sites for solar panels. Solar analysis tools were applied to estimate the total solar energy that could be harvested. Finally, a cost analysis was performed to assess how much benefit could be gained once the harvesting scenario was implemented. (Huang, 2017)
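The cost-analysis step described above amounts to comparing annual energy savings against installation cost; a minimal payback sketch is below. All parameter values in the usage note are illustrative assumptions, not figures from the study.

```python
def simple_payback_years(panel_area_m2, annual_insolation_kwh_per_m2,
                         panel_efficiency, electricity_price_per_kwh,
                         installation_cost):
    """Years to recoup installation cost from avoided electricity purchases.

    Ignores degradation, maintenance, and discounting for simplicity.
    """
    annual_energy_kwh = (panel_area_m2 * annual_insolation_kwh_per_m2
                         * panel_efficiency)
    annual_savings = annual_energy_kwh * electricity_price_per_kwh
    return installation_cost / annual_savings
```

For example, 100 m² of 18%-efficient panels under 1500 kWh/m²/yr insolation at $0.12/kWh would repay a $40,000 installation in roughly 12.3 years.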