Recently, due to global warming and the heat island effect, many urban areas experience heavy rainfall that often exceeds the drainage capacity of sewerage systems and causes above-floor and/or below-floor inundation damage. People who happen to be in underground facilities find it especially hard to recognize the risk of short-duration heavy rainfall, since they are isolated from outdoor weather conditions. It is therefore very important to have an emergency plan for flooding and a systematic means of notifying users of the estimated hazard according to real-time rain conditions.
In order to estimate the underground flooding level caused by short-duration, high-intensity rainfall with respect to time and place, a simulation of inflow discharges under different profiles of short-duration, high-intensity rainfall was conducted for the center of Osaka, Japan, by Ishigaki et al. in 2013. This kind of simulation can provide a reliable estimate of the inundation risk.
This paper proposes an effective method for visualizing expected flood risk using AR representation for mobile phone users in underground spaces. Our method acquires the individual user's viewpoint in an underground environment, where GPS does not function, in order to superimpose flood surfaces expressed by computer graphics (CG) on a photograph captured by the user's mobile phone.
We prepare a 3D model with real dimensions of the target area based on point cloud data captured by a laser scanner. We also render the 3D scene using the colored point cloud to create a collection of photo-realistic CG images of the target area. Each rendered image is processed to extract image feature points, which can be associated with the 3D coordinates of the point cloud.
When a new photo arrives from the mobile user, the same type of feature points is extracted and immediately used to search for the most similar image in the pre-rendered image set. The resulting 2D-3D correspondences of the feature points then allow the Perspective-n-Point (PnP) problem to be solved: 2D feature points on the input photo are associated with their corresponding 3D coordinates in physical space, and finally the extrinsic camera parameters of the mobile device are estimated in the 3D model's coordinate system. The 3D model can also be used to render a CG scene of the flooded situation, estimated by simulation under the assumption that the area receives short-duration heavy rain. The visualized flood water depth can thus be overlaid on the newly taken photo while keeping the flood CG consistent with the user's viewpoint and orientation. The paper demonstrates the applicability and practicality of the proposed method through experiments at an actual underground train station. Depending on the user's viewpoint, the corresponding flood situation can be displayed from a first-person perspective, and the relative occlusion between the virtual water surfaces and physical objects allows the water depth to be recognized.
Laser scanning and photogrammetry, Visualization and VR/AR