Visible Routes in 3D Dense City Using Reinforcement Learning

Preprint published in 2018 by O. Gal, Y. Doytsher
This paper is available in a repository.

Full text: Download

Preprint: policy unknown
Postprint: policy unknown
Published version: policy unknown

Abstract

In the last few years, the 3D GIS domain has developed rapidly and has become increasingly accessible to different disciplines. 3D spatial analysis of built-up areas is one of the most challenging topics for the communities currently dealing with spatial data. One of the most basic problems in spatial analysis concerns visibility computation in such environments. Visibility calculation methods aim to identify the parts of objects in the environment that are visible from a single viewpoint or from multiple viewpoints. In this work, we present a method that combines visibility analysis in 3D environments with a dynamic motion planning algorithm, named Visibility Velocity Obstacles (VVO), and with a Markov process that models spatial visibility analysis for routes in a dense 3D city environment. Building on the VVO analysis, we use a Reinforcement Learning (RL) method to find an optimal action policy in the dense 3D city environment, described as a Markov decision process, that navigates along the most visible routes. To the best of our knowledge, this is the first RL solution to the visibility analysis problem in dense 3D environments, generating a sequence of viewpoints that yields optimal visibility along different routes in an urban scene. Our analysis is based on a fast, dedicated solution for visibility boundaries, which allows us to formulate the problem with RL methods.
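
The abstract describes casting route selection in a dense 3D city as a Markov decision process whose reward reflects visibility, solved with reinforcement learning. As a rough, self-contained sketch of that kind of formulation (not the authors' actual VVO-based method), the following tabular Q-learning example learns a route over a toy 2D grid in which each cell carries a hypothetical visibility score; the grid size, visibility values, and hyperparameters are all illustrative assumptions.

# Illustrative sketch only: tabular Q-learning that learns a route maximizing
# accumulated "visibility" reward on a toy grid. The grid, visibility scores,
# and hyperparameters are hypothetical; the paper's method instead builds on
# 3D visibility boundaries and Visibility Velocity Obstacles (VVO).
import numpy as np

rng = np.random.default_rng(0)

N = 8                                    # toy city modeled as an N x N grid
visibility = rng.random((N, N))          # hypothetical per-cell visibility score
goal = (N - 1, N - 1)

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
Q = np.zeros((N, N, len(ACTIONS)))

alpha, gamma, eps = 0.1, 0.95, 0.1       # learning rate, discount, exploration

def step(state, a):
    """Apply action a; the reward is the visibility of the cell moved into."""
    r, c = state
    dr, dc = ACTIONS[a]
    nr = min(max(r + dr, 0), N - 1)
    nc = min(max(c + dc, 0), N - 1)
    return (nr, nc), visibility[nr, nc], (nr, nc) == goal

for episode in range(3000):
    state, done = (0, 0), False
    while not done:
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[state]))
        nxt, reward, done = step(state, a)
        # standard Q-learning update toward reward plus discounted best next value
        Q[state][a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state][a])
        state = nxt

# Greedy rollout of the learned policy: the "most visible" route in this toy model
state, route = (0, 0), [(0, 0)]
while state != goal and len(route) < 4 * N:
    state, _, _ = step(state, int(np.argmax(Q[state])))
    route.append(state)
print(route)

In the paper's setting, the states and rewards would come from the VVO visibility-boundary analysis of the full 3D scene rather than a flat grid; the sketch only mirrors the MDP-plus-Q-learning structure the abstract outlines.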
