Achieving accurate and robust global situational awareness of a complex time-evolving field from a limited number of sensors has been a long-standing challenge. This reconstruction problem is especially difficult when sensors are sparsely positioned in a seemingly random or unorganized manner, as is often encountered in a range of scientific and engineering problems. Moreover, these sensors may be in motion and may come online or go offline over time. The key leverage in addressing this challenge is the wealth of data accumulated from the sensors. As a solution to this problem, we propose a data-driven spatial field recovery technique founded on a structured grid-based deep-learning approach for arbitrarily positioned sensors of any number. It should be noted that naive use of machine learning becomes prohibitively expensive for global field reconstruction and is furthermore not adaptable to an arbitrary number of sensors. In this work, we use Voronoi tessellation to obtain a structured-grid representation from sensor locations, enabling the computationally tractable use of convolutional neural networks. A central feature of our method is its compatibility with deep learning-based super-resolution reconstruction techniques for structured sensor data that are established for image processing. The proposed reconstruction technique is demonstrated for unsteady wake flow, geophysical data and three-dimensional turbulence. The current framework is able to handle an arbitrary number of moving sensors and thereby overcomes a major limitation of existing reconstruction methods. Our technique opens a new pathway toward the practical use of neural networks for real-time global field estimation.
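The core preprocessing step described above can be sketched as follows: scattered sensor readings are mapped onto a structured grid by assigning each grid cell the value of its nearest sensor (a Voronoi tessellation), together with a binary mask marking sensor locations, so that a standard CNN can consume the result. This is a minimal illustrative sketch, not the authors' implementation; the function name `voronoi_image`, the integer grid coordinates for sensors, and the two-channel output convention are assumptions for illustration.

```python
import numpy as np

def voronoi_image(sensor_xy, sensor_vals, grid_shape):
    """Map scattered sensor readings onto a structured grid.

    Each grid cell receives the value of its nearest sensor
    (Voronoi assignment); a binary mask channel marks cells that
    actually contain a sensor. Both outputs have shape grid_shape.
    """
    ny, nx = grid_shape
    # Coordinates of every grid cell, shape (ny*nx, 2)
    ys, xs = np.mgrid[0:ny, 0:nx]
    cells = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)

    # Distance from every cell to every sensor, shape (ny*nx, n_sensors)
    d = np.linalg.norm(cells[:, None, :] - sensor_xy[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)

    # Voronoi-tessellated field: nearest sensor's value everywhere
    field = sensor_vals[nearest].reshape(ny, nx)

    # Mask channel: 1 at (rounded) sensor locations, 0 elsewhere
    mask = np.zeros(grid_shape)
    iy = np.clip(np.round(sensor_xy[:, 0]).astype(int), 0, ny - 1)
    ix = np.clip(np.round(sensor_xy[:, 1]).astype(int), 0, nx - 1)
    mask[iy, ix] = 1.0
    return field, mask
```

Because the output is a fixed-size image regardless of how many sensors are supplied, the same CNN can be applied as sensors move, appear, or drop out, which is the property the abstract highlights.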
ASJC Scopus subject areas
- Human-Computer Interaction
- Computer Vision and Pattern Recognition
- Computer Networks and Communications
- Artificial Intelligence