Substantial foundational mathematical, statistical, and computational research is required to bridge the gap between the current state of the art and the aspirations for digital twins.
Virtual representation
A fundamental challenge for digital twins is the vast range of spatial and temporal scales that virtual representations must address. In many applications, the scales that are computationally feasible are insufficient to resolve important phenomena and do not achieve the fidelity required to support decision-making. Although different digital twin applications have different requirements for modeling fidelity, data, precision, visualization, and time to solution, many of the envisioned uses of digital twins cannot be achieved with existing computational resources. Investment in both computing resources and mathematical and algorithmic advances is necessary to bridge the gap between what can be simulated and what is needed for a trusted digital twin. Areas of particular interest include multiscale modeling, hybrid modeling, and surrogate modeling. Hybrid modeling combines empirical and mechanistic modeling approaches, leveraging the strengths of both data-driven and model-driven methods. Combining data-driven and mechanistic models requires effective coupling techniques that facilitate the flow of information while respecting the constraints and assumptions specific to each model. More generally, models of different fidelity may be used across different subsystems, assumptions may need to be reconciled, and multimodal data from different sources must be synchronized. Overall, a digital twin will likely require a federation of individual simulations rather than a single monolithic software system, and these simulations will need to be integrated into a complete digital twin ecosystem. Aggregating risk measures and quantifying uncertainty across multiple dynamic systems is challenging and requires extensions of existing methodologies.
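To make the hybrid-modeling idea concrete, the short Python sketch below couples a simple mechanistic model with a data-driven correction fit to its residuals. Everything here (the decay model, the drift, the polynomial correction) is a hypothetical illustration of the coupling pattern, not a method described in the source.

    import numpy as np

    # Minimal hybrid-modeling sketch (illustrative assumptions throughout):
    # a mechanistic model predicts the bulk dynamics, and a data-driven
    # correction is fit to the residual between observations and that model.

    def mechanistic_model(t, k=0.5):
        # Hypothetical first-order decay model for some observed quantity.
        return np.exp(-k * t)

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 10.0, 50)
    # Synthetic "observations": mechanistic trend plus an unmodeled slow drift.
    observed = mechanistic_model(t) + 0.05 * t + rng.normal(0.0, 0.01, t.size)

    # Data-driven component: fit a low-order polynomial to the residual.
    residual = observed - mechanistic_model(t)
    coeffs = np.polyfit(t, residual, deg=2)

    def hybrid_model(t_new):
        # Hybrid prediction = mechanistic core + learned residual correction.
        return mechanistic_model(t_new) + np.polyval(coeffs, t_new)

    print(hybrid_model(np.array([2.0, 8.0])))

In a real digital twin the mechanistic core would be a multiscale, physics-based simulation and the learned correction a far richer surrogate, but the basic pattern of a mechanistic forecast plus a learned residual is the same.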
Physical counterpart
Digital twins rely on real-time (or near-real-time) processing of accurate, reliable data that are often heterogeneous, large-scale, and multi-resolution. Although much of the literature is devoted to best practices for collecting and preparing data for use, several important opportunities warrant further investigation. Handling outliers and anomalous data is important for data quality assurance: robust techniques are needed to faithfully represent salient rare events while identifying and discarding spurious outliers. Resource, time, and accessibility constraints can also prevent data collection at the frequency and resolution needed to adequately capture system dynamics; this undersampling can cause important events and features to be missed, especially in complex systems with large spatiotemporal variation. Innovative sampling techniques are needed to optimize data collection. Artificial intelligence (AI) and machine learning (ML) techniques that focus on maximizing average-case performance can incur large errors on rare events, so new loss functions and performance metrics are required. Improving the integrity, performance, and reliability of sensors, and their ability to detect and mitigate adversarial attacks, is critical to increasing the trustworthiness of digital twins. To handle the vast amounts of data involved, including the large-scale streaming data required by some digital twin applications, data assimilation techniques that leverage optimized ML models, architectures, and computational frameworks need to be developed.
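As a hedged illustration of the point about loss functions for rare events, the sketch below compares an ordinary mean-squared error with a tail-weighted variant that penalizes errors on extreme observations more heavily. The threshold and weight are arbitrary choices made for the example, not values from the source.

    import numpy as np

    # Illustrative sketch: a loss that up-weights rare, extreme events so a
    # model is not tuned only for average-case accuracy. The threshold and
    # weight are hypothetical choices.

    def tail_weighted_mse(y_true, y_pred, threshold=2.0, tail_weight=10.0):
        # Give extreme observations (|y| above the threshold) extra weight.
        weights = np.where(np.abs(y_true) > threshold, tail_weight, 1.0)
        return np.mean(weights * (y_true - y_pred) ** 2)

    rng = np.random.default_rng(1)
    y_true = rng.normal(0.0, 1.0, 1000)
    y_true[::200] = 5.0                      # inject a few rare spikes
    y_pred = np.zeros_like(y_true)           # a "mean-only" predictor

    print("plain MSE:        ", np.mean((y_true - y_pred) ** 2))
    print("tail-weighted MSE:", tail_weighted_mse(y_true, y_pred))

A predictor that ignores the rare spikes looks acceptable under the plain metric but is clearly penalized by the tail-weighted one, which is the kind of behavior new metrics for digital twins would need to capture.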
Ethics, privacy, data governance, and security
Digital twins in certain settings may rely on identifiable (or re-identifiable) data, while others may involve proprietary or sensitive information. Protecting individual privacy requires active consideration within each element of the digital twin ecosystem. In sensitive or high-risk settings, digital twins require a higher level of security, especially for the transmission of information between the physical counterpart and the virtual representation. In some cases, an automated controller may issue commands directly to the physical counterpart based on results from the virtual representation; protecting these communications from interference is of utmost importance.
Physical to virtual feedback flow
Combining physical observations and virtual models requires inverse problem techniques and data assimilation. Digital twins require calibration and updating on practical timescales, which highlights fundamental gaps in theory, methodology, and computational approaches to inverse problems and data assimilation. ML and AI can play a major role in addressing these challenges, including through online learning techniques that use streaming data to continually update models. Additionally, in environments where data is limited, approaches such as active learning and reinforcement learning can help guide the collection of additional data that is most important for digital twin purposes.
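To make the data assimilation idea concrete, the following minimal sketch applies a scalar Kalman filter, one of the simplest sequential assimilation schemes, to blend a virtual model forecast with streaming synthetic observations. The dynamics, noise levels, and data are all assumed for illustration and are not drawn from the source.

    import numpy as np

    # Minimal sketch of sequential data assimilation: a scalar Kalman filter
    # that repeatedly blends a model forecast with streaming observations.

    a, q, r = 0.95, 0.02, 0.1      # assumed dynamics, process noise, obs noise
    x_est, p_est = 0.0, 1.0        # initial state estimate and its variance

    rng = np.random.default_rng(2)
    x_true = 1.0
    for step in range(20):
        # Truth and noisy observation (stand-ins for the physical counterpart).
        x_true = a * x_true + rng.normal(0.0, np.sqrt(q))
        y_obs = x_true + rng.normal(0.0, np.sqrt(r))

        # Forecast step: propagate the virtual model and its uncertainty.
        x_fc = a * x_est
        p_fc = a * p_est * a + q

        # Analysis step: weight forecast vs. observation by their uncertainties.
        gain = p_fc / (p_fc + r)
        x_est = x_fc + gain * (y_obs - x_fc)
        p_est = (1.0 - gain) * p_fc

    print("final estimate:", x_est, "truth:", x_true)

The calibration and updating challenges described above arise when the state is high-dimensional, the model is nonlinear and expensive, and updates must happen on practical timescales, which is where ML-accelerated and online-learning variants of this loop become necessary.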
Virtual to physical feedback flow
Through an automated controller or a human in the loop, a digital twin may drive changes in the physical counterpart itself (e.g., through control) or in the observation system associated with the physical counterpart (e.g., through sensor steering). Although mathematically and statistically sophisticated formulations of optimal experimental design (OED) exist, few approaches can address the kinds of high-dimensional problems expected in digital twins. In the context of digital twins, OED needs to be tightly integrated with data assimilation and with control or decision-support tasks to optimally design and guide data collection. Real-time digital twin computation may require edge computing, subject to constraints on computational accuracy, power consumption, and communication. ML models that run quickly are well suited to meeting these requirements, but their black-box nature poses a barrier to establishing trust. Additional work is required to develop reliable ML and surrogate models that perform well under the required computational and time constraints. Although the dynamic adaptation needs of digital twins could benefit from reinforcement learning approaches, there is a gap between theoretical performance guarantees and methods that are effective in practice.
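The following sketch illustrates one simple form of OED under assumptions chosen purely for illustration: in a linear-Gaussian setting, candidate sensors are selected greedily by how much each measurement reduces the log-determinant of the posterior covariance (a D-optimality criterion). The forward model and noise level are synthetic.

    import numpy as np

    # Sketch of optimal experimental design in a linear-Gaussian setting:
    # greedily pick the candidate sensor whose measurement most reduces
    # posterior uncertainty (D-optimality).

    rng = np.random.default_rng(3)
    n_params, n_candidates, noise_var = 4, 30, 0.05
    G = rng.normal(size=(n_candidates, n_params))   # candidate observation operators
    cov = np.eye(n_params)                          # prior parameter covariance

    chosen = []
    for _ in range(5):                              # pick 5 sensors
        best, best_gain, best_cov = None, -np.inf, cov
        for i in range(n_candidates):
            if i in chosen:
                continue
            g = G[i:i + 1]
            # Posterior covariance after adding this single measurement.
            gain_mat = cov @ g.T / (g @ cov @ g.T + noise_var)
            cov_new = cov - gain_mat @ g @ cov
            # D-optimal criterion: reduction in log-determinant of covariance.
            gain = np.linalg.slogdet(cov)[1] - np.linalg.slogdet(cov_new)[1]
            if gain > best_gain:
                best, best_gain, best_cov = i, gain, cov_new
        chosen.append(best)
        cov = best_cov
    print("selected sensors:", chosen)

The high-dimensional, nonlinear, real-time versions of this problem, coupled with data assimilation and control, are exactly the settings where current OED methods fall short.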
Verification, Validation, and Uncertainty Quantification (VVUQ)
VVUQ must play a role in every element of the digital twin ecosystem and is critical to the responsible development, use, and sustainability of digital twins. Evolution of the physical counterpart under real-world conditions of use, changes in data collection, noise in the data, shifts in the distribution of data shared with the virtual twin, changes in the prediction and decision-making tasks imposed on the digital twin, and updates to the digital twin's virtual models all affect VVUQ. Verification and validation help increase confidence in virtual representations, while uncertainty quantification conveys the quality of predictions. New VVUQ challenges for digital twins arise from model discrepancy, unresolved scales, surrogate modeling, AI, hybrid modeling, and the need to make predictions in extrapolatory regimes. A digital twin's VVUQ must also contend with uncertainties associated with the physical counterpart, such as changes in sensors and data-collection equipment and the evolution of the physical counterpart itself. Applications that require real-time updates also require continuous VVUQ, which is not yet computationally feasible. VVUQ further serves to assess the mechanisms used to pass information between the physical and the virtual, including challenges arising from parameter uncertainty and ill-posed or underdetermined inverse problems, as well as uncertainties introduced by human involvement.
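As a small illustration of the uncertainty quantification side of VVUQ, the sketch below propagates an assumed posterior over a single model parameter through a toy forward model by Monte Carlo sampling and reports a prediction interval. The model and the parameter distribution are hypothetical; in practice the forward model would be a full digital twin simulation, and the analysis would need to be repeated as the twin and its physical counterpart evolve.

    import numpy as np

    # Illustrative uncertainty-quantification sketch: propagate uncertainty in
    # a model parameter through a simple forward model by Monte Carlo sampling
    # and report a prediction interval.

    rng = np.random.default_rng(4)

    def forward_model(k, t=5.0):
        # Hypothetical forward model (same toy decay model as earlier sketches).
        return np.exp(-k * t)

    # Assumed posterior over the decay-rate parameter after calibration.
    k_samples = rng.normal(loc=0.5, scale=0.05, size=10_000)
    predictions = forward_model(k_samples)

    low, high = np.percentile(predictions, [2.5, 97.5])
    print(f"mean prediction: {predictions.mean():.4f}")
    print(f"95% prediction interval: [{low:.4f}, {high:.4f}]")

The computational burden of doing this kind of sampling continuously, for expensive simulations and in extrapolatory regimes, is one reason continuous VVUQ for digital twins remains out of reach today.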