GALCIT Colloquium
In this work, we first use explainable deep learning based on Shapley-value attributions to identify the regions most important for predicting the future states of a turbulent channel flow. The explainability framework (based on deep SHAP) is applied to each grid point in the domain, and a percolation analysis of the resulting importance field yields coherent high-importance flow regions. These regions overlap with the intense Reynolds-stress (Q) events by roughly 70% in two-dimensional vertical planes, but only by about 30% in the full three-dimensional domain. Interestingly, the importance-based structures overlap strongly with classical turbulence structures (Q events, streaks and vortex clusters) at different wall-normal locations, suggesting that this new framework provides a more comprehensive way to study turbulence.
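As an illustration of the workflow described above, the following is a minimal sketch (not the authors' code) of how per-grid-point importances could be obtained with deep SHAP and then processed with a percolation-style threshold sweep to extract coherent regions. The predictor checkpoint, the snapshot files and the exact reduction of the attributions are placeholders and depend on the model at hand.

```python
# Minimal sketch: per-grid-point importance with deep SHAP, followed by a
# percolation-style threshold sweep to identify coherent high-importance regions.
# File names and the predictor architecture are hypothetical placeholders.
import numpy as np
import shap
import tensorflow as tf
from scipy import ndimage

# Hypothetical CNN that maps a velocity field at time t to the field at t + dt.
model = tf.keras.models.load_model("channel_predictor.h5")   # assumed checkpoint
X_background = np.load("background_snapshots.npy")            # reference set for SHAP
X_test = np.load("test_snapshots.npy")                         # snapshots to explain

# Deep SHAP attributes the prediction to every input grid point.
explainer = shap.DeepExplainer(model, X_background)
shap_values = np.asarray(explainer.shap_values(X_test))

# Aggregate absolute attributions into one importance value per grid point;
# the exact reduction axes depend on the shape of the model output.
importance = np.mean(np.abs(shap_values), axis=(0, 1))

# Percolation-style analysis: sweep the threshold and monitor the size of the
# largest connected region relative to the total identified volume.
for alpha in np.linspace(0.1, 5.0, 50):
    mask = importance > alpha * importance.mean()
    labels, n_regions = ndimage.label(mask)
    if n_regions == 0:
        continue
    sizes = ndimage.sum(mask, labels, index=range(1, n_regions + 1))
    ratio = sizes.max() / sizes.sum()   # drops sharply at the percolation crossover
    print(f"alpha={alpha:.2f}  regions={n_regions}  largest/total={ratio:.3f}")
```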
We also discuss the application of deep reinforcement learning (DRL) to the discovery of active-flow-control strategies for turbulent flows, including turbulent channels, three-dimensional cylinders and turbulent separation bubbles. In all cases, the DRL-discovered strategies significantly outperform classical flow-control approaches. We conclude that DRL has great potential for drag reduction in a wide range of complex turbulent-flow configurations.
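To make the DRL setup concrete, here is a minimal sketch, under stated assumptions, of how a flow-control problem can be cast as a reinforcement-learning environment: wall sensors form the observation, actuation amplitudes form the action, and the reward is the relative drag reduction. The solver interface is replaced by dummy placeholders; in practice the environment would exchange data with a CFD code, and the agent (here PPO from stable-baselines3) would train over far more interactions.

```python
# Minimal sketch of a Gymnasium-style flow-control environment trained with PPO.
# The flow-solver coupling is replaced by placeholder arithmetic.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class FlowControlEnv(gym.Env):
    """Actions set blowing/suction amplitudes; reward is the relative drag reduction."""

    def __init__(self, n_sensors=64, n_actuators=8):
        super().__init__()
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(n_sensors,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_actuators,), dtype=np.float32)
        self.baseline_drag = 1.0   # placeholder uncontrolled reference
        self.rng = np.random.default_rng()

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        obs = np.zeros(self.observation_space.shape, dtype=np.float32)  # placeholder: solver restart
        return obs, {}

    def step(self, action):
        # Placeholder for advancing the flow solver with the chosen actuation.
        obs = self.rng.standard_normal(self.observation_space.shape).astype(np.float32)
        drag = self.baseline_drag - 0.01 * float(np.linalg.norm(action))  # dummy response
        reward = (self.baseline_drag - drag) / self.baseline_drag         # relative drag reduction
        return obs, reward, False, False, {}

env = FlowControlEnv()
agent = PPO("MlpPolicy", env, verbose=0)
agent.learn(total_timesteps=10_000)  # far fewer steps than a real training run
```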