Applied and Computational Mathematics Seminar

Department of Mathematics and Statistics

Fall 2023 Schedule
Parker 328, Friday 2:00 pm - 3:00 pm (CST)

For any questions or requests, please contact Yimin Zhong (yzz0225@auburn.edu)

 

Speaker              Institution                   Date
Cao Kha Doan         Auburn University             September 1
Yimin Zhong          Auburn University             September 8
Yuming Paul Zhang    Auburn University             September 15
Lu Zhang             Rice University               September 22
Habib N. Najm        Sandia National Lab           October 6 (11:00 AM)
Nick Dexter          Florida State University      October 6
Xiaojing Ye          Georgia State University      October 27
Rentian Hu           University of Notre Dame      November 3
Bao Wang             The University of Utah        November 10 (Zoom)

  

Cao Kha Doan


Date and time: Sept 1 at 2:00 pm (Parker 328)

Title: Low regularity integrators for the classical and conservative Allen-Cahn equations with maximum bound principles

Abstract: This talk is concerned with conditionally structure-preserving, low regularity time integration methods for both the classical and conservative Allen-Cahn equations. Important properties of such equations include the maximum bound principle (MBP) and the energy dissipation law. By iteratively applying Duhamel's formula, first- and second-order low regularity integrators (LRIs) are constructed for time discretization. The proposed LRI schemes are proved to preserve the MBP and energy stability, and are shown to conserve mass for the conservative Allen-Cahn equation. Furthermore, temporal error estimates are derived under a low regularity requirement that the exact solution is only continuous in time. Numerical results show that the proposed LRI schemes are more accurate and have better convergence rates than exponential time differencing schemes, especially as the interfacial parameter approaches zero.
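
As a hedged sketch of the starting point (the notation here is generic, not necessarily the speaker's): for the classical Allen-Cahn equation with Laplacian \Delta, interfacial parameter \varepsilon, and nonlinearity f, Duhamel's formula over one time step \tau gives

\[
u(t_n+\tau) \;=\; e^{\varepsilon^{2}\tau\Delta}\,u(t_n) \;+\; \int_{0}^{\tau} e^{\varepsilon^{2}(\tau-s)\Delta}\, f\big(u(t_n+s)\big)\, ds ,
\]

and the first- and second-order low regularity integrators are built by iterating this representation and approximating the integral under minimal smoothness assumptions on u; the exact schemes and their MBP and energy analyses are those presented in the talk.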


Yimin Zhong


Date and time: Sept 8 at 2:00 pm (Parker 328)

Title: Implicit boundary integral method for the linearized Poisson-Boltzmann equation

Abstract: In this talk, I will give an introduction to the so-called implicit boundary integral method, which is based on the co-area formula and provides a simple quadrature rule for boundary integrals on general surfaces. I will then focus on the application to the linearized Poisson-Boltzmann equation, which is used to model the electric potential of protein molecules in a solvent. Near the singularity, I will briefly discuss the choices of regularization and correction and illustrate the effect of each. Finally, I will present a numerical error estimate based on tools from harmonic analysis.
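
A hedged illustration of the identity underlying the method (notation mine): for a closed surface \Gamma given as the zero level set of the signed distance function d, the co-area formula lets a surface integral be rewritten as a volume integral over a thin tubular neighborhood,

\[
\int_{\Gamma} g(y)\, dS(y) \;=\; \int_{\{|d(x)|<\eta\}} g\big(x - d(x)\nabla d(x)\big)\, J(x)\, \delta_{\eta}\big(d(x)\big)\, dx ,
\]

where x - d(x)\nabla d(x) is the closest-point projection onto \Gamma, J is the Jacobian relating area elements on nearby level sets, and \delta_{\eta} is a normalized averaging kernel supported in (-\eta,\eta). Discretizing the right-hand side on a regular grid gives the simple quadrature rule mentioned in the abstract; the specific kernel, Jacobian, and near-singularity correction used in the talk may differ.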


Yuming Paul Zhang


Date and time: Sept 15 at 2:00 pm (Parker 328)

Title: Exploratory HJB Equations and Their Convergence

Abstract: We study the exploratory Hamilton-Jacobi-Bellman (HJB) equation arising from the entropy-regularized exploratory control problem, which was formulated by Wang, Zariphopoulou and Zhou (J. Mach. Learn. Res., 21, 2020) in the context of reinforcement learning in continuous time and space. We establish the well-posedness and regularity of viscosity solutions to the equation, and derive an explicit rate of convergence as exploration diminishes to zero. If time permits, I will also discuss the analysis of the policy iteration algorithm used to study the control problem. These are joint works with Xunyu Zhou, Hung Tran and Wenpin Tang.
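
For orientation only (generic notation, not necessarily that of the talk): in the entropy-regularized exploratory formulation, the control is relaxed to a probability density \pi_t over the action set A, and the value function takes the form

\[
V(x) \;=\; \sup_{\pi}\; \mathbb{E}\!\left[\int_{0}^{\infty} e^{-\rho t}\Big(\int_{A} r(X_t,a)\,\pi_t(a)\,da \;+\; \lambda\,\mathcal{H}(\pi_t)\Big)\,dt \;\Big|\; X_0 = x\right],
\qquad \mathcal{H}(\pi) = -\int_{A}\pi(a)\ln\pi(a)\,da ,
\]

where \lambda > 0 is the exploration (temperature) parameter. The exploratory HJB equation is the dynamic programming equation of this problem, and the convergence result in the talk concerns the limit as \lambda tends to zero.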


Lu Zhang


Date and time: Sept 22 at 2:00 pm (Parker 328)

Title: Coupling physics-deep learning inversion

Abstract: In recent years, there has been increasing interest in applying deep learning to geophysical/medical data inversion. However, the direct application of end-to-end data-driven approaches to inversion has quickly shown limitations in practical implementation. Indeed, due to the lack of prior knowledge of the objects of interest, the trained deep-learning neural networks often have limited generalization. In this talk, we introduce a new methodology that couples model-based inversion algorithms with deep learning for two typical types of inversion problems. In the first part, we present an offline-online computational strategy that couples classical least-squares-based computational inversion with modern deep learning-based approaches for full waveform inversion, to achieve advantages that cannot be achieved with either component alone. In the second part, we present an integrated data-driven and model-based iterative reconstruction framework for joint inversion problems. The proposed method couples the supplementary data with the partial differential equation model to make the data-driven modeling process consistent with the model-based reconstruction procedure. We also characterize the impact of learning uncertainty on the joint inversion results for one typical inverse problem.
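
A schematic of the model-based component only (the coupling details are the talk's, not reproduced here): classical full waveform inversion seeks a model m by solving a PDE-constrained least-squares problem of the form

\[
\min_{m}\; \tfrac{1}{2}\,\| F(m) - d \|_{2}^{2} \;+\; R(m),
\]

where F is the forward wave-propagation operator, d the observed data, and R a regularizer. In a coupled physics-deep learning strategy of the kind described, a trained network informs part of this problem offline (for instance the regularizer or the starting model), while the online stage keeps the model-based least-squares iteration; the precise offline-online split is specified in the talk.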

Habib N. Najm


Date and time: Oct 6 at 11:00 am (Parker 328)

Title: Approximate Bayesian Computation for Model Calibration Given Summary Statistics

Abstract:  It is often the case in Bayesian parameter estimation that one has to contend with summary statistics on functions of the data on model observables, rather than having access to the data itself. For example, one may have access only to marginal moments on some quantity estimated from the data, but not the original data. In this setting, the challenge is to estimate a posterior density on model parameters given constraints on derived quantities. We have used maximum entropy and approximate Bayesian computation methods in this context to sample the joint space of data and parameters, accepting data sets consistent with available statistics, and employing opinion pooling methods to arrive at a pooled posterior on quantities of interest. We have applied this approach in multiple contexts, invoking approximations where necessary to tackle problem complexity. This talk will explore this landscape, and will highlight effective use of this construction in a recent study where we used summary information, in the form of nominal values and error bars, from multiple legacy experimental data sets, to arrive at a posterior on model parameters.
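
A hedged caricature of the construction (my notation): given only summary statistics s*, such as nominal values with error bars, an approximate Bayesian computation step draws pairs (\theta, y) from the prior and the data model and retains those whose summaries match,

\[
(\theta, y) \sim p(y \mid \theta)\, p(\theta) \quad \text{accepted if} \quad \rho\big(S(y),\, s^{*}\big) \le \epsilon ,
\]

and posteriors obtained from the different accepted data sets can then be combined by an opinion pooling rule, e.g. the logarithmic pool p_pool(\theta) \propto \prod_i p_i(\theta)^{w_i} with nonnegative weights summing to one. The maximum entropy step and the specific pooling choices used in the speaker's work are not reproduced here.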

 

Nick Dexter


Date and time: Oct 6 at 2:00 pm (Parker 328)

Title: Learning High-Dimensional Banach-Valued Functions from Limited Data with Deep Neural Networks

Abstract: Reconstructing high-dimensional functions from few samples is important for uncertainty quantification in computational science. Deep learning has achieved impressive results in parameterized PDE problems with solutions in Hilbert or Banach spaces. This work proposes a novel algorithmic approach combining deep learning, compressed sensing, orthogonal polynomials, and finite elements to approximate smooth functions in infinite-dimensional Banach spaces. Theoretical analysis provides explicit guarantees on error and sample complexity, and numerical experiments demonstrate accurate approximations on challenging benchmark problems.
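
For context, a standard setup of this problem (hedged as to the talk's exact formulation): the target is a smooth map u from a high- or infinite-dimensional parameter domain into a Banach space V, admitting an expansion in tensorized orthogonal polynomials,

\[
u(y) \;=\; \sum_{\nu \in \mathcal{F}} c_{\nu}\, \Psi_{\nu}(y), \qquad c_{\nu} \in V ,
\]

where the coefficients c_{\nu} are themselves functions (discretized by finite elements) and only a small, a priori unknown set of them is significant. Compressed sensing recovers these coefficients from few samples, and the deep network is trained to emulate the resulting reconstruction map; the architecture and the error and sample-complexity guarantees are those of the talk.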

Xiaojing Ye 


Date and time: Oct 27 at 2:00 pm (Parker 328)

Title:  Neural control approach to approximate solution operators of evolution PDEs

Abstract: We introduce a novel computational framework to approximate solution operators of evolution partial differential equations (PDEs). For a given evolution PDE, we parameterize its solution using a nonlinear function, such as a deep neural network. The problem of approximating the solution operator can then be reformulated as a control problem in the parameter space of the network: from any initial value, the control field steers the parameters along a trajectory such that the corresponding network solves the PDE. This allows the evolution PDE to be solved with arbitrary initial conditions at substantially reduced computational cost. We also develop a comprehensive error analysis for the proposed method when solving a large class of semilinear parabolic PDEs. Numerical experiments on different high-dimensional evolution PDEs with various initial conditions demonstrate the promising performance of the proposed method.
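
A minimal sketch of the idea in my own notation (hedged): let u_\theta denote the network-parameterized ansatz and write the evolution PDE as \partial_t u = \mathcal{F}[u]. One seeks a control (vector) field V on the parameter space such that parameters driven by

\[
\frac{d\theta}{dt} = V(\theta), \qquad \text{so that} \qquad \partial_t\, u_{\theta(t)} \;=\; \nabla_{\theta} u_{\theta(t)} \cdot V(\theta(t)) \;\approx\; \mathcal{F}\big[u_{\theta(t)}\big],
\]

produce trajectories of network solutions for any admissible initial parameter. Once V is learned, solving the PDE for a new initial condition only requires integrating this ODE in parameter space; the training objective and the error analysis for semilinear parabolic PDEs are those developed in the talk.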

 

Rentian Hu


Date and time: Nov 3 at 2:00 pm (Parker 328)

Title:  High order absolutely convergent fast sweeping methods with multi-resolution WENO local solvers for Eikonal and factored Eikonal equations

Abstract: Motivated by recent work on absolutely convergent fast sweeping methods for steady-state solutions of hyperbolic conservation laws, in this talk we will discuss high order fast sweeping methods with multi-resolution weighted essentially non-oscillatory (WENO) local solvers for solving Eikonal equations, an important class of static Hamilton-Jacobi equations. Based on such multi-resolution WENO local solvers with unequal-sized sub-stencils, the iteration residues of the designed high order fast sweeping methods settle down to round-off errors and achieve absolute convergence. In order to obtain high order accuracy for problems with a singular source point, we will also discuss the factored Eikonal approach developed in the literature and solve the resulting factored Eikonal equations with the new high order WENO fast sweeping methods. We will show the accuracy, computational efficiency, and advantages of the new high order fast sweeping schemes for solving static Hamilton-Jacobi equations.
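
For reference, the standard forms involved (hedged as to the talk's exact setting): the Eikonal equation with speed function f on a domain \Omega reads

\[
|\nabla T(x)| \;=\; \frac{1}{f(x)} \quad \text{in } \Omega, \qquad T(x_0) = 0 \ \text{at the source point } x_0 ,
\]

and the factored approach writes T(x) = T_0(x)\,\tau(x), where T_0 is a known solution (e.g. for constant speed) capturing the source singularity, so that the smooth factor \tau solves the resulting factored Eikonal equation and can be computed to high order with the same fast sweeping/WENO machinery.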

 

Bao Wang


Date and time: Nov 10 at 2:00 pm (Zoom)

Title: Equivariant Generative Models for Molecular Modeling

Abstract: Molecular modeling tasks exhibit different symmetries, e.g., roto-translation equivariance and periodicity. A grand challenge in machine learning-assisted molecular modeling, such as molecule generation, is to account for these inherent symmetries. In this talk, I will discuss a few issues in building and training stable and expressive equivariant generative models, including normalizing flows and diffusion models, for molecule generation. Furthermore, I will discuss the role of steerable features of different types in equivariant machine learning.
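
As a brief reminder of the properties in question (generic statement, not specific to the speaker's models): for a symmetry group G, e.g. the roto-translations SE(3), one typically asks the learned map f (a flow or a denoising step) to be equivariant and the resulting model density p to be invariant,

\[
f(g \cdot x) = g \cdot f(x), \qquad p(g \cdot x) = p(x), \qquad \text{for all } g \in G ,
\]

so that generated molecular configurations are statistically unchanged under rotations and translations. Steerable features are intermediate representations that transform according to prescribed group representations; their role in such generative models is what the talk discusses.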