Department of Mathematical Sciences
850 West Dickson Street, Room 309
University of Arkansas
Fayetteville, AR 72701
P: 479-575-3351
F: 479-575-8630
Email: math@uark.edu
Hybrid Conference: 47th Annual Spring Lecture Series
Numerical Linear Algebra: from Scientific Computing to Data Science Applications
May 4-6, 2022 (7:30 am - 6:00 pm CST)
Principal Speaker: Yousef Saad
Distinguished Professor in the Department of Computer Science and Engineering at the University of Minnesota
Public Lecture: Challenges in Smart Patient Monitoring: from Raw Data to Decision Support
May 4, 2022 (6:00 pm CST)
Public Lecturer: Sabine Van Huffel
Professor Emerita in the Department of Electrical Engineering at KU Leuven
Women in STEM Panel
May 6, 2022 (1:00 pm CST)
Association for Women in Mathematics (AWM)
Invited Speakers
Mark Arnold (University of Arkansas, USA)
Erin Claire Carson (Charles University, Czech Republic)
Jie Chen (IBM Research, USA)
Alice Cortinovis (EPFL, Switzerland)
Tim Davis (Texas A&M University, USA)
Agnieszka Międlar (University of Kansas, USA)
Rachel Minster (Wake Forest, USA)
James Nagy (Emory University, USA)
Sara Pollock (University of Florida, USA)
Qiang Ye (University of Kentucky, USA)
Organizer
Tulin Kaman (tkaman@uark.edu)
Assistant Professor, Lawrence Jesser Toll Jr. Endowed Chair, Department of Mathematical Sciences, University of Arkansas
Registration
Schedule of Talks
Conference Location: Donald W. Reynolds Center Auditorium
Wednesday, May 4th
7:15am  Registration & Coffee/Tea
7:45am  Opening Remarks
8:00am  Yousef Saad Lecture #1
Break
9:30am  Tim Davis
Break
10:45am Erin Claire Carson
Lunch
1:00pm  Yousef Saad Lecture #2
Break
2:30pm  James Nagy
Break
3:45pm  Alice Cortinovis
4:45pm  Poster Session w/ Reception
6:00pm  Sabine Van Huffel Public Lecture

Thursday, May 5th
7:15am  Coffee/Tea
8:00am  Yousef Saad Lecture #3
Break
9:30am  Sara Pollock
Break
10:45am Agnieszka Międlar
Lunch
1:00pm  Yousef Saad Lecture #4
Break
2:30pm  Jie Chen
Break
3:45pm  Qiang Ye
4:45pm  Boarding Shuttle from Reynolds Center to Crystal Bridges
6:00pm  Banquet @ Crystal Bridges

Friday, May 6th
7:15am  Coffee/Tea
8:00am  Yousef Saad Lecture #5
Break
9:30am  Rachel Minster
Break
10:45am Mark Arnold
Lunch
1:00pm  Women in STEM Panel
2:15pm  Contributed Talks Session
Break
3:45pm  Closing Remarks
Abstracts
Numerical Linear Algebra: from Scientific Computing to Data Science Applications
Principal Lecturer: Yousef Saad (Distinguished Professor in the Department of Computer Science and Engineering at the University of Minnesota)
Abstract
Numerical linear algebra is at the core of virtually every field of science and engineering, whether in solving linear systems that arise from simulations of physical phenomena, or in obtaining various solutions of optimization problems in data-related applications. As the world around us is progressively being analyzed or modeled with the help of available data, the types of computational problems encountered are changing, and as a result the field is currently undergoing a deep transformation. This set of five lectures presents an overview of the methodologies used in both the scientific computing and the data science disciplines. In an effort to build a bridge between these two disciplines, one goal of the lectures is to introduce data science techniques to non-specialists in scientific computing.
Summary of Lectures
Numerical linear algebra is at the core of virtually every field of science and engineering, whether in solving linear systems that arise from simulations of physical phenomena, or in obtaining various solutions of optimization problems in data-related applications. As the world around us is progressively being analyzed or modeled with the help of available data, the types of computational problems encountered are changing, and as a result the field is currently undergoing a deep transformation. For the specialist in numerical methods this means that existing methods must be adapted to the new demands and that new ones must be developed to cope with the emerging paradigm.
This set of five lectures will present the fundamental ideas of numerical linear algebra with an emphasis on those methods that have the potential to play an important role in data science, e.g., eigenvalue and singular value problems, sparse matrix techniques, graph-based methods, Krylov subspaces, and preconditioning. Part of the lectures will be devoted to specific problems of data science, covering applications of graph-based methods, randomization methods, network analysis, dimension reduction, and neural networks, among other themes. The lectures will frequently be illustrated by live Matlab (or Python) demonstrations.
Lecture #1
The first lecture will begin with a brief historical perspective in an attempt to explain what drives innovation in matrix computations. A major problem in the era of Gauss (early 1800s) was to solve linear systems related to the normal equations for the purpose of calculating orbits of celestial objects. The first part of the 1900s saw an increased interest in solving discretized partial differential equations for various simulations, e.g., weather forecasting. Later, in the 1950s at the height of the cold war, the eigenvalue problem generated a wave of interest in the linear algebra community, as it was key to solving the 'flutter' problem in aircraft. There is no doubt that the emergence of data-related applications, and machine learning in particular, is ushering in a new era, initiating a turning point of a magnitude similar to those of the past. This first lecture will also provide a synopsis of problems encountered in matrix computations, ranging from those seen in optimization and control to those related to solving PDEs. Finally, the lecture will cover some background in matrix theory and in sparse matrix techniques.
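The normal-equations approach from Gauss's era can be contrasted with the modern QR-based route in a few lines of NumPy (a hedged sketch; the quadratic model, data, and noise level are illustrative choices, not from the lecture):

```python
import numpy as np

# Fit a quadratic to noisy observations: the classical normal-equations
# approach that dates back to Gauss's orbit calculations.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
A = np.vander(t, 3)                      # columns t^2, t, 1
x_true = np.array([2.0, -1.0, 0.5])
b = A @ x_true + 1e-3 * rng.standard_normal(50)

# Normal equations: A^T A x = A^T b (simple, but squares the condition number)
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# QR-based least squares (the numerically preferred route)
x_qr, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x_ne, x_qr)
```

For this well-conditioned problem the two solutions agree; for ill-conditioned design matrices the squared condition number of the normal equations becomes the dominant concern.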
Lecture #2
The second lecture will present projection methods and Krylov subspace techniques. Here it is important to take a relatively formal view, because the idea of a projection method is both powerful and widespread. Thus, the Galerkin approach or the Rayleigh-Ritz projection technique can be applied in finite- as well as infinite-dimensional spaces, to spaces of polynomials, vectors, or classes of functions. Among these, Krylov subspace methods occupy a distinct place in numerical linear algebra. They will be described for linear systems as well as for eigenvalue problems, and briefly for nonlinear systems of equations.
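As an illustration of the Krylov subspace ideas above, here is a minimal Arnoldi iteration in NumPy (a sketch for a generic dense matrix; the function and variable names are our own):

```python
import numpy as np

def arnoldi(A, v0, m):
    """Arnoldi process: build an orthonormal basis V of the Krylov subspace
    K_m(A, v0) and the (m+1) x m upper Hessenberg H with A V_m = V_{m+1} H."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt step
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # lucky breakdown
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
V, H = arnoldi(A, rng.standard_normal(50), 10)
# Verify the Arnoldi relation A V_m = V_{m+1} H
print(np.linalg.norm(A @ V[:, :10] - V @ H))
```

The small Hessenberg matrix H is the projected operator from which GMRES (for linear systems) or the Arnoldi eigenvalue method extract approximate solutions.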
Lecture #3
The third lecture will cover sparse matrix techniques and graphs. A sparse matrix is a matrix whose entries are mostly zeros. Such matrices play a major role in applications ranging from discretized partial differential equations to the analysis of text data. This lecture will introduce sparse matrices, discuss how they are stored, and describe their relations with graphs. Graph representations play a key role in understanding certain types of algorithms for sparse matrices. In particular, we will see how graphs can be used to represent data by means of nearest-neighbor techniques or via kernels. This lecture will also discuss sparse direct methods for solving linear systems, preconditioning techniques, and domain decomposition ideas.
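For instance, the standard compressed sparse row (CSR) storage and the matrix's adjacency graph can be inspected directly in SciPy (a small illustrative example, not from the lecture):

```python
import numpy as np
import scipy.sparse as sp

# A small sparse matrix stored in CSR format: only the nonzero values,
# their column indices, and per-row pointers are kept.
A = sp.csr_matrix(np.array([[4.0, 0.0, 1.0],
                            [0.0, 3.0, 0.0],
                            [1.0, 0.0, 2.0]]))
print(A.data)      # nonzero values
print(A.indices)   # column index of each nonzero
print(A.indptr)    # row pointers into data/indices

# The adjacency graph of the sparsity pattern: vertex i connects to
# vertex j whenever A[i, j] != 0 (diagonal entries excluded)
rows, cols = A.nonzero()
edges = [(i, j) for i, j in zip(rows, cols) if i != j]
print(edges)
```

This graph view is exactly what fill-reducing orderings for sparse direct methods and graph partitioners for domain decomposition operate on.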
Lecture #4
The fourth lecture will continue with graphs and discuss methods for graph and network analysis. We will describe the intimate relation between graphs and data, the graph representation of data, and similarity graphs. Graph Laplacians play a major role in various applications, so we give an overview of their properties and then see how they have been used in applications such as clustering and in various measures of network analysis. We will also discuss graph partitioning and coarsening, as these are rather common in scientific computing. A key ingredient utilized in both scientific computing and data science is graph coarsening: given a large graph, the goal is to find a graph of much smaller size that is a faithful representative of the original graph. When analyzing networks, various measures of 'communicability' are used. We will take a particular look at the Estrada index; this topic has generated great interest in the problem of estimating bilinear forms. We will also provide an overview of numerical linear algebra and Krylov subspace methods for network analysis.
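The role of the graph Laplacian in clustering can be seen in a few lines: for a toy graph of two triangles joined by a weak edge, the sign pattern of the Fiedler vector recovers the two clusters (an illustrative sketch with made-up weights):

```python
import numpy as np

# Graph Laplacian L = D - W for a tiny graph made of two triangles
# joined by one weak edge; the Fiedler vector (eigenvector of the
# second-smallest eigenvalue) separates the two clusters.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1                    # weak bridge between clusters
L = np.diag(W.sum(axis=1)) - W

vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]                       # second-smallest eigenpair
labels = (fiedler > 0).astype(int)
print(labels)   # vertices {0,1,2} land on one side, {3,4,5} on the other
```

The same Laplacian spectrum underlies graph partitioning for parallel scientific computing and spectral clustering in data analysis.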
Lecture #5
Lecture 5 will take us to 'dimension reduction methods', an area that straddles numerical linear algebra, machine learning, and statistical analysis. An important application of dimension reduction is graph embedding, which consists of representing graphs with vectors. This lecture will also discuss projection methods from the angle of dimension reduction, and will review randomization and sketching as well as graph-based methods. Other topics to be covered include linear discriminant analysis, support vector machines, and applications such as segmentation, face recognition, and digit recognition. Finally, we will give an overview of neural networks: basics of neural networks, deep learning, convolutional neural networks, the back-propagation problem, graph neural networks and the use of embeddings, and numerical methods such as stochastic gradient descent.
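A minimal example of dimension reduction via the SVD (principal component analysis) on synthetic data with hidden low-dimensional structure; the data generation is our own illustration:

```python
import numpy as np

# Dimension reduction by PCA: project centered data onto the top-k
# right singular vectors of the data matrix.
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])
X = np.hstack([X, 0.01 * rng.standard_normal((200, 8))])  # 10-D data, ~2-D structure

Xc = X - X.mean(axis=0)                   # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T                         # 2-D embedding of the 10-D data

# Fraction of variance captured by the top two singular values
print((s[:2] ** 2).sum() / (s ** 2).sum())
```

The same projection viewpoint carries over to the graph embeddings and sketching methods mentioned above, where the subspace comes from a graph Laplacian or a random sketch rather than the data covariance.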
Wednesday, May 4, 2022
8:00am  Yousef Saad #1
9:30am  Tim Davis
10:45am Erin Claire Carson
1:00pm  Yousef Saad #2
2:30pm  James Nagy
3:45pm  Alice Cortinovis
4:45pm  Poster Session
6:00pm  Public Lecture
SuiteSparse:GraphBLAS: Graph Algorithms in the Language of Sparse Linear Algebra
Tim Davis (Texas A&M University, USA)
SuiteSparse:GraphBLAS is a full implementation of the GraphBLAS standard, which defines a set of sparse matrix operations on an extended algebra of semirings using an almost unlimited variety of operators and types. When applied to sparse adjacency matrices, these algebraic operations are equivalent to computations on graphs. GraphBLAS provides a powerful and expressive framework for creating graph algorithms based on the elegant mathematics of sparse matrix operations on a semiring. Key features and performance of the SuiteSparse:GraphBLAS package are described. The implementation appears in Linux distros, forms the basis of the RedisGraph module of Redis (a commercial graph database system), and appears as C=A*B in MATLAB. Graph algorithms written in GraphBLAS can rival the performance of highly tuned specialized kernels, while being far simpler for the end user to write.
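The "graph algorithms in the language of sparse linear algebra" idea can be sketched without GraphBLAS itself: in plain SciPy, one breadth-first-search level is a sparse matrix-vector product over the boolean (OR, AND) semiring (this is an illustrative analogue, not the SuiteSparse:GraphBLAS API):

```python
import numpy as np
import scipy.sparse as sp

# BFS as repeated sparse matrix-vector products over the boolean semiring.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
n = 5
rows, cols = zip(*edges)
A = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))

frontier = np.zeros(n, dtype=bool)
frontier[0] = True                         # start BFS at vertex 0
visited = frontier.copy()
level = {0: 0}
d = 0
while frontier.any():
    d += 1
    # A^T @ frontier over (OR, AND): vertices reachable in one step,
    # masked by the set of already-visited vertices
    frontier = (A.T @ frontier).astype(bool) & ~visited
    visited |= frontier
    for v in np.flatnonzero(frontier):
        level[v] = d
print(level)   # BFS level of each vertex from the source
```

GraphBLAS generalizes exactly this pattern: swap the semiring (e.g., min-plus for shortest paths) and the same matrix-vector skeleton computes a different graph algorithm.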
Exploiting Mixed Precision in Numerical Linear Algebra
Erin Claire Carson (Charles University, Czech Republic)
Support for floating point arithmetic in multiple precisions is becoming increasingly common in emerging architectures. Mixed precision capabilities are already included in many machines on the TOP500 list and are expected to be a crucial hardware feature in coming exascale machines. From a computational scientist's perspective, our goal is to determine how and where we can exploit mixed precision computation in our codes. This requires both an understanding of performance characteristics and an understanding of the numerical behavior of algorithms in finite precision arithmetic. After introducing floating point computation, mixed precision hardware, and current work in mixed precision numerical linear algebra, we present examples that demonstrate what can go wrong if we use low precision blindly. This motivates the need for rigorous rounding error analysis in algorithms used in scientific computing and data science applications. Understanding the behavior of algorithms in finite precision is necessary not only for illuminating potential dangers, but also for revealing opportunities. As an example of where rounding error analysis can lead to new insights and improved algorithms, we present a general algorithm for solving linear systems based on mixed-precision iterative refinement. From this, we develop a mixed-precision GMRES-based iterative refinement scheme that works for even ill-conditioned systems. We discuss performance results on modern GPU architectures and the HPL-AI benchmark, which is based on these mixed precision iterative refinement algorithms. The world's top supercomputers already exceed exaflop performance on HPL-AI, achieving over 4x higher performance than on the standard HPL benchmark.
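The flavor of mixed-precision iterative refinement can be shown in NumPy (a minimal sketch on a well-conditioned test matrix; a production code would reuse one low-precision LU factorization instead of calling solve repeatedly):

```python
import numpy as np

# Mixed-precision iterative refinement: solve in float32, then compute
# residuals and corrections in float64 to recover double-precision accuracy.
rng = np.random.default_rng(3)
n = 100
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
for _ in range(3):
    r = b - A @ x                                  # residual in double precision
    d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += d                                         # low-precision correction
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Each refinement step contracts the error by roughly the low-precision unit roundoff times the condition number, so a few steps push the relative residual down to double-precision level; the GMRES-based variant in the talk extends this to far worse-conditioned systems.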
Flexible Krylov Subspace Regularization for Inverse Problems
James Nagy (Emory University, USA)
Inverse problems arise in a variety of applications: image processing, machine learning, finance, mathematical biology, and more. Mathematical models for these applications may involve integral equations, partial differential equations, and dynamical systems, and solution schemes are formulated by applying algorithms that incorporate regularization techniques and/or statistical approaches. In most cases these solution schemes involve solving a large-scale ill-conditioned linear system that is corrupted by noise and other errors. In this talk we describe Krylov subspace-based regularization approaches to solve these linear systems that exploit direct matrix factorization methods on small subproblems to choose regularization parameters, and variable preconditioning to enforce certain regularization, such as sparsity and low rank. The methods are very efficient for large-scale inverse problems, they have the advantage that various regularization approaches can be used, and they can also incorporate methods to automatically estimate regularization parameters.
Randomized Trace Estimation and Determinants
Alice Cortinovis (EPFL, Switzerland)
Computing the determinant of a large-scale symmetric positive definite matrix A is a task that arises in many applications, for example in the training of a Gaussian process regression model. When the matrix is sufficiently small, its determinant can be computed via a Cholesky decomposition of A. When A is large and this approach is too costly, one can still approximate the determinant by estimating the trace of a suitable matrix, namely the matrix logarithm log(A) (since log det A = tr log A), using randomized algorithms. In this lecture we consider the Hutchinson trace estimator, which obtains an approximation to the trace of a matrix B by averaging quadratic forms involving B and random vectors following a suitable distribution. In the context of determinants, the advantage of this approach is that quadratic forms involving log(A) can be approximated efficiently using the Lanczos method. We discuss convergence bounds for the Hutchinson trace estimator, focusing on the case in which B is a symmetric but indefinite matrix, and apply the bounds to the approximation of the determinant.
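The Hutchinson estimator itself fits in a few lines (a sketch with Rademacher probe vectors on a symmetric indefinite test matrix; the sample count is an arbitrary illustrative choice):

```python
import numpy as np

# Hutchinson's estimator: tr(B) is approximated by the mean of z^T B z
# over random sign (Rademacher) vectors z, since E[z^T B z] = tr(B).
rng = np.random.default_rng(5)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = Q @ np.diag(rng.uniform(-1, 1, n)) @ Q.T       # symmetric, indefinite

num_samples = 2000
est = 0.0
for _ in range(num_samples):
    z = rng.choice([-1.0, 1.0], size=n)            # Rademacher probe vector
    est += z @ B @ z
est /= num_samples

print(est, np.trace(B))
```

In the determinant application, B = log(A) is never formed: each quadratic form z^T log(A) z is approximated by a short Lanczos recurrence with A, as described in the abstract.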
Thursday, May 5, 2022
8:00am  Yousef Saad #3
9:30am  Sara Pollock
10:45am Agnieszka Międlar
1:00pm  Yousef Saad #4
2:30pm  Jie Chen
3:45pm  Qiang Ye
5:00pm  Excursion @ Crystal Bridges
6:00pm  Banquet @ Crystal Bridges
Filtered Anderson Acceleration for Nonlinear PDEs
Sara Pollock (University of Florida, USA)
Anderson acceleration (AA) is a popular extrapolation technique used to accelerate the convergence of fixed-point iterations. It requires the storage of a (usually) small number of solution and update vectors, and the solution of an optimization problem that is generally posed as least squares and solved efficiently by a thin QR decomposition. First developed in 1965 in the context of integral equations, this method has recently been increasing in popularity as a Jacobian-free approach to converging to discrete nonlinear PDE solutions, not to mention applications in optimization and machine learning. The convergence behavior of AA is still not fully understood, and its dependence on the selection of parameters, including the algorithmic depth, remains an active field of research. In this talk we will discuss understanding and improving the behavior of the algorithm using standard tools and techniques from numerical linear algebra. We will also numerically demonstrate how the filtering and dynamic depth selection procedures suggested by the recent theory can be used to improve both the efficiency and robustness of accelerated solves.
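A compact NumPy sketch of depth-m Anderson acceleration in its unconstrained least-squares form (the variable names and the cosine test problem are our own; production implementations update a thin QR factorization rather than calling a dense least-squares solver each step):

```python
import numpy as np

def anderson(g, x0, m=3, iters=50, tol=1e-10):
    """Anderson acceleration for the fixed-point iteration x = g(x)."""
    x = x0.copy()
    X, G = [], []                           # histories of iterates and g-values
    for _ in range(iters):
        gx = g(x)
        X.append(x); G.append(gx)
        X, G = X[-(m + 1):], G[-(m + 1):]   # keep at most m+1 history entries
        F = np.array([gi - xi for gi, xi in zip(G, X)]).T   # residual columns
        if F.shape[1] == 1:
            x_new = gx                      # plain fixed-point step at startup
        else:
            dF = F[:, 1:] - F[:, :-1]
            # least-squares combination minimizing the extrapolated residual
            gamma = np.linalg.lstsq(dF, F[:, -1], rcond=None)[0]
            dG = np.array(G[1:]).T - np.array(G[:-1]).T
            x_new = gx - dG @ gamma
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: x = cos(x) componentwise, whose fixed point is near 0.739
x = anderson(lambda v: np.cos(v), np.ones(4))
print(x)
```

The filtering and dynamic depth selection discussed in the talk act on the least-squares matrix dF above, discarding columns that degrade its conditioning.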
Challenges for Eigenvalue Computations in Breakthrough Applications
Agnieszka Międlar (University of Kansas, USA)
Many real-life problems lead to challenging PDE eigenvalue problems, e.g., vibrations of structures or the calculation of energy levels in quantum mechanics. A lot of research is devoted to the so-called Adaptive Finite Element Method (AFEM), which discretizes the governing PDE, solves the finite-dimensional algebraic eigenvalue problem, and iteratively improves the obtained numerical approximations. However, advanced approaches dedicated to solving these challenging eigenvalue problems require a unified framework bringing together: spectral and perturbation theory to derive a priori error estimators; a posteriori error analysis, which enables deriving efficient and reliable error estimators that take into account errors of different origins; and iterative solvers and model reduction techniques to efficiently solve finite-dimensional algebraic linear and nonlinear eigenvalue problems. This talk will discuss several attempts to achieve the above goal. In particular, we will discuss how Cauchy integral-based approaches offer an attractive algorithmic framework for solving interior large-scale linear and nonlinear eigenvalue problems. Finally, we will illustrate the behavior of the presented methods with several numerical examples.
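As a concrete instance of the PDE eigenvalue problems mentioned above, consider the 1-D model problem -u'' = λu on (0,1) with homogeneous Dirichlet conditions, discretized by second-order finite differences (an illustrative model, not one of the talk's applications):

```python
import numpy as np

# Discretize -u'' = lambda * u on (0,1), u(0) = u(1) = 0, with
# second-order finite differences; the discrete eigenvalues converge
# to the exact values (k*pi)^2 as the mesh is refined.
n = 200
h = 1.0 / (n + 1)
main = 2.0 * np.ones(n) / h**2
off = -1.0 * np.ones(n - 1) / h**2
T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

vals = np.sort(np.linalg.eigvalsh(T))[:3]
exact = (np.arange(1, 4) * np.pi) ** 2       # pi^2, 4 pi^2, 9 pi^2
print(vals)
print(exact)
```

For realistic geometries the matrices are large and sparse, which is where the adaptive refinement, iterative eigensolvers, and contour-integral methods of the talk come in.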
Deep Learning with Graph-Structured Data: A Mathematical Perspective
Jie Chen (IBM Research, USA)
Graphs are a mathematical abstraction as well as a structured organization of data, ubiquitously used in science and technology. In scientific computing, they model sparse matrices, so that their partitioning and coarsening are vital ingredients in the solution of large linear systems; in machine learning, graphs capture the pairwise relationships of entities and offer supplementary information for building predictive models. With the resurgence of neural networks, owing to their remarkable effectiveness in modeling the complex mapping between inputs and outputs, the question of how to inject structural information to enhance the predictive power of neural networks has recently attracted surging interest. This lecture is a brief journey through the recent rise of graph neural networks (GNNs). I will introduce GNNs as a mathematical model, illustrate their uses, describe scalability challenges in training and inference, present solutions for massive graphs, and discuss more complex but practical scenarios, including temporal evolution of graphs and the learning of a hidden graph structure. Both audiences familiarizing themselves with deep learning and those seeking a research subject will find this lecture informative.
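One common mathematical form of a GNN layer can be sketched in NumPy (the GCN-style propagation H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W) is a standard formulation in the literature; the tiny graph and random weights here are purely illustrative):

```python
import numpy as np

# One graph-convolution layer: mix each node's feature vector with its
# neighbors' via the symmetrically normalized adjacency matrix.
rng = np.random.default_rng(9)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # path graph on 4 nodes
A_hat = A + np.eye(4)                        # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt     # D^{-1/2} (A+I) D^{-1/2}

H = rng.standard_normal((4, 8))              # input node features
W = rng.standard_normal((8, 3))              # learnable weight matrix
H_next = np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation
print(H_next.shape)
```

Note that A_norm is a sparse matrix in realistic settings, so a GNN layer is itself a sparse matrix-matrix product, which is exactly where the scalability challenges mentioned above arise.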
Batch Normalization Preconditioning for Neural Network Training
Qiang Ye (University of Kentucky, USA)
Batch normalization (BN) is a ubiquitous method in deep neural network training that has been shown to decrease training time and improve generalization performance. Despite its success, BN is not theoretically well understood. It is not suitable for use with very small minibatch sizes or online learning. In this talk, we will analyze the effects of minibatch statistics of a hidden variable on the Hessian matrix of a loss function and propose a preconditioning method called Batch Normalization Preconditioning (BNP) to improve the conditioning of the Hessian. BNP implicitly uses a parameter transformation that is equivalent to normalizing the hidden variables. It reduces the condition number of the Hessian and hence accelerates convergence of training iterations. Compared with BN, one benefit is that BNP is not constrained on the minibatch size and works in the online learning setting. We will present several experiments demonstrating competitiveness of BNP. Furthermore, we will discuss a connection to BN which provides theoretical insights on how BN improves training and how BN is applied to special architectures such as convolutional neural networks. The talk is based on a joint work with Susanna Lange and Kyle Helfrich.
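The batch-normalization operation that the BNP analysis builds on can be written in a few lines (a sketch of the forward normalization only; inference-time running statistics and the training updates of the learnable γ, β are omitted, and all values are illustrative):

```python
import numpy as np

# Batch normalization of a hidden variable: normalize each feature over
# the mini-batch to zero mean and unit variance, then rescale and shift.
rng = np.random.default_rng(8)
H = rng.standard_normal((32, 16)) * 5.0 + 2.0   # mini-batch of hidden activations
gamma, beta, eps = 1.0, 0.0, 1e-5               # learnable scale/shift, stabilizer

mu = H.mean(axis=0)                             # per-feature batch mean
var = H.var(axis=0)                             # per-feature batch variance
H_bn = gamma * (H - mu) / np.sqrt(var + eps) + beta

print(np.abs(H_bn.mean(axis=0)).max())          # means driven to ~0
print(np.abs(H_bn.var(axis=0) - 1).max())       # variances driven to ~1
```

BNP's viewpoint is that this normalization acts as a change of variables that improves the conditioning of the loss Hessian, which is why it can be replaced by an explicit preconditioner when the mini-batch statistics are unreliable.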
Friday, May 6, 2022
8:00am  Yousef Saad #5
9:30am  Rachel Minster
10:45am Mark Arnold
1:00pm  Women in STEM Panel
2:15pm  Contributed Talks
3:45pm  Closing Remarks
Randomized Parallel Algorithms for Tucker Decompositions
Rachel Minster (Wake Forest, USA)
Large-scale data frequently appears in data analysis and numerical computing applications, posing challenges in storage and computational costs. These datasets often possess inherent multidimensional structure that we can exploit to compress and store them in tensor form. The Tucker decomposition is one low-rank tensor decomposition that generalizes the matrix SVD to higher dimensions. Traditional algorithms used to compute Tucker decompositions can be computationally expensive, as they involve computing the SVD of matrices with a large number of columns. In this talk, we will discuss randomized algorithms and probabilistic analysis for Tucker decompositions. We will also propose new parallel randomized algorithms that address important computational challenges. Using randomized matrix techniques, we accelerate a distributed-memory implementation of the Tucker decomposition to obtain an efficient algorithm that avoids communication. Specifically, we employ a new sketching method that exploits Kronecker structure to accelerate a key computation. We also present probabilistic analysis of the error resulting from the algorithm, as well as numerical results demonstrating the computational benefits of this approach.
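The randomized sketching idea at the heart of such algorithms, in its simplest matrix form (the randomized range finder; the sizes and oversampling amount are illustrative choices):

```python
import numpy as np

# Randomized range finder: sketch A with a random Gaussian matrix,
# orthonormalize the sketch, and compress A onto that basis.
rng = np.random.default_rng(6)
m, n, r = 300, 200, 10
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # exactly rank r

Omega = rng.standard_normal((n, r + 5))     # random sketch with oversampling 5
Q, _ = np.linalg.qr(A @ Omega)              # orthonormal basis for range(A)
B = Q.T @ A                                 # small (r+5) x n compressed matrix
A_approx = Q @ B

print(np.linalg.norm(A - A_approx) / np.linalg.norm(A))
```

In the Tucker setting the same sketch-then-orthonormalize step is applied to each mode unfolding, and the Kronecker-structured sketches mentioned in the abstract make the product A @ Omega cheap.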
Classical Gram-Schmidt without Reorthogonalization
Mark Arnold (University of Arkansas, USA)
In a wide variety of computing environments, the classical Gram-Schmidt algorithm (CGS) can be a very efficient implementation of the QR factorization of a thin (or square) matrix A. But in finite precision, CGS suffers from such loss of orthogonality among the columns of Q as to render it generally useless without reorthogonalization. The modified Gram-Schmidt algorithm (MGS), whose sequential nature slows it in many compute environments, is backward stable for many problems (Ax=b, least squares, GMRES iterations, etc.). Our main result says that if A=QR is computed by CGS, and R is column diagonally dominant, then the loss of orthogonality in Q is about the same as for the MGS factorization. This suggests some new (block) methods, of which we will discuss a few, with some experimental results.
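The loss-of-orthogonality gap between CGS and MGS is easy to observe numerically (an illustrative experiment on a synthetic matrix with condition number 10^8, not the talk's examples):

```python
import numpy as np

def gram_schmidt(A, modified):
    """QR factorization by classical (modified=False) or modified (True)
    Gram-Schmidt."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            # CGS projects against the original column A[:, j]; MGS projects
            # against the progressively updated vector v
            R[i, j] = Q[:, i] @ (v if modified else A[:, j])
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

# Ill-conditioned test matrix with singular values 1, ..., 1e-8
rng = np.random.default_rng(7)
U, _ = np.linalg.qr(rng.standard_normal((60, 60)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
A = U[:, :20] @ np.diag(np.logspace(0, -8, 20)) @ V.T

errs = {}
for modified in (False, True):
    Q, _ = gram_schmidt(A, modified)
    errs["MGS" if modified else "CGS"] = np.linalg.norm(Q.T @ Q - np.eye(20))
print(errs)
```

The classical variant loses orthogonality roughly with the square of the condition number, the modified variant only linearly, which is the gap the column-diagonal-dominance result above closes for CGS.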