The numerical solution of large nonlinear (polynomial or rational) eigenvalue problems is a crucial task in numerous technologies, such as stability and vibration analysis in classical structural mechanics, the investigation of molecular dynamics, elastic deformation of anisotropic materials, gyroscopic systems, and optical waveguide design. In particular, the trend toward extreme designs (high-speed trains, the Airbus A380, modern materials, optoelectronic devices, micro-electromechanical systems, fluid-solid structures) presents a challenge for the numerical computation of resonance frequencies of these structures [1, 28, 30, 32, 42]. These extreme designs often lead to very large and extremely ill-conditioned eigenvalue problems. On the other hand, the physics of the underlying system leads to specific algebraic structures of the nonlinear eigenvalue problem that result in symmetries in the spectrum as well as in specific properties of the corresponding eigenvectors or invariant subspaces. In numerical computations, it is essential to use algorithms that preserve these structures in order to obtain physically meaningful results.
Usually, nonlinear eigenvalue problems are solved by linearization, i.e., by embedding the system into a larger linear eigenvalue problem. In this context, it is essential to construct linearizations that reflect the algebraic structure of the original problem and to develop algorithms that solve the resulting linear problems in a structure-preserving way. On the other hand, linearization may increase the condition number of the problem, with the effect that the solution becomes more sensitive to small perturbations of the original data. Hence, there is a need for structure-preserving algorithms that work directly on the original data.
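The basic idea of linearization can be illustrated with a small example: the quadratic eigenvalue problem (l^2 M + l C + K)x = 0 is embedded into a linear pencil A - l B of twice the size via the first companion form. The following Python sketch uses invented random matrices and is meant only to illustrate the construction, not any particular method developed in the project.

```python
import numpy as np

# Quadratic eigenvalue problem (l^2 M + l C + K) x = 0, linearized by the
# first companion form: with z = [x; l*x], the pencil satisfies A z = l B z.
# All matrices are random and purely illustrative.
rng = np.random.default_rng(0)
n = 4
M = np.eye(n)
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

I = np.eye(n)
Z = np.zeros((n, n))
A = np.block([[Z, I], [-K, -C]])   # row 1: l*x = l*x; row 2: -Kx - l*Cx = l^2 Mx
B = np.block([[I, Z], [Z, M]])

lam = np.linalg.eigvals(np.linalg.solve(B, A))

# Each computed eigenvalue should (numerically) make T(l) = l^2 M + l C + K
# singular, i.e. its smallest singular value should be tiny.
residuals = [np.linalg.svd(l**2 * M + l * C + K, compute_uv=False)[-1]
             for l in lam]
print(max(residuals))
```

Note that the quadratic problem of size n yields 2n eigenvalues in the linearized pencil, which is why preserving structure (and conditioning) in the larger problem is a genuine concern.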
Despite numerous existing numerical approaches, there are no black-box solvers for nonlinear eigenvalue problems comparable to those available for linear eigenvalue problems, such as Linpack, Lapack, Scalapack, and Arpack. The reason is the nonlinearity itself, which removes the basic ingredients needed for fast and reliable numerical methods; in particular, fundamental theoretical results on nonlinear eigenvalue problems are still missing.
In the first phase of the project, structured linear eigenvalue problems arising from the linearization of several classes of nonlinear eigenvalue problems have been investigated and structure-preserving algorithms have been developed. In this context, Hamiltonian eigenvalue problems that arise in the linearization of gyroscopic quadratic eigenvalue problems have been analyzed in [2,4,5,7,9,18,20] with respect to perturbation analysis, development of algorithms, and implementation aspects. In particular, a software package has been implemented [3], see http://www.math.tu-berlin.de/~kressner/hapack . This research is part of the Ph.D. thesis of D. Kressner [22]. Specific structure-preserving methods for gyroscopic systems have been constructed in [15]; the same methods are also applicable to optimal waveguide design. Jacobi-like algorithms for the solution of generalized indefinite Hermitian eigenvalue problems have been developed in [26].
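As an illustration of the structure involved, the following sketch (with invented random blocks) verifies numerically the spectral symmetry of a real Hamiltonian matrix, which is exactly the kind of symmetry that structure-preserving methods are designed to retain.

```python
import numpy as np

# A real Hamiltonian matrix H = [[A, G], [Q, -A^T]] with symmetric G and Q
# has a spectrum that is symmetric with respect to the imaginary axis:
# for every eigenvalue l, -conj(l) is also an eigenvalue. Data is random.
rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n)); G = G + G.T   # symmetric block
Q = rng.standard_normal((n, n)); Q = Q + Q.T   # symmetric block

H = np.block([[A, G], [Q, -A.T]])

lam = np.linalg.eigvals(H)
# Distance from each eigenvalue's mirror image -conj(l) to the spectrum:
sym_err = max(np.abs(lam + l.conj()).min() for l in lam)
print(sym_err)
```

An unstructured algorithm applied to a perturbed H generally destroys this pairing, which is why the methods above operate on the Hamiltonian structure directly.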
A new theory for the representation of nonlinear eigenvalue problems via generalized eigenvalue problems has been developed in [24,25]. This approach makes it possible to find structure-reflecting and optimally conditioned linearizations and, in particular, to compare the condition numbers of the original nonlinear problem and its linearization [12]. Based on the results in [24,25,26], specific methods for a polynomial eigenvalue problem in the vibration analysis of high-speed trains have been developed in cooperation with the company SFE Berlin in [13,14], yielding computed eigenvalues with as many correct digits as the finite element discretization allows, in contrast to no correct digits before. These results have been reported by I. Ipsen in SIAM News [16].
Concerning the problem of finding algorithms that work directly on the original data, Krylov subspace methods for nonlinear eigenvalue problems, including techniques for locking and restarting, have been developed in [31]. Moreover, algorithmic aspects of product eigenvalue problems have been investigated [19,21]. This type of nonlinear eigenvalue problem arises, e.g., in bifurcation analysis.
A first step towards a perturbation analysis of structured eigenvalue problems has been made by developing and analyzing canonical forms for several structured eigenvalue problems in [27]. Moreover, condition numbers for nonlinearly structured eigenvalue problems have been investigated in [17].
Finally, a documentation of the state of the art in numerical solution methods for nonlinear eigenvalue problems has been presented in [29].
Another main topic has been palindromic and even eigenvalue problems, which arise in applications such as the vibration analysis of rail tracks, but also in discrete- or continuous-time optimal control. A structured canonical form for palindromic pencils is described in [34,38]. Numerical methods were also developed: a structure-preserving version of the well-known QR algorithm was developed for palindromic eigenvalue problems [23,36]; it is especially well suited for the important class of SISO systems. For other types of palindromic problems, the antitriangular URV algorithm was presented in [35]. This method uses a deflation procedure to compute eigenvalues bounded away from the unit circle and accurate methods for the remaining small and dense eigenvalue problem with eigenvalues close to the unit circle. One of these accurate methods is an adaptation of a nonsymmetric Jacobi algorithm whose convergence properties have been investigated in [28]. Most of the research on palindromic eigenvalue problems also forms part of the Ph.D. thesis of C. Schröder [39].
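To illustrate the symmetry underlying these structure-preserving methods, the following sketch (with an invented random matrix) checks numerically that the eigenvalues of a T-palindromic pencil A + l*A^T occur in reciprocal pairs (l, 1/l), which is the spectral counterpart of the unit-circle role mentioned above.

```python
import numpy as np

# T-palindromic pencil A + l*A^T: the equation (A + l*A^T) x = 0 is
# equivalent to (A^T)^{-1} A x = -l x, so the pencil eigenvalues are the
# negated eigenvalues of (A^T)^{-1} A. A is random and illustrative.
rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))

lam = -np.linalg.eigvals(np.linalg.solve(A.T, A))

# For every eigenvalue l, 1/l must also be an eigenvalue:
pair_err = max(np.abs(lam - 1 / l).min() for l in lam)
print(pair_err)
```

The pairing (l, 1/l) is symmetric about the unit circle, which is why the deflation strategy in [35] separates eigenvalues bounded away from the unit circle from those close to it.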
A structure similar to the palindromic one arises in the stability analysis of time-delay systems. These so-called PCP-palindromic eigenvalue problems were analyzed in [10].
General nonlinear eigenvalue problems were tackled in the Ph.D. thesis of K. Schreiber [33]. Concerning basic theoretical results, a representation of the inverse of a nonlinear operator has been developed there, which motivates nonlinear inverse iteration techniques, and a representation of the condition number of an eigenvalue of a general nonlinear problem has been found.
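As an illustration of such nonlinear inverse iteration techniques, the following Python sketch applies a basic variant to an invented small delay eigenvalue problem; the starting values, normalization vector, and iteration counts are illustrative assumptions, not data from the thesis.

```python
import numpy as np

# Nonlinear inverse iteration for T(l) x = 0, sketched on the delay
# problem T(l) = -l*I + A0 + exp(-l)*A1 with a small delay term A1.
# Each step solves T(l_k) u = T'(l_k) x_k and updates l by a
# Newton-type correction; e is a fixed normalization vector.
rng = np.random.default_rng(3)
n = 5
A0 = rng.standard_normal((n, n))
A1 = 0.1 * rng.standard_normal((n, n))

T  = lambda l: -l * np.eye(n) + A0 + np.exp(-l) * A1
dT = lambda l: -np.eye(n) - np.exp(-l) * A1

# Start near an eigenvalue of the linear part A0 (the delay term is small).
ev = np.linalg.eigvals(A0)
lam = ev[np.argmax(ev.real)] + 0.05
x = rng.standard_normal(n)
x = x / np.linalg.norm(x)
e = x.copy()                                   # normalization vector

for _ in range(30):
    u = np.linalg.solve(T(lam), dT(lam) @ x)   # inverse iteration step
    lam = lam - (e @ x) / (e @ u)              # eigenvalue correction
    x = u / (e @ u)                            # renormalize so e^T x = 1

res = np.linalg.norm(T(lam) @ x)
print(lam, res)
```

At convergence the residual norm ||T(lam) x|| drops to roundoff level, reflecting the locally fast convergence of inverse-iteration-type methods near a simple eigenvalue.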
A generalization of the (two-sided) Rayleigh quotient for nonlinear eigenvalue problems was given and analyzed, in particular a standard and a generalized Rayleigh functional, for which locally unique existence, first-order perturbation expansions, and error bounds in the vectors, as well as stationarity, have been shown, cf. [33].
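For illustration, the following sketch evaluates a two-sided Rayleigh functional for an invented quadratic problem: given fixed left and right vectors v and u, the functional value is a root of the scalar equation v^T T(l) u = 0, computed here by Newton's method; all data and the complex starting value are assumptions made for the example.

```python
import numpy as np

# Two-sided Rayleigh functional for T(l) = l^2 M + l C + K: with fixed
# vectors v and u, solve the scalar equation f(l) = v^T T(l) u = 0.
# For this quadratic T, f is a scalar quadratic in l, solved by Newton.
rng = np.random.default_rng(4)
n = 4
M = np.eye(n)
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
T  = lambda l: l**2 * M + l * C + K
dT = lambda l: 2 * l * M + C

u = rng.standard_normal(n)
v = rng.standard_normal(n)

lam = 0.5 + 0.5j                 # complex start: the roots may be complex
for _ in range(50):
    lam = lam - (v @ T(lam) @ u) / (v @ dT(lam) @ u)

print(abs(v @ T(lam) @ u))       # scalar equation satisfied at the root
```

When u and v approximate right and left eigenvectors, the root of this scalar equation approximates the eigenvalue, which is the role the Rayleigh functional plays in the iterations described next.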
With the knowledge gained on Rayleigh functionals, new basic methods for the computation of eigenvalues and eigenvectors have been designed. These one- and two-vector methods are essential for subspace expansion methods such as Jacobi-Davidson [6] and the nonlinear Arnoldi method [43] in two ways: first, the subspace expansion methods build on the respective one-vector methods in the sense that the latter provide the vector updates; second, the subspace expansion methods still need to solve small dense nonlinear problems, which is usually done by nonlinear inverse iteration. The two-sided Rayleigh functional iteration is the analogue of the two-sided Rayleigh quotient iteration for nonnormal matrices and has likewise been shown to converge locally cubically. A generalization of the residual inverse iteration has been shown to converge locally quadratically [33].
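As an illustration of residual inverse iteration, the following sketch applies a basic variant to an invented quadratic problem; the shift placement, the starting vector, and the use of a one-sided Rayleigh functional for the eigenvalue update are illustrative assumptions, not the exact scheme from [33].

```python
import numpy as np

# Residual inverse iteration for T(l) = l^2 M + l C + K: the shift sigma
# stays fixed (so one factorization of T(sigma) would suffice in practice),
# l is updated via the scalar equation x^H T(l) x = 0, and x is corrected
# by x <- x - T(sigma)^{-1} T(l) x. All data is random and illustrative.
rng = np.random.default_rng(5)
n = 4
M = np.eye(n)
C = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
T  = lambda l: l**2 * M + l * C + K
dT = lambda l: 2 * l * M + C

# A reference eigenpair from the companion linearization, used only to
# place the shift and starting vector near a true solution.
comp = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
ev, Z = np.linalg.eig(comp)
target, xstar = ev[0], Z[:n, 0]
sigma = target + 0.05

x = xstar / np.linalg.norm(xstar)
x = x + 0.01 * rng.standard_normal(n)          # perturbed starting vector
x = x / np.linalg.norm(x)
lam = sigma
for _ in range(30):
    for _ in range(20):                        # Newton on x^H T(l) x = 0
        lam = lam - (x.conj() @ T(lam) @ x) / (x.conj() @ dT(lam) @ x)
    x = x - np.linalg.solve(T(sigma), T(lam) @ x)   # residual correction
    x = x / np.linalg.norm(x)

res = np.linalg.norm(T(lam) @ x)
print(res)
```

Keeping sigma fixed is what makes the method attractive for large problems: the expensive factorization is reused, while the Rayleigh functional supplies the eigenvalue updates.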
Different versions of nonlinear Jacobi-Davidson methods have been discussed. The two-sided nonlinear version was shown to converge locally cubically [33]. A nonlinear Jacobi-Davidson-like method that is especially suited for the computation of ill-conditioned eigenvalues has been developed and analyzed in [40].
A technique to accelerate convergence for methods computing left and right eigenvectors has been introduced in [33]. The half-step two-sided Rayleigh functional iteration is shown to converge with R-order 4.
The special case of nonlinear complex symmetric eigenvalue problems has been examined: an appropriate complex symmetric Rayleigh functional has been developed, and the locally cubic convergence of the complex symmetric Rayleigh functional iteration and Jacobi-Davidson method, as well as the locally quadratic convergence of the complex symmetric residual inverse iteration, have been shown in the Ph.D. thesis of K. Schreiber [33].
It has been shown in [41] that the generalization from one-dimensional methods to methods that compute invariant subspaces of matrices is not always straightforward and may fail.