- Perron-Frobenius Theory for general real and complex matrices
- Matrix theory
- Structured perturbations
- INTLAB
- Interval arithmetic
- Accurate sum and dot product
- Wilkinson's estimates revisited
- Ph.D. thesis
- Sparse matrices
- Verification methods
- Computer Algebra and computer-assisted proofs
- Hedge-Effectiveness

In
S.M. Rump. Theorems of Perron-Frobenius type for matrices without sign
restrictions. *Linear Algebra and its Applications (LAA)*,
266:1-42, 1997. (PDF, 273503 bytes) [click for download][81] the classical Perron-Frobenius Theory is extended to general real matrices, and in S.M. Rump. Perron-Frobenius Theory for Complex Matrices.
*Linear Algebra and its Applications (LAA)*, 363:251-273,
2003. (PDF, 247135 bytes) [click for download][109] to complex matrices, where the Perron root is replaced by the sign-real
or the sign-complex spectral radius, respectively.
These are the largest real solutions of the nonlinear eigenvalue problem *|Ax|=r|x|* for
real or complex vectors *x*, respectively. Variational characterizations can be found in
S.M. Rump. Variational characterizations of the sign-real and the
sign-complex spectral radius. *Electronic Journal of Linear Algebra (ELA)*,
9:112-117, 2002. (PDF, 132402 bytes) [click for download][104].
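For small dimensions, the sign-real spectral radius can be illustrated by brute force, using the characterization via signature matrices: any solution of *|Ax|=r|x|* corresponds to a real eigenvalue of *SA* for some diagonal matrix *S* with entries ±1. The following is my own illustrative sketch (exponential cost, numpy assumed), not a practical algorithm:

```python
import itertools
import numpy as np

def sign_real_spectral_radius(A, tol=1e-10):
    # Brute force over all 2^n signature matrices S = diag(+-1): the
    # sign-real spectral radius is the maximum absolute value of a real
    # eigenvalue of S*A.  Exponential cost -- illustration only.
    n = A.shape[0]
    best = 0.0
    for signs in itertools.product((1.0, -1.0), repeat=n):
        ev = np.linalg.eigvals(np.diag(signs) @ A)
        real = ev[np.abs(ev.imag) <= tol * (1.0 + np.abs(ev))]
        if real.size:
            best = max(best, float(np.max(np.abs(real.real))))
    return best

A = np.array([[0.0, 1.0], [1.0, 0.0]])
# For a nonnegative matrix this reduces to the Perron root, here 1:
print(sign_real_spectral_radius(A))   # → 1.0
```

For nonnegative matrices the maximum is attained at the identity signature, recovering the classical Perron root.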

In
S.M. Rump. Bounds for the Componentwise Distance to the Nearest
Singular Matrix.
*SIAM J. Matrix Anal. Appl. (SIMAX)*, 18(1):83-103, 1997.
(PDF, 250422 bytes) [click for download][84] this theory is applied to show that the ratio between the componentwise distance
to the nearest singular matrix and the reciprocal of the componentwise condition number
is finite. In
S.M. Rump. Almost Sharp Bounds for the Componentwise Distance to the
Nearest Singular Matrix. *Linear and Multilinear Algebra (LAMA)*,
42:93-107, 1997. (PDF, 159553 bytes) [click for download][85] the ratio is estimated sharply from below, and up to
a factor *3+2sqrt(2)* from above, see also
S.M. Rump. Ill-conditioned Matrices are componentwise near to singularity.
*SIAM Review (SIREV)*, 41(1):102-112, 1999. (PDF, 174976 bytes) [click for download][95]. The results rely on the relation between
the sign-real spectral radius and cycle products
S.M. Rump. The sign-real spectral radius and cycle products.
*Linear Algebra and its Applications (LAA)*, 279:177-180, 1998.
(PDF, 128819 bytes) [click for download][86], and they are true for all dimensions
and weight matrices.

The generalized Perron-Frobenius Theory is applied in S.M. Rump. Conservatism of the circle criterion - solution of a problem
posed by A. Megretski. *IEEE Trans. Automatic Control*, 46(10):1605-1608,
2001. (PDF, 163161 bytes) [click for download][102] to solve a problem posed by
Megretski, namely to estimate the conservatism of the circle criterion.

In
S.M. Rump. On P-Matrices. *Linear Algebra and its Applications (LAA)*,
363:237-250, 2003. (PDF, 202117 bytes) [click for download][111] my generalization of Perron-Frobenius Theory is applied to give necessary
and sufficient criteria for a matrix to be a P-matrix. It is also shown that
no minor can be left out when checking the P-property. Furthermore, a first algorithm is given
that does not necessarily require exponential computing time to check the P-property.

The verification of nonsingularity of an interval matrix is an NP-hard problem
S. Poljak and J. Rohn. Checking Robust Nonsingularity Is NP-Hard,
*Math. of Control, Signals, and Systems*, 6:1-9, 1993.[PoRo93]. In
S.M. Rump. Verification Methods for Dense and Sparse Systems of Equations.
In J. Herzberger, editor, *Topics in Validated Computations - Studies in
Computational Mathematics*, pages 63-136, Elsevier, Amsterdam,
1994. (PDF, 498000 bytes) [click for download][71] a first criterion is given which is in some cases superior to strong regularity. In
S.M. Rump. Optimal scaling for p-norms and componentwise distance
to singularity. *IMA Journal of Numerical Analysis (IMAJNA)*,
23:1-9, 2003. (PDF, 170682 bytes) [click for download][110] this is improved to give lower and upper bounds for the optimal
p-norm condition number of a matrix. Furthermore a new computable
lower bound for the componentwise distance to the nearest singular matrix is given.

In
S. Friedland, D. Hershkowitz, and S.M. Rump. Positive entries of
stable matrices. *Electronic Journal of Linear Algebra (ELA)*,
12:17-24, 2005. (PDF, 130129 bytes) [click for download][130] we prove that a positive stable matrix of dimension greater than one must
have at least two positive entries, and that for every dimension >1 there exist positive stable
matrices with exactly two positive entries. We also give a companion-type
matrix which is strictly lower triangular except for the entries *(1,1)* and *(1,n)*.

In
S.M. Rump. Structured Perturbations Part I: Normwise Distances.
*SIAM J. Matrix Anal. Appl. (SIMAX)*, 25(1):1-30, 2003.
(PDF, 275615 bytes) [click for download][108] various explicit formulas and estimates are given for the structured normwise
condition number of matrices. Generally, there is not much difference
between the unstructured and the structured condition number. Similar
results for eigenvalues and the pseudospectrum of structured matrices are given in
S.M. Rump. Eigenvalues, pseudospectrum and structured perturbations.
*Linear Algebra and its Applications (LAA)*, 413:567-593, 2006.
(PDF, 296921 bytes) [click for download][133]. An exception are Toeplitz matrices, where we show in
S.M. Rump. Structured Perturbations Part I: Normwise Distances.
*SIAM J. Matrix Anal. Appl. (SIMAX)*, 25(1):1-30, 2003.
(PDF, 275615 bytes) [click for download][108] and
S.M. Rump and H. Sekigawa. The ratio between the Toeplitz and the
unstructured condition number.
Operator Theory: Advances and Applications, 199:397-419, 2009.
(PDF, 241394 bytes) [click for download][139] by a nice connection to the minimization of the norm of the
product of two polynomials that the ratio between the structured Toeplitz
and unstructured condition number may be exponentially small in the
dimension.

The well-known theorem by Eckart-Young or Gastinel-Kahan states that the normwise distance
to the nearest singular matrix is equal to the reciprocal of the condition number.
In
S.M. Rump. Structured Perturbations Part I: Normwise Distances.
*SIAM J. Matrix Anal. Appl. (SIMAX)*, 25(1):1-30, 2003.
(PDF, 275615 bytes) [click for download][108] I show that this is also true for the structured distance
and the structured condition number (see also Perron-Frobenius).
This is proved for many structures such as symmetric, Toeplitz, Hankel, and circulant matrices, among others.
So, in contrast to linear systems, the worst case is attained in particular for structured matrices.

In
S.M. Rump. Structured Perturbations Part II: Componentwise Distances.
*SIAM J. Matrix Anal. Appl. (SIMAX)*, 25(1):31-56,
2003. (PDF, 231299 bytes) [click for download][107] it is shown that the situation changes completely
when using componentwise distances.
For many structures explicit parameterized examples are given such that the componentwise
(Skeel-) condition number is about 1, independent of the parameter, whereas
the general condition number tends to infinity with the parameter going to zero.
The same is true for least squares problems
S.M. Rump. Ill-conditionedness need not be componentwise near to ill-posedness
for least squares problems. *BIT*, 39(1):143-151, 1999. (PDF, 142448 bytes) [click for download][94]. Moreover, the structured componentwise backward error for symmetric linear systems
can be arbitrarily far apart from the unstructured backward error
S.M. Rump. The componentwise structured and unstructured backward error can be
arbitrarily far apart. submitted for publication, 2014.
(PDF, 220482 bytes)[click for download][160].

Starting in 1983, see for example S.M. Rump. CALCULUS. In U. Kulisch, editor, *Wissenschaftliches Rechnen mit
Ergebnisverifikation*, pages 85-99. Vieweg und Akademie Verlag, Berlin, 1989.[51], S.M. Rump. ACRITH - CALCULUS - TPX - Programmierwerkzeuge für wissenschaftliches
Rechnen. In *Tagungsband Wissenschaftliches Forum*, Hamburg, 1990.[55] and
D. Husung and S.M. Rump. ABACUS, Sprachbeschreibung.
In L. Atanassova et al.,
editor, *Computer arithmetic and enclosure methods: Proceedings of
the Third International IMACS-GAMM Symposium on Computer Arithmetic
and Scientific Computing (SCAN-91)*, Oldenburg, Germany,
1 - 4 October 1991, Dublin, 1991.[61], we developed various programming environments for
convenient access to interval operations. When Matlab introduced an operator
concept, the time was ripe to create the interval toolbox INTLAB
S.M. Rump. INTLAB - INTerval LABoratory. In Tibor Csendes, editor,
*Developments in Reliable Computing*, pages 77-104. Kluwer Academic Publishers,
Dordrecht, 1999.[92] for verified computations. We had already improved traditional interval arithmetic substantially
in the widely used interval library PROFIL
O. Knüppel. PROFIL / BIAS - A Fast Interval Library. *Computing*,
53:277-287, 1994.[72]. For the Matlab interval toolbox,
the obstacles of interpretation overhead and frequent switching of the rounding mode
were overcome in
S.M. Rump. Fast and parallel interval arithmetic. *BIT*, 39(3):539-560,
1999. (PDF, 221498 bytes) [click for download][96]. Furthermore, highly
accurate real and complex interval standard functions are developed in
S.M. Rump. Rigorous and portable standard functions. *BIT*, 41(3):540-562,
2001. (PDF, 234868 bytes) [click for download][99]; for highly accurate bounds for the gamma function over the full floating-point range see S.M. Rump. Verified sharp bounds for the real gamma function over the
entire floating-point range. To be published in Nonlinear Theory and Its
Applications (NOLTA), IEICE, Vol.E5-N,No.3, July, 2014.
(PDF, 430728 bytes)[click for download][157]. In
S.M. Rump. Fast Interval Matrix Multiplication.
Numerical Algorithms, 61(1):1-34, 2012.[154] a thorough treatment of different possibilities for interval matrix multiplication is given.

Meanwhile INTLAB is distributed in release 9, and it runs under Matlab and Octave. Demos are available, including in particular some larger examples. There are several thousand users in more than 50 countries. For a bibliography of more than 500 papers using INTLAB in various areas see [INTLABref].

Recent definitions of interval arithmetic favour restricting
intervals to subsets of the reals, excluding infinity.
This means *{2 <= x < inf}*
is an interval, but
*{2 <= x <= inf}* is not.
This nice idea implies that *0*x* is the zero interval for all intervals *x*.
However, there is a drawback. To allow numerical computations with
intervals, the bounds are floating-point numbers. Fortunately, infinity
is a floating-point number in IEEE 754, so that the above interval
*{2 <= x < inf}*
can be represented by the bounds *2* and *inf*. Thus we have the weird
situation that *[a,b]* may be a valid interval but one or both bounds
do not belong to it. This may create awkward situations where programs
do not perform as expected, which is in particular not appealing for
verification algorithms.
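The drawback shows up directly in IEEE 754 arithmetic. A tiny illustration (Python floats are binary64):

```python
import math

# The half-open set {2 <= x < inf} is stored via the bounds 2 and inf,
# although inf does not belong to the set.  A naive endpoint formula for
# multiplying by the point interval 0 then hits 0 * inf = NaN instead of
# producing the zero interval.
lo, hi = 2.0, math.inf
print(0.0 * lo, 0.0 * hi)   # → 0.0 nan
```

An implementation therefore needs special-case handling for infinite endpoints, precisely the kind of exception that is unwelcome in verification algorithms.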

In
S.M. Rump. Interval Arithmetic Over Finitely Many Endpoints.
BIT Numerical Mathematics, 42(4):1059-1075, 2012.[153] I defined a new interval arithmetic resolving this situation.
In addition, it removes an apparent disparity between overflow and underflow,
and it allows one to introduce additional quantities like *pi* or *e* such that
*sin([pi])=0* and *log([e])=1* hold as point intervals.

Interval arithmetic suffers from the dependency problem and the wrapping effect.
An interesting approach to reduce this problem is affine arithmetic
L.H. Figueiredo and J. Stolfi. Self-Validated Numerical Methods and Applications
*Brazilian Mathematics Colloquium monograph*, IMPA, 1997.[FigSto97]. Several improvements and a description of the corresponding INTLAB toolbox for
affine arithmetic are presented in
S.M. Rump and M. Kashiwagi. Implementation and improvements of affine arithmetic.
*Nonlinear Theory and Its Applications, IEICE*, 2(3):1101-1119, 2014.
(PDF, 2135727 bytes) [click for download][162].
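The core idea of affine arithmetic is to represent a quantity as *x0 + sum_i x_i eps_i* with noise symbols *eps_i* in *[-1,1]*, so that linear dependencies between quantities are tracked instead of lost. A minimal sketch of my own (linear operations only, roundoff ignored; a real implementation rounds all coefficient operations outward):

```python
from itertools import count

_fresh = count()  # generator of fresh noise-symbol indices

class Affine:
    # Affine form mid + sum_i terms[i] * eps_i with eps_i in [-1, 1].
    # Shared noise symbols encode linear dependencies between quantities.
    def __init__(self, mid, terms=None):
        self.mid, self.terms = mid, dict(terms or {})

    @classmethod
    def from_interval(cls, lo, hi):
        # Introduce one fresh noise symbol for the new independent quantity.
        return cls((lo + hi) / 2.0, {next(_fresh): (hi - lo) / 2.0})

    def __add__(self, o):
        t = dict(self.terms)
        for i, c in o.terms.items():
            t[i] = t.get(i, 0.0) + c
        return Affine(self.mid + o.mid, t)

    def __sub__(self, o):
        t = dict(self.terms)
        for i, c in o.terms.items():
            t[i] = t.get(i, 0.0) - c
        return Affine(self.mid - o.mid, t)

    def range(self):
        # Enclosing interval: mid plus/minus the sum of coefficient magnitudes.
        r = sum(abs(c) for c in self.terms.values())
        return (self.mid - r, self.mid + r)

x = Affine.from_interval(2.0, 4.0)
# x - x is recognized as identically zero, whereas naive interval
# arithmetic yields [2,4] - [2,4] = [-2,2] (the dependency problem):
print((x - x).range())   # → (0.0, 0.0)
```

Nonlinear operations (multiplication, elementary functions) require an additional noise symbol absorbing the linearization error; that is where the improvements discussed in the paper come in.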

In T. Ogita, S.M. Rump, and S. Oishi. Accurate Sum and Dot Product.
*SIAM Journal on Scientific Computing (SISC)*, 26(6):1955-1988, 2005.
(PDF, 373558 bytes) [click for download][127] and S.M. Rump, T. Ogita, and S. Oishi. Fast High Precision Summation.
*Nonlinear Theory and Its Applications (NOLTA), IEICE*,
1(1), 2010.(PDF, 339422 bytes) [click for download][144] we present algorithms for the fast computation
of the sum and dot product of floating-point numbers with a specified precision.
The latter paper received the "NOLTA Best Paper Award" by the
IEICE Engineering Sciences Society.
The methods use extensively so-called error-free transformations. In fact, the algorithm
Sum2 in T. Ogita, S.M. Rump, and S. Oishi. Accurate Sum and Dot Product.
*SIAM Journal on Scientific Computing (SISC)*, 26(6):1955-1988, 2005.
(PDF, 373558 bytes) [click for download][127] itself is an error-free vector transformation: an input vector *p* is transformed
into a vector *q* of the same length such that

- *\sum{p_i} = \sum{q_i}*,
- *q_n* is the ordinary floating-point sum of the *p_i*, and
- the condition number of the sum decreases by a factor *eps*.
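As a rough Python sketch (assuming IEEE 754 binary64 with rounding to nearest; `sum2` is my compact variant of the paper's Sum2 that accumulates the error terms on the fly rather than storing the transformed vector):

```python
def two_sum(a, b):
    # Knuth's TwoSum, an error-free transformation: returns (x, y) with
    # x = fl(a + b) and a + b = x + y exactly (IEEE 754, round to nearest).
    x = a + b
    z = x - a
    y = (a - (x - z)) + (b - z)
    return x, y

def sum2(p):
    # Compact variant of Sum2: cascade TwoSum through the vector and add
    # the accumulated error terms to the ordinary floating-point sum.
    s, e = p[0], 0.0
    for a in p[1:]:
        s, err = two_sum(s, a)
        e += err
    return s + e

p = [1e16, 1.0, -1e16]
print(sum(p), sum2(p))   # → 0.0 1.0
```

The ordinary sum cancels the contribution of 1.0 entirely, while the compensated sum recovers the exact result from the error terms.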

In S.M. Rump, T. Ogita, and S. Oishi. Accurate Floating-point
Summation part I: Faithful Rounding. *SIAM J. Sci. Comput.*,
31(1):189-224, 2008. (PDF, 457457 bytes) [click for download][141] and S.M. Rump, T. Ogita, and S. Oishi. Accurate Floating-point
Summation part II: Sign, K-fold Faithful and Rounding to Nearest.
*Siam J. Sci. Comput.*, 31(2):1269-1302, 2008.
(PDF, 377632 bytes) [click for download][142], a 60-page paper in two parts, algorithms are presented to compute
a sum (and dot product) with faithful rounding.
That means, the computed result is always
correct up to the last bit. The theoretical foundation as well as an algorithm
for computing a k-fold faithfully rounded result is presented as well, where
the result is represented by a vector of k floating-point numbers
(see also Ph.D.).
Moreover, fast algorithms for computing the sign of a sum, the rounded-to-nearest result, and
results with directed rounding are described.

The analysis of such algorithms is often involved, tedious and error-prone.
To cure that I introduced in
S.M. Rump, T. Ogita, and S. Oishi. Accurate Floating-point
Summation part I: Faithful Rounding. *SIAM J. Sci. Comput.*,
31(1):189-224, 2008. (PDF, 457457 bytes) [click for download][141] the ufp-concept, the unit in the first place.
Viewing floating-point numbers as scaled integers, the proofs reduce to
chains of inequalities and the matter becomes much more comprehensible.
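For a concrete picture, ufp(x) is the largest power of two not exceeding *|x|*. A small sketch of my own (using `math.frexp`, which decomposes a binary float exactly):

```python
import math

def ufp(x):
    # unit in the first place: the largest power of two not exceeding |x|,
    # with ufp(0) = 0 by convention.  frexp(|x|) = (m, e) with m in [0.5, 1)
    # is exact for binary floats, so 2^(e-1) <= |x| < 2^e.
    if x == 0.0:
        return 0.0
    _, e = math.frexp(abs(x))
    return 2.0 ** (e - 1)

# Every nonzero float x satisfies ufp(x) <= |x| < 2*ufp(x):
print(ufp(3.5), ufp(-0.1))   # → 2.0 0.0625
```

With ufp at hand, a binary64 number becomes an integer multiple of *2*eps*ufp(x)*, which is the "scaled integer" view used in the proofs.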

The general approach is to make thorough use of the IEEE 754 standard to prove results.
Based on that in S.M. Rump, P. Zimmermann, S. Boldo, and G. Melquiond.
Computing predecessor and successor in rounding to nearest.
*BIT Numerical Mathematics*, 49(2):419-431, 2009.
(PDF, 165101 bytes) [click for download][145], for example, another fast method is given to compute the
predecessor and successor in pure floating-point with rounding to nearest.
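As a rough illustration only (this is *not* the paper's method, which works in pure floating-point with rounding to nearest and also covers negatives, subnormals and overflow; `succ_simple` is my own name and it assumes a positive normal binary64 input):

```python
import math

def succ_simple(c):
    # Successor of a positive, normal binary64 number via its exponent:
    # with c = m * 2^e, m in [0.5, 1), the spacing above c is 2^(e-53),
    # and adding it is exact because both operands share the same grid.
    _, e = math.frexp(c)
    return c + 2.0 ** (e - 53)

assert succ_simple(1.0) == math.nextafter(1.0, math.inf)
```

The point of the paper is to obtain the same result without exponent extraction, using a handful of ordinary floating-point operations in rounding to nearest.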

XBLAS
X. Li, J. Demmel, D. Bailey, G. Henry, Y. Hida, J. Iskandar, W. Kahan, S. Kang, A.
Kapur, M. Martin, B. Thompson, T. Tung, and D. Yoo. Design, Implementation
and Testing of Extended and Mixed Precision BLAS.
*ACM Trans. Math. Softw.*, 28(2):152-205, 2002.[Li_etal02] computes a result as if calculated in quadruple precision,
but the accuracy still depends on the condition number.
The result of algorithm AccSum in S.M. Rump, T. Ogita, and S. Oishi. Accurate Floating-point
Summation part I: Faithful Rounding. *SIAM J. Sci. Comput.*,
31(1):189-224, 2008. (PDF, 457457 bytes) [click for download][141] is always accurate to the last bit, but nevertheless
it is some 40 % faster than XBLAS, both theoretically and practically.
In S.M. Rump. Ultimately Fast Accurate Summation.
SIAM Journal on Scientific Computing,
31(5):3466-3502, 2009. (PDF, 688702 bytes) [click for download][146] this algorithm is improved by up to another 25 %.

For an application of accurate dot products to the solution of arbitrarily ill-conditioned linear systems in double precision see Section Verification methods. In S.M. Rump and H. Böhm. Least Significant Bit Evaluation for Arithmetic Expressions. Computing, 30:189-199, 1983. (PDF, 1418658 bytes) [click for download][17] we showed how to evaluate arithmetic expressions accurately using an accurate dot product.

Wilkinson's standard error estimates for certain floating-point computations involve a factor
*gamma_k:=k*eps/(1-k*eps)*, where *eps* denotes the relative rounding error unit.
Surprisingly, using the ufp-concept mentioned in the previous section, we can show that
this factor can be replaced by *k*eps* for a number of standard problems. This is proved
for the sum of floating-point numbers in
S.M. Rump. Error estimation of floating-point summation and dot product.
BIT Numerical Mathematics, 52(1):201-220, 2012. (PDF, 120791 bytes)[click for download][149], and for dot products in
C.-P. Jeannerod and S.M. Rump. Improved error bounds for inner products
in floating-point arithmetic. *SIAM J. Matrix Anal. Appl. (SIMAX)*,
34(2):338-344, 2013.(PDF, 303823 bytes)[click for download][155]. The estimates are valid without restriction on n. For the foundation of floating-point arithmetic see
S.M. Rump and M. Lange. On the Definition of Unit Roundoff.
to appear in BIT, 2014. (PDF, 102188 bytes)[click for download][161].
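The improvement can be made concrete with a two-line comparison (my own illustration, with *u = 2^-53* the binary64 unit roundoff):

```python
# Classical Wilkinson factor gamma_k = k*u/(1 - k*u) versus the improved
# constant k*u.  The improved bound is always smaller and, unlike gamma_k,
# needs no restriction of the form k*u < 1.
u = 2.0 ** -53

def gamma(k):
    assert k * u < 1, "classical factor is only defined for k*u < 1"
    return k * u / (1 - k * u)

k = 10 ** 8
print(k * u < gamma(k))   # → True
```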

Moreover, the famous Lemma 8.4 in ASNA
N.J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM Publications, Philadelphia, 2002.[Hi02] bounds the error of Gaussian elimination by factors *gamma_k* for *k=n* or *k=n-1*,
depending on whether division occurs or not. These factors can be replaced by *k*eps*
as well, as shown in
S.M. Rump and C.-P. Jeannerod. Improved error bounds for LU and
Cholesky factorizations. submitted for publication, 2014.
(PDF, 352228 bytes)[click for download][156]. A similar remark applies to
Cholesky factorization and solving triangular systems
by substitution. Again, all estimates are true without restriction on
*n*.

A similar result for recursive evaluation of powers of binary floating-point numbers
is shown by Graillat, Lefèvre and Muller
HAL Id: ensl-00945033, https://hal-ens-lyon.archives-ouvertes.fr/ensl-00945033v2[159]. We improve this
S.M. Rump, F. Bünger and C.-P. Jeannerod. Improved Error Bounds for
Floating-Point Products and Horner's Scheme, to appear in BIT, 2014.
(PDF, 149930 bytes)[click for download][158] to general products of real and/or floating-point numbers, to general
base *beta ≥ 2*, to a larger number of factors, and to the computation in any order.
Moreover, we show that the restriction on the number of factors is essentially sharp and mandatory.
Thus the factor *gamma_k* cannot, in general, be replaced by *k*eps* for arbitrary
arithmetic expressions.

In 1969 Krawczyk
R. Krawczyk. Newton-Algorithmen zur Bestimmung von Nullstellen mit
Fehlerschranken, *Computing*, 4:187-201, 1969.[Kra69] presented the famous Krawczyk operator.
He assumed an interval vector to be given that includes a solution of a nonlinear
system, and he used his operator to refine this interval.
In 1977 Moore
R.E. Moore. A Test for Existence of Solutions for Non-Linear Systems,
SIAM J. Numer. Anal. 4: 611--615, 1977.[Mo77] observed that the Krawczyk operator can be
used as an existence test, where he proved contraction by a norm estimation.
Both approaches lack the *construction* of an inclusion.

Based on that, I developed in my Ph.D. thesis S.M. Rump. Kleine Fehlerschranken bei Matrixproblemen. PhD thesis, Universität Karlsruhe, 1980. (PDF, 10710019 bytes) [click for download][9] a constructive algorithm to compute inclusions for the solutions of linear systems and eigenvalue problems. Compared to previous methods, three major improvements are introduced. First, Krawczyk's operator is modified to compute an inclusion of the error of an approximate solution rather than of the solution itself. The new operator turns out to be simpler and more accurate than the original one.

Secondly, rather than using some norm to show contraction, an inclusion in the interior is used to prove existence and uniqueness of the solution or to prove nonsingularity of the Jacobian.

In
O. Caprani and K. Madsen. Iterative Methods for Interval Inclusion of
Fixed Points, *BIT*, 18:42-51, 1978.[CaMa78] some inflation was used in an interval
iteration without analysis. Thirdly, in my Ph.D. thesis I show that such inflation is necessary, coin the term epsilon-inflation for it,
and give necessary and sufficient conditions under which an inclusion
is computed (see also
S.M. Rump. A Note on Epsilon-Inflation. Reliable Computing,
4:371-375, 1998. (PDF, 122641 bytes) [click for download][89]).

These three approaches, i) to calculate an inclusion of the error of an approximation, ii) to show contraction by mapping into the interior, and iii) the epsilon-inflation are today standard tools in verification algorithms for finite as well as for infinite dimensional problems.
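The three ideas can be illustrated by a deliberately simplified one-dimensional sketch (my own toy code: it ignores the directed rounding that a genuine verification method, as in INTLAB, must apply to every interval operation):

```python
from dataclasses import dataclass

@dataclass
class IV:
    # Minimal interval type; a real implementation rounds endpoints outward.
    lo: float
    hi: float
    def __add__(self, o): return IV(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return IV(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return IV(min(p), max(p))
    def in_interior(self, o): return o.lo < self.lo and self.hi < o.hi

def verify_root(f, fprime_iv, x0, max_iter=20):
    # i) enclose the *error* of the approximation x0, not the root itself
    r = 1.0 / fprime_iv(IV(x0, x0)).lo        # approximate 1/f'(x0)
    d = -r * f(x0)                            # Newton correction (point)
    X = IV(-abs(d) - 1e-16, abs(d) + 1e-16)   # candidate error interval
    for _ in range(max_iter):
        # iii) epsilon-inflation of the candidate
        w = X.hi - X.lo
        Y = IV(X.lo - 0.1*w - 1e-300, X.hi + 0.1*w + 1e-300)
        C = IV(1.0, 1.0) - IV(r, r) * fprime_iv(Y + IV(x0, x0))
        K = IV(d, d) + C * Y                  # Krawczyk-type operator
        # ii) inclusion in the interior proves existence and uniqueness
        if K.in_interior(Y):
            return IV(x0 + K.lo, x0 + K.hi)
        X = K
    return None

# f(x) = x^2 - 4 near x0 = 2.000001; fprime_iv encloses f'(Y) = 2Y.
enc = verify_root(lambda x: x*x - 4.0,
                  lambda Y: IV(2.0, 2.0) * Y,
                  2.000001)
print(enc)  # a narrow interval around the root 2
```

Because the operator acts on the small error interval rather than on the solution itself, the computed enclosure is narrow even though the interval arithmetic is crude.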

The IBM program product ACRITH implements the algorithms described in my
Ph.D.
S.M. Rump. Kleine Fehlerschranken bei Matrixproblemen. PhD thesis,
Universität Karlsruhe, 1980. (PDF, 10710019 bytes) [click for download][9] and habilitation
S.M. Rump. Solving Algebraic Problems with High Accuracy.
Habilitationsschrift. In U.W. Kulisch and W.L. Miranker, editors,
*A New Approach to Scientific Computation*, pages 51-120.
Academic Press, New York, 1983.[18].

Using a precise dot product an inclusion is computed and stored in k separate vectors,
k-1 of which are floating-point vectors and the last one is an interval vector.
This method, suggested in my Ph.D., represents a k-fold accurate result (see also
Accurate sum and dot product).
Later this method was called staggered correction
H.J. Stetter. Sequential defect correction for high-accuracy floating-point
algorithms. In: *Numerical Analysis (Proceedings, Dundee 1983).*,
Lecture Notes in Mathematics, vol. 1066 (1984), 186-202[Ste84].

If some long precision formats are available, an optimal strategy is discussed for increasing the precision so as to minimize the total computational
effort, see also
V. Kreinovich and S.M. Rump. Towards Optimal Use of
Multi-Precision Arithmetic: A Remark. *Reliable Computing* 12(5),
365-369, 2006. (PDF, 121825 bytes) [click for download][134].

In
S.M. Rump. Validated Solution of Large Linear Systems. In R. Albrecht,
G. Alefeld, and H.J. Stetter, editors, *Validation numerics:
theory and applications*, volume 9 of Computing Supplementum,
pages 191-212. Springer, 1993. (PDF, 180113 bytes) [click for download][69], see also
S.M. Rump. Verification Methods for Dense and Sparse Systems of Equations.
In J. Herzberger, editor, *Topics in Validated Computations - Studies in
Computational Mathematics*, pages 63-136, Elsevier, Amsterdam,
1994. (PDF, 498000 bytes) [click for download][71], a method to solve linear systems with large symmetric
positive definite sparse matrix is presented. Apparently this is still
the only known method for effectively solving sparse linear systems with verified bounds. The method
is based on the verification of positive definiteness, see also
S.M. Rump. Verification of positive definiteness.
*BIT Numerical Mathematics*, 46:433-452, 2006.
(PDF, 248004 bytes) [click for download][135]. Based on that, a super-fast method is derived in
S.M. Rump and T. Ogita. Super-Fast Validated Solution of Linear Systems.
*J. Comput. Appl. Math. (JCAM)*, 199(2):199-206, 2007. (PDF, 167035 bytes) [click for download][119] for solving sparse linear systems with guaranteed
error bounds; that is, the computation of a floating-point approximation *and*
the verification of the result together take the *same* computing time as
computing a floating-point approximation alone.

Developments in verification methods until 2012 are summarized in my 163-page overview article
S.M. Rump. Verification methods: Rigorous results using floating-point arithmetic.
Acta Numerica, 19:287-449, 2010.[150]. In my habilitation
S.M. Rump. Solving Algebraic Problems with High Accuracy.
Habilitationsschrift. In U.W. Kulisch and W.L. Miranker, editors,
*A New Approach to Scientific Computation*, pages 51-120.
Academic Press, New York, 1983.[18] verification methods for a number of
numerical problems including general systems of nonlinear equations are derived
(regrettably, the printed version contains many typos due to
unfortunate circumstances). This latter algorithm together with others is also
implemented in INTLAB
S.M. Rump. INTLAB - Interval Laboratory, the Matlab toolbox for
verified computations, Version 6.0, 2008.[120]. An introduction to verification or
self-validating methods is given in
S.M. Rump. Self-validating methods.
*Linear Algebra and its Applications (LAA)*,
324:3-13, 2001. (PDF, 163273 bytes) [click for download][98], some historical background on intervals in
D. Dennis, V. Kreinovich, and S.M. Rump. Intervals and the Origins of
Calculus. *Reliable Computing*, 4(2):191-197, 1998.
(PDF, 205001 bytes) [click for download][87]. An often cited and analyzed (
A. Cuyt, B. Verdonk, S. Becuwe and P. Kuterna.
A Remarkable Example of Catastrophic Cancellation Unraveled.
*Computing*, 66(3):309-320, 2001. (doi: 10.1007/s006070170028)[CuyVerBecKut01],
E. Loh and W. Walster. Rump's Example Revisited.
*Reliable Computing*, 8(3):245-248, 2002. (doi: 10.1023/A:1015569431383)[LohWal02]) example in which identical but
erroneous results are computed in single, double and extended precision is
given in
S.M. Rump. Algorithms for Verified Inclusions - Theory and Practice.
In R.E. Moore, editor, *Reliability in Computing*, volume 19 of Perspectives
in Computing, pages 109-126. Academic Press, 1988.
(PDF, 543373 bytes) [click for download][45], more examples can be found in
S.M. Rump. How Reliable are Results of Computers?, Translation of
''Wie zuverlässig sind die Ergebnisse unserer Rechenanlagen?''. *Jahrbuch
Überblicke Mathematik*, pages 163-168, 1983.
(PDF, 392516 bytes) [click for download][19].

In S.M. Rump. Accurate solution of dense linear systems, Part I: Algorithms in rounding to nearest. Journal of Computational and Applied Mathematics (JCAM), 242:157-184, 2013.[151] algorithms for the verified solution of linear systems using solely rounding to nearest are presented. Using tricky error estimates, fairly sharp inclusions can be computed very fast. Since no directed rounding is necessary, these algorithms are easily portable. In S.M. Rump. Accurate solution of dense linear systems, Part II: Algorithms using directed rounding. Journal of Computational and Applied Mathematics (JCAM), 242:185-212, 2013.[152] algorithms for the verified solution of linear systems using directed rounding are presented. These algorithms are very fast and accurate and are based on H-matrices. This paper also describes a method for computing inner inclusions even if only one entry of the matrix or of the right-hand side is a fat interval.

In
S.M. Rump. Guaranteed Inclusions for the Complex Generalized Eigenproblem.
*Computing*, 42:225-238, 1989.(PDF, 789892 bytes) [click for download][50] the complex generalized eigenproblem is treated, in
S.M. Rump. Computational Error Bounds for Multiple or Nearly Multiple
Eigenvalues. *Linear Algebra and its Applications (LAA)*, 324:209-226,
2001. (PDF, 219485 bytes) [click for download][103] an algorithm is given to compute bounds of
multiple or clustered eigenvalues and their invariant subspace, and in
S.M. Rump and J. Zemke. On eigenvector bounds. *BIT*, 43:823-837,
2004. (PDF, 194763 bytes) [click for download][115] we show under general assumptions that an accurate inclusion of a single
eigenvector to a multiple eigenvalue of
geometric multiplicity greater than one is not possible, even for
symmetric/Hermitian/normal matrices.

In
S.M. Rump. Ten methods to bound multiple roots of polynomials.
*J. of Computational and Applied Mathematics (JCAM)*, 156:403-432, 2003.
(PDF, 332813 bytes) [click for download][106], 10 different methods are given
to compute an inclusion of multiple roots of polynomials.

For multiple roots of (systems of) nonlinear equations there are two approaches.
First, in
S.M. Rump and S. Oishi. Verified Error Bounds for Multiple Roots
of Nonlinear Equations. In *2009 International Symposium on
Nonlinear Theory and its Applications, NOLTA'09, Sapporo, Japan,*
2009. (PDF, 161890 bytes) [click for download][147] a method is described how to verify that the given problem has
exactly *k* roots in a complex disc. This improves a method by Neumaier
A. Neumaier. An Existence Test for Root Clusters and Multiple Roots.
*Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM)*,
68(6):256-257, 1988.[Neu88]. In principle, the multiplicity itself cannot be verified, only
that there are *k* roots in total in that disc. Second, in
S.M. Rump and S. Graillat. Verified error bounds for multiple
roots of systems of nonlinear equations. *Numerical Algorithms*,
2009. DOI 10.1007/s11075-009-9339-3. (PDF, 440922 bytes) [click for download][148] a method is described to verify that a slightly perturbed problem has a
*k*-fold root within computable error bounds, where a bound for the
perturbation is computed as well.

Based on work by Neumaier it is shown in
S.M. Rump. Rigorous Sensitivity Analysis for Systems of Linear and
Nonlinear Equations. Math. Comput., 54(10):721-736, 1990.
(PDF, 154543 bytes) [click for download][53] and
S.M. Rump. Verification Methods for Dense and Sparse Systems of Equations.
In J. Herzberger, editor, *Topics in Validated Computations - Studies in
Computational Mathematics*, pages 63-136, Elsevier, Amsterdam,
1994. (PDF, 498000 bytes) [click for download][71] how *inner* inclusions of the solution set of
a linear or nonlinear system with interval coefficients can
be obtained practically free of cost together with a verified
inclusion. Jansson
C. Jansson. Interval Linear Systems with Symmetric Matrices, Skew-Symmetric Matrices,
and Dependencies in the Right Hand Side. *Computing*, 46:265-274, 1991.[Ja91c] shows how to compute inclusions for linear
systems with symmetric matrix, taking dependencies into account and thus obtaining much
better inclusions. Based on that, it is shown in
S.M. Rump. Verification Methods for Dense and Sparse Systems of Equations.
In J. Herzberger, editor, *Topics in Validated Computations - Studies in
Computational Mathematics*, pages 63-136, Elsevier, Amsterdam,
1994. (PDF, 498000 bytes) [click for download][71]
how to solve general parameterized systems of linear interval equations.

Methods based on the Krawczyk operator or its modifications require a
matrix inversion and interval matrix multiplication. Compared to Gaussian
elimination this implies a factor 9 in computing time. In
S. Oishi and S.M. Rump. Fast verification of solutions of matrix equations.
Numer. Math., 90(4):755-773, 2002. (PDF, 226356 bytes) [click for download][105] we present a method where the computational effort of the verification is the same as
for Gaussian elimination, so that the total verification process takes
twice as long as a pure floating-point elimination. In
S.M. Rump and T. Ogita. Super-Fast Validated Solution of Linear Systems.
*J. Comput. Appl. Math. (JCAM)*, 199(2):199-206, 2007. (PDF, 167035 bytes) [click for download][119] a super-fast
method for sparse s.p.d. linear systems is given where this factor is reduced to 1
(see sparse matrices).

In about 1984, I derived a method (see topic INTLAB on
http://www.ti3.tu-harburg.de)
to solve linear systems with an extremely ill-conditioned system matrix, see also
S.M. Rump. Approximate inverses of almost singular matrices still
contain useful information. Technical Report 90.1, Forschungsschwerpunkt
Informations- und Kommunikationstechnik, TU Hamburg-Harburg, 1990.
(PDF, 1870115 bytes) [click for download][54]. The method uses only IEEE 754 basic operations plus
an accurate dot product, as for example in
S.M. Rump, T. Ogita, and S. Oishi. Accurate Floating-point
Summation part I: Faithful Rounding. *SIAM J. Sci. Comput.*,
31(1):189-224, 2008. (PDF, 457457 bytes) [click for download][141], S.M. Rump, T. Ogita, and S. Oishi. Accurate Floating-point
Summation part II: Sign, K-fold Faithful and Rounding to Nearest.
*SIAM J. Sci. Comput.*, 31(2):1269-1302, 2008.
(PDF, 377632 bytes) [click for download][142]. It is an iterative method reducing the condition number in each step
by about a factor of eps. I never published the method because of the lack of an analysis.
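
The key ingredient of the accurate-summation papers cited above is an error-free transformation: Knuth's TwoSum computes, for floats a and b, the rounded sum s = fl(a+b) together with e such that a + b = s + e holds exactly. The following is a minimal compensated-summation sketch built on it (a simplified illustration, not the algorithms of the papers):

```python
# TwoSum (Knuth): s = fl(a+b) and e with a + b = s + e EXACTLY.
def two_sum(a, b):
    s = a + b
    bb = s - a                      # the part of b that entered s
    e = (a - (s - bb)) + (b - bb)   # exact rounding error of a + b
    return s, e

# Cascading TwoSum and accumulating the exact local errors yields a
# result as if computed in roughly twice the working precision.
def compensated_sum(xs):
    s, comp = 0.0, 0.0
    for x in xs:
        s, e = two_sum(s, x)
        comp += e
    return s + comp

data = [1e16, 1.0, -1e16]
print(sum(data))                    # 0.0: plain summation loses the 1.0
print(compensated_sum(data))        # 1.0
```

An accurate dot product is obtained similarly, by first splitting each product error-freely and then summing all parts accurately.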

The method uses multiplicative rather than additive corrections, see
S.M. Rump. Error bounds for extremely ill-conditioned problems.
In *Proceedings of 2006 International Symposium on Nonlinear Theory
and its Applications*, Bologna, Italy, September 11-14, 2006.
(PDF, 101159 bytes) [click for download][137] (see also
T. Ohta, T. Ogita, S.M. Rump, and S. Oishi.
Numerical Verification Method for Arbitrarily Ill-conditioned Linear Systems.
*Transactions on the Japan Society for Industrial and Applied Mathematics
(Trans. JSIAM)*, 15(3):269-287, 2005.
(PDF, 118303 bytes) [click for download][138]; this paper received the 2005 Best Paper Award of the Japanese SIAM).
To test the method it was necessary to construct arbitrarily ill-conditioned matrices
exactly storable in single or double precision
S.M. Rump. A Class of Arbitrarily Ill-conditioned
Floating-Point Matrices.
*SIAM Journal on Matrix Analysis and Applications (SIMAX)*,
12(4):645-653, 1991. (PDF, 129968 bytes) [click for download][65]. It still seems mysterious that an approximate inverse of a matrix
with condition number greater than 10^{300}, computed in double precision
(i.e., with 53 bits precision), contains useful information at all. A first analysis with
some restrictions is given in
S. Oishi, K. Tanabe, T. Ogita, and S.M. Rump.
Convergence of Rump's Method for Inverting Arbitrarily Ill-Conditioned
Matrices. *J. Comput. Appl. Math.*, 205(1):533-544, 2007.
(PDF, 126985 bytes) [click for download][136], and based on that a full analysis is
derived in
S.M. Rump. Inversion of extremely ill-conditioned
matrices in floating-point.
*Japan J. Indust. Appl. Math. (JJIAM)*, 26:1-29, 2009. (PDF, 348003 bytes)
[click for download][143].
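
The phenomenon itself is easy to observe numerically (an illustration of preconditioning by an approximate inverse, not of Rump's inversion algorithm; note that the Hilbert matrix below is rounded when stored, whereas the matrices of [65] are exactly storable in floating-point):

```python
import numpy as np

# Even for a very ill-conditioned matrix, the approximate inverse R
# computed in double precision is an excellent preconditioner:
# R*A is close to the identity and hence well conditioned, i.e. R
# still carries essentially full information about A^{-1}.

n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

R = np.linalg.inv(A)               # approximate inverse in double precision

print(np.linalg.cond(A))           # roughly 1.6e13: near the limit of double
print(np.linalg.cond(R @ A))      # close to 1: R is a good preconditioner
```

For condition numbers far beyond 1/eps this one preconditioning step no longer suffices, which is where the iterative, accurate-dot-product-based method described above comes in.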

Verification or self-validating methods
S.M. Rump. Self-validating methods.
*Linear Algebra and its Applications (LAA)*,
324:3-13, 2001. (PDF, 163273 bytes) [click for download][98] can be used to
assist mathematical proofs. Connections between self-validating
methods and computer-assisted proofs are described in
S.M. Rump. Computer-assisted proofs and Self-Validating Methods.
In B. Einarsson, editor, *Handbook on Accuracy and Reliability in
Scientific Computation*, pages 195-240. SIAM, 2005. (PDF, 4690342 bytes) [click for download][122], a paper which was also translated into Japanese
S.M. Rump. Computer-Assisted Proofs II. *Bulletin of the Japan Society
for Industrial and Applied Mathematics (Bull. JSIAM)*, 14(4):44-57,
2004, translated by T. Ogita. (PDF, 147675 bytes) [click for download][116], S.M. Rump. Computer-Assisted Proofs I. *Bulletin of the Japan Society
for Industrial and Applied Mathematics (Bull. JSIAM)*, 14(3):2-11,
2004, translated by T. Ogita. (PDF, 115508 bytes) [click for download][117]. My early work in computer algebra includes the computation of the sign
of a real algebraic number
S.M. Rump. On the Sign of a Real Algebraic Number. In *Proceedings of
the 1976 ACM Symposium on Symbolic and Algebraic Computation*,
pages 238-241, New York, 1976. (PDF, 188515 bytes) [click for download][1], which is used to compute separating intervals
for the real roots of polynomials with coefficients in algebraic number fields
S.M. Rump. Ein Algorithmus zur Isolierung der reellen Nullstellen
eines Polynoms mit algebraischen Koeffizienten, Rechenzeitanalyse und
Implementierung. Master's thesis, Universität Kaiserslautern, 1976.[76]. For the analysis of this algorithm a bound for the minimal root separation
was derived
S.M. Rump. Polynomial Minimum Root Separation. *Math. Comput.*,
33(145):327-336, 1979. (PDF, 1185999 bytes) [click for download][79], the first bound of this quality
for polynomials with multiple roots.
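
The classical ingredient behind such root-isolation algorithms can be sketched with a Sturm sequence, which counts the real roots of a squarefree polynomial in an interval; repeated bisection then yields separating intervals, and a minimum-root-separation bound guarantees termination. The sketch below (my own simplified illustration in exact rational arithmetic, not the thesis algorithm) assumes a squarefree input:

```python
from fractions import Fraction as F

def poly_eval(p, x):                       # p = [a0, a1, ...], low to high
    return sum(c * x**i for i, c in enumerate(p))

def poly_rem(a, b):                        # remainder of a divided by b
    a = a[:]
    while len(a) >= len(b) and any(a):
        q = a[-1] / b[-1]
        for i in range(len(b)):            # cancel the leading term
            a[len(a) - len(b) + i] -= q * b[i]
        a.pop()
    while a and a[-1] == 0:                # strip trailing zeros
        a.pop()
    return a

def sturm_chain(p):
    chain = [p, [i * c for i, c in enumerate(p)][1:]]   # p and p'
    while len(chain[-1]) > 1:              # until a constant remains
        r = poly_rem(chain[-2], chain[-1])
        if not r:
            break                          # p was not squarefree
        chain.append([-c for c in r])      # negated Euclidean remainder
    return chain

def sign_changes(chain, x):
    signs = [poly_eval(q, x) for q in chain]
    signs = [s for s in signs if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if (a < 0) != (b < 0))

def roots_in(p, lo, hi):                   # number of real roots in (lo, hi]
    c = sturm_chain(p)
    return sign_changes(c, lo) - sign_changes(c, hi)

p = [F(0), F(-1), F(0), F(1)]              # x^3 - x, roots -1, 0, 1
print(roots_in(p, F(-2), F(2)))            # 3
print(roots_in(p, F(1, 2), F(2)))          # 1
```

Bisecting an interval until each piece contains exactly one root produces the desired separating intervals.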

There are a number of measures to certify the effectiveness of hedges.
In
A.C. Hailer and S.M. Rump. Hedge-Effektivität: Lösung des Problems
der kleinen Zahlen. (Hedge Accounting nach FAS 133 bzw.
IAS 39). *Zeitschrift für das gesamte Kreditwesen*,
56(11):599-603, 2003.[114] we address the problem of small
numbers, and in
A.C. Hailer and S.M. Rump. Evaluierung von Hedge-Effektivitätstests.
*Zeitschrift für das gesamte Kreditwesen*, 58(20):1089-1097, 2005.[128] we give a number of criteria for optimal tests, together with a new
hedge-effectiveness test
A. Hailer and S.M. Rump. Evaluation of Hedge Effectiveness Tests.
*Journal of Derivatives Accounting (JDA)*, 2(1):31-52, 2005.
(PDF, 497918 bytes) [click for download][129]. Our methods are apparently used by large banks.
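
The "problem of small numbers" can be illustrated with the widely used dollar-offset test (a simplified sketch with made-up numbers; the test deems a hedge effective if the ratio of the value changes of hedging instrument and hedged item lies within the 80%-125% band of FAS 133 / IAS 39):

```python
# Dollar-offset test: the hedge is effective if the (offsetting) ratio
# of value changes lies in [0.80, 1.25].
def dollar_offset_effective(d_hedge, d_item):
    ratio = -d_hedge / d_item     # offsetting changes give ratio > 0
    return 0.80 <= ratio <= 1.25

# Large value changes: an absolute mismatch of 10 units passes easily.
print(dollar_offset_effective(-1000.0, 1010.0))   # True

# Small value changes: the SAME absolute mismatch of 10 units fails,
# although the hedge is economically just as good -- the "problem of
# small numbers".
print(dollar_offset_effective(-1.0, 11.0))        # False
```

The test becomes arbitrarily sensitive to economically negligible noise as the value changes shrink, which is precisely the instability the papers above address.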