THE CONDITIONING AND BACKWARD ERROR OF THE MACKEY PENCIL
ASHLEY MAILLOUX AND LUCAS MEDINA
ADVISOR: MARIBEL BUENO
Abstract. The standard way of solving the polynomial eigenvalue problem (PEP) $P(\lambda)x = 0$, where $P(\lambda)$ is the matrix polynomial $A_k\lambda^k + \cdots + A_1\lambda + A_0$ and $A_0, \ldots, A_k$ are $n \times n$ matrices, is to use linearizations (degree-1 matrix polynomials that share the eigenstructure of $P(\lambda)$). There are efficient algorithms, like the QZ method, for solving the PEP via a linearization. When choosing a linearization, it is convenient that it have conditioning and backward error similar to those of $P(\lambda)$ while also preserving structural properties such as symmetries in the coefficients. In our work, we study the Mackey pencil $T(\lambda)$, which is block-symmetric. In the literature, the pencils $D_1$ and $D_k$, belonging to the vector space DL(P), were shown to be optimal with respect to the conditioning and backward error associated with an eigenpair $(\delta, x)$ when $|\delta| \geq 1$ and $|\delta| \leq 1$, respectively. Moreover, $D_1$ and $D_k$ are linearizations only if $A_0$ and $A_k$, respectively, are nonsingular. Given a simple, finite, nonzero eigenvalue of $P(\lambda)$, we provide explicit expressions for the condition number and backward error of an eigenpair $(\delta, x)$ for both $P(\lambda)$ and $T(\lambda)$, as well as bounds on the quotients of these numbers. Unlike $D_1$ and $D_k$, the numerical behavior of the Mackey pencil does not depend on the modulus of $\delta$, and $T(\lambda)$ is a linearization of $P(\lambda)$ without requiring $A_k$ or $A_0$ to be nonsingular. The only drawback is that this pencil is available only for odd $k$. We also provide results from numerical experiments.
1. Introduction
Let $P(\lambda)$ be the $n \times n$ matrix polynomial of degree $k$
$$P(\lambda) = \sum_{i=0}^{k} \lambda^i A_i, \qquad A_i \in \mathbb{C}^{n \times n}.$$
In this paper, we assume that $P$ is regular, that is, $\det(P(\lambda)) \not\equiv 0$. If $x$ and $y$ are nonzero vectors satisfying the polynomial eigenvalue problem (PEP) $P(\lambda)x = 0$ and $y^* P(\lambda) = 0$, then we say that $x$ and $y$ are a right and a left eigenvector, respectively. The scalar $\lambda$ is called a (finite) eigenvalue of $P(\lambda)$.
The standard method of solving the PEP is to use a matrix pencil, a matrix polynomial of degree 1 that is a linearization of $P(\lambda)$. We say that $L(\lambda) = \lambda L_1 + L_0$ is a linearization of $P(\lambda)$ if there exist two unimodular matrices $F(\lambda)$ and $G(\lambda)$ such that
$$F(\lambda)L(\lambda)G(\lambda) = \begin{bmatrix} I_{(k-1)n} & 0 \\ 0 & P(\lambda) \end{bmatrix}.$$
When the degree of $P(\lambda)$ is larger than 1, the PEP becomes difficult to solve numerically. Choosing a linearization of $P$ proves to be an easier way to solve the PEP since there are
efficient algorithms for the generalized eigenvalue problem. The new problem that we are
solving is
$$L(\lambda)z = 0,$$
where $z$ is a right eigenvector of $L$.
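As a concrete illustration (a minimal sketch of our own, not part of the paper's experiments; the pencil below is random rather than an actual linearization), the problem $L(\lambda)z = 0$ can be solved as a generalized eigenvalue problem with the QZ-based routine scipy.linalg.eig:

```python
import numpy as np
from scipy.linalg import eig

# Minimal sketch: L(lambda) z = (lambda*L1 + L0) z = 0 is the generalized
# eigenproblem  -L0 z = lambda * L1 z.  A random pencil stands in for an
# actual linearization of P.
rng = np.random.default_rng(0)
n = 4
L1 = rng.standard_normal((2 * n, 2 * n))
L0 = rng.standard_normal((2 * n, 2 * n))

# scipy.linalg.eig(a, b) solves a v = w b v (LAPACK ggev, i.e., the QZ method).
eigvals, Z = eig(-L0, L1)

lam, z = eigvals[0], Z[:, 0]                 # one eigenpair of the pencil
print(np.linalg.norm((lam * L1 + L0) @ z))   # residual near machine precision
```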
When $P(\lambda)$ is structured (Hermitian, skew-Hermitian, palindromic, ...), it is convenient to use linearizations that preserve the structural properties of $P$. For example, if $P$ is Hermitian, meaning that $A_i = A_i^*$ for $i = 0\colon k$, we want a linearization $L(\lambda) = \lambda L_1 + L_0$ such that $L_i^* = L_i$ for $i = 0, 1$. Note that $A^*$ denotes the complex conjugate transpose of the matrix $A$.
Also, when applying an algorithm to solve the eigenvalue problem for $L(\lambda)$, we would like the linearization to be comparable to $P(\lambda)$ with respect to other properties, such as conditioning and backward error. It is not desirable to choose a linearization with worse conditioning than $P(\lambda)$. Moreover, the standard algorithm for the eigenvalue problem for $L$, the QZ method, produces small backward errors. Thus, we want to choose a linearization whose backward error is approximately the same as that of $P$. Linearizations with these properties are considered optimal for our purposes.
In 2006, Mackey, Mackey, Mehl, and Mehrmann [4] introduced a vector space of pencils named DL(P). The standard basis for this vector space contains $k$ pencils, denoted by $D_1, \ldots, D_k$. Extensive research has been done on the pencils in this space regarding conditioning and backward error. In [4], it was shown that there are two optimal linearizations (with respect to conditioning and backward error) in DL(P): $D_1$ and $D_k$, which are optimal when $|\delta| \geq 1$ and $|\delta| \leq 1$, respectively. Moreover, $D_1$ and $D_k$ are linearizations of $P$ only if $A_0$ and $A_k$, respectively, are nonsingular.
The goal of our project is to find a block-symmetric pencil that is a linearization of $P(\lambda)$, behaves comparably to $P(\lambda)$ with respect to conditioning and backward error, and does not require any coefficient to be nonsingular. Furthermore, we sought a pencil that behaves optimally regardless of the eigenvalues of $P(\lambda)$.
Through our research, we were able to find such a pencil, but only for matrix polynomials of odd degree. We constructed bounds on the backward error for this pencil and present the proofs of these bounds in this paper. We also conducted numerical experiments comparing the conditioning and backward error of our pencil with those of other pencils.
The paper is structured as follows. In Section 2, we present relevant background information with examples to motivate our basic concepts. In Section 3, we discuss important pencils in the literature. In Section 4, we describe the scaling technique used in our numerical experiments. In Section 5, we present our numerical experiments, and we end the paper with our conclusions and future work.
2. Background
We begin by defining terms used throughout the paper; these concepts motivate the results presented in later sections. This information has been drawn from several publications, all of which are referenced accordingly.
It is important to recall that the 2-norm of a matrix $A$ is $\|A\|_2 = \max_{\|x\|_2 = 1} \|Ax\|_2$. Also note that, in this paper, $\otimes$ denotes the Kronecker product.
2.1. Forward Error. Any algorithm used to solve the PEP will produce some forward error. The relative forward error is
$$\frac{|\hat{y} - y|}{|y|},$$
where $\hat{y}$ is the computed value of $y$.
For our problem, the forward error is bounded above by the product of the condition number and the backward error of the problem. Specifically,
forward error ≤ condition number × backward error.
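To make the inequality concrete, here is a small self-contained illustration of our own (it uses a linear system $Ax = b$ rather than the PEP, and the Rigal-Gaches formula for the normwise backward error of a linear system):

```python
import numpy as np

# Illustrative sketch for a linear system A x = b (not the PEP itself):
# relative forward error <~ cond(A) * backward error, to first order.
rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

x_hat = np.linalg.solve(A, b)  # computed solution
forward = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
# Rigal-Gaches normwise backward error of x_hat:
r = b - A @ x_hat
backward = np.linalg.norm(r) / (np.linalg.norm(A, 2) * np.linalg.norm(x_hat)
                                + np.linalg.norm(b))
print(f"forward error         = {forward:.2e}")
print(f"cond * backward error = {np.linalg.cond(A, 2) * backward:.2e}")
```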
2.2. Conditioning. To think of conditioning in a general sense, let us consider a polynomial of degree 2. For example, let us examine
$$x^2 - 2x + 1 = 0,$$
$$(x - 1)^2 = 0,$$
$$x = 1 \quad \text{(double root)}.$$
When using a computer to compute solutions, there will be errors due to rounding and truncation. So let us examine what happens when one of the coefficients changes slightly: change the constant term to $0.9999$. The problem that we are now solving is
$$x^2 - 2x + 0.9999 = 0,$$
$$(x - 0.99)(x - 1.01) = 0,$$
$$x = 0.99,\ 1.01.$$
Notice that a small change in the constant term, exactly a change of $1.0 \times 10^{-4}$, resulted in a change in the roots of $1.0 \times 10^{-2}$. So a slight change in the input caused a much larger change in the output.
We can classify the conditioning of a problem in one of two ways: a problem is said to be well-conditioned or ill-conditioned. A problem is well-conditioned if a small perturbation in the input results in a small change in the output; it is ill-conditioned if a small perturbation in the input results in a large change in the output. Thus, the conditioning of a problem concerns the sensitivity of the problem, not the algorithm. The words small and large are obviously subjective, and whether a given condition number counts as small or large depends on the problem at hand.
For this specific problem, we compute the roots for an arbitrary change $\Delta$ in the constant term in order to classify the problem with respect to conditioning:
$$x^2 - 2x + (1 + \Delta) = 0.$$
Using the quadratic formula,
$$x + \delta x = \frac{2 \pm \sqrt{4 - 4(1 + \Delta)}}{2} = \frac{2 \pm \sqrt{-4\Delta}}{2} = 1 \pm \sqrt{-\Delta}.$$
Here $x = 1$ is the unperturbed root and $\delta x = \pm\sqrt{-\Delta}$ is the change it undergoes when the coefficient $a = 1$ is perturbed by $\Delta$. The relative condition number is then
$$\lim_{\Delta \to 0} \frac{|\delta x| / |x|}{|\Delta| / |a|} = \lim_{\Delta \to 0} \frac{\sqrt{|\Delta|}}{|\Delta|} = \lim_{\Delta \to 0} \frac{1}{\sqrt{|\Delta|}} = \infty.$$
A small change in the input results in an unboundedly larger change in the output. Thus, we can classify the example above as an ill-conditioned problem.
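A quick numerical check of this example (a sketch of ours using NumPy) shows the $\sqrt{|\Delta|}$ behavior of the roots directly:

```python
import numpy as np

# Perturbing the constant term of x^2 - 2x + 1 by Delta moves the double
# root x = 1 by about sqrt(|Delta|), so (root change)/(coefficient change)
# blows up as Delta -> 0.
for delta in [-1e-4, -1e-8, -1e-12]:
    roots = np.roots([1.0, -2.0, 1.0 + delta])
    change = np.max(np.abs(roots - 1.0))
    print(f"Delta = {delta:.0e}: root change = {change:.2e}, "
          f"ratio = {change / abs(delta):.2e}")
```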
For our problem, the explicit expression for the condition number is
$$\kappa_P(\lambda) = \frac{\left(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\right) \|y\|_2 \|x\|_2}{|\lambda| \, |y^* P'(\lambda) x|},$$
where $x$ and $y$ are right and left eigenvectors, $\lambda$ is a simple, finite, nonzero eigenvalue, and $P'$ is the entrywise derivative of $P$.
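For reference, this formula translates directly to NumPy; the following is our own sketch (the function name kappa_P and the coefficient-list convention are ours):

```python
import numpy as np

def kappa_P(lam, x, y, coeffs):
    """Condition number of a simple, finite, nonzero eigenvalue lam of
    P(lambda) = sum_i lambda^i A_i, with right/left eigenvectors x, y.
    coeffs = [A0, A1, ..., Ak]."""
    weights = sum(abs(lam) ** i * np.linalg.norm(A, 2)
                  for i, A in enumerate(coeffs))
    # Entrywise derivative P'(lambda) = sum_{i>=1} i * lambda^(i-1) * A_i
    Pprime = sum(i * lam ** (i - 1) * A
                 for i, A in enumerate(coeffs) if i >= 1)
    return (weights * np.linalg.norm(y) * np.linalg.norm(x)
            / (abs(lam) * abs(np.conj(y) @ Pprime @ x)))
```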
2.3. Backward Error. Unlike conditioning, the backward error concerns an error in the algorithm itself. When an algorithm computes a solution, it is not the exact solution of the given problem; the backward error seeks the exact problem that the computed solution solves. If we have a problem $f(a) = b$ and the algorithm produces the solution $\hat{b}$, we must find the exact problem that yields $\hat{b}$, that is,
$$f(a + \Delta a) = \hat{b}.$$
In a more formal setting, the normwise backward error [1] of an approximate eigenpair $(\hat{x}, \hat{\lambda})$ of $P(\lambda)$, where $\hat{\lambda}$ is finite, is defined as
$$\eta_P(\hat{x}, \hat{\lambda}) = \min\{\epsilon : (P(\hat{\lambda}) + \Delta P(\hat{\lambda}))\hat{x} = 0, \ \|\Delta A_i\|_2 \leq \epsilon \|A_i\|_2, \ i = 0\colon k\},$$
where $\Delta P(\lambda) = \sum_{i=0}^{k} \lambda^i \Delta A_i$.
For our research, we use the explicit formula for the backward error constructed by Tisseur [1]:
$$\eta_P(\hat{x}, \hat{\lambda}) = \frac{\|P(\hat{\lambda})\hat{x}\|_2}{\left(\sum_{i=0}^{k} |\hat{\lambda}|^i \|A_i\|_2\right) \|\hat{x}\|_2},$$
where $(\hat{x}, \hat{\lambda})$ is a computed eigenpair of $P$.
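A companion sketch of ours for Tisseur's formula, in the same style as the condition-number snippet above:

```python
import numpy as np

def eta_P(lam_hat, x_hat, coeffs):
    """Normwise backward error of a computed eigenpair (x_hat, lam_hat) of
    P(lambda) = sum_i lambda^i A_i, coeffs = [A0, A1, ..., Ak]."""
    P_val = sum(lam_hat ** i * A for i, A in enumerate(coeffs))
    weights = sum(abs(lam_hat) ** i * np.linalg.norm(A, 2)
                  for i, A in enumerate(coeffs))
    return np.linalg.norm(P_val @ x_hat) / (weights * np.linalg.norm(x_hat))
```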
3. Noteworthy Pencils in the Literature
3.1. The Vector Space DL(P). Recall the vector space DL(P) that we mentioned briefly in the introduction. There are two vector spaces, $L_1(P)$ and $L_2(P)$, in which almost all pencils are linearizations of the matrix polynomial $P(\lambda)$. For linearizations in $L_1(P)$, it has been shown that the right eigenvectors of the matrix polynomial are easy to recover from the right eigenvectors of the linearization. Similarly, the left eigenvectors of $P(\lambda)$ can be easily recovered from the left eigenvectors of a linearization in $L_2(P)$. Mackey, Mackey, Mehl, and Mehrmann [4] constructed DL(P) as the intersection of these vector spaces.
Definition 3.1. For any matrix polynomial $P(\lambda)$ of size $n \times n$, the double ansatz space of $P$ is
$$DL(P) := L_1(P) \cap L_2(P),$$
which is the set of $nk \times nk$ pencils $L(\lambda)$ that simultaneously satisfy
$$L(\lambda) \cdot (\Lambda \otimes I) = v \otimes P(\lambda) \quad \text{for some } v \in \mathbb{F}^k,$$
$$(\Lambda^T \otimes I) \cdot L(\lambda) = w^T \otimes P(\lambda) \quad \text{for some } w \in \mathbb{F}^k,$$
where $\Lambda = (\lambda^{k-1}, \ldots, \lambda, 1)^T$. These conditions are referred to as the "right" and "left" ansatz, respectively.
Recall that almost all of the pencils in this vector space are linearizations of $P(\lambda)$ and that the standard basis for this vector space contains $k$ pencils, denoted by $D_1, \ldots, D_k$. The optimal linearizations in DL(P) with respect to conditioning and backward error are $D_1$ and $D_k$, which are optimal when $|\delta| \geq 1$ and $|\delta| \leq 1$, respectively.
The bounds for the conditioning of the pencils $D_1$ and $D_k$ provided below were obtained by Bueno, Dopico, and Furtado [2].
Theorem 1. Let $P(\lambda)$ be a regular matrix polynomial of degree $k$ with $A_0 \neq 0$. Assume that $\delta$ is a simple, finite, nonzero eigenvalue of $P(\lambda)$. Let $\rho = \frac{\max_{i=0:k}\{\|A_i\|_2\}}{\min\{\|A_k\|_2, \|A_0\|_2\}}$ and consider the natural weights for $D_1(\lambda, P)$ and $P(\lambda)$.
• If $A_0$ is nonsingular and $|\delta| \geq 1$, then
$$\frac{1}{k\rho} \leq \frac{\kappa_{D_1}(\delta)}{\kappa_P(\delta)} \leq 2k^2\rho.$$
• If $A_k$ is nonsingular and $|\delta| \leq 1$, then
$$\frac{1}{k\rho} \leq \frac{\kappa_{D_k}(\delta)}{\kappa_P(\delta)} \leq 2k^2\rho.$$
The backward error bounds were constructed by Higham, Li, and Tisseur for the homogeneous form of $P(\lambda)$, which allows $\lambda$ to be infinite or zero. In Section 4, we define the scaling technique used in our experiments, which preserves the conditioning and backward error. If $\lambda$ were infinite or 0, the scaling would no longer preserve the conditioning or the backward error; thus, in our research we only considered finite, simple, nonzero eigenvalues.
The following two theorems give upper bounds on the backward error for any $L \in DL(P)$. We were only able to treat the right eigenvectors of $P$ and $L$, so the computations for the left eigenvectors remain future work.
Theorem 2. Let $P(\lambda)$ be a matrix polynomial and let $L(\lambda) \in DL(P)$ be a linearization of $P$ with ansatz vector $v$. Let $z$ be a right eigenvector of $L$ and let $x = (v^T \otimes I_n)z$. Let $\eta_P$ and $\eta_L$ be the backward errors for the matrix polynomial and for $L \in DL(P)$. The ratio $\frac{\eta_P}{\eta_L}$ is bounded above as
$$\frac{\eta_P}{\eta_L} \leq k r^{1/2} \rho \, \frac{|\lambda| + 1}{|\lambda|^k + 1} \, \frac{\|z\|_2}{\|x\|_2},$$
where $\rho = \frac{\max\{\|A_i\|_2\}}{\min\{\|A_k\|_2, \|A_0\|_2\}}$.
Proof. Since $L(\lambda)z = v \otimes P(\lambda)x$, it follows that
$$\|v\|_2 \|P(\lambda)x\|_2 = \|v \otimes P(\lambda)x\|_2 = \|L(\lambda)z\|_2,$$
so that
$$\|P(\lambda)x\|_2 = \frac{\|L(\lambda)z\|_2}{\|v\|_2}.$$
Dividing both sides by $\left(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\right) \|x\|_2$, it follows that
$$\frac{\|P(\lambda)x\|_2}{\left(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\right) \|x\|_2} = \frac{\|L(\lambda)z\|_2}{\left(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\right) \|x\|_2 \|v\|_2}.$$
Hence,
$$\frac{\eta_P}{\eta_L} = \frac{\|P(\lambda)x\|_2}{\left(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\right) \|x\|_2} \cdot \frac{(|\lambda| \|L_1\|_2 + \|L_0\|_2)\|z\|_2}{\|L(\lambda)z\|_2} = \frac{(|\lambda| \|L_1\|_2 + \|L_0\|_2)\|z\|_2}{\left(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\right) \|x\|_2 \|v\|_2} \leq \frac{(|\lambda| + 1)\max\{\|L_1\|_2, \|L_0\|_2\}\|z\|_2}{\left(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\right) \|x\|_2 \|v\|_2},$$
where the last inequality follows because
$$|\lambda| \|L_1\|_2 + \|L_0\|_2 \leq |\lambda| \max\{\|L_1\|_2, \|L_0\|_2\} + \max\{\|L_1\|_2, \|L_0\|_2\} = (|\lambda| + 1)\max\{\|L_1\|_2, \|L_0\|_2\}.$$
By [2, Theorem 3.20],
$$\max\{\|L_1\|_2, \|L_0\|_2\} \leq k r^{1/2} \max\{\|A_i\|_2\} \|v\|_2,$$
so that
$$\frac{\eta_P(x, \lambda)}{\eta_L(z, \lambda)} \leq \frac{(|\lambda| + 1)\max\{\|L_1\|_2, \|L_0\|_2\}\|z\|_2}{\left(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\right) \|x\|_2 \|v\|_2} \leq \frac{k r^{1/2} (|\lambda| + 1)\max\{\|A_i\|_2\}\|z\|_2}{\left(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\right) \|x\|_2}.$$
It follows that
$$\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2 \geq |\lambda|^k \|A_k\|_2 + \|A_0\|_2 \geq (|\lambda|^k + 1)\min\{\|A_k\|_2, \|A_0\|_2\}.$$
Then, a calculation shows
$$\frac{\eta_P(x, \lambda)}{\eta_L(z, \lambda)} \leq k r^{1/2} \, \frac{|\lambda| + 1}{|\lambda|^k + 1} \, \frac{\max\{\|A_i\|_2\}}{\min\{\|A_0\|_2, \|A_k\|_2\}} \, \frac{\|z\|_2}{\|x\|_2} = k r^{1/2} \rho \, \frac{|\lambda| + 1}{|\lambda|^k + 1} \, \frac{\|z\|_2}{\|x\|_2}. \qquad \square$$
Notice that this upper bound on the backward error ratio depends on the modulus of the eigenvalue. This requires the calculation of the eigenvalues and, furthermore, if the eigenvalue is very large or very small, the bound could become extremely large or virtually 0. The next bound we provide is an alternative that does not depend on the modulus of the eigenvalue.
Theorem 3. Let $P(\lambda)$ be a matrix polynomial and let $L(\lambda) \in DL(P)$ be a linearization of $P$ with ansatz vector $v$. Let $z$ be a right eigenvector of $L$ and let $x = (v^T \otimes I_n)z$. Let $\eta_P$ and $\eta_L$ be the backward errors for the matrix polynomial and for $L \in DL(P)$. The ratio $\frac{\eta_P}{\eta_L}$ is bounded above as
$$\frac{\eta_P}{\eta_L} \leq k^{3/2} r^{1/2} \rho \, \frac{\|z\|_2}{\|x\|_2}.$$
Proof. Note that
$$\|P(\lambda)x\|_2 = \|P(\lambda)(v^T \otimes I_n)z\|_2 = \|(v^T \otimes P(\lambda))z\|_2 = \|(\Lambda^T \otimes I_n)L(\lambda)z\|_2 \leq \|\Lambda^T \otimes I_n\|_2 \, \|L(\lambda)z\|_2.$$
Thus,
$$\frac{\eta_P}{\eta_L} = \frac{\|P(\lambda)x\|_2}{\left(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\right)\|x\|_2} \cdot \frac{(|\lambda|\|L_1\|_2 + \|L_0\|_2)\|z\|_2}{\|L(\lambda)z\|_2} \leq \frac{\|\Lambda^T(\lambda) \otimes I_n\|_2 \, \|L(\lambda)z\|_2 \, (|\lambda|\|L_1\|_2 + \|L_0\|_2)\|z\|_2}{\left(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\right)\|x\|_2 \, \|L(\lambda)z\|_2}.$$
From the techniques used in the proof of the previous theorem,
$$\frac{\eta_P}{\eta_L} \leq \frac{\|\Lambda^T(\lambda) \otimes I_n\|_2 \, (|\lambda| + 1)\max\{\|L_1\|_2, \|L_0\|_2\}\, \|z\|_2}{\left(\sum_{i=0}^{k} |\lambda|^i \|A_i\|_2\right)\|x\|_2} \leq \frac{\|\Lambda^T(\lambda) \otimes I_n\|_2 \, (|\lambda| + 1)\, k r^{1/2} \max\{\|A_i\|_2\}\, \|z\|_2}{(|\lambda|^k + 1)\min\{\|A_k\|_2, \|A_0\|_2\}\, \|x\|_2}.$$
By Lemma A.1 in [5],
$$\frac{\|\Lambda^T(\lambda) \otimes I_n\|_2 \, (|\lambda| + 1)}{|\lambda|^k + 1} \leq \sqrt{k}.$$
Thus,
$$\frac{\eta_P}{\eta_L} \leq \frac{k^{3/2} r^{1/2} \max\{\|A_i\|_2\}}{\min\{\|A_k\|_2, \|A_0\|_2\}} \, \frac{\|z\|_2}{\|x\|_2} = k^{3/2} r^{1/2} \rho \, \frac{\|z\|_2}{\|x\|_2},$$
where $\rho = \frac{\max\{\|A_i\|_2\}}{\min\{\|A_k\|_2, \|A_0\|_2\}}$. $\square$
Even though these bounds hold for any linearization in DL(P), the two optimal linearizations are $D_1$ and $D_k$. Unfortunately, $D_1$ and $D_k$ are linearizations of $P(\lambda)$ only if $A_0$ and $A_k$, respectively, are nonsingular. Another disadvantage of these two pencils is that the bounds on the backward error depend on the modulus of the eigenvalue. These pencils are optimal linearizations, but only under the conditions mentioned.
3.2. Mackey Pencil. As seen in Section 3.1, the pencils $D_1$ and $D_k$ were shown to be optimal in terms of conditioning and backward error, but both depend on the modulus of the eigenvalue. Finding a pencil that numerically behaves like $D_1$ and $D_k$ for all eigenvalues would prove extremely helpful, as such a pencil could replace the other two. This section of the paper focuses on one specific block-symmetric linearization of $P(\lambda)$, which we will refer to as the Mackey pencil.
Definition 3.2. The Mackey pencil $T(\lambda)$ is a linearization of $P(\lambda)$ of the following form, depending on $k$.
If $k$ is even,
$$T(\lambda) = \lambda \operatorname{diag}\!\left(\begin{bmatrix} 0 & I \\ I & A_{k-1} \end{bmatrix}, \begin{bmatrix} 0 & I \\ I & A_{k-3} \end{bmatrix}, \ldots, \begin{bmatrix} 0 & I \\ I & A_1 \end{bmatrix}\right) - \operatorname{diag}\!\left(A_k^{-1}, \begin{bmatrix} -A_{k-2} & I \\ I & 0 \end{bmatrix}, \ldots, \begin{bmatrix} -A_2 & I \\ I & 0 \end{bmatrix}, -A_0\right).$$
If $k$ is odd,
$$T(\lambda) = \lambda \operatorname{diag}\!\left(A_k, \begin{bmatrix} 0 & I \\ I & A_{k-2} \end{bmatrix}, \ldots, \begin{bmatrix} 0 & I \\ I & A_1 \end{bmatrix}\right) - \operatorname{diag}\!\left(\begin{bmatrix} -A_{k-1} & I \\ I & 0 \end{bmatrix}, \ldots, \begin{bmatrix} -A_2 & I \\ I & 0 \end{bmatrix}, -A_0\right).$$
(The two block-diagonal summands are offset from each other by $n$ rows and columns, so $T(\lambda)$ itself is block tridiagonal.)
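To make the odd-$k$ construction concrete, here is a sketch of our own (the function name mackey_pencil and the coefficient-list convention are ours) that builds $T(\lambda) = \lambda T_1 - T_0$ from the coefficients and checks that every eigenvalue of the pencil is an eigenvalue of $P(\lambda)$:

```python
import numpy as np
from scipy.linalg import block_diag, eig

def mackey_pencil(coeffs):
    """Mackey pencil T(lambda) = lambda*T1 - T0 for coeffs = [A0, ..., Ak],
    k odd. Returns (T1, T0), each kn x kn."""
    k = len(coeffs) - 1
    n = coeffs[0].shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    # T1 = diag(A_k, [0 I; I A_{k-2}], ..., [0 I; I A_1])
    blocks1 = [coeffs[k]] + [np.block([[Z, I], [I, coeffs[i]]])
                             for i in range(k - 2, 0, -2)]
    # T0 = diag([-A_{k-1} I; I 0], ..., [-A_2 I; I 0], -A_0)
    blocks0 = [np.block([[-coeffs[i], I], [I, Z]])
               for i in range(k - 1, 1, -2)] + [-coeffs[0]]
    return block_diag(*blocks1), block_diag(*blocks0)

# Check for a random cubic (k = 3): at every eigenvalue of the pencil,
# P(lambda) is singular, i.e., its smallest singular value is ~ 0.
rng = np.random.default_rng(2)
A = [rng.standard_normal((3, 3)) for _ in range(4)]   # A0, A1, A2, A3
T1, T0 = mackey_pencil(A)
lams = eig(T0, T1, right=False)   # T(lam) z = 0  <=>  T0 z = lam * T1 z
sigma_min = [np.linalg.svd(sum(l ** i * Ai for i, Ai in enumerate(A)),
                           compute_uv=False)[-1] for l in lams]
print(max(sigma_min))             # near machine precision
```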
Unlike for the pencils in DL(P), when $k$ is odd neither $A_0$ nor $A_k$ needs to be nonsingular. When $k$ is even, we see that $A_k$ must be nonsingular, which could be problematic. In this section of the paper we focus on odd $k$ because of its good theoretical behavior; later on, we will show numerical experiments for even $k$, explaining why we did not explore the even case further theoretically.
The bounds for the condition number were constructed by Bueno, Dopico, and Furtado [2] for generalized Fiedler pencils.
Theorem 4. Let $P(\lambda)$ be a regular matrix polynomial of odd degree $k$ with $A_0 \neq 0$. Assume that $\delta$ is a simple, finite, nonzero eigenvalue of $P(\lambda)$. Let $T(\lambda)$ be the Mackey pencil of $P(\lambda)$. Then,
$$\frac{1}{k} \leq \frac{\kappa_T(\delta)}{\kappa_P(\delta)} \leq 2k^4 \rho_1,$$
where $\rho_1 = \frac{\max_{i=0:k}\{\|A_i\|_2, 1\}^3}{\min\{\|A_k\|_2, \|A_0\|_2\}}$ and $\rho_2 = \frac{\min\{\max\{\|A_k\|_2, 1\}, \max\{\|A_0\|_2, 1\}\}}{\max\{\|A_i\|_2\}}$.
Definition 3.3. For a computed eigenpair, the backward errors for the matrix polynomial $P(\lambda)$ and for the Mackey pencil $T(\lambda)$ are given by
$$\eta_P(\hat{x}, \hat{\delta}) = \frac{\|P(\hat{\delta})\hat{x}\|_2}{\left(\sum_{i=0}^{k} |\hat{\delta}|^i \|A_i\|_2\right)\|\hat{x}\|_2},$$
where $(\hat{x}, \hat{\delta})$ is a computed eigenpair of $P$, and
$$\eta_T(\hat{z}, \hat{\delta}) = \frac{\|T(\hat{\delta})\hat{z}\|_2}{\left(|\hat{\delta}|\|T_1\|_2 + \|T_0\|_2\right)\|\hat{z}\|_2},$$
where $(\hat{z}, \hat{\delta})$ is a computed eigenpair of $T$.
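In the same spirit as the earlier sketches (ours, not the paper's code), the second formula translates directly to NumPy; eta_T below reuses the $(T_1, T_0)$ factors built in the Mackey-pencil snippet above:

```python
import numpy as np

def eta_T(delta_hat, z_hat, T1, T0):
    """Backward error of a computed eigenpair (z_hat, delta_hat) of the
    pencil T(lambda) = lambda*T1 - T0."""
    T_val = delta_hat * T1 - T0
    weights = abs(delta_hat) * np.linalg.norm(T1, 2) + np.linalg.norm(T0, 2)
    return np.linalg.norm(T_val @ z_hat) / (weights * np.linalg.norm(z_hat))
```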