On the controllability of distributed systems
Abstract
To “control” a system is to make it behave (hopefully) according to our “wishes,” in a way compatible with safety and ethics, at the least possible cost. The systems considered here are distributed—i.e., governed (modeled) by partial differential equations (PDEs) of evolution. Our “wish” is to drive the system in a given time, by an adequate choice of the controls, from a given initial state to a given final state, which is the target. If this can be achieved (respectively, if we can reach any “neighborhood” of the target) the system, with the controls at our disposal, is exactly (respectively, approximately) controllable. A very general (and fuzzy) idea is that the more a system is “unstable” (chaotic, turbulent), the “simpler,” or the “cheaper,” it is to achieve exact or approximate controllability. When the PDEs are the Navier–Stokes equations, this idea leads to conjectures, which are presented and explained. Recent results, reported in this expository paper, essentially prove the conjectures in two space dimensions. In three space dimensions, a large number of new questions arise, and some new results support (without proving) the conjectures, such as generic controllability and cases where the cost of control decreases as the instability increases. Short comments are made on models arising in climatology, thermoelasticity, non-Newtonian fluids, and molecular chemistry. The Introduction of the paper and the first part of all sections are not technical. Many open questions are mentioned in the text.
Section 1. Introduction
To control a system is to (try to) make it behave according to our wishes, at the least possible cost, in a way which is compatible with safety, regulations, and ethics. A vast program indeed…
In this expository paper, we consider a particular family of “wishes,” related to controllability: we are given a time horizon T (assuming the process starts at time t = 0) and we are given two states, y^{o}, the given state at initial time t = 0, and y^{T}, a given element of the state space (y^{T} = target). The “wish” is to drive the system, by an adequate choice of the control, from y^{o} to y^{T} (resp., to a “neighborhood” of y^{T}). If this is possible, one says that the system is controllable (resp., approximately controllable).
According to our very general definition of control of a system, if the system is controllable or approximately controllable, we want to achieve our “wish” at the least possible cost.
We now make all this more precise.
We consider evolution systems which are governed (modeled) by partial differential equations (PDEs). These are the so-called distributed systems—i.e., the phenomenon under study is “distributed” in a three-dimensional geometrical domain Ω.
The state space is denoted by Y, the state at time t is denoted by y(t). One can act on the system through actuators. It means that the state y(t) depends on the choice of the control (on the instructions we give to the actuators). The control will be denoted by v; it depends on t and on the “distributed” variable corresponding to the location of the actuators. Once v(t) is chosen, the PDE is “solved” and it defines “the” state y(t; v) (or “a” state if we are in situations where it is not known if there is a unique solution).
The first controllability question is then to know if there exists a choice v_{o}(t) of v(t) (a choice of a policy) such that at time T, y(T; v_{o}) equals y^{T}, or belongs to a “neighborhood” of y^{T}.
If this is possible, it is possible in infinitely many ways. Indeed, choose v arbitrarily in (0, t_{1}), t_{1} < T. At time t_{1}, we reach a state y^{1}. In the interval (t_{1}, T) we drive the system from y^{1} to y^{T} (or to a neighborhood of y^{T}). We obtain in this way infinitely many controls “which do the job.” Then it makes sense to try to minimize ∥v∥ (which expresses the “cost” of the control) among all vs “which do the job.”
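This non-uniqueness, and the selection of the cheapest control, can be made concrete in a finite-dimensional analogue (our sketch, not from the paper; the discrete-time system x_{k+1} = A x_k + B u_k stands in for the PDE):

```python
import numpy as np

# Finite-dimensional stand-in (our illustration) for the controllability
# discussion: x_{k+1} = A x_k + B u_k, and we look for control sequences
# u = (u_0, ..., u_{T-1}) driving x_0 = 0 to a prescribed x_T.
rng = np.random.default_rng(0)
n, m, T = 4, 2, 6                       # state dim, control dim, horizon
A = 0.5 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

# Reachability map: x_T = sum_k A^{T-1-k} B u_k  (with x_0 = 0)
L = np.hstack([np.linalg.matrix_power(A, T - 1 - k) @ B for k in range(T)])

xT = np.ones(n)                         # the target y^T
u_min = np.linalg.pinv(L) @ xT          # least-norm control "doing the job"

# Adding any null-space component reaches the same target at a higher cost,
# which is the non-uniqueness argument of the text.
P_null = np.eye(L.shape[1]) - np.linalg.pinv(L) @ L
u_other = u_min + P_null @ rng.standard_normal(L.shape[1])
```

Among the infinitely many controls u_min + (null-space component), the pseudoinverse selects the one of least l² norm, i.e., the cheapest.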
This is still fuzzy. There are technical questions, such as the choice of the topologies on the state space. We shall avoid these questions in this paper. There are also fundamental questions, related to “minimizing the cost.” The control v is applied on parts of the domain Ω and (or) of its boundary Γ (applying controls at all points of Ω does not make sense physically, and it is not interesting mathematically). If the control v is applied on a region 𝒪 contained in Ω (resp., on a part Γ_{o} of Γ), one deals with a distributed (resp., a boundary) control.
Then we wish to choose 𝒪 (resp., Γ_{o}) as “small” as possible (if 𝒪 reduces to one or several points, one has pointwise control), and once 𝒪 (resp., Γ_{o}) is chosen, and if there is some kind of controllability, one minimizes ∥v∥ as above.
Moreover, the location of 𝒪 (resp., Γ_{o}) is very important.
Remark 1.1: If we deal with a system where waves (or singularities) propagate with finite speed, then it is obvious (at least formally) that if one acts on a region 𝒪 ⊂ Ω, some time will be needed in order for the state to be modified at time T on all of Ω, and also that some kind of geometric condition on 𝒪 will be needed (because of trapped rays). The same comments apply if the actuators are on Γ_{o} ⊂ Γ.
The above questions (and many others) have been studied by Russell (1). Another type of method has been introduced by the author in refs. 2 and 3 (HUM = Hilbert uniqueness method; a hint of this method is given below in Section 2). A general theory has been given in ref. 4 for the wave equation.
In this paper, we will concentrate on systems with diffusion, hence time irreversibility.
The main question we want to address (quite different from solving it…) is the control of the Navier–Stokes equations (even, if I dare to say, the “control of turbulence,” whatever that means…).
It will be presented according to the following plan. As a “warming up” I explain in Section 2 the situation for the heat equations, linear and nonlinear.
Some ideas on the possible methods are briefly introduced.
Section 3 presents the conjecture made a few years ago (ref. 5) concerning the control of the Navier–Stokes equations. Actually, if the dimension of Ω is 2, the “conjecture” has been proven by Coron and Fursikov (6), as briefly explained in Section 3.
In Section 4, we consider the same problems for the Stokes equations. This consideration leads to a “generic” result of controllability, due to the author and Zuazua (7), and also to some new open problems.
One idea behind the conjecture of Section 3 is that the more a system is “unstable,” the cheaper it is to control. Some precise results along this line, together with open questions, are given in ref. 8. An example is given in Section 5.
Needless to say, many other very interesting questions arise in the control of distributed systems. Some of them are briefly mentioned in Section 6.
All the results that are going to be mentioned are constructive, in the sense that numerical computations can be based on the proofs of the results given below, as reported in a series of papers with R. Glowinski (9–11). But most of these computations are “off line.” The “real time” problems are not addressed in this paper.
Very important industrial applications lie behind what is presented here. A huge bibliography is devoted to these questions. For the physical aspects of them, refer to ref. 12 and to the bibliography therein.
Of course, whatever the approach, if there are difficulties in the implementation of the (optimal) control, it is because of the “complexity” of the model. But, after all, the system could be “simpler” than the model. Hence the search for “low-dimensional models.” In this respect, I mention ref. 13, in the context of the remark of John von Neumann “Climate is simpler to control than to predict” (quotation according to P. Dvoretsky, personal communication).
In the search for models with “reduced complexity,” asymptotic methods, such as boundary-layer equations, are classical. It should be pointed out here that rapidly oscillating controls can also be useful for the control of the Navier–Stokes equations. This will be reported elsewhere.
I wish to mention also the neural networks approach (14).
The beginning of each section is not technical, and can be read without looking at the more specialized remarks. Many open questions are given in the text.
Section 2. The Case of the Heat Equation
2.1. Linear Problem.
Let us start with the linear heat equation
∂y/∂t − Δy = v(x, t)χ_𝒪(x) in Ω × (0, T), [2.1]
where χ_𝒪 is the characteristic function of an open set 𝒪 ⊂ Ω and where v is any square integrable function in 𝒪 × (0, T) (all functions are real valued).
To fix ideas, we take for boundary conditions
y = 0 on Γ × (0, T) [2.2]
and, with the notations of Section 1, the initial condition is given by
y(x, 0) = y^{o}(x) in Ω. [2.3]
We are given y^{T} in L^{2}(Ω) [the space Y of the Introduction equals L^{2}(Ω)], and we want to drive the system “close” to y^{T}.
This is possible. Indeed,
the set of y(T; v), when v spans L^{2}(𝒪 × (0, T)), is dense in L^{2}(Ω). [2.4]
This result can be verified (cf. ref. 15) by using the Hahn–Banach theorem and a backward uniqueness result (16).
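In outline (our sketch of this standard argument, taking y^{o} = 0, which is no restriction by linearity), the Hahn–Banach step reduces density to a uniqueness property of the adjoint (backward) equation:

```latex
% Sketch: density of the reachable set via Hahn--Banach + adjoint uniqueness
\text{Let } f \in L^{2}(\Omega) \text{ be orthogonal to every reachable state:}\\
\qquad (y(T;v), f) = 0 \quad \forall\, v \in L^{2}(\mathcal{O}\times(0,T)).\\
\text{Let } \varphi \text{ solve the backward adjoint problem}\\
\qquad -\partial\varphi/\partial t - \Delta\varphi = 0,\quad
\varphi = 0 \text{ on } \Gamma\times(0,T),\quad \varphi(\cdot,T) = f.\\
\text{Multiplying the state equation by } \varphi \text{ and integrating by parts gives}\\
\qquad (y(T;v), f) = \iint_{\mathcal{O}\times(0,T)} v\,\varphi\, dx\, dt = 0
\quad \forall v \;\Longrightarrow\; \varphi = 0 \text{ on } \mathcal{O}\times(0,T).\\
\text{Uniqueness then forces } \varphi \equiv 0, \text{ hence } f = \varphi(\cdot,T) = 0,\\
\text{and Hahn--Banach yields that the set of } y(T;v) \text{ is dense in } L^{2}(\Omega).
```

This is exactly why density statements for linear diffusion reduce to uniqueness theorems for the adjoint equation, a reduction the text exploits repeatedly.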
A more constructive approach is presented below.
Thanks to Eq. 2.4, one can drive (in infinitely many ways) the system (modeled by Eqs. 2.1, 2.2, and 2.3) from y^{o} to the set y^{T} + βB, where B = unit ball of L^{2}(Ω), β > 0 arbitrarily small.
We can then consider the problem
inf ½∫∫_{𝒪×(0,T)} v^{2} dx dt, y(T; v) ∈ y^{T} + βB. [2.5]
Before we proceed, a few remarks are in order.
Remark 2.1: The equations 2.1, 2.2, and 2.3 admit a unique solution, in adequate Sobolev spaces. This is classical.
Remark 2.2: Result 2.4 is true for ?? “arbitrarily small” and located at any place in Ω. It is also true for T given arbitrarily small.
Remark 2.3: With the terminology of the Introduction, result 2.4 implies that in the present situation we have approximate controllability.
Remark 2.4: Because of the classical smoothing property of solutions of Eq. 2.1, y(T; v) is always C^{∞} outside 𝒪, so that y(T; v) cannot span the whole space L^{2}(Ω). We have approximate controllability, and not exact controllability.
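A rough numerical sketch of this approximate controllability (ours; the grid sizes and the choice 𝒪 = (0.2, 0.4) are arbitrary): discretize the one-dimensional heat equation, assemble the linear map from controls to the terminal state, and solve in the least-squares sense.

```python
import numpy as np

# Rough numerical sketch (ours, with arbitrary grid sizes): the 1-D heat
# equation y_t - y_xx = v(x,t) chi_O on (0,1), y = 0 at the boundary,
# controlled only on the subinterval O = (0.2, 0.4), driven toward a target.
N, M, T = 60, 40, 0.1                  # interior points, time steps, horizon
h, dt = 1.0 / (N + 1), T / M
x = np.linspace(h, 1.0 - h, N)
lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / h**2
E = np.linalg.inv(np.eye(N) - dt * lap)          # one implicit Euler step
mask = (x > 0.2) & (x < 0.4)
B = np.eye(N)[:, mask]                           # control injected on O only

# Control-to-terminal-state map: y(T) = dt * sum_k E^{M-k} B v_k  (y(0) = 0)
L = np.hstack([dt * np.linalg.matrix_power(E, M - k) @ B for k in range(M)])

yT = np.sin(2.0 * np.pi * x)                     # smooth target in L^2(0,1)
v, *_ = np.linalg.lstsq(L, yT, rcond=None)
res = np.linalg.norm(L @ v - yT) / np.linalg.norm(yT)
```

The relative residual comes out small even though the control acts only on a small subinterval and for a short time, in line with the density statement 2.4.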
2.2. Duality Arguments.
I introduce now, in the simple situation of Section 2.1, a method which is extremely useful. It is based on the Fenchel–Rockafellar duality (cf. ref. 17). A few notations are needed. I write the state as
y(t; v) = Y(t) + z(t; v), [2.6]
where Y is the solution for v = 0 and where z = z(v) is the solution of
∂z/∂t − Δz = vχ_𝒪, z = 0 on Γ × (0, T), z(x, 0) = 0. [2.7]
I introduce two proper convex functions F_{1} and F_{2} adapted to problem 2.5. If we set Lv = z(T; v), we define in this way a continuous linear map L from L^{2}(𝒪 × (0, T)) → L^{2}(Ω).
Problem 2.5 can now be equivalently formulated as
inf_{v} [F_{1}(v) + F_{2}(Lv)]. [2.8]
All this is nothing but notation! But in this form we can apply ref. 17. It gives
inf_{v} [F_{1}(v) + F_{2}(Lv)] = −inf_{f} [F^{*}_{1}(L*f) + F^{*}_{2}(−f)], [2.9]
where the F^{*}_{i} are the conjugate functions [i.e., F^{*}_{i}(h) = sup_{g}(〈h, g〉 − F_{i}(g))]. After a few computations, the dual problem (given in the right-hand side of Eq. 2.9) is given as follows.
The “dual state” φ is given by the solution of the backward equation
−∂φ/∂t − Δφ = 0 in Ω × (0, T), φ = 0 on Γ × (0, T), φ(x, T) = f(x). [2.10]
Then L*f = φχ_𝒪 and we obtain
inf of problem 2.5 = −min_{f} 𝒥(f), [2.11]
where
𝒥(f) = ½∫∫_{𝒪×(0,T)} φ^{2} dx dt + β∥f∥ − (y^{T} − Y(T), f), [2.12]
where ∥f∥ = (∫_{Ω} f^{2}dx)^{1/2}.
Remark 2.5: Since the mapping f → φ = solution of Eq. 2.10 is linear, the expression (∫∫_{𝒪×(0,T)} φ^{2} dx dt)^{1/2} defines a seminorm on L^{2}(Ω). It is actually a norm, since if φ = 0 on 𝒪 × (0, T), then (cf. ref. 18 for this type of uniqueness theorem for much more general equations; cf. also the general results of refs. 19 and 20) φ ≡ 0, hence f = 0.
We set
|f| = (∫∫_{𝒪×(0,T)} φ^{2} dx dt)^{1/2}. [2.13]
Then of course
𝒥(f) = ½|f|^{2} + β∥f∥ − (y^{T} − Y(T), f). [2.14]
But |f| is a norm which is weaker than ∥f∥, so that a direct minimization of 𝒥(f) is not trivial.
The introduction of norm of the type 2.13 is the key element of the Hilbert uniqueness method (2, 3).
Remark 2.6: The duality formula is very useful (after slight modifications) for numerical computations. See refs. 9 and 10.
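In a finite-dimensional caricature (ours; β = 0 for simplicity), the whole duality reduces to solving a Gramian system, which is the computational heart of HUM-type methods:

```python
import numpy as np

# Finite-dimensional caricature (ours) of the duality of Section 2.2: for a
# linear map L from controls to terminal states, minimize the dual functional
#   J(f) = 1/2 <L L* f, f> - <y_T, f>
# over f; the minimizer solves the Gramian system G f = y_T with G = L L*,
# and the optimal (minimum-norm) control is recovered as v = L* f.
rng = np.random.default_rng(1)
n, q = 5, 30
L = rng.standard_normal((n, q))        # stand-in for the control-to-state map
yT = rng.standard_normal(n)

G = L @ L.T                            # "Gramian"; invertible iff controllable
f = np.linalg.solve(G, yT)             # minimizer of the quadratic dual
v = L.T @ f                            # the control produced by duality

v_min = np.linalg.pinv(L) @ yT         # direct minimum-norm solution
```

Here v and v_min agree: the dual computation over f returns exactly the cheapest control that does the job, while working in the (often much smaller) space of terminal data.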
Remark 2.7: The above method is very general for linear problems. But it does not apply to nonlinear problems, situations that are now introduced.
2.3. Nonlinear Problems and Unstable Problems.
Let us now “slightly perturb” the state equation 2.1, but in a nonlinear fashion:
∂y/∂t − Δy + αy^{3} = vχ_𝒪, [2.15]
with 2.2 and 2.3 unchanged.
Problem 2.15, 2.2, 2.3 admits a unique solution, still denoted by y(v). But y(T; v) spans a “small” set of L^{2}(Ω), no matter how small α is (cf. refs. 21 and 22). The “small” perturbation αy^{3} completely destroys the approximate controllability.
What would be the situation for “destabilizing” perturbations?
If we consider Eq. 2.15 with α < 0, then the corresponding problem does not admit in general a global solution in Ω × (0, T).
One has to set the problem in a different way. One considers all couples {y, v} such that (we change α into −α)
∂y/∂t − Δy − αy^{3} = vχ_𝒪 [2.16]
and such that conditions 2.2 and 2.3 hold true.
In other words, 2.16, 2.2, 2.3 is thought of as a set of constraints, not a set of equations. This set of couples {y, v} is not empty (it suffices to start with y smooth with support in 𝒪 × (0, T), which shows the nonemptiness, at least if y^{o} has support in 𝒪).
Then one can consider the set described by y(T; v) when {y, v} are subject to the constraints 2.16, 2.2, 2.3. We conjecture that this set is dense in L^{2}(Ω). [We do not even exclude the possibility of this set being the whole space L^{2}(Ω).]
Remark 2.8: We can consider an even more unstable situation. We consider the set of all couples {y, v} such that 2.17 holds and such that 2.2 and 2.3 hold true with y^{o} = 0. This problem is non-well-set, so that we have to consider the set of couples {y, v}. One has then 2.18. The proof is an immediate corollary of ref. 23 (cf. also ref. 24 and the report§). Indeed, let y^{T} be given in L^{2}(Ω).
We define y(t; v) as the solution of 2.17 and 2.2 and 2.19. (This is now a well-set problem.) The problem amounts then to driving this system to zero. This is indeed possible, according to ref. 23 (a nontrivial result, which, in a sense, relies on a precise estimate of the norm 2.13, using ideas based on Carleman’s estimates, one of the key ingredients for proving uniqueness theorems and controllability to zero).
We are now ready to proceed with Navier–Stokes equations.
Section 3. Conjectures for Navier–Stokes Equations
After proper scaling, we write the Navier–Stokes equations in the form
∂y/∂t − μΔy + (y·∇)y = −∇p + v(x, t)χ_𝒪(x), [3.1]
div y = 0 in Ω × (0, T), [3.2]
subject to the boundary conditions
y = 0 on Γ × (0, T) [3.3]
and the initial condition
y(x, 0) = y^{o}(x) in Ω. [3.4]
In Eq. 3.1, μ > 0, p denotes the pressure, χ is the characteristic function of 𝒪 ⊂ Ω, and
v ∈ L^{2}(𝒪 × (0, T))^{3} [3.5]
denotes the control.
We introduce the Hilbert space
H = {φ | φ ∈ L^{2}(Ω)^{3}, div φ = 0, φ·n = 0 on Γ}, [3.6]
where n denotes the unit normal to Γ directed toward the exterior of Ω. The initial condition y^{o} is given in H.
On the basis of the classical contributions of Leray (25, 26), it is known that there exists a global solution in time of 3.1–3.4, but uniqueness is still an open question (uniqueness is known in two dimensions). Therefore we denote by y any solution of 3.1–3.4 (and, in dimension 2, the solution of 3.1–3.4).
Remark 3.1: As it appears in Eq. 3.1, the control is distributed.
Physically it is much more interesting to consider boundary control. Technical details are more complicated. But the conjectures and the results which follow are essentially valid in the case of boundary control.
Remark 3.2: One knows the existence of a global solution in time which is square integrable in t with values in the space V defined by
V = {φ | φ ∈ H^{1}(Ω)^{3}, div φ = 0, φ = 0 on Γ}, [3.7]
where H^{1}(Ω) = {ψ | ψ, ∂ψ/∂x_{i} ∈ L^{2}(Ω), i = 1, 2, 3}, and which is weakly continuous with values in H. Therefore we can consider the set
R(T) = set of all states y(T; v) at time T when v describes the space 3.5, where y denotes all possible solutions of 3.1–3.4 (it denotes the solution if Ω ⊂ ℝ^{2}). [3.8]
The first conjecture (ref. 3) is:
R(T) is dense in H. [3.9]
Remark 3.3: Actually a very interesting result in this direction has been obtained in dimension 2 by J.-M. Coron and A. V. Fursikov (see below).
Remark 3.4: We have a stronger conjecture:
R(T) is dense in H when v spans the subspace {v | v·a = 0} of L^{2}(𝒪 × (0, T))^{3}, a ∈ ℝ^{3}. [3.10]
In other words, two controls are used instead of three controls. As briefly reported below, the result analogous to 3.10 is proven in the case of Stokes equations.
In Section 4 below, I also report on the case (introduced in ref. 7) where one considers only one control.
Remark 3.5: Let g be given arbitrarily in H and let z be a (or the) solution (depending on whether the space dimension is 3 or 2) of
∂z/∂t − μΔz + (z·∇)z = −∇p, div z = 0, z = 0 on Γ × (0, T), z(x, 0) = g(x). [3.11]
We then consider the following question: given g, and given y^{o} ∈ H, can one find a control v such that
y(T; v) = z(T)? [3.12]
Of course this is trivial if y^{o} = g, by taking v = 0. The above formulation is slightly ambiguous if Ω ⊂ ℝ^{3}. It is clear in the case Ω ⊂ ℝ^{2}, where z is uniquely defined by g.
The following (highly nontrivial) result has been proven in ref. 6:
In dimension 2, for every g and y^{o} in H, there exists v such that y(T; v) = z(T) [3.13]
(and this is possible in infinitely many ways).
Let us express this result in a slightly different, equivalent way. Let us denote by G(t) the nonlinear semigroup generated in 2 dimensions by the solution of 3.11, i.e.,
z(t) = G(t)g. [3.14]
Then 3.13 is equivalent to the following statement:
In dimension 2, one can always find a control v which drives the system from any y^{o} ∈ H to any element of G(T)H. [3.15]
Again differently, one obtains formulation 3.16. This notion, systematically used in ref. 26, has been introduced for finite-dimensional control by Willems (27).
The proof of result 3.13 uses Carleman-type estimates (as in refs. 26 and 28) and topological arguments as in ref. 29.
Remark 3.6: Result 3.13 proves conjecture 3.9 in two dimensions if one knows (using formulation 3.15) that
G(T)H is dense in H. [3.17]
This is an interesting, nontrivial question. For the linear diffusion equations, it is equivalent, using the Hahn–Banach theorem, to backward uniqueness (16), as recalled in Section 2. In the nonlinear cases, one cannot apply Hahn–Banach. But one can still study backward uniqueness (cf. ref. 31), where 3.17 is raised. Actually, 3.17 has been proven in ref. 32 for periodic solutions (Ω = square) and with H equipped with a weaker topology (the topology of the dual of V as introduced in 3.7).
We do not know if 3.17 is proven for Dirichlet boundary conditions, and with the topology of H.
Remark 3.7: Other results which go in the direction of the proof of the conjectures are given by Fabre (33, 34) and FernandezCara and Real (35).
Remark 3.8: For finite-dimensional Galerkin approximations of the solutions, the equivalents of the above conjectures are proven in ref. 36, together with other results connected with Section 5 below.
We now proceed with the (much simpler!) Stokes equations, where I introduce the notion of generic controllability.
Section 4. Generic Controllability
4.1. General Formulation.
When dealing with a man-made system, one has of course some flexibility in the design of Ω (the construction of noncontrollable structures should of course be avoided!). This is the classical and fundamental problem of Optimum Design (cf. ref. 37 and the bibliography therein), which is (in general) a static problem.
A very general (and fuzzy) question along these lines is the following:
Can one improve the controllability of a system by an appropriate choice (design) of Ω? [4.1]
Making this precise is not a simple matter, since one should first define a notion of “measure of controllability” (see also the following section).
A precise result related to question 4.1 has been obtained for Stokes equations in ref. 7, as explained below.
4.2. Generic Controllability for Stokes System.
We consider the Stokes equations
∂y/∂t − μΔy = −∇p + vχ_𝒪, [4.2]
div y = 0, [4.3]
obtained from the Navier–Stokes equations by suppressing the nonlinear term (y·∇)y.
Now we consider one control function, acting on a single component:
v = {0, 0, w}, w ∈ L^{2}(𝒪 × (0, T)). [4.4]
We indicate below an example where
y(T; w) spans a set which is not dense in H when w spans the whole space L^{2}(𝒪 × (0, T)). [4.5]
We conjecture that
one can always obtain the density of the space described by y(T; w) in H by an arbitrarily small change of Ω. [4.6]
This is proven when
Ω = 𝒪 × (0, L), 𝒪 ⊂ ℝ^{2}. [4.7]
Actually we are in the situation of 4.5 when 𝒪 is a circle—i.e., we do not have approximate controllability for the Stokes system, in the case 𝒪 = circle, with only “one control” as in 4.4.
Moreover, if approximate controllability is not true for some 𝒪, we can always modify 𝒪 into 𝒪′, a domain arbitrarily close to 𝒪 in a C^{∞} topology, in such a way that we do have approximate controllability for 𝒪′ × (0, L). In short,
approximate controllability is true generically with respect to 𝒪 (L is arbitrary). [4.8]
Remark 4.1: As before, the proof relies on a uniqueness property: one considers any vector function φ which satisfies 4.9 and 4.10. This implies that φ ≡ 0 “in general” with respect to 𝒪. After some computations and estimates (cf. ref. 7), everything is reduced to using the fact (38) that the spectrum of the Laplace operator in 𝒪, for Dirichlet boundary conditions, is generically simple.
The counterexample is based on the existence [in the case Ω = circle × (0, L)] of an eigenfunction ψ of the Stokes operator 4.11 such that the associated pressure π = 0 and ψ_{3} = 0 in Ω, ψ ≠ 0.
Such a ψ is constructed in ref. 7. Actually, a similar example was given long before by Dafermos (39) for problems of stabilization.
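The role of generic simplicity of the Dirichlet spectrum can be illustrated on a toy case (our example, not the construction of ref. 7): on a rectangle the eigenvalues are explicit, the square exhibits multiplicities, and an arbitrarily small change of the aspect ratio removes them.

```python
import numpy as np
from itertools import product

# Illustration (ours) of the genericity mechanism: on a rectangle
# (0, a) x (0, b) the Dirichlet Laplacian has eigenvalues
# pi^2 (m^2/a^2 + n^2/b^2).  On the square many eigenvalues are multiple
# (e.g., (m, n) = (1, 2) and (2, 1) collide); an arbitrarily small change
# of b makes the low spectrum simple.
def low_spectrum(a, b, kmax=8, count=20):
    ev = sorted(np.pi**2 * (m*m / a**2 + n*n / b**2)
                for m, n in product(range(1, kmax + 1), repeat=2))
    return np.array(ev[:count])

def n_repeats(ev, tol=1e-9):
    # number of coincidences among consecutive sorted eigenvalues
    return int(np.sum(np.diff(ev) < tol))

square = low_spectrum(1.0, 1.0)
perturbed = low_spectrum(1.0, 1.0 + 1e-3)
```

Here n_repeats(square) is positive while n_repeats(perturbed) vanishes: the perturbation splits every multiple eigenvalue, which is the property the genericity argument needs.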
Remark 4.2: Of course one can raise the question whether this type of “generic approximate controllability” is true for the Navier–Stokes system.
Remark 4.3: Of course the Stokes equations are an extremely simplified form of the Navier–Stokes equations! An already more realistic model—and indeed very useful in iterative numerical analysis—is the following: 4.12, all other conditions being unchanged, and a = {a_{1}, a_{2}, a_{3}} being a vector function such that 4.13 holds (actually it would even be necessary to consider cases where the assumptions on a are weaker).
One can raise questions similar to all those raised before. A very important preliminary question is the uniqueness problem.
Let us assume that φ is a (weak) solution of (compare to Remark 4.1) 4.14. It is not known whether 4.10 implies generically that φ ≡ 0.
It is not even clear that the stronger hypothesis 4.15 implies φ = 0 (this corresponds to 2 controls). Indeed, it has only recently been proved by Fabre and Lebeau (40) that 4.16 holds. This highly nontrivial result extends previous results of refs. 41 and 33, where some smoothness of the function a is assumed.
Remark 4.4: Results on the controllability for stochastic Stokes equations are given in ref. 42. Generic results in this framework are not known.
Remark 4.5: We have indicated in this section that for the Stokes equations (and hopefully for others) approximate controllability can be achieved with actions on an arbitrarily small part of the domain, for an arbitrarily small time, and on only one component of the equations. But at what cost? I introduce questions of this type in the following section.
Section 5. Controlling Instability Is Cheap
5.1. General Setting.
Let us consider now the state equation 5.1, where now y is a scalar function. We assume that 5.2 and that 5.3 hold. It is known that in this situation one has approximate controllability—i.e., the space described by y(T; v) when v spans L^{2}(𝒪 × (0, T)) is dense in L^{2}(Ω). (See, for instance, ref. 8 for a more general result.)
Therefore, given 5.4, there are infinitely many vs in L^{2}(𝒪 × (0, T)) such that 5.5 holds. One can then define the function of k (all other data being fixed) given by 5.6, for v subject to 5.5. This function expresses the “cost” of the control. (For an attempt at measuring the cost of controllability, see ref. 43.)
Our goal here is to see whether or not M(k) decreases as k increases, and even whether M(k) → 0 as k → +∞. Let me explain why I conjecture this kind of property (maybe not exactly as stated above) (proofs of results along these lines are indicated in Section 5.2 below).
The main reason is that as k ↗ +∞, Eq. 5.1 becomes less and less stable.
Remark 5.1: Let us set k = 1/ε, v = (1/ε)w and let us introduce s = t/ε. Then the first term of an asymptotic expansion of the solution of Eqs. 5.1, 5.2, and 5.3 is given formally (there are boundary layers near ∂Ω) by 5.7, an extremely unstable problem (actually a non-well-set problem) which enjoys very nice properties as far as controllability is concerned, as indicated in Remark 2.8.
Therefore it is not inconceivable that M(k) decreases as k → ∞. But of course this remark is formal, not only because the asymptotic expansion is formal but, more importantly, because we change the time horizon of controllability when working with the “fast time” s = t/ε.
Remark 5.2: If we take 5.8 instead of Eq. 5.3, and if we take y^{T} = 0, then we can define another “cost” 5.9. Then the cost 5.9 tends to +∞ as k → +∞. That is, it “costs” more and more to “control to zero” a more and more unstable system.
It is the other way around if one wants to drive a more and more unstable system from zero to any “neighborhood” of y^{T}.
This is probably a very general situation?…
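This dichotomy can be made completely explicit on a scalar toy model (our computation, not from the paper): for y′ = ky + v on (0, T), the minimum-energy costs follow from the one-dimensional controllability Gramian.

```python
import numpy as np

# Toy computation (ours) of the dichotomy of Remark 5.2, for the scalar
# unstable system y' = k y + v on (0, T), k > 0.  The minimum-energy control
# has explicit cost via the 1-D Gramian g(k) = (e^{2kT} - 1) / (2k):
#   reach:    y(0) = 0 -> y(T) = 1  costs 1/g(k)        -> 0   as k -> +oo
#   to zero:  y(0) = 1 -> y(T) = 0  costs e^{2kT}/g(k)  -> +oo as k -> +oo
T = 1.0

def cost_reach(k):
    return 2.0 * k / np.expm1(2.0 * k * T)

def cost_to_zero(k):
    return 2.0 * k * np.exp(2.0 * k * T) / np.expm1(2.0 * k * T)

ks = np.array([1.0, 2.0, 4.0, 8.0])
reach = cost_reach(ks)       # decreasing: instability makes reaching cheap
to_zero = cost_to_zero(ks)   # increasing: instability makes stabilizing dear
```

As k grows, reaching a nonzero target becomes exponentially cheap while controlling to zero costs roughly 2k: instability helps us reach states and fights stabilization, exactly the general picture conjectured above.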
Remark 5.3: No results in the direction of what is said just above are known for nonlinear systems. It would be extremely interesting to study “the cost” of increasing turbulence or of having early explosions in unstable or non-well-set systems.
Remark 5.4: It seems likely that the results presented in Section 5.2 below are independent of (reasonable) boundary conditions for 5.1. But in the proofs of the results to follow, the special structure of the boundary conditions 5.2, 5.3 is used. If we introduce the unbounded operator A_{o} in L^{2}(Ω) defined by 5.10, with domain 5.11 [in fact, with usual notations, D(A_{o}) = H^{2}(Ω) ∩ H_{0}^{1}(Ω), at least if Γ is Lipschitz], then, for the boundary conditions 5.2, the operator appearing in 5.1 is A_{o}^{2}. This property is used for the spectral decomposition of A_{o} and A_{o}^{2}.
Some precise results along the lines of the above discussion are given below.
5.2. The Cost Tends to Zero as k → +∞. Formal approach.
One of the main ingredients to obtain estimates on M(k) is to use a duality formula, similar to the one presented in Section 2.2.
One has 5.12, where 𝒥(f) is given as follows.
For f in L^{2}(Ω) we solve the backward (adjoint) state equation 5.13. Then 5.14 defines 𝒥(f). Of course 𝒥(f) depends on k, since φ depends on k.
To make formulae slightly simpler, let us change t into T − t in 5.13. We obtain 5.15, 𝒥(f) still being given by 5.14.
Let us proceed in an extremely formal fashion. Things can be fixed (see below), but with other methods.
We introduce (as in Remark 5.1) the fast variable s = t/ε, with k = 1/ε.
Then, formally, the first term of an asymptotic expansion of φ (for fixed f) is given by 5.16, where ψ is now “given” (more precisely, satisfies the constraint) 5.17. If f is such that 5.17 defines ψ, then 5.18 holds, where S_{ε} = S/ε, so that (there is no need to recall once more that this is formal) we obtain 5.19. But because 5.17 is extremely unstable, the limit which appears in 5.19 is always +∞, so that the “only way” to minimize 𝒥(f) when ε is very small is to take f = 0; it is thus not unreasonable to think that M(k) → 0 as k → ∞…
5.3. A Precise Approach When the Notion of Neighborhood of the Target Is Relaxed.
We are going to relax condition 5.5. We introduce the spectral decomposition of −Δ:
−Δw_{j} = λ_{j}w_{j}, w_{j} = 0 on Γ, [5.20]
where the w_{j}s are normalized: ∥w_{j}∥ = 1.
We introduce the finite-dimensional space E defined by 5.21, where Λ is finite and given arbitrarily. We define 5.22. We now want to drive the system to a state y(T; v) such that 5.23 holds. This is a (very) relaxed notion of “neighborhood” of y^{T}!
It is known (and it is a simple matter to verify) that there always exists v such that 5.23 holds true. We then introduce 5.24. We then have 5.25. The proof (cf. ref. 8) is based on the duality formula, which now reads 5.26, where 5.27 holds, φ being given by 5.15. But now f lies in the finite-dimensional space E, so that 5.28 holds. The solution of 5.15 is then given by 5.29. By using the fact that the w_{j}s are linearly independent on 𝒪 (they are analytic functions in Ω), 5.30 holds for a suitable constant c.
Applying this inequality to φ(t), we obtain 5.31. An explicit computation of the right-hand side of 5.31 shows that 5.32 holds, where τ(k) → 0 as k → +∞ (in fact it goes to zero exponentially). Then, using 5.26, 5.25 follows.
Remark 5.5: In ref. 8, one studies the more general state equation 5.33, where A is a self-adjoint operator in L^{2}(Ω) [or in a product (L^{2}(Ω))^{N}] which is strictly positive, and where 0 ≤ θ < 1.
The example given here corresponds, with the notations of Remark 5.4, to A = A_{o}^{2} and θ = ½.
Remark 5.6: Inequality 5.30 can be very much improved by making explicit the way the constant c depends on Λ. The following (highly nontrivial) inequality has been proven in ref. 24 (cf. also the § footnote): 5.34, where the constants c_{i} are now independent of Λ. Using 5.34 (in the cases where it is proven!) allows one to obtain results of the type of 5.25 with Λ depending (suitably) on k (cf. ref. 8).
Section 6. Other Physical Situations and Mathematical Problems
6.1. Climatology.
As was hinted at in the Introduction, the main motivation (independent of the intrinsic mathematical interest, at least in my opinion…) for working on the problems addressed in this paper lies with industrial situations.
Another motivation is connected with climatology problems (themselves related to industrial questions!).
Back in 1955, John von Neumann (44) observed that one could achieve great changes in climate by changing the albedo on (large) portions of the ice caps (adding that this was not a sensible thing to do…). In more precise terms: If we observe some unpleasant changes in the climate of planet Earth, “can we return to a solution” that we like better (using the terminology 3.16)?
Due to the central role played in any modelization of the atmosphere and ocean (cf., for instance ref. 45) by the Navier–Stokes equations, it is natural to arrive at questions of the sort introduced in Section 3.
But other crucial components of climatology are the ice caps—and this observation leads to the question of the controllability of free boundary problems (problems of this sort also arise in “classical” industry!), where irreversible changes could happen (mathematically…!). This type of problem was mentioned in ref. 46. I refer here to a report by Diaz‖.
The interest of these questions is for the time being purely theoretical, but the situation could change in view of the appearance of many ideas of bioengineering in the specialized literature (and also since the regulations are nothing but controls!).
6.2. Patterns.
Let us consider the classical Bénard problem in thermohydrodynamics, with boundary control. In a tank we consider, in a nondimensional form, the velocity y and the temperature θ of the fluid given by 6.1 where e_{3} = {0, 0, 1}.
Standard boundary conditions are given on y. We assume that θ_{1} is the temperature at the top (x_{3} = L) and that we heat at the bottom (x_{3} = 0) as we please: 6.2 where v has no constraint (a nonphysical hypothesis).
What are the solutions that we can reach (terminology 3.16) at time T by an adequate choice of v? What patterns can we achieve in this way?
6.3. Thermoelasticity.
Section 6.2 above gives a coupled system where there is diffusion on all components of the state.
A different situation arises in thermoelasticity, where the system is parabolic and hyperbolic, depending on the components of the state. More precisely, if y denotes the displacement and θ the temperature, a simplified model is 6.3, with boundary conditions 6.4 and the initial conditions 6.5. Because of the “hyperbolic” part (6.3) of the equations, some geometric condition on 𝒪 is to be expected. Indeed, it has been proven by Lebeau and Zuazua (24) that, if any ray of geometric optics of length cT intersects 𝒪, then one can drive the system to zero at time T—i.e., one can find v such that the corresponding solution of 6.3, 6.4, 6.5 vanishes at time T. See the survey of Zuazua.§
6.4. Non-Newtonian Fluids.
In all situations presented up to now, many problems are open, but the results obtained so far do support the conjectures.
In the examples presented now, no conjectures are offered… We consider here non-Newtonian fluids of the Oldroyd type—with a memory.
The problem we want to address is: can one control this type of fluid? If y denotes the velocity and τ the stress tensor, then one has a coupled system of equations for y and τ (and the pressure). The equations for y are like Navier–Stokes plus a term containing first-order space derivatives of the components of τ, and the equations for τ are transport equations with terms that depend (in a rather complicated way) on y. These terms are the Oldroyd derivatives, which express that the (non-Newtonian) fluid has a memory.
Assuming distributed or boundary control, the natural question now seems to be: can one drive the state y(T) to a neighborhood of y^{T}, and τ(T) to a neighborhood of zero? No theoretical results seem to be known for this type of problem. Numerical results (for extremely simplified models with memory) are given in ref. 11.
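Although no conjectures are offered, the memory mechanism itself is easy to exhibit on a zero-dimensional caricature. The sketch below integrates a scalar Maxwell-type relaxation law (all convective and Oldroyd-derivative terms are dropped, and the coefficients are illustrative assumptions): its solution is an exponentially fading average of the strain-rate history, which is what "the fluid has a memory" means quantitatively.

```python
# Zero-dimensional stand-in for the Oldroyd stress law:
#   d(tau)/dt = (eta * gdot(t) - tau) / lam
# whose solution  tau(t) = (eta/lam) * int_0^t exp(-(t-s)/lam) gdot(s) ds
# is a fading-memory average of the strain-rate history gdot.
import math

lam, eta = 1.0, 2.0       # relaxation time and viscosity (illustrative)
dt, T = 1e-3, 3.0
gdot = lambda t: 1.0      # step strain rate switched on at t = 0

tau, t = 0.0, 0.0
while t < T - 1e-12:
    tau += dt * (eta * gdot(t) - tau) / lam    # explicit Euler step
    t += dt

# Closed-form response to a step strain rate: tau -> eta*gdot from below.
exact = eta * 1.0 * (1.0 - math.exp(-T / lam))
```

The stress at time T depends on the whole past of the strain rate, not on its instantaneous value—in the full Oldroyd system this memory is transported along the flow, which is what makes the control problem so delicate.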
6.5. Schroedinger Equation.
In the paper “Control of molecular motion” (47) the following type of problem is introduced.
The state equation is given by

iy′ = (H_{o} + U)y,  [6.6]

where H_{o} = −Δ + V_{o}(x), V_{o} being the potential energy function, and where U can be thought of as a family of operators that play the role of the control functions. These operators should be such that, under appropriate boundary conditions, and with the initial condition

y(0) = y^{o},  [6.7]

the state y is defined.
Given again a desired state y^{T} at time T, can one find a family of operators U = U(t) which drive the system from y^{o} to a neighborhood of y^{T}?
The above question (which is of course a nonlinear problem, even though the state equation is linear once U is chosen) belongs to the family of bilinear control problems. The only mathematical results available for such situations, and for different (but related) models, seem to be those of Ball and Slemrod (48, 49).
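A two-level (2 × 2) caricature makes the bilinear structure of Eq. 6.6 concrete: the control multiplies the state, so the map from control to state is nonlinear even though each equation with U fixed is linear. The matrices, the resonant control field, and all numerical values below are illustrative assumptions, not the model of ref. 47.

```python
# Two-level bilinear control sketch:  i y' = (H0 + u(t) H1) y,
# where the scalar amplitude u(t) plays the role of the control family U.
# A field resonant with the level spacing transfers population from the
# ground state to the excited state (controlled "molecular motion").
# Crank-Nicolson (Cayley) stepping keeps the norm of y exactly 1.
import numpy as np

omega, A = 10.0, 0.2                     # level spacing, control amplitude
H0 = np.diag([0.0, omega])               # free Hamiltonian (two levels)
H1 = np.array([[0.0, 1.0], [1.0, 0.0]])  # coupling operator
u = lambda t: A * np.cos(omega * t)      # resonant control field

dt, T = 1e-3, 5.0
I = np.eye(2, dtype=complex)
y = np.array([1.0, 0.0], dtype=complex)  # start in the ground state
t = 0.0
while t < T - 1e-12:
    H = H0 + u(t + dt / 2) * H1          # control sampled at the midpoint
    y = np.linalg.solve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ y)
    t += dt

p1 = abs(y[1]) ** 2                      # excited-state population at time T
```

With U fixed the evolution is unitary and linear; it is the choice of u(·) that steers the system, and the dependence of y(T) on u is genuinely nonlinear—the essential difficulty of bilinear control.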
Footnotes

This contribution is part of the special series of Inaugural Articles by members of the National Academy of Sciences elected on April 30, 1996.

J. L. Lions

† Lions, J.-L., Ninth Institut National de Recherche Informatique et Automatique International Conference, June 12–15, 1990, Antibes.

‡ Tang, K. Y., Graham, W. R. & Peraire, J., American Institute of Aeronautics and Astronautics 27th Fluid Dynamics Conference, June 1996.

§ Zuazua, E., Congress of European Mathematicians, July 24, 1996, Budapest.

¶ Jameson, A., and Reuther, J. & Jameson, A., American Institute of Aeronautics and Astronautics 33rd Aerospace Sciences Meeting, January 1995.

‖ Diaz, J. I., Proceedings of France–Spain Meeting on Mathematical and Numerical Aspects of Climatology, January 1994, Malaga, Spain, pp. 43–47.
ABBREVIATION

PDE, partial differential equation

Accepted January 2, 1997.

Copyright © 1997, The National Academy of Sciences of the USA
References

1. Russell, D. L.
2. Lions, J.-L.
3. Lions, J.-L.
4. Bardos, C., Lebeau, G. & Rauch, J. (1988) Contrôle et Stabilisation dans les Problèmes Hyperboliques, Appendix of ref. 3.
5. Spligler, R.; Lions, J.-L.
6. Coron, J.-M. & Fursikov, A. V.
7. Marcellini, P., Talenti, G. & Visentini, E.; Lions, J.-L. & Zuazua, E.
8. Lions, J.-L. & Zuazua, E. (1997) J. Complutense, in press.
9. Glowinski, R. & Lions, J.-L. (1994) Acta Numerica (Cambridge), 269–378.
10. Glowinski, R. & Lions, J.-L. (1995) Acta Numerica (Cambridge), 159–333.
11. Glowinski, R. & Lions, J.-L.
12. Lumley, J. L., Acrivos, A., Leal, L. G. & Leibovich, S.
13. Sahay, A. & Sreenivasan, K. R.
14. Narendra, S. (1996) Proc. IEEE (October), 1385–1406.
15. Lions, J.-L.
16. Lions, J.-L. & Malgrange, B.
17. Rockafellar, T. R.
18. Mizohata, S.
19. Tataru, D.
20. Tataru, D.
21. Henry, J.
22. Diaz, J. I., Henry, J. & Ramos, A. M. (1995) On the Approximate Controllability of Some Semilinear Parabolic Boundary Value Problems, preprint.
24. Lebeau, G. & Zuazua, E. (1997) Arch. Ration. Mech. Anal., in press.
25. Leray, J.
26. Leray, J.
28. Fursikov, A. V. & Imanuvilov, O. Yu.
29. Fursikov, A. V. & Imanuvilov, O. Yu.
30. Coron, J.-M.
31. Bardos, C. & Tartar, L.
32. Constantin, P., Foias, C., Kukavica, I. & Majda, A. M. (1997) J. Math. Pures Appl., in press.
33. Fabre, C.
34. Fabre, C.
36. Lions, J.-L. & Zuazua, E. (1997) C. R. Seances Acad. Sci. Ser. A, in press.
37. Pironneau, O.
38. Micheletti, A. M.
39. Dafermos, C.
41. Saut, J.-C. & Temam, R.
42. Real, J.
43. Lions, J.-L.
44. Von Neumann, J. (1955) Fortune, June; reprinted in Von Neumann, J. (1963) Collected Works (Pergamon, Oxford), Vol. 6, pp. 504–519.
45. Lions, J.-L., Temam, R. & Wang, S.
46. Lions, J.-L.
48. Ball, J. M. & Slemrod, M.
49. Ball, J. M. & Slemrod, M.