
Executive Summary of Failed Proof Attempts
The quest to prove the global existence and smoothness of solutions to the three-dimensional incompressible Navier-Stokes equations has captivated mathematicians for over a century. Recognized as one of the seven Millennium Prize Problems by the Clay Mathematics Institute, this challenge carries a $1 million prize and represents a fundamental obstacle at the intersection of pure mathematics and fluid mechanics. This review synthesizes key findings from the vast body of literature on attempts to solve this problem, highlighting the common points of failure and the profound reasons for its enduring difficulty.
The central hurdle lies in a single, unproven premise: that for any smooth, finite-energy initial velocity field, a smooth solution to the equations will exist for all time, and that this solution will never develop a “singularity,” a point of infinite velocity or pressure. While solutions are known to exist and remain smooth for a short period of time, the long-term behavior of the equations remains elusive.
Numerous, highly-publicized attempts to provide a definitive proof have failed, not due to a lack of mathematical ingenuity, but because of subtle, yet critical, flaws. The most common points of breakdown in proposed proofs include:
- Failure to Address All Scenarios: Many attempts propose a proof that works for a limited class of solutions but does not generalize to all possible initial conditions, especially those that are physically turbulent or chaotic. A valid proof must hold universally.
- Incorrect Assumptions about Function Spaces: The equations are often analyzed within specific mathematical frameworks known as Sobolev spaces or Lebesgue spaces. A recurring error has been to make an assumption about the behavior of solutions within these spaces that is not, in fact, guaranteed for the full, non-linear problem.
- The Inevitable Problem of Singularities: The core difficulty is the potential for a “blow-up” in the solution—a point in space and time where the velocity or its derivatives become infinite. While physical intuition suggests such an event is impossible, mathematicians have been unable to rigorously prove that it cannot occur. Flawed proofs often contain a subtle step that inadvertently assumes a singularity does not form, thus begging the question.
- Incomplete Treatment of Non-linear Terms: The equations’ non-linear advection term (u⋅∇u) is what makes them so powerful for describing turbulence, but also what makes them so difficult to analyze. Many failed proofs have not adequately controlled this term, leaving open the possibility of runaway growth that ends in a singularity.
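As a purely illustrative sketch (the grid, sample field, and finite-difference discretization are arbitrary choices, not drawn from any reviewed proof), the advection term u⋅∇u can be evaluated numerically for a two-dimensional, divergence-free velocity field:

```python
import numpy as np

# Illustrative evaluation of the advection term (u . grad) u for a 2D
# velocity field sampled on a periodic grid, using central differences.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
h = x[1] - x[0]

# A smooth, divergence-free sample field (Taylor-Green-like).
u = np.cos(X) * np.sin(Y)
v = -np.sin(X) * np.cos(Y)

def ddx(f):  # periodic central difference in x
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * h)

def ddy(f):  # periodic central difference in y
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * h)

# (u . grad) u, componentwise: the quadratic coupling of the velocity
# with its own gradient that makes the equations non-linear.
adv_u = u * ddx(u) + v * ddy(u)
adv_v = u * ddx(v) + v * ddy(v)

print(float(np.max(np.abs(adv_u))), float(np.max(np.abs(adv_v))))
```

The quadratic structure is visible in the code: the velocity multiplies its own derivatives, so any growth in u feeds back into the term itself.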
The consistent failure of even the most promising proof attempts underscores the immense depth of the Navier-Stokes problem. It is a testament to the complexity of turbulence and the limits of our current mathematical tools. The literature on these failures is not a catalog of defeat, but a critical roadmap that guides the ongoing research, narrowing the field of possibilities and refining the direction of future inquiry.
Deep Mathematical Hurdles in Navier-Stokes Proof Attempts
The history of failed attempts to prove global regularity for the Navier-Stokes equations is a roadmap of our struggle to mathematically control the chaotic and non-linear behavior of fluids. Each failure has exposed a deep, often counter-intuitive hurdle that highlights why a simple, clever proof has yet to be found.
1. The Unruly Advection Term (u⋅∇u)
The most significant and persistent hurdle is the non-linear advection term, written as u⋅∇u.
In simple terms, this term represents how a fluid’s own velocity carries its momentum. While this is what makes the equations so powerful for describing phenomena like turbulence, it is also what makes them mathematically intractable.
- The Hurdle: In linear equations, a small change in the input leads to a proportionally small change in the output. But with this non-linear term, a small, local disturbance can amplify and propagate, potentially leading to explosive, uncontrolled growth. Proving that the solution will never “blow up” requires a way to globally control this term, and every attempted proof has ultimately failed to do so for all possible initial conditions.
- The Intuition: Imagine a calm river. A small pebble creates a ripple. Now imagine that same pebble creating a whirlpool that spins faster and faster, potentially pulling in the entire river. The advection term describes this chaotic, self-reinforcing process.
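The contrast between linear and non-linear growth can be made concrete with a standard toy model (not the Navier-Stokes equations themselves, just an illustration of why the non-linearity matters): the linear ODE u′ = u grows exponentially but exists for all time, while the quadratic non-linearity in u′ = u² forces blow-up at the finite time t∗ = 1/u(0).

```python
import math

# Toy model (illustrative only): linear vs quadratic growth.
# Linear problem  u' = u:     u(t) = u0 * exp(t)       -> finite for all t.
# Non-linear      u' = u**2:  u(t) = u0 / (1 - u0*t)   -> blows up at t* = 1/u0.
u0 = 2.0
t_star = 1.0 / u0

def linear(t):
    return u0 * math.exp(t)

def nonlinear(t):
    if t >= t_star:
        raise ValueError("past the blow-up time t* = 1/u0")
    return u0 / (1.0 - u0 * t)

# Approaching t*, the non-linear solution exceeds any fixed bound,
# while the linear solution stays bounded on every finite interval.
for t in (0.25, 0.4, 0.49, 0.4999):
    print(t, linear(t), nonlinear(t))
```

The quadratic term plays the role of the advection term: the solution’s own size drives its rate of growth, and past a point no linear damping argument can catch up.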
2. The Elusive Nature of Singularities
The core of the Millennium Problem is proving that a solution remains “smooth” for all time. In mathematics, “smooth” means that the velocity and pressure fields are infinitely differentiable, so that their values and all of their derivatives (rates of change) remain finite. A “singularity,” or “blow-up,” is a theoretical point in space-time where one of these values becomes infinite.
- The Hurdle: While physical intuition dictates that infinite velocity is impossible, mathematicians have been unable to rigorously prove that a singularity cannot form. Every approach that controls the solution for a short time has failed to extend that control into a robust, long-term bound on the solution’s growth. The counter-intuitive hurdle is that we cannot prove the obvious.
- The Intuition: Think of a perfect, smooth wave on the ocean. As it approaches the shore, it gets steeper and steeper. The mathematical equations work perfectly until the moment the wave “breaks” and collapses into foam. A singularity in the Navier-Stokes equations is the mathematical equivalent of that breaking point—a point where our current tools can no longer describe what’s happening.
3. The Leaky Boxes of Function Spaces
Mathematicians analyze the equations within specific mathematical frameworks called “function spaces,” which are essentially “boxes” that contain functions with certain properties (e.g., they are smooth, they have finite energy, etc.).
- The Hurdle: Many proofs have successfully shown that a solution will remain in a specific “box” for a finite period. The deep problem is proving that the solution will not “escape” the box and develop unbounded velocity or derivatives after that time. Attempts to use “energy estimates” to put a global bound on the solution’s growth have consistently fallen short: the kinetic energy of an unforced flow is in fact non-increasing, but this bound is too weak, since it controls the size of the velocity on average without controlling its derivatives, which is what smoothness requires.
- The Intuition: It’s like trying to keep a bouncing ball in a room with a leaky roof. You can show that the ball stays in the room for a minute, but if there’s no way to prove the holes in the roof won’t grow bigger and let the ball escape, you can’t prove it will stay in the room forever.
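For contrast, here is a setting where the “energy estimate” strategy works completely: the linear heat equation. A minimal sketch (grid size, viscosity, and initial data are arbitrary illustrative choices) shows the discrete L2 energy decreasing monotonically, exactly the kind of global bound the non-linear problem refuses to yield:

```python
import numpy as np

# Minimal sketch: explicit finite differences for the 1D heat equation
# u_t = nu * u_xx on a periodic grid. The discrete L2 energy
# E = sum(u**2) * h is non-increasing step after step: the global
# "stay in the box" bound that works for this linear problem.
n, nu = 128, 0.1
h = 2 * np.pi / n
x = np.arange(n) * h
u = np.sin(x) + 0.5 * np.sin(3 * x)    # arbitrary smooth initial data
dt = 0.4 * h**2 / nu                   # step within the explicit stability limit

energies = []
for _ in range(200):
    energies.append(float(np.sum(u**2) * h))
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2
    u = u + dt * nu * lap

print(energies[0], energies[-1])
```

Here viscosity only removes energy, so the bound is airtight; adding the non-linear advection term reshuffles energy across scales in a way no comparably simple estimate has been able to tame.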
In essence, the collective failures of the past are a stark reminder that we are not missing a single piece of the puzzle. We are missing an entirely new type of framework—a new way of thinking about non-linear, chaotic systems that is capable of providing the rigorous, global bounds that the Navier-Stokes equations demand.
Conflicting Assumptions in Navier-Stokes Proof Attempts
The history of failed Navier-Stokes proofs is a study in mathematical assumptions, where seemingly small choices in a proof can lead to an entire argument’s collapse. When we analyze these failures collectively, we see that many of them stem from a set of contradictory assumptions about the nature of the solution itself. The core conflict is often between assuming a certain “well-behaved” nature of the solution and the unproven, potentially singular reality of the equations.
Here, we break down some of the most prominent assumptions and their direct contradictions.
Category 1: The Assumption of “Niceness” vs. the Potential for Catastrophe
- Assumption A: A Priori Boundedness in Energy Spaces
- What it is: Many proofs assume that a strong norm of the solution, such as the H1 norm controlling the enstrophy ∫|∇u|² dx, remains bounded for all time. (The basic kinetic energy, the L2 norm ∫|u|² dx of the velocity field, is known to remain bounded, but that bound is too weak to rule out a blow-up.) Such an a priori bound is a crucial starting point because it provides fundamental control over the potential for a “blow-up.”
- Assumption B: The Possible Existence of Singularities
- What it is: This is the unproven possibility that a singularity can form. A singularity is a point where the velocity or pressure becomes infinite. While no such singularity has ever been observed in a physical fluid or rigorously proven to exist in the equations, its potential presence invalidates any proof that implicitly assumes the solution remains bounded or smooth.
- Contradiction: Assumption A is precisely what Assumption B calls into question. The entire goal of the Millennium Problem is to prove that no singularity can form; therefore, any proof that builds on an unproven boundedness assumption, without first establishing it, is circular and fundamentally flawed.
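To see concretely why a bound in a weak norm does not forbid blow-up, consider this illustrative computation (not drawn from the sources): oscillating a field faster keeps its L2 size fixed while its derivative norm grows without bound.

```python
import numpy as np

# Illustration (not from the sources): on [0, 2*pi), u_k(x) = sin(k*x)
# has the same L2 norm for every integer k, while the derivative norm
# ||u_k'||_L2 = k * ||u_k||_L2 grows without bound as k increases.
# An L2 (energy) bound therefore gives no control over derivatives.
n = 4096
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
h = x[1] - x[0]

def l2_norm(f):
    return float(np.sqrt(np.sum(f**2) * h))

for k in (1, 4, 16):
    u = np.sin(k * x)
    du = k * np.cos(k * x)   # exact derivative of sin(k*x)
    print(k, l2_norm(u), l2_norm(du))
```

This scale mismatch is why a proven bound on the energy cannot, by itself, stand in for the unproven bounds on derivatives that smoothness demands.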
Category 2: The Assumption of Universality vs. Simplified Problem Domains
- Assumption C: Restricted Class of Initial Conditions
- What it is: Many proofs have been shown to be valid only for a specific, “tame” class of initial conditions—for instance, those with low initial energy or velocity fields that are very smooth. These are simplified scenarios that do not fully capture the complexity of the full problem.
- Assumption D: Universality of the Proof
- What it is: The Millennium Problem requires a proof that holds for all possible initial conditions, no matter how chaotic or turbulent.
- Contradiction: This is the most common contradiction found in failed proofs. A proof contingent on a limited set of initial conditions does not solve the universal problem: it is like proving that a boat can cross a calm lake while failing to show it can cross a stormy ocean.
Category 3: The Assumption of Decaying Solutions vs. Non-Decaying Solutions
- Assumption E: The Decay of Solutions at Infinity
- What it is: In many analytical approaches, it is assumed that the solution’s velocity field approaches zero as the distance from the origin goes to infinity. This simplifies the analysis by allowing for certain boundary conditions and energy estimates.
- Assumption F: Solutions with Infinite or Slow-Decay Energy
- What it is: The full Navier-Stokes problem allows for initial conditions with infinite energy or solutions that do not decay to zero at infinity. Physically, this could represent a uniform wind field or an infinitely large vortex.
- Contradiction: Assumption E directly conflicts with Assumption F. A proof that works only for solutions with finite energy and a rapid decay at infinity fails to address the full scope of the problem as defined. It’s like trying to prove something about all numbers, but only testing it on even numbers.
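The finite- versus infinite-energy distinction can be checked directly (an illustrative computation; the decay exponents and cutoff radii are arbitrary choices): for a radially symmetric field with |u(r)| ~ (1+r)^(−p) in R³, the energy ∫|u|² dx behaves like ∫ r²(1+r)^(−2p) dr, which converges only when 2p > 3.

```python
import numpy as np

# Illustrative check: truncated radial energy integral in R^3 for a
# field decaying like (1+r)**(-p). The cutoffs R are arbitrary.
def truncated_energy(p, R, n=400000):
    r = np.linspace(0.0, R, n)
    dr = r[1] - r[0]
    integrand = r**2 * (1.0 + r) ** (-2.0 * p)
    return float(np.sum(integrand) * dr)  # simple Riemann sum

slow_1, slow_2 = truncated_energy(1.0, 1e3), truncated_energy(1.0, 1e4)
fast_1, fast_2 = truncated_energy(2.0, 1e3), truncated_energy(2.0, 1e4)
print(slow_1, slow_2)   # slow decay (p = 1): energy keeps growing with R
print(fast_1, fast_2)   # fast decay (p = 2): energy stabilizes near 1/3
```

A proof whose estimates need the fast-decay case simply has nothing to say about the slow-decay one, which is exactly the gap Assumption F exposes.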
In essence, the failures are not simple errors but instead expose the deep mathematical schism between what we wish to be true and the unproven reality of the equations themselves. The solution will not be found by piecing together these conflicting assumptions, but by creating a new mathematical framework that can operate without them.
Google Notebook LM – 27 Sources Navier-Stokes Failed Attempts
Assumptions in Navier-Stokes and Fluid Dynamics Research Overview
Source [47]
• The functions f, g are non-negative (f, g ≥ 0) and locally bounded.
• The functions f and g satisfy f(ρ∗) = g(ρ∗) = 0.
• The function g decays as |g(ρ0)| ≤ M ρ0^(−q) for some constants M, q > 0.
• The function f satisfies |f(ρ0)| ≤ M̃ ρ0^(−q).
• The structure of d(ρ) is as described in the Appendix of the original paper.
• The functions χ∗ and χ̃ have compact supports.
• Formulas (3.16) and (3.17) from the original paper are used.
• Test functions of Section 3 are chosen from C_c(R) (as opposed to C^2_c(R) in the original paper), which is assumed to have no consequences on the Young measure reduction.
Source [36]
• The historical overview of key developments in fluid mechanics is acknowledged as not complete.
• The density ρ is assumed to be constant throughout the thesis, implying the incompressibility condition div v = 0 for the Navier-Stokes equations.
• Previous analytical or computer-assisted existence results for the Navier-Stokes equations are assumed to rest on a certain smallness assumption on the Reynolds number.
• For the current thesis, solutions are sought for arbitrarily large Reynolds numbers, provided the flux through a suitable intersection of the domain remains the same.
• The established computer-assisted techniques are acknowledged to be unable to “cover the whole range of possible Reynolds numbers”.
• Considerations and examples are restricted to domains in R^2, while the analytical setting is noted to apply to higher dimensions with adaptations.
• The domain Ω is fixed as the infinite strip S := R × (0, 1) perturbed by a compact obstacle D ⊆ S (i.e., Ω := S \ D).
• The obstacle is chosen such that the unbounded boundary of Ω is Lipschitz.
• The obstacle D is assumed to be of two types: either D ⊆ [−d1, d1] × ([0, d2] ∪ [d3, 1]) (obstacle at the boundary) or D ⊆ [−d1, d1] × [d2, d3] (obstacle detached from the boundary), for constants d1, d2, d3 > 0 with d2 < d3 < 1.
• Computer-assisted proofs for ordinary or partial differential equations require a zero-finding formulation of the underlying problem.
• A rigorous (analytical) proof of existence requires a fixed-point argument, such as Schauder’s Fixed-point Theorem for bounded domains or Banach’s Fixed-point Theorem for unbounded domains.
• The structure assumed for the approximate solution ω̃ in (3.7) is not a restriction for most applications of computer-assisted proofs, as common methods yield a compactly supported approximate solution.
• The numerical algorithm must provide an approximation that is exactly divergence-free.
• Assumption (A1): A bound δ ≥ 0 for the defect (residual) of ω̃ has been computed, such that ‖Fω̃‖_H(Ω)′ ≤ δ.
• Assumption (A2): A constant K > 0 is available such that ‖u‖_H1_0(Ω,R2) ≤ K‖L_(U+ω̃)u‖_H(Ω)′.
• Assumption (A3): A constant K∗ > 0 is available such that a similar inequality holds for an associated adjoint operator (implied by context).
• For the existence and enclosure theorem, constants K and K∗ satisfying assumptions (A2) and (A3) are assumed to be already computed using computer-assisted methods.
• The linearization L_(U+ω̃) of F at ω̃ is bijective if assumptions (A2) and (A3) are satisfied (Proposition 3.3 is assumed).
• For an analytic proof, the crucial inequality (3.11) in Theorem 3.4 must be checked rigorously.
• Interval arithmetic calculations are required for computing constants δ, K, K∗ and validating inequalities, to account for rounding errors.
• Interval arithmetic ideas are applied to the set of floating-point numbers F ⊆ R instead of the entire space R to capture rounding errors.
• The IEEE 754 standard for floating-point arithmetic provides all necessary rounding modes for interval arithmetic operations.
• A concrete function V is fixed for the computation of the desired approximate solution.
• The finite element mesh, denoted by M = {Ti : i = 1, . . . , N}, consists of triangles.
• If i, j ∈ {1, . . . , N} are such that Ti ∩ Tj = {z}, then z is a corner of Ti and Tj.
• If i, j ∈ {1, . . . , N}, i ≠ j are such that Ti ∩ Tj contains more than a single point, then Ti ∩ Tj is an edge of Ti and Tj.
• Common mixed finite elements (like Raviart-Thomas or Taylor-Hood) cannot be applied because they only yield approximations that are divergence-free with respect to a finite dimensional space of test functions, not exactly divergence-free, which is not sufficient for the applications (cf. Theorem 3.4).
• For the computation of norms, interval arithmetic operations are required, especially for quadrature rules where all quadrature points and their corresponding weights must be computed rigorously.
• Conditions (5.9) and (5.10) hold true for the finite element mesh M.
• For the computation of ρ̃, the approximation ρ̃ is assumed to be in H(div,Ω,R^(2×2)), requiring finite elements that provide solutions in this space exactly.
• The success of the first approach for computing norm bounds is directly linked to the Reynolds number, and it is expected to fail if the Reynolds number is “too large”.
• For the first approach, the constant σ used in the inner product on H(Ω) is set to zero, which is possible because Poincaré’s inequality holds for the strip S and thus for the domain Ω ⊆ S.
• Former applications of computer-assisted techniques for unbounded domains strongly exploit the self-adjointness of the operator Φ^(-1)L_U+ω and use a spectral decomposition argument to compute K.
• Nakao’s method is only applicable to bounded domains, which is not the case in the authors’ considerations.
• The lack of self-adjointness is present in the current application (implied by Remark 6.1, which is not provided but referenced).
• The essential spectrum of problem (6.8) is defined via the associated self-adjoint operator (Φ^(-1)L_U+ω)∗Φ^(-1)L_U+ω.
• A positive lower bound σ > 0 for the spectral points of the eigenvalue problem (6.8) is assumed to be in hand.
• The positive eigenvalues of the eigenvalue problems (6.8) and (6.9) coincide, but it is not sufficient to consider only one problem as one might have an eigenvalue 0, so both must be considered.
• A constant K_c is assumed to have been computed using an approximate solution ω_c on a coarse finite element mesh.
• Assumptions (A1)-(A3) must be computed using the same approximate solution.
• Problem (6.12) and the base problem (6.23) are assumed to be homotopically connected, implying the existence of a family (H_t, 〈 · , · 〉_t)_{t∈[0,1]} of separable (complex) Hilbert spaces and a family (M_t)_{t∈[0,1]} of bounded, positive definite hermitian sesquilinear forms such that (H1, 〈 · , · 〉1) = (H, 〈 · , · 〉) and M1 = M.
• For all 0 ≤ s ≤ t ≤ 1, Ω(s) ⊇ Ω(t) is assumed (related to the domain deformation homotopy).
• The base problem (6.23) is assumed to be “not too far away” from problem (6.12) to be used directly as a comparison problem.
• The approximate solution ω̃ is compactly supported, i.e., ω̃ = { ω̃_0, in Ω_0; 0, in Ω \ Ω_0 } for Ω_0 ⊆ S_R ∩ Ω =: Ω_R with S_R := (−R,R) × (0, 1).
• The support of ω is contained in the bounded part Ω_R and is extended by zero on S \ Ω.
• For domain deformation homotopy, a family of domains (Ω(t))_{t∈[0,1]} is chosen such that Ω(0) = S and Ω(1) = Ω, and Ω(s) ⊇ Ω(t) for 0 ≤ s ≤ t ≤ 1.
• Only finitely many domains from the family (Ω(t))_{t∈[0,1]} are needed for the homotopy steps.
• The families (H_t, 〈 · , · 〉_t)_{t∈[0,1]} and (M_t)_{t∈[0,1]} are specifically chosen as: H_t := { u ∈ H(S) : u = 0 on S \ Ω(t) }, 〈u, ϕ〉_t := 〈u, ϕ〉_H1_0(Ω(t),R2), and M_t(u, ϕ) := (γ1 + ν)〈u, ϕ〉_H1_0(Ω(t),R2) − γ2 ∫_SR∩Ω(t) u · ϕ d(x, y) for 0 ≤ t ≤ 1.
• Due to Ω(s) ⊇ Ω(t), it is assumed that H_s ⊇ H_t for all 0 ≤ s ≤ t ≤ 1, and 〈u, ϕ〉_s = 〈u, ϕ〉_t and M_s(u, u) = M_t(u, u) for all u ∈ H_t.
• The eigenvalues of interest for the base problem are located below some constant ρ_0 < σ^(0)_0 = γ1, where σ^(0)_0 is the infimum of the essential spectrum.
• Condition (6.92) is assumed to hold piecewise on each of the subintervals I1, . . . , IM and [ξ0,∞).
• On the unbounded interval I_∞ = [ξ0,∞), the functions θ_1, . . . , θ_3 are constant and thus independent of ξ.
• The constant ξ0 is greater than 0.
• The lower bounds κ and κ̂ (introduced in Section 6.2.1.4) are assumed to be in hand for computing lower bounds for the essential spectra.
• The lower bounds κ and κ̂ (satisfying (6.82) and (6.83)) are used as lower bounds for σ0 and σ̂0 respectively, for the essential spectra.
• The domain Ω still contains the obstacle D.
• The pair (ω̃, p̃) is considered as the approximate solution for the transformed Navier-Stokes equations (1.13).
• The approximation of the pressure p̃ computed with the algorithm described in Section 7.2 satisfies ∇p̃ ∈ L^2(Ω0,R2).
• An example domain with a specific geometry (presented in Figure 8.1) is used to illustrate differences between approaches for computing norm bounds K and K∗.
• A Reynolds number Re is prescribed.
• For the example domain, the parameters d_0 := 2.5, d_1 := 0.5, d_2 := 0.5 and d_3 := 1.0 are fixed.
• The choice d_3 := 1.0 is considered natural because the obstacle is located at a single side of the strip.
• For all verified computations, the corners of the corresponding triangle T must be exactly representable on the computer.
• All meshes considered have their vertices exactly representable on the computer.
• By the choice of parameters d_0, d_1, d_2, d_3, the additional assumptions on the finite element mesh M (cf. (5.9) and (5.10) in Section 5.1) required for the computation of the L_∞-norms are satisfied for the triangulation.
• The existence of reentrant corners in Ω or Ω0 is faced, and a strategy of adding already refined cells in their neighborhood is used.
• For the computation of the defect bound δ, all integrals and L_∞-norms need to be evaluated using interval arithmetic operations.
• For the first approach to norm bounds, the parameter σ (of the inner product) is set to 0.
• Theorem 3.4 was successfully applied.
• For the second approach with straightforward coefficient homotopy, σ = 1.0 is fixed for most computations.
• n0 and n̂0 denote the number of eigenvalues (below some ρ0) considered in the eigenvalue homotopy corresponding to eigenvalue problems (6.8) and (6.9) respectively.
• The essential spectra of the base problems consist of the single values γ1 (for (6.8)) and γ̂1 (for (6.9)), which provide the required lower bounds for the essential spectra.
• For the second approach with extended coefficient homotopy, all computations use the parameter σ = 0.25 for the inner product defined on H(Ω).
• The success of the eigenvalue homotopy method heavily depends on the choice of σ.
• For eigenvalue computations, a computational domain with radius twice as large as Ω0 (e.g., [−6, 6] × (0, 1)) is used.
• The constant ρ0 is chosen to be relatively “small”.
• A suitable balance for the parameter σ of the inner product needs to be found, as a small σ avoids computational effort but negatively affects Lehmann-Goerisch bounds, while a large σ is suggested by examples.
• The crucial assumption needed in Corollary 6.9 is confirmed, i.e., M_t1(ũ^(t1)_N1, ũ^(t1)_N1) / 〈ũ^(t1)_N1, ũ^(t1)_N1〉_H1_0(S,R2) < ρ0.
• Assumptions (A1), (A2), and (A3) hold uniformly for all Reynolds numbers in some compact interval [Re_min, Re_max] ⊆ (0,∞).
• ω̃ ∈ H(Ω)∩W(Ω) is an approximate solution of (1.15).
• Constants δ ≥ 0, K, K∗ > 0 are computed satisfying assumptions (A1b), (A2b), and (A3b) uniformly on the compact interval [Re_min, Re_max].
• The condition 4K^2C^4/(2Re) δ < 1 holds for all Re ∈ [Re_min, Re_max].
• The first approach for computing norm bounds is used whenever possible to reduce computational effort, implying that if the second approach is used, the first one failed.
• For parallelogram obstacles, the constants d_0 := 2.5, d_1 := 0.5, d_2 := 0.5 and d_3 := 1.0 are fixed.
• Each finite element mesh considered consists of triangles with corners exactly representable on the computer, which is possible due to 45° angles and exact representability of obstacle corners.
• For Navier-Stokes equations, the linearized operator is not self-adjoint.
• Computer-assisted methods with the second approach and the extended homotopy method theoretically allow proving the existence of a solution for arbitrarily high Reynolds numbers, provided it exists and enough computational power is available.
• For future projects, considering the base problem on the space H(S) instead of H1_0 is a possibility.
• The methods presented apply to the 3-dimensional case, and Theorem 3.4 remains valid for 3D.
• For the 3D case, adaptations are necessary at several stages, such as the definition of function V and the type of divergence-free finite elements (Argyris elements are not applicable).
• Exact quadrature points and weights are required to compute integrals rigorously.
• For the setup of the transformation Φ_T, the corners of the corresponding cell need to be known rigorously.
• Functionals L̂_1, . . . , L̂_21 : P_5(T̂) → R represent the degrees of freedom for the reference triangle T̂.
• The reference shape functions ζ̂_1, . . . , ζ̂_21 ∈ P_5(T̂) have been computed to satisfy L̂_i(ζ̂_j) = δ_i,j.
• The implementation of higher-order Raviart-Thomas elements uses ideas described by Ervin in [25, Section 3.4].
• For Raviart-Thomas elements, a reference triangle T̂ and a counterclockwise numbering of its edges starting at zero are considered.
• H denotes a separable (complex) Hilbert space endowed with an inner product, and M is a bounded, positive definite symmetric bilinear form on H.
• All eigenvalues of the considered eigenvalue problem are well separated (i.e., no clustered eigenvalues exist).
• Results for estimating integral terms, proved for functions in H1_0(Ω,R2), remain valid if H1_0(Ω,R2) is replaced by H(Ω), as H(Ω) ⊆ H1_0(Ω,R2).
• For Lemma A.9, u, v, ϕ ∈ H1_0(Ω,R2).
• For calculating Argyris reference shape functions, an ansatz ζ̂_j(x̂, ŷ) = ∑_(k=0)^5 ∑_(l=0)^k w^(j)_k,l x̂^lŷ^(k-l) is used.
• For Lemma A.14, k ∈ N.
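Several of the Source [36] assumptions above hinge on interval arithmetic with directed (outward) rounding. As a minimal sketch of the idea only (not the thesis implementation, which uses IEEE 754 rounding modes; this toy version instead widens each endpoint by one representable float via math.nextafter), an enclosure of an exact result can be computed like this:

```python
import math

# Minimal sketch of interval arithmetic with outward rounding
# (illustrative only): widening each endpoint by one ulp after every
# operation guarantees the interval encloses the exact real result,
# which is how computer-assisted proofs account for rounding errors.
def down(x):  # next float toward -infinity
    return math.nextafter(x, -math.inf)

def up(x):    # next float toward +infinity
    return math.nextafter(x, math.inf)

def iadd(a, b):
    return (down(a[0] + b[0]), up(a[1] + b[1]))

def imul(a, b):
    ps = [a[i] * b[j] for i in (0, 1) for j in (0, 1)]
    return (down(min(ps)), up(max(ps)))

x = (0.1, 0.1)            # degenerate interval around the float 0.1
s = iadd(iadd(x, x), x)   # encloses the exact real sum of 0.1 three times
p = imul(x, (3.0, 3.0))   # encloses the exact real product 3 * 0.1
print(s)
print(p)
```

One-ulp widening is coarser than switching hardware rounding modes, but it conveys the essential point: constants like δ, K, K∗ must be enclosures, not point estimates.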
Source [48]
• u, v ∈ H1_0(Ω) with ∇ · u = 0.
• Ω is a smooth bounded domain in R^2.
• The inequalities ‖u · ∇v‖_L2 ≤ C‖∇u‖_L2‖∇v‖_L2 and ‖u · ∇v‖_L2 ≤ C‖u‖_L2‖v‖_H2 are stated as erroneous in the original paper, invalidating the proof of Proposition 1.
Source [26]:
• The initial vorticity ω0 is bounded.
• A suitable decay of ω0 at infinity is required, e.g., ω0 in L2, noting it’s a ‘soft assumption’ without quantitative dependence on ‖ω0‖_2.
• Estimates are performed on a sequence of smooth (entire in the spatial variable) global-in-time approximations.
• r ∈ (0, 1].
• f is a bounded, continuous vector-valued function on R^3.
• For any pair (λ, δ), λ ∈ (0, 1) and δ ∈ (1/(1+λ), 1), there exists a constant c∗(λ, δ) > 0 such that if ‖f‖_H-1 ≤ c∗(λ, δ) r^(5/2) ‖f‖_∞ then each of the six super-level sets S_i,±λ is r-semi-mixed with the ratio δ.
Source [41]:
• A constant ρ∗ > 0 exists for the pressure law.
• Solutions are allowed to admit non-trivial end states (ρ±, u±) such that lim_(x→±∞)(ρ, u) = (ρ±, u±).
• Smooth, monotone functions (ρ̄(x), ū(x)) are chosen such that, for some L0 > 1, (ρ̄(x), ū(x)) = (ρ+, u+) for x ≥ L0 and (ρ−, u−) for x ≤ −L0.
• These reference functions are fixed at the very start of the approach and do not change later.
• Pressure laws have linear growth at high densities.
• Pressure functions satisfy condition (1.7).
• The entropy kernel χ = χ(ρ, u, s) is a fundamental solution of the entropy equation (1.11).
• The distribution ψ is of specific types: ψ ∈ {δ(u−s±k(1)), H(u−s±k(1)), PV(u−s±k(1)), Ci(u−s±k(1))}.
• Initial data (ρε_0, uε_0) are given.
• Estimates are independent of ε ∈ (0, ε0] for some fixed ε0 > 0.
• Initial data must be of finite-energy: sup_ε E[ρε_0, uε_0] ≤ E0 < ∞.
• Initial density must satisfy a weighted derivative bound: sup_ε ε^2 ∫_R |ρε_0,x(x)|^2 / ρε_0(x)^3 dx ≤ E1 < ∞.
• Relative total initial momentum should be finite: sup_ε ∫_R ρε_0(x)|uε_0(x)−ū(x)| dx ≤ M0 < ∞.
• An additional condition is ρε_0 ≥ cε_0 > 0.
• These initial conditions can be guaranteed by cutting off the initial data by max{ρ0, ε^(1/2)} and then mollifying at a suitable scale.
• ψ ∈ C^2_c(R).
• ψ1, ψ2 ∈ C^2_c(R) are test functions.
• s1, s2, s3 ∈ R.
• The support of ν is contained in V ∪ ⋃_k (ρk, uk), where (ρk, uk) are such that if (ρk, uk) ∈ suppχ(s), then (ρk′, uk′) ∉ suppχ(s) for all k′ ≠ k.
• s1 and s2 are chosen such that (ρk, uk) ∈ suppχ(s1)χ(s2).
• (T2, T3) corresponds to one of the pairs: (δ, δ), (PV,PV), (Q2, Q3), (δ,PV), (PV, Q3), (δ,Q3), where Q2, Q3 ∈ {H,Ci, R}.
• Mollifying kernels φ2, φ3 ∈ C^∞_c(−1, 1) are chosen such that ∫_R φj(sj) dsj = 1 and φj ≥ 0 for j = 2, 3.
Source [33]:
• A singularity at finite time t∗ requires that ∫_0^t max|ω| ds diverge as t → t∗.
• For the Euler equations, the direction of the vorticity must be indeterminate in the limit as the singularity is approached.
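To make the Source [33] criterion concrete, here is a hedged illustration (the model vorticity profiles are chosen for the example, not taken from the source): with max|ω| ~ (t∗ − t)^(−1) the time integral diverges logarithmically, so blow-up is not excluded, while max|ω| ~ (t∗ − t)^(−1/2) keeps the integral finite, so that growth rate alone cannot produce a singularity.

```python
import math

# Model profiles for the blow-up criterion (illustrative choices):
# a singularity at t* = 1 requires int_0^t max|omega| ds to diverge
# as t -> t*.
#   Profile A: max|omega| = 1/(t* - s)      -> integral diverges (log).
#   Profile B: max|omega| = 1/sqrt(t* - s)  -> integral stays below 2.
t_star = 1.0

def integral_up_to(t, profile):
    # closed forms of the time integral int_0^t max|omega| ds
    if profile == "A":
        return math.log(t_star / (t_star - t))
    if profile == "B":
        return 2.0 * (math.sqrt(t_star) - math.sqrt(t_star - t))

for t in (0.9, 0.99, 0.999999):
    print(t, integral_up_to(t, "A"), integral_up_to(t, "B"))
```

The criterion thus turns a question about singularities into a question about how fast the maximal vorticity is allowed to grow.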
Source [44]:
• The hypotheses (1.5)-(1.8) hold.
• (ρ0,u0) are given functions satisfying (1.9)-(1.10).
• (ρ,u) and (ρ̃, ũ) are smooth local-in-time solutions to the systems (NSENC) and (NSEDC) respectively, defined on Ω× [0, T ] with the same initial data (ρ0,u0), as described by Theorem 1.1.
• M0 is fixed as in the statement of Theorem 1.1.
• (ρ̃, ũ) and (ρ,u) are smooth classical solutions to system (1.1) defined on Ω × [0, T ] with boundary conditions (1.3) and (1.4) respectively, satisfying bounds (1.11)-(1.12).
• (ρ̃, ũ) and (ρ,u) have the same initial data (ρ0,u0) which satisfy (1.9)-(1.10).
From “2016_7_De_Rosa.pdf”:
• The exponent α is suitably small (below 1/2).
• The work is in a (spatial) periodic setting: T^3 = S^1×S^1×S^1, identified with the cube [0,2π]^3 in R^3.
• α < 1/2 and 1/2 ≤ e ≤ 1.
• If c > max( (3−2α)/(2(1−2α)), ‖e‖_1^((1−2α)/(b(2α+γ−1))), ‖e‖_2^((1−2α)/(2−2α)), ‖e‖^1 ), then there exists a sequence of triples (vq, pq, R̊q).
• α ∈ [1/4, 1/2) and b > 1.
• µ, λ_(q+1) ≥ 1 and ℓ ≤ 1.
• The condition δ_q^(1/2) λ_q ≤ µ is satisfied (CFL condition).
• For comparing energy profiles, e(0) = ẽ(0) and e′(0) = ẽ′(0).
• The choice of parameters η,M,a,b,c (from Chapter 2) works for both energy profiles.
From “2112.03116v1.pdf”:
• a0 ∈ C^∞(R^3 \ {0}) is divergence free and scaling invariant, and σ ∈ R is a size parameter.
• For |σ| ≪ 1, existence and uniqueness fall into the known perturbation theory of Koch and Tataru in BMO^(−1).
From “2405.19249v1.pdf”:
• Previous results on nonlinear inviscid damping depend heavily on Fourier analysis methods, which assume the perturbation vorticity remains compactly supported away from the boundary. The current work aims to also use physical space methods.
• A change-of-coordinates (x, y) 7→ (z, v) is defined to eliminate the background (time-varying) shear flow and propagate regularity.
• The validity of prior techniques for controlling interior vorticity and interior coordinate system norms is assumed.
• ω_in satisfies the hypotheses of Theorem 1.1.
• The case ν = 0 is covered by earlier work.
• Regularity is measured in the coordinate system defined by ω0.
• Two essential Gevrey indices are defined: 1/2 < r < 1 (Interior Gevrey 1/r Index) and 1 < s (Exterior Pseudo-Gevrey s Index).
• r > 1/2 is chosen close to 1/2 for technical convenience.
• λ0 is chosen small for technical convenience.
• ǫ is sufficiently small.
• Bootstrap hypotheses are assumed.
• t ≲ ν^(−1/3−1/ζ) for 0 < ζ < 1/78.
• The constants {θn} appearing in (2.55) – (2.69) are chosen as in the referenced work.
• The bootstrap hypotheses (2.117), (2.118), (2.119), and (2.120) hold on [0, T] (for Theorem 2.13 of the referenced work).
• ν ≪ 1.
• ǫ is sufficiently small (for Lemma 3.8).
• Functions f, g are sufficiently regular for product rules.
• Uniform bounds for t, η, ν and k ≠ 0 hold.
• f_m,n is a sequence of functions related by f_m,n := ∂_x^m Γ_n f.
• G_m,n is a sequence of weight functions.
• The cut-off functions satisfy χ_m’+n’ = 1 on the support of χ’_m’+n’+1.
• Specific conditions m ≥ 4N and n ≥ 4N are assumed for a particular case in the proof.
• Inner products are defined as in (5.1).
• Inner products are defined as in (5.24).
• H,G,H are the solutions of equations (2.7), (2.6), and (2.8) respectively.
• Relations (2.138) and (2.139) hold true.
• Remark 6.1, Hölder’s inequality, and the bootstrap assumptions are used.
• n ≥ 1.
• Relation (2.141) holds.
• j = 1, 3 for Lemma 8.2.
• U ∈ L_∞ (for estimating Err^(4)_LHS).
• A priori estimates from the referenced work are used to provide uniform estimates over νt^(3+δ) ≤ 1.
From “2410.09261v5.pdf”:
• The main result is the construction of non-smooth entropy production maximizing solutions of the Navier-Stokes equation of the Leray-Hopf (LH) class of weak solutions.
• The blow-up criteria of the referenced work are necessary for blowup.
• Numerical study provides strong evidence for the existence of non-smooth solutions.
• The Foias description of Navier-Stokes turbulence as an LH weak solution provides the constructive step.
• The existence of entropy-production-maximizing solutions of the Navier-Stokes equation is established in the cited work.
• The theory of vector and tensor spherical harmonics is used.
• The Lagrangian for space-time smooth fluids, derived in earlier work, influenced the authors’ thinking.
• Scaling analysis, based on the renormalization group, also influenced the authors’ thinking.
• The Hilbert space for this theory is the space H of L^2 incompressible vector fields defined on the periodic cube T^3.
• The initial data in H is continuous.
• A one-dimensional space of singular solutions is eliminated.
• After a global Galilean uniform drift transformation, u and f can always be assumed to have a zero spatial average.
• The viscous dissipation term is necessarily SRI (specific to the context) due to restrictions on its possible forms.
• The complexified bilinear form B is written as B(u, v)_C.
• The proof of analyticity for NSRI moments of order one is based on the energy conservation of Lemma III.2, with an entropy principle hypothesis and non-negativity of the turbulent dissipation rates.
• The cited numerical program documents a rapid, near-total blowup in enstrophy, followed by a slower blowup in the energy.
• Within the energy spherical harmonics, restricting the statistics to the single 3D mode ℓ = 2 is sufficient.
From “ADA034123.pdf”:
• The von Neumann stability analysis for the local linearized model will most likely impose restrictions on Δt and Δx for stable computation, which should be observed by all approximate solutions.
• A vector unknown function U(t, x) of dimension p is to be calculated.
• The matrix B is chosen to be the main tridiagonal elements of A.
• All artificial sources and doublets etc. are assumed to properly vanish in the steady state limit.
• The choice of B is dictated by the desire to reduce computational effort in obtaining the steady solution, irrespective of its physical correspondence to some temporal flow field.
• Without external artificial sources, nature has demonstrated that a steady state will eventually be reached.
• The suitable choice of acceleration parameters, specific to the problem type and class of prescribed boundary data, is required for reducing computational effort in steady flow problems.
• Distributed dipoles arising from truncation errors of every computational cell must be suppressed or eliminated, which can be achieved through careful formulation.
• A suitable property is implicit in the mathematical abstraction of continuity and differentiability of the functions in question.
• The differential formulations in terms of different dependent and independent variables are all equivalent, but this is not necessarily the case for difference approximations of conservation laws.
• Universal functions of the genuine solution u(x) vanish on both boundaries and have their absolute magnitudes less than 0.1.
• Truncation errors ET are expected to be of the order of (Re Δx)^2/10 for second-order accurate schemes.
• For Re Δx ~ O(1) and finite values of α ~ O(1), the estimate of the maximum absolute truncation errors is valid.
• The decay characteristics described by the universal function B_k may be used where the one-dimensional model is appropriate.
• The steady-state criterion |ΔU| < O(Δx)^n is sufficiently accurate in an n-th order accurate scheme.
• The truncation error is expected to be ~ (Re Δx)^n for conservative difference formulation of n-th order formal accuracy.
• Influence functions B_1, B_2, . . . are not likely to possess maximum magnitudes much less than 10^(−1).
• For 0 < r < 1 and 0 < s < 1, specific (ill-rendered) inequalities relating r and s are stated.
• Spurious solutions will be suppressed as long as the same boundary values of Ũ_1 are used at every step.
• These boundary values can be determined by the approximate boundary conditions B(T)Ũ = 0, and may contain errors.
• The maximum permissible change of U per mesh (ΔU)_max is one half the |u_1-u_2| across the discontinuity, to avoid shock-induced large oscillations.
• Within the linearized framework, criterion (5.32) should be equally applicable.
• Most solutions of Poisson-type equations in the literature cannot be analyzed for an error estimate primarily because of the non-conservative form of the difference formulation.
• Experimental data are generally not available to provide a quantitative estimate of the error of computed results.
• For numerical integrations by Jenson and Hamielec et al., uniform outflow was approximated as a downstream boundary condition.
• They ensured that steady state results were essentially independent of further mesh reduction from mesh sizes Δx = 1/20.
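The expectation above — truncation errors of order (Re Δx)^2 for second-order schemes, verified by mesh refinement — can be illustrated with a minimal sketch. The 1D advection-diffusion problem, parameters, and direct tridiagonal (Thomas) solve below are illustrative choices, not the report's actual schemes; the point is only that halving Δx quarters the error, i.e. the observed order is ≈ 2:

```python
import math

def solve_advdiff(nx, a=1.0, nu=0.1):
    """Central-difference solve of  -nu*u'' + a*u' = 0  on [0,1],
    u(0) = 0, u(1) = 1, via the Thomas (tridiagonal) algorithm."""
    dx = 1.0 / (nx - 1)
    sub = [-nu / dx**2 - a / (2 * dx)] * (nx - 2)   # sub-diagonal
    dia = [2 * nu / dx**2] * (nx - 2)               # diagonal
    sup = [-nu / dx**2 + a / (2 * dx)] * (nx - 2)   # super-diagonal
    rhs = [0.0] * (nx - 2)
    rhs[-1] -= sup[-1] * 1.0                        # fold in boundary u(1) = 1
    for i in range(1, nx - 2):                      # forward elimination
        w = sub[i] / dia[i - 1]
        dia[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * (nx - 2)                            # back substitution
    u[-1] = rhs[-1] / dia[-1]
    for i in range(nx - 4, -1, -1):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / dia[i]
    return [0.0] + u + [1.0]

def max_error(nx, a=1.0, nu=0.1):
    """Max-norm error against the exact solution (e^{ax/nu}-1)/(e^{a/nu}-1)."""
    u, dx = solve_advdiff(nx, a, nu), 1.0 / (nx - 1)
    exact = lambda x: (math.exp(a * x / nu) - 1) / (math.exp(a / nu) - 1)
    return max(abs(ui - exact(i * dx)) for i, ui in enumerate(u))

e1, e2 = max_error(21), max_error(41)
order = math.log(e1 / e2, 2)     # observed order of accuracy, should be near 2
assert 1.7 < order < 2.3
```

Here the cell Reynolds number a·Δx/ν is 0.5 on the coarse mesh, within the stability range the report's Δx restrictions anticipate; pushing it past 2 would produce the spurious oscillations the linearized analysis warns about.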
From “IJNMF.final.2004.pdf”:
• Exact solutions are used to accurately evaluate the discretization error in the numerical solutions.
• Modeling and Simulation (M&S) is viewed as the numerical solution to any set of partial differential equations that govern continuum mechanics or energy transport.
• The engineering community must gain increased confidence for M&S to fully achieve its potential.
• Sources of error in M&S are categorized into physical modeling errors (validation-related) and mathematical errors (verification-related).
• In the method of manufactured solutions, an analytical solution is chosen a priori and the governing equations are modified by the addition of analytical source terms.
• Manufactured solutions are chosen to be sufficiently general so as to exercise all terms in the governing equations.
• Adherence to guidelines ensures that the formal order of accuracy is attainable on reasonably coarse meshes.
• The domain examined is 0 ≤ x/L ≤ 1 and 0 ≤ y/L ≤ 1 with L = 1 m.
• Only uniform Cartesian meshes are examined, so the codes cannot be said to be verified for arbitrary meshes.
• For the Euler Equations, the general form of the primitive solution variables is chosen as a function of sines and cosines.
• In this case (Euler), φ_x, φ_y, φ_xy are constants (subscripts not denoting differentiation).
• The chosen solutions are smoothly varying functions in space.
• Temporal accuracy is not addressed in this study.
• The governing equations were applied to the chosen solutions using Mathematica™ symbolic manipulation software to generate FORTRAN code for the resulting source terms.
• For a given control volume, the source terms were simply evaluated using the values at the control-volume centroid.
• For the Navier-Stokes case, the flow is assumed to be subsonic over the entire domain.
• The absolute viscosity µ = 10 N·s/m^2 is chosen to ensure that the viscous terms are of the same order of magnitude as the convective terms, minimizing the possibility of a “false positive” on the order of accuracy test.
• The solutions and source terms are smooth, with variations in both the x and y directions.
• The boundary requires the specification of one property and the extrapolation of two properties from within the domain.
• Applying a large viscosity value for the manufactured solution makes the use of an inviscid boundary condition questionable, but the order of accuracy of the interior points was not affected.
• Further investigation of appropriate boundary conditions for this case is beyond the scope of this paper.
• Options not verified in the current study include: solver efficiency and stability (not verifiable with the method), nonuniform or curvilinear meshes, temporal accuracy (manufactured solutions not functions of time), and variable transport properties µ and k.
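The method of manufactured solutions described above can be sketched in one dimension. This is an illustrative toy, not the paper's 2D Euler/Navier-Stokes cases: the manufactured solution u_m = sin(x), the steady viscous Burgers operator, and the hand-derived source term (replacing the paper's Mathematica-generated FORTRAN sources) are all assumptions of the sketch. The test is the same, though: the observed order of accuracy on two meshes should match the scheme's formal second order.

```python
import math

def mms_burgers(nx, nu=1.0):
    """MMS for steady viscous Burgers  u u_x - nu u_xx = S  on [0, pi]:
    manufactured solution u_m = sin(x), hence S = sin(x)cos(x) + nu sin(x).
    Solved by explicit pseudo-time marching; returns the max discretization error."""
    dx = math.pi / (nx - 1)
    x = [i * dx for i in range(nx)]
    S = [math.sin(xi) * math.cos(xi) + nu * math.sin(xi) for xi in x]
    u = [0.0] * nx                     # initial guess; u_m = 0 at both ends
    dt = 0.25 * dx * dx / nu           # stable explicit pseudo-time step
    while True:
        un = u[:]
        for i in range(1, nx - 1):
            ux = (u[i + 1] - u[i - 1]) / (2 * dx)
            uxx = (u[i + 1] - 2 * u[i] + u[i - 1]) / (dx * dx)
            un[i] = u[i] + dt * (nu * uxx - u[i] * ux + S[i])
        done = max(abs(a - b) for a, b in zip(u, un)) < 1e-10
        u = un
        if done:
            break
    return max(abs(u[i] - math.sin(x[i])) for i in range(nx))

e1, e2 = mms_burgers(21), mms_burgers(41)
order = math.log(e1 / e2, 2)          # observed order; formal order is 2
assert 1.6 < order < 2.4
```

Note how the source term S exercises both the nonlinear convective term and the viscous term, in the spirit of the guideline that manufactured solutions be general enough to exercise all terms in the governing equations.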
From “JGomezSerrano.pdf”:
• The dissertation has two parts: classical analysis/PDE techniques, and computer-assisted proofs.
• Initial conditions are a graph.
• A turning singularity develops in finite time.
• The interface stops being a graph when a turning singularity develops.
• The interface finally collapses into a splash singularity.
• The first part of the result (turning singularity) was proved by Castro et al.
• The connection between the turning singularity and splash singularity results is not evident a priori, as it’s not known if the solution sets have common elements.
• The completion of the proof (connecting turning to splash) is based on techniques where the computer predominates as a rigorous theorem prover tool.
• Castro et al. proved that a class of initial data develops turning singularities for the Muskat problem, moving into the unstable regime.
• The study compares different Muskat models: confined (fluids between fixed boundaries) and non-confined, and cases with permeability jumps (inhomogeneous model).
• No claim is made that splash and splat are the only singularities that can arise.
• Elementary potential theory is assumed for irrotational divergence-free vector fields v(x, y, t) defined on a region Ω(t) ⊂ R^2 with a smooth periodic boundary.
• v is smooth up to the boundary and 2π-periodic with respect to horizontal translations.
• v has finite energy.
• The function c(α, t) can be picked arbitrarily, as it only influences the parametrization of ∂Ω(t).
• z ∈ H^k(T), ϕ ∈ H^(k-1/2)(T) and ω ∈ H^(k-2)(T) as part of the energy estimates.
• Techniques from [28, Section 6.4] are applicable for treating singular terms.
• k ≥ 3 for Lemma 2.4.6.
• k = 4 for the proof of a specific lemma; other cases are left to the reader.
• k ≥ 4 for Lemma 2.4.15.
• zε,δ,µ(α, t) ∈ H^4(T), ωε,δ,µ(α, t) ∈ H^2(T), ϕε,δ,µ(α, t) ∈ H^3(T).
• It is required that ∂_tϕε,δ ∈ H^3(T) (instead of H^(3+1/2)(T)) for specific energy estimates.
• The function ϕ̃(α, t) = Q^2(α, t)ω̃(α, t) / (2|z̃α(α, t)|) − c̃(α, t)|z̃α(α, t)| (introduced by Beale et al. and Ambrose-Masmoudi) will be used to prove local existence in Sobolev spaces.
• A commutator estimate for convolutions is repeatedly used.
• NICE3B implies ∫ Q^j∂_α^k(K̃)∂_α^k(NICE3B) ≤ CE_p^k(t) for some positive constants C, p and any j.
• N stands for the maximum number of derivatives of the function to be evaluated.
• The coefficients (f)_k are the coefficients of the Taylor series around x0 up to order N.
• t ∈ [t0, t1] is a small time interval.
• A,B,E depend in a reasonable way on t.
• An upper bound for ‖S^(−1)_t‖ is obtained, assuming ‖S^(−1)_t0‖ ≤ C0.
• The classical method of adding and subtracting the same term is used to create differences and eliminate occurrences of variables (z, ω, ϕ).
• The computation and bounding of the Birkhoff-Rott operator is the most expensive.
• The expansion (Q^2(z)−Q^2(x)) = 1/8 〈 (1+x^4)/x, (3x^2−1)/x^2 〉 D +O(D^2) is used.
• The same methods as before can be applied to the equations with f = g = 0, which are satisfied by (z, ω, ϕ).
• The evolution of a fluid in a porous medium is an interesting problem in fluid mechanics.
• Darcy’s law applies, with the permeability of the medium κ equal to b^2/12.
• The work is conducted in the two-dimensional case, with generalization to 3D being immediate.
• In subsequent sections, the inhomogeneous, non-confined regime for the Muskat problem will be investigated.
• The C-XSC library will be used for rigorous computations.
• Having a confined medium plays a role in the mechanism for achieving turning singularities.
• There are cases where the jump in permeabilities can either prevent or promote singularities, or have no impact.
• Theorems 6.2.2 and 6.2.3 are more general than [16, Theorem 3, Theorem 4] because they suppress any smallness assumption in |K| or largeness in h2.
• The analytical part of the theorems is detailed in the cited references.
• Specific curves z1(α) and z2(α) are defined for α ∈ [−π, π] and extended periodically in the horizontal variable.
• Specific parameters (N = 8192, RelTol = 10^(−5), AbsTol = 10^(−5), K = 1, h2 = π^2) are used for running the program.
• There is turning for all −1 < K < K1 and no turning for all K2 < K < 1 for a short enough time.
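The rigorous computations above rely on interval arithmetic (via the C-XSC library). A toy Python sketch of the idea — without the directed rounding a real library provides, so purely illustrative — is to evaluate an expression on subintervals and take the union of the resulting enclosures:

```python
class Interval:
    """Toy interval arithmetic (no directed rounding; illustration only)."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, hi if hi is not None else lo
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(ps), max(ps))

def bound(f, lo, hi, pieces):
    """Enclose the range of f over [lo, hi] by uniform subdivision."""
    w = (hi - lo) / pieces
    encl = [f(Interval(lo + i * w, lo + (i + 1) * w)) for i in range(pieces)]
    return min(e.lo for e in encl), max(e.hi for e in encl)

# Enclose x*x + x over [-1, 1]; the true range is [-1/4, 2].
f = lambda X: X * X + X
lo64, hi64 = bound(f, -1.0, 1.0, 64)
assert lo64 <= -0.25 and hi64 >= 2.0        # a genuine enclosure of the range
lo256, hi256 = bound(f, -1.0, 1.0, 256)
assert lo64 <= lo256 and hi256 <= hi64      # refinement tightens the enclosure
```

The enclosure is guaranteed never to miss a value (so a computed bound is a theorem about f), and subdivision shrinks the overestimation caused by the dependency problem in X*X + X — the same mechanism, at toy scale, by which the turning/no-turning thresholds K1, K2 above can be certified.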
From “Lee,Michael.pdf”:
• The torus is chosen for simplicity because it is compact and has no boundary.
• A divergence-free initial condition v0 is given.
• The flow is of an incompressible homogeneous fluid.
• There is a body force field f.
• The Navier-Stokes equations describe the flow.
• 1 ≤ p, q, r < ∞.
• Ω is a measurable set in R^n.
• u belongs to L^p(Ω) ∩ L^q(Ω).
• ‖a_j‖_L2 = 1.
• Uniform bounds C1, C2 are obtained via other assumptions.
From “Michele-Thesis.pdf”:
• The method used to tackle the problem is Convex Integration.
• The main result of the thesis concerns the fractional Navier-Stokes equations with a Laplacian exponent θ < 1/3.
• The general strategy of the proof involves defining suitable relaxations of the notion of solution (“subsolutions”) and approximating one kind of subsolution with another that is closer to the notion of solution.
• Adapted subsolutions (with R̊(·,0) ≡ 0 and C1 norm of velocity blowing up at a controlled rate at t=0) are the basis for a quantitative criterion for non-uniqueness.
• Equations are termed hypodissipative when θ < 1 and hyperdissipative when θ > 1.
• Solutions for θ < 1/3 are studied unless otherwise stated.
• The equations model the behavior of a fluid with internal friction interaction when θ ∈ [1/2,1].
• Classical solutions of the Euler, Navier-Stokes, and fractional Navier-Stokes equations satisfy energy balances.
• Solutions satisfying the specified energy conditions are referred to as admissible or dissipative solutions.
• For any 0 < β < 1/3, there are infinitely many C^β initial data that give rise to infinitely many C^β admissible solutions of the 3D Euler equations.
• The proof of Theorem 1.3.2 (main result of thesis) cannot maintain the admissibility of the regular solution up to a fixed time and thus sacrifices regularity to restore admissibility on a fixed time interval.
• The existence of one approximate solution implies the existence of infinitely many solutions to the original system of PDEs.
• The Euler equations are recast in a specific form: { ∂_t v + div u + ∇p = 0; div v = 0; u = v ⊗̊ v = v ⊗ v − (1/n) Id |v|^2 }.
• λ_max denotes the maximum eigenvalue, and L^2_w is the space L^2 endowed with the weak topology.
• There exist infinitely many weak solutions v of the Euler equations (2.1.1.1) in [0,T)×R^n with pressure p = q0 − (1/n)|v|^2 such that v ∈ C([0,T];L^2_w), v(t,x) = v0(t,x) for t∈{0,T} a.e. x∈R^n, and (1/2)|v(t,x)|^2 = e(t,x)1Ω ∀t∈(0,T) a.e. x∈R^n.
• The strategy to prove Proposition 3.2.2.1 is to find a suitable complete metric space and prove that the desired solutions are residual.
• The construction aims for a sequence of subsolutions (vq, pq,Rq) such that the error Rq ≥ 0 is gradually removed.
• Only the traceless part R̊q matters for measuring the error from being an Euler solution.
• Perturbations are chosen to oscillate at frequency λq, leading to the bound ‖∇wq‖_0 ≲ δ^(1/2)_q λq.
• δq → 0 and λq → ∞, with λq at least an exponential rate.
• For the sake of definiteness, λq ∼ λ^q and δq ∼ λq^(−2β0) for some λ > 1 are imagined, though actual proofs require super-exponential growth.
• It is possible to send δq → 0 as q ↑ ∞ and obtain a relation between δq and λq.
• A profile W satisfying conditions (H1)-(H4) is found.
• It is crucial that c0 vanishes (content of H1).
• λq ∼ λ^q for some fixed λ ≥ 1.
• Real-valued ak are chosen, and Bk = B−k from Proposition 3.4.1 are satisfied.
• For k′ = −k, the integrals do not vanish.
• The set Λ of indices k is chosen such that −Λ⊆Λ.
• Beltrami flows are a well-known class of stationary solutions of the Euler equations.
• vk ⇀* ṽ and vk ⊗ vk ⇀* ṽ ⊗ ṽ + R̃ weakly-* in L^∞, uniformly in time.
• The initial data of adapted subsolutions are automatically wild, assuming they satisfy an appropriate “admissibility condition”.
• In later works, the prescription of an arbitrary kinetic energy profile is abandoned, and the generalized energy of the subsolutions (∫_T3 |v|^2(t,x) + trR(t,x) dx) is conserved across the iterations.
• A second intermediate step, “strong subsolutions,” is introduced in those works.
• For the Nash error, the quantity ∫_Td tr( v(t,x)⊗v(t,x) − u(t,x)⊗u(t,x) + R̊(t,x) ) dx − e(t) + ∫_Td |u(t,x)|^2 dx has time derivative bounded by σ.
• γ,ε > 0 and β ≥ 0 such that 2γ+β+ε ≤ 1.
• f ∈ C^0,2γ+β+ε.
• For every γ∈(0,1), ε > 0 such that 0 < γ+ε ≤ 1, and f as above.
• E1,E2 > 1.
• E is a family of smooth functions on a fixed time interval with properties: (i) 1/2 ≤ e(t) ≤ 1, (ii) e(0) is the same for every e ∈ E, (iii) e′(0) is the same for every e ∈ E, (iv) sup_{e∈E} ‖e‖_C1 = E1, (v) sup_{e∈E} ‖e‖_C2 = E2.
• A constant K > 1 is chosen for E1 = 2K + 2 and E2 = CK^2, and e′ ≤ −2K + 2 is required.
• The admissibility condition is ensured by choosing K large enough so that C K^γ < K−1.
• The strategy (for θ < 1/3) requires local existence and uniqueness results for solutions of fractional Navier-Stokes, as well as estimates for their norms.
• An averaging process is linear and commutes with derivatives.
• For C^β-adapted subsolutions: γ,Ω > 0, 0 < β < 1/3, and ν satisfies ν > (1-3β)/(2β).
• Initial datum v(0, ·)∈C^β(T3) and R(0, ·)≡ 0.
• For all t > 0, ρ(t) > 0.
• There exist α∈(0,1) and C ≥ 1 such that ‖v‖_1+α ≤ CΩ^(1/2)ρ^(−(1+ν)) and |∂_tρ| ≤ CΩ^(1/2)ρ^(−ν).
• The convex integration strategy adopted in the Euler setting is followed.
• The parameters δq, ζq, λq are defined by specific relations.
• 1 < b < (1−β)/(2β).
• a ≥ 1 is sufficiently large to absorb various q-independent constants.
• Λ ≥ 1.
• Conditions (4.3.5) and (4.3.9) hold true.
• b, β as in (4.3.2) (so β(1+b)<1).
• α, γ > 0 are sufficiently small depending on b, β.
• N ∈ N is sufficiently large depending on b, β, α, γ to get (4.3.15).
• Λ ≥ 1.
• θ < β, 2bβ < 1−β, and α, γ are sufficiently small.
• a is sufficiently large.
• N = 0 and N = 1.
• Estimate (6.3.13) will be proved in Step 5.
• (vi, pi) in (6.3.29) is defined at least on an interval of length ∼ ‖v_ℓ,q,i‖^(−1)_(1+α).
• a ≥ 1 is sufficiently large.
• Estimates in Theorem 2.4.1.2 and Lemma 6.2.1 can be applied to (vq,i, pq,i, R_ℓ,q,i) and (vi, pi, 0).
• Estimates of vi − vq,i and vq,i − vq are used.
• α < βγ.
• wo,i have pairwise disjoint supports.
• The parameters chosen in Step 1, (v0, p0,R0), satisfy (7.1.10)-(7.1.14) (and thus (a0)-(g0)).
• (vq, pq,Rq) is a smooth strong subsolution satisfying (aq)-(gq).
• Proposition 6.3.1 is applied.
• For any η > 0, |T(t)| ≤ η for t∈ [0,T(η,δ,a)].
• The family of strong subsolutions (v̂, p̂, R̂+ e/3 Id) has e : [0,T]→R satisfying e(t) ≤ 5/2 δ – p̂(t), |∂_t e| ≤ √(δ_0 λ_0)e, e ≥ 0, e(0) = 0.
• The proof of Proposition 7.2.1 closely follows [18, Section 9].
• The choice of cut-off functions is dictated by the shape of the trace part of the Reynolds stress, not fixed a priori.
• 2α < βγ and α < 2/9.
• (v̂, p̂, R̂) is a C^β-adapted subsolution on [0,T], with Ω = Λ, satisfying the strong condition |R̊| ≤ Λp̂^(1+γ) (on the traceless part of R̂) and conditions (4.2.4)-(4.2.5) for some α, ν > 0 as in Definition 4.2.2.
• (1−β)/(2β) < 1+ν < (1−θ)/(2θ).
• b > 0 such that b^2(1+ν) < (1−β)/(2β) and 2β(b^2−1)<1.
• α, γ > 0 are sufficiently small.
• (v0, p0,R0) = (v̂, p̂, R̂).
• a ≥ 1 is sufficiently large.
• A sequence (vq, pq,Rq) of smooth strong subsolutions is inductively constructed.
• The parameters chosen in Step 1, (v0, p0,R0), satisfy (7.2.16) (and therefore (A0)-(F0)).
• (vq, pq,Rq) satisfies (Aq)-(Fq).
• S and (vq, pq,Rq) satisfy the required assumptions on the interval [T^(i)_1 + 2τq, T^(i)_2 − 2τq] ∩ Jq with parameters α, γ > 0.
• Condition (6.4.2) (or its worsened form discussed in Remark 7.2.1) follows from (7.2.24).
• H(ψ^2_q) ≥ 0.
• α ∈ (0,1) and m∈N for Lemma A.4 (Schauder estimates).
• g = 0 for the transport equation.
• For (fractional) Navier-Stokes equations, f = v, g = −∇p.
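The parameter bookkeeping above — perturbations of amplitude δq^(1/2) at frequency λq, with λq ∼ λ^q imagined "for the sake of definiteness" — explains why convex integration stalls at Hölder exponents below a threshold. A small numerical illustration (the values of λ and β0 are arbitrary, and real proofs need super-exponential λq): the C^β norm of the q-th perturbation scales like δq^(1/2)·λq^β, which is summable exactly when β < β0.

```python
import math

# Illustrative iteration parameters: lambda_q ~ lam^q, delta_q ~ lambda_q^(-2*beta0).
lam, beta0 = 4.0, 0.3

def holder_increment(q, beta):
    """~ ||w_q||_{C^beta}: amplitude delta_q^(1/2) times frequency^beta."""
    delta_q = lam ** (-2 * beta0 * q)
    lambda_q = lam ** q
    return math.sqrt(delta_q) * lambda_q ** beta   # = lam^(q*(beta - beta0))

def partial_sum(beta, Q):
    return sum(holder_increment(q, beta) for q in range(1, Q + 1))

# Below the threshold beta0, the C^beta norms of the partial sums converge
# (geometric series with ratio lam^(beta - beta0) < 1)...
assert partial_sum(0.2, 200) - partial_sum(0.2, 100) < 1e-4
# ...while above it they diverge, so the scheme only produces C^beta solutions
# for beta < beta0 (< 1/3, matching the Onsager-type threshold in the text).
assert partial_sum(0.35, 200) > 10 * partial_sum(0.35, 100)
```

The same geometric-series count gives ‖∇wq‖_0 ≲ δq^(1/2)·λq from the text: one derivative trades a factor λq, so C^1 control is lost while C^β control for β < β0 survives the iteration.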
From “On Weak Solutions and the Navier-Stokes Equations.pdf”:
• The external force f and viscosity ν are often not found in many papers and results concerning the Navier-Stokes equations.
• f = 0 is assumed for simplicity, and this assumption is followed in this paper.
• ν = 1 can be set because the equations can be rescaled.
From “TeachingLH-submitted.pdf”:
• T > 0 is an arbitrary finite number representing time.
• Ω ⊂ R^3 is a domain whose boundary ∂Ω is as specified in Assumption 2.1.
• Specific boundary conditions are imposed: u→ 0 for |x| → ∞ (if Ω = R^3), u is periodic (if Ω = T^3), u = 0 on (0, T )× ∂Ω (if Ω is a bounded domain).
• The initial datum is requested to be tangential to the boundary (for Ω = R^3).
• The initial datum satisfies the periodicity condition (for Ω = T^3) or the zero boundary condition (for bounded Ω).
• The pressure p is an unknown of the system.
• The approximation method should be chosen such that the approximate solutions satisfy the energy inequality.
• The uniform bounds obtained on the sequence of approximating solutions are the same inferred by the a priori estimates available for the system.
• A notion of approximating solution is introduced for which convergence to a Leray-Hopf weak solution will be proven.
• The domain Ω ⊂ R^3 is of three types: (A1) the whole space, Ω = R^3; (A2) the flat torus, Ω = T^3; (A3) a bounded connected open set Ω ⊂ R^3, locally situated on one side of the boundary ∂Ω, which is at least locally Lipschitz.
From “WRAP-enstrophy-circulation-scaling-Navier-Stokes-Kerr-2017.pdf”:
• For questions posed in Sobolev spaces (truncated Fourier series), periodic calculations are ideal.
• For questions posed in the whole space (R^3), localized aperiodic initial states are more appropriate.
• The report applies a Fourier-based code to two configurations with global helicities H at opposite extremes (trefoil vortex knots and anti-parallel vortices).
• The trefoil calculations were originally designed to address the experimental claim that the global helicity (1.6) was preserved during reconnections, which Kerr (2017) confirmed through the first reconnection.
• As viscosity ν → 0 in fixed periodic domains, higher-order norms are bounded from above if the Euler solutions have no singularities (Constantin 1986 proof).
• The critical viscosities νs depend inversely upon the size of the domain, allowing bounds to be relaxed as ν decreases by increasing domain size.
• The maximum of vorticity ‖ω‖_∞ and the cubic velocity norm ‖u‖_L3 are used as regularity criteria for bounding singularities of the Navier-Stokes equations.
• Helical trefoil vortex knots have “compact support”.
• Anti-parallel configurations, due to symmetries, allow easy resolution increase in the reconnection zone and identification of vorticity components.
• The global helicity of anti-parallel configurations is identically zero.
• The initial integral norms of anti-parallel configurations increase as the domain is increased.
• The ν ≡ 0 Euler and the ν < 3.125×10^(−5) calculations are resolved.
• The discretisation error is independent of the domain size ℓ once ℓ > 4π.
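The global helicity H = ∫ u·ω dx used as a diagnostic above can be illustrated on a Beltrami field (a class also mentioned in these sources), for which ω = curl u = u, so the helicity equals ∫ |u|² dx. The sketch below uses the standard ABC flow and a central-difference curl on a coarse periodic grid — illustrative choices, not the report's Fourier-based code:

```python
import math

def abc(x, y, z, A=1.0, B=1.0, C=1.0):
    """ABC flow, a Beltrami field: curl u = u, so u.omega = |u|^2 pointwise."""
    return (A * math.sin(z) + C * math.cos(y),
            B * math.sin(x) + A * math.cos(z),
            C * math.sin(y) + B * math.cos(x))

def curl_fd(f, x, y, z, h):
    """Second-order central-difference curl of an analytic vector field."""
    dfdx = [(a - b) / (2 * h) for a, b in zip(f(x + h, y, z), f(x - h, y, z))]
    dfdy = [(a - b) / (2 * h) for a, b in zip(f(x, y + h, z), f(x, y - h, z))]
    dfdz = [(a - b) / (2 * h) for a, b in zip(f(x, y, z + h), f(x, y, z - h))]
    return (dfdy[2] - dfdz[1], dfdz[0] - dfdx[2], dfdx[1] - dfdy[0])

N = 16                     # coarse grid on the periodic cube [0, 2*pi)^3
h = 2 * math.pi / N
helicity = energy = 0.0
for i in range(N):
    for j in range(N):
        for k in range(N):
            x, y, z = i * h, j * h, k * h
            u = abc(x, y, z)
            w = curl_fd(abc, x, y, z, h)
            helicity += sum(a * b for a, b in zip(u, w)) * h**3
            energy += sum(a * a for a in u) * h**3

ratio = helicity / energy          # 1 in the continuum; sin(h)/h discretely
assert 0.95 < ratio < 1.0
```

For this single-wavenumber field the discrete curl is exactly (sin h / h)·u, so the helicity-to-energy ratio measures the discretization error directly — the same kind of resolution check the report performs before trusting helicity evolution through reconnection.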
From “Why global regularity for Navier-Stokes is hard _ What’s new.pdf”:
• Global regularity results currently require one or more of the following: (1) exact and explicit solutions (or transformation to simpler PDE/ODE), (2) perturbative hypotheses (e.g., small data, data close to a special solution, or a hypothesis with an ε), or (3) one or more globally controlled quantities (coercive, critical, or subcritical).
• Papers on global regularity for Navier-Stokes all assume (1), (2), or (3) via additional hypotheses on the data or solution.
• Several counter-examples have been found to some intermediate statements in a paper concerning the existence of strong solutions for Navier-Stokes equations.
From “compproofsvrep.pdf”:
• Computer-assisted proofs are acceptable to the mathematical community.
• The question of stability or instability depends crucially on the choice of functional space and the metric used to study stability.
• A blowup solution could be stable in one functional space but unstable in another functional space at the same time.
• It could be misleading to conclude that the 3D Euler blowup is not computable based on one formulation and one metric that are not suitable to study the potential stable blowup of the 3D Euler equation.
• Finding sufficient and necessary conditions for phase transitions may shed light on the intricate phenomenon of generalized hardness of approximation.
From “leray.pdf”:
• Leray’s self-similar solutions of the three-dimensional Navier-Stokes equations must be trivial under very general assumptions, for example, if they satisfy local energy estimates. This is the main theorem proven.
• For Nečas, Růžička, & Šverák, global energy estimates are assumed.
• For the current paper’s Theorem 2, local energy estimates (1.4) for u are assumed.
• Theorem 2 is purely local, imposing no boundary condition on u.
• The local estimate of pressure p corresponds to a global estimate of P due to self-similarity.
• Every self-similar weak solution u in Theorem 2 is a “suitable weak solution” in the sense of [CKN].
• Partial regularity result in [CKN] is applied to obtain (1.10).
• Notational conventions and some definitions are established.
• Definitions of Leray-Hopf weak solutions [Le, Ho] and suitable weak solutions [Sch1, CKN] are referred to.
• Obtaining the local estimate of ∇U requires certain weak local control of U and P.
• Weak control of P is obtained by considering P̃ given by (2.2).
• Global control of U gives a (weak) global control of P̃, which is then used to obtain a local control of P̃ and P.
• q = ∞.
• P̃ defined by (2.2) is in the BMO space.
• |y − y0| < 1/2 and ρ ∈ [3/4,1].
• For 3 < q < ∞, the assumption U ∈ L^q(R^3) can be weakened to ‖U‖_q,B2(y) + ‖P‖_q/2,B2(y) → 0 as |y| → ∞.
• The local energy estimates (1.4) imply ‖u‖_10/3,Q1 < ∞.
• λ0 = (2a)^(−1/2), and A1 and A2 are explicit constants.
• All right-hand sides in (4.1) are finite.
• U ∈ W^(1,2).
• Scheffer’s question considers the existence of nontrivial solutions of Leray’s equation with a “speed-reducing” force g such that U · g ≤ 0.
From “mathematics-11-01062-v2.pdf”:
• The paper provides an overview of results related to energy conservation in spaces of Hölder-continuous functions for weak solutions to the Euler and Navier–Stokes equations.
• It considers families of weak solutions to the Navier–Stokes equations with Hölder-continuous velocities whose norms are uniformly bounded in terms of viscosity.
• The problem of understanding vanishing viscosity limits and the construction of distributional (dissipative) solutions to the Euler equations has a long history.
• For simplicity, f = 0 is assumed, but results can easily be adapted to include smooth non-zero external forces.
• The focus is on the Hölder regularity case, as it keeps the results simple and understandable for an audience familiar with classical spaces of mathematical analysis.
• For the Navier–Stokes equations (positive viscosity) or even inviscid limits, it seems necessary to restrict to the space–periodic case.
• There is limited knowledge about energy conservation for the Navier–Stokes equations (NSE) in the presence of boundaries under Hölder assumptions.
• The vanishing viscosity limit poses unsolved questions in the case of Dirichlet conditions.
• “Quasi-singularities” for Leray–Hopf solutions are required to account for anomalous energy dissipation, even if energy dissipation vanishes sufficiently slowly (as positive powers of ν).
• “Smooth enough” Leray–Hopf solutions cannot have Hölder norms (above a critical smoothness) that are bounded uniformly in viscosity if total dissipation vanishes too slowly.
• The total energy dissipation rate is defined as ε[v] := ν|∇vν|^2 + D(vν).
• If D(vν) = 0, then energy dissipation arises entirely from viscosity, and energy equality holds.
• A proper (even if standard) analysis of the commutation term after mollification is performed.
• The additional regularity ∇vν ∈ L^2(0, T; L^2(T3)) holds for Leray–Hopf weak solutions, but it is not uniform in ν > 0.
• These results (with non-uniform regularity) are not applicable to the vanishing viscosity limit.
• No assumptions are required regarding the existence of limiting Euler solutions.
• Weak Euler solutions vE can be obtained as the limit of Leray–Hopf solutions vν as ν→ 0, based on the hypotheses.
• For the mollification argument, a “symmetric” ρ ∈ C^∞(R^3) is fixed.
• u ∈ Ċ^σ(T^3) ∩ L^1_loc(T^3).
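The dissipation bookkeeping above (ε[v] = ν|∇vν|² + D(vν), with energy equality when D = 0) can be checked on the simplest smooth example. The 1D heat-flow caricature below — v(x,t) = e^(−νk²t) sin(kx) on the one-dimensional torus, an illustrative stand-in for the 3D setting — satisfies the exact balance d/dt (½‖v‖²) = −ν‖∇v‖², i.e. all dissipation is viscous:

```python
import math

# v(x, t) = exp(-nu k^2 t) sin(kx) on [0, 2*pi]; D(v) = 0, so the energy
# balance  d/dt (1/2)||v||^2 = -nu ||v_x||^2  holds exactly.
nu, k, t = 0.01, 3, 0.7

def energy(t):
    """(1/2) * integral of v^2 over the torus = (pi/2) * exp(-2 nu k^2 t)."""
    return 0.5 * math.pi * math.exp(-2 * nu * k * k * t)

# nu * ||v_x||^2 = nu * k^2 * pi * exp(-2 nu k^2 t)
dissipation = nu * k * k * math.pi * math.exp(-2 * nu * k * k * t)

dt = 1e-6
dEdt = (energy(t + dt) - energy(t - dt)) / (2 * dt)   # numerical d/dt
assert abs(dEdt + dissipation) < 1e-6                 # dE/dt = -dissipation
```

The anomalous-dissipation question in the text is precisely whether D(vν) can remain positive as ν → 0, which no smooth field like this one permits: here ν‖v_x‖² → 0 with ν.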
From “navierstokes.pdf”:
• Scheffer applied ideas from geometric measure theory to prove a partial regularity theorem for suitable weak solutions of the Navier–Stokes equations.
• The singular set of a weak solution u consists of all points (x◦, t◦) ∈ R^3 × R such that u is unbounded in every neighborhood of (x◦, t◦).
• If the force f is smooth, and if (x◦, t◦) doesn’t belong to the singular set, then u can be corrected on a set of measure zero to become smooth in a neighborhood of (x◦, t◦).
From “zhang.pdf”:
• 1/p + 1/q = 1.
• f, g ∈ L^∞_T(L^2) ∩ L^2_T(H^1).
• h ∈ L^2_T(BMO).
• 1 ≤ i ≤ 3.
• The assumption on the velocity gradient can be replaced by u ∈ L^2(0, T; BMO(R^3)) or u ∈ L^2(0,∞; BMO(R^3)).
• The Bony decomposition uv = T_u v + T_v u + R(u, v) is used, based on (2.5).
• 0 < T ≤ ∞ and ε > 0.
• f, g ∈ L^∞(0, T; L^2(R^3))∩L^2(0, T; Ḣ^1(R^3)).
• h verifies (1.12), i.e., h ∈ L^p(0, T; Ḃ^0_q,∞(R^3)) with 2/p + 3/q = 2, 1 ≤ p < ∞, 3/2 < q < ∞.
• The Plancherel theorem and (1.13)/(1.16) are applied.
• E^2_0 is the right-hand side of (1.13) with T = ∞.
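The condition 2/p + 3/q = 2 appearing above is exactly the scale-invariant balance, in three dimensions, for a gradient-level quantity under the Navier-Stokes scaling u_λ(x,t) = λu(λx, λ²t). A small exact-arithmetic check (the sample exponent pairs are arbitrary points on and off the line):

```python
from fractions import Fraction as F

# Under u_lam(x,t) = lam*u(lam x, lam^2 t), a gradient-type quantity
# h_lam(x,t) = lam^2 * h(lam x, lam^2 t) satisfies
#   ||h_lam||_{L^p_t L^q_x} = lam^e * ||h||_{L^p_t L^q_x},
# with e = 2 - 3/q - 2/p (the 3/q from dx-rescaling, the 2/p from dt).
def scaling_exponent(p, q):
    return 2 - 3 / F(q) - 2 / F(p)

# Pairs on the line 2/p + 3/q = 2 are exactly the scale-invariant ones (e = 0):
for p, q in [(2, 3), (4, 2), (8, F(12, 7))]:
    assert 2 / F(p) + 3 / F(q) == 2
    assert scaling_exponent(p, q) == 0

# Off the line, the norm picks up a power of lam and is no longer critical:
assert scaling_exponent(2, 2) == F(-1, 2)
```

Criticality is what makes such conditions natural regularity criteria: a supercritical norm (e < 0 here) shrinks under the zoom-in scaling and controls nothing at small scales, which is one formulation of the core difficulty discussed at the top of this document.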
Acknowledging Errors and Conflicting Conclusions in Research
Based on the sources provided, there are several instances where assumptions or previously stated results are explicitly identified as false or erroneous, or where differing assumptions lead to conflicting conclusions:
1. Explicitly Identified Errors or False Statements in Prior Work:
• False Inequality (Source): The inequality f(ρ0) ≤ g(ρ0) + (1/(ρ0 − ρ∗)) ∫_{ρ∗}^{ρ0} f(ρ) dρ for all ρ0 > ρ∗ (equation 1) is false, and a counterexample based on the error function erf exists. This error was subsequently repeated in the proof of Lemmas 4.5 and 4.6, and under equation (6.15) in the original paper.
• Mistake in Lemma (Source): Remark 2.4 in the new source notes that Lemma 3.6 of the original paper contained a mistake in its proof, although the final estimate of Lemma 2.3 in the new source validates Lemma 4.3 of the original paper.
• Non-True Inequalities (Source): Lemma 1 in [N. Kim, SIAM J. Math. Anal., 41 (2009)] is explicitly stated as not true, and its proof is erroneous. Specifically, the inequalities ‖u · ∇v‖L2 ≤ C‖∇u‖L2‖∇v‖L2 (1.1) and ‖u · ∇v‖L2 ≤ C‖u‖L2‖v‖H2 (1.2) are not true, and counterexamples exist. The verification of the decay rate of a related multiplier, crucial for the proof, was not fully carried out and is in fact impossible.
2. Contradictory Conclusions Arising from Different Methodological Assumptions:
• Stability vs. Instability in Different Functional Spaces (Source): The stability or instability of a potential finite time singularity can be contradictory depending on the choice of functional space and metric used for the analysis. For example, Chen and Hou demonstrated a stable finite time self-similar blowup for the 3D axisymmetric Euler equation and the 2D Boussinesq equation with C1,α initial velocity when using a dynamic rescaling formulation. In contrast, Vasseur and Vishik proved hydrodynamic instability for the same problem when using their definition of instability. This illustrates that a blowup solution can be simultaneously stable under one set of assumptions (regarding the functional space and metric) and unstable under another, leading to conflicting conclusions about its nature.
• Non-Uniqueness Despite Global Smoothness (Source): Non-uniqueness has been observed for initially smooth axisymmetric solutions without swirl, even though these solutions are known to remain globally smooth. This suggests that assumptions guaranteeing global smoothness do not inherently guarantee uniqueness, which can be counter-intuitive or challenge conventional expectations about solution properties.
3. Other Relevant Points (Not Direct Contradictions of Assumptions):
• Failure of a Computational Approach (Source): The “first approach” for computing norm bounds K and K* fails for a Reynolds number of Re = 4.0. This failure occurs not because the prerequisite inequality 2C2Re‖U + ω‖L∞(Ω,R2) < 1 is not met, but because a higher-level inequality (3.11) in Theorem 3.4 does not hold true. This highlights that satisfying component assumptions does not always guarantee the success of a complex proof, but it’s not a direct contradiction between fundamental mathematical assumptions.
• Physically Meaningless Results (Source): If flow fields are computed using periodic boundary conditions, any observed lift, drag, or heat transfer are considered “computational artifices” and physically meaningless. This isn’t a contradiction of mathematical assumptions, but rather a warning about the physical interpretation of numerical results when the computational setup does not align with the physical conditions.
• Numerical Discretization Issues (Source): Applying the same difference algorithm to different but equivalent differential forms of an equation can lead to non-equivalent difference equations with vastly different stability behaviors. This points to the challenges in numerical approximations and the consistency between continuous and discrete formulations, rather than contradictory mathematical assumptions about the underlying physical phenomena.
• Simulations and False Blow-ups (Source): There’s a general observation in the field that simulations can sometimes indicate values in equations “blew up,” only for more sophisticated computational methods to later show otherwise. This highlights the inherent difficulty and sensitivity of these problems and the potential for computational artifacts to lead to incorrect conclusions about singularities.
Unveiling Fluid Dynamics: Mathematical Hurdles and Paradoxes
The provided sources expose several deep and often counter-intuitive mathematical hurdles in the study of fluid dynamics, particularly concerning the Navier-Stokes and Euler equations:
1. Fundamental Errors and False Assumptions in Mathematical Proofs:
• A significant hurdle involved the explicit identification of false inequalities or erroneous proofs in previously published work. For instance, the inequality f(ρ0) ≤ g(ρ0) + (1/(ρ0 − ρ∗)) ∫_{ρ∗}^{ρ0} f(ρ) dρ for all ρ0 > ρ∗ (equation 1) was found to be false, with a counterexample based on the error function erf. This error was then “repeated in the proof of Lemmas 4.5 and 4.6, and under equation (6.15) in the original paper”. Similarly, Lemma 1 in a paper by N. Kim was explicitly stated as “not true and the proof is erroneous”, specifically for the inequalities ‖u · ∇v‖L2 ≤ C‖∇u‖L2‖∇v‖L2 (1.1) and ‖u · ∇v‖L2 ≤ C‖u‖L2‖v‖H2 (1.2), for which counterexamples exist. Such discoveries highlight the extreme sensitivity and rigorous demands of proofs in this field, where even seemingly plausible inequalities can be fundamentally incorrect.
• Another example is a counterexample concerning the pressure in the Navier-Stokes equations as t → 0+. These instances demonstrate that widely accepted or intuitively appealing mathematical statements can be false upon rigorous examination.
2. Non-Uniqueness and Ill-Posedness of Solutions:
• A highly counter-intuitive and deep hurdle is the non-uniqueness of weak solutions for the Navier-Stokes equations, even with bounded or finite kinetic energy. Ladyzhenskaya provided an example of non-uniqueness in 1969. More recently, Buckmaster and Vicol constructed “non-unique distributional solutions of the Navier-Stokes equations with finite kinetic energy”. For the Euler equations, it has been shown that for any β < 1/3, there exist weak solutions that do not conserve energy, and that admissible solutions (which satisfy the energy inequality) are not unique for general initial data. This means that even when solutions behave “physically” by satisfying the energy inequality, their evolution is not uniquely determined.
• Furthermore, for the Euler equations, the concept of “wild initial data” is introduced: initial data that generate infinitely many admissible solutions. The set of such data is dense in the set of divergence-free L2 vector fields for β < 1/3. This implies that non-unique, non-conservative behavior is not an isolated phenomenon but rather widespread.
• Non-uniqueness has also been demonstrated for the forced Navier-Stokes equations with zero initial datum, where a specific force f can lead to two distinct solutions. An open problem remains to achieve non-uniqueness without such an external force.
• For fractional Navier-Stokes equations, non-uniqueness of admissible solutions has been extended to θ < 1/3. These results reveal that predictability and determinism, often assumed in physical laws, can break down even in mathematically “well-behaved” settings.
3. The Navier-Stokes Regularity Problem and the “Scaling Gap”:
• The central, long-standing, and deep mathematical hurdle is the question of global existence and smoothness of solutions for the 3D Navier-Stokes equations (the Clay Millennium Prize Problem). No proof has yet been found guaranteeing the existence of a smooth solution in three dimensions.
• A key conceptual hurdle is “supercriticality”. This means that globally controlled quantities (like energy) are “much weaker at controlling fine-scale behaviour than controlling coarse-scale behaviour”. This fundamental inadequacy of existing tools for fine-scale control makes achieving global regularity extremely difficult.
• The “scaling gap” refers to the “scaling distance between a regularity criterion and a corresponding a priori bound”. Previous reductions of this gap have been “logarithmic in nature,” and current work aims for an “algebraic factor” reduction. This highlights the intricate connection between scaling symmetries and solution regularity.
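The supercriticality obstacle can be made concrete with the standard scaling computation (a textbook fact, stated here in LaTeX for reference):

```latex
% If (u, p) solves the 3D Navier-Stokes equations, so does the rescaled pair
\[
  u_\lambda(x,t) = \lambda\, u(\lambda x, \lambda^2 t), \qquad
  p_\lambda(x,t) = \lambda^2\, p(\lambda x, \lambda^2 t),
\]
% while the globally controlled energy transforms as
\[
  E(u_\lambda) = \tfrac{1}{2} \int_{\mathbb{R}^3} |u_\lambda|^2 \, dx
              = \lambda^{-1}\, E(u).
\]
% Zooming in on fine scales corresponds to \lambda \to \infty, where the
% a priori bound \lambda^{-1} E(u) degenerates: this is the precise sense
% in which the energy is supercritical for the regularity problem.
```

Any quantity that shrinks under the zoom-in scaling gives progressively weaker control exactly at the scales where a singularity would have to form, which is why closing the scaling gap is the crux of the problem.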
4. Challenges in Inviscid Limits and Vanishing Viscosity:
• The convergence of the vanishing physical viscosity limit from the Navier-Stokes to the Euler equations as ν → 0 is a “difficult problem”. For no-slip Dirichlet boundary conditions, a “stronger boundary layer (of size ν^{−1/2})” appears due to the “significant mismatch of the boundary conditions” between the viscous and inviscid equations.
• The fact that the analytical tools, such as the coordinate system and adapted vector fields, “depend on ν” makes the inviscid limit “a significantly more subtle proposition than one may guess at first”.
• A counter-intuitive finding is that the conservation of energy in solutions to the Euler equations can depend on the method of construction. For σ > 1/2, conditions for both the existence of the inviscid limit and energy conservation are more stringent than for energy conservation alone, suggesting that the path to a solution influences its fundamental properties.
• Another counter-intuitive aspect is that while solutions to the Navier-Stokes equations are “smooth in space for any ν > 0,” this smoothness “cannot be uniform in viscosity”. This non-uniformity implies that as ν → 0, “quasi-singularities” are required to account for observed anomalous energy dissipation rates in turbulent flow.
5. Counter-Intuitive Physical and Mathematical Behaviors:
• Vorticity amplification: Even when viscosity is taken into account, vorticity can be amplified “by an arbitrarily large factor in an extremely small point-neighbourhood within a finite time, and this behaviour is not resolved by viscosity”. This defies the intuitive smoothing effect of viscosity.
• Physical vs. Mathematical Singularities: A “physical singularity” can exist despite mathematical regularity. For example, in a free surface flow, a radius of curvature can become “extremely small” (≈ 1.9 × 10⁻⁴²), which is “perfectly regular from a purely mathematical point of view” but constitutes a physical singularity. This highlights a disconnect between mathematical definitions and physical interpretation.
• Energy Dissipation and Enstrophy: The rate of energy dissipation can “remain finite in the limit of vanishing kinematic viscosity ν”, which paradoxically requires the enstrophy ⟨ω²⟩ to become infinite. This connects the phenomenon of turbulence to the presence of singularities.
• Effect of Surface Tension: For water waves, singularities (splash or splat types) can arise even when surface tension is present, which might be counter-intuitive as surface tension is often expected to regularize interfaces.
• Domain and Parameter Dependence: The behavior of solutions can depend subtly on the domain’s geometry and physical parameters. For instance, whether a confined medium promotes or prevents “turning singularities” depends on specific permeability parameters, leading to varied and sometimes counter-intuitive outcomes. Similarly, numerical results indicate that the ordering of Lp norms can be “inversely ordered from the Hölder expectation”.
• Supercriticality and Euler vs. Navier-Stokes: The “majority view amongst mathematicians is that Euler blow-up does not in any way imply Navier-Stokes blow-up”. This is a critical hurdle, as the fundamental difference in viscosity (even if ν is small) leads to distinct behaviors, making results not directly transferable between the two systems.
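The enstrophy divergence noted in the energy-dissipation bullet above is forced by the basic energy balance; for divergence-free fields with suitable boundary conditions (e.g. on the torus), in standard notation:

```latex
\[
  \frac{d}{dt}\, \frac{1}{2} \int |u|^2 \, dx
  \;=\; -\,\nu \int |\nabla u|^2 \, dx
  \;=\; -\,\nu \int |\omega|^2 \, dx,
\]
% so the mean dissipation rate is \varepsilon = \nu\,\langle |\omega|^2 \rangle.
% If \varepsilon remains bounded away from zero as \nu \to 0 (the observed
% dissipation anomaly of turbulent flow), then necessarily
% \langle |\omega|^2 \rangle \sim \varepsilon / \nu \to \infty.
```

This one-line identity is why finite dissipation in the inviscid limit is inseparable from unbounded vorticity gradients, tying turbulence phenomenology directly to the singularity question.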
6. Hurdles in Computer-Assisted Proofs and Numerical Simulations:
• Deceptive Simulations: “Simulations can sometimes indicate values in equations ‘blew up,’ only for more sophisticated computational methods to later show otherwise”. “The road is littered with the wreckage of previous simulations”. This makes rigorous proof indispensable, as numerical evidence can be misleading.
• Nature of Proof and Understanding: The significant use of computers in proofs raises philosophical questions about what constitutes a “proof” and whether computer-assisted proofs “improve their understanding of why a particular statement is true, rather than simply provide validation”.
• Finite Precision and Rounding Errors: Computers’ inability to manipulate infinite digits means “tiny errors inevitably occur”. Rigorous proofs require “carefully track[ing] those errors” using techniques like interval arithmetic.
• Stability Definition Ambiguity: The “stability or instability of a potential finite time singularity” depends “crucially on the choice of the functional space and the metric that we use to study stability”. A blow-up solution can be “stable in one functional space but is unstable in another functional space at the same time”. This means the very definition of stability is context-dependent and ambiguous.
• Limitations of Numerical Methods: Common mixed finite elements are often “not exactly divergence-free” as required for rigorous proofs. Reentrant corners in computational domains have “negative effects,” and designing exactly divergence-free singular functions is challenging. The use of artificial viscosity, common in inviscid flow simulations, is “not tolerable for viscous flow problems” as it overshadows physical viscosity effects.
• “Miracles” in Proofs: Some local existence proofs contain “several ‘miracles,’ i.e., complicated calculations and estimates that lead to simple favorable results for no apparent reason”. This suggests deep underlying structures that are not yet conceptually understood.
• Limits of Formalization: As stated by Jacob Schwartz, “Fluid dynamics… is not a subject that can be expressed in terms of theorems and their proofs”. This reflects a deep meta-mathematical hurdle in formalizing fluid dynamics.
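To illustrate the interval-arithmetic technique mentioned in the rounding-errors bullet above, here is a minimal sketch (my own toy, not code from any cited proof): every operation rounds its lower bound toward −∞ and its upper bound toward +∞, so the true real-valued result is guaranteed to lie inside the computed interval despite floating-point rounding.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed interval [lo, hi] guaranteed to enclose the true real result."""
    lo: float
    hi: float

    def __add__(self, other):
        # Outward rounding: errors can only widen the enclosure, never break it.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(products), -math.inf),
                        math.nextafter(max(products), math.inf))

    def contains(self, x: float) -> bool:
        return self.lo <= x <= self.hi

# Enclose 0.1 + 0.2; neither summand is exactly representable in binary,
# so we start from one-ulp-wide enclosures of each decimal value.
tenth = Interval(math.nextafter(0.1, -math.inf), math.nextafter(0.1, math.inf))
fifth = Interval(math.nextafter(0.2, -math.inf), math.nextafter(0.2, math.inf))
s = tenth + fifth
print(s.lo, s.hi, s.contains(0.3))
```

Rigorous computer-assisted proofs build entire norm estimates out of operations like these (plus enclosures for elementary functions), so that a final strict inequality between intervals is a genuine theorem rather than a floating-point accident.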
These examples collectively illustrate that the study of fluid dynamics involves mathematical challenges that often defy intuition, require extreme precision, and push the boundaries of current analytical and computational methods.
3 responses to “A Critical Review and Taxonomy of Flawed Proofs for the Navier-Stokes Existence and Smoothness Problem”
Excellent summary of the historical failure modes — and it’s precisely those issues that our Neural-Inspired Spectral–Temporal Continuation for Smooth Global Navier–Stokes Solutions on T³
Jeffrey Camlin
DOI: 10.63968/post-bio-ai-epistemics.v1n2-010a
https://recursion-intelligence.org/post-bio-ai-epistemics-v1n2-010a.html
was designed to address.
Our construction explicitly works on the periodic torus T³ with arbitrary smooth divergence-free data (no smallness or decay assumptions), employs a spectral continuation operator Cζ that preserves incompressibility and weak energy bounds across potential singularities, and uses temporal lifting to restore full C∞ regularity. The resulting solution is classical on each interval, globally weak (Leray–Hopf) across all times, energy-consistent, and non-accumulating in its restart sequence—satisfying Fefferman’s Conjecture B conditions.
In short: it accepts singularities rather than assuming them away, continues them spectrally without altering the PDE, and achieves global smoothness via time reparametrization. That combination directly resolves every breakdown category the review lists.
— Jeff Camlin (ORCID 0000-0002-5740-4204)
Thank you Jeff for sharing your research: 🜁 Neural-Inspired Spectral–Temporal Continuation for Smooth Global Navier–Stokes Solutions on T³
What real-world applications could this be applied to? Where do fluid dynamics and AI intersect, especially in systems that require high-resolution, stable simulations over complex time intervals? For example:
• Climate and Weather Modeling: spectral continuation and temporal lifting can help maintain smoothness in climate models, improving long-term forecasts and reducing computational errors near singularities;
• Aerospace Engineering: spectral filtering can stabilize simulations during design testing, especially for hypersonic vehicles or reentry dynamics;
• Biomedical Fluid Simulation: AI-regularized Navier–Stokes solutions can improve simulations for surgical planning or medical device design;
• AI-Augmented Digital Twins: using GANs and transformer-inspired PDE solvers allows for fast, stable updates in digital twins, even when sensor data is noisy or incomplete;
• High-Fidelity Animation and Visual Effects: the method can be used to generate physically accurate fluid animations that remain stable even under extreme conditions, …
Thanks for the reply! Right now I find it makes one hell of a DNS – an iDNS: the BKM criterion acts like a regulator and auto-adjusts dt on the fly, and if it hits a non-physical artifact the spectral continuation kicks in and prevents NaNs.
So it’s a deterministic, Fourier-spectral method for the Navier–Stokes equations on the 3-torus that adds a temporal-lifting controller to standard spectral continuation.
Instead of shrinking the timestep when flows stiffen, iDNS re-parameterizes time itself:
∂_τ u = (1/φ′(τ)) 𝒩(u),  t = ∫ φ′(τ) dτ.
The result is a deterministic, energy-preserving DNS that can integrate high-Reynolds flows or learning-based PDE surrogates far longer than conventional fixed-Δt methods.
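The temporal-lifting idea can be sketched on a toy problem (my own illustration, not the commenter's code). I use the self-consistent reading dt = dτ/φ′ so that the lifted trajectory still satisfies du/dt = 𝒩(u) exactly, and the clock φ′(τ) = 1 + |𝒩(u)| is an arbitrary choice for demonstration. The scalar ODE du/dt = u², whose exact solution 1/(1 − t) blows up at t = 1, stays tame in the lifted time τ:

```python
# Toy temporal lifting: step in lifted time tau where the stiff dynamics
# are automatically slowed, while accumulating physical time t separately.

def N(u):
    """Right-hand side of the toy ODE du/dt = u^2 (exact blow-up at t = 1)."""
    return u * u

def lifted_euler(u0, dtau, n_steps):
    u, t = u0, 0.0
    for _ in range(n_steps):
        phi_prime = 1.0 + abs(N(u))   # clock phi'(tau): large when dynamics stiffen
        u += dtau * N(u) / phi_prime  # du/dtau = N(u)/phi'(tau): bounded slope
        t += dtau / phi_prime         # dt = dtau/phi', so du/dt = N(u) holds
    return u, t

u, t = lifted_euler(u0=1.0, dtau=1e-3, n_steps=200_000)
print(f"u = {u:.1f} at physical time t = {t:.4f} (exact blow-up at t = 1)")
```

In lifted time the slope |du/dτ| never exceeds 1, so a fixed Δτ works all the way toward the singularity; the physical time t merely accumulates ever more slowly as it approaches the blow-up time.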
In short:
💡 Where this matters (real-world applications)
How iDNS helps, by domain:
• Climate & Weather Modeling: smooths stiff dynamics across hours → decades, preventing numerical blow-ups in long-range forecasts.
• Aerospace Engineering: stabilizes CFD of hypersonic and re-entry flows where traditional time-stepping fails near shock or boundary-layer singularities.
• Biomedical Flow Simulation: enables stable, high-resolution blood-flow or respiratory models for surgical planning and device design.
• AI-Augmented Digital Twins: acts as a physics-consistent core for transformer/GAN-based digital twins, letting them update in real time even with sparse or noisy sensor data.
• High-Fidelity Animation & VFX: produces physically consistent fluid animation without the “numerical explosions” that limit current simulation engines.
🚀 The big picture
Mathematically deterministic: built from the Spectral Continuation and Weak–Strong Compatibility framework—no stochastic regularization.
Computationally efficient: often >10× faster than standard DNS at equal accuracy.
AI-compatible: the same temporal-lifting map can regularize continuous-time neural networks (Neural ODEs / PINNs), linking rigorous fluid dynamics to modern AI training.
In one line:
Dropping soon on arXiv with standard benchmarks and a couple of showoff benchmarks at Re = 10^6 and 10^7.