The Lax-Milgram Theorem is a fundamental result in functional analysis, particularly significant in the study of partial differential equations (PDEs). It gives conditions on a bilinear form on a Hilbert space under which the associated variational problem has one and only one solution.
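For reference, the abstract result being invoked can be stated as follows. Let $H$ be a real Hilbert space, let $a(\cdot, \cdot)$ be a bilinear form on $H$ that is continuous (there exists $M>0$ with $|a(u, v)| \leq M\|u\|\,\|v\|$) and coercive (there exists $\alpha>0$ with $a(v, v) \geq \alpha\|v\|^2$), and let $L$ be a continuous linear form on $H$. Then there exists a unique $u \in H$ such that
$$ a(u, v)=L(v), \quad \forall v \in H . $$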
Let $\Omega \subset \mathbb{R}^n$ be a bounded open set, and let $f \in L^2(\Omega)$ and $c \in L^{\infty}(\Omega)$ with $c \geq 0$ almost everywhere in $\Omega$. Then the problem: find $u \in V=H_0^1(\Omega)$ such that
$$ \forall v \in V, \quad \int_{\Omega}(\nabla u \cdot \nabla v+c u v) d x=\int_{\Omega} f v d x $$
has one and only one solution.
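Formally, this is the weak (variational) formulation of the boundary value problem
$$ -\Delta u+c u=f \text { in } \Omega, \qquad u=0 \text { on } \partial \Omega, $$
obtained by multiplying the equation by a test function $v$, integrating over $\Omega$, and integrating by parts.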
To apply the Lax-Milgram theorem, we consider the bilinear form:
$$ a(u, v):=\int_{\Omega} \nabla u \cdot \nabla v d x+\int_{\Omega} c u v d x $$
and the linear form:
$$ L(v):=\int_{\Omega} f v d x $$
We want to find $u \in V$ such that
$$ a(u, v)=L(v), \quad \forall v \in V . $$
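As a concrete illustration (independent of the proof), this variational problem can be approximated by a Galerkin method on a finite-dimensional subspace of $V$. The following minimal Python sketch assumes the one-dimensional case $\Omega=(0,1)$ with constant $c$ and $f$ and piecewise-linear hat functions on a uniform mesh; the names and discretization choices are illustrative assumptions, not taken from the text above.

```python
import numpy as np

# Minimal Galerkin sketch (illustrative assumptions: Omega = (0, 1), constant
# c >= 0 and constant f, P1 hat functions on a uniform mesh). It solves the
# discrete analogue of a(u_h, v_h) = L(v_h) on a subspace V_h of H_0^1(0, 1).
n = 99                       # number of interior mesh nodes
h = 1.0 / (n + 1)            # uniform mesh size
c = 1.0                      # the coefficient c (constant, hence in L^inf)
f = 1.0                      # the right-hand side f (constant, hence in L^2)

# Stiffness matrix: entries  int_0^1 phi_i' phi_j' dx  for hat functions phi_i.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
# Mass matrix: entries  int_0^1 phi_i phi_j dx.
M = (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) * h / 6.0
# Load vector: entries  int_0^1 f phi_i dx = f * h  for constant f.
b = f * h * np.ones(n)

# Discrete problem: find u_h in V_h with a(u_h, v_h) = L(v_h) for all v_h in V_h.
u_h = np.linalg.solve(A + c * M, b)
print("max of the discrete solution:", u_h.max())
```

The matrix $A + cM$ is symmetric positive definite, which is the discrete counterpart of the coercivity used in the Lax-Milgram argument, so the linear system is uniquely solvable.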
Step 1: Show $a(\cdot, \cdot)$ is bounded (continuous)
Since $\nabla u, \nabla v \in L^2(\Omega)^n$, $u, v \in L^2(\Omega)$, and $c \in L^{\infty}(\Omega)$, the Cauchy-Schwarz inequality applied to each integral gives:
$$ |a(u, v)| \leq\|\nabla u\|_{L^2}\|\nabla v\|_{L^2}+\|c\|_{L^{\infty}}\|u\|_{L^2}\|v\|_{L^2} . $$
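Since $\|\nabla w\|_{L^2} \leq\|w\|_{H^1}$ and $\|w\|_{L^2} \leq\|w\|_{H^1}$ for every $w \in V$, equipping $V$ with the full $H^1$ norm (one standard choice) yields
$$ |a(u, v)| \leq\left(1+\|c\|_{L^{\infty}}\right)\|u\|_{H^1}\|v\|_{H^1}, $$
so $a(\cdot, \cdot)$ is continuous on $V \times V$ with constant $M=1+\|c\|_{L^{\infty}}$.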