Fourier Neural Operators (FNOs) have gained significant attention for their ability to solve partial differential equations (PDEs) and stochastic partial differential equations (SPDEs) efficiently. Unlike traditional neural network architectures that map finite-dimensional inputs to outputs, FNOs operate directly on functions and can generalize to solutions across different resolutions. This makes them well suited to learning solution operators of SPDEs, where stochastic forcing and high-dimensional function spaces are involved.

1. Overview of Fourier Neural Operators (FNOs)

2. Why Use FNOs for SPDEs?

3. Mathematical Foundation of FNOs

$\frac{\partial u(t, x)}{\partial t} = \mathcal{L}u(t, x) + \sigma(u(t, x)) \dot{W}(t, x),$

where $u(t, x)$ is the solution, $\mathcal{L}$ is a differential operator, $\sigma$ is a noise coefficient function, and $\dot{W}(t, x)$ represents white noise.
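To make the equation concrete, here is a toy sketch of simulating such an SPDE: the 1D stochastic heat equation with $\mathcal{L} = \partial_{xx}$, constant noise coefficient $\sigma(u) = \sigma_0$, and periodic boundaries, stepped with the Euler-Maruyama scheme. The function name, grid, and parameter values are illustrative choices, not from the text.

```python
import numpy as np

def simulate_stochastic_heat(n=64, t_end=0.1, dt=1e-4, sigma0=0.1, seed=0):
    """Euler-Maruyama for du = u_xx dt + sigma0 dW on [0, 1) with
    periodic boundaries (toy discretization of space-time white noise)."""
    rng = np.random.default_rng(seed)
    dx = 1.0 / n
    x = np.arange(n) * dx
    u = np.sin(2 * np.pi * x)                     # initial condition u_0(x)
    for _ in range(int(t_end / dt)):
        # centered finite-difference Laplacian with periodic wraparound
        lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        # increment of discretized space-time white noise, scale sqrt(dt/dx)
        noise = rng.standard_normal(n) * np.sqrt(dt / dx)
        u = u + dt * lap + sigma0 * noise
    return x, u
```

Samples of $u(t, x)$ produced this way (for many random initial conditions and noise realizations) are the kind of training data an FNO would be fit on. Note the explicit scheme requires $\Delta t \le \Delta x^2 / 2$ for stability.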

$\mathcal{G}_\theta: u_0(x) \mapsto u(t, x),$

where $\mathcal{G}_\theta$ is a neural operator parameterized by $\theta$, mapping initial conditions $u_0(x)$ to the solution $u(t, x)$.
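The key property of such an operator is that it maps functions to functions, so the same $\mathcal{G}_\theta$ can be queried at any grid resolution. As a hedged illustration (not an FNO itself), the sketch below uses the exact heat-equation solution operator, which, like an FNO, is defined in frequency space and therefore evaluates consistently at different resolutions; the function name and parameters are illustrative.

```python
import numpy as np

def heat_semigroup(u0, t=0.01):
    """Exact solution operator u_0 -> u(t, .) of the heat equation on
    [0, 1) periodic: damp Fourier mode k by exp(-(2*pi*k)^2 * t).
    A stand-in for a learned G_theta, defined resolution-free in
    frequency space."""
    n = u0.shape[0]
    k = np.fft.rfftfreq(n, d=1.0 / n)         # integer wavenumbers 0..n/2
    decay = np.exp(-((2 * np.pi * k) ** 2) * t)
    return np.fft.irfft(np.fft.rfft(u0) * decay, n=n)

# Query the same operator at two resolutions of the same initial condition.
u64 = heat_semigroup(np.sin(2 * np.pi * np.arange(64) / 64))
u128 = heat_semigroup(np.sin(2 * np.pi * np.arange(128) / 128))
# The coarse answer matches the fine answer subsampled at shared points.
```

This resolution consistency is exactly what FNOs inherit by parameterizing their weights per Fourier mode rather than per grid point.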

4. Fourier Transform in FNOs

$\hat{u}(k) = \mathcal{F}[u(x)],$

where $\hat{u}(k)$ is the Fourier transform of $u(x)$, and $k$ represents the frequency components.

$u(x) = \mathcal{F}^{-1}[\hat{u}(k)].$
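On a uniform grid, this transform pair is computed with the (real) fast Fourier transform; the short sketch below demonstrates the discrete analogue of $\mathcal{F}$ and $\mathcal{F}^{-1}$ with NumPy (the signal and grid size are arbitrary choices).

```python
import numpy as np

n = 128
x = np.arange(n) / n                        # uniform grid on [0, 1)
u = np.sin(2 * np.pi * 3 * x) + 0.5 * np.cos(2 * np.pi * 7 * x)

u_hat = np.fft.rfft(u)                      # \hat{u}(k) = F[u]
k = np.fft.rfftfreq(n, d=1.0 / n)           # frequency of each coefficient
u_back = np.fft.irfft(u_hat, n=n)           # u = F^{-1}[\hat{u}]

assert np.allclose(u, u_back)               # round trip is numerically exact
# |u_hat| is concentrated at k = 3 and k = 7, the two modes present in u.
```

FNOs exploit exactly this structure: smooth PDE solutions have energy concentrated in a few low modes, so operating on `u_hat` truncated to those modes is both cheap and expressive.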

5. Architecture of Fourier Neural Operators

  1. Input Layer: lifts the input function, sampled on a grid, to a higher-dimensional channel (feature) space via a pointwise linear map.
  2. Fourier Layer: transforms the lifted features to frequency space with the fast Fourier transform.
  3. Spectral Processing: multiplies a truncated set of low-frequency modes by learned complex weights, discarding high frequencies.
  4. Inverse Fourier Transform: maps the processed modes back to physical space.
  5. Nonlinear Activation: applies a pointwise nonlinearity, together with a linear skip connection, before the next Fourier layer.
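The five steps above can be sketched as a single 1D Fourier layer in NumPy. This is a minimal illustration with random, untrained weights (real FNO implementations use PyTorch, learned parameters, multiple stacked layers, and typically a GELU activation; the names `fourier_layer`, `w_spec`, `w_skip`, and the shapes are assumptions for this sketch).

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_layer(v, w_spec, w_skip, modes=16):
    """One FNO layer sketch. v: (n, c) features on a grid;
    w_spec: (modes, c, c) complex spectral weights;
    w_skip: (c, c) pointwise linear skip path."""
    n, c = v.shape
    v_hat = np.fft.rfft(v, axis=0)                 # 2. forward FFT
    out_hat = np.zeros_like(v_hat)
    # 3. spectral processing: keep only the lowest `modes` frequencies
    #    and mix channels with a complex weight matrix per mode
    out_hat[:modes] = np.einsum("kc,kcd->kd", v_hat[:modes], w_spec)
    spectral = np.fft.irfft(out_hat, n=n, axis=0)  # 4. inverse FFT
    return np.maximum(spectral + v @ w_skip, 0.0)  # 5. nonlinearity (ReLU)

n, c, modes = 64, 8, 16
x = np.arange(n) / n
u0 = np.sin(2 * np.pi * x)[:, None]                # input function on grid
v = u0 @ rng.standard_normal((1, c))               # 1. input (lifting) layer
w_spec = 0.1 * (rng.standard_normal((modes, c, c))
                + 1j * rng.standard_normal((modes, c, c)))
w_skip = 0.1 * rng.standard_normal((c, c))
out = fourier_layer(v, w_spec, w_skip, modes)      # (64, 8) features
```

Because the learned weights live on Fourier modes rather than grid points, the same `w_spec` could be applied to an input discretized at any resolution `n >= 2 * modes`, which is the source of the resolution invariance discussed earlier.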