Fourier Neural Operators (FNOs) are a deep learning framework for solving PDEs by learning mappings between function spaces. They extend neural network architectures to model operators directly, offering efficient and accurate solutions for high-dimensional problems, including stochastic partial differential equations (SPDEs). FNOs leverage the Fourier transform to capture and propagate information globally, making them especially powerful for problems with complex, multiscale behavior.

1. Concept of Neural Operators

Conventional neural networks learn mappings between finite-dimensional vector spaces. Neural operators instead learn mappings between function spaces: the input is a function (such as an initial condition or coefficient field) and the output is another function (such as the PDE solution). Because the learned operator acts on functions rather than on a fixed grid, a trained model can be evaluated at different discretizations.

2. Fourier Neural Operator Architecture

The Fourier Neural Operator applies the Fourier transform to learn and apply global convolutions efficiently, bypassing a key limitation of traditional convolutional neural networks (CNNs), whose kernels are inherently local.
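
This global behavior follows from the convolution theorem: a pointwise product in Fourier space is equivalent to a circular convolution over the whole domain, so every output point sees every input point in a single layer. A minimal pure-Python sketch (naive $O(n^2)$ DFT, illustrative helper names, not any library's API) showing the equivalence:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real-valued sequence."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning complex values."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def circular_conv(u, w):
    """Direct circular convolution: every output depends on every input point."""
    n = len(u)
    return [sum(u[(i - j) % n] * w[j] for j in range(n)) for i in range(n)]

def spectral_conv(u, w):
    """The same operation via one pointwise product in Fourier space."""
    v_hat = [a * b for a, b in zip(dft(u), dft(w))]
    return [c.real for c in idft(v_hat)]
```

Both routes give the same result; the FNO simply learns the spectral multiplier directly in Fourier space instead of a spatial kernel $w$.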

Key Components:

  1. Fourier Transform Layer:

    The input function is transformed into the frequency domain (via the FFT), where each mode carries global information about the signal.

  2. Spectral Convolution:

    In the frequency domain, each mode is multiplied by a learned (complex-valued) multiplier $P(\xi)$:

    $\hat{v}(\xi) = P(\xi) \hat{u}(\xi),$

    where $\hat{u}(\xi)$ and $\hat{v}(\xi)$ are the Fourier transforms of the input and output, respectively.

  3. Inverse Fourier Transform:

    The filtered modes are mapped back to physical space with the inverse FFT, yielding the updated function values on the grid.

  4. Iterative Layers:

    Several such Fourier layers are stacked, each followed by a pointwise nonlinearity and bias, to build an expressive approximation of the target operator.
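
One practical detail behind these layers: the multiplier $P(\xi)$ is typically parameterized only on a fixed number of low-frequency modes, with higher modes zeroed out, which keeps the parameter count independent of the grid resolution. A minimal pure-Python sketch of that mode truncation (naive DFT, illustrative helper names):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (stand-in for the FFT)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def truncate_modes(u, k_max):
    """Keep only the k_max lowest-frequency modes (positive and negative)."""
    n = len(u)
    u_hat = dft(u)
    kept = [c if (k <= k_max or k >= n - k_max) else 0j
            for k, c in enumerate(u_hat)]
    return [c.real for c in idft(kept)]
```

A smooth, low-frequency signal passes through such a truncation essentially unchanged, while high-frequency content is removed; in an FNO the surviving modes are then reweighted by the learned multipliers.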

3. Mathematical Formulation

Given an input function $u_0(x)$, the FNO approximates the solution $u(x)$ by repeatedly applying a global convolution in Fourier space and transforming it back to physical space:

  1. Initial Lift:

    $u^{(0)}(x) = W_{\text{lift}} u_0(x).$

  2. Fourier Convolution:

    $\hat{z}^{(k+1)}(\xi) = P^{(k)}(\xi) \hat{u}^{(k)}(\xi),$

    followed by:

    $z^{(k+1)}(x) = \text{IFFT}(\hat{z}^{(k+1)}(\xi)).$

  3. Nonlinearity and Update:

    $u^{(k+1)}(x) = \sigma(z^{(k+1)}(x) + b^{(k)}).$

  4. Final Projection:

    $u(x) = W_{\text{proj}} u^{(K)}(x).$
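
The four steps above can be sketched end to end. A minimal pure-Python sketch with a naive DFT, scalar channels, fixed multipliers standing in for the learned $P^{(k)}$, and $\sigma = \tanh$ (all helper names are illustrative, not from any library):

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (stand-in for the FFT)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def fno_forward(u0, multipliers, biases, w_lift, w_proj):
    """Lift -> K Fourier layers -> projection, following steps 1-4 above."""
    # 1. Initial lift: u^(0) = W_lift u0
    u = [w_lift * v for v in u0]
    for P_k, b_k in zip(multipliers, biases):
        # 2. Fourier convolution: multiply each mode by P^(k)(xi), then invert
        z_hat = [p * c for p, c in zip(P_k, dft(u))]
        z = [c.real for c in idft(z_hat)]
        # 3. Nonlinearity and update with bias b^(k)
        u = [math.tanh(v + b_k) for v in z]
    # 4. Final projection: u = W_proj u^(K)
    return [w_proj * v for v in u]
```

In a real FNO the multipliers are complex-valued learned tensors restricted to the lowest Fourier modes, and the lift and projection are learned linear maps acting on a channel dimension; this sketch collapses all of that to scalars to keep the four steps visible.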

4. Advantages of Fourier Neural Operators