Today, oceanographic research is no longer conceivable without advanced mathematical methods, whether to understand and simulate the behaviour of the ocean or to interpret and synthesize all available observations. This is why the MEOM team contributes to the development of these methods and to the design and implementation of appropriate numerical tools.

Development of the NEMO platform

NEMO (Nucleus for European Modelling of the Ocean) is a state-of-the-art modelling framework for oceanographic research and operational oceanography.

Solutions of ocean models are most often computed at the nodes of a grid with finite resolution. The first figure shows the horizontal grid of the DRAKKAR global ocean circulation model at 1/2° resolution (only every tenth grid line is drawn). Computing these solutions requires considerable computational resources. The second figure shows the division into subdomains of the DRAKKAR global ocean model grid at 1/4° resolution (1442 × 1021 × 46 nodes), used to distribute the computation (3400 CPU hours for one year of simulation) over 186 processors of one of the IDRIS supercomputers (IBM Power4). The third figure shows the bathymetry of the Gulf of Lions continental shelf, as resolved by a high-resolution model grid (1/60° horizontal resolution and 130 vertical levels).
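The idea of splitting a model grid into subdomains for parallel computation can be sketched in a few lines. The following is an illustrative decomposition only (the 31 × 6 processor layout and the helper functions are assumptions for the example, not NEMO's actual algorithm), applied to the 1442 × 1021 horizontal grid mentioned above:

```python
def split_1d(n, parts):
    """Split n grid points into `parts` nearly equal contiguous chunks."""
    base, extra = divmod(n, parts)
    return [base + (1 if i < extra else 0) for i in range(parts)]

def decompose(nx, ny, px, py):
    """Return the (width, height) of each subdomain in a px-by-py layout."""
    xs, ys = split_1d(nx, px), split_1d(ny, py)
    return [(w, h) for h in ys for w in xs]

# DRAKKAR 1/4-degree horizontal grid, split over a hypothetical
# 31 x 6 processor layout (31 * 6 = 186 processors)
subdomains = decompose(1442, 1021, 31, 6)
```

Each processor then advances the model equations on its own subdomain, exchanging boundary values with its neighbours at every time step.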

The MEOM team contributes to the development of NEMO in several ways:

The contribution of the MEOM team to the development of NEMO is mainly supported by the DRAKKAR and MERSEA projects.

The MEOM team also makes secondary use of other modelling tools, including the hybrid coordinate ocean model HYCOM (illustrated in one figure above) and ecosystem models (sketched in the other figure above, which shows the interrelations between the state variables).

Data assimilation

Data assimilation is the generic name for methods that compute a coherent description of the time evolution of a phenomenon (for us, the ocean circulation) by combining a partial observation dataset with the theoretical knowledge of the phenomenon (for us, an ocean model such as NEMO).

Sketch of the data assimilation principle: The general problem is to determine, among all solutions of a model, the one that minimizes a certain norm of the difference with respect to the observations (depending on their accuracy). The model can also be assumed imperfect, by looking for an approximate solution of the model equations (which then become a weak constraint instead of a strong constraint). Furthermore, if the observations are insufficient to determine the solution, it is necessary to introduce information about the model initial condition (as an additional weak constraint). The relative importance of these sources of information (observations, model, initial condition) can be governed by statistical assumptions on the amplitude and shape of their respective errors.
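The trade-off described above is commonly written as a cost function. A standard weak-constraint formulation (the symbols below are generic notation, not defined in the text: x_b is the prior initial condition, y_k the observations, H_k the observation operator, M_k the model operator, and B, R_k, Q_k the error covariances of the initial condition, observations, and model, respectively) reads:

```latex
J(x_0,\dots,x_N) =
  (x_0 - x_b)^{\mathrm{T}} B^{-1} (x_0 - x_b)
+ \sum_{k=0}^{N} \big(y_k - H_k(x_k)\big)^{\mathrm{T}} R_k^{-1} \big(y_k - H_k(x_k)\big)
+ \sum_{k=1}^{N} \big(x_k - M_k(x_{k-1})\big)^{\mathrm{T}} Q_k^{-1} \big(x_k - M_k(x_{k-1})\big)
```

The first term constrains the initial condition, the second penalizes the misfit to the observations weighted by their accuracy, and the third is the weak model constraint; enforcing the model equations exactly (the strong-constraint case) amounts to removing the third term.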

The MEOM team contributes to the development of ocean data assimilation methods in several ways:

The contribution of the MEOM team to the development of data assimilation is mainly supported by the MERSEA and ONR projects.

One class of assimilation methods solves the assimilation problem by sequentially correcting the model forecast using the observations. To do so, they use, at each assimilation step, an estimate of the error covariance matrices of the state of the system (P) and of the observations (R). The figure describes one assimilation cycle (forecast/correction cycle) of the Kalman filter. One way of characterizing the model error is to randomly draw an ensemble of values for a parameter that is a potential source of error and to perform the corresponding ensemble of model simulations (Monte Carlo method). For instance, the figure shows an ensemble of temperature profiles simulated by a 1D mixed-layer model, resulting from an ensemble of perturbations of the wind forcing (the unperturbed model is shown in red).
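The forecast/correction cycle above can be sketched with a minimal ensemble (stochastic) Kalman filter step, assuming a toy three-variable linear model and NumPy; the model matrix, observation operator, and all numbers are illustrative placeholders, not the team's actual system:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 3, 1, 50  # state size, observation size, ensemble size

M = np.array([[0.9, 0.1, 0.0],   # toy linear model (stand-in for the ocean model)
              [0.0, 0.9, 0.1],
              [0.1, 0.0, 0.9]])
H = np.array([[1.0, 0.0, 0.0]])  # observe the first state variable only
R = np.array([[0.1]])            # observation-error covariance

# Forecast step: propagate a perturbed ensemble (Monte Carlo estimate of P)
ens = rng.normal(0.0, 1.0, (N, n))             # initial ensemble
ens = ens @ M.T + rng.normal(0, 0.05, (N, n))  # model step plus model error

# Correction step: Kalman gain computed from the ensemble covariance
P = np.cov(ens.T)                              # sample estimate of P
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
y = np.array([0.5])                            # one observation
for i in range(N):                             # update with perturbed observations
    yp = y + rng.normal(0, np.sqrt(R[0, 0]), m)
    ens[i] += K @ (yp - H @ ens[i])

analysis = ens.mean(axis=0)                    # corrected state estimate
```

The ensemble spread before the correction plays the role of the forecast error covariance P; after the correction, the spread shrinks in the observed components, exactly as in the forecast/correction cycle of the Kalman filter.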