OSU Tidal Inversion Software documentation

OTISoo (2009)

Gary D. Egbert & Lana Erofeeva

1. Getting Started

2. Setting up the computational domain

2.1. Making the grid file

2.2. Editing/Viewing the grid and Defining Open Boundary Conditions

3. Setting run parameters

4. Getting a prior solution with Fwd_fac(_b)

5. Incorporating TOPEX/POSEIDON Altimetry and other Tidal Data

5.1. Collecting data with data_cooker

5.2. Pathfinder data package

5.3. Other tidal data formats

6. Calculating the "temporal data structure" matrix with mkB

7. Setup for representer calculation

8. Calculating representers with repX(_b)

9. Calculating the spatial representer matrix with rpx_to_rp

10. Calculating representer coefficients with reduce_B

11. Computation of the Inverse Solution

12. Posterior error calculation

13. OTISoo setup summary

14. Inverse problem solution summary



The OSU Tidal Inversion Software (OTIS) is a relocatable implementation of the tidal data inversion scheme described in [EBF 1994] and further developed in [EE 2002]. Since 2002 OTIS has been significantly modified, incorporating extra functions and using object-oriented features of Fortran 90.


1. Getting Started »

Unpack OTISoo (OTISoo.tar.Z) in the destination directory where you are going to solve your local tidal inverse problem. Significant additional disk space will generally be required for storing intermediate results of the computations, especially if there is not enough RAM to solve your problem.

Note! You must have LAPACK and BLAS installed on your system to compile many of the OTISoo components. Note that LAPACK and BLAS are ALREADY included in some popular operating system distributions, such as Red Hat Linux/Fedora; see http://www.netlib.org/lapack for details.

Note! The text file OTISoo/README provides a set of "cookbook" instructions for computing an inverse tidal solution (for the Yellow Sea) using the simplest program defaults. After initial unpacking and setting up of OTISoo, it is probably best to go through this simple example to get an overview of OTISoo usage. Then refer to this manual for more details, or to refine usage for your particular purposes.

After unpacking OTIS you will have the following directory structure:



Directories: bin, src, scripts, matlab, local, DB

Topography data bases (in DB): h_tpxo, u_tpxo

Files: crd, grid.cdl, run_param
In the outline above, crd is a file, while src and matlab are directories. h_tpxo and u_tpxo are symbolic links to the current versions of the global tidal elevation and transport data bases. The actual working directories for each local region will be made in OTISoo/local.

OTISoo codes need to be compiled first (only once!) in OTISoo/bin with the makefile provided in this subdirectory. You might need to adjust the makefile for your particular environment. To make all OTISoo codes type "make linux.all" and "make linux.all_b" in OTISoo/bin if your system is running Linux, or "make unix.all" and "make unix.all_b" if your system is running Unix. The makefile can in principle be adjusted for any operating system; just keep in mind that OTISoo was tested only on Linux and Unix.

To execute OTISoo codes from any local subdirectory without specifying the full path, you need to add */OTISoo/bin to your environment variable "path". It is essential that the <your_path>/OTISoo/bin directory be added to the search path that is set up at initialization of each new UNIX shell (i.e., when .cshrc is executed at shell start up). It is a good idea to add <your_path>/OTISoo/scripts as well. This is where some useful shell scripts provided with OTIS are located, and where you can put your own shell scripts for running sequences/loops of OTISoo codes.

To use the OTISoo MATLAB codes you might also want to add <your_path>/OTISoo/matlab to the environment variable MATLABPATH.

With your workspace efficiently organized and the environment variables set, you may now perform the steps to compute an inverse solution from the local area subdirectory (which we call MyArea below, to be specific).

To create the new subdirectory in OTISoo/local execute the script crd

crd  MyArea

from directory */OTISoo/. This will organize subdirectories and work space within OTISoo/local. You may run crd as many times as you wish, creating multiple directories for various local areas. crd also gives you some hints about further steps in using OTISoo. The result will be sub-directories in OTISoo/local organized this way:

MyArea/: dat, out, repx1, exe, prm
Directories dat, out and repx1 are initially empty; exe contains an ASCII control file run_param. Directory prm contains a control file cf_difvar, which you might want to edit (see comments on programs Varest and Diffuse below for details) and a file Constituents.

Note! In OTISoo the desired constituents are written in groups - one group per line, with constituents separated by blanks. For example:

m2 s2

k1 o1

With this notation each of the pairs m2,s2 and k1,o1 is treated as correlated constituents. The maximum number of constituents per group is limited to four (due to computational limitations and expediency).
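For readers scripting around OTISoo, the grouping convention can be sketched as follows (a minimal Python illustration, not part of the distribution; the four-per-group limit is the one stated above):

```python
# Parse a Constituents file laid out as above: one group of correlated
# constituents per line, names separated by blanks, at most 4 per group.

def parse_constituent_groups(text):
    groups = []
    for line in text.splitlines():
        names = line.split()
        if not names:
            continue                      # skip blank lines
        if len(names) > 4:
            raise ValueError("at most 4 constituents per group: %r" % line)
        groups.append(names)
    return groups

print(parse_constituent_groups("m2 s2\nk1 o1"))  # [['m2', 's2'], ['k1', 'o1']]
```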


Note! Initially only one subdirectory for representers (repx1) is created. If you have more than one group of correlated constituents you need to create representer subdirectories for the other groups. As soon as ../prm/Constituents is set, you may run the script set_repx_subdir, which will make the subdirectories for you, or create them with mkdir. Thus for the example above there will be 2 subdirectories in OTISoo/local/MyArea/ - repx1 and repx2.

Note! All executables have to be run from OTISoo/local/MyArea/exe/. Furthermore, all other paths will be given relative to this subdirectory; for example, OTISoo/local/MyArea/prm/ will be given as ../prm. To get the usage of any OTISoo code just type <code_name> -Q.

With your space efficiently organized you may now start to define your inverse tidal problem. To do this the following must be specified:

The tidal constituents should be given in the file ../prm/Constituents as described in the notes above. The file can be edited with any text editor or from the MATLAB GUI gridEdit described in the next section. All other definition steps are described in the following sections.

2. Setting up the computational domain»

The computational domain can be set up with tools provided in OTISoo and described in this section. A bathymetry grid can be defined by the user or created with these OTISoo tools, as can open boundary conditions for a prior model. To view/edit the bathymetry grid you will need MATLAB; however, a bathymetry grid and open boundary conditions can be created without using MATLAB.

2.1. Making the grid file»

The first step is to make a bathymetry or grid file. This can be accomplished by running the program mk_grid from OTISoo/local/MyArea/exe. The program prompts for the area limits, database preference, etc. mk_grid usage is:

  mk_grid [-g<output_grid_name>]
 Output: ../prm/grid

 Min depth = 2m
 Input: S&S in ../../../DB/topo_11.1.img
 NO GEBCO mk_grid support with the public distribution due to netcdf installation/compiler compatibility problems
 on local systems
        Local bathymetry MUST be
        in 3 columns ASCII format:
        lat1 lon1 H1
        lat2 lon2 H2

Bathymetry data bases are not provided with the OTISoo distribution and should be downloaded from the corresponding sites; e.g., the Smith & Sandwell bathymetry data base is available for download at

As a result a ../prm/grid file is produced. The file is a sequential FORTRAN binary file with 4 records:

(1) header : n, m, theta_lim, phi_lim, dt, nob, where n and m are the grid dimensions, theta_lim and phi_lim are the latitude and longitude limits (degrees), dt is the time step (also serving as a cartesian/polar indicator), and nob is the number of open boundary nodes.

(2) list of open boundary node indices (i,j); This is an integer(kind=4) array of size (2,nob).

(3) bathymetry (depth, m) real(kind=4) n x m array of water depth. The bathymetry data is interpreted throughout the inversion software as the average water depth over each of the n x m grid cells, as illustrated in figure 1.

 (4) ocean/land mask : integer(kind=4) n x m array of 1 for ocean nodes, 0 for land nodes
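The four-record layout can be sketched in Python as below. The record framing and field widths are assumptions (4-byte Fortran record markers as written by common compilers, little-endian 4-byte ints/reals, theta_lim/phi_lim as pairs of reals); use grd_out.m and the OTISoo sources as the authoritative definition.

```python
# Hedged sketch of writing/reading the 4-record OTIS grid file.
import os
import struct
import tempfile

def write_grid(fname, n, m, theta_lim, phi_lim, dt, iob, depth, mask):
    def rec(payload):                      # Fortran sequential record framing
        mark = struct.pack("<i", len(payload))
        return mark + payload + mark
    with open(fname, "wb") as f:
        f.write(rec(struct.pack("<2i", n, m) +
                    struct.pack("<4f", *theta_lim, *phi_lim) +
                    struct.pack("<f", dt) +
                    struct.pack("<i", len(iob))))                  # (1) header
        f.write(rec(struct.pack("<%di" % (2 * len(iob)),
                                *[k for ij in iob for k in ij])))   # (2) OB nodes
        f.write(rec(struct.pack("<%df" % (n * m), *depth)))         # (3) depths
        f.write(rec(struct.pack("<%di" % (n * m), *mask)))          # (4) mask

def read_grid_header(fname):
    with open(fname, "rb") as f:
        f.read(4)                                          # leading marker
        n, m = struct.unpack("<2i", f.read(8))
        lims = struct.unpack("<4f", f.read(16))
        (dt,) = struct.unpack("<f", f.read(4))
        (nob,) = struct.unpack("<i", f.read(4))
    return n, m, lims[:2], lims[2:], dt, nob

fname = os.path.join(tempfile.mkdtemp(), "grid")
write_grid(fname, 2, 3, (30.0, 40.0), (120.0, 130.0), 1.0,
           [(1, 1)], [10.0] * 6, [1] * 6)
print(read_grid_header(fname))  # (2, 3, (30.0, 40.0), (120.0, 130.0), 1.0, 1)
```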

Note that if you use some other source of bathymetry data (for example, GEBCO) you might want to use mk_grid for this (-l option). You would then need to translate your bathymetry into a 3-column ASCII file as described above. Another option is to construct a grid file in OTISoo format directly, using the conventions outlined in Figure 1. The MATLAB script grd_out.m in OTISoo/matlab/Functions can be used to output any bathymetry matrix in OTIS format. Thus if you load an n x m bathymetry grid into MATLAB in array hz, with the ocean/land mask in mz, latitude and longitude limits in array ll_lims = [ lon1-dlon/2,lon2+dlon/2,lat1-dlat/2,lat2+dlat/2 ], dt = time step (cartesian/polar indicator), and iob = [] (you can set the list of open boundary nodes later with gridEdit), then use

grd_out(grid_file_name,ll_lims,hz,mz,iob,dt) .

A grid file in the proper (OTIS) format will be written to grid_file_name.

Arakawa C-grid

Figure 1: Grid conventions used in OTIS. An Arakawa C-grid is used for all dynamical calculations. The model domain is divided equally into n columns and m rows, for a total of n x m grid cells. The longitude and latitude limits (which must be specified, in degrees, in the grid header) correspond to the edges of the model domain, not the centers of the edge grid cells. Elevations z are taken to be averages over each cell. Similarly, grid bathymetry is given for each cell, and should correspond to the average water depth in each cell. Volume transports U and V are specified on grid cell edges, and are interpreted to be the average volume transport over the cell edge. Boundary conditions at the coast are specified on the U and V nodes. Open boundary conditions are given by specifying the elevation z for open boundary edge cells, or transports on edge U or V nodes.
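A consequence of this convention worth keeping in mind: since the header limits are domain edges, z-node (cell-center) coordinates sit half a cell in from the limits. A small illustrative sketch:

```python
# Cell-center coordinates from the header limits (illustration only).

def cell_centers(lim1, lim2, ncells):
    step = (lim2 - lim1) / ncells
    return [lim1 + (i + 0.5) * step for i in range(ncells)]

# 4 columns spanning 120..124 degrees -> centers at 120.5 .. 123.5
print(cell_centers(120.0, 124.0, 4))  # [120.5, 121.5, 122.5, 123.5]
```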

2.2 Editing and Viewing the Grid and Defining Open Boundary Conditions »

Open boundary conditions may be set in 2 ways - either in MATLAB (which allows you to edit the grid and set an arbitrary open boundary) or using the executable ob_eval. If ob_eval is used, the list of open boundary nodes must be set beforehand as described above, either by mk_grid or user-provided from MATLAB. ob_eval usage:

Usage: ob_eval [-M<model_file>] [-U]


           elevation OBC

           output files  = ../prm/obc_<constituent>

           model_file = OTIS/DB/h_tpxo


           if -U - normal flow obc, the defaults are:

           model_file = OTIS/DB/u_tpxo,

           output files  = ../prm/uv_obc_<constituent>

The result will be the set of open boundary files, one for each constituent from ../prm/Constituents.

To view and edit the grid and to complete setting up constituents and open boundary condition files, there is an interactive MATLAB program called gridEdit. We first describe how to use the MATLAB grid editing program. Get into MATLAB in ../exe, and run gridEdit. A dialogue box for browsing for a grid file will appear. Choose the grid file you wish to look at and/or edit. The grid will be plotted on the screen, as in Figure 2.

gridEdit screen 1

Figure 2: MATLAB grid editor display. This program (gridEdit.m) makes it easy to edit the land/ocean mask, and to set up boundary conditions. Further details on the different groups of buttons are given below.

There are four groups of buttons below the plotted bathymetry grid. These allow you to:

  1. zoom the view of the grid in or out. These buttons are labeled: Sub-grid, Prev_grid, and Full_grid. After activating Sub-grid, you can use the mouse to define an area of the grid to zoom in on. The program enlarges the selected area to keep the same aspect ratio for the grid. Prev_grid returns you to the previous view of the grid. Full_grid restores the view to the full grid.
  2. Edit the land/ocean mask. This feature is usually used to mask off parts of the domain which you do not want to include in your model (e.g., lakes, or small bays which get cut off from the remaining ocean nodes), or to modify the topology of land and ocean. Mask activates the mouse for editing the mask one node at a time. Left mouse button masks an ocean node, the center button unmasks, and the right button terminates masking. (Button definitions are for a UNIX system; the program will also work on a PC, with a two button mouse. In this case the left button masks, right unmasks, and shift-left terminates; you may have to modify routine grdmon.m to adapt to your system). Until you terminate masking you can't do anything else in the program. Mask Blk lets you mask all nodes in a rectangle by dragging with the mouse to define a rectangular block of nodes. Unmask Blk unmasks a block in the same way. For all mask editing options it is generally best to first zoom in on a particular area, then edit the mask in this area. The button marked H0 lets you define a minimum depth for ocean nodes. First you can edit the text field which defines the minimum depth (5 meters by default). Then by clicking on the H0 button all nodes at the minimum depth or less are masked. By clicking again the minimum depth feature can be turned off, and all nodes shallower than the current minimum depth are unmasked.
  3. Define constituents. This is accomplished by editing the constituents field (3rd in the bottom row of buttons). The program reads the file ../prm/Constituents to initialize this. If you modify the constituents by editing this field, the new set of constituents will be written to the file ../prm/Constituents when you save the grid. Separate constituents within a group with blanks, and separate groups of constituents with semicolons.
  4. Define the location of open boundary nodes, and then use a tidal elevation (transport) file such as h_tpxo (u_tpxo) to define the elevations (transports) on the open boundary. This is the simplest way to set up open boundary conditions. First you need to define where the open boundary is. This is accomplished by activating the open boundary node editor Edit_OB, and clicking with the left button to make an ocean node an open boundary node, with the center button to erase an open boundary node and the right to terminate open boundary node editing. There is also a feature (the button Edge=OB) which allows you to define all edge ocean nodes as open boundary nodes. If you want part of the open boundary to be on the edge of the grid, it is simplest to start by clicking this, then editing. Once you have defined the open boundary nodes (and the desired constituents; see the previous item), you can define the elevations or transports on these nodes.

Two open boundary condition (OBC) types are implemented for calculating a forward solution - elevation and normal-flow open boundary conditions. A radiation BC can be set for calculating representers and the inverse solution. By default elevation OBCs are used. This can be changed using the upper menu labeled OBC type. To define elevations or transports on the open boundary nodes click on Make_OBC. This opens a dialogue box that you can use to choose a tidal model for defining boundary conditions. Pick the desired tidal elevation solution (e.g., h_tpxo) or transport solution (e.g., u_tpxo). The files obc_<constituent> or uv_obc_<constituent> (used by the forward modeling codes for defining open boundary conditions) will be written to the subdirectory ../prm, the same way ob_eval does. In fact, ob_eval is called from gridEdit after the Make_OBC button is activated and a model to take the open boundary conditions from is chosen.

Once you have edited the mask as desired, defined the open boundary conditions, and set the constituents, click the Save button. You will be prompted for an output file name. After completing these steps you have the files Constituents, obc_* (uv_obc_*), and grid ready in ../prm. This completes set up of the model domain and boundary conditions for the tidal inverse problem.

3. Setting run parameters »

There are a number of adjustable parameters and program options that can be set by the user. These options are described in the following sections. Some of these parameters/options need to be used consistently over all inversion steps. To make it simpler to maintain this consistency, and to simplify routine use of the programs, a control file called run_param can be used. An initial copy of run_param is supplied in MyArea/exe. The file is sufficiently commented to make the meaning of parameters clear (once you understand the program options). If run_param is not found in ../exe by OTISoo executables, standard defaults are used.

Note! Some OTIS programs have command line options, which can also be used to change defaults. Command line options override parameter values set in run_param, which in turn override the standard program defaults. To see the usage of any OTISoo code use command line option -Q, i.e. type <code_name> -Q
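The precedence just described can be sketched as follows (a toy illustration with hypothetical values, not actual OTISoo parameters):

```python
# A command line option overrides run_param, which overrides the
# built-in default.

def resolve(cmdline_value, run_param_value, default):
    if cmdline_value is not None:
        return cmdline_value
    if run_param_value is not None:
        return run_param_value
    return default

print(resolve(None, None, 1.0))  # 1.0  (built-in default)
print(resolve(None, 0.5, 1.0))   # 0.5  (run_param overrides the default)
print(resolve(2.0, 0.5, 1.0))    # 2.0  (command line overrides both)
```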

Program Defaults

The defaults for the parameters set in run_param are common throughout the whole inversion procedure. Here we show the list of the defaults in the order they are given in run_param, with references to the documentation sections for detailed descriptions. A reasonable strategy for most users will be to keep these defaults initially, changing them only as you understand the options better.

  1. Obsolete (kept for consistency with previous OTIS versions), do not use
  2. Maximum RAM (Mb) available - not limited if the line is empty
  3. Prior model file name (elevations): ../out/h0.df.out (see section 4 for details)
  4. Prior model file name (transports): ../out/u0.df.out (see section 4 for details)
  5. Correction model file name: no correction (see section 6 for details)
  6. Elevation open boundary conditions: 1 (2 for normal component of transports; 3 for radiation BC; see section 2.2 for details)
  7. Friction velocity : default is a constant 1m/s (see section 4  for details)
  8. Friction velocity file name: ../prm/uv.vel.unf (if the friction velocity parameter is set to zero a file name should be provided on this line; see section 4 for details)
  9. Use Diffuse output ../prm/diff.unf to calculate covariance scales (see section 7  for details): 1 (set to 0 to use simplified normalization)
  10. Data file name: ../dat/tpxbin.dat (see section 5 for details)
  11. Covariance file name: ../prm/covsc_* (for repX, rlcX: see section 7 for details)
  12. Open boundary variance scale: 1. (for repX, rlcX: see section 8 and section 11 for details)
  13. Rigid boundary variance scale: 1. (for repX, rlcX: see section 8 and section 11 for details)
  14. Interior variance scale: 1. (for repX, rlcX: see section 8 and section 11 for details)
  15. Covariance file name: ../prm/covsc_*  for Sml_df: (see section 12 for details)
  16. Open boundary variance scale: 1. for Sml_df  (see sections 8,11,12 for details)
  17. Rigid boundary variance scale: 1. for Sml_df  (see sections 8,11,12 for details)
  18. Interior variance scale: 1. (for Sml_df: see sections  8,11,12 for details)
  19. Damping parameter: 1. (see section 10 for details)
  20. Truncation parameter for reduce_B : by default this is equal to the total number of representers (i.e., no truncation; see section 10 for details)
  21. Number of blocks for reduce_B: by default this is 1 (i.e., no blocking; see section 10 for details)
  22. Delete old G.dat (WW.dat) (see section 10 for details)
  23. Do not replace G.dat with WW.dat (see section 10 for details)
  24. rlcX makes final solution (does not loop over realization for posterior error estimation) (see sections 11,12 for details)

For consistency purposes it is strongly recommended to set the values of parameters 3, 4, 6 and 7 in run_param unless you intend to use the defaults.

4. Getting a prior solution with Fwd_fac(_b) »

The first step in the actual inverse calculation is to compute a prior solution for your region/constituents. This can be accomplished by using Fwd_fac(_b), solving linearized SWE.

Note! Here and below "_b" denotes the "out-of-core" (blocked factorization) version of a code. Use the blocked versions when not enough RAM is available for solving your problem. It is recommended to set the RAM limit (Mb) in run_param. If the default number of blocks (2) still does not fit in the limited RAM, you will get a recommendation on the number of blocks to use. The number of blocks should be set to a whole power of 2, using the -N option for all *_b codes.
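Since -N must be a power of 2, a recommended block count has to be rounded up accordingly. A small helper (illustration only; the actual recommendation is printed by the OTISoo codes themselves based on the RAM limit in run_param):

```python
# Round a recommended block count up to the nearest power of 2,
# as required by the -N option of the *_b codes.

def blocks_pow2(recommended):
    nblk = 1
    while nblk < recommended:
        nblk *= 2
    return nblk

print(blocks_pow2(3))  # 4
print(blocks_pow2(8))  # 8
```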

For solving the linearized SWE we omit advection (generally this is reasonable for tidal problems, except in very shallow water), and we use a linear parameterization of bottom friction. To define this parameterization you can provide an estimate of tidal currents to program mkFrv (described below), or you can use a simple constant linear bottom drag of the sort described in [EBF 1994].



 Fwd_fac  [-k<drag coefficient> -u<friction velocity value>

                   -h<min depth for drag> -n -r -f<forcing file name>

                   -t<tidal loading & ocean self attraction file>

                  -U<Jayne_internal_drag_scale> -E<EdZaron_internal_drag_scale>]


  k=0.003 - <drag coefficient>

  1m/s - <friction velocity value> or set in run_param. Run fwd_fac -u0

               if you want to use the output of mkSpeed as the friction velocity field

  h0=10m - <min depth for drag>

  Forcing is calculated by atgf

  Elevation OBC (files ../prm/obc_<constituent>)

  If -n then NORMAL FLOW OBC (file ../prm/uv_obc_<constituent>)

  If -r then RADIATION OBC

  Use SAL file: ../../../DB/h_tpxo6.2_sal.

              To use crude correction beta=0.9

              use -t WITHOUT any file name

  Internal drag scales=0

Options for Fwd_fac_b are exactly the same plus an additional option -N<number of blocks> to set number of blocks for out-of-core factorization.


Fwd_fac(_b) input files include grid, obc_* and Constituents. The programs look for these input files in ../prm/. To use a non-constant bottom drag in the linearized code Fwd_fac(_b) you need to also supply a velocity model file, called ../prm/uv.vel.unf. This is most simply constructed by running mkFrv first (see below).

For details on the estimation of internal wave drag see [Jayne and St. Laurent, 2001] and [Zaron and Egbert, 2006]. The Zaron and Egbert internal wave drag calculation is implemented in the code. If Jayne's internal wave drag definition is to be used, the WE and SN components of the drag should be provided by the user in the files ../prm/Jayne_u.dat and ../prm/Jayne_v.dat. The internal wave drag components should be written in these files in OTISoo grid format (see section 2.1).


Outputs from the forward modeling code consist of two binary files. The first contains elevations on the z-nodes, the second volume transports on the U and V nodes. For the linearized forward solver Fwd_fac the files are named h0.df.out and u0.df.out (for Fwd_fac_b, h0.bdf.out and u0.bdf.out). These files are written to the subdirectory ../out/.

Files h0.df.out and h0.bdf.out are Fortran binary sequential files in the format described below:

(1) header : n, m, nc, theta_lim, phi_lim, c_id, where nc is the number of constituents and c_id is the list of constituent IDs (n, m, theta_lim and phi_lim are as in the grid file header)

(2) tidal elevations (m) (nc records)

Files u0.df.out and u0.bdf.out are Fortran binary sequential files of a similar format:

(1) header : n, m, nc, theta_lim, phi_lim, c_id, as in h0.df.out above

(2) tidal transports (m^2/s) (nc records)
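The header record can be sketched as below. The record markers, field widths and the storage of c_id as nc 4-character constituent names are assumptions; use h_in.m / h_out.m in OTISoo/matlab/Functions as the authoritative definition.

```python
# Hedged sketch of the h0.df.out / u0.df.out header record.
import os
import struct
import tempfile

def write_h0_header(fname, n, m, cons, theta_lim, phi_lim):
    payload = (struct.pack("<3i", n, m, len(cons)) +
               struct.pack("<4f", *theta_lim, *phi_lim) +
               b"".join(c.ljust(4).encode() for c in cons))
    mark = struct.pack("<i", len(payload))
    with open(fname, "wb") as f:
        f.write(mark + payload + mark)

def read_h0_header(fname):
    with open(fname, "rb") as f:
        f.read(4)                                  # leading record marker
        n, m, nc = struct.unpack("<3i", f.read(12))
        lims = struct.unpack("<4f", f.read(16))
        cons = [f.read(4).decode().strip() for _ in range(nc)]
    return n, m, lims[:2], lims[2:], cons

fname = os.path.join(tempfile.mkdtemp(), "h0.df.out")
write_h0_header(fname, 4, 5, ["m2", "s2"], (30.0, 40.0), (120.0, 130.0))
print(read_h0_header(fname))  # (4, 5, (30.0, 40.0), (120.0, 130.0), ['m2', 's2'])
```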

If you want to use a prior solution from some other source (i.e., not calculated by Fwd_fac(_b)) you must put the elevations and transports in the format of h0.df.out and u0.df.out to proceed with the solution of your inverse problem as outlined below. Use MATLAB scripts  h_in.m and u_in.m to read and h_out.m and uv_out.m in OTISoo/matlab/Functions to write your elevations and transports in the standard format. To view the tidal elevation and transport (current) solutions run MATLAB codes hPlot.m and uvPlot.m (curPlot.m) from ../exe.

Note! Matrix factorization is a time- and memory-consuming procedure. To optimize the computational process and support the inter-constituent correlation case, factored matrices (Fwd_fac) or their blocks (Fwd_fac_b) are saved in ../dat. This way they can be reused for calculations by other OTISoo codes (for an unchanged set of parameters in run_param). In OTISoo the factored matrix or matrix block file names contain the names of the constituents for which they were produced. This implementation significantly increases the necessary disk storage. Should you change the friction velocity or any of the Fwd_fac(_b) inputs, the factored matrices or their blocks should be cleaned. For this use the script clean_Sfac [-b] in OTISoo/scripts: clean_Sfac cleans the matrix output of Fwd_fac, and clean_Sfac -b cleans the block output of Fwd_fac_b.

Linearized Bottom Drag»

For the linearized SWE bottom drag is parameterized as cD u0 U / H. Here cD is a non-dimensional drag coefficient (cD = 0.003 by default), H is water depth, U is volume transport, and u0 is a parameter defining water speed in m/s. The water speed, or "friction velocity", parameter can be set to a constant, or it can be estimated (as a function of position in the model domain). Note that the parameterization of bottom drag used in [EBF1994] is equivalent to u0=10m/s. By default Fwd_fac uses a constant friction velocity u0=1m/s. Different friction velocities can be specified via a command line option or in the control file run_param. In particular, to use a different constant value for the friction velocity type: Fwd_fac -u<value>. For a spatially varying friction velocity the output of mkFrv (../prm/uv.vel.unf) should be used, by typing Fwd_fac -u0 (discussed below). It is strongly recommended that you set the friction velocity value in the control file run_param. This parameter is used by various OTISoo codes and should be kept the same to maintain consistency between steps.

It is typically useful to set u0 to values somewhat larger than realistic tidal velocities (which in the deep ocean rarely exceed a few cm/s or so). Assuming a larger friction velocity stabilizes calculations somewhat, and has almost no effect on tidal elevations. (Effects on currents may be somewhat greater, but are still usually small, provided u0 is not too large.) The increased stability helps to eliminate spurious noise in velocity components near critical latitudes for diurnal constituents.
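A quick numeric illustration of the drag term cD u0 U / H (cD = 0.003 and u0 = 1 m/s are the stated defaults; the transport and depth values below are arbitrary):

```python
# Linearized bottom drag term from the parameterization above.

def linear_drag(U, H, cD=0.003, u0=1.0):
    return cD * u0 * U / H

# The same transport produces far more drag in shallow water:
print(linear_drag(U=100.0, H=4000.0))  # deep ocean
print(linear_drag(U=100.0, H=10.0))    # shallow shelf
```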

Program mkFrv calculates the drag tensor components at the latitudes and longitudes of the U and V nodes, using transport fields from a prior model (in the format specified below). The only option for mkFrv is -Qoff; with it the user is prompted and can change the defaults at the prompts. For the solution of the inverse problem, the non-linearity in the dynamical equations due to quadratic bottom drag is linearized in the neighborhood of the prior tidal solution. This leads to solving linear dynamical equations with a linearized drag tensor computed from the velocities of the prior solution. The prior model should be named in the control file run_param, or the default name (OTISoo/DB/h_tpxo7.2) will be taken. The friction velocity scales are then written to the file uv.vel.unf in ../prm/.

One possible sequence of steps to get a solution with a spatially varying friction velocity is:

  1. Fwd_fac -u2 - makes the first version of ../out/u0.df.out
  2. mkFrv - makes the drag tensor in ../prm/uv.vel.unf (enter ../out/u0.df.out at the prompt "Enter name of the transport field file")
  3. Clean the Fwd_fac matrix output with clean_Sfac
  4. Fwd_fac -u0 - replaces ../out/u0.df.out with the second (refined) version
  5. Clean the Fwd_fac matrix output with clean_Sfac again if you want to use a constant friction velocity for further calculations

The format of the output from mkFrv is as follows:

(1) header

(2) friction velocity scales (8 records)

The scales are then added up to give linearized drag terms:

ruu = r1u + r11u

rvv = r1u + r22u

ruv= r12u

rvu= r12u
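Written out as code (a trivial sketch; the variable names follow the sums above, and reading the 8-record uv.vel.unf file itself is not shown):

```python
# Assemble the linearized drag tensor from the mkFrv scales.

def drag_tensor(r1u, r11u, r22u, r12u):
    ruu = r1u + r11u
    rvv = r1u + r22u
    ruv = r12u
    rvu = r12u
    return ruu, rvv, ruv, rvu

print(drag_tensor(0.5, 0.25, 0.125, 0.0625))  # (0.75, 0.625, 0.0625, 0.0625)
```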

Ocean Loading and Self - Attraction

The linearized direct solver corrects the tidal forcing for ocean self-attraction and loading (SAL), using a SAL correction computed for a global model (TPXO*.*). The global SAL correction (10 constituents) is in the file OTISoo/DB/h_tpxo6.2.sal. All codes use this correction file by default. Because SAL corrections are considerably smoother than the tides themselves, the global scale correction should be sufficiently accurate for most regional scale tidal modeling. The user should thus not have to be concerned with this correction. To use the crude scalar correction factor beta=0.9 instead, run Fwd_fac with the -t option.

5. Incorporating TOPEX/POSEIDON Altimetry and other Tidal Data»

5.1. Collecting data with data_cooker »

With the computational domain set and the prior model calculated, one can proceed to define data sets for inversion and convert them into OTISoo format.

In OTISoo this is done by the code (not present in the original OTIS) called data_cooker. OTISoo supports omitting some constituents for certain kinds of data. For example, for the 2 groups of constituents m2,s2 and k1,o1, Topex/Poseidon data will be included in the assimilation for all 4 constituents, GRACE data for m2,s2 and o1, GFO data for s2,k1,o1, etc., depending on the particular kind of data. If the data have already been harmonically analyzed (such as GRACE or tide gauge data), then the specific set of allowed constituents is defined in the input data files. If the data are time series, then the set of constituents allowed for this particular kind of data should be defined in the data reading subroutine. The constituents for which a specific data set will be included in the assimilation are defined as the overlap of the desired and allowed constituents. All possible combinations of desired constituents are kept in the Constituent Dictionary (an array of derived type constituents). For the example above the Constituent Dictionary contains 15 combinations, i.e.: m2 s2 k1 o1; m2 s2 k1; m2 s2 o1; m2 k1 o1; s2 k1 o1; m2 s2; m2 k1; m2 o1; s2 k1; s2 o1; k1 o1; m2; s2; k1; o1. The first combination in the Constituent Dictionary always includes all modeled constituents. Thus each data vector can point to a line in the Constituent Dictionary defining for which constituents the data shall be assimilated.
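The dictionary construction can be sketched as follows (a Python illustration of the combinatorics only; the actual structure is a Fortran array of derived type constituents):

```python
# Every non-empty combination of the desired constituents, with the full
# set first: 15 entries for 4 constituents.
from itertools import combinations

def constituent_dictionary(desired):
    entries = []
    for k in range(len(desired), 0, -1):
        entries.extend(list(c) for c in combinations(desired, k))
    return entries

desired = ["m2", "s2", "k1", "o1"]
cdict = constituent_dictionary(desired)
print(len(cdict))  # 15
print(cdict[0])    # ['m2', 's2', 'k1', 'o1']

# A data set is assimilated for the overlap of desired and allowed
# constituents, looked up as a line of the dictionary:
allowed = ["m2", "s2", "o1"]               # e.g. the GRACE example above
overlap = [c for c in desired if c in allowed]
print(cdict.index(overlap))  # 2 -> the line ['m2', 's2', 'o1']
```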

data_cooker is an interface to the majority of OTISoo codes, in the sense that it converts input files into the internal formats used by all other codes, thus providing the integrity of the package - as opposed to the original OTIS, where the input files were read by almost every single code.


On input data_cooker takes:

- representer location file ../prm/lat_lon.rep - a 5-column ASCII file (optional, discussed below)

- bathymetry grid ../prm/grid

 - list of constituents to invert in ../prm/Constituents

 - data to assimilate (various supported data formats are discussed below)


Should the user want to change any of these input files, all inversion steps should be re-run, starting from data_cooker.


Note! data_cooker must always be run first, except for the codes making a prior model, i.e. mkFrv and Fwd_fac(_b) discussed above, and the Pathfinder data package (see below). These codes may be run before/without running data_cooker.


data_cooker outputs:

- data converted into datavec type in ../dat/data_type1.dat

- location dictionary in ../dat/data_loc.dat

 - Constituent Dictionary at ../dat/ConstDict.dat

 - repIndex(2,nrep),dataIndex(3,ndat) in ../dat/repIndex.dat, ../dat/dataIndex.dat,


where dataIndex(1,:) points to a data type; dataIndex(2,:) points to a location index in the location dictionary; dataIndex(3,:) points to a line in the Constituent Dictionary; repIndex(1,:) points to a representer type; repIndex(2,:) points to a representer ID (corresponding to the 3rd column of the ../prm/lat_lon.rep file, or internally assigned). repIndex(3,:), analogous to dataIndex(3,:), is omitted here, since it would always point to the first line of the Constituent Dictionary: representers are calculated for all desired constituents at each representer location in this release of OTISoo.

data_cooker can be run in "append" mode to combine various data sets in ../dat/data_type1.dat.

Currently 11 formats of input data files are supported. These data formats can easily be changed, or extended with other formats, by adding/modifying the specific reading/writing subroutines and extending the case statement in the data_cooker source file.

data_cooker usage:


        data_cooker [-D<data_file>] [-a] [-o<output_file>]


        [-t] [-u] [-U] [-r] [-T] [-I] [-G]




         -a - APPEND existing output files

              DEFAULT: replace output files

         -o - output converted data into <output_file>.

              Do not change dictionaries & indices.

              This is needed to SUBTRACT data only in mkB

              DEFAULT: ../dat/data_type1.dat


              DEFAULT: ../prm/grid

         -t - tide gauges data format

         -u - current meter data ASCII

         -U - ADCP ship-borne data format

         -T - Tomography av.vel. format

         -G - GRACE averaged ssh format

         -I - ICEsat 1 point x/o

         -r - CODAR radials BINARY

         -N - do not check data format

         If no specific data format option is given, the data format is detected automatically.


Making a list of desired representer locations »

The list of representer locations in ../prm/lat_lon.rep, optionally used by data_cooker, can specify locations in one of three ways:

In OTISoo a representer location can be a single point (defined by a single latitude and longitude), a line (defined by 2 latitudes and longitudes), or a rectangle (defined by 2 corner latitudes and longitudes).

For very large data sets it might not be practical (or necessary) to calculate a representer for every data point. For example, when using altimetry data from the PATHFINDER database described below, we have had good success using representers calculated at cross-over locations, plus selected additional points along ground tracks across shallow seas and coastal areas. Note that while the list of representer locations will generally be a subset of the data locations, this is not necessary: with the modified representer approach used here, the representer locations could be completely different from the data locations. We thus refer to the representer location file ../prm/lat_lon.rep. The file has a simple ASCII format: generally one line for each representer, giving latitude, longitude, an index, and unit vector coordinates. The index is assigned to a representer when the file is made (the indices are in increasing order, but some may be missing); a particular index thus identifies a representer location and is referred to below as rep_ID. The unit direction vector coordinates are needed only for representers of tidal current data; otherwise the values in the last two columns are set to 0. Velocity components of representers are calculated along with elevation components for all representers if any non-zero unit direction vectors are present in ../prm/lat_lon.rep. Each line of the file ../prm/lat_lon.rep is thus written as


<Lat> <Lon> < rep_ID> < q > < j >


For example:

! representer for elevation data
! representer for West - East velocity data
! representer for South - North velocity data
! representer for North-West vector velocity data
! Averaged over rectangular elevation, line 1
! Averaged over rectangular elevation, line 2
Note that two representers must be specified to compute representers for both components of vector velocity data (e.g. as measured by a current mooring; lines 2 and 3 of the example), and only one for velocity data measured in a single direction (e.g. a radial velocity measurement from CODAR; line 4 of the example). Representers for elevations averaged over a rectangular area (GRACE data) are also possible. In this case 2 lines define a single representer (lines 5 and 6 of the example), giving the coordinates of the rectangular box corners.
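For concreteness, a hypothetical lat_lon.rep fragment is shown below. The latitudes, longitudes and rep_IDs are illustrative only; the unit vectors follow the conventions stated above (0 0 for elevation, 1 0 for a West - East component, 0 1 for a South - North component, and a normalized direction vector for a single radial measurement):

```
-40.50 175.20 1  0.0000 0.0000   ! representer for elevation data
-40.50 175.20 2  1.0000 0.0000   ! representer for West - East velocity data
-40.50 175.20 3  0.0000 1.0000   ! representer for South - North velocity data
-41.00 176.00 4 -0.7071 0.7071   ! representer for North-West vector velocity data
-42.00 174.00 5  0.0000 0.0000   ! averaged over rectangular elevation, line 1
-41.00 175.00 5  0.0000 0.0000   ! averaged over rectangular elevation, line 2
```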

If you are using the PATHFINDER database for the inversion, it is simplest to use the program Lat_lon, described in the next section, to set up the representer location file.

Now we discuss how to prepare input data for the data_cooker.

5.2. The Altimetry Data Package »

The 2003 version of the PATHFINDER database (364 orbit cycles), with all corrections applied except for the ocean tides, is provided with OTISoo with the kind permission of the authors (Brian Beckley brianb@nemo.gsfc.nasa.gov, Richard Ray ray@nemo.gsfc.nasa.gov). OTISoo thus includes programs for extracting data from the PATHFINDER database and from "no tidal correction applied" versions of the Topex/Tandem, GFO and ERS altimetry databases (NOT provided with OTISoo). The PATHFINDER data files are located in OTISoo/DB/topex along with two precomputed Topex cross-over location files used by the data extraction software.

Run Lat_lon and Makedat from OTISoo/local/MyArea/exe any time after the ../prm/grid file has been made. Lat_lon makes two lists in the model domain: altimetry data locations and representer locations (the latter an optional input for data_cooker). This is needed because altimetry data sets can be large even for small local problems; in this case the reduced basis approach (discussed in [EBF 1994]), in which representers are calculated not at ALL data locations but at a subset of them, needs to be applied. The data and representer location lists may subsequently be modified using the MATLAB program repEdit. Makedat reads the data list and extracts the altimetry time series for these locations from an altimetry database. Further details for each program follow.


This program makes initial versions of two lists. The first (../prm/lat_lon.rep) is the list of latitudes and longitudes for which representers will be calculated. This simple ASCII file can also be made by other means, provided the format is correct. If the file already exists, Lat_lon either leaves it unchanged or replaces it. The second list (../dat/lat_lon.*, where * is one of: dat, dat2, ers, gfo) is the list of latitudes and longitudes of altimetry data to be incorporated into the inverse model. Note that this file is used by the code Makedat (see below) for making a binary file of data measured at the locations from the list. This file is not used for anything else in OTISoo (except repEdit.m, see below), and thus is not needed if you are not assimilating altimetry data. Using the latitude and longitude limits defined in ../prm/grid, Lat_lon reads cross-over and along-track altimetry data point locations from the altimetry database for the given rectangular area. By default all cross-over points in the area are chosen as representer sites, and all points (along track and cross-over) as data sites. Command line options allow some common variants on these defaults (e.g., adding some along-track sites to the representer list, or choosing a uniformly spaced subset of along-track points for the data list). Finally, note that Lat_lon just makes initial versions of the representer and data lists. These ASCII files can be modified with a text editor, or by using the graphical MATLAB editor repEdit.

Note! PATHFINDER latitudes are from -66.1491 to +66.1491, and PATHFINDER longitudes are from 0. to 360. Do not expect to find any data sites beyond these limits.

Note! Two longitude conventions are supported in the Pathfinder data package (PDP):

1.     0_________________180________________360

2. -180_________________0_________________180
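Converting between the two conventions is a one-liner in either direction (a sketch, not an OTISoo routine):

```python
def to_0_360(lon):
    """Convention 1: longitude in [0, 360)."""
    return lon % 360.0

def to_pm180(lon):
    """Convention 2: longitude in [-180, 180)."""
    return ((lon + 180.0) % 360.0) - 180.0
```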


Usage:     Lat_lon [-tp2] [-ers] [-gfo]

                [-r[<number>]] [-R[<number>]]

                [-A[<number>]] [-M <number>]

                [-d[<number>]]  [-D[<number>]] [-a[<number>]]



 default: Topex/Poseidon

    -tp2: Topex2

    -ers: ERS

    -gfo: GFO

    Options -tp2, -ers, -gfo are not supported

    with OTISoo public distribution

 NOTE: Cross-over options below are only supported for TP



 -r or -R - take cross-overs only

 -r<n>    - take cross-overs + each n-th site along track

 -R<n>    - take cross-overs + n (<=10) along track sites between cross-overs

 -A<n>    - take each n-th site along track

 -M<n>    - limit number of rep.sites by n

            amongst m sites defined by other options

            if m<n, m representers will be set



 -d or -D - take cross-overs only

 -d<n>    - take cross-overs + each n-th site along track

 -D<n>    - take cross-overs + n (<=10) along track sites between cross-overs

 -a<n>    - take each n-th site along track

 -H<D_m> -  take sites with depth<D_m

 -h<D_m> -  take sites with depth>D_m



 Take ALL cross-over locations as REPRESENTER SITES

          (number of representer sites is unlimited)

 Take ALL data locations as DATA SITES

 Data at REPRESENTER SITES are always included


 After Lat_lon is finished you can edit the lists

 with MATLAB repEdit



repEdit.m (MATLAB)

This MATLAB program allows the user to view and edit the representer and data site lists. Use of this program is optional; files made with Lat_lon can be edited with any text editor. However, this script makes it easier to edit the representer and data location lists, and also to view the data locations written by data_cooker in ../dat/data_type1.dat. The graphical editor is reasonably self-explanatory and has a simple help facility. The program makes a plot (on top of the local bathymetry) of all data and representer sites chosen by running Lat_lon. Using the mouse, locations can be deleted from or added to the representer site list. Altimetry data can also be edited by adding/deleting single sites or blocks of sites. Only those sites initially present in ../dat/lat_lon.* can be added.


Note! Only altimetry data can be edited with repEdit. Other kinds of data can NOT. They can only be viewed.


Upon completion of all editing, new versions of the data and representer list files overwrite the input list files when the "Save" button is pushed. Note that the program does not eliminate locations from the altimetry data list file, but rather just sets a flag indicating that the location is not to be used. The situation is different for the representer list: here unused representer locations are not kept in the representer list file. Because representer locations are assigned a permanent number when they are added to the list, it is possible to add new representer locations at any time, even after some representers have been calculated and a preliminary inverse solution has been computed. In particular, it is possible to add new representers in areas where altimetry residuals remain large after an initial inverse calculation. data_cooker MUST be re-run every time the representer list is changed. Inversion codes beyond data_cooker do NOT read ../prm/lat_lon.rep; they use the internal dictionary built from ../prm/lat_lon.rep by data_cooker.

repEdit also provides the possibility to assign representers at data sites other than those on TOPEX/Poseidon tracks. Even if no T/P altimetry data are used, and thus the code Lat_lon has not been run, one can still assign representer sites with repEdit for other data types (say, tide gauges) and view the data collected by data_cooker.

To run repEdit the following are required:



Further help for repEdit is provided in the upper left Menu.

repEdit screen 1
repEdit screen 2
Figure 3: MATLAB representer and data site editor display. Representers are shown in red, used altimetry data in yellow, and unused altimetry data in grey. Different representer types are allowed (elevation, averaged elevation, velocity, averaged velocity). In this example GRACE representers averaged over rectangles are shown as red frames. The Cartesian grid of the Weddell Sea (uniform in km rather than in latitude and longitude) is shown in the background in latitude/longitude coordinates. The left panel corresponds to the "lat_lon" mode of repEdit (switch in the upper right menu): only altimetry data are shown, and Topex altimetry data editing is allowed (the altimetry set open for editing is indicated in the lower right corner). The right panel shows the "dataLoc" mode: all data collected by data_cooker are shown (here TP and ICESat cross-overs, shown as black dots), and data editing is NOT allowed.



This FORTRAN program makes a direct-access binary altimetry data file. The file contains time series for each location in the lat_lon.* file which is flagged to be used. A Topex/Poseidon cross-over site time series is organized in 2 records:

lat lon L Ta ssha

lat lon L Td sshd



Along-track time series is organized in one record:

lat lon L T ssh

The same format is used for ascending and descending tracks, and all fields have the same meaning as for the cross-over records. The code Makedat thus reads the selected cross-over (Topex/Poseidon only) and along-track altimetry data locations from lat_lon.* and extracts the appropriate time series from an altimetry database to create an output file in the format described above. Subsequently we refer to this format as the "tpxbin" format. The output file is used as the input data file by mkB.



 Makedat [-o<output_file>] [-a]  [-tp2] [-ers] [-gfo]

  -a     - apply along track averaging using all available data

 -tp2,ers,gfo - use TP2, ERS, GFO data bases



           DEFAULT: T/P data base

 NOTE: No TP2/ERS/GFO data provided with OTISoo public distribution


 Output file: ../dat/tpxbin.dat for T/P

              ../dat/tpxbin2.dat for TP2

              ../dat/gfobin.dat for GFO

              ../dat/ersbin.dat for ERS

 NOTE: Lat_lon should be run first!

 NO along track averaging


 ACCEPT data with quality flags:  0  1  2  3  6  7  8  9 11 16 

 REJECT data with quality flags:  4  5 10 12&13 14 15  


 Quality flag meanings:

             0 -  good data

            1  -  not set so can read as unsigned integer

            2  -  poseidon measurement

            3  -  depth of ocean < 200 meters

            4  -  sigma H of linear fit to 10/sec data > 15 cm

            5  -  Significant Wave Height > 10 m or < 0 (K band)

            6  -  Significant Wave Height > 10 m or < 0 (C band)

            7  -  Sigma0 < 6 db or  > 27 db (K band)

            8  -  Sigma0 < 11 db or  > 30 db (C band)

            9  -  Attitude > .45 degrees or < 0

           10  - AGC > 40 db or < 28 db at latitudes > +/- 55 degrees (K band)

           11  - AGC > 32 db or < 22 db at latitudes > +/- 55 degrees (C band)

           12  -  Cross Track Distance > 1.0 km.

           13  -  Cross Track Slope > 10 cm/km

           14  -  observation failed edit criteria

           15  -  observation had undefined correction parameter

           16  -  not used


 Edit OTISoo/include/sat_altimt.h and recompile Makedat to change editing criteria defaults
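The default editing criteria amount to a membership test against the reject list (a hedged sketch; the jointly listed flags "12&13" are treated here as two individual flags, which is an assumption):

```python
REJECT_FLAGS = {4, 5, 10, 12, 13, 14, 15}

def keep_observation(flags):
    """True if none of the observation's quality flags is a reject flag."""
    return not (set(flags) & REJECT_FLAGS)
```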


Input: (output by repEdit and/or lat_lon):

binary data file in the "tpxbin" format in ../dat/ (default)

5.3. Other tidal data formats »

As outlined above, currently 11 formats of input data files are supported: 5 of them for satellite altimetry: T/P, TOPEX Tandem, GFO, ERS, and ICEsat cross-over differences (see previous section); the others are CODAR radial velocities, tide gauges, current meters, ship-borne ADCP velocities, tomography along-line averaged velocities, and GRACE averaged elevations with the prior subtracted. These data formats are defined by the particular data providers. We only discuss here the simple data formats which can be produced by regular users.

Tidal harmonic constants can be obtained from

  Tide gauges ASCII format:

<lat> <lon> <constituent> <amplitude_cm> <phase_degree> [<error>]

For example:

-33.62 -78.83 s2 10.8 92.9

-33.62 -78.83 k2 3.5 96.9

-54.50 159.00 q1 1.8 201.9

-54.50 159.00 o1 8.0 198.0

The <error> column is optional. By default the tide gauge error is taken from OTISoo/include/constants_f90.h.

Note that the phase convention must be kept in correspondence with the following conversion to complex amplitude and back:

complex_amplitude=amplitude*cexp(- cmplx(0.,phase))



I.e. standard Greenwich phases should be used when specifying amplitude and phase. Complex harmonic constants H assume a time dependence of exp{+iwt} (so the partial tide is given by h(t)=Re[H exp{+iwt}]).
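A sketch of this convention in Python (illustrative only; the function names are not from OTISoo, and phases are taken in degrees):

```python
import cmath
import math

def to_complex(amplitude, phase_deg):
    """H = A * exp(-i*G): standard Greenwich phase lag G."""
    return amplitude * cmath.exp(-1j * math.radians(phase_deg))

def to_amp_phase(H):
    """Recover amplitude and Greenwich phase (degrees) from H."""
    return abs(H), math.degrees(-cmath.phase(H)) % 360.0

def partial_tide(H, omega, t):
    """h(t) = Re[H exp(+i w t)] = A cos(w t - G)."""
    return (H * cmath.exp(1j * omega * t)).real
```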

Current meter ASCII format

Assumes vector current measurement in standard coordinates:

<lat> <lon> <constituent> <u_amplitude_cm/s> <u_phase_degrees> <v_amplitude_cm/s> <v_phase_degrees>[<error>]

where u is the West - East component and v the South - North component of the velocity vector. The same phase convention as for tide gauge data applies. The <error> column is optional. By default 1 cm/s, or the value given on the command line (-u option), is taken.

Current meter (CODAR) binary format

Because the number of CODAR data is comparatively large, we also support a direct-access binary format for the velocity data. Each data record has the form

<lat><lon><Vrad_real_m/s><Vrad_imag_m/s> < q > < j >

where < q > < j > are the unit vector coordinates in the direction of the measured radial velocity. Each item here is real*4, and thus the record length is 24 bytes. Note that this format can also be used for current meter data (one line for each component of the velocity vector). Thus two lines correspond to each current meter location:

<lat><lon><u_real_m/s><u_imag_m/s> 1.0 0.0

<lat><lon><v_real_m/s><v_imag_m/s> 0.0 1.0

A separate data file should be made for each constituent. Binary file names are hard-coded in version 3.1 of OTIS: ../dat/rad_m2.bin, ../dat/rad_s2.bin, etc.
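For illustration, one such 24-byte record can be packed with Python's struct module (little-endian real*4 is an assumption here; OTIS itself reads these files in Fortran):

```python
import struct

def codar_record(lat, lon, v_re, v_im, q, j):
    """Pack one direct-access record: six real*4 values = 24 bytes."""
    return struct.pack('<6f', lat, lon, v_re, v_im, q, j)

# A current meter contributes two records, one per velocity component:
rec_u = codar_record(-33.62, -78.83, 0.12, -0.05, 1.0, 0.0)
rec_v = codar_record(-33.62, -78.83, 0.07, 0.02, 0.0, 1.0)
```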

Ship-borne ADCP ASCII format

Each line corresponds to one measurement at one point at a time:

<lat> <lon> <time> <u_cm/s> <v_cm/s>

Time should be expressed in days relative to January 1, 1992, 00:00 GMT (corresponding to MJD 48622).
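The epoch arithmetic can be sketched with the standard library (naive datetimes assumed to be GMT; the MJD 48622 equivalence is as stated above):

```python
from datetime import datetime

EPOCH = datetime(1992, 1, 1)   # 00:00 GMT, i.e. MJD 48622

def adcp_time(when):
    """Days (floating point) since January 1, 1992, 00:00 GMT."""
    delta = when - EPOCH
    return delta.days + delta.seconds / 86400.0 + delta.microseconds / 86400.0e6
```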

 6. Calculating the "temporal data structure" matrix with mkB »

mkB takes the data_cooker output and implements the rest of the data processing: it interpolates the prior, correction, and SAL models (if appropriate) to the data locations for a certain set of constituents (defined by a line in the Constituent Dictionary), using sparse data functionals for the interpolation; subtracts the tidal predictions of the prior, correction, and SAL models; creates the dense matrices Ak that relate the tidal signal in the time series at a single point to the tidal constants at that point; performs an SVD of each temporal structure matrix; and reduces the size of the data (and the matrices Ak) to the number of real unknowns at each site (i.e., two times the number of constituents). The reduced data vectors and matrices are saved in ../dat/B.dat.

For each distinct data location, mkB computes a 2*NC x NC complex matrix Bk and a 2*NC dimensional real reduced data vector dk. In the absence of any noise, these satisfy dk=Bk hk, where hk is the 2*NC real vector of harmonic constants (real and imaginary parts) at data location k. mkB can also be used for harmonic analysis of the time series at each point. In this case the equation relating dk to hk is solved at each point, and hk is output instead of dk. Third, coupled with data_cooker (in this case no changes to the data_cooker output files are made), mkB can be used to subtract a tidal solution (e.g., the prior model or the final inverse solution) from the measured altimetry or other tidal data, and output the residuals.
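To illustrate the relation between a time series and its harmonic constants that mkB exploits, here is a one-constituent, pure-Python least-squares sketch (conceptual only; mkB builds the full multi-constituent matrices Ak and reduces them via SVD):

```python
import math

def harmonic_fit(times, values, omega):
    """Fit y(t) ~ a*cos(w t) + b*sin(w t): solve the 2x2 normal equations."""
    cc = sum(math.cos(omega * t) ** 2 for t in times)
    ss = sum(math.sin(omega * t) ** 2 for t in times)
    cs = sum(math.cos(omega * t) * math.sin(omega * t) for t in times)
    yc = sum(y * math.cos(omega * t) for t, y in zip(times, values))
    ys = sum(y * math.sin(omega * t) for t, y in zip(times, values))
    det = cc * ss - cs * cs
    return (ss * yc - cs * ys) / det, (cc * ys - cs * yc) / det
```

In the noiseless case the fit recovers (a, b) exactly, which is the one-constituent analogue of the relation dk = Bk hk.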

mkB options

Program mkB subtracts a number of corrections from the altimetry data before extracting the reduced data. All of these corrections are optional. First, mkB subtracts the specified prior model from the data, using the prior solution file specified in run_param or on the command line (-M option). Second, a "correction file" can be specified (again in run_param, or on the command line; -c or -C options). For example, an inverse solution might be sought for only one constituent, say M2. By using the best possible estimates of other large constituents as a correction to the data before extraction of M2, contamination of the desired signal can be minimized. These corrections also serve to make estimates of residuals in the altimetry data more consistent with typical detided data, no matter how many constituents are included in the inverse solution. Only constituents that are in the correction file but not in the file ../prm/Constituents are used for corrections. Third, long period tides (currently only as equilibrium tides) can be subtracted from the altimetry data if no long period tides are provided in the correction model. Finally, a correction for radial deformation load tides is applied to the altimetry data. This correction is taken from a global 8-constituent correction file ../../../DB/h_tpxo.load. The correction file and long period corrections make no sense for data that have already been harmonically analyzed, so these corrections only apply to the altimetry formats. Whether each of these corrections should be applied to a specific kind of data is determined earlier by data_cooker and saved in ../dat/data_type1.dat.


makeB [-D<data_file>] [-M<model_file>] [-C<model_file>]

       [-p<out_file>] [-L] [-h] [-H] [-1] [-i] [-d]




         -M<model_file> - Subtract prior model


         -C<model_file> -

             Correct for constituents taken from a <model_file>,

             excluding those also present in the prior and in ../prm/Constituents.

         -p<out_file> - only subtract prior model; put result into <out_file>

         -L - DO NOT subtract long period tide

         -h - Only do harmonic analysis & output h.dat

         -H - Do harmonic analysis and save B in HA format

         -i - infer minor constituents using perth2 by Richard Ray

Note! All model files used with the options -M, -c, -C have to be in the "h0.df.out" format described in section 4. The models can be on different grids. Thus for the correction model you can use any model which includes your grid area (for example, a global model such as TPXO*.*).

To run makeB in OTISoo/local/MyArea/exe/ in its "subtract only" mode, the shell script mkBd, located in OTISoo/scripts, may be used. This script combines data_cooker and mkB usage conveniently, letting the user get estimates of the RMS misfit.

mkBd usage:

mkBd -D<data_file> [-o<output_file>] [-M<model_file>]

                  defaults: model_file  - prior set in run_param

                                 output_file - dif

Residuals after subtracting the model from the data are output in the dif file, in the same format as the original data (one of the allowed formats).

The binary sequential file ../dat/B.dat output by mkB contains a header and a number of records corresponding to the number of data sites in the data file set on the command line.

(1) header

n_sites, nc, irecl

where n_sites - integer, number of data sites; nc - integer, number of constituents; irecl - integer, data site record length.

(2) a series of data site records. Each of these has the format

m_type, lat, lon, dummy, dp, sigma, B

where m_type - integer, measurement type (m_type=1 for elevation data, m_type=3 for current data);

lat, lon - real, data site latitude and longitude;

dummy(5) - real array reserved for future purposes; this version of OTIS already uses dummy(1) to save L, the time series length at the data site;

dp(2*nc) - real array, reduced data vector at the data site;

sigma(2*nc) - real array, vector of data variance estimates;

B(2*nc,nc) - complex matrix defining the temporal structure of the reduced data vector at the data location.

When B is saved in harmonically analyzed format (-H option), the standard record of the same size has the format

m_type, lat, lon, dummy, hhat, theta, B


hhat(2*nc) - real array of harmonic constants at the data site;

theta(2*nc) - real array, vector of harmonic constant variance estimates

When B is saved in "one point" format (-1 option), the standard record of the same size has the format

m_type, lat, lon, dummy, dp, err, B


dp - real <harmonic constant Re/Im>/<one point measurement> at the data site;

err - real <harmonic constant estimate>/<data> error

7. Setup for representer calculation »

With the prior (dynamics only) solution computed, construction of the inverse solution can begin with the representer calculation. Before representers can be calculated, three preliminary steps must be completed: creating the linearized drag tensor (optional; discussed in section 4; can be skipped if you use a constant friction velocity for calculating representers); setting up the dynamical error covariance file; and making a list of data locations to be used for representer calculations (optional; if not given, representers will be calculated at ALL data locations; this step was discussed in section 5).

Creating the dynamical error covariance scale file with Diffuse and Varest

Currently the dynamical error covariance used is as described in [EBF, 1994]. You can specify a single correlation length scale (constant in space and over constituents), and a spatially varying error variance for each constituent. The program Varest estimates the spatially varying dynamical error variance using the prior model for approximate amplitudes of elevations and currents, together with parameters which specify estimates of fractional errors in bathymetry, the drag parameterization, etc. The fractional error parameters, together with the decorrelation length scales for forcing and boundary condition errors, are specified in the file ../prm/cf_difvar. This file is input to the programs Diffuse (optional) and Varest, as are the data_cooker output files.

Program Diffuse finds the normalizing constants for the bell-shaped spatial covariance kernels and writes them out in ../prm/diff.unf. The normalization constants in diff.unf are then used, together with the variances calculated by Varest, to make the overall scaling part of the dynamical error covariance. This is saved in the output file ../prm/covsc_<constituents>, where <constituents> are all constituents from ../prm/Constituents written in a single line without blanks. This file is used by default by the representer calculation programs (repX and rlcX) as the dynamical error covariance file.

If you intend to use the exact normalizing constants calculated by Diffuse, you must run Diffuse before running Varest. However, Varest can also be run without running Diffuse first. In this case an approximate normalization scale (exact for the case of no land in the model domain) is used. The default normalization scale is computed as:

G0(i,j) = W(i,j)/(π r0²),

where W(i,j) is the area element for the (i,j) cell of the grid, and r0 is the correlation length scale defined in the file ../prm/cf_difvar. For most purposes the default scale is probably good enough, since the approximation is equivalent to changing details of the error covariance scales, which are in reality only approximately known at best.
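Evaluating the default scale is trivial (a sketch; W and r0 must be given in consistent units, e.g. m2 and m):

```python
import math

def default_norm_scale(cell_area, r0):
    """G0(i,j) = W(i,j) / (pi * r0**2): the no-land approximation
    used when Diffuse has not been run."""
    return cell_area / (math.pi * r0 ** 2)
```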

One portion of the dynamical errors is due to uncertainty in the bottom drag formulation. This uncertainty is calculated as a fixed percentage (specified by the user) of the estimated drag. To estimate the drag term in the momentum equations, Diffuse needs to know what friction velocity is used. By default the program uses the value u0 = 1 m/s. To be completely consistent, one should use the same friction velocity as was used for the calculation of the prior model; this is automatically maintained by the control file run_param. By running Varest with the -u0 option, a spatially varying friction velocity computed by mkFrv can be used.


Varest [-N] [-f] [-B[<freq>]] [-u<value>] [-d] [-c] [-m<factor>]

         -f - USE output by diffuse

         -B<basic Brunt-Vaisala freq>, if -B, freq=5.24e-3

         -z<Brunt-Vaisala e-folding scale>, if -z scale=1300

         -u<friction velocity (m/s)>

         -N - normal flow OBC

         -d - ADD discretization term

         -c - set interior and RB covariance to

              its averages

         -m<factor> - multiply covariance by <factor>


 1. Simple defaults set as normalizing scales

 2. Constant friction velocity value 1m/s is used

 3. Elevation OBC

 4. NO discretization term


Set -u0, if you want to use output by mkFrv from ../prm/uv.vel.unf as friction velocity field
Set -f if you want to use output by Diffuse

data_cooker MUST be run first.
If the -f option is used, then Diffuse must be run first; otherwise Diffuse is optional.
If the -u0 option is used, mkFrv must be run first.
The files ../out/h0.df.out and ../out/u0.df.out will be taken as a prior by default unless different names are set in the control file run_param.

Inter-constituent correlation matrix

OTISoo implements inter-constituent cross-correlation for assimilating data with entangled tidal harmonics. Constituent groups are defined in ../prm/Constituents (constituents of the same group are written on a single line). The inter-constituent matrix is defined by an ASCII file provided in OTISoo/DB/ccor. This file contains the complex inter-constituent correlation matrix, calculated using dynamical residuals from TPXO7.2. Information from this file is read by the corresponding OTISoo codes.

8. Calculating representers with repX(_b) »

Program repX(_b) calculates the representers by factoring the matrix of the wave equation coefficients, as done by Fwd_fac, and also produces or uses (if they already exist) the intermediate factored matrices (or their blocks) in ../dat. See the notes on this in section 4.


 repX    [   -#<nrep1>-<nrep2>


             -w (use white noise covariance)

             -u<num> (set friction velocity to num)

             -v<num> (scale open bc variance)

             -b<num> (scale rigid bc variance)

             -i<num> (scale interior variance)

             -n      (normal flow OBC) ]

             -U - calculate and save VELOCITY

                  REPRESENTERS (by default this function is OFF)

             -O smooth OB with ../prm/OBcov

             -z NON zero W.Eq. covariance


 DEFAULTS: ALL representers listed in

          ../dat/repIndex.dat are calculated

          run for all groups of constituents

           ob_var_sc = 1;

           interior_var_sc =1;rb_var_sc = 1.

           Friction velocity u=1m/s

           Run repx -u0 if you want to use output by mkFrv as a friction velocity field

           Elevation OBC

where the integer numbers nrep1 and nrep2 define the range of representers, enumerated as in lat_lon.rep, to be calculated during the run. Note that the numbering of representers refers to the rep_ID assigned in the master representer list in ../prm/lat_lon.rep or internally by data_cooker. By default all representers from the list are calculated. By default only the elevation representer component is calculated; both elevation and velocity representer components can be calculated by setting the -U option. See [EE2002] for definitions of both components of the representer.

By default the constant friction velocity of 1 m/s is used by repX for the linearized drag tensor computation. To change the default, set the friction velocity value in run_param or use the -u<value> option, as for Fwd_fac. To use the spatially varying friction velocity fields (output by mkFrv), use the -u0 option. As always, options set in run_param override the standard program defaults.

Note that repX is the most time and memory consuming part of OTIS. Until you are familiar with run times for your computing environment, it is probably best to try a run for a small number of sites (repX -#1-10) to get an estimate of the computational time and RAM. If the RAM limit set in run_param is not enough, you will need to use the out-of-core block-factorization version of repX, namely repX_b. See the comments on the usage of blocking codes at the beginning of section 4.

Output files containing the calculated representers for each data site are written to ../repx<group_id>/, where <group_id> is a constituent group number. Make soft links from the directory if you want to change the destination for representer output. Elevation representer files are named rpx.0001, rpx.0002, etc. Velocity representers are named ruv.0001, ruv.0002, etc. Again, the numbers refer to the number of the location in the ../prm/lat_lon.rep file. You may calculate up to 9999 representers without changing this file naming convention. To view representers use the MATLAB script rpxPlot.m for elevation representers or ruvPlot.m for velocity representers.

For uncorrelated constituents (one constituent per group, i.e. per line of ../prm/Constituents) a representer consists of one block, corresponding to that constituent. For correlated constituents a representer consists of nlp*nlp blocks (where nlp is the number of constituents in the group). For example, if the group has nlp=2 constituents m2 and s2, then the representer contains nlp*nlp=4 blocks: Rm2, Rm2s2, Rs2m2, Rs2s2. Each of the blocks can also be viewed with the MATLAB scripts rpxPlot.m and ruvPlot.m.
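The block bookkeeping above can be made concrete with a tiny sketch (illustrative only, not part of OTISoo; block labels here spell out the full constituent pair for clarity):

```python
# Illustrative sketch: enumerate the nlp*nlp representer blocks
# for one constituent group, in the order ci (row constituent)
# outer, cj (column constituent) inner.
def representer_blocks(group):
    """Return block labels R<ci><cj> for a constituent group."""
    return [f"R{ci}{cj}" for ci in group for cj in group]

# Uncorrelated case: one constituent per group -> a single block.
print(representer_blocks(["m2"]))        # ['Rm2m2']
# Correlated case: nlp=2 constituents -> nlp*nlp = 4 blocks.
print(representer_blocks(["m2", "s2"]))  # ['Rm2m2', 'Rm2s2', 'Rs2m2', 'Rs2s2']
```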

Note! The LAPACK routines zgbtrf and zgbtrs are used for factoring the linearized SWE matrix and solving the linear SWE system; see the comments in section 1.

9. Calculating the spatial representer matrix with rpx_to_rp »

The program rpx_to_rp makes the Hermitian representer matrix R, corresponding to harmonically analyzed data at the representer sites (i.e., the elements of R are elevation or velocity representers evaluated at each representer site), and the matrix P (representers for harmonically analyzed data evaluated at all data locations). The calculation is controlled by the data_cooker output. The matrices P and R, together with the matrix B (section 6), are used for the matrix computations needed to find the representer coefficients. Note that you have to run rpx_to_rp twice, in different modes, to get both P and R. Of course the representers have to be calculated (by repX) and placed in OTISoo/local/MyArea/repx<group_id>, as described in section 8, before this program can be run.

By default rpx_to_rp makes the rectangular matrix P and puts it in ../dat/P<group_id>.dat. Rows of P correspond to the data sites with latitudes and longitudes read from ../dat/B.dat (output by mkB). Columns of P correspond to representer sites.


   rpx_to_rp [-r] [-n<n_blk>] [-g<igrp>]


  -r      - optional. Make square matrix R - rows and columns correspond to the

             included representer sites (repLoc.dat).

             DEFAULT: Make rectangular matrix P - rows correspond to data sites with

              lats and lons from ../dat/dataLoc.dat,  columns - from ../dat/repLoc.dat.

  -g<igrp>  - constituent group in ../prm/Constituents, default: 1

  -n<n_blk> - number of blocks, to calculate P (R) by blocks if needed. DEFAULT: n_blk=1


               P and R are put into ../dat/P<igrp>.dat and dat/R<igrp>.dat

               (i.e. P1.dat, R1.dat, P2.dat, R2.dat etc.)


The output binary files ../dat/P*.dat and ../dat/R*.dat contain a header and a series of fixed-length records. They are written in double precision.

(1) header

n_sites, nreps, nl, nlp

where n_sites is an integer, the number of data sites; nreps an integer, the number of representers; nl an integer, the number of constituent groups (obsolete, now always 1, since a separate file contains the matrix for one group of constituents); nlp an integer, the number of constituents per group.

(2) A series of fixed-length records. Each of the nreps standard records of 16*nlp*nlp*n_sites bytes corresponds to nlp columns of P separated by nreps other columns. For example, when nlp=2 and the constituents in the group are m2 and s2, the matrix P has the structure:


         | Pm2    Pm2s2 |
     P = |              |
         | Ps2m2  Ps2s2 |

Each of the 4 blocks of the matrix P has n_sites rows and nreps columns, i.e.

 Pm2 = { Pm2(i,j), i=1:n_sites, j=1:nreps }

The j-th record then corresponds to: Pm2(:,j), Ps2m2(:,j), Pm2s2(:,j), Ps2s2(:,j).

The files ../dat/R<igrp>.dat have a similar structure.
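The layout above can be sketched with a short reader (a hedged sketch, not OTISoo code: it assumes a raw byte stream with a 4-integer header followed by complex double precision records in native endianness; the actual files may carry Fortran unformatted record markers, in which case the offsets differ):

```python
# Hedged sketch of reading a P<igrp>.dat-style file as described above.
# Assumptions: 4-byte integers in the header, 16-byte complex values,
# no Fortran record markers, native byte order.
import numpy as np

def read_P(path):
    with open(path, "rb") as f:
        n_sites, nreps, nl, nlp = np.fromfile(f, dtype=np.int32, count=4)
        # nreps records, each holding nlp*nlp columns of length n_sites
        # (16*nlp*nlp*n_sites bytes per record).
        vals = np.fromfile(f, dtype=np.complex128,
                           count=nreps * nlp * nlp * n_sites)
    header = (int(n_sites), int(nreps), int(nl), int(nlp))
    return header, vals.reshape(nreps, nlp * nlp, n_sites)
```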

10. Calculating representer coefficients with reduce_B »

The next step is to calculate the representer coefficients used to form the final inverse solution. This is accomplished with reduce_B, which calculates the representer coefficients as described in [EBF1994]. reduce_B merges three versions of the code from the old OTIS3.2 (reduce_b, reduce_a, reduce_1). A blocked algorithm is used: by doing the matrix manipulations on smaller blocks of the large matrices, very large problems with many data and/or representers can be solved. If the maximum available RAM is properly set in run_param, reduce_B will automatically warn you if blocking needs to be used, or if a greater number of blocks should be used to fit the matrix calculations into the available memory. By default no blocking is used (that is, the number of blocks is one), but this can be changed in run_param or on the command line with the -n<number_of_blocks> option. For modest-sized problems (or on machines with a few Gb of memory) you should not need to block. Do not confuse the number of blocks for reduce_B with the number of blocks for out-of-core factorization (used by the *_b codes; see the notes on this in section 4).


../dat/P<igrp>.dat, ../dat/R<igrp>.dat (output by rpx_to_rp)
../dat/B.dat (../dat/B1.dat) (output by mkB)
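The general idea of the blocked algorithm can be illustrated with a generic sketch (this is not reduce_B's actual algorithm, just the principle of processing a large matrix product one block at a time so only a fraction of it is resident in memory):

```python
# Generic illustration of block processing: compute C = A @ B one
# column-block of B at a time.  With n_blk blocks, only about 1/n_blk
# of B's columns need to be held (or read from disk) at once.
import numpy as np

def blocked_matmul(A, B, n_blk):
    n = B.shape[1]
    blk = (n + n_blk - 1) // n_blk            # columns per block
    C = np.empty((A.shape[0], n), dtype=np.result_type(A, B))
    for j0 in range(0, n, blk):
        # in an out-of-core code, this block would be read from disk here
        C[:, j0:j0 + blk] = A @ B[:, j0:j0 + blk]
    return C
```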



The principal output of reduce_B consists of ASCII files named bhat<igrp>.dat in ../dat, containing the representer coefficients, and also some (possibly very large) matrices and vectors, all saved in ../dat: E<igrp>.dat, W<igrp>.dat, UT<igrp>.dat, dp<igrp>.dat, S<igrp>.dat, G<igrp>.dat (WW<igrp>.dat). The matrices and vectors are saved in double precision. <igrp> is the constituent group index; thus if 2 groups of constituents are defined in ../prm/Constituents, 2 sets of matrices will be created, i.e. ../dat/bhat1.dat, ../dat/bhat2.dat, ../dat/E1.dat, ../dat/E2.dat, etc.

These output matrices are used by the MATLAB code invert.m to display data fit and residuals (residuals are displayed for altimetry data only), and also for posterior error calculations (see section 12). invert.m is a MATLAB script which provides a graphical interface for looking at the inversion results and, possibly, making some adjustments. In particular, invert allows you to vary the assumed data error scale (referred to as sig_e in run_param; this is essentially a "damping parameter"). Varying this parameter allows you to explore the tradeoff between fitting the data and the dynamics in a simple way, in particular by plotting a tradeoff curve. If you change the value, there is an option to recalculate the representer coefficient vector and rewrite the file ../dat/bhat<igrp>.dat output previously by reduce_B. invert has a built-in help facility and is largely self-explanatory. This MATLAB script has to be run from OTISoo/local/MyArea/exe after reduce_B, so that it can find all the necessary files.


In addition to the blocking options discussed above, the primary option to be aware of is the truncation level. The inverse solution is constructed as a sum of the prior model and a linear combination of the representers. With the reduced basis approach used here, we compute only a fraction of the representers, then optimize the penalty functional over solutions formed from this reduced basis. Instead of using all representers for the reduced basis, one can first do an eigenvector decomposition of the Hermitian square representer matrix R (i.e., representers evaluated at representer sites), and use only the representers corresponding to the dominant modes of R. The number of modes retained is referred to as the truncation level. By default this is set to the number of representers, i.e., no truncation. This can be modified in run_param or on the command line.
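The truncation idea can be sketched as follows (an illustrative numpy sketch of retaining the dominant eigenmodes of the Hermitian matrix R, not reduce_B's actual implementation):

```python
# Illustrative sketch of reduced-basis truncation: keep only the
# `trunc` dominant eigenmodes of the Hermitian representer matrix R.
import numpy as np

def truncate_modes(R, trunc):
    # eigh returns eigenvalues in ascending order for Hermitian input
    w, V = np.linalg.eigh(R)
    idx = np.argsort(w)[::-1][:trunc]   # indices of the dominant modes
    return w[idx], V[:, idx]            # retained eigenvalues/eigenvectors
```

With trunc equal to the number of representers this keeps every mode, i.e. no truncation, matching the program default.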

Another option to be aware of is the damping parameter. This is used to scale the assumed data error covariance. When using altimetry data the data error covariance for each data location is estimated by mkB using the variance of the altimetry residuals obtained after making all corrections and fitting the modeled constituents. By increasing sig_e above 1 (e.g., with the option -s10) you increase the assumed data error variance and decrease fit to the data. Since the inverse solution only depends on the ratio of data error variance to dynamical error variance, increasing the data errors by a factor f is equivalent (in terms of the solution, but not in terms of computed error statistics) to decreasing the dynamical errors by a factor 1/f. It is probably more common that the dynamical error scales are misestimated. It is thus best to view the damping parameter as a way to vary the relative fit to data and dynamics, not just to adjust the data error variance.
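The stated equivalence can be checked numerically. The sketch below assumes, per [EBF1994], that the data-space fit has the form R(R + f*Sigma_e)^-1 d, where Sigma_e is the data error covariance and f the damping factor applied to the error variances; the matrices here are random stand-ins:

```python
# Numerical check (illustrative): inflating the data error covariance
# by a factor f gives the same fit as deflating the dynamical error
# covariance (and hence R) by 1/f.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
R = A @ A.T + 4 * np.eye(4)               # Hermitian positive definite "R"
Sig = np.diag(rng.uniform(0.5, 2.0, 4))   # assumed data error covariance
d = rng.standard_normal(4)
f = 10.0

fit1 = R @ np.linalg.solve(R + f * Sig, d)        # inflate data errors by f
fit2 = (R / f) @ np.linalg.solve(R / f + Sig, d)  # deflate dynamics by 1/f
assert np.allclose(fit1, fit2)
```

Note that, as the text says, this equivalence holds for the solution only; the computed error statistics do differ between the two scalings.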

Finally, there are some options which affect which matrices are computed and output, and whether a full or partial calculation is done. These include the -w option, which results in the output of the full SVD of the matrix BP. This option must be used before posterior error calculations can be done; however, the calculation takes considerable computer time for large problems and is not required for computation of the inverse solution. If the -w option is used (i.e., the matrix WW.dat is made and output), the inverse calculation can be repeated more efficiently with the -B option. This is only useful if small changes are made with the same data set (e.g., using the same altimetry data, but a different prior model). A more useful option is -b coupled with -s<value>, which allows a very quick recomputation of the inverse solution with a different value of the damping parameter. Other options are useful for limiting disk usage by reduce_B.



 reduce_B [-b] [-B] [-w] [-s<sig_e>] [-t<trunc>] [-n<n_blk>]


     -b - recalculate bhat.dat ONLY

     -B - recalculate bhat.dat ONLY, using

          dp=w*W'*d (W.dat,WW.dat & B.dat)

     -w - replace G.dat with WW.dat

     -s<sig_e> - set damping parameter to <sig_e>

     -t<trunc> - set truncation parameter to <trunc>

     -T - truncate ONLY WW.dat (last step)

     -n<n_blk> - set QR-block number to <n_blk>


  DEFAULTS: calculate ALL matrices

            do NOT replace G.dat with WW.dat

            sig_e=1; trunc=nreps; n_blk=nc

            TRUNCATE starting from making E

 NOTE: All matrices saved in DOUBLE precision

11. Computation of the Inverse Solution »

There are two alternative ways to get the inverse solution in OTISoo. The first, RAM-saving, way is to calculate a weighted sum of representers with Sumreps; the second is to force the adjoint and forward systems with forcing built from the calculated vector of representer coefficients saved in the files ../dat/bhat<igrp>.dat, using rlcX. Both can be used in various cases; however, Sumreps cannot be used to get transports/currents when only the elevation components of the representers were calculated. If both elevation and current components of the representers were calculated, then Sumreps is the faster and more memory-efficient way to get an inverse solution. For example, to get the full inverse solution (that is, elevations and transports), the following situations are possible:

  1. only elevation components of representers were calculated - use rlcX;
  2. elevation and velocity components of representers were calculated - use Sumreps -u

Further comments on each of the codes are given below.


The program Sumreps constructs the inverse tidal solution using the representers from OTISoo/local/MyArea/repx<igrp>/, the representer coefficients from ../dat/bhat<igrp>.dat, and the prior model. Sumreps works by directly adding up the calculated representers; it produces only the elevation component of the inverse solution if the velocity components of the representers were not calculated. If the velocity components of the representers were calculated, Sumreps produces the full inverse solution, including transports.
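What "directly adding up the calculated representers" amounts to is the weighted sum: inverse field = prior field + sum over sites of (coefficient times representer field). A minimal numpy sketch of this step (illustrative, not the actual Sumreps code):

```python
# Illustrative sketch of the Sumreps summation step.
import numpy as np

def sum_representers(h_prior, bhat, reps):
    """h_prior: (ny, nx) complex prior field;
    bhat: (nreps,) representer coefficients;
    reps: (nreps, ny, nx) representer fields.
    Returns the inverse-solution field."""
    return h_prior + np.tensordot(bhat, reps, axes=1)
```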


 Sumreps [-M[<file>]] [-u] [-NPG] [-o<file>] [-g<igrp>]


 -M<prior elevation file>, if -M then no prior

 -o<output elevation file>, default:../out/hf.Sr.out

 -u       - sum velocity representers & prior transports

 -NPG     - rotated coordinates

 -g<igrp> - sum for group <igrp> only

The program uses h0.df.out (u0.df.out) in ../out/ as the prior model by default, but this can be changed using the -M option or by editing the control file run_param. The -M option with no file name means no prior should be added. To produce the transport component of the inverse solution use the -u option; only the elevation component is produced by default.

The final inverse solution for tidal elevations is written to hf.Sr.out in ../out, and for tidal transports to uv.Sr.out in ../out. The format is the same as for the solution files output by the forward modeling programs (e.g., h0.df.out and u0.df.out; see section 4). To view the solution you can again use the MATLAB script hPlot.m.


To obtain the full inverse solution (including the transport fields) in the case when no velocity components of the representers were calculated, or when constituents are correlated, the program rlcX must be used. This program essentially recomputes the linear combination of representers by forcing the adjoint system with the appropriate linear combination of delta functions, smoothing with the dynamical error covariance, then solving the forward system. The result is added to the prior solution. Representer coefficients are obtained from ../dat/bhat<igrp>.dat; the dynamical error covariance is defined by the covsc_<constituents> file in ../prm/; the prior tidal elevations and transports are h0.df.out and u0.df.out from ../out/ by default, or as set in run_param. The prior solution file names can also be changed from the command line or omitted (a corresponding blank line in run_param, or the -M option with no file name). Prior solution files have to be in the same standard format as h0.df.out and u0.df.out, as described in section 4.

The LAPACK routines zgbtrf and zgbtrs are used by rlcX; see the comments on LAPACK in section 1. The same default value for the friction velocity is used (1 m/s). To change the default, use the -u<value> option, or -u0 if you want to use the spatially varying friction velocity fields (output by mkFrv). You have to maintain consistency between repX and rlcX, or you will not get the correct inverse solution. For example, if you set -u0 when running repX, you must set -u0 when running rlcX, and you must use the same friction velocity file. It is thus strongly recommended that you set the friction velocity option in the control file run_param from the very beginning (see section 3) to maintain consistency. Each time you change the value, you need to run the script clean_Sfac in OTISoo/scripts. A simple check can be done: hf.Sr.out (output by Sumreps) and hf.rlcX.out (output by rlcX) must be essentially the same (within computational error) if you have run repX(_b) and rlcX(_b) consistently, and if numerical problems are not serious.
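A minimal metric for this consistency check, assuming the two elevation fields have already been loaded into complex arrays of the same shape (reading the section-4 file format is omitted here):

```python
# Illustrative check: relative RMS difference between the Sumreps and
# rlcX elevation fields.  A value near machine/solver precision
# indicates the two programs were run consistently.
import numpy as np

def rel_rms_diff(h_sumreps, h_rlcx):
    """Relative RMS difference between two complex elevation fields."""
    num = np.sqrt(np.mean(np.abs(h_sumreps - h_rlcx) ** 2))
    den = np.sqrt(np.mean(np.abs(h_rlcx) ** 2))
    return num / den
```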

The program rlcX(_b) outputs the files hf.rlcX.out (hf.rcbX.out) (tidal elevations) and uv.rlcX.out (uv.rcbX.out) (tidal transports), in the standard formats described in section 4, in ../out/. To view the solution run hPlot.m and uvPlot.m from MATLAB. Note that rlcX(_b) is also used for step 4 of the posterior error calculation described in the next section.

12. Posterior Error Calculation »

OTISoo provides some tools for calculating posterior errors using Monte-Carlo simulation. Note that the error bars will only be reasonable if both the data and dynamical errors are reasonable to begin with. The five steps of the posterior error calculation described below are done with the OTISoo codes Sml_df(_b), Scfac, bhat_samp and rlcX(_b). These programs produce outputs saved in separate files for each single constituent in ../prm/Constituents; the output file names contain the constituent name.

The five steps are:
1. Sml_df(_b) - For a number of realizations, generate random forcing with the assumed dynamical error covariance, and solve the forward problem for this forcing. The SINGLE constituent solutions are written to the files h001.smlX_<constituent>.out and uv001.smlX_<constituent>.out, h002.smlX_<constituent>.out and uv002.smlX_<constituent>.out, etc., in ../out/. (Use the MATLAB scripts hPlot.m and uvPlot.m to view the random error realizations.)


 Sml_df(_b) [-c<covariance file name>]

             -g<constituent group number for this run>

             -u<num> (set friction velocity to num)

             -v<num> (scale open bc variance)

             -b<num> (scale rigid bc variance)

             -w (use white noise covariance)

             -i<num> (scale interior variance)

             -n      (normal flow OBC)

             -s - use SPECIAL set of covariance

                  parameters from run_param

             -O smooth OB with ../prm/OBcov

              file ../prm/obloc.dat is also needed



 DEFAULTS: Covariance file names: ../prm/covsc_<constituent>

           ob_var_sc = 1;

           interior_var_sc = 1; rb_var_sc = 1.

           Friction velocity u=1m/s

           Run repx -u0 if you want to use output by

           mkFrv as the friction velocity field

           Elevation OBC

           Use the SAME covariance parameters set

           as for repX & rlcX

The dynamical error covariance file covsc_<constituents> in ../prm/ (output by Varest; it is also used by repX and rlcX) is also required for this step. A separate set of covariance parameters for Sml_df exists in run_param. It makes it possible to adjust the synthetic solutions when the assumed dynamical error is not consistent with the magnitude of the model change for the actual inverse solution. The same set of parameters as for repX and rlcX is used by default. To use the special set of parameters, use the -s option:

Sml_df(_b) -s
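The Monte-Carlo idea behind this step (drawing random realizations with a prescribed covariance) can be sketched generically; this is not the Sml_df code, just the standard trick of colouring white noise with a Cholesky factor of the covariance:

```python
# Generic sketch: draw random error realizations with a prescribed
# covariance C by colouring white noise, C = L L^T.
import numpy as np

def draw_realizations(C, n_rlz, rng=None):
    rng = rng or np.random.default_rng()
    L = np.linalg.cholesky(C)                     # C = L @ L.T
    z = rng.standard_normal((C.shape[0], n_rlz))  # white noise
    return L @ z                                  # columns are realizations
```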

2. Scfac - This program estimates a scaling factor for the assumed dynamical error which makes the overall (spatially averaged) variance of the synthetic realizations consistent with the magnitude of the model change for the actual inverse solution. A similar scaling factor is computed to make the synthetic data error consistent with the inverse solution residuals. If both the data and dynamical error magnitudes are assumed correctly, then the scaling factors should be near one. Scfac computes the scaling factors using the difference between the prior solution and the inverse solution.


 Scfac [-P<prior_model>] [-I<inverse_model>]


  Prior: h0.df.out or as set in run_param

  Inverse: hf.rlcX.out

The synthetic error solutions obtained with Sml_df(_b) need to be multiplied by the scaling factors to make the magnitude for each tidal constituent consistent with the actual data. The scaling factors are written to the ASCII files ../prm/pe_scales_<constituent>, one file per constituent. The first column in the file contains the scaling factors for the assumed data errors (DSF, the "Data Scaling Factors"); the second column contains the scaling factors for the synthetic solutions (SSF, the "Synthetic solution Scaling Factors"). If the values of the factors are far from one, then the error parameters were chosen incorrectly (see ../prm/cf_difvar), and it is recommended to start the inversion over with different covariance parameters. Factored matrices or their blocks saved at previous steps do not need to be cleaned when changing only the covariance parameters; they can still be used. If DSF and SSF are close to one, then one can proceed with the following steps.

Note! The realization files produced by Sml_df(_b) are not changed at this step or at the following steps.
3. bhat_samp - For each realization this program:

4. rlcX(_b) - Run this program in the posterior error calculation mode (set the last parameter to 2 in the control file run_param, or run rlcX -m2) to get synthetic inverse solutions (elevations and transports) for each realization. The results are the differences between the inverse solutions computed from the synthetic data and the actual synthetic solutions scaled with the SSF. If the file ../prm/pe_scales_<constituent> is not found, the corresponding SSF is set to one. The results are written to the files hf001_<constituent>.rlcX.out, uv001_<constituent>.rlcX.out, hf002_<constituent>.rlcX.out, uv002_<constituent>.rlcX.out, etc., in ../out/. These can also be viewed with the MATLAB scripts hPlot.m and uvPlot.m.
5. Using the output files you can calculate posterior error variances and covariances for any set of functionals of the tidal fields. A simple and generally useful example of code for calculating such statistics is located in OTISoo/src/stats_rlz.f90. This code computes pointwise posterior error variances for elevations and transports, writing the results in the standard "h0.df.out" format (except that the values are real, not complex). The file names are er_hf.<constituent>.out and er_uf.<constituent>.out. Use the MATLAB script hPlot.m to view the output of stats_rlz.
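The statistic computed by stats_rlz.f90 can be sketched in a few lines (a Python sketch of the pointwise-variance idea, assuming the error realizations are zero-mean complex fields already loaded into an array; the real program works on the OTISoo output files directly):

```python
# Illustrative sketch of a pointwise posterior error variance estimate
# from Monte-Carlo error realizations (assumed zero-mean).
import numpy as np

def pointwise_error_variance(realizations):
    """realizations: (n_rlz, ny, nx) complex error fields.
    Returns the real (ny, nx) field of per-point variance estimates."""
    return np.mean(np.abs(realizations) ** 2, axis=0)
```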

13. OTISoo setup summary »

This section gives a short summary of the steps needed to set up OTISoo:

If the codes are compiled successfully, you may use OTISoo.

14. Inverse problem solution summary »

This section offers a simple summary of the steps needed to set up and solve a tidal inverse problem. Perform the following steps to get your inverse solution:

Here we focus mostly on default usage; the run_param file content corresponds to the default usage. Only the main inputs and outputs are given in the table. For more details on each item see the corresponding sections; a section number and a short summary are given in the Comments column. The rows corresponding to optional steps are shaded. <constit> indicates a single constituent (i.e., such a file is made for each constituent). <igrp> indicates a single group, i.e. igrp=1,ngrp. [] means the input item is optional.


 1. mk_grid
    Input: GEBCO bathymetry data base in OTISoo/DB/GridOne.grd
    Section 2; Makes a bathymetry grid in the standard format including OB nodes and masking off small "lakes".

 2. gridEdit
    Section 2; Edits grid, constituents and makes open boundary files.

 3. Section 2; Makes open boundary files - run this if gridEdit was not used.

 4. Fwd_fac(_b)
    Section 4; Gets a prior solution by direct factorization; use hPlot.m and uvPlot.m to view. Stop here if only a pure dynamical prior is desired.

 5. Section 5; Makes a version of the representer master list and the altimetry data location list.

 6. repEdit
    Section 5; Use repEdit to view/edit the representer and altimetry data sites.

 7. Section 5; Makes the binary altimetry data file for a given region.

 8. data_cooker
    Input: [../dat/tpxbin.dat] and/or other tidal data
    Translates constituents, representer list and data into the internal format, read further by every inversion code. Can be run repeatedly to append multiple data sets; see section 5.3 on other tidal data formats.

 9. mkB
    Section 6; Makes the temporal data structure matrix.

10. Varest
    Section 7; Creates the dynamical error covariance scale file (a constant friction velocity of 1 m/s is used for the linearized drag).

11. repX(_b)
    Section 8; Calculates representers at the locations set in ../prm/lat_lon.rep.

12. rpx_to_rp, rpx_to_rp -r
    Section 9; Makes the matrices P (representers evaluated at data locations) and R (representers evaluated at representer locations).

13. reduce_B
    Section 10; Calculates the representer coefficients and some auxiliary matrices.

14. invert.m
    Output: ../dat/bhat<igrp>.dat
    Section 10; Makes the tradeoff curve to adjust the damping parameter; recalculates the representer coefficients with the new value of the damping parameter; estimates and plots data fit and residuals.

15. rlcX(_b)
    Output: ../out/hf.rlcX.out, ../out/uv.rlcX.out
    Section 11; Makes the inverse elevation and transport solutions, using bhat.dat, by forming combined delta-function forcing at the representer locations.

This sequence of steps has been applied to a small rectangular area, 28N-41N, 117E-129.5E (the Yellow Sea), to form a simple test case. All input and output for the test case are saved in OTISoo/local/YS. The files *.txt in OTISoo/local/YS/exe are the control output of the corresponding codes; for example, mk_grid.txt is the control output of mk_grid. The files *.jpg are the graphical output of the corresponding MATLAB codes; for example, gridEdit.jpg is the graphical output of gridEdit.

If a file is shown as both input and output in the table, the file is rewritten by the program. For example, when running repEdit the first time we added some representers (the initial ../prm/lat_lon.rep is copied to ../prm/lat_lon.rep_ini for users to compare), and thus the updated file ../prm/lat_lon.rep was produced (a copy is saved in ../prm/lat_lon.rep_edited). See repEdit.jpg in OTISoo/local/YS/exe for the representer and data sites we picked in this example.

The contents of the directory YS/prm were saved. The contents of the directories ../repx1, ../dat and ../out were cleaned after the test run to minimize the size of the OTISoo distribution.

References »

  1. E. Anderson et al., LAPACK Users' Guide, SIAM, Philadelphia, 1992
  2. G. Egbert, A. Bennett, M. Foreman, TOPEX/Poseidon tides estimated using a global inverse model, J. Geophys. Res., vol. 99, no. C12, pp. 24,821-24,852, 1994
  3. G. Egbert and A. Bennett, Data Assimilation Methods for Ocean Tides, in Modern Approaches to Data Assimilation in Ocean Modeling, Elsevier Science B.V., 1996
  4. G. Egbert and S. Erofeeva, Efficient inverse modeling of barotropic ocean tides, Journal of Atmospheric and Oceanic Technology, vol. 19, no. 2, February 2002
  5. S. R. Jayne and L. C. St. Laurent, Parameterizing tidal dissipation over rough topography, Geophys. Res. Lett., 28, 811-814, 2001
  6. R. Ray, GSFC, Topex/Poseidon "no-tide" data (rray@geodesy2.gsfc.nasa.gov)
  7. E. D. Zaron and G. D. Egbert, Estimating Open-Ocean Barotropic Tidal Dissipation: The Hawaiian Ridge, Journal of Physical Oceanography, vol. 36, pp. 1019-1035, 2006