AIModels.AIClasses module

Classes for the AI project

Classes

Field:

Class for fields

EarlyStopping:

Class for early stopping

TimeSeriesDataset:

Class for time series datasets

PositionalEmbedding:

Positional embeddings for input sequences

FixedPositionalEmbedding:

Fixed (non-learnable) positional embeddings for the time and feature dimensions

FeaturePositionalEmbedding:

Fixed (non-learnable) positional embeddings for the feature dimension only

SymmetricFeatureScaler:

Scales each feature to a symmetric range around zero

IdentityScaler:

No-op (identity) scaler

class AIModels.AIClasses.Field(name, levels, area, mr, dstart='1/1/1940', dtend='12/31/2022', dropX=False)[source]

Bases: object

Class for fields

Parameters:
  • name (string) -- Name of the field

  • levels (string) -- Level of the field

  • area (string) -- Area to be analyzed, possible values are

    • 'TROPIC': Tropics

    • 'GLOBAL': Global

    • 'PACTROPIC': Pacific Tropics

    • 'WORLD': World

    • 'EUROPE': Europe

    • 'NORTH_AMERICA': North America

    • 'NH-ML': Northern Hemisphere Mid-Latitudes

  • mr (float) -- Number of EOFs retained

  • dstart (string) -- Start date for field

  • dtend (string) -- End date for field

Variables:
  • name (string) -- Name of the field

  • levels (string) -- Level of the field

  • area (string) -- Area of the field

  • mr (float) -- Number of EOFs retained

  • dstart (string) -- Start date for field

  • dtend (string) -- End date for field
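
Example

A minimal construction sketch; the field name, level, area, and EOF count below are illustrative values chosen for the example, not defaults of the package:

>>> from AIModels.AIClasses import Field
>>> field = Field('Z', '500', 'EUROPE', 20, dstart='1/1/1940', dtend='12/31/2022')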

class AIModels.AIClasses.EarlyStopping(patience=5, verbose=False, delta=0)[source]

Bases: object

Class for early stopping

Parameters:
  • patience (int) -- Number of epochs to wait before stopping

  • verbose (boolean) -- If True, print the epoch when stopping

  • delta (float) -- Minimum change in loss to be considered an improvement

Variables:
  • patience (int) -- Number of epochs to wait before stopping

  • verbose (boolean) -- If True, print the epoch when stopping

  • delta (float) -- Minimum change in loss to be considered an improvement

  • counter (int) -- Number of epochs since last improvement

  • best_score (float) -- Best loss score

  • early_stop (boolean) -- If True, stop the training
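
Example

A sketch of a typical early-stopping training loop. Only the early_stop attribute is documented above; the per-epoch update call is assumed here to follow the common PyTorch pattern of invoking the instance with the validation loss, so check [source] for the exact call signature:

>>> from AIModels.AIClasses import EarlyStopping
>>> early_stopping = EarlyStopping(patience=5, verbose=True, delta=0.0)
>>> for epoch in range(100):
...     val_loss = train_one_epoch()    # hypothetical training step
...     early_stopping(val_loss)        # assumed update call, see [source]
...     if early_stopping.early_stop:   # documented attribute
...         break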

class AIModels.AIClasses.TimeSeriesDataset(datasrc, datatgt, TIN, MIN, T, K, time_features=None)[source]

Bases: Dataset

Class for time series datasets. Includes time features for transformers

Parameters:
  • datasrc (numpy array) -- Source data

  • datatgt (numpy array) -- Target data

  • TIN (int) -- Input time steps

  • MIN (int) -- Input variables size

  • T (int) -- Prediction time steps

  • K (int) -- Output variables size

  • time_features (numpy array (optional)) -- If not None, contains the time features

  • shift -- Overlap between source and target; for transformers the overlap should be 0, for LSTMs it should be TIN - T

Variables:
  • datasrc (numpy array) -- Source data

  • datatgt (numpy array) -- Target data

  • time_features (numpy array) -- Time features

  • TIN (int) -- Input time steps

  • MIN (int) -- Input variables size

  • T (int) -- Output time steps

  • K (int) -- Output variables size
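
Example

A sketch wrapping random arrays in the dataset and a standard PyTorch DataLoader (the class subclasses Dataset, so this composes directly); the array shapes, with time along the first axis, are an assumption for illustration rather than part of the documented interface:

>>> import numpy as np
>>> from torch.utils.data import DataLoader
>>> from AIModels.AIClasses import TimeSeriesDataset
>>> src = np.random.randn(1000, 50)   # assumed shape: (time, MIN)
>>> tgt = np.random.randn(1000, 10)   # assumed shape: (time, K)
>>> ds = TimeSeriesDataset(src, tgt, TIN=12, MIN=50, T=4, K=10)
>>> loader = DataLoader(ds, batch_size=32, shuffle=False)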

class AIModels.AIClasses.PositionalEmbedding(T, neof, embedding_dim)[source]

Bases: Module

forward(X)[source]

Apply positional embeddings to the input sequence.

Parameters:
  • X (Tensor) -- Input tensor of shape (batch_size, T, neof)

Returns:
  Tensor -- Tensor of shape (batch_size, T, neof) with positional embeddings applied
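
Example

A usage sketch with random data; the batch size and dimensions are illustrative values:

>>> import torch
>>> from AIModels.AIClasses import PositionalEmbedding
>>> pe = PositionalEmbedding(T=12, neof=50, embedding_dim=64)
>>> X = torch.randn(8, 12, 50)   # (batch_size, T, neof)
>>> out = pe(X)                  # same shape: (8, 12, 50)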

class AIModels.AIClasses.FixedPositionalEmbedding(max_T, max_neof, embedding_dim=64)[source]

Bases: Module

Creates fixed (non-learnable) positional embeddings for both the time dimension (T) and feature dimension (neof).

forward(X)[source]

Parameters:
  • X (Tensor) -- Input tensor of shape (batch_size, T, neof)

Returns:
  Tensor -- Tensor of shape (batch_size, T, neof) with a fixed (non-learnable) sinusoidal offset added
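
Example

A usage sketch; max_T and max_neof are presumably upper bounds on the time and feature sizes the module will see, and the exact sinusoid layout should be checked in [source]:

>>> import torch
>>> from AIModels.AIClasses import FixedPositionalEmbedding
>>> fpe = FixedPositionalEmbedding(max_T=64, max_neof=128, embedding_dim=64)
>>> X = torch.randn(8, 12, 50)   # (batch_size, T, neof), T <= max_T, neof <= max_neof
>>> out = fpe(X)                 # same shape, sinusoidal offset added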

class AIModels.AIClasses.FeaturePositionalEmbedding(max_neof, embedding_dim=64)[source]

Bases: Module

Creates fixed (non-learnable) positional embeddings for the feature dimension only.

Given an input (batch_size, T, neof), we produce a feature-wise offset of shape (1, 1, neof), then broadcast-add it to the input.

forward(X)[source]

Parameters:
  • X (Tensor) -- Input tensor of shape (batch_size, T, neof)

Returns:
  Tensor -- Tensor of the same shape, with a feature-wise positional offset added
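
Example

A usage sketch illustrating the broadcast described above: the module produces a (1, 1, neof) offset that is added identically across all batch elements and time steps; the dimensions below are illustrative:

>>> import torch
>>> from AIModels.AIClasses import FeaturePositionalEmbedding
>>> fe = FeaturePositionalEmbedding(max_neof=128, embedding_dim=64)
>>> X = torch.randn(8, 12, 50)   # (batch_size, T, neof)
>>> out = fe(X)                  # same shape; same offset at every (batch, t)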

class AIModels.AIClasses.SymmetricFeatureScaler(feature_scales=None)[source]

Bases: BaseEstimator, TransformerMixin

Scales each feature to a symmetric range [-scale, +scale] around zero, following the scikit-learn transformer API.

For feature i, the data's minimum is mapped to -feature_scales[i], and the data's maximum is mapped to +feature_scales[i].

Parameters:

feature_scales (array-like of shape (n_features,), optional (default=None)) -- If None, all features are scaled to [-1, +1]. Otherwise, each feature i is scaled to [-feature_scales[i], +feature_scales[i]].

Variables:
  • data_min_ (ndarray of shape (n_features,)) -- Per-feature minimum seen in the data during fit.

  • data_max_ (ndarray of shape (n_features,)) -- Per-feature maximum seen in the data during fit.

  • n_features (int) -- Number of features in the fitted data.

  • feature_scales (ndarray of shape (n_features,)) -- Final validated array of per-feature scales.

Example

>>> import numpy as np
>>> X = np.array([[1, 10], [2, 20], [3, 30]], dtype=float)
>>> # Suppose we want feature 0 scaled to [-1, +1] and feature 1 to [-5, +5]
>>> feature_scales = [1.0, 5.0]
>>> scaler = SymmetricFeatureScaler(feature_scales=feature_scales)
>>> scaler.fit(X)
SymmetricFeatureScaler(...)
>>> X_scaled = scaler.transform(X)
>>> X_scaled
array([[-1.        , -5.        ],
       [ 0.        ,  0.        ],
       [ 1.        ,  5.        ]])
>>> X_orig = scaler.inverse_transform(X_scaled)
>>> X_orig
array([[ 1., 10.],
       [ 2., 20.],
       [ 3., 30.]])
fit(X, y=None)[source]

Learn the per-feature min and max from the training data.

transform(X)[source]

Scale the input data X to [-feature_scales[i], +feature_scales[i]] per feature.

inverse_transform(X)[source]

Revert scaled data to the original range by inverting the forward chain: [-scale, +scale] => [-1, +1] => [0, 1] => [data_min_, data_max_].
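
A NumPy sketch of the per-feature forward mapping implied by this chain (reconstructed from the docstring, not copied from the implementation); note that it reproduces the fitted example above, mapping [1, 2, 3] to [-1, 0, 1] for a scale of 1:

>>> import numpy as np
>>> def symmetric_scale(x, data_min, data_max, scale):
...     x01 = (x - data_min) / (data_max - data_min)   # -> [0, 1]
...     return scale * (2.0 * x01 - 1.0)               # -> [-scale, +scale]
>>> symmetric_scale(np.array([1., 2., 3.]), 1.0, 3.0, 1.0)
array([-1.,  0.,  1.])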

class AIModels.AIClasses.IdentityScaler[source]

Bases: BaseEstimator, TransformerMixin

A no-op (identity) scaler that complies with scikit-learn's estimator API. It leaves data unchanged.

fit(X, y=None)[source]

Return the fitted scaler; an identity scaler has nothing to learn.

transform(X, y=None)[source]

Return X unchanged.

inverse_transform(X, y=None)[source]

Return X unchanged.
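
Example

A short sketch of the pass-through behavior described above, assuming fit returns self per the scikit-learn convention the class follows:

>>> import numpy as np
>>> from AIModels.AIClasses import IdentityScaler
>>> scaler = IdentityScaler().fit(np.array([[1.0], [2.0]]))
>>> scaler.transform(np.array([[1.0], [2.0]]))
array([[1.],
       [2.]])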