Core Layers

[source]

FullyConnected

polyaxon.layers.core.FullyConnected(mode, num_units, activation='linear', bias=True, weights_init='truncated_normal', bias_init='zeros', regularizer=None, scale=0.001, dropout=None, trainable=True, restore=True, name='FullyConnected')

Adds a fully connected layer.

FullyConnected creates a variable called w, representing a fully connected weight matrix, which is multiplied by the incoming tensor to produce a Tensor of hidden units.

  • Note: if the inputs have a rank greater than 2, they are flattened prior to the initial matrix multiplication with the weights.

  • Args:

    • mode: str, Specifies if this is training, evaluation or prediction. See Modes.
    • num_units: int, number of units for this layer.
    • activation: str (name) or function (returning a Tensor).
      • Default: 'linear'.
    • bias: bool. If True, a bias is used.
    • weights_init: str (name) or Tensor. Weights initialization.
      • Default: 'truncated_normal'.
    • bias_init: str (name) or Tensor. Bias initialization.
      • Default: 'zeros'.
    • regularizer: str (name) or Tensor. Add a regularizer to this layer weights.
      • Default: None.
    • scale: float. Regularizer decay parameter. Default: 0.001.
    • dropout: float. Adds a dropout with keep_prob as 1 - dropout.
    • trainable: bool. If True, weights will be trainable.
    • restore: bool. If True, this layer's weights will be restored when loading a model.
    • name: A name for this layer (optional). Default: 'FullyConnected'.
  • Attributes:

    • w: Tensor. Variable representing units weights.
    • b: Tensor. Variable representing biases.
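
A minimal usage sketch follows; it assumes the module is called on an incoming tensor (as with other Polyaxon graph modules) and uses a plain 'train' string as a stand-in for the training-mode constant described in the Modes documentation:

```python
import tensorflow as tf
from polyaxon.layers.core import FullyConnected

# Illustrative input: a batch of flattened 28x28 images.
x = tf.placeholder(tf.float32, shape=[None, 784])

# 64 hidden units with ReLU activation and dropout applied after the layer.
fc = FullyConnected(mode='train',  # stand-in for the training-mode constant (see Modes)
                    num_units=64,
                    activation='relu',
                    dropout=0.2)
y = fc(x)  # Tensor of shape [None, 64]
```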

[source]

Dropout

polyaxon.layers.core.Dropout(mode, keep_prob, noise_shape=None, seed=None, name='Dropout')

Adds a Dropout op to the input.

With probability keep_prob, outputs the input element scaled up by 1 / keep_prob; otherwise outputs 0. The scaling is such that the expected sum is unchanged.

By default, each element is kept or dropped independently. If noise_shape is specified, it must be broadcastable to the shape of x, and only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions. For example, if shape(x) = [k, l, m, n] and noise_shape = [k, 1, 1, n], each batch and channel component will be kept independently and each row and column will be kept or not kept together.

  • Args:

    • mode: str, Specifies if this is training, evaluation or prediction. See Modes.
    • keep_prob: A float representing the probability that each element is kept.
    • noise_shape: A 1-D Tensor of type int32, representing the shape for randomly generated keep/drop flags.
    • seed: A Python integer. Used to create a random seed.
    • name: A name for this layer (optional).
  • References:

    • Dropout: A Simple Way to Prevent Neural Networks from Overfitting. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever & R. Salakhutdinov (2014), Journal of Machine Learning Research, 15(1), 1929-1958.
  • Links:

    • https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
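
To make the noise_shape behaviour described above concrete, here is a small sketch using the underlying tf.nn.dropout op directly (illustrative shapes; the layer is assumed to forward these arguments):

```python
import tensorflow as tf

x = tf.ones([2, 4, 4, 3])  # shape(x) = [k, l, m, n]

# With noise_shape = [k, 1, 1, n], the keep/drop decision is made once per
# (batch, channel) pair and broadcast over rows and columns, so each 4x4
# feature map is either zeroed or kept (and scaled by 1 / keep_prob).
dropped = tf.nn.dropout(x, keep_prob=0.5, noise_shape=[2, 1, 1, 3])

with tf.Session() as sess:
    print(sess.run(dropped)[0, :, :, 0])  # all 0.0 or all 2.0
```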

[source]

Reshape

polyaxon.layers.core.Reshape(mode, new_shape, name='Reshape')

Reshape.

A layer that reshapes the incoming tensor to the desired shape.

  • Args:
    • mode: str, Specifies if this is training, evaluation or prediction. See Modes.
    • new_shape: A list of int. The desired shape.
    • name: A name for this layer (optional).
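
A minimal usage sketch (same calling convention assumed as above, with 'train' standing in for the Modes constant):

```python
import tensorflow as tf
from polyaxon.layers.core import Reshape

x = tf.placeholder(tf.float32, shape=[None, 784])

# Reshape flat vectors back into single-channel 28x28 images.
reshape = Reshape(mode='train', new_shape=[-1, 28, 28, 1])
images = reshape(x)  # Tensor of shape [None, 28, 28, 1]
```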

[source]

Flatten

polyaxon.layers.core.Flatten(mode, name='Flatten')

Flatten the incoming Tensor.

  • Args:
    • mode: str, Specifies if this is training, evaluation or prediction. See Modes.
    • name: A name for this layer (optional).

[source]

SingleUnit

polyaxon.layers.core.SingleUnit(mode, activation='linear', bias=True, trainable=True, restore=True, name='Linear')

Adds a Single Unit Layer.

  • Args:

    • mode: str, Specifies if this is training, evaluation or prediction. See Modes.
    • activation: str (name) or function. Activation applied to this layer. Default: 'linear'.
    • bias: bool. If True, a bias is used.
    • trainable: bool. If True, weights will be trainable.
    • restore: bool. If True, this layer's weights will be restored when loading a model.
    • name: A name for this layer (optional). Default: 'Linear'.
  • Attributes:

    • W: Tensor. Variable representing weight.
    • b: Tensor. Variable representing bias.

[source]

Highway

polyaxon.layers.core.Highway(mode, num_units, activation='linear', transform_dropout=None, weights_init='truncated_normal', bias_init='zeros', regularizer=None, scale=0.001, trainable=True, restore=True, name='FullyConnectedHighway')

Adds a Fully Connected Highway layer.

A fully connected highway network layer, with some inspiration from https://github.com/fomorians/highway-fcn.

  • Args:

    • mode: str, Specifies if this is training, evaluation or prediction. See Modes.
    • num_units: int, number of units for this layer.
    • activation: str (name) or function (returning a Tensor).
      • Default: 'linear'.
    • transform_dropout: float. Keep probability for the highway transform gate.
    • weights_init: str (name) or Tensor. Weights initialization.
      • Default: 'truncated_normal'.
    • bias_init: str (name) or Tensor. Bias initialization.
      • Default: 'zeros'.
    • regularizer: str (name) or Tensor. Add a regularizer to this layer weights.
      • Default: None.
    • scale: float. Regularizer decay parameter. Default: 0.001.
    • trainable: bool. If True, weights will be trainable.
    • restore: bool. If True, this layer's weights will be restored when loading a model.
    • name: A name for this layer (optional). Default: 'FullyConnectedHighway'.
  • Attributes:

    • W: Tensor. Variable representing units weights.
    • W_t: Tensor. Variable representing units weights for transform gate.
    • b: Tensor. Variable representing biases.
    • b_t: Tensor. Variable representing biases for transform gate.
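
For reference, the computation a fully connected highway layer performs (following the highway-fcn formulation linked above) blends a plain fully connected transform with an identity path through a transform gate. A sketch of the math, not the library implementation:

```python
import tensorflow as tf

def highway_sketch(x, w, b, w_t, b_t, activation=tf.nn.relu):
    """Sketch of one fully connected highway unit.

    h = activation(x @ W + b)       # the usual fully connected transform
    t = sigmoid(x @ W_t + b_t)      # the transform gate
    y = h * t + x * (1 - t)         # gate blends transform with identity
    Note: x and h must have the same width for the identity path to apply.
    """
    h = activation(tf.matmul(x, w) + b)
    t = tf.sigmoid(tf.matmul(x, w_t) + b_t)
    return h * t + x * (1.0 - t)
```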

[source]

OneHotEncoding

polyaxon.layers.core.OneHotEncoding(mode, n_classes, on_value=1.0, off_value=0.0, name='OneHotEncoding')

Transforms numeric labels into one-hot labels using tf.one_hot.

  • Args:
    • mode: str, Specifies if this is training, evaluation or prediction. See Modes.
    • n_classes: int. Total number of classes.
    • on_value: scalar. A scalar defining the on-value.
    • off_value: scalar. A scalar defining the off-value.
    • name: A name for this layer (optional). Default: 'OneHotEncoding'.
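
Since the transformation is delegated to tf.one_hot, it behaves like this (illustrative values):

```python
import tensorflow as tf

labels = tf.constant([0, 2, 1])
one_hot = tf.one_hot(labels, depth=3, on_value=1.0, off_value=0.0)

with tf.Session() as sess:
    print(sess.run(one_hot))
    # [[1. 0. 0.]
    #  [0. 0. 1.]
    #  [0. 1. 0.]]
```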

[source]

GaussianNoise

polyaxon.layers.core.GaussianNoise(mode, scale=1, mean=0.0, stddev=1.0, seed=None, name='GaussianNoise')

Additive zero-centered Gaussian noise.

This is useful for mitigating overfitting and can be used as a form of random data augmentation. Gaussian noise is a natural choice of corruption process for real-valued inputs.

As it is a regularization layer, it is only active at training time.

  • Args:
    • mode: str, Specifies if this is training, evaluation or prediction. See Modes.
    • scale: A 0-D Tensor or Python float. The scale at which to apply the noise.
    • mean: A 0-D Tensor or Python float. The mean of the noise distribution.
    • stddev: A 0-D Tensor or Python float. The standard deviation of the noise distribution.
    • seed: A Python integer. Used to create a random seed. See tf.set_random_seed.
    • name: A name for this operation (optional).
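
A sketch of the stated behaviour (noise added only in training mode, input passed through unchanged otherwise); this is not the library implementation:

```python
import tensorflow as tf

def gaussian_noise_sketch(x, is_training, scale=1.0, mean=0.0, stddev=1.0, seed=None):
    """Additive zero-centered Gaussian noise, active only at training time."""
    if not is_training:  # regularization layers are inactive outside training
        return x
    noise = tf.random_normal(tf.shape(x), mean=mean, stddev=stddev, seed=seed)
    return x + scale * noise
```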

[source]

Merge

polyaxon.layers.core.Merge(mode, modules, merge_mode, axis=1, name='Merge')

Merges the outputs of a list of modules according to merge_mode along the given axis.

[source]

Slice

polyaxon.layers.core.Slice(mode, begin, size, name='Slice')

Extracts a slice from a tensor.

This operation extracts a slice of size size from a tensor input starting at the location specified by begin.

  • Args:
    • mode: str, Specifies if this is training, evaluation or prediction. See Modes.
    • begin: An int32 or int64 Tensor. The starting location of the slice for each dimension of the input.
    • size: An int32 or int64 Tensor. The number of elements to take along each dimension of the input.
    • name: str. A name for this layer (optional).
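
The semantics follow tf.slice; a small illustration:

```python
import tensorflow as tf

x = tf.constant([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])

# Start at row 1, column 0 and extract a 2x2 block.
sliced = tf.slice(x, begin=[1, 0], size=[2, 2])

with tf.Session() as sess:
    print(sess.run(sliced))
    # [[4 5]
    #  [7 8]]
```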