DL LSTM: An explanation of the tf.contrib.rnn.BasicLSTMCell(rnn_unit) function

Reader contribution · 868 · 2025-04-04



Contents

An explanation of the tf.contrib.rnn.BasicLSTMCell(rnn_unit) function

What the function does

Source code of the function

An explanation of the tf.contrib.rnn.BasicLSTMCell(rnn_unit) function

What the function does

      """Basic LSTM recurrent network cell.

      The implementation is based on: http://arxiv.org/abs/1409.2329.

      We add forget_bias (default: 1) to the biases of the forget gate in order to reduce the scale of forgetting in the beginning of the training.

It does not allow cell clipping, a projection layer, and does not use peep-hole connections: it is the basic baseline. For advanced models, please use the full @{tf.nn.rnn_cell.LSTMCell}

      that follows.

      """

def __init__(self,
             num_units,
             forget_bias=1.0,
             state_is_tuple=True,
             activation=None,
             reuse=None,
             name=None,
             dtype=None):

      """Initialize the basic LSTM cell.

Basic LSTM recurrent network cell.

The implementation is based on http://arxiv.org/abs/1409.2329.

A forget_bias (default: 1) is added to the biases of the forget gate in order to reduce the scale of forgetting at the beginning of training.

This cell does not support cell clipping, a projection layer, or peephole connections: it is the basic baseline. For advanced models, use the full @{tf.nn.rnn_cell.LSTMCell} instead.

Args:
  num_units: int, The number of units in the LSTM cell.
  forget_bias: float, The bias added to forget gates (see above). Must set to `0.0` manually when restoring from CudnnLSTM-trained checkpoints.
  state_is_tuple: If True, accepted and returned states are 2-tuples of the `c_state` and `m_state`. If False, they are concatenated along the column axis. The latter behavior will soon be deprecated.
  activation: Activation function of the inner states. Default: `tanh`.
  reuse: (optional) Python boolean describing whether to reuse variables in an existing scope. If not `True`, and the existing scope already has the given variables, an error is raised.
  name: String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases.
  dtype: Default dtype of the layer (default of `None` means use the type of the first input). Required when `build` is called before `call`. When restoring from CudnnLSTM-trained checkpoints, must use `CudnnCompatibleLSTMCell` instead.
"""

Parameters:

num_units: int, the number of units in the LSTM cell.

forget_bias: float, the bias added to the forget gates (see above). Must be set to 0.0 manually when restoring from CudnnLSTM-trained checkpoints.

state_is_tuple: if True, the accepted and returned states are 2-tuples of `c_state` and `m_state`. If False, they are concatenated along the column axis; the latter behavior will soon be deprecated.

activation: activation function of the inner states. Default: the tanh activation function.

reuse: (optional) Python boolean describing whether to reuse variables in an existing scope. If not True and the existing scope already has the given variables, an error is raised.

name: string, the name of the layer. Layers with the same name will share weights, but to avoid mistakes reuse=True is required in such cases.

dtype: the default dtype of the layer (the default of None means use the type of the first input). Required when `build` is called before `call`. When restoring from CudnnLSTM-trained checkpoints, `CudnnCompatibleLSTMCell` must be used instead.
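As a quick illustration of how this cell is typically used, here is a minimal sketch against the TensorFlow 1.x API. The values rnn_unit, batch_size, time_step and input_size are illustrative placeholders, not part of the docstring above: the cell is constructed with the desired number of hidden units and then unrolled over a sequence with tf.nn.dynamic_rnn.

import tensorflow as tf

rnn_unit = 10      # number of hidden units in the LSTM cell (illustrative value)
time_step = 20     # illustrative sequence length
input_size = 7     # illustrative feature dimension per time step

# A batch of input sequences: [batch_size, time_step, input_size]
X = tf.placeholder(tf.float32, shape=[None, time_step, input_size])

# Create the basic LSTM cell; state_is_tuple=True returns (c_state, h_state) pairs
cell = tf.contrib.rnn.BasicLSTMCell(rnn_unit, forget_bias=1.0, state_is_tuple=True)

# Zero initial state for the (dynamic) batch size
init_state = cell.zero_state(tf.shape(X)[0], dtype=tf.float32)

# Unroll the cell over the time dimension
outputs, final_state = tf.nn.dynamic_rnn(cell, X, initial_state=init_state)
# outputs:     [batch_size, time_step, rnn_unit]
# final_state: LSTMStateTuple(c, h), each of shape [batch_size, rnn_unit]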

Source code of the function

@tf_export("nn.rnn_cell.BasicLSTMCell")
class BasicLSTMCell(LayerRNNCell):
  """Basic LSTM recurrent network cell.

  The implementation is based on: http://arxiv.org/abs/1409.2329.

  We add forget_bias (default: 1) to the biases of the forget gate in order to
  reduce the scale of forgetting in the beginning of the training.

  It does not allow cell clipping, a projection layer, and does not
  use peep-hole connections: it is the basic baseline.

  For advanced models, please use the full @{tf.nn.rnn_cell.LSTMCell}
  that follows.
  """

  def __init__(self,
               num_units,
               forget_bias=1.0,
               state_is_tuple=True,
               activation=None,
               reuse=None,
               name=None,
               dtype=None):
    """Initialize the basic LSTM cell.

    Args:
      num_units: int, The number of units in the LSTM cell.
      forget_bias: float, The bias added to forget gates (see above).
        Must set to `0.0` manually when restoring from CudnnLSTM-trained
        checkpoints.
      state_is_tuple: If True, accepted and returned states are 2-tuples of
        the `c_state` and `m_state`. If False, they are concatenated
        along the column axis. The latter behavior will soon be deprecated.
      activation: Activation function of the inner states. Default: `tanh`.
      reuse: (optional) Python boolean describing whether to reuse variables
        in an existing scope. If not `True`, and the existing scope already has
        the given variables, an error is raised.
      name: String, the name of the layer. Layers with the same name will
        share weights, but to avoid mistakes we require reuse=True in such
        cases.
      dtype: Default dtype of the layer (default of `None` means use the type
        of the first input). Required when `build` is called before `call`.

      When restoring from CudnnLSTM-trained checkpoints, must use
      `CudnnCompatibleLSTMCell` instead.
    """
    super(BasicLSTMCell, self).__init__(_reuse=reuse, name=name, dtype=dtype)
    if not state_is_tuple:
      logging.warn("%s: Using a concatenated state is slower and will soon be "
                   "deprecated. Use state_is_tuple=True.", self)

    # Inputs must be 2-dimensional.
    self.input_spec = base_layer.InputSpec(ndim=2)

    self._num_units = num_units
    self._forget_bias = forget_bias
    self._state_is_tuple = state_is_tuple
    self._activation = activation or math_ops.tanh

  @property
  def state_size(self):
    return (LSTMStateTuple(self._num_units, self._num_units)
            if self._state_is_tuple else 2 * self._num_units)

  @property
  def output_size(self):
    return self._num_units

  def build(self, inputs_shape):
    if inputs_shape[1].value is None:
      raise ValueError("Expected inputs.shape[-1] to be known, saw shape: %s"
                       % inputs_shape)

    input_depth = inputs_shape[1].value
    h_depth = self._num_units
    self._kernel = self.add_variable(
        _WEIGHTS_VARIABLE_NAME,
        shape=[input_depth + h_depth, 4 * self._num_units])
    self._bias = self.add_variable(
        _BIAS_VARIABLE_NAME,
        shape=[4 * self._num_units],
        initializer=init_ops.zeros_initializer(dtype=self.dtype))

    self.built = True

  def call(self, inputs, state):
    """Long short-term memory cell (LSTM).

    Args:
      inputs: `2-D` tensor with shape `[batch_size, input_size]`.
      state: An `LSTMStateTuple` of state tensors, each shaped
        `[batch_size, num_units]`, if `state_is_tuple` has been set to
        `True`. Otherwise, a `Tensor` shaped
        `[batch_size, 2 * num_units]`.

    Returns:
      A pair containing the new hidden state, and the new state (either a
        `LSTMStateTuple` or a concatenated state, depending on
        `state_is_tuple`).
    """
    sigmoid = math_ops.sigmoid
    one = constant_op.constant(1, dtype=dtypes.int32)
    # Parameters of gates are concatenated into one multiply for efficiency.
    if self._state_is_tuple:
      c, h = state
    else:
      c, h = array_ops.split(value=state, num_or_size_splits=2, axis=one)

    gate_inputs = math_ops.matmul(
        array_ops.concat([inputs, h], 1), self._kernel)
    gate_inputs = nn_ops.bias_add(gate_inputs, self._bias)

    # i = input_gate, j = new_input, f = forget_gate, o = output_gate
    i, j, f, o = array_ops.split(
        value=gate_inputs, num_or_size_splits=4, axis=one)

    forget_bias_tensor = constant_op.constant(self._forget_bias, dtype=f.dtype)
    # Note that using `add` and `multiply` instead of `+` and `*` gives a
    # performance improvement. So using those at the cost of readability.
    add = math_ops.add
    multiply = math_ops.multiply
    new_c = add(multiply(c, sigmoid(add(f, forget_bias_tensor))),
                multiply(sigmoid(i), self._activation(j)))
    new_h = multiply(self._activation(new_c), sigmoid(o))

    if self._state_is_tuple:
      new_state = LSTMStateTuple(new_c, new_h)
    else:
      new_state = array_ops.concat([new_c, new_h], 1)
    return new_h, new_state
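To make the gate arithmetic inside call() easier to follow, here is a minimal NumPy re-implementation of a single step. This is an illustrative sketch, not TensorFlow code; the helper name basic_lstm_step and its arguments are made up for this example, but the shapes match what build() creates and the formulas mirror the code above.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def basic_lstm_step(x, c, h, kernel, bias, forget_bias=1.0):
    """One step of the computation performed by BasicLSTMCell.call().

    x:      [batch, input_size]
    c, h:   [batch, num_units]  (previous cell state and hidden state)
    kernel: [input_size + num_units, 4 * num_units]
    bias:   [4 * num_units]
    """
    # One matmul computes all four gates at once, as in the TensorFlow code.
    gate_inputs = np.concatenate([x, h], axis=1) @ kernel + bias
    # Same ordering as above: i = input gate, j = new input,
    # f = forget gate, o = output gate.
    i, j, f, o = np.split(gate_inputs, 4, axis=1)
    new_c = c * sigmoid(f + forget_bias) + sigmoid(i) * np.tanh(j)
    new_h = np.tanh(new_c) * sigmoid(o)
    return new_h, new_c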
