import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense, Activation

def build_model():
    NUM_MIDDLE_01 = 100
    NUM_MIDDLE_02 = 120
    model = Sequential()
    # return_sequences=True: emit one output vector per time step,
    # so the next LSTM layer receives a 3-D input
    model.add(LSTM(NUM_MIDDLE_01,
                   input_shape=(train_x.shape[1], train_x.shape[2]),
                   return_sequences=True))
    model.add(Dropout(0.2))
    model.add(LSTM(NUM_MIDDLE_02))  # default return_sequences=False: last step only
    model.add(Dropout(0.2))
    model.add(Dense(1))
    model.add(Activation("linear"))
    _optimizer = tf.keras.optimizers.Adam(amsgrad=True)
    model.compile(loss="mae", optimizer=_optimizer)
    # Alternatives: model.compile(loss="mse", optimizer="rmsprop")
    model.summary()
    return model
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm (LSTM)                  (None, 1, 100)            43600
dropout (Dropout)            (None, 1, 100)            0
lstm_1 (LSTM)                (None, 120)               106080
dropout_1 (Dropout)          (None, 120)               0
dense (Dense)                (None, 1)                 121
activation (Activation)      (None, 1)                 0
=================================================================
Total params: 149,801
Trainable params: 149,801
Non-trainable params: 0
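The parameter counts in the summary can be checked by hand. An LSTM layer has four gates, each with an input kernel, a recurrent kernel, and a bias, giving 4 * units * (input_dim + units + 1) weights. Working backwards from the first LSTM's 43,600 parameters, the input feature dimension (train_x.shape[2]) must be 8; a minimal sketch under that assumption:

```python
def lstm_params(input_dim, units):
    # 4 gates, each with: kernel (input_dim x units),
    # recurrent kernel (units x units), and bias (units)
    return 4 * units * (input_dim + units + 1)

# First LSTM: 8 input features (inferred from the summary), 100 units
print(lstm_params(8, 100))    # 43600

# Second LSTM: receives the 100-dim output of the first, 120 units
print(lstm_params(100, 120))  # 106080

# Dense(1) on a 120-dim input: 120 weights + 1 bias
print(120 * 1 + 1)            # 121
```

The three counts sum to the 149,801 total params reported above (the Dropout and Activation layers have no weights).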
When return_sequences is True, the layer's output gains an extra dimension: it returns the full sequence of hidden states with shape (batch, time_steps, units), instead of only the last step's state with shape (batch, units).
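The shape difference can be seen directly by calling an LSTM layer on a dummy batch. The sketch below assumes 8 input features, matching the parameter counts in the summary above:

```python
import numpy as np
from tensorflow.keras.layers import LSTM

# Dummy batch: 4 samples, 1 time step, 8 features
x = np.zeros((4, 1, 8), dtype="float32")

seq = LSTM(100, return_sequences=True)(x)    # keeps the time axis
last = LSTM(100, return_sequences=False)(x)  # last time step only

print(seq.shape)   # (4, 1, 100)
print(last.shape)  # (4, 100)
```

This is why the first LSTM in build_model sets return_sequences=True (the second LSTM needs a 3-D sequence input), while the second leaves it False so that Dense(1) receives a 2-D tensor.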
Original: https://blog.csdn.net/a5601564/article/details/121563602
Author: 大道至简_lyon
Title: Modifying the output-layer dimensions in TensorFlow (on return_sequences)