How to solve a univariate equation of degree fifteen (fifty univariate linear equations)
2022-05-25
Same medicine, new broth: if you have hands, you can do this.
Basic shampoo example
The dataset looks like this:
Your data just has to line up with this layout: a date column and a value column.
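For reference, here is a minimal sketch of the kind of file the script expects. The column names and values are illustrative assumptions, not the author's actual file; the only hard requirement is that the first column parses with the '%Y/%m/%d' format used by the script below.

# Illustrative only: build a toy CSV in the layout the script expects.
# Column names and values are assumptions, not the author's actual data.
from pandas import DataFrame

DataFrame({
    'Month': ['2001/01/01', '2001/02/01', '2001/03/01'],
    'Sales': [266.0, 145.9, 183.1],
}).to_csv('data_set/shampoo-sales.csv', index=False)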
The only places you need to modify are essentially the file path and, if your dates are formatted differently, the parser's format string. Plug your data in and you are done; the complete script is in the file linked below.
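As a hedged illustration (these are the same read_csv and parser calls used in the full script further down; your path and format string will differ), the two knobs look like this:

from pandas import read_csv
from datetime import datetime

def parser(x):
    # change this format string to match your date column
    return datetime.strptime(x, '%Y/%m/%d')

# change this path to point at your own CSV
series = read_csv('data_set/shampoo-sales.csv', header=0, parse_dates=[0],
                  index_col=0, date_parser=parser).squeeze('columns')
print(series.head())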
Full file:
Link: https://pan.baidu.com/s/1FgDKr6ZF__OBuahkpy2PFg?pwd=dat5  Extraction code: dat5
Upgraded shampoo example
The dataset is still the same:
The code is below; you only need to change the file path to match your own dataset:
# coding=utf-8
from pandas import read_csv
from datetime import datetime
from pandas import concat
from pandas import DataFrame
from pandas import Series
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from math import sqrt
from matplotlib import pyplot
import numpy


# parse the date column
def parser(x):
    return datetime.strptime(x, '%Y/%m/%d')


# convert the series into supervised-learning data
def timeseries_to_supervised(data, lag=1):
    df = DataFrame(data)
    columns = [df.shift(i) for i in range(1, lag + 1)]  # the shifted copy is the input, the original df the output
    columns.append(df)
    df = concat(columns, axis=1)
    df.fillna(0, inplace=True)
    return df


# convert to differenced data
def difference(dataset, interval=1):
    diff = list()
    for i in range(interval, len(dataset)):
        value = dataset[i] - dataset[i - interval]
        diff.append(value)
    return Series(diff)


# invert the differencing
def inverse_difference(history, yhat, interval=1):  # history, prediction, differencing interval
    return yhat + history[-interval]


# scaling
def scale(train, test):
    # fit the scaler on the training data only
    scaler = MinMaxScaler(feature_range=(-1, 1))
    scaler = scaler.fit(train)
    # transform the training data
    train = train.reshape(train.shape[0], train.shape[1])
    train_scaled = scaler.transform(train)
    # transform the test data
    test = test.reshape(test.shape[0], test.shape[1])
    test_scaled = scaler.transform(test)
    return scaler, train_scaled, test_scaled


# inverse scaling
def invert_scale(scaler, X, value):
    new_row = [x for x in X] + [value]
    array = numpy.array(new_row)
    array = array.reshape(1, len(array))
    inverted = scaler.inverse_transform(array)
    return inverted[0, -1]


# fit an LSTM on the training data
def fit_lstm(train, batch_size, nb_epoch, neurons):
    X, y = train[:, 0:-1], train[:, -1]
    X = X.reshape(X.shape[0], 1, X.shape[1])
    model = Sequential()
    # add the LSTM layer
    model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
    model.add(Dense(1))  # output layer with one node
    # compile with MSE loss and the Adam optimizer
    model.compile(loss='mean_squared_error', optimizer='adam')
    for i in range(nb_epoch):
        # feed the data batch_size samples at a time, without shuffling
        model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
        model.reset_states()
        print("current epoch: " + str(i))
    return model


# one-step forecast
def forecast_lstm(model, batch_size, X):
    X = X.reshape(1, 1, len(X))
    yhat = model.predict(X, batch_size=batch_size)
    return yhat[0, 0]


# load the data (squeeze the single data column into a Series)
series = read_csv('data_set/shampoo-sales.csv', header=0, parse_dates=[0],
                  index_col=0, date_parser=parser).squeeze('columns')

# make the data stationary
raw_values = series.values
diff_values = difference(raw_values, 1)  # convert to differenced data

# turn the stationary data into supervised data
supervised = timeseries_to_supervised(diff_values, 1)
supervised_values = supervised.values

# split the data: the first 24 rows are the training set, the last 12 the test set
train, test = supervised_values[0:-12], supervised_values[-12:]

# scale the data
scaler, train_scaled, test_scaled = scale(train, test)

# fit the model
lstm_model = fit_lstm(train_scaled, 1, 100, 4)  # training data, batch_size, epochs, number of neurons
# prediction
train_reshaped = train_scaled[:, 0].reshape(len(train_scaled), 1, 1)  # reshape the training set into model input
lstm_model.predict(train_reshaped, batch_size=1)  # prime the stateful model on the training data

# Walk-forward validation on the test data. Experiments show that with too few training
# epochs the model simply shifts the series one step, using yesterday's value as today's
# prediction; only with enough epochs does the trained behaviour show up.
predictions = list()
for i in range(len(test_scaled)):  # take one test value as input, predict the next, and so on
    # one-step forecast
    X, y = test_scaled[i, 0:-1], test_scaled[i, -1]
    yhat = forecast_lstm(lstm_model, 1, X)
    # inverse scaling
    yhat = invert_scale(scaler, X, yhat)
    # inverse differencing
    yhat = inverse_difference(raw_values, yhat, len(test_scaled) + 1 - i)
    predictions.append(yhat)
    expected = raw_values[len(train) + i + 1]
    print('Month=%d, Predicted=%f, Expected=%f' % (i + 1, yhat, expected))

# performance report
rmse = sqrt(mean_squared_error(raw_values[-12:], predictions))
print('Test RMSE: %.3f' % rmse)
# plot
pyplot.plot(raw_values[-12:])
pyplot.plot(predictions)
pyplot.show()
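To make the preprocessing concrete, here is a small sanity check (my own toy example, not from the original post) of what difference() and timeseries_to_supervised() produce; the two helpers are copied unchanged from the script above:

from pandas import DataFrame, Series, concat

def difference(dataset, interval=1):
    # same as above: change between values `interval` steps apart
    return Series([dataset[i] - dataset[i - interval]
                   for i in range(interval, len(dataset))])

def timeseries_to_supervised(data, lag=1):
    # same as above: pair each value with its lagged predecessor
    df = DataFrame(data)
    columns = [df.shift(i) for i in range(1, lag + 1)]
    columns.append(df)
    df = concat(columns, axis=1)
    df.fillna(0, inplace=True)
    return df

raw = [10, 12, 15, 14, 18]
diff = difference(raw, 1)                 # 2, 3, -1, 4
print(timeseries_to_supervised(diff, 1))
#      0    0
# 0  0.0  2.0   <- input: yesterday's change, output: today's change
# 1  2.0  3.0
# 2  3.0 -1.0
# 3 -1.0  4.0

Each row pairs the previous change (the model input) with the current change (the target); that two-column array is exactly the supervised dataset the LSTM is fitted on.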
The results look like this:
Tweak the details for your own case; this is just a reference.
Full file:
Link: https://pan.baidu.com/s/1tYDb44Ge5S6Wwt1sPE8iHA?pwd=hkkc  Extraction code: hkkc
Math-modelling QQ group: 912166339. No discussion during competitions; we can chat afterwards. Subscribe to this column for more mathematical-modelling techniques and analysis.
A more robust LSTM
The dataset is unchanged; the code is as follows:
# coding=utf-8
from pandas import read_csv
from datetime import datetime
from pandas import concat
from pandas import DataFrame
from pandas import Series
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from math import sqrt
from matplotlib import pyplot
import numpy


# parse the date column
def parser(x):
    return datetime.strptime(x, '%Y/%m/%d')


# convert the series into supervised-learning data
def timeseries_to_supervised(data, lag=1):
    df = DataFrame(data)
    columns = [df.shift(i) for i in range(1, lag + 1)]  # the shifted copy is the input, the original df the output
    columns.append(df)
    df = concat(columns, axis=1)
    df.fillna(0, inplace=True)
    return df


# convert to differenced data
def difference(dataset, interval=1):
    diff = list()
    for i in range(interval, len(dataset)):
        value = dataset[i] - dataset[i - interval]
        diff.append(value)
    return Series(diff)


# invert the differencing
def inverse_difference(history, yhat, interval=1):  # history, prediction, differencing interval
    return yhat + history[-interval]


# scaling
def scale(train, test):
    # fit the scaler on the training data only
    scaler = MinMaxScaler(feature_range=(-1, 1))
    scaler = scaler.fit(train)
    # transform the training data
    train = train.reshape(train.shape[0], train.shape[1])
    train_scaled = scaler.transform(train)
    # transform the test data
    test = test.reshape(test.shape[0], test.shape[1])
    test_scaled = scaler.transform(test)
    return scaler, train_scaled, test_scaled


# inverse scaling
def invert_scale(scaler, X, value):
    new_row = [x for x in X] + [value]
    array = numpy.array(new_row)
    array = array.reshape(1, len(array))
    inverted = scaler.inverse_transform(array)
    return inverted[0, -1]


# fit an LSTM on the training data
def fit_lstm(train, batch_size, nb_epoch, neurons):
    X, y = train[:, 0:-1], train[:, -1]
    X = X.reshape(X.shape[0], 1, X.shape[1])
    model = Sequential()
    # add the LSTM layer
    model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True))
    model.add(Dense(1))  # output layer with one node
    # compile with MSE loss and the Adam optimizer
    model.compile(loss='mean_squared_error', optimizer='adam')
    for i in range(nb_epoch):
        # feed the data batch_size samples at a time, without shuffling
        model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
        model.reset_states()
        print("current epoch: " + str(i))
    return model


# one-step forecast
def forecast_lstm(model, batch_size, X):
    X = X.reshape(1, 1, len(X))
    yhat = model.predict(X, batch_size=batch_size)
    return yhat[0, 0]


# load the data (squeeze the single data column into a Series)
series = read_csv('data_set/shampoo-sales.csv', header=0, parse_dates=[0],
                  index_col=0, date_parser=parser).squeeze('columns')

# make the data stationary
raw_values = series.values
diff_values = difference(raw_values, 1)  # convert to differenced data

# turn the stationary data into supervised data
supervised = timeseries_to_supervised(diff_values, 1)
supervised_values = supervised.values

# split the data: the first 24 rows are the training set, the last 12 the test set
train, test = supervised_values[0:-12], supervised_values[-12:]

# scale the data
scaler, train_scaled, test_scaled = scale(train, test)

# repeat the experiment
repeats = 30
error_scores = list()
for r in range(repeats):
    # fit the model
    lstm_model = fit_lstm(train_scaled, 1, 100, 4)  # training data, batch_size, epochs, number of neurons
    # prediction
    train_reshaped = train_scaled[:, 0].reshape(len(train_scaled), 1, 1)  # reshape the training set into model input
    lstm_model.predict(train_reshaped, batch_size=1)  # prime the stateful model on the training data
    # Walk-forward validation on the test data. With too few training epochs the model simply
    # shifts the series one step, using yesterday's value as today's prediction; only with
    # enough epochs does the trained behaviour show up.
    predictions = list()
    for i in range(len(test_scaled)):
        # one-step forecast
        X, y = test_scaled[i, 0:-1], test_scaled[i, -1]
        yhat = forecast_lstm(lstm_model, 1, X)
        # inverse scaling
        yhat = invert_scale(scaler, X, yhat)
        # inverse differencing
        yhat = inverse_difference(raw_values, yhat, len(test_scaled) + 1 - i)
        predictions.append(yhat)
        expected = raw_values[len(train) + i + 1]
        print('Month=%d, Predicted=%f, Expected=%f' % (i + 1, yhat, expected))
    # performance report
    rmse = sqrt(mean_squared_error(raw_values[-12:], predictions))
    print('%d) Test RMSE: %.3f' % (r + 1, rmse))
    error_scores.append(rmse)

# summary statistics
results = DataFrame()
results['rmse'] = error_scores
print(results.describe())
results.boxplot()
pyplot.show()
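One optional tweak (my suggestion, not part of the original script): pin the global random seeds once at the top of the file. Each of the 30 repeats still draws different initial weights, so the RMSE spread being measured is preserved, but the experiment as a whole becomes reproducible from run to run.

# My addition, not in the original: seed once at program start so the whole
# 30-repeat experiment is reproducible. Individual repeats still differ,
# since the random stream keeps advancing between fits.
import random
import numpy
import tensorflow

random.seed(1)
numpy.random.seed(1)
tensorflow.random.set_seed(1)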
Machine learning · Deep learning