<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>木屋</title>
<subtitle>mhlwsk的博客</subtitle>
<link href="/atom.xml" rel="self"/>
<link href="https://www.mhlwsk.com/"/>
<updated>2020-09-27T16:20:15.023Z</updated>
<id>https://www.mhlwsk.com/</id>
<author>
<name>mhlwsk</name>
</author>
<generator uri="http://hexo.io/">Hexo</generator>
<entry>
<title>《PyTorch深度学习实践》13. 循环神经网络(高级篇)</title>
<link href="https://www.mhlwsk.com/archives/30590.html"/>
<id>https://www.mhlwsk.com/archives/30590.html</id>
<published>2020-09-27T16:10:01.000Z</published>
<updated>2020-09-27T16:20:15.023Z</updated>
<content type="html"><![CDATA[<h1 id="循环神经网络(高级篇)"><a href="#循环神经网络(高级篇)" class="headerlink" title="循环神经网络(高级篇)"></a>循环神经网络(高级篇)</h1><h2 id="例:人名分类"><a href="#例:人名分类" class="headerlink" title="例:人名分类"></a>例:人名分类</h2><h3 id="数据准备"><a href="#数据准备" class="headerlink" title="数据准备"></a>数据准备</h3><p>根据人名的英文拼写判断人所在的国家,数据形式如下:</p><p><img src="/images/13. 循环神经网络(高级篇).assets/image-20200927161615676.png" alt="image-20200927161615676" style="zoom:50%;"><br><a id="more"></a></p><p>这个问题是序列输入,而只有一个输出,若采用RNN,模型形式如下:</p><p><img src="/images/13. 循环神经网络(高级篇).assets/image-20200927161458425.png" alt="image-20200927161458425"></p><p>采用GRU的话,模型形式如下:</p><p><img src="/images/13. 循环神经网络(高级篇).assets/image-20200927161924466.png" alt="image-20200927161924466"></p><p>数据处理:</p><p>名字序列转ASCII码值,进一步表示成one-hot</p><p><img src="/images/13. 循环神经网络(高级篇).assets/image-20200927200005921.png" alt="image-20200927200005921"></p><p>对ASCII码表示的输入做padding,统一长度好形成一个张量</p><p><img src="/images/13. 循环神经网络(高级篇).assets/image-20200927200047083.png" alt="image-20200927200047083"></p><p>国家表示成数字label</p><p><img src="/images/13. 循环神经网络(高级篇).assets/image-20200927200209709.png" alt="image-20200927200209709"></p><h3 id="模型与代码"><a href="#模型与代码" class="headerlink" title="模型与代码"></a>模型与代码</h3><p>双向神经网络图:</p><p><img src="/images/13. 
循环神经网络(高级篇).assets/image-20200927214633630.png" alt="image-20200927214633630"></p><p>注:<span>$hidden=[h_N^f,h_N^b]$</span><!-- Has MathJax --></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">NameDataset</span><span class="params">(Dataset)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self, is_train_set=True)</span>:</span></span><br><span class="line"> filename = <span class="string">'data/names_train.csv.gz'</span> <span class="keyword">if</span> is_train_set <span class="keyword">else</span> <span class="string">'data/names_test.csv.gz'</span></span><br><span class="line"> <span class="keyword">with</span> gzip.open(filename, <span class="string">'rt'</span>) <span class="keyword">as</span> f:</span><br><span class="line"> reader = csv.reader(f)</span><br><span 
class="line"> rows = list(reader)</span><br><span class="line"> self.names = [row[<span class="number">0</span>] <span class="keyword">for</span> row <span class="keyword">in</span> rows]</span><br><span class="line"> self.len = len(self.names)</span><br><span class="line"> self.countries = [row[<span class="number">1</span>] <span class="keyword">for</span> row <span class="keyword">in</span> rows]</span><br><span class="line"> self.country_list = list(sorted(set(self.countries)))</span><br><span class="line"> self.country_dict = self.getCountryDict()</span><br><span class="line"> self.country_num = len(self.country_list)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__getitem__</span><span class="params">(self, index)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> self.names[index], self.country_dict[self.countries[index]]</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__len__</span><span class="params">(self)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> self.len</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">getCountryDict</span><span class="params">(self)</span>:</span></span><br><span class="line"> country_dict = dict()</span><br><span class="line"> <span class="keyword">for</span> idx, country_name <span class="keyword">in</span> enumerate(self.country_list, <span class="number">0</span>):</span><br><span class="line"> country_dict[country_name] = idx</span><br><span class="line"> <span class="keyword">return</span> country_dict</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">idx2country</span><span class="params">(self, 
index)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> self.country_list[index]</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">getCountriesNum</span><span class="params">(self)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> self.country_num</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">RNNClassifier</span><span class="params">(torch.nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span 
class="title">__init__</span><span class="params">(self, input_size, hidden_size, output_size, n_layers=<span class="number">1</span>, bidirectional=True)</span>:</span></span><br><span class="line"> super(RNNClassifier, self).__init__()</span><br><span class="line"> self.hidden_size = hidden_size</span><br><span class="line"> self.n_layers = n_layers</span><br><span class="line"> self.n_directions = <span class="number">2</span> <span class="keyword">if</span> bidirectional <span class="keyword">else</span> <span class="number">1</span></span><br><span class="line"></span><br><span class="line"> <span class="comment"># The input of Embedding Layer with shape: (seqLen, batchSize)</span></span><br><span class="line"> <span class="comment"># The ouput of Embedding Layer with shape: (seqLen, batchSize, hiddenSize)</span></span><br><span class="line"> self.embedding = torch.nn.Embedding(input_size, hidden_size)</span><br><span class="line"> self.gru = torch.nn.GRU(hidden_size, hidden_size, n_layers,</span><br><span class="line"> bidirectional=bidirectional)</span><br><span class="line"> self.fc = torch.nn.Linear(hidden_size * self.n_directions, output_size)</span><br><span class="line"> </span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">_init_hidden</span><span class="params">(self, batch_size)</span>:</span></span><br><span class="line"> hidden = torch.zeros(self.n_layers * self.n_directions,</span><br><span class="line"> batch_size, self.hidden_size)</span><br><span class="line"> <span class="keyword">return</span> create_tensor(hidden)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(self, input, seq_lengths)</span>:</span></span><br><span class="line"> <span class="comment"># input shape : B x S -> S x B</span></span><br><span class="line"> input = input.t()</span><br><span class="line"> 
batch_size = input.size(<span class="number">1</span>)</span><br><span class="line"></span><br><span class="line"> hidden = self._init_hidden(batch_size)</span><br><span class="line"> embedding = self.embedding(input)</span><br><span class="line"></span><br><span class="line"> <span class="comment"># pack them up</span></span><br><span class="line"> gru_input = pack_padded_sequence(embedding, seq_lengths) <span class="comment"># 压缩有padding的embedding,提高运算效率</span></span><br><span class="line"></span><br><span class="line"> output, hidden = self.gru(gru_input, hidden)</span><br><span class="line"> <span class="keyword">if</span> self.n_directions == <span class="number">2</span>:</span><br><span class="line"> hidden_cat = torch.cat([hidden[<span class="number">-1</span>], hidden[<span class="number">-2</span>]], dim=<span class="number">1</span>)</span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> hidden_cat = hidden[<span class="number">-1</span>]</span><br><span class="line"> fc_output = self.fc(hidden_cat)</span><br><span class="line"> <span class="keyword">return</span> fc_output</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">trainModel</span><span class="params">()</span>:</span></span><br><span class="line"> total_loss = <span 
class="number">0</span></span><br><span class="line"> <span class="keyword">for</span> i, (names, countries) <span class="keyword">in</span> enumerate(trainloader, <span class="number">1</span>):</span><br><span class="line"> inputs, seq_lengths, target = make_tensors(names, countries)</span><br><span class="line"> output = classifier(inputs, seq_lengths)</span><br><span class="line"> loss = criterion(output, target)</span><br><span class="line"> optimizer.zero_grad()</span><br><span class="line"> loss.backward()</span><br><span class="line"> optimizer.step()</span><br><span class="line"></span><br><span class="line"> total_loss += loss.item()</span><br><span class="line"> <span class="keyword">if</span> i % <span class="number">10</span> == <span class="number">0</span>:</span><br><span class="line"> print(<span class="string">f'[<span class="subst">{time_since(start)}</span>]Epoch <span class="subst">{epoch}</span>'</span>, end=<span class="string">''</span>)</span><br><span class="line"> print(<span class="string">f'[<span class="subst">{i * len(inputs)}</span>/<span class="subst">{len(trainset)}</span>]'</span>, end=<span class="string">''</span>)</span><br><span class="line"> print(<span class="string">f'loss=<span class="subst">{total_loss / (i * len(inputs))}</span>'</span>)</span><br><span class="line"> <span class="keyword">return</span> total_loss</span><br></pre></td></tr></table></figure><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td 
class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">testModel</span><span class="params">()</span>:</span></span><br><span class="line"> correct = <span class="number">0</span></span><br><span class="line"> total = len(testset)</span><br><span class="line"> print(<span class="string">"evaluating trained model ..."</span>)</span><br><span class="line"> <span class="keyword">with</span> torch.no_grad():</span><br><span class="line"> <span class="keyword">for</span> i, (names, countries) <span class="keyword">in</span> enumerate(testloader, <span class="number">1</span>):</span><br><span class="line"> inputs, seq_lengths, target = make_tensors(names, countries)</span><br><span class="line"> output = classifier(inputs, seq_lengths)</span><br><span class="line"> pred = output.max(dim=<span class="number">1</span>, keepdim=<span class="literal">True</span>)[<span class="number">1</span>]</span><br><span class="line"> correct += pred.eq(target.view_as(pred)).sum().item()</span><br><span class="line"> </span><br><span class="line"> percent = <span class="string">'%.2f'</span> % (<span class="number">100</span> * correct / total)</span><br><span class="line"> print(<span class="string">f'Test set: Asccuracy <span class="subst">{correct}</span>/<span class="subst">{total}</span> <span class="subst">{percent}</span>%'</span>)</span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> correct / total</span><br></pre></td></tr></table></figure><p><a href="https://colab.research.google.com/drive/1eQNqTMobXZaQuaOyMu-dsE_bZ9LR7q5E?usp=sharing" rel="external nofollow noopener noreferrer" target="_blank">在Colab上运行</a></p><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=13" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
<summary type="html">
<h1 id="循环神经网络(高级篇)"><a href="#循环神经网络(高级篇)" class="headerlink" title="循环神经网络(高级篇)"></a>循环神经网络(高级篇)</h1><h2 id="例:人名分类"><a href="#例:人名分类" class="headerlink" title="例:人名分类"></a>例:人名分类</h2><h3 id="数据准备"><a href="#数据准备" class="headerlink" title="数据准备"></a>数据准备</h3><p>根据人名的英文拼写判断人所在的国家,数据形式如下:</p>
<p><img src="/images/13. 循环神经网络(高级篇).assets/image-20200927161615676.png" alt="image-20200927161615676" style="zoom:50%;"><br>
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
<entry>
<title>《PyTorch深度学习实践》12. 循环神经网络(基础篇)</title>
<link href="https://www.mhlwsk.com/archives/38898.html"/>
<id>https://www.mhlwsk.com/archives/38898.html</id>
<published>2020-09-27T16:09:48.000Z</published>
<updated>2020-09-27T16:16:11.886Z</updated>
<content type="html"><![CDATA[<h1 id="循环神经网络(基础篇)"><a href="#循环神经网络(基础篇)" class="headerlink" title="循环神经网络(基础篇)"></a>循环神经网络(基础篇)</h1><p>RNN适合用来解决序列问题</p><p><img src="/images/12. 循环神经网络(基础篇).assets/image-20200926142919033.png" alt="image-20200926142919033"><br><a id="more"></a></p><h2 id="RNN-Cell和RNN"><a href="#RNN-Cell和RNN" class="headerlink" title="RNN Cell和RNN"></a>RNN Cell和RNN</h2><p><img src="/images/12. 循环神经网络(基础篇).assets/image-20200926144353929.png" alt="image-20200926144353929"></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">cell = torch.nn.RNN(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers) <span class="comment"># num_layers指RNN的隐层数目</span></span><br><span class="line">out, hidden = cell(inputs, hidden)</span><br></pre></td></tr></table></figure><p>具体输入输出的说明:</p><p><img src="/images/12. 循环神经网络(基础篇).assets/image-20200926151359710.png" alt="image-20200926151359710"></p><p>num_layers=3时的情况:</p><p><img src="/images/12. 循环神经网络(基础篇).assets/image-20200926151449627.png" alt="image-20200926151449627"></p><h2 id="RNN的输入输出向量表示"><a href="#RNN的输入输出向量表示" class="headerlink" title="RNN的输入输出向量表示"></a>RNN的输入输出向量表示</h2><p>采用one-hot向量来表示文本</p><p><img src="/images/12. 循环神经网络(基础篇).assets/image-20200926153240847.png" alt="image-20200926153240847"></p><p>对于每个RNN Cell的损失,可以用交叉熵来计算:</p><p><img src="/images/12. 
循环神经网络(基础篇).assets/image-20200926153640783.png" alt="image-20200926153640783"></p><h2 id="代码表示"><a href="#代码表示" class="headerlink" title="代码表示"></a>代码表示</h2><h3 id="采用RNN-Cell"><a href="#采用RNN-Cell" class="headerlink" title="采用RNN Cell"></a>采用RNN Cell</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">Model</span><span class="params">(torch.nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self, input_size, hidden_size, batch_size)</span>:</span></span><br><span class="line"> super(Model, self).__init__()</span><br><span class="line"> <span class="comment"># self.num_layers = num_layers</span></span><br><span class="line"> self.batch_size = batch_size</span><br><span class="line"> self.input_size = input_size</span><br><span class="line"> self.hidden_size = hidden_size</span><br><span class="line"> self.rnncell = torch.nn.RNNCell(input_size=self.input_size,</span><br><span class="line"> hidden_size=self.hidden_size)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(self, input, hidden)</span>:</span></span><br><span 
class="line"> hidden = self.rnncell(input, hidden)</span><br><span class="line"> <span class="keyword">return</span> hidden</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">init_hidden</span><span class="params">(self)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> torch.zeros(self.batch_size, self.hidden_size)</span><br><span class="line"> </span><br><span class="line">net = Model(input_size, hidden_size, batch_size)</span><br></pre></td></tr></table></figure><p>训练过程</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line">criterion = torch.nn.CrossEntropyLoss()</span><br><span class="line">optimizer = torch.optim.Adam(net.parameters(), lr=<span class="number">0.1</span>)</span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> epoch <span class="keyword">in</span> range(<span class="number">15</span>):</span><br><span class="line"> loss = <span class="number">0</span></span><br><span class="line"> optimizer.zero_grad()</span><br><span class="line"> hidden = net.init_hidden()</span><br><span class="line"> print(<span class="string">'Predicted string: '</span>, end=<span class="string">''</span>)</span><br><span class="line"> <span class="keyword">for</span> input, label <span 
class="keyword">in</span> zip(inputs, labels): <span class="comment"># Shape of inputs: (seqLen, batchSize, inputSize)</span></span><br><span class="line"> <span class="comment"># Shape of input: (batchSize, inputSize)</span></span><br><span class="line"> hidden = net(input, hidden)</span><br><span class="line"> loss += criterion(hidden, label) <span class="comment"># 注意:这里没有用.item(),而是直接构造的计算图</span></span><br><span class="line"> <span class="comment"># 因为这里算的loss是多个输出联合的loss,具体如下图</span></span><br><span class="line"> _, idx = hidden.max(dim=<span class="number">1</span>)</span><br><span class="line"> print(idx2char[idx.item()], end=<span class="string">''</span>)</span><br><span class="line"> loss.backward()</span><br><span class="line"> optimizer.step()</span><br><span class="line"> print(<span class="string">', Epoch [%d/15] loss=%.4f'</span> % (epoch+<span class="number">1</span>, loss.item()))</span><br></pre></td></tr></table></figure><p><img src="/images/12. 循环神经网络(基础篇).assets/image-20200926163316495.png" alt="image-20200926163316495"></p><h3 id="直接采用RNN"><a href="#直接采用RNN" class="headerlink" title="直接采用RNN"></a>直接采用RNN</h3><p>相比于使用RNN Cell要简化了不少,主要是不需要手动循环处理每个输入了,而是直接给出开头的<span>$h$</span><!-- Has MathJax -->和全部的输入,一次调用即可。</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span 
class="keyword">class</span> <span class="title">Model</span><span class="params">(torch.nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self, input_size, hidden_size, batch_size, num_layers=<span class="number">1</span>)</span>:</span></span><br><span class="line"> super(Model, self).__init__()</span><br><span class="line"> self.num_layers = num_layers</span><br><span class="line"> self.batch_size = batch_size</span><br><span class="line"> self.input_size = input_size</span><br><span class="line"> self.hidden_size = hidden_size</span><br><span class="line"> self.rnn= torch.nn.RNN(input_size=self.input_size,</span><br><span class="line"> hidden_size=self.hidden_size,</span><br><span class="line"> num_layers=num_layers)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(self, input)</span>:</span></span><br><span class="line"> hidden = torch.zeros(self.num_layers,</span><br><span class="line"> self.batch_size,</span><br><span class="line"> self.hidden_size)</span><br><span class="line"> out, _ = self.rnn(input, hidden)</span><br><span class="line"> <span class="keyword">return</span> out.view(<span class="number">-1</span>, self.hidden_size)</span><br><span class="line"> </span><br><span class="line">net = Model(input_size, hidden_size, batch_size, num_layers)</span><br></pre></td></tr></table></figure><p>训练过程</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span 
class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">criterion = torch.nn.CrossEntropyLoss()</span><br><span class="line">optimizer = torch.optim.Adam(net.parameters(), lr=<span class="number">0.05</span>)</span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> epoch <span class="keyword">in</span> range(<span class="number">15</span>):</span><br><span class="line"> optimizer.zero_grad()</span><br><span class="line"> outputs = net(inputs) <span class="comment"># Size of Inputs: (seqLen, batchSize, inputSize</span></span><br><span class="line"> loss = criterion(outputs, labels) <span class="comment"># Size of Outputs: (seqLen * batchSize * 1)</span></span><br><span class="line"> loss.backward()</span><br><span class="line"> optimizer.step()</span><br><span class="line"></span><br><span class="line"> _, idx = outputs.max(dim=<span class="number">1</span>)</span><br><span class="line"> idx = idx.data.numpy()</span><br><span class="line"> print(<span class="string">'Predicted: '</span>, <span class="string">''</span>.join([idx2char[x] <span class="keyword">for</span> x <span class="keyword">in</span> idx]), end=<span class="string">''</span>)</span><br><span class="line"> print(<span class="string">', Epoch [%d/15] loss = %.3f'</span> % (epoch + <span class="number">1</span>, loss.item()))</span><br></pre></td></tr></table></figure><h2 id="Embedding"><a href="#Embedding" class="headerlink" title="Embedding"></a>Embedding</h2><p>One-hot的一些问题:</p><ul><li>维度过高</li><li>矩阵稀疏</li><li>硬编码</li></ul><p>解决方案:Embedding</p><p>加入Embedding后的网络:</p><p><img src="/images/12. 
循环神经网络(基础篇).assets/image-20200926163655344.png" alt="image-20200926163655344"></p><p>其中Linear Layer是用来保证输出和label的维度一致的。</p><p>添加了Embedding层和线性层的模型代码如下:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">Model</span><span class="params">(torch.nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self)</span>:</span></span><br><span class="line"> super(Model, self).__init__()</span><br><span class="line"> self.emb = torch.nn.Embedding(input_size, embedding_size)</span><br><span class="line"> self.rnn = torch.nn.RNN(input_size=embedding_size,</span><br><span class="line"> hidden_size=hidden_size,</span><br><span class="line"> num_layers=num_layers,</span><br><span class="line"> batch_first=<span class="literal">True</span>)</span><br><span class="line"> self.fc = torch.nn.Linear(hidden_size, num_class)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(self, x)</span>:</span></span><br><span class="line"> hidden = torch.zeros(num_layers, x.size(<span class="number">0</span>), hidden_size)</span><br><span class="line"> x = self.emb(x) 
<span class="comment"># (batch, seqLen, embeddingSize)</span></span><br><span class="line"> x, _ = self.rnn(x, hidden)</span><br><span class="line"> x = self.fc(x)</span><br><span class="line"> <span class="keyword">return</span> x.view(<span class="number">-1</span>, num_class)</span><br><span class="line"></span><br><span class="line">net = Model()</span><br></pre></td></tr></table></figure><h2 id="LSTM和GRU"><a href="#LSTM和GRU" class="headerlink" title="LSTM和GRU"></a>LSTM和GRU</h2><p><img src="/images/12. 循环神经网络(基础篇).assets/image-20200926170043773.png" alt="image-20200926170043773"></p><p><img src="/images/12. 循环神经网络(基础篇).assets/image-20200926170400613.png" alt="image-20200926170400613"></p><p>GRU运算效率要高一些</p><p><a href="https://colab.research.google.com/drive/1TYKtbjRr4aWvZjLR_75wyn1mDgT9nyQ0?usp=sharing" rel="external nofollow noopener noreferrer" target="_blank">在Colab上运行</a></p><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=12" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
<summary type="html">
<h1 id="循环神经网络(基础篇)"><a href="#循环神经网络(基础篇)" class="headerlink" title="循环神经网络(基础篇)"></a>循环神经网络(基础篇)</h1><p>RNN适合用来解决序列问题</p>
<p><img src="/images/12. 循环神经网络(基础篇).assets/image-20200926142919033.png" alt="image-20200926142919033"><br>
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
<entry>
<title>《PyTorch深度学习实践》11. 卷积神经网络(高级篇)</title>
<link href="https://www.mhlwsk.com/archives/116.html"/>
<id>https://www.mhlwsk.com/archives/116.html</id>
<published>2020-09-27T16:09:37.000Z</published>
<updated>2020-09-27T16:16:11.885Z</updated>
<content type="html"><![CDATA[<h1 id="卷积神经网络(高级篇)"><a href="#卷积神经网络(高级篇)" class="headerlink" title="卷积神经网络(高级篇)"></a>卷积神经网络(高级篇)</h1><h2 id="1-1的卷积核"><a href="#1-1的卷积核" class="headerlink" title="1*1的卷积核"></a>1*1的卷积核</h2><p>下图是两种网络的构造方式,图二相比于图一多了一个1*1的卷积核层,在长宽不变的情况下减小了通道数,把运算次数减少了一个数量级。</p><p><img src="/images/11. 卷积神经网络(高级篇).assets/image-20200924112727978.png" alt="image-20200924112727978"><br><a id="more"></a></p><h2 id="GoogleNet(简化版)"><a href="#GoogleNet(简化版)" class="headerlink" title="GoogleNet(简化版)"></a>GoogleNet(简化版)</h2><p>下图为GoogleNet中的Inception模块:</p><p><img src="/images/11. 卷积神经网络(高级篇).assets/image-20200924210616539.png" alt="image-20200924210616539"></p><p>代码(注意:<code>__init__</code>中需要先调用<code>super().__init__()</code>,否则无法给Module注册子模块):</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">InceptionA</span><span class="params">(nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self, 
in_channels)</span>:</span></span><br><span class="line"> super(InceptionA, self).__init__()</span><br><span class="line"> self.branch1x1 = nn.Conv2d(in_channels, <span class="number">16</span>, kernel_size=<span class="number">1</span>)</span><br><span class="line"> </span><br><span class="line"> self.branch5x5_1 = nn.Conv2d(in_channels, <span class="number">16</span>, kernel_size=<span class="number">1</span>)</span><br><span class="line"> self.branch5x5_2 = nn.Conv2d(<span class="number">16</span>, <span class="number">24</span>, kernel_size=<span class="number">5</span>, padding=<span class="number">2</span>)</span><br><span class="line"></span><br><span class="line"> self.branch3x3_1 = nn.Conv2d(in_channels, <span class="number">16</span>, kernel_size=<span class="number">1</span>)</span><br><span class="line"> self.branch3x3_2 = nn.Conv2d(<span class="number">16</span>, <span class="number">24</span>, kernel_size=<span class="number">3</span>, padding=<span class="number">1</span>)</span><br><span class="line"> self.branch3x3_3 = nn.Conv2d(<span class="number">24</span>, <span class="number">24</span>, kernel_size=<span class="number">3</span>, padding=<span class="number">1</span>)</span><br><span class="line"></span><br><span class="line"> self.branch_pool = nn.Conv2d(in_channels, <span class="number">24</span>, kernel_size=<span class="number">1</span>)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(self, x)</span>:</span></span><br><span class="line"> branch1x1 = self.branch1x1(x)</span><br><span class="line"></span><br><span class="line"> branch5x5 = self.branch5x5_1(x)</span><br><span class="line"> branch5x5 = self.branch5x5_2(branch5x5)</span><br><span class="line"></span><br><span class="line"> branch3x3 = self.branch3x3_1(x)</span><br><span class="line"> branch3x3 = self.branch3x3_2(branch3x3)</span><br><span class="line"> branch3x3 = self.branch3x3_3(branch3x3)</span><br><span 
class="line"></span><br><span class="line"> branch_pool = F.avg_pool2d(x, kernel_size=<span class="number">3</span>, stride=<span class="number">1</span>, padding=<span class="number">1</span>)</span><br><span class="line"> branch_pool = self.branch_pool(branch_pool)</span><br><span class="line"></span><br><span class="line"> outputs = [branch1x1, branch5x5, branch3x3, branch_pool] <span class="comment"># 把各个输出根据channel拼起来,共88个channel</span></span><br><span class="line"> <span class="keyword">return</span> torch.cat(outputs, dim=<span class="number">1</span>)</span><br></pre></td></tr></table></figure><p>GoogleNet的一个简化实现:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">Net</span><span class="params">(nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self)</span>:</span></span><br><span class="line"> super(Net, self).__init__()</span><br><span class="line"> self.conv1 = nn.Conv2d(<span class="number">1</span>, <span class="number">10</span>, kernel_size=<span class="number">5</span>)</span><br><span class="line"> self.conv2 = nn.Conv2d(<span 
class="number">88</span>, <span class="number">20</span>, kernel_size=<span class="number">5</span>) <span class="comment"># 88=24+24+24+16</span></span><br><span class="line"></span><br><span class="line"> self.incep1 = InceptionA(in_channels=<span class="number">10</span>)</span><br><span class="line"> self.incep2 = InceptionA(in_channels=<span class="number">20</span>)</span><br><span class="line"></span><br><span class="line"> self.mp = nn.MaxPool2d(<span class="number">2</span>)</span><br><span class="line"> self.fc = nn.Linear(<span class="number">1408</span>, <span class="number">10</span>)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(self, x)</span>:</span></span><br><span class="line"> in_size = x.size(<span class="number">0</span>)</span><br><span class="line"> x = F.relu(self.mp(self.conv1(x)))</span><br><span class="line"> x = self.incep1(x)</span><br><span class="line"> x = F.relu(self.mp(self.conv2(x)))</span><br><span class="line"> x = self.incep2(x)</span><br><span class="line"> x = x.view(in_size, <span class="number">-1</span>)</span><br><span class="line"> x = self.fc(x)</span><br><span class="line"> <span class="keyword">return</span> x</span><br></pre></td></tr></table></figure><h2 id="梯度消失与Residual-Net"><a href="#梯度消失与Residual-Net" class="headerlink" title="梯度消失与Residual Net"></a>梯度消失与Residual Net</h2><p>随着网络层数的加深,可能会出现梯度消失的问题,更新的时候,接近输入的层参数更新会非常缓慢。对于这种情况的一种解决方案是采用<code>residual net</code>,即在做relu激活前,先把现有的函数输出与原始输入相加(要求长宽和通道数一致),这样得到<span>$H(x)=F(x)+x$</span><!-- Has MathJax -->的形式,好处是当求导时可以得到这样的形式:<span>$\frac{\partial{H(x)}}{\partial{x}}=\frac{\partial{F(x)}}{\partial{x}}+1$</span><!-- Has MathJax -->,最小值为一个接近1的数而非很小的小数,相乘时就不会产生梯度消失的问题了。</p><p><img src="/images/11. 
卷积神经网络(高级篇).assets/image-20200925153126823.png" alt="image-20200925153126823"></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">ResidualBlock</span><span class="params">(nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self, channels)</span>:</span></span><br><span class="line"> super(ResidualBlock, self).__init__()</span><br><span class="line"> self.channels = channels</span><br><span class="line"> self.conv1 = nn.Conv2d(channels, channels,</span><br><span class="line"> kernel_size=<span class="number">3</span>, padding=<span class="number">1</span>)</span><br><span class="line"> self.conv2 = nn.Conv2d(channels, channels,</span><br><span class="line"> kernel_size=<span class="number">3</span>, padding=<span class="number">1</span>)</span><br><span class="line"> </span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(self, x)</span>:</span></span><br><span class="line"> y = F.relu(self.conv1(x))</span><br><span class="line"> y = self.conv2(y)</span><br><span class="line"> <span class="keyword">return</span> F.relu(x + y) <span class="comment"># 注意是先求和后激活</span></span><br></pre></td></tr></table></figure><p>总体网络构造:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span 
class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">Net</span><span class="params">(nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self)</span>:</span></span><br><span class="line"> super(Net, self).__init__()</span><br><span class="line"> self.conv1 = nn.Conv2d(<span class="number">1</span>, <span class="number">16</span>, kernel_size=<span class="number">5</span>)</span><br><span class="line"> self.conv2 = nn.Conv2d(<span class="number">16</span>, <span class="number">32</span>, kernel_size=<span class="number">5</span>)</span><br><span class="line"> self.mp = nn.MaxPool2d(<span class="number">2</span>)</span><br><span class="line"></span><br><span class="line"> self.rblock1 = ResidualBlock(<span class="number">16</span>)</span><br><span class="line"> self.rblock2 = ResidualBlock(<span class="number">32</span>)</span><br><span class="line"></span><br><span class="line"> self.fc = nn.Linear(<span class="number">512</span>, <span class="number">10</span>)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span 
class="params">(self, x)</span>:</span></span><br><span class="line"> in_size = x.size(<span class="number">0</span>)</span><br><span class="line"> x = self.mp(F.relu(self.conv1(x)))</span><br><span class="line"> x = self.rblock1(x)</span><br><span class="line"> x = self.mp(F.relu(self.conv2(x)))</span><br><span class="line"> x = self.rblock2(x)</span><br><span class="line"> x = x.view(in_size, <span class="number">-1</span>)</span><br><span class="line"> x = self.fc(x)</span><br><span class="line"> <span class="keyword">return</span> x</span><br></pre></td></tr></table></figure><p><a href="https://colab.research.google.com/drive/1ukDiQ3Rz6wRyiRPduJQbEuguHOtIWfPU?usp=sharing" rel="external nofollow noopener noreferrer" target="_blank">在Colab上运行</a></p><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=11" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
<summary type="html">
<h1 id="卷积神经网络(高级篇)"><a href="#卷积神经网络(高级篇)" class="headerlink" title="卷积神经网络(高级篇)"></a>卷积神经网络(高级篇)</h1><h2 id="1-1的卷积核"><a href="#1-1的卷积核" class="headerlink" title="1*1的卷积核"></a>1*1的卷积核</h2><p>下图是两种网络的构造方式,图二相比于图一多了一个1*1的卷积核层,在长宽不变的情况下减小了通道数,把运算次数减少了一个数量级。</p>
<p><img src="/images/11. 卷积神经网络(高级篇).assets/image-20200924112727978.png" alt="image-20200924112727978"><br>
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
<entry>
<title>《PyTorch深度学习实践》10. 卷积神经网络(基础篇)</title>
<link href="https://www.mhlwsk.com/archives/57592.html"/>
<id>https://www.mhlwsk.com/archives/57592.html</id>
<published>2020-09-27T16:09:25.000Z</published>
<updated>2020-09-27T16:16:11.884Z</updated>
<content type="html"><![CDATA[<h1 id="卷积神经网络(基础篇)"><a href="#卷积神经网络(基础篇)" class="headerlink" title="卷积神经网络(基础篇)"></a>卷积神经网络(基础篇)</h1><p>朴素的全连接神经网络在处理图像时会丢失图像的位置信息(因为每个像素都被视为无差别的输入)。卷积神经网络(CNN)可以解决这个问题。<br><a id="more"></a></p><h2 id="CNN基本结构"><a href="#CNN基本结构" class="headerlink" title="CNN基本结构"></a>CNN基本结构</h2><p><img src="/images/10. 卷积神经网络(基础篇).assets/image-20200921210317889.png" alt="image-20200921210317889"></p><p>基本结构分两部分,第一部分是特征提取器,主要是卷积层和池化层;第二部分是分类器,主要是全连接层,后面跟一个softmax层用来分类。</p><p>补充:栅格图像:最常见的图像形式,用像素表示图(如RGB,或灰度)。与之相对的是矢量图像。</p><p>单输入通道下的卷积计算:</p><p><img src="/images/10. 卷积神经网络(基础篇).assets/image-20200922201817860.png" alt="image-20200922201817860"></p><p>注意:输入与卷积核相乘的时候是数乘而非矩阵乘,即对应位置的元素相乘。</p><p>输入三通道输出一通道下的卷积运算:</p><p><img src="/images/10. 卷积神经网络(基础篇).assets/image-20200922202457869.png" alt="image-20200922202457869"></p><p><img src="/images/10. 卷积神经网络(基础篇).assets/image-20200922202601746.png" alt="image-20200922202601746"></p><p><img src="/images/10. 卷积神经网络(基础篇).assets/image-20200922202706970.png" alt="image-20200922202706970"></p><p>注意卷积核的channel数与输入的channel数保持一致</p><p>多通道输入多通道输出的情况,采用了多个卷积核,卷积核的数目与输出通道数一致:</p><p><img src="/images/10. 
卷积神经网络(基础篇).assets/image-20200922202900241.png" alt="image-20200922202900241"></p><p>多通道输入多通道输出的一个例子:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> torch</span><br><span class="line">in_channels, out_channels = <span class="number">5</span>, <span class="number">10</span></span><br><span class="line">width, height = <span class="number">100</span>, <span class="number">100</span></span><br><span class="line">kernel_size = <span class="number">3</span></span><br><span class="line">batch_size = <span class="number">1</span></span><br><span class="line"></span><br><span class="line">input = torch.randn(batch_size,</span><br><span class="line"> in_channels,</span><br><span class="line"> width,</span><br><span class="line"> height)</span><br><span class="line"></span><br><span class="line">conv_layer = torch.nn.Conv2d(in_channels,</span><br><span class="line"> out_channels,</span><br><span class="line"> kernel_size=kernel_size)</span><br><span class="line"></span><br><span class="line">output = conv_layer(input)</span><br><span class="line"></span><br><span class="line">print(input.shape)<span class="comment"># torch.Size([1, 5, 100, 100])</span></span><br><span class="line">print(output.shape) <span class="comment"># torch.Size([1, 10, 98, 
98])</span></span><br><span class="line">print(conv_layer.weight.shape)<span class="comment"># torch.Size([10, 5, 3, 3])</span></span><br></pre></td></tr></table></figure><h2 id="padding-stride-pooling"><a href="#padding-stride-pooling" class="headerlink" title="padding, stride, pooling"></a>padding, stride, pooling</h2><p>根据对输出大小的需求,如希望输出的宽高与输入保持不变,可以给输入加上<code>padding</code>。</p><p><img src="/images/10. 卷积神经网络(基础篇).assets/image-20200923162830909.png" alt="image-20200923162830909"></p><p><img src="/images/10. 卷积神经网络(基础篇).assets/image-20200923162931256.png" alt="image-20200923162931256"></p><p>上图的代码实现:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">input = [<span class="number">3</span>, <span class="number">4</span>, <span class="number">6</span>, <span class="number">5</span>, <span class="number">7</span>,</span><br><span class="line"> <span class="number">2</span>, <span class="number">4</span>, <span class="number">6</span>, <span class="number">8</span>, <span class="number">2</span>,</span><br><span class="line"> <span class="number">1</span>, <span class="number">6</span>, <span class="number">7</span>, <span class="number">8</span>, <span class="number">4</span>,</span><br><span class="line"> <span class="number">9</span>, <span class="number">7</span>, <span class="number">4</span>, <span class="number">6</span>, <span class="number">2</span>,</span><br><span class="line"> <span class="number">3</span>, <span class="number">7</span>, <span 
class="number">5</span>, <span class="number">4</span>, <span class="number">1</span>]</span><br><span class="line">input = torch.Tensor(input).view(<span class="number">1</span>, <span class="number">1</span>, <span class="number">5</span>, <span class="number">5</span>)</span><br><span class="line"></span><br><span class="line">conv_layer = torch.nn.Conv2d(<span class="number">1</span>, <span class="number">1</span>, kernel_size=<span class="number">3</span>, padding=<span class="number">1</span>, bias=<span class="literal">False</span>)</span><br><span class="line"></span><br><span class="line">kernel = torch.Tensor([<span class="number">1</span>, <span class="number">2</span>, <span class="number">3</span>, <span class="number">4</span>, <span class="number">5</span>, <span class="number">6</span>, <span class="number">7</span>, <span class="number">8</span>, <span class="number">9</span>]).view(<span class="number">1</span>, <span class="number">1</span>, <span class="number">3</span>, <span class="number">3</span>)</span><br><span class="line">conv_layer.weight.data = kernel.data</span><br><span class="line"></span><br><span class="line">output = conv_layer(input)</span><br><span class="line">print(output)</span><br></pre></td></tr></table></figure><p>除了<code>padding</code>,还有一个参数<code>stride</code>,即步长,是用来减小feature的宽度和高度的。</p><p><img src="/images/10. 
卷积神经网络(基础篇).assets/image-20200923165029612.png" alt="image-20200923165029612"></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> torch</span><br><span class="line">input = [<span class="number">3</span>, <span class="number">4</span>, <span class="number">6</span>, <span class="number">5</span>, <span class="number">7</span>,</span><br><span class="line"> <span class="number">2</span>, <span class="number">4</span>, <span class="number">6</span>, <span class="number">8</span>, <span class="number">2</span>,</span><br><span class="line"> <span class="number">1</span>, <span class="number">6</span>, <span class="number">7</span>, <span class="number">8</span>, <span class="number">4</span>,</span><br><span class="line"> <span class="number">9</span>, <span class="number">7</span>, <span class="number">4</span>, <span class="number">6</span>, <span class="number">2</span>,</span><br><span class="line"> <span class="number">3</span>, <span class="number">7</span>, <span class="number">5</span>, <span class="number">4</span>, <span class="number">1</span>]</span><br><span class="line">input = torch.Tensor(input).view(<span class="number">1</span>, <span class="number">1</span>, <span class="number">5</span>, <span class="number">5</span>)</span><br><span class="line"></span><br><span class="line">conv_layer = torch.nn.Conv2d(<span class="number">1</span>, <span class="number">1</span>, 
kernel_size=<span class="number">3</span>, stride=<span class="number">2</span>, bias=<span class="literal">False</span>)</span><br><span class="line"></span><br><span class="line">kernel = torch.Tensor([<span class="number">1</span>, <span class="number">2</span>, <span class="number">3</span>, <span class="number">4</span>, <span class="number">5</span>, <span class="number">6</span>, <span class="number">7</span>, <span class="number">8</span>, <span class="number">9</span>]).view(<span class="number">1</span>, <span class="number">1</span>, <span class="number">3</span>, <span class="number">3</span>)</span><br><span class="line">conv_layer.weight.data = kernel.data</span><br><span class="line"></span><br><span class="line">output = conv_layer(input)</span><br><span class="line">print(output)</span><br></pre></td></tr></table></figure><p>下采样pooling</p><p><img src="/images/10. 卷积神经网络(基础篇).assets/image-20200923165618588.png" alt="image-20200923165618588"></p><p><img src="/images/10. 卷积神经网络(基础篇).assets/image-20200923170228115.png" alt="image-20200923170228115"></p><p><a href="https://colab.research.google.com/drive/19v6Om4bDRPj4xfyj316_aS55vKMPJaCb?usp=sharing" rel="external nofollow noopener noreferrer" target="_blank">在Colab上运行</a></p><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=10" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
<summary type="html">
<h1 id="卷积神经网络(基础篇)"><a href="#卷积神经网络(基础篇)" class="headerlink" title="卷积神经网络(基础篇)"></a>卷积神经网络(基础篇)</h1><p>朴素的全连接神经网络在处理图像时会丢失图像的位置信息(因为每个像素都被视为无差别的输入)。卷积神经网络(CNN)可以解决这个问题。<br>
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
<entry>
<title>《PyTorch深度学习实践》9. 多分类问题</title>
<link href="https://www.mhlwsk.com/archives/59414.html"/>
<id>https://www.mhlwsk.com/archives/59414.html</id>
<published>2020-09-19T12:35:10.000Z</published>
<updated>2020-09-19T12:45:42.204Z</updated>
<content type="html"><![CDATA[<h1 id="多分类问题"><a href="#多分类问题" class="headerlink" title="多分类问题"></a>多分类问题</h1><p>N分类问题(N>2)最朴素的解决方案是拆分为N个二分类问题。但这样做的问题是不同类别间应该是互斥的(或者说相互抑制),直接拆分的话可能每个类别的概率都很高(或者加起来不等于1)。<br><a id="more"></a></p><h2 id="Softmax-Layer"><a href="#Softmax-Layer" class="headerlink" title="Softmax Layer"></a>Softmax Layer</h2><p>假定<span>$z^l\in\mathbb{R}^K$</span><!-- Has MathJax --> 是最后一个线性层的输出,softmax layer的形式为:</p><span>$P(y=i)=\frac{e^{z_i}}{\sum_{j=0}^{K-1}e^{z_j}},i\in\{0,\dots,K-1\}$</span><!-- Has MathJax --><p>Softmax layer的好处是可以把最后一层的输出归一化为每一类[0,1]的概率,且概率和为1</p><p>注:之前的朴素二分类问题,因为最后一层以sigmoid函数结尾,只有一个[0,1]的输出,本身符合概率定义,所以不需要用softmax。损失的计算直接用二类交叉熵就行了。</p><h2 id="Softmax的损失函数"><a href="#Softmax的损失函数" class="headerlink" title="Softmax的损失函数"></a>Softmax的损失函数</h2><p>Softmax层后接的损失函数仍然可以用交叉熵, 如图(<code>NLLLoss, negative log likelihood loss</code>):</p><p><img src="/images/9. 多分类问题.assets/image-20200919112307060.png" alt="image-20200919112307060"></p><p>具体公式为<span>$Loss(\hat{Y},Y)=-Y\log\hat{Y}$</span><!-- Has MathJax -->,上图中最右侧的<strong>1</strong>为label,经过一个binarizer转化为one-hot表示,即一个维数为label类别总数的01向量,其中只有一维为1。这个向量作为<span>$Y$</span><!-- Has MathJax -->,与softmax层的预测值<span>$\hat{Y}$</span><!-- Has MathJax -->求交叉熵。</p><p>补充一下这里采用的交叉熵公式:</p><span>$-\sum_{c=1}^My_{o,c}\log(p_{o,c})$</span><!-- Has MathJax --><ul><li>M - number of classes (dog, cat, fish)</li><li>log - the natural log</li><li>y - <strong>binary indicator (0 or 1)</strong> if class label <span>$c$</span><!-- Has MathJax --> is the correct classification for observation <span>$o$</span><!-- Has MathJax --></li><li>p - predicted probability observation <span>$o$</span><!-- Has MathJax --> is of class <span>$c$</span><!-- Has MathJax --></li></ul><p>所以这里的交叉熵结果其实就是正确的那个类别预测概率(正确类别的softmax输出值)取log的负值,其它类别的概率都乘了0。</p><p>实现的基本原理:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span 
class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line">y = np.array([<span class="number">1</span>, <span class="number">0</span>, <span class="number">0</span>])</span><br><span class="line">z = np.array([<span class="number">0.2</span>, <span class="number">0.1</span>, <span class="number">-0.1</span>])</span><br><span class="line">y_pred = np.exp(z) / np.exp(z).sum()</span><br><span class="line">loss = (-y * np.log(y_pred)).sum()</span><br><span class="line">print(loss)</span><br></pre></td></tr></table></figure><p>采用torch中提供的<code>CrossEntropyLoss</code>的实现方式:</p><p><img src="/images/9. 多分类问题.assets/image-20200919160853359.png" alt="image-20200919160853359"></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> torch</span><br><span class="line">y = torch.LongTensor([<span class="number">0</span>])</span><br><span class="line">z = torch.Tensor([[<span class="number">0.2</span>, <span class="number">0.1</span>, <span class="number">-0.1</span>]])</span><br><span class="line">criterion = torch.nn.CrossEntropyLoss()</span><br><span class="line">loss = criterion(z, y)</span><br><span class="line">print(loss)</span><br></pre></td></tr></table></figure><p>一个更具体的例子:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span 
class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> torch</span><br><span class="line">criterion = torch.nn.CrossEntropyLoss()</span><br><span class="line"></span><br><span class="line">Y = torch.LongTensor([<span class="number">2</span>, <span class="number">0</span>, <span class="number">1</span>]) <span class="comment"># 对应3个样本的label</span></span><br><span class="line"></span><br><span class="line"><span class="comment"># 第一个模型的预测输出(Softmax)</span></span><br><span class="line">Y_pred1 = torch.Tensor([[<span class="number">0.1</span>, <span class="number">0.2</span>, <span class="number">0.9</span>], <span class="comment"># 下标2的预测概率最大,实际label为2</span></span><br><span class="line"> [<span class="number">1.1</span>, <span class="number">0.1</span>, <span class="number">0.2</span>], <span class="comment"># 下标0的预测概率最大,实际label为0</span></span><br><span class="line"> [<span class="number">0.2</span>, <span class="number">2.1</span>, <span class="number">0.1</span>]]) <span class="comment"># 下标1的预测概率最大,实际label为1</span></span><br><span class="line"><span class="comment"># 另一个模型的预测输出(Softmax)</span></span><br><span class="line">Y_pred2 = torch.Tensor([[<span class="number">0.8</span>, <span class="number">0.2</span>, <span class="number">0.3</span>], <span class="comment"># 下标0的预测概率最大,实际label为2</span></span><br><span class="line"> [<span class="number">0.2</span>, <span class="number">0.3</span>, <span class="number">0.5</span>], <span class="comment"># 下标2的预测概率最大,实际label为0</span></span><br><span class="line"> [<span class="number">0.2</span>, <span class="number">0.2</span>, <span 
class="number">0.5</span>]]) <span class="comment"># 下标2的预测概率最大,实际label为1</span></span><br><span class="line"></span><br><span class="line">l1 = criterion(Y_pred1, Y)</span><br><span class="line">l2 = criterion(Y_pred2, Y)</span><br><span class="line"><span class="comment"># Batch Loss1 = tensor(0.4966) </span></span><br><span class="line"><span class="comment"># Batch Loss2= tensor(1.2389)</span></span><br><span class="line">print(<span class="string">"Batch Loss1 = "</span>, l1.data, <span class="string">"\nBatch Loss2="</span>, l2.data)</span><br></pre></td></tr></table></figure><p>补充:PyTorch中<code>CrossEntropyLoss</code>和<code>NLLLoss</code>的关系:</p><p><code>CrossEntropyLoss<==>LogSoftmax+NLLLoss</code></p><h2 id="实例:MNIST数据集训练"><a href="#实例:MNIST数据集训练" class="headerlink" title="实例:MNIST数据集训练"></a>实例:MNIST数据集训练</h2><ul><li><p>Prepare dataset</p><p><strong>Dataset and Dataloader</strong></p></li><li><p>Design model using Class</p><ul><li>Inherit from nn.Module</li></ul></li><li><p>Construct loss and optimizer</p></li><li><p>Training cycle <strong>+ Test</strong></p><ul><li>forward, backward, update</li></ul></li></ul><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span 
class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br></pre></td><td class="code"><pre><span class="line"><span 
class="keyword">import</span> torch</span><br><span class="line"><span class="keyword">from</span> torchvision <span class="keyword">import</span> transforms</span><br><span class="line"><span class="keyword">from</span> torchvision <span class="keyword">import</span> datasets</span><br><span class="line"><span class="keyword">from</span> torch.utils.data <span class="keyword">import</span> DataLoader</span><br><span class="line"><span class="keyword">import</span> torch.nn.functional <span class="keyword">as</span> F <span class="comment"># 这里用了ReLU而非Sigmoid</span></span><br><span class="line"><span class="keyword">import</span> torch.optim <span class="keyword">as</span> optim</span><br><span class="line"></span><br><span class="line">batch_size = <span class="number">64</span></span><br><span class="line">transform = transforms.Compose([</span><br><span class="line"> transforms.ToTensor(),</span><br><span class="line"> transforms.Normalize((<span class="number">0.1307</span>, ), (<span class="number">0.3081</span>, ))</span><br><span class="line"> <span class="comment"># The parameters are mean and std respectively. 
</span></span><br><span class="line">])</span><br><span class="line"></span><br><span class="line">train_dataset = datasets.MNIST(root=<span class="string">'../dataset/mnist/'</span>,</span><br><span class="line"> train=<span class="literal">True</span>,</span><br><span class="line"> download=<span class="literal">True</span>,</span><br><span class="line"> transform=transform)</span><br><span class="line">train_loader = DataLoader(train_dataset,</span><br><span class="line"> shuffle=<span class="literal">True</span>,</span><br><span class="line"> batch_size=batch_size)</span><br><span class="line">test_dataset = datasets.MNIST(root=<span class="string">"../dataset/mnist/"</span>,</span><br><span class="line"> train=<span class="literal">False</span>,</span><br><span class="line"> download=<span class="literal">True</span>,</span><br><span class="line"> transform=transform)</span><br><span class="line">test_loader = DataLoader(test_dataset,</span><br><span class="line"> shuffle=<span class="literal">False</span>,</span><br><span class="line"> batch_size=batch_size)</span><br><span class="line"></span><br><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">Net</span><span class="params">(torch.nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self)</span>:</span></span><br><span class="line"> super(Net, self).__init__()</span><br><span class="line"> self.l1 = torch.nn.Linear(<span class="number">784</span>, <span class="number">512</span>)</span><br><span class="line"> self.l2 = torch.nn.Linear(<span class="number">512</span>, <span class="number">256</span>)</span><br><span class="line"> self.l3 = torch.nn.Linear(<span class="number">256</span>, <span class="number">128</span>)</span><br><span class="line"> self.l4 = torch.nn.Linear(<span class="number">128</span>, <span 
class="number">64</span>)</span><br><span class="line"> self.l5 = torch.nn.Linear(<span class="number">64</span>, <span class="number">10</span>)</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(self, x)</span>:</span></span><br><span class="line"> x = x.view(<span class="number">-1</span>, <span class="number">784</span>)</span><br><span class="line"> x = F.relu(self.l1(x))</span><br><span class="line"> x = F.relu(self.l2(x))</span><br><span class="line"> x = F.relu(self.l3(x))</span><br><span class="line"> x = F.relu(self.l4(x))</span><br><span class="line"> <span class="keyword">return</span> self.l5(x) <span class="comment"># 注意:返回的是一个线性层的输出,没有经过激活函数</span></span><br><span class="line"> </span><br><span class="line">model = Net()</span><br><span class="line"></span><br><span class="line">criterion = torch.nn.CrossEntropyLoss()</span><br><span class="line">optimizer = optim.SGD(model.parameters(), lr=<span class="number">0.01</span>, momentum=<span class="number">0.5</span>)</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">train</span><span class="params">(epoch)</span>:</span></span><br><span class="line"> running_loss = <span class="number">0.0</span></span><br><span class="line"> <span class="keyword">for</span> batch_idx, data <span class="keyword">in</span> enumerate(train_loader, <span class="number">0</span>):</span><br><span class="line"> inputs, target = data</span><br><span class="line"> optimizer.zero_grad()</span><br><span class="line"></span><br><span class="line"> <span class="comment"># forward + backward + update</span></span><br><span class="line"> outputs = model(inputs)</span><br><span class="line"> loss = criterion(outputs, target)</span><br><span class="line"> loss.backward()</span><br><span class="line"> 
optimizer.step()</span><br><span class="line"></span><br><span class="line"> running_loss += loss.item()</span><br><span class="line"> <span class="keyword">if</span> batch_idx % <span class="number">300</span> == <span class="number">299</span>:</span><br><span class="line"> print(<span class="string">"[%d, %5d] loss: %.3f"</span> % (epoch + <span class="number">1</span>, batch_idx + <span class="number">1</span>, running_loss / <span class="number">300</span>))</span><br><span class="line"> running_loss = <span class="number">0.0</span></span><br><span class="line"> </span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">test</span><span class="params">()</span>:</span></span><br><span class="line"> correct = <span class="number">0</span></span><br><span class="line"> total = <span class="number">0</span></span><br><span class="line"> <span class="keyword">with</span> torch.no_grad(): <span class="comment"># 测试不需要求梯度</span></span><br><span class="line"> <span class="keyword">for</span> data <span class="keyword">in</span> test_loader:</span><br><span class="line"> images, labels = data</span><br><span class="line"> outputs = model(images)</span><br><span class="line"> _, predicted = torch.max(outputs.data, dim=<span class="number">1</span>) <span class="comment"># 找最大概率的下标</span></span><br><span class="line"> total += labels.size(<span class="number">0</span>)</span><br><span class="line"> correct += (predicted == labels).sum().item()</span><br><span class="line"> print(<span class="string">"Accuracy on test set: %d %%"</span> % (<span class="number">100</span> * correct / total))</span><br><span class="line"> </span><br><span class="line"><span class="keyword">if</span> __name__ == <span class="string">"__main__"</span>:</span><br><span class="line"> <span class="keyword">for</span> epoch <span class="keyword">in</span> range(<span class="number">10</span>):</span><br><span class="line"> 
train(epoch)</span><br><span class="line"> test()</span><br></pre></td></tr></table></figure><p><a href="https://colab.research.google.com/drive/1sNj8uUXYB5p-AzY9HKGJR2HF7RgcCnjF?usp=sharing" rel="external nofollow noopener noreferrer" target="_blank">在Colab上运行</a></p><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=9" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
<summary type="html">
<h1 id="多分类问题"><a href="#多分类问题" class="headerlink" title="多分类问题"></a>多分类问题</h1><p>N分类问题(N&gt;2)最朴素的解决方案是拆分为N个二分类问题。但这样做的问题是不同类别间应该是互斥的(或者说相互抑制),直接拆分的话可能每个类别的概率都很高(或者加起来不等于1)。<br>
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
<entry>
<title>《PyTorch深度学习实践》8. 加载数据集</title>
<link href="https://www.mhlwsk.com/archives/6973.html"/>
<id>https://www.mhlwsk.com/archives/6973.html</id>
<published>2020-09-19T12:31:26.000Z</published>
<updated>2020-09-19T12:45:42.203Z</updated>
<content type="html"><![CDATA[<h1 id="加载数据集"><a href="#加载数据集" class="headerlink" title="加载数据集"></a>加载数据集</h1><p>Dataset和Dataloader的用法</p><h2 id="两个极端"><a href="#两个极端" class="headerlink" title="两个极端"></a>两个极端</h2><ol><li><p>直接用全部的数据集训练(Batch):最大化利用向量计算优势,但消耗资源过大</p></li><li><p>每次只用一个样本训练模型:具有较好的随机性,有助于跨越鞍点,但并行化差,计算效率太低(存疑:batch_size过小,每个mini-batch的样本数据将没有统计意义)</p></li></ol><p>因此引入Mini-Batch,可以均衡训练的速度和训练效果<br><a id="more"></a></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># Training cycle</span></span><br><span class="line"><span class="keyword">for</span> epoch <span class="keyword">in</span> range(training_epochs):</span><br><span class="line"> <span class="comment"># Loop over all batches</span></span><br><span class="line"> <span class="keyword">for</span> i <span class="keyword">in</span> range(total_batch):</span><br></pre></td></tr></table></figure><h2 id="一些概念"><a href="#一些概念" class="headerlink" title="一些概念"></a>一些概念</h2><ul><li>Epoch:所有样本都参与了一次训练</li><li>Batch-Size:一次forward-backward pass中用的样本数量</li><li>Iteration:内层迭代一共进行了多少次,即pass的数目</li></ul><h2 id="DataLoader用法"><a href="#DataLoader用法" class="headerlink" title="DataLoader用法"></a>DataLoader用法</h2><p>这是训练的四个步骤中Prepare dataset这一步</p><ul><li><p>Prepare dataset</p><p><strong>Dataset and Dataloader</strong></p></li><li><p>Design model using Class</p><ul><li>Inherit from nn.Module</li></ul></li><li><p>Construct loss and optimizer</p></li><li><p>Training cycle</p><ul><li>forward, backward, update</li></ul></li></ul><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span 
class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> torch</span><br><span class="line"><span class="keyword">from</span> torch.utils.data <span class="keyword">import</span> Dataset <span class="comment"># Dataset是一个抽象类,不能直接实例化,必须被继承</span></span><br><span class="line"><span class="keyword">from</span> torch.utils.data <span class="keyword">import</span> DataLoader</span><br><span class="line"></span><br><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">DiabetesDataset</span><span class="params">(Dataset)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self)</span>:</span></span><br><span class="line"> <span class="keyword">pass</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__getitem__</span><span class="params">(self, index)</span>:</span> <span class="comment"># 这是一个magic function, dataset[index]会调用这个函数</span></span><br><span class="line"> <span class="keyword">pass</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__len__</span><span class="params">(self)</span>:</span> <span class="comment"># 这个也是magic 
function,在调用len(dataset)的时候被调用</span></span><br><span class="line"> <span class="keyword">pass</span></span><br><span class="line"></span><br><span class="line">dataset = DiabetesDataset()</span><br><span class="line">train_loader = DataLoader(dataset=dataset,</span><br><span class="line"> batch_size=<span class="number">32</span>,</span><br><span class="line"> shuffle=<span class="literal">True</span>, <span class="comment"># 是否打乱数据</span></span><br><span class="line"> num_workers=<span class="number">2</span>) <span class="comment"># 几个进程来读数据</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> epoch <span class="keyword">in</span> range(<span class="number">100</span>):</span><br><span class="line"> <span class="keyword">for</span> i, data <span class="keyword">in</span> enumerate(train_loader, <span class="number">0</span>): <span class="comment"># 从train_loader中读取mini-batch</span></span><br><span class="line"> <span class="comment"># 0表示从0开始枚举</span></span><br><span class="line"> <span class="comment"># 1. Prepare data</span></span><br><span class="line"> inputs, labels = data</span><br><span class="line"> <span class="comment"># 2. Forward</span></span><br><span class="line"> <span class="comment"># 3. Backward</span></span><br><span class="line"> <span class="comment"># 4. Update</span></span><br></pre></td></tr></table></figure><h2 id="一些现成数据集"><a href="#一些现成数据集" class="headerlink" title="一些现成数据集"></a>一些现成数据集</h2><p><code>torchvision</code>包中带有众多已有数据集</p><p><a href="https://colab.research.google.com/drive/1Ietg9GoeynxPFfbSebMaIXiegpbyJQd6?usp=sharing" rel="external nofollow noopener noreferrer" target="_blank">在Colab上运行</a></p><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=8" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
<summary type="html">
<h1 id="加载数据集"><a href="#加载数据集" class="headerlink" title="加载数据集"></a>加载数据集</h1><p>Dataset和Dataloader的用法</p>
<h2 id="两个极端"><a href="#两个极端" class="headerlink" title="两个极端"></a>两个极端</h2><ol>
<li><p>直接用全部的数据集训练(Batch):最大化利用向量计算优势,但消耗资源过大</p>
</li>
<li><p>每次只用一个样本训练模型:具有较好的随机性,有助于跨越鞍点,但并行化差,计算效率太低(存疑:batch_size过小,每个mini-batch的样本数据将没有统计意义)</p>
</li>
</ol>
<p>因此引入Mini-Batch,可以均衡训练的速度和训练效果<br>
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
<entry>
<title>《PyTorch深度学习实践》7. 处理多维特征的输入</title>
<link href="https://www.mhlwsk.com/archives/38243.html"/>
<id>https://www.mhlwsk.com/archives/38243.html</id>
<published>2020-09-18T08:58:27.000Z</published>
<updated>2020-09-19T12:42:02.525Z</updated>
<content type="html"><![CDATA[<h1 id="处理多维特征的输入"><a href="#处理多维特征的输入" class="headerlink" title="处理多维特征的输入"></a>处理多维特征的输入</h1><h2 id="常见数据格式"><a href="#常见数据格式" class="headerlink" title="常见数据格式"></a>常见数据格式</h2><ul><li>每一行是一条<strong>记录(Record)</strong></li><li>每一列是一个<strong>特征/字段(Feature)</strong></li></ul><p><img src="/images/7. 处理多维特征的输入.assets/image-20200918142954530.png" alt="image-20200918142954530"><br><a id="more"></a></p><h2 id="Logistic-Regression-Model"><a href="#Logistic-Regression-Model" class="headerlink" title="Logistic Regression Model"></a>Logistic Regression Model</h2><p>首先演示在数据有多维特征时采用Logistic Regression Model的公式形式:</p><span>$\hat{y}^{(i)}=\sigma(\sum_{n=1}^8x_n^{(i)}\cdot\omega_n+b)$</span><!-- Has MathJax --><p>下图是上面公式的矩阵图示,其中<span>$\sigma$</span><!-- Has MathJax -->为sigmoid函数:</p><p><img src="/images/7. 处理多维特征的输入.assets/image-20200918144651810.png" alt="image-20200918144651810"></p><p>对于右下角的矩阵等式,<span>$N\times8$</span><!-- Has MathJax -->的那个矩阵可以理解为每一行一个record,每一列一个feature,<span>$N\times1$</span><!-- Has MathJax -->的<strong>b</strong>矩阵是b广播而来的</p><p>8维特征,一层线性模型:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">Model</span><span class="params">(torch.nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self)</span>:</span></span><br><span class="line"> super(Model, self).__init__()</span><br><span class="line"> self.linear = torch.nn.Linear(<span 
class="number">8</span>, <span class="number">1</span>) <span class="comment"># 8维输入1维输出,同时可以有N条记录</span></span><br><span class="line"> self.sigmoid = torch.nn.Sigmoid()</span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(self, x)</span>:</span></span><br><span class="line"> x = self.sigmoid(self.linear(x))</span><br><span class="line"> <span class="keyword">return</span> x</span><br><span class="line"></span><br><span class="line">model = Model()</span><br></pre></td></tr></table></figure><p>其中<code>self.linear=torch.nn.Linear(8,1)</code>是输出为一维线性模型。输出改成其它维度,如改为<code>self.linear=torch.nn.Linear(8,6)</code>,模型可视化如下:</p><p><img src="/images/7. 处理多维特征的输入.assets/image-20200918152718584.png" alt="image-20200918152718584"></p><p>多个不同维度的线性层(以激活函数结束)串联在一起,得到一个简单的神经网络:</p><p><img src="/images/7. 处理多维特征的输入.assets/image-20200918154823946.png" alt="image-20200918154823946"></p><p><img src="/images/7. 处理多维特征的输入.assets/image-20200918155750697.png" alt="image-20200918155750697"></p><p>层越多参数越多,学习能力越强,但学习能力不是越强越好,太强的话会把噪声都学进去了,没有泛化能力</p><p>上面的神经网络图示:</p><p><img src="/images/7. 
处理多维特征的输入.assets/image-20200919090443557.png" alt="image-20200919090443557"></p><h2 id="示例:Diabetes-Prediction"><a href="#示例:Diabetes-Prediction" class="headerlink" title="示例:Diabetes Prediction"></a>示例:Diabetes Prediction</h2><p>四个步骤:</p><ul><li>Prepare dataset</li><li>Design model using Class<ul><li>Inherit from nn.Module</li></ul></li><li>Construct loss and optimizer</li><li>Training cycle<ul><li>forward, backward, update</li></ul></li></ul><h3 id="Prepare-dataset"><a href="#Prepare-dataset" class="headerlink" title="Prepare dataset"></a>Prepare dataset</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment"># 数据集链接: https://pan.baidu.com/s/1Ku5c99yDHNFMt8EJAcF5LA 提取码: n4xh</span></span><br><span class="line">xy = np.loadtxt(<span class="string">'diabetes.csv'</span>, delimiter=<span class="string">','</span>, dtype=np.float32) <span class="comment"># 也可不解压直接用diabetes.csv.gz</span></span><br><span class="line"> <span class="comment"># 不建议用double类型,神经网络一般用float32</span></span><br><span class="line">x_data = torch.from_numpy(xy[:, :<span class="number">-1</span>])</span><br><span class="line">y_data = torch.from_numpy(xy[:, [<span class="number">-1</span>]]) <span class="comment"># 注意-1上有[],这样能保证拿出来的是一个矩阵,否则成向量了</span></span><br></pre></td></tr></table></figure><h3 id="Design-model-using-Class"><a href="#Design-model-using-Class" class="headerlink" title="Design model using Class"></a>Design model using Class</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span 
class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">Model</span><span class="params">(torch.nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self)</span>:</span></span><br><span class="line"> super(Model, self).__init__()</span><br><span class="line"> self.linear1 = torch.nn.Linear(<span class="number">8</span>, <span class="number">6</span>)</span><br><span class="line"> self.linear2 = torch.nn.Linear(<span class="number">6</span>, <span class="number">4</span>)</span><br><span class="line"> self.linear3 = torch.nn.Linear(<span class="number">4</span>, <span class="number">1</span>)</span><br><span class="line"> self.sigmoid = torch.nn.Sigmoid() <span class="comment"># 注意: 1. 这里的sigmoid与torch.sigmoid()不太一样,</span></span><br><span class="line"> <span class="comment"># 后者是一个函数,这里的是一个类</span></span><br><span class="line"> <span class="comment"># 2. 
sigmoid没有需要训练的参数,所以定义一个</span></span><br><span class="line">                                           <span class="comment"># 就行了 sigmoid: 1/(1+e^(-z))</span></span><br><span class="line">                                           <span class="comment"># 如果要换激活函数,如ReLU,可以像这样:</span></span><br><span class="line">                                           <span class="comment"># self.activate = torch.nn.ReLU()</span></span><br><span class="line">        </span><br><span class="line">    <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(self, x)</span>:</span></span><br><span class="line">        x = self.sigmoid(self.linear1(x))</span><br><span class="line">        x = self.sigmoid(self.linear2(x))</span><br><span class="line">        x = self.sigmoid(self.linear3(x)) <span class="comment"># 如果前面激活函数用的ReLU,由于ReLU在输入<0时会</span></span><br><span class="line">                                          <span class="comment"># 直接输出0,如果后面算ln可能会出问题,</span></span><br><span class="line">                                          <span class="comment"># 在最后一层要套sigmoid</span></span><br><span class="line">        <span class="keyword">return</span> x</span><br><span class="line"></span><br><span class="line">model = Model()</span><br></pre></td></tr></table></figure><h3 id="Construct-loss-and-optimizer"><a href="#Construct-loss-and-optimizer" class="headerlink" title="Construct loss and optimizer"></a>Construct loss and optimizer</h3><p>由于是二分类问题,采用了交叉熵作为loss函数,optimizer与之前课程中的选择一致</p><p><img src="/images/7. 
处理多维特征的输入.assets/image-20200918162925437.png" alt="image-20200918162925437"></p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">criterion = torch.nn.BCELoss(size_average=<span class="literal">True</span>)</span><br><span class="line">optimizer = torch.optim.SGD(model.parameters(), lr=<span class="number">0.1</span>)</span><br></pre></td></tr></table></figure><h3 id="Training-Cycle"><a href="#Training-Cycle" class="headerlink" title="Training Cycle"></a>Training Cycle</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">for</span> epoch <span class="keyword">in</span> range(<span class="number">100</span>):</span><br><span class="line"> <span class="comment"># Forward</span></span><br><span class="line"> y_pred = model(x_data) <span class="comment"># 注意:这里还没有用mini-batch,而是直接把整个数据集用于训练了,</span></span><br><span class="line"> <span class="comment"># 之后讨论DataLoader的用法</span></span><br><span class="line"> loss = criterion(y_pred, y_data)</span><br><span class="line"> print(epoch, loss.item())</span><br><span class="line"></span><br><span class="line"> <span class="comment"># Backward</span></span><br><span class="line"> optimizer.zero_grad()</span><br><span class="line"> loss.backward()</span><br><span class="line"></span><br><span class="line"> <span class="comment"># Update</span></span><br><span class="line"> 
optimizer.step()</span><br></pre></td></tr></table></figure><h2 id="附:神经网络中经常用到的激活函数"><a href="#附:神经网络中经常用到的激活函数" class="headerlink" title="附:神经网络中经常用到的激活函数"></a>附:神经网络中经常用到的激活函数</h2><p>来源:<a href="http://rasbt.github.io/mlxtend/user_guide/general_concepts/activation-functions/#activation-functions-for-artificial-neural-networks" rel="external nofollow noopener noreferrer" target="_blank">http://rasbt.github.io/mlxtend/user_guide/general_concepts/activation-functions/#activation-functions-for-artificial-neural-networks</a></p><p><img src="/images/7. 处理多维特征的输入.assets/activation-functions.png" alt="img"></p><p>ReLU在神经网络里面用的比较多,但ReLU在零点处不可导(其导数在0处不连续)</p><p><a href="https://colab.research.google.com/drive/12dTEVe-ERR2L8rRCND1GNwn0DZZ8CkAi?usp=sharing" rel="external nofollow noopener noreferrer" target="_blank">在Colab上运行</a></p><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=7" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
<summary type="html">
<h1 id="处理多维特征的输入"><a href="#处理多维特征的输入" class="headerlink" title="处理多维特征的输入"></a>处理多维特征的输入</h1><h2 id="常见数据格式"><a href="#常见数据格式" class="headerlink" title="常见数据格式"></a>常见数据格式</h2><ul>
<li>每一行是一条<strong>记录(Record)</strong></li>
<li>每一列是一个<strong>特征/字段(Feature)</strong></li>
</ul>
<p><img src="/images/7. 处理多维特征的输入.assets/image-20200918142954530.png" alt="image-20200918142954530"><br>
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
<entry>
<title>《PyTorch深度学习实践》5.用PyTorch实现线性模型</title>
<link href="https://www.mhlwsk.com/archives/3988.html"/>
<id>https://www.mhlwsk.com/archives/3988.html</id>
<published>2020-09-15T07:45:22.000Z</published>
<updated>2020-09-18T08:04:18.163Z</updated>
<content type="html"><![CDATA[<h1 id="用PyTorch实现线性模型"><a href="#用PyTorch实现线性模型" class="headerlink" title="用PyTorch实现线性模型"></a>用PyTorch实现线性模型</h1><p>四个步骤:</p><ul><li>Prepare dataset</li><li>Design model using Class<ul><li>Inherit from nn.Module</li></ul></li><li>Construct loss and optimizer</li><li>Training cycle<ul><li>forward, backward, update</li></ul></li></ul><a id="more"></a><h2 id="Design-model-using-Class"><a href="#Design-model-using-Class" class="headerlink" title="Design model using Class"></a>Design model using Class</h2><p>演示所用的线性模型:</p><span>$\hat{y}=\omega*x+b$</span><!-- Has MathJax --><p>线性模型类的定义:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">LinearModel</span><span class="params">(torch.nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self)</span>:</span></span><br><span class="line"> super(LinearModel, self).__init__()</span><br><span class="line"> self.linear = torch.nn.Linear(<span class="number">1</span>, <span class="number">1</span>) <span class="comment"># 分别为输入的维度数(n*x)和输出的维度数(n*y),</span></span><br><span class="line"> <span class="comment"># 在输入和输出维度确定的情况下可以直接推出权重w矩阵的维度</span></span><br><span class="line"> <span class="comment"># 缺省情况下还会自己补一个偏置矩阵b</span></span><br><span class="line"></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span 
class="title">forward</span><span class="params">(self, x)</span>:</span></span><br><span class="line"> y_pred = self.linear(x)</span><br><span class="line"> <span class="keyword">return</span> y_pred <span class="comment"># 没有backward,因为构建的计算图会自动计算</span></span><br><span class="line"></span><br><span class="line">model = LinearModel()</span><br></pre></td></tr></table></figure><p>补充Python的callable知识点:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">Foobar</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self)</span>:</span></span><br><span class="line"> <span class="keyword">pass</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__call__</span><span class="params">(self, *args, **kwargs)</span>:</span></span><br><span class="line"> print(<span class="string">"hello"</span> + str(args[<span class="number">0</span>]))</span><br><span class="line"> </span><br><span class="line">foobar = Foobar()</span><br><span class="line">foobar(<span class="number">1</span>, <span class="number">2</span>, <span class="number">3</span>) <span class="comment"># 输出: hello1</span></span><br></pre></td></tr></table></figure><p>当一个类定义了<code>__call__(self, *args, 
**kwargs)</code>函数,这个类的实例(上例中为foobar)就可以直接调用了</p><p><code>nn.Linear</code>类也执行了<code>__call__()</code>函数,并在这个函数调用了<code>forward()</code>函数</p><p>所以说用<code>model=LinearModel()</code>声明了<code>model</code>之后,就可以用<code>model(x)</code>来对输入数据进行变换了,<code>LinearModel</code>的<code>__call__()</code>函数会调用对应的<code>forward()</code>函数</p><h2 id="Construct-loss-and-optimizer"><a href="#Construct-loss-and-optimizer" class="headerlink" title="Construct loss and optimizer"></a>Construct loss and optimizer</h2><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">criterion = torch.nn.MSELoss(size_average=<span class="literal">False</span>)</span><br><span class="line">optimizer = torch.optim.SGD(model.parameters(), lr=<span class="number">0.01</span>)</span><br></pre></td></tr></table></figure><h2 id="Training-Cycle"><a href="#Training-Cycle" class="headerlink" title="Training Cycle"></a>Training Cycle</h2><p>前馈,反馈,更新</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">for</span> epoch <span class="keyword">in</span> range(<span class="number">100</span>):</span><br><span class="line"> <span class="string">"""</span></span><br><span class="line"><span class="string"> 基本步骤是:先forward再backward最后更新</span></span><br><span class="line"><span class="string"> """</span></span><br><span class="line"> y_pred = model(x_data)</span><br><span class="line"> loss = criterion(y_pred, y_data)</span><br><span 
class="line"> print(epoch, loss) <span class="comment"># 前面到这里是forward部分</span></span><br><span class="line"></span><br><span class="line"> optimizer.zero_grad() <span class="comment"># The grad computed by .backward() will be accumulated.</span></span><br><span class="line"> <span class="comment"># So after update, remember set the grad to ZERO!</span></span><br><span class="line"> loss.backward() <span class="comment"># backward部分</span></span><br><span class="line"> optimizer.step() <span class="comment"># update部分</span></span><br></pre></td></tr></table></figure><p>补充:PyTorch中常用的优化器:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line">torch.optim.Adagrad</span><br><span class="line">torch.optim.Adam</span><br><span class="line">torch.optim.Adamax</span><br><span class="line">torch.optim.ASGD</span><br><span class="line">torch.optim.LBFGS</span><br><span class="line">torch.optim.RMSprop</span><br><span class="line">torch.optim.Rprop</span><br><span class="line">torch.optim.SGD</span><br></pre></td></tr></table></figure><p><a href="https://colab.research.google.com/drive/1vKUtjSgAWdO-tXuseTRF55TxmQM4pGGU?usp=sharing" rel="external nofollow noopener noreferrer" target="_blank">在Colab上运行</a></p><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=5" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
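上文说到,`nn.Module` 在 `__call__()` 中调用 `forward()`,所以 `model(x)` 实际执行的就是 `model.forward(x)`。下面是一个不依赖 PyTorch 的纯 Python 最小示意(其中 `MiniModule`、`Linear` 都是本示例虚构的演示类,不是 PyTorch 的真实实现,真实的 `nn.Module.__call__` 还会处理 hook 等额外逻辑):

```python
class MiniModule:
    """模拟 nn.Module 的调度机制:实例被调用时转发给 forward()"""
    def __call__(self, *args, **kwargs):
        # 只保留最核心的转发行为:instance(x) -> instance.forward(x)
        return self.forward(*args, **kwargs)

    def forward(self, *args, **kwargs):
        # 子类必须覆盖 forward,与 nn.Module 的约定一致
        raise NotImplementedError


class Linear(MiniModule):
    """y = w * x + b 的标量版本,权重为演示而手动固定"""
    def __init__(self, w, b):
        self.w = w
        self.b = b

    def forward(self, x):
        return self.w * x + self.b


model = Linear(2.0, 1.0)
print(model(3.0))  # 7.0:model(3.0) 经 __call__ 转发给 forward(3.0)
```

这也解释了为什么训练循环里写 `model(x_data)` 而不是 `model.forward(x_data)`:前者走统一的调用入口,行为一致且更符合惯例。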
<summary type="html">
<h1 id="用PyTorch实现线性模型"><a href="#用PyTorch实现线性模型" class="headerlink" title="用PyTorch实现线性模型"></a>用PyTorch实现线性模型</h1><p>四个步骤:</p>
<ul>
<li>Prepare dataset</li>
<li>Design model using Class<ul>
<li>Inherit from nn.Module</li>
</ul>
</li>
<li>Construct loss and optimizer</li>
<li>Training cycle<ul>
<li>forward, backward, update</li>
</ul>
</li>
</ul>
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
<entry>
<title>《PyTorch深度学习实践》4.反向传播</title>
<link href="https://www.mhlwsk.com/archives/422.html"/>
<id>https://www.mhlwsk.com/archives/422.html</id>
<published>2020-08-10T12:36:10.000Z</published>
<updated>2020-09-15T07:47:46.164Z</updated>
<content type="html"><![CDATA[<h1 id="反向传播"><a href="#反向传播" class="headerlink" title="反向传播"></a>反向传播</h1><h2 id="计算图Computational-Graph"><a href="#计算图Computational-Graph" class="headerlink" title="计算图Computational Graph"></a>计算图Computational Graph</h2><p>两层神经网络</p><p>$\hat{y}=W_2(W_1 \cdot X + b_1) + b_2$</p><p><img src="/images/4. 反向传播.assets/image-20200809153529355.png" alt="image-20200809153529355"></p><a id="more"></a><p>上述式子其实是可以化简的,即<strong>多层网络能找到一个等价的一层网络</strong></p><p><img src="/images/4. 反向传播.assets/image-20200809153814657.png" alt="image-20200809153814657"></p><p>如果在每一层末尾加一个非线性函数,式子变得无法化简:<br><img src="/images/4. 反向传播.assets/image-20200809154034556.png" alt="image-20200809154034556"></p><h2 id="链式求导"><a href="#链式求导" class="headerlink" title="链式求导"></a>链式求导</h2><p><img src="/images/4. 反向传播.assets/image-20200809154656704.png" alt="image-20200809154656704"></p><p>反馈过程</p><p><img src="/images/4. 反向传播.assets/image-20200809160816544.png" alt="image-20200809160816544"></p><p>一个具体的例子,首先是一个前馈过程,再是一个反馈的过程:</p><p><img src="/images/4. 
反向传播.assets/image-20200812235428202.png" alt="image-20200812235428202"></p><p>loss对要更新的参数求偏导</p><h2 id="PyTorch实现"><a href="#PyTorch实现" class="headerlink" title="PyTorch实现"></a>PyTorch实现</h2><p>在Tensor中包含了<em>数据</em>和<em>梯度</em></p><p>代码如下:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> torch</span><br><span class="line"></span><br><span class="line">x_data = [<span class="number">1.0</span>, <span class="number">2.0</span>, <span class="number">3.0</span>]</span><br><span class="line">y_data = [<span class="number">2.0</span>, <span class="number">4.0</span>, <span class="number">6.0</span>]</span><br><span class="line"></span><br><span class="line">w = torch.Tensor([<span class="number">1.0</span>])</span><br><span class="line">w.requires_grad = <span class="literal">True</span> <span class="comment"># If autograd mechanics are required, the element 
variable</span></span><br><span class="line"> <span class="comment"># requires_grad of Tensor has to be set to True</span></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(x)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> x * w</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">loss</span><span class="params">(x, y)</span>:</span></span><br><span class="line"> y_pred = forward(x)</span><br><span class="line"> <span class="keyword">return</span> (y_pred - y) ** <span class="number">2</span></span><br><span class="line"></span><br><span class="line">print(<span class="string">"predict (before training)"</span>, <span class="number">4</span>, forward(<span class="number">4</span>).item())</span><br><span class="line"></span><br><span class="line"><span class="keyword">for</span> epoch <span class="keyword">in</span> range(<span class="number">100</span>):</span><br><span class="line"> <span class="keyword">for</span> x, y <span class="keyword">in</span> zip(x_data, y_data):</span><br><span class="line"> l = loss(x, y)</span><br><span class="line"> l.backward() <span class="comment"># Backward, compute grad for Tensor whose requires_grad set to True</span></span><br><span class="line"> print(<span class="string">"\tgrad:"</span>, x, y, w.grad.item())</span><br><span class="line"> w.data = w.data - <span class="number">0.01</span> * w.grad.data <span class="comment"># 注意:Tensor的运算会生成计算图,</span></span><br><span class="line"> <span class="comment"># 这里w.data - 0.01 * w.grad.data一定要赋值给w.data而不是一个新的变量(否则会生成新的计算图)</span></span><br><span class="line"></span><br><span class="line"> w.grad.data.zero_() <span class="comment"># The grad computed by .backward() will be accumulated.</span></span><br><span class="line"> <span class="comment"># So 
after update, remember set the grad to ZERO!</span></span><br><span class="line"></span><br><span class="line"> print(<span class="string">"progress:"</span>, epoch, l.item())</span><br><span class="line"></span><br><span class="line">print(<span class="string">"predict (after training)"</span>, <span class="number">4</span>, forward(<span class="number">4</span>).item())</span><br></pre></td></tr></table></figure><p><a href="https://colab.research.google.com/drive/11wBO3UVSjc4lwdJS5lx6kuQKkFPIiVeR?usp=sharing" rel="external nofollow noopener noreferrer" target="_blank">在Colab上运行</a></p><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=4" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
<summary type="html">
<h1 id="反向传播"><a href="#反向传播" class="headerlink" title="反向传播"></a>反向传播</h1><h2 id="计算图Computational-Graph"><a href="#计算图Computational-Graph" class="headerlink" title="计算图Computational Graph"></a>计算图Computational Graph</h2><p>两层神经网络</p>
<p>$\hat{y}=W_2(W_1 \cdot X + b_1) + b_2$</p>
<p><img src="/images/4. 反向传播.assets/image-20200809153529355.png" alt="image-20200809153529355"></p>
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
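上面 `l.backward()` 自动算出的 `w.grad` 本质上就是链式法则的结果,可以用数值微分验证。下面是一个不依赖 PyTorch 的纯 Python 小例子(函数名均为示意):对 loss = (x*w - y)**2 手动套用链式法则得到 d(loss)/dw = 2*x*(x*w - y),再与中心差分的数值梯度对比:

```python
def loss(w, x, y):
    return (x * w - y) ** 2

def analytic_grad(w, x, y):
    # 链式法则:d(loss)/dw = 2*(x*w - y) * d(x*w - y)/dw = 2*x*(x*w - y)
    return 2 * x * (x * w - y)

def numeric_grad(w, x, y, eps=1e-6):
    # 中心差分近似导数,用来检验解析梯度
    return (loss(w + eps, x, y) - loss(w - eps, x, y)) / (2 * eps)

w, x, y = 1.0, 2.0, 4.0
print(analytic_grad(w, x, y))  # -8.0
print(numeric_grad(w, x, y))   # 与 -8.0 几乎相同
```

autograd 做的事情与 `analytic_grad` 一致,只是它是沿着计算图对任意复合函数自动套用链式法则。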
<entry>
<title>《PyTorch深度学习实践》3.梯度下降算法</title>
<link href="https://www.mhlwsk.com/archives/42454.html"/>
<id>https://www.mhlwsk.com/archives/42454.html</id>
<published>2020-08-10T12:33:32.000Z</published>
<updated>2020-09-15T07:48:09.108Z</updated>
<content type="html"><![CDATA[<h1 id="梯度下降算法"><a href="#梯度下降算法" class="headerlink" title="梯度下降算法"></a>梯度下降算法</h1><h2 id="减小loss的暴力算法"><a href="#减小loss的暴力算法" class="headerlink" title="减小loss的暴力算法"></a>减小loss的暴力算法</h2><p>直接枚举,时间复杂度过高</p><p>分段枚举(如下图),先大粒度枚举(图中16个红色的点),再挑选较优点小粒度枚举(用绿色网格表示),可以减小枚举次数(如图中从直接枚举的256次减小到<code>16+16=32</code>次)。但分段枚举也有问题,无法解决局部最优和导数为0的点</p><p><img src="/images/3. 梯度下降算法.assets/image-20200809102607810.png" alt="image-20200809102607810"></p><h2 id="梯度下降算法-1"><a href="#梯度下降算法-1" class="headerlink" title="梯度下降算法"></a>梯度下降算法</h2><a id="more"></a><h3 id="基本思想"><a href="#基本思想" class="headerlink" title="基本思想"></a>基本思想</h3><p>梯度公式:<br><span>$gradient=\frac{\partial{cost}}{\partial{\omega}}$</span><!-- Has MathJax --></p><p>参数更新:</p><span>$\omega = \omega - \alpha \frac{\partial{cost}}{\partial{\omega}}$</span><!-- Has MathJax --><p>优化方向为梯度上升方向的反方向</p><p>梯度下降算法也避免不了局部最优</p><p>另一种特殊的情况为<strong>鞍点(saddle point)</strong>,这种点梯度为0,一旦到达这个位置,梯度下降就无法继续进行了</p><p><img src="/images/3. 
梯度下降算法.assets/image-20200809104134819.png" alt="image-20200809104134819"></p><h3 id="公式在线性模型MSE的应用"><a href="#公式在线性模型MSE的应用" class="headerlink" title="公式在线性模型MSE的应用"></a>公式在线性模型MSE的应用</h3><p>梯度计算:</p><span>$$\begin{align}\frac{\partial{cost(\omega)}}{\partial{\omega}}&=\frac{\partial}{\partial{\omega}}\frac{1}{N}\sum_{n=1}^{N}(x_n\omega - y_n)^2 \\&=\frac{1}{N}\sum_{n=1}^{N}\frac{\partial}{\partial{\omega}}(x_n\omega - y_n)^2 \\&=\frac{1}{N}\sum_{n=1}^{N}2\cdot (x_n\omega - y_n)\frac{\partial{(x_n\omega - y_n)}}{\partial{\omega}} \\&=\frac{1}{N}\sum_{n=1}^{N}2\cdot x_n (x_n\omega - y_n)\end{align}$$</span><!-- Has MathJax --><p>参数更新:</p><span>$\omega = \omega -\alpha \frac{1}{N}\sum_{n=1}^{N}2\cdot x_n (x_n \cdot \omega - y_n)$</span><!-- Has MathJax --><p>代码参考:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br></pre></td><td class="code"><pre><span class="line">x_data = [<span class="number">1.0</span>, <span class="number">2.0</span>, <span class="number">3.0</span>]</span><br><span class="line">y_data = [<span class="number">2.0</span>, <span class="number">4.0</span>, 
<span class="number">6.0</span>]</span><br><span class="line"></span><br><span class="line">w = <span class="number">1.0</span></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(x)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> x * w</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">cost</span><span class="params">(xs, ys)</span>:</span></span><br><span class="line"> cost = <span class="number">0</span></span><br><span class="line"> <span class="keyword">for</span> x,y <span class="keyword">in</span> zip(xs, ys):</span><br><span class="line"> y_pred = forward(x)</span><br><span class="line"> cost += (y_pred - y) ** <span class="number">2</span></span><br><span class="line"> <span class="keyword">return</span> cost / len(xs)</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">gradient</span><span class="params">(xs, ys)</span>:</span></span><br><span class="line"> grad = <span class="number">0</span></span><br><span class="line"> <span class="keyword">for</span> x, y <span class="keyword">in</span> zip(xs, ys):</span><br><span class="line"> grad += <span class="number">2</span> * x * (x * w - y)</span><br><span class="line"> <span class="keyword">return</span> grad / len(xs)</span><br><span class="line"></span><br><span class="line">print(<span class="string">'Predict (before training'</span>, <span class="number">4</span>, forward(<span class="number">4</span>))</span><br><span class="line"><span class="keyword">for</span> epoch <span class="keyword">in</span> range(<span class="number">100</span>):</span><br><span class="line"> cost_val = cost(x_data, y_data)</span><br><span class="line"> grad_val = gradient(x_data, y_data)</span><br><span 
class="line"> w -= <span class="number">0.01</span> * grad_val</span><br><span class="line"> print(<span class="string">'Epoch:'</span>, epoch, <span class="string">'w='</span>, w, <span class="string">'loss='</span>, cost_val)</span><br><span class="line">print(<span class="string">'Predict (after training)'</span>, <span class="number">4</span>, forward(<span class="number">4</span>))</span><br></pre></td></tr></table></figure><p>正常情况下,loss应该是单调递减的(中途可能出现波动),最终应该是收敛的,如果不收敛,说明函数本身不收敛,或者更常见的情况是<strong>学习率取大了</strong></p><h2 id="随机梯度下降SGD"><a href="#随机梯度下降SGD" class="headerlink" title="随机梯度下降SGD"></a>随机梯度下降SGD</h2><p><img src="/images/3. 梯度下降算法.assets/image-20200809111751255.png" alt="image-20200809111751255"></p><p>这里讲到的SGD相对于上面的梯度下降的不同是每次更新只考虑一个样本而不是所有样本。相比于神经网络中一次更新一个batch的做法是一个简化版</p><p>代码参考:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br></pre></td><td class="code"><pre><span class="line">x_data = [<span class="number">1.0</span>, <span class="number">2.0</span>, <span class="number">3.0</span>]</span><br><span class="line">y_data = [<span class="number">2.0</span>, <span class="number">4.0</span>, <span class="number">6.0</span>]</span><br><span
class="line"></span><br><span class="line">w = <span class="number">1.0</span></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(x)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> x * w</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">loss</span><span class="params">(x, y)</span>:</span></span><br><span class="line"> y_pred = forward(x)</span><br><span class="line"> <span class="keyword">return</span> (y_pred - y) ** <span class="number">2</span></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">gradient</span><span class="params">(x, y)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> <span class="number">2</span> * x * (x * w - y)</span><br><span class="line"></span><br><span class="line">print(<span class="string">'Predict (before training)'</span>, <span class="number">4</span>, forward(<span class="number">4</span>))</span><br><span class="line"><span class="keyword">for</span> epoch <span class="keyword">in</span> range(<span class="number">100</span>):</span><br><span class="line"> <span class="keyword">for</span> x, y <span class="keyword">in</span> zip(x_data, y_data):</span><br><span class="line"> grad = gradient(x, y)</span><br><span class="line"> w = w - <span class="number">0.01</span> * grad</span><br><span class="line"> print(<span class="string">"\tgrad: "</span>, x, y, grad)</span><br><span class="line"> l = loss(x, y)</span><br><span class="line"> print(<span class="string">"progress:"</span>, epoch, <span class="string">"w="</span>, w, <span class="string">"loss="</span>, l)</span><br><span class="line"></span><br><span class="line">print(<span
class="string">"Predict (after training)"</span>, <span class="number">4</span>, forward(<span class="number">4</span>))</span><br></pre></td></tr></table></figure><p><a href="https://colab.research.google.com/drive/19JgtThn_uAlmRe_r95lYMpylIezDvUv2?usp=sharing" rel="external nofollow noopener noreferrer" target="_blank">在Colab上运行</a></p><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=3" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
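正文里的批量梯度下降可以压缩成下面这个不依赖任何库的最小版本(数据、学习率 0.01 和 100 轮都与上面的代码一致,仅把求和写成生成器表达式),训练后 w 应收敛到 2 附近:

```python
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w = 1.0  # 初始猜测

def gradient(w):
    # d(cost)/dw = (1/N) * sum(2 * x * (x*w - y)),即正文推导出的梯度公式
    return sum(2 * x * (x * w - y) for x, y in zip(x_data, y_data)) / len(x_data)

for epoch in range(100):
    w -= 0.01 * gradient(w)  # 沿梯度的反方向更新

print(w)  # 收敛到 2 附近
```

由于 cost 对 w 是二次函数,每轮更新都把 (w - 2) 乘上一个小于 1 的固定系数,所以 loss 单调下降并收敛,这与正文"loss 应单调递减、最终收敛"的判断一致。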
<summary type="html">
<h1 id="梯度下降算法"><a href="#梯度下降算法" class="headerlink" title="梯度下降算法"></a>梯度下降算法</h1><h2 id="减小loss的暴力算法"><a href="#减小loss的暴力算法" class="headerlink" title="减小loss的暴力算法"></a>减小loss的暴力算法</h2><p>直接枚举,时间复杂度过高</p>
<p>分段枚举(如下图),先大粒度枚举(图中16个红色的点),再挑选较优点小粒度枚举(用绿色网格表示),可以减小枚举次数(如图中从直接枚举的256次减小到<code>16+16=32</code>次)。但分段枚举也有问题,无法解决局部最优和导数为0的点</p>
<p><img src="/images/3. 梯度下降算法.assets/image-20200809102607810.png" alt="image-20200809102607810"></p>
<h2 id="梯度下降算法-1"><a href="#梯度下降算法-1" class="headerlink" title="梯度下降算法"></a>梯度下降算法</h2>
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
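对应地,随机梯度下降(SGD)每读入一个样本就立即更新一次 w,而不是对全部样本的梯度取平均。下面是一个不依赖任何库的最小示意(数据与学习率同正文):

```python
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w = 1.0  # 初始猜测

def gradient(x, y):
    # 单样本梯度:d(loss)/dw = 2 * x * (x*w - y)
    return 2 * x * (x * w - y)

for epoch in range(100):
    for x, y in zip(x_data, y_data):
        w -= 0.01 * gradient(x, y)  # 每个样本都立即更新一次 w

print(w)  # 同样收敛到 2 附近
```

可以看到,SGD 一个 epoch 内更新次数更多(样本数次),单次更新更"噪声"一些;神经网络中常用的 mini-batch 做法介于批量梯度下降与这种逐样本更新之间。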
<entry>
<title>Andrew Ng序列模型 笔记</title>
<link href="https://www.mhlwsk.com/archives/898.html"/>
<id>https://www.mhlwsk.com/archives/898.html</id>
<published>2020-08-02T14:15:26.000Z</published>
<updated>2020-10-16T14:12:48.676Z</updated>
<content type="html"><![CDATA[<h1 id="Andrew-Ng循环序列模型-学习笔记"><a href="#Andrew-Ng循环序列模型-学习笔记" class="headerlink" title="Andrew Ng循环序列模型 学习笔记"></a>Andrew Ng循环序列模型 学习笔记</h1><p>笔记课程来源:<a href="https://mooc.study.163.com/learn/2001280005" rel="external nofollow noopener noreferrer" target="_blank">https://mooc.study.163.com/learn/2001280005</a></p><h2 id="应用例子"><a href="#应用例子" class="headerlink" title="应用例子"></a>应用例子</h2><ol><li>语音识别</li><li>音乐生成</li><li>情感分类</li><li>DNA序列分析</li><li>机器翻译</li><li>视频活动识别</li><li>命名实体识别</li></ol><p><img src="/images/1. 循环序列模型.assets/image-20200802184812691.png" alt="image-20200802184812691"><br><a id="more"></a></p><h2 id="数学符号"><a href="#数学符号" class="headerlink" title="数学符号"></a>数学符号</h2><h2 id="Notation"><a href="#Notation" class="headerlink" title="Notation"></a>Notation</h2><p>对于每个例子,采用<span>$x^{<i>}$</span><!-- Has MathJax -->来表示例子的第<span>$i$</span><!-- Has MathJax -->个单词,采用<span>$y^{<i>}$</span><!-- Has MathJax -->来表示第<span>$i$</span><!-- Has MathJax -->个单词的标签,具体如下:</p><table><thead><tr><th>x</th><th>Harry</th><th>Potter</th><th>and</th><th>Hermione</th><th>Graner</th><th>invented</th><th>a</th><th>new</th><th>spell</th></tr></thead><tbody><tr><td>Element</td><td><span>$x^{<1>}$</span><!-- Has MathJax --></td><td><span>$x^{<2>}$</span><!-- Has MathJax --></td><td><span>$x^{<3>}$</span><!-- Has MathJax --></td><td><span>$x^{<4>}$</span><!-- Has MathJax --></td><td><span>$x^{<5>}$</span><!-- Has MathJax --></td><td><span>$x^{<6>}$</span><!-- Has MathJax --></td><td><span>$x^{<7>}$</span><!-- Has MathJax --></td><td><span>$x^{<8>}$</span><!-- Has MathJax --></td><td><span>$x^{<9>}$</span><!-- Has MathJax --></td></tr><tr><td>y</td><td>1</td><td>1</td><td>0</td><td>1</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Label</td><td><span>$y^{<1>}$</span><!-- Has MathJax --></td><td><span>$y^{<2>}$</span><!-- Has MathJax --></td><td><span>$y^{<3>}$</span><!-- Has MathJax --></td><td><span>$y^{<4>}$</span><!-- Has MathJax 
--></td><td><span>$y^{<5>}$</span><!-- Has MathJax --></td><td><span>$y^{<6>}$</span><!-- Has MathJax --></td><td><span>$y^{<7>}$</span><!-- Has MathJax --></td><td><span>$y^{<8>}$</span><!-- Has MathJax --></td><td><span>$y^{<9>}$</span><!-- Has MathJax --></td></tr></tbody></table><p>采用<span>$T_x$</span><!-- Has MathJax -->和<span>$T_y$</span><!-- Has MathJax -->表示输入序列的元素数和输出序列的元素数,如上例中<span>$T_x=9, T_y=9$</span><!-- Has MathJax -->。在上例中<span>$T_x=T_y$</span><!-- Has MathJax -->,但要注意在有的序列模型中这是不成立的</p><p>对于不同实例的表示,用<span>$x^{(i)}$</span><!-- Has MathJax -->(注意这里是圆括号,上面采用的是尖括号)表示第<span>$i$</span><!-- Has MathJax -->个例子,这样对于第<span>$i$</span><!-- Has MathJax -->个实例,输入的第<span>$t$</span><!-- Has MathJax -->个元素表示为<span>$x^{(i)<t>}$</span><!-- Has MathJax -->,输出(标签)的第<span>$t$</span><!-- Has MathJax -->个元素表示为<span>$y^{(i)<t>}$</span><!-- Has MathJax -->;输入的元素数为<span>$T_x^{(i)}$</span><!-- Has MathJax -->,输出的元素数为<span>$T_y^{(i)}$</span><!-- Has MathJax --></p><h3 id="Representation"><a href="#Representation" class="headerlink" title="Representation"></a>Representation</h3><p>Andrew Ng在这里采用了one-hot方法进行讲解</p><h4 id="词典与单词"><a href="#词典与单词" class="headerlink" title="词典与单词"></a>词典与单词</h4><p><strong>词典</strong>为一个<span>$n$</span><!-- Has MathJax -->维向量空间,每一维表示一个单词<br><strong>单词</strong>为一个<span>$n$</span><!-- Has MathJax -->维向量,每个单词由<span>$n-1$</span><!-- Has MathJax -->维的<span>$0$</span><!-- Has MathJax -->和<span>$1$</span><!-- Has MathJax -->维的<span>$1$</span><!-- Has MathJax -->表示,这个<span>$1$</span><!-- Has MathJax -->代表词典中对应位置的单词</p><p>词典有大小限制,通常在几万之间,词典中词的选择有很多方法,其中包括选择高频词(当然要去掉停用词)</p><p>实际问题中可能出现不在词典中的词,一种做法是采用一个特殊的标记(token)来表示这类单词(如UNK, Unknown Word)</p><h2 id="循环神经网络"><a href="#循环神经网络" class="headerlink" title="循环神经网络"></a>循环神经网络</h2><h3 id="一种简单的思路"><a href="#一种简单的思路" class="headerlink" title="一种简单的思路"></a>一种简单的思路</h3><p>对于序列问题,一种Naive的做法是直接采用一个标准的神经网络:</p><p><img src="/images/1. 
循环序列模型.assets/image-20200802193236668.png" alt="image-20200802193236668"></p><p>但这个模型有两个主要问题:</p><ol><li>输入输出在不同的例子中有不同的长度 就算是每个句子有最长长度,采用了pad或zero pad方法进行填充,也仍不是好的表示方法</li><li>句子中的不同位置不能共享学到的特征(Don’t share features learned across different positions of text) 这会导致巨大的参数量(类似于全连接神经网络对比CNN,后者的一大特点是可以共享特征,减小了参数量,同时有助于泛化,比如在图片中一个学到的物体换一个位置还能识别出来)</li></ol><h3 id="循环神经网络(Recurrent-Neural-Network)"><a href="#循环神经网络(Recurrent-Neural-Network)" class="headerlink" title="循环神经网络(Recurrent Neural Network)"></a>循环神经网络(Recurrent Neural Network)</h3><h4 id="基本概念"><a href="#基本概念" class="headerlink" title="基本概念"></a>基本概念</h4><p><img src="/images/1. 循环序列模型.assets/image-20200802194724987.png" alt="image-20200802194724987"></p><p>循环神经网络是分状态共享参数的,如上图是展开的循环神经网络,每个矩形代表不同时刻下的模型,模型从左向右扫描,每个状态读入一个符号输入<span>$x^{<i>}$</span><!-- Has MathJax -->和从上个状态传来的激活值<span>$a^{<i-1>}$</span><!-- Has MathJax -->,输出这个状态的预测值<span>$\hat{y}^{<i>}$</span><!-- Has MathJax -->,并传递一个激活值<span>$a^{<i>}$</span><!-- Has MathJax -->给下一个状态</p><p>对于循环网络的开头<span>$a^{<0>}$</span><!-- Has MathJax -->,需要自行选择一个值作为零时刻的伪激活值,如随机值或0</p><p><img src="/images/1. 
循环序列模型.assets/image-20200802195415641.png" alt="image-20200802195415641"></p><p>RNN中有几套参数,输入端的<span>$W_{ax}$</span><!-- Has MathJax -->,激活端的<span>$W_{aa}$</span><!-- Has MathJax -->和输出端的<span>$W_{ya}$</span><!-- Has MathJax -->。每个状态<span>$i$</span><!-- Has MathJax -->不仅仅用到它前一个状态<span>$i-1$</span><!-- Has MathJax -->的信息,还能用到它之前所有状态<span>$i-1, i-2, \dots$</span><!-- Has MathJax -->的信息,这是因为信息是沿着激活值<span>$\dots, a^{<i-2>},a^{<i-1>}$</span><!-- Has MathJax -->一路传过来的。但状态<span>$i$</span><!-- Has MathJax --><strong>不能用到它之后的状态<span>$i+1, i+2, \dots$</span><!-- Has MathJax -->的信息</strong>,而这些信息有时候是很有用的,比如:</p><blockquote><p>He said, “Teddy Roosevelt was a great President”</p></blockquote><p>在对Teddy Roosevelt做人名识别时,”was a great President”是能帮助判断的,而仅通过”He said”则很难判断Teddy是人名的一部分,如下面的例子:</p><blockquote><p>He said, “Teddy bear are on sale”</p></blockquote><p>不能采用双向输入的问题可以采用双向循环神经网络(BRNN)来解决,在之后会提及</p><h4 id="Forward-Propagation"><a href="#Forward-Propagation" class="headerlink" title="Forward Propagation"></a>Forward Propagation</h4><p><img src="/images/1. 
循环序列模型.assets/image-20200802201033326.png" alt="image-20200802201033326"></p><p>初始化零时刻的伪激活值为零向量<span>$a^{<0>}=\vec{0}$</span><!-- Has MathJax --><br>接下来对于下个状态的<span>$a$</span><!-- Has MathJax -->和<span>$y$</span><!-- Has MathJax -->有:<br><span>$a^{<1>}=g(W_{aa}a^{<0>}+W_{ax}x^{<1>}+b_a) \leftarrow \text{这里的激活函数}g\text{经常选择}tanh\text{或}ReLU\text{,相对来说}tanh\text{更常见些}$</span><!-- Has MathJax --></p><span>$\hat{y}^{<1>}=g_2(W_{ya}a^{<1>}+b_y) \leftarrow \text{如果是二分类问题,那么激活函数}g\text{一般用}sigmoid\text{,如果是k分类问题,则可以用}softmax$</span><!-- Has MathJax --><p>更通用的写法:</p><span>$a^{<t>}=g(W_{aa}a^{<t-1>}+W_{ax}x^{<t>}+b_a)$</span><!-- Has MathJax --><span>$\hat{y}^{<t>}=g_2(W_{ya}a^{<t>}+b_y)$</span><!-- Has MathJax --><p>刚看到这里的时候我有个问题:<strong><span>$\hat{y}^{<t>}$</span><!-- Has MathJax -->没有利用当前的输入<span>$x^{<t>}$</span><!-- Has MathJax -->的信息,而是只采用了之前状态传来的<span>$a^{<t>}$</span><!-- Has MathJax -->的信息,那么是不是第一个状态<span>$\hat{y}^{<1>}$</span><!-- Has MathJax -->的输出是与输入无关的?</strong></p><p>这个问题是不存在的,因为计算<span>$\hat{y}^{<t>}$</span><!-- Has MathJax -->所用到的<span>$a^{<t>}$</span><!-- Has MathJax -->就是当前轮迭代中由第一个公式算出来的,而这个公式用到了输入<span>$x^{<t>}$</span><!-- Has MathJax --></p><h5 id="简化的公式"><a href="#简化的公式" class="headerlink" title="简化的公式"></a>简化的公式</h5><p>这里简化示例直接引用用Andrew Ng的slide:</p><p><img src="/images/1. 
循环序列模型.assets/image-20200802204918980.png" alt="image-20200802204918980"></p><span>$W_{aa}$</span><!-- Has MathJax -->和<span>$W_{ax}$</span><!-- Has MathJax -->被合并为<span>$W_a$</span><!-- Has MathJax -->,<span>$W_{ya}$</span><!-- Has MathJax -->被简化为<span>$W_y$</span><!-- Has MathJax -->,<span>$a^{<t-1>}$</span><!-- Has MathJax -->和<span>$x^{<t>}$</span><!-- Has MathJax -->被vstack(垂直拼接)为一个向量。具体合并方法见图示<br><br>这样公式简化为:<br><br><span>$a^{<t>}=g(W_a[a^{<t-1>},x^{<t>}]+b_a)$</span><!-- Has MathJax --><span>$\hat{y}^{<t>}=g_2(W_{y}a^{<t>}+b_y)$</span><!-- Has MathJax --><p>其中<span>$W_a[a^{<t-1>},x^{<t>}]$</span><!-- Has MathJax -->对应:<br><span>$$W_a[a^{<t-1>},x^{<t>}]=\left[ \begin{matrix} \begin{array}{c:c} W_{aa} & W_{ax} \end{array} \end{matrix} \right]\left[ \begin{matrix} a^{<t-1>} \\ x^{<t>} \end{matrix}\right]=W_{aa}a^{<t-1>}+W_{ax}x^{<t>}$$</span><!-- Has MathJax --></p><h2 id="通过时间的反向传播"><a href="#通过时间的反向传播" class="headerlink" title="通过时间的反向传播"></a>通过时间的反向传播</h2><h3 id="损失函数"><a href="#损失函数" class="headerlink" title="损失函数"></a>损失函数</h3><p><img src="/images/1. 循环序列模型.assets/image-20200802212042028.png" alt="image-20200802212042028"></p><span>$\mathcal{L}^{<t>}(\hat{y}^{<t>},y^{<t>})=-y^{<t>}\log\hat{y}^{<t>}-(1-y^{<t>})\log(1-\hat{y}^{<t>})$</span><!-- Has MathJax --><span>$\mathcal{L}(\hat{y},y)=\sum^{T_y}_{t=1}\mathcal{L}^{<t>}(\hat{y}^{<t>},y^{<t>})$</span><!-- Has MathJax --><p>每个状态的损失函数是一个交叉熵,这个RNN的损失函数是每个状态损失的和</p><p><img src="/images/1. 循环序列模型.assets/image-20200802213135277.png" alt="image-20200802213135277"></p><p>用反向的随机梯度下降法来更新参数,其中有一个信息的反向传递尤为重要,就是上面红圈标注的,这个反向传播算法有一个专门的名字<em>backpropagation through time</em></p><h2 id="不同类型的神经网络"><a href="#不同类型的神经网络" class="headerlink" title="不同类型的神经网络"></a>不同类型的神经网络</h2><p><img src="/images/1. 
循环序列模型.assets/image-20200802184812691.png" alt="image-20200802184812691"></p><p>不同的序列模型输入和输出差异是很大的,其中不一定有<span>$T_x=T_y$</span><!-- Has MathJax -->,比如音乐生成的输入为空集,情感分类的输入为序列而输出为一个分类等,机器翻译的输入语言和输出语言长度不一定相等</p><h3 id="分类"><a href="#分类" class="headerlink" title="分类"></a>分类</h3><p><img src="/images/1. 循环序列模型.assets/image-20200802214524237.png" alt="image-20200802214524237"></p><p>左图的RNN为常规的RNN,每个状态都有一个输入和输出,是<em>Many-to-many</em>的;右图的RNN只有一个输出(这里的例子是针对情感分类任务),是<em>Many-to-one</em>的。当然其实还有<em>One-to-one</em>的,这种神经网络只有一个时间状态,就是传统意义上的神经网络</p><p><img src="/images/1. 循环序列模型.assets/image-20200802215430664.png" alt="image-20200802215430664"></p><p>上图中的左图为一个用于音乐生成的RNN,这是一个<em>One-to-many</em>的RNN,它的第一个输入为音乐的第一个音符♪或者音乐的类型,或者零向量(即表示空集),其后便只有输出没有输入</p><p>上图中的右图为一个用于机器翻译的RNN,它是一个<em>Many-to-many</em>的RNN,这个RNN先连续读入一段输入,再连续输出结果。这个RNN的前一段称为<strong>encoder</strong>,后一段称为<strong>decoder</strong></p><h2 id="语言模型和序列生成"><a href="#语言模型和序列生成" class="headerlink" title="语言模型和序列生成"></a>语言模型和序列生成</h2><h3 id="基本任务"><a href="#基本任务" class="headerlink" title="基本任务"></a>基本任务</h3><p>判断一个序列的先验概率</p><blockquote><p>The apple and pair salad<br>The apple and pear salad</p></blockquote><span>$$P(\text{The apple and pair salad})=? \\P(\text{The apple and pear salad})=?$$</span><!-- Has MathJax --><h3 id="Tokenization"><a href="#Tokenization" class="headerlink" title="Tokenization"></a>Tokenization</h3><p>获得每个单词(含&lt;EOS&gt;、UNK等)的token化表示</p><blockquote><p>Cats average 15 hours of sleep a day. &lt;EOS&gt;</p></blockquote><p>这里直接采用了上面提到的one-hot进行表示,<span>$n$</span><!-- Has MathJax -->维向量,其中1维为1,其它维为0,表示对应位置的单词</p><h3 id="模型"><a href="#模型" class="headerlink" title="模型"></a>模型</h3><p>模型的结构为一个<em>Many-to-many</em>的RNN</p><p>与基于统计的语言模型直接基于语料库计算后验概率不同,基于RNN的语言模型把语料库的每条语句的单词序列作为输入,预测的下个单词的序列作为输出来训练模型:</p><p><img src="/images/1. 
循环序列模型.assets/image-20200809203355448.png" alt="image-20200809203355448"></p><span>$a^{<0>}$</span><!-- Has MathJax -->和<span>$x^{<1>}$</span><!-- Has MathJax -->作为初始输入都为零向量,后面的<span>$\hat{y}^{<i>}$</span><!-- Has MathJax -->和<span>$x^{<i+1>}$</span><!-- Has MathJax -->是对应的,即<span>$x^{<i+1>}$</span><!-- Has MathJax -->不仅仅是<span>$i+1$</span><!-- Has MathJax -->状态的输入,还是<span>$i$</span><!-- Has MathJax -->状态输出的实际值(或者说label)。<span>$i$</span><!-- Has MathJax -->状态的输出为一个<span>$n$</span><!-- Has MathJax -->维向量,第<span>$i$</span><!-- Has MathJax -->维表示<span>$P(y^{<i>}=W_i|y^{<i-1>}y^{<i-2>}\cdots)$</span><!-- Has MathJax -->,其中<span>$W_i$</span><!-- Has MathJax -->表示one-hot向量中第<span>$i$</span><!-- Has MathJax -->个位置对应的词表中第的<span>$i$</span><!-- Has MathJax -->单词。<span>$i$</span><!-- Has MathJax -->状态的输出是通过softmax函数得出的,一方面代表输出出每个单词的概率,另一方面最大概率的那个下标也是预测的单词。这个RNN的loss函数可以计算状态输出和实际值的交叉熵,形式与前面的交叉熵类似:<br><br><span>$\mathcal{L}(\hat{y}^{<t>},y^{<t>})=-\sum_i{y_i^{<t>}\log\hat{y}_i^{<t>}}$</span><!-- Has MathJax --><span>$\mathcal{L}=\sum_t\mathcal{L}^{<t>}(\hat{y}^{<t>},y^{<t>})$</span><!-- Has MathJax --><p>如果要计算一个句子的概率,如上文中的<code>Cats average 15</code>,则代入公式<span>$P(y^{<1>},y^{<2>},y^{<3>})=P(y^{<1>})P(y^{<2>}|y^{<1>})P(y^{<3>}|y^{<1>}y^{<2>})$</span><!-- Has MathJax --></p><p>其中<span>$P(y^{<1>}), P(y^{<2>}|y^{<1>})$</span><!-- Has MathJax -->和<span>$P(y^{<3>}|y^{<1>}y^{<2>})$</span><!-- Has MathJax -->可分别由<span>$1,2$</span><!-- Has MathJax -->和<span>$3$</span><!-- Has MathJax -->状态的输出获得</p><h2 id="对新序列采样"><a href="#对新序列采样" class="headerlink" title="对新序列采样"></a>对新序列采样</h2><p>要检验模型确实学到了东西,可以尝试对序列进行采样</p><h3 id="词语级别的语言模型(Word-level-language-model)"><a href="#词语级别的语言模型(Word-level-language-model)" class="headerlink" title="词语级别的语言模型(Word-level language model)"></a>词语级别的语言模型(Word-level language model)</h3><p><img src="/images/1. 
循环序列模型.assets/image-20200930165919635.png" alt="image-20200930165919635"></p><p>从<span>$x^{<1>}$</span><!-- Has MathJax -->开始指定一个输入。接下来从它算出的输出<span>$\hat{y}^{<1>}$</span><!-- Has MathJax -->(注:<span>$\hat{y}^{<1>}$</span><!-- Has MathJax -->是一个概率分布矩阵)中随机选择(准确地说是<strong>根据概率分布选择</strong>)一个label作为下一个状态的输入<span>$x^{<2>}$</span><!-- Has MathJax -->。对于后面的输出和输入按照这样的方法进行操作,输出的label就可以构成一个句子。如何停止:一种方法是生成了<code><EOS></code>标记;另一种是可以指定生成的长度,如20。</p><p>为什么要根据概率分布选择而不是直接用argmax:如果直接用argmax,相同的初始输入<span>$x^{<1>}$</span><!-- Has MathJax -->得到的句子输出是完全一样的。</p><p>另外,要避免生成<code><UNK></code>这种未知单词。</p><h3 id="字符级别的语言模型(Character-level-language-model)"><a href="#字符级别的语言模型(Character-level-language-model)" class="headerlink" title="字符级别的语言模型(Character-level language model)"></a>字符级别的语言模型(Character-level language model)</h3><p>优点:主要是不存在词典中<strong>未登录词</strong>的问题,对于每一个单词都能给出一个概率分布,同时词表会很小</p><p>缺点:生成的序列很长,难以解决长距离依赖问题,而且训练成本比较高</p><h2 id="带有神经网络的梯度消失问题"><a href="#带有神经网络的梯度消失问题" class="headerlink" title="带有神经网络的梯度消失问题"></a>带有神经网络的梯度消失问题</h2><p>句子中可能出现<strong>长距离依赖</strong>问题,如:</p><blockquote><p>The <em>cat</em>, which already ate ….. , <em>was</em> full</p><p>The <em>cats</em>, which already ate ……, <em>were</em> full</p></blockquote><p>全连接神经网络存在<strong>梯度消失</strong>问题。在非常深的网络中,权重更新会变得很困难。具体地,在反向传播过程中,从输出<span>$\hat{y}$</span><!-- Has MathJax -->得到的梯度很难传播回去影响到前面层的权重。</p><p><img src="/images/1. 
循环序列模型.assets/image-20201015143544087.png" alt="image-20201015143544087"></p><p>RNN网络具有相似的问题,更具体地,是无法解决句子中的长距离依赖问题。</p><p>深度神经网络中还可能出现<strong>梯度爆炸</strong>的问题,在实践中一般模型训练会直接失败,参数中会出现许多的NaN。一个解决方法是采用梯度修剪(gradient clipping),如果梯度向量大于某个阈值,就缩放整个梯度向量。梯度爆炸比梯度消失更容易解决。</p><h3 id="GRU门控循环单元"><a href="#GRU门控循环单元" class="headerlink" title="GRU门控循环单元"></a>GRU门控循环单元</h3><h4 id="回顾RNN"><a href="#回顾RNN" class="headerlink" title="回顾RNN"></a>回顾RNN</h4><p>RNN的基本公式包括一个激活值<span>$a$</span><!-- Has MathJax -->和预测值<span>$\hat{y}$</span><!-- Has MathJax -->的计算:<br><span>$a^{<t>}=g(W_a[a^{<t-1>},x^{<t>}]+b_a)$</span><!-- Has MathJax --></p><span>$\hat{y}^{<t>}=g_2(W_{y}a^{<t>}+b_y)$</span><!-- Has MathJax --><p>RNN单元进行可视化,可以表示为下图:</p><p><img src="/images/1. 循环序列模型.assets/image-20201015150430055.png" alt="image-20201015150430055"></p><h3 id="简化的GRU单元结构"><a href="#简化的GRU单元结构" class="headerlink" title="简化的GRU单元结构"></a>简化的GRU单元结构</h3><p>首先理解一个简化版的GRU,与RNN相比,它在计算激活值<span>$a$</span><!-- Has MathJax -->上添加对长距离依赖的支持。</p><p>记<span>$c$</span><!-- Has MathJax -->为记忆单元(memory cell),用于存储当前的记忆信息。在GRU中,有<span>$c^{<t>}=a^{<t>}$</span><!-- Has MathJax -->,其中<span>$a^{<t>}$</span><!-- Has MathJax -->对应RNN中上一个状态传来的激活值。</p><p>这个简化版GRU的基本公式如下(这里去掉了与RNN相同的计算<span>$\hat{y}$</span><!-- Has MathJax -->的部分):<br><span>$\tilde{c}^{<t>}=\tanh(\omega_c[c^{<t-1>},x^{<t>}]+b_c)$</span><!-- Has MathJax --></p><span>$\Gamma_u=\sigma(\omega_u[c^{<t-1>},x^{<t>}]+b_u)$</span><!-- Has MathJax --><span>$c^{<t>}=\Gamma_u*\tilde{c}^{<t>}+(1-\Gamma_u)*c^{<t-1>}$</span><!-- Has MathJax --><p>可以看出来计算<span>$\tilde{c}^{<t>}$</span><!-- Has MathJax -->和<span>$\Gamma_u$</span><!-- Has MathJax -->的公式与RNN中更新<span>$a$</span><!-- Has MathJax -->的公式<span>$a^{<t>}=g(W_a[a^{<t-1>},x^{<t>}]+b_a)$</span><!-- Has MathJax -->非常相似,都是激活值<span>$a$</span><!-- Has MathJax -->(在GRU中为记忆单元<span>$c$</span><!-- Has MathJax -->)与输入<span>$x$</span><!-- Has MathJax -->一起乘以一个参数矩阵<span>$W$</span><!-- Has MathJax -->(GRU中为<span>$\omega$</span><!-- Has MathJax -->),加上一个偏置矩阵<span>$b$</span><!-- Has MathJax -->,最后再经过一个激活函数。不同的是GRU多了一个更新门<span>$\Gamma_u$</span><!-- Has MathJax -->的计算用于控制遗忘,同时对记忆单元<span>$c$</span><!-- Has MathJax -->(注意没有波浪线上标)的更新更为复杂。</p><p>其中<span>$\tilde{c}^{<t>}$</span><!-- Has MathJax -->是<span>$c^{<t>}$</span><!-- Has MathJax -->的候选值,用于更新<span>$c^{<t>}$</span><!-- Has MathJax -->的值;而<span>$\Gamma_u$</span><!-- Has MathJax -->为一个更新门(Update Gate),是一个元素取值在(0,1)内的矩阵,用于决定是否用当前的<span>$\tilde{c}^{<t>}$</span><!-- Has MathJax -->来更新<span>$c^{<t>}$</span><!-- Has MathJax -->(或者说遗忘之前的信息);最后计算<span>$c^{<t>}$</span><!-- Has MathJax -->,对记忆单元<span>$c$</span><!-- Has MathJax -->执行实际的更新操作(即是否用当前的信息<span>$\tilde{c}^{<t>}$</span><!-- Has MathJax -->替换掉之前的记忆<span>$c^{<t>}$</span><!-- Has MathJax -->)。</p><p>其中<span>$\Gamma_u$</span><!-- Has MathJax -->的激活函数是Sigmoid,为方便理解这里可以认为<span>$\Gamma_u$</span><!-- Has MathJax -->的取值为0或1,即用于控制<span>$c^{<t>}$</span><!-- Has MathJax -->是否更新(遗忘)。<span>$\Gamma_u*\tilde{c}^{<t>}$</span><!-- Has MathJax -->为逐元素相乘,若<span>$\Gamma_u$</span><!-- Has MathJax -->为1,则表示用当前的候选值<span>$\tilde{c}^{<t>}$</span><!-- Has MathJax -->来更新<span>$c^{<t>}$</span><!-- Has MathJax -->(同时遗忘之前的记忆);为0则表示不更新,<strong>以此保存之前的记忆</strong>。</p><p>另外需要注意的是<span>$\Gamma_u$</span><!-- Has MathJax -->、<span>$\tilde{c}^{<t>}$</span><!-- Has MathJax -->和<span>$c^{<t>}$</span><!-- Has MathJax -->可以是多维的。这种情况下,<span>$\Gamma_u$</span><!-- Has MathJax -->某一维的01取值决定了<span>$c^{<t>}$</span><!-- Has MathJax -->的对应维是否更新。</p><p>简化版GRU相比于RNN的改进主要是引入了一个更新门<span>$\Gamma_u$</span><!-- Has MathJax -->,它使得<span>$c$</span><!-- Has MathJax -->某些维的信息可以维持很多轮不更新,这样就保持了之前的记忆,解决长距离依赖问题。</p><p>这个简化版的GRU进行可视化,可以表示为下图,可以看出,这个简化版的GRU和RNN在输入输出的形式上是一样的,但<span>$c$</span><!-- Has MathJax -->的计算引入了遗忘机制。</p><p><img src="/images/1. 
循环序列模型.assets/image-20201015165916497.png" alt="image-20201015165916497"></p><h3 id="完整的GRU单元结构"><a href="#完整的GRU单元结构" class="headerlink" title="完整的GRU单元结构"></a>完整的GRU单元结构</h3><p>完整的GRU结构与上面简化的版本相比,基本思想是相同的,不同之处在于计算<span>$\tilde{c}$</span><!-- Has MathJax -->时引入了一个新的相关门<span>$\Gamma_r$</span><!-- Has MathJax -->。至于为什么这么用,诶Andrew给出的解释是实验得出的。<br><span>$\tilde{c}^{<t>}=\tanh(\omega_c[\Gamma_r*c^{<t-1>},x^{<t>}]+b_c)$</span><!-- Has MathJax --></p><span>$\Gamma_u=\sigma(\omega_u[c^{<t-1>},x^{<t>}]+b_u)$</span><!-- Has MathJax --><span>$\Gamma_r=\sigma(\omega_r[c^{<t-1>},x^{<t>}]+b_r)$</span><!-- Has MathJax --><span>$c^{<t>}=\Gamma_u*\tilde{c}^{<t>}+(1-\Gamma_u)*c^{<t-1>}$</span><!-- Has MathJax --><h3 id="LSTM单元结构"><a href="#LSTM单元结构" class="headerlink" title="LSTM单元结构"></a>LSTM单元结构</h3><p>相比于GRU,LSTM单元的结构要更加复杂,但也更强大。</p><p>与GRU不同,LSTM中,<span>$c^{<t>}\neq a^{<t>}$</span><!-- Has MathJax -->,<span>$c$</span><!-- Has MathJax -->与<span>$a$</span><!-- Has MathJax -->是分别计算的;不再采用相关门<span>$\Gamma_r$</span><!-- Has MathJax -->;<span>$c^{<t>}$</span><!-- Has MathJax -->的更新不只由更新门<span>$\Gamma_u$</span><!-- Has MathJax -->控制,而是由更新门<span>$\Gamma_u$</span><!-- Has MathJax -->和一个新的遗忘门<span>$\Gamma_f$</span><!-- Has MathJax -->控制;<span>$a^{<t>}$</span><!-- Has MathJax -->的更新由一个额外的输出门<span>$\Gamma_o$</span><!-- Has MathJax -->控制<br><span>$\tilde{c}^{<t>}=\tanh(\omega_c[c^{<t-1>},x^{<t>}]+b_c)$</span><!-- Has MathJax --></p><span>$\Gamma_u=\sigma(\omega_u[a^{<t-1>},x^{<t>}]+b_u)$</span><!-- Has MathJax --><span>$\Gamma_f=\sigma(\omega_f[a^{<t-1>},x^{<t>}]+b_f)$</span><!-- Has MathJax --><span>$\Gamma_o=\sigma(\omega_o[a^{<t-1>},x^{<t>}]+b_o)$</span><!-- Has MathJax --><span>$c^{<t>}=\Gamma_u*\tilde{c}^{<t>}+\Gamma_f*c^{<t-1>}$</span><!-- Has MathJax --><span>$a^{<t>}=\Gamma_o*c^{<t>}$</span><!-- Has MathJax --><p>LSTM单元进行可视化,可以表示为下图:</p><p><img src="/images/1. 
循环序列模型.assets/image-20201016171732787.png" alt="image-20201016171732787"></p><p>LSTM有3个门而GRU有2个门。与GRU相比,LSTM的单个单元要更加强大和灵活,但计算代价要更大;GRU适合构造更大规模的网络。</p><h2 id="双向神经网络"><a href="#双向神经网络" class="headerlink" title="双向神经网络"></a>双向神经网络</h2><p>单向神经网络只能利用一个方向的语义信息,但有时这是不够的。</p><blockquote><p>He said, “Teddy bears are on sale!”</p><p>He said, “Teddy Roosevelt was a great President!”</p></blockquote><p>比如上面的例子,要判断<code>Teddy</code>是人名还是泰迪熊,需要用到句子后面的信息。</p><p>双向RNN的结构如下图,激活函数需要用到来自两个方向的激活值:</p><p><img src="/images/1. 循环序列模型.assets/image-20201016210337843.png" alt="image-20201016210337843"></p><p>其中的单元不仅可以是普通的RNN单元,还可以是GRU或LSTM单元,对应BiGRU或BiLSTM。</p><p>双向神经网络的<strong>缺点</strong>:双向神经网络要求获取完整的序列数据,这在语音识别等场景下不是非常适用(如语音识别需要说话者说完一整句话才能开始识别)。</p><h2 id="深层循环神经网络"><a href="#深层循环神经网络" class="headerlink" title="深层循环神经网络"></a>深层循环神经网络</h2><p>RNN可以在结构上嵌套以构造深层的网络结构。深层RNN的结构分为两个维度,分别是横向的时间维度和纵向的维度。每一个单元可以利用到之前时间和之前层的激活值。而横向时间维度和纵向的维度是相互独立的,即可以出现在前两个时刻疯狂堆层数来预测<span>$y^{<1>}$</span><!-- Has MathJax -->和<span>$y^{<2>}$</span><!-- Has MathJax -->,而这些层在横向上并不相连的情况。</p><p><img src="/images/1. 循环序列模型.assets/image-20201016211416102.png" alt="image-20201016211416102"></p><p>由于深层RNN多了一个横向的时间维度,有少量的层数RNN网络其实已经非常大了。所以与深层CNN相比,深层RNN在深度上会浅一些。</p>]]></content>
<summary type="html">
<h1 id="Andrew-Ng循环序列模型-学习笔记"><a href="#Andrew-Ng循环序列模型-学习笔记" class="headerlink" title="Andrew Ng循环序列模型 学习笔记"></a>Andrew Ng循环序列模型 学习笔记</h1><p>笔记课程来源:<a href="https://mooc.study.163.com/learn/2001280005" rel="external nofollow noopener noreferrer" target="_blank">https://mooc.study.163.com/learn/2001280005</a></p>
<h2 id="应用例子"><a href="#应用例子" class="headerlink" title="应用例子"></a>应用例子</h2><ol>
<li>语音识别</li>
<li>音乐生成</li>
<li>情感分类</li>
<li>DNA序列分析</li>
<li>机器翻译</li>
<li>视频活动识别</li>
<li>命名实体识别</li>
</ol>
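<p>补充:正文中把<span>$W_{aa}$</span>和<span>$W_{ax}$</span>水平拼接为<span>$W_a$</span>、把激活值和输入垂直拼接(vstack)的化简,可以用几行 numpy 数值验证(示意草稿,隐状态维度 n_a、输入维度 n_x 为随意假设):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n_a, n_x = 4, 3  # 假设的隐状态维度和输入维度

W_a = rng.standard_normal((n_a, n_a + n_x))  # [W_aa | W_ax] 水平拼接
b_a = np.zeros((n_a, 1))

def rnn_step(a_prev, x_t):
    """一步 RNN 前向:a_t = tanh(W_a [a_prev; x_t] + b_a)"""
    concat = np.vstack([a_prev, x_t])  # 把 a 和 x 垂直拼接为一个向量
    return np.tanh(W_a @ concat + b_a)

a_prev = np.zeros((n_a, 1))
x_t = rng.standard_normal((n_x, 1))
a_t = rnn_step(a_prev, x_t)

# 拆回 W_aa、W_ax,验证合并形式与分开计算 W_aa@a + W_ax@x 完全一致
W_aa, W_ax = W_a[:, :n_a], W_a[:, n_a:]
assert np.allclose(a_t, np.tanh(W_aa @ a_prev + W_ax @ x_t + b_a))
```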
<p><img src="/images/1. 循环序列模型.assets/image-20200802184812691.png" alt="image-20200802184812691"><br>
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="序列模型" scheme="https://www.mhlwsk.com/tags/%E5%BA%8F%E5%88%97%E6%A8%A1%E5%9E%8B/"/>
</entry>
<entry>
<title>《PyTorch深度学习实践》6.逻辑斯蒂回归 笔记</title>
<link href="https://www.mhlwsk.com/archives/17385.html"/>
<id>https://www.mhlwsk.com/archives/17385.html</id>
<published>2020-07-31T14:48:47.000Z</published>
<updated>2020-09-19T12:48:11.533Z</updated>
<content type="html"><![CDATA[<h1 id="逻辑斯蒂回归"><a href="#逻辑斯蒂回归" class="headerlink" title="逻辑斯蒂回归"></a>逻辑斯蒂回归</h1><p>逻辑斯蒂回归是一个<strong>分类模型</strong></p><h2 id="分类问题"><a href="#分类问题" class="headerlink" title="分类问题"></a>分类问题</h2><p>相比于回归,分类问题的结果是离散的值(或者类别的概率)</p><a id="more"></a><h3 id="逻辑函数"><a href="#逻辑函数" class="headerlink" title="逻辑函数"></a>逻辑函数</h3><p>$$<br>\sigma(x)=\frac{1}{1+e^{-x}}<br>$$</p><p><img src="/images/3. 逻辑斯蒂回归.assets/image-20200731213425792.png" alt="image-20200731213425792"></p><p>上面的逻辑函数是sigmoid函数(其实sigmoid是一类函数的叫法,特征是$[-1,1]$内单增的饱和函数,逻辑函数是其中最典型的一种),它是一个饱和函数(随着值的增大导数趋近于零),作用是把无穷区间上的值映射到$[0.0,1.0]$之间。最简单的做法,可以把线性模型$\hat{y} = x * \omega + b$的输出作为sigmoid的输入,以得到$[0.0,1.0]$之间的概率:</p><p><img src="/images/3. 逻辑斯蒂回归.assets/image-20200731214414898.png" alt="image-20200731214414898"></p><p>对应的损失函数也要发生变化:</p><p><img src="/images/3. 逻辑斯蒂回归.assets/image-20200731214647221.png" alt="image-20200731214647221"></p><p>上图中的第二个loss函数就是交叉熵:</p><p>假设有两个概率分布$P_D$和$P_T$,则衡量它们差异的交叉熵公式为:</p><p>$$<br>\sum_iP_D(X=i)\ln P_T(X=i)<br>$$</p><p>与一般的信息熵相比,主要差别在于<strong>事实</strong>的概率乘以<strong>预测</strong>的概率的ln</p><p>图中公式二(BCE, Binary Cross Entropy)优化的目标是loss最小,对应事实的分布与预测的分布最接近</p><p>二分类的交叉熵代码很简洁:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">CrossEntropy</span><span class="params">(yHat, y)</span>:</span></span><br><span class="line"> <span class="keyword">if</span> y == <span class="number">1</span>:</span><br><span class="line"> <span class="keyword">return</span> -log(yHat)</span><br><span class="line"> <span class="keyword">else</span>:</span><br><span class="line"> <span class="keyword">return</span> -log(<span class="number">1</span> - 
yHat)</span><br></pre></td></tr></table></figure><p>针对多个样本的情况可以取一下均值:</p><p><img src="/images/3. 逻辑斯蒂回归.assets/image-20200731220930095.png" alt="image-20200731220930095"></p><p>或者<a href="https://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html#:~:text=Cross%2Dentropy%20loss%2C%20or%20log,diverges%20from%20the%20actual%20label." rel="external nofollow noopener noreferrer" target="_blank">网上</a>有另一种形式更好理解些:</p><p>$-\sum_{c=1}^My_{o,c}\log(p_{o,c})$</p><ul><li>M - number of classes (dog, cat, fish)</li><li>log - the natural log</li><li>y - <strong>binary indicator (0 or 1)</strong> if class label $c$ is the correct classification for observation $o$</li><li>p - predicted probability observation $o$ is of class $c$</li></ul><h2 id="实验中所用代码"><a href="#实验中所用代码" class="headerlink" title="实验中所用代码"></a>实验中所用代码</h2><p>模型:<br><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> torch</span><br><span class="line"></span><br><span 
class="line"><span class="comment"># Prepare dataset</span></span><br><span class="line">x_data = torch.Tensor([[<span class="number">1.0</span>], [<span class="number">2.0</span>], [<span class="number">3.0</span>]])</span><br><span class="line">y_data = torch.Tensor([[<span class="number">0</span>], [<span class="number">0</span>], [<span class="number">1</span>]])</span><br><span class="line"></span><br><span class="line"><span class="comment"># Design model using Class</span></span><br><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">LogisticRegressionModel</span><span class="params">(torch.nn.Module)</span>:</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">__init__</span><span class="params">(self)</span>:</span></span><br><span class="line"> super(LogisticRegressionModel, self).__init__()</span><br><span class="line"> self.linear = torch.nn.Linear(<span class="number">1</span>, <span class="number">1</span>)</span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(self, x)</span>:</span></span><br><span class="line"> y_pred = torch.sigmoid(self.linear(x))</span><br><span class="line"> <span class="keyword">return</span> y_pred</span><br><span class="line"> </span><br><span class="line">model = LogisticRegressionModel()</span><br><span class="line"></span><br><span class="line"><span class="comment"># Construct loss and optimizer</span></span><br><span class="line">criterion = torch.nn.BCELoss(size_average=<span class="literal">False</span>)</span><br><span class="line">optimizer = torch.optim.SGD(model.parameters(), lr=<span class="number">0.01</span>)</span><br><span class="line"></span><br><span class="line"><span class="comment"># Training cycle</span></span><br><span class="line"><span class="keyword">for</span> epoch <span class="keyword">in</span> range(<span 
class="number">1000</span>):</span><br><span class="line"> y_pred = model(x_data)</span><br><span class="line"> loss = criterion(y_pred, y_data)</span><br><span class="line"> print(epoch, loss.item())</span><br><span class="line"></span><br><span class="line"> optimizer.zero_grad()</span><br><span class="line"> loss.backward()</span><br><span class="line"> optimizer.step()</span><br></pre></td></tr></table></figure></p><p> 图形化:<br> <figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"> <span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="keyword">import</span> matplotlib.pyplot <span class="keyword">as</span> plt</span><br><span class="line"></span><br><span class="line">x = np.linspace(<span class="number">0</span>, <span class="number">10</span>, <span class="number">200</span>)</span><br><span class="line">x_t = torch.Tensor(x).view((<span class="number">200</span>, <span class="number">1</span>)) <span class="comment"># 类似于numpy中的reshape</span></span><br><span class="line">y_t = model(x_t)</span><br><span class="line">y = y_t.data.numpy()</span><br><span class="line">plt.plot(x, y)</span><br><span class="line">plt.plot([<span class="number">0</span>, <span class="number">10</span>], [<span class="number">0.5</span>, <span class="number">0.5</span>], c=<span class="string">'r'</span>)</span><br><span class="line">plt.xlabel(<span class="string">'Hours'</span>)</span><br><span class="line">plt.ylabel(<span class="string">'Probability 
of Pass'</span>)</span><br><span class="line">plt.grid()</span><br><span class="line">plt.show()</span><br></pre></td></tr></table></figure></p><p><a href="https://colab.research.google.com/drive/1PrzFcvMjWJAPF1KJqFyUk186wZuKdYKW?usp=sharing" rel="external nofollow noopener noreferrer" target="_blank">在Colab上运行</a></p><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=2" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
<summary type="html">
<h1 id="逻辑斯蒂回归"><a href="#逻辑斯蒂回归" class="headerlink" title="逻辑斯蒂回归"></a>逻辑斯蒂回归</h1><p>逻辑斯蒂回归是一个<strong>分类模型</strong></p>
<h2 id="分类问题"><a href="#分类问题" class="headerlink" title="分类问题"></a>分类问题</h2><p>相比于回归,分类问题的结果是离散的值(或者类别的概率)</p>
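<p>补充一个最小示例(示意草稿,仅用标准库):sigmoid 把线性输出映射为 (0,1) 的概率,二分类交叉熵在预测偏离事实标签时给出更大的 loss。</p>

```python
import math

def sigmoid(z):
    # 把任意实数映射到 (0, 1),作为正类概率
    return 1.0 / (1.0 + math.exp(-z))

def bce(y_hat, y):
    # 二分类交叉熵:y 为 0/1 标签,y_hat 为预测的正类概率
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

assert abs(sigmoid(0.0) - 0.5) < 1e-12          # 线性输出为 0 时概率居中
assert bce(0.9, 1) < bce(0.5, 1) < bce(0.1, 1)  # 预测越接近事实,loss 越小
```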
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
<entry>
<title>《PyTorch深度学习实践》2.线性模型 笔记</title>
<link href="https://www.mhlwsk.com/archives/21981.html"/>
<id>https://www.mhlwsk.com/archives/21981.html</id>
<published>2020-07-30T16:31:17.000Z</published>
<updated>2020-08-02T15:06:31.111Z</updated>
<content type="html"><![CDATA[<h1 id="线性模型"><a href="#线性模型" class="headerlink" title="线性模型"></a>线性模型</h1><h2 id="基本流程"><a href="#基本流程" class="headerlink" title="基本流程"></a>基本流程</h2><ol><li>Dataset<br> <code>traing set</code>, <code>dev set</code>, <code>test set</code></li><li>Model</li><li>Training</li><li>infering</li></ol><h2 id="线性模型-1"><a href="#线性模型-1" class="headerlink" title="线性模型"></a>线性模型</h2><h3 id="Linear-model"><a href="#Linear-model" class="headerlink" title="Linear model"></a>Linear model</h3><p>$$<br>\hat{y} = x * \omega<br>$$</p><p>上面是本次实验中用到的<br>更一般的线性模型要加一个偏置b,形式如下:</p><p>$$<br>\hat{y} = x * \omega + b<br>$$</p><h3 id="Training-Loss-Error"><a href="#Training-Loss-Error" class="headerlink" title="Training Loss(Error)"></a>Training Loss(Error)</h3><p>$$<br>loss = (\hat{y}-y)^2=(x*\omega-y)^2<br>$$</p><p>这是针对每一个样本的</p><a id="more"></a><h3 id="Mean-Square-Error"><a href="#Mean-Square-Error" class="headerlink" title="Mean Square Error"></a>Mean Square Error</h3><p>$$<br>cost = \frac{1}{N}\sum_{n=1}^{N}(\hat{y_n}-y_n)^2<br>$$</p><p>这是针对整个测试集的</p><h3 id="实验中所采用代码"><a href="#实验中所采用代码" class="headerlink" title="实验中所采用代码"></a>实验中所采用代码</h3><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span 
class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> numpy <span class="keyword">as</span> np</span><br><span class="line"><span class="keyword">import</span> matplotlib.pyplot <span class="keyword">as</span> plt</span><br><span class="line"></span><br><span class="line">x_data = [<span class="number">1.0</span>, <span class="number">2.0</span>, <span class="number">3.0</span>]</span><br><span class="line">y_data = [<span class="number">2.0</span>, <span class="number">4.0</span>, <span class="number">6.0</span>]</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">forward</span><span class="params">(x)</span>:</span></span><br><span class="line"> <span class="keyword">return</span> x * w</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">loss</span><span class="params">(x, y)</span>:</span></span><br><span class="line"> y_pred = forward(x)</span><br><span class="line"> <span class="keyword">return</span> (y_pred - y) * (y_pred - y)</span><br><span class="line"></span><br><span class="line">w_list = []</span><br><span class="line">mse_list = []</span><br><span class="line"><span class="keyword">for</span> w <span class="keyword">in</span> np.arange(<span class="number">0.0</span>, <span class="number">4.1</span>, <span class="number">0.1</span>):</span><br><span class="line"> print(<span class="string">'w='</span>, w)</span><br><span class="line"> l_sum =<span class="number">0</span></span><br><span class="line"> <span class="keyword">for</span> x_val, y_val <span class="keyword">in</span> zip(x_data, y_data):</span><br><span class="line"> y_pred_val = 
forward(x_val)</span><br><span class="line"> loss_val = loss(x_val, y_val)</span><br><span class="line"> l_sum += loss_val</span><br><span class="line"> print(<span class="string">'\t'</span>, x_val, y_val, y_pred_val, loss_val)</span><br><span class="line"> print(<span class="string">'MSE='</span>, l_sum / <span class="number">3</span>)</span><br><span class="line"> w_list.append(w)</span><br><span class="line"> mse_list.append(l_sum / <span class="number">3</span>)</span><br><span class="line"></span><br><span class="line">plt.plot(w_list, mse_list)</span><br><span class="line">plt.ylabel(<span class="string">'Loss'</span>)</span><br><span class="line">plt.xlabel(<span class="string">'w'</span>)</span><br><span class="line">plt.show()</span><br></pre></td></tr></table></figure><p>其中对于参数w是采用枚举的方式来找的,实际不会采用这个做法。但调超参的话理论上可以用类似暴力的做法(虽然如果在公用机器上这么做会被骂)</p><p><a href="https://colab.research.google.com/drive/1_bwu8HW5jVRRaHwLURPaMkopz6ZwIBTp?usp=sharing" rel="external nofollow noopener noreferrer" target="_blank">在Colab上运行</a></p><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=2" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
<summary type="html">
<h1 id="线性模型"><a href="#线性模型" class="headerlink" title="线性模型"></a>线性模型</h1><h2 id="基本流程"><a href="#基本流程" class="headerlink" title="基本流程"></a>基本流程</h2><ol>
<li>Dataset<br> <code>training set</code>, <code>dev set</code>, <code>test set</code></li>
<li>Model</li>
<li>Training</li>
<li>Inferring</li>
</ol>
<h2 id="线性模型-1"><a href="#线性模型-1" class="headerlink" title="线性模型"></a>线性模型</h2><h3 id="Linear-model"><a href="#Linear-model" class="headerlink" title="Linear model"></a>Linear model</h3><p>$$<br>\hat{y} = x * \omega<br>$$</p>
<p>上面是本次实验中用到的<br>更一般的线性模型要加一个偏置b,形式如下:</p>
<p>$$<br>\hat{y} = x * \omega + b<br>$$</p>
<h3 id="Training-Loss-Error"><a href="#Training-Loss-Error" class="headerlink" title="Training Loss(Error)"></a>Training Loss(Error)</h3><p>$$<br>loss = (\hat{y}-y)^2=(x*\omega-y)^2<br>$$</p>
<p>这是针对每一个样本的</p>
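<p>逐样本损失与数据集上 MSE 的关系,可以用正文实验的同一组数据(满足 y = 2x)快速验证(示意草稿):对每个 w 求所有样本平方误差的均值,w=2 时 MSE 应为 0。</p>

```python
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

def loss(x, y, w):
    # 逐样本损失:(x*w - y)^2
    return (x * w - y) ** 2

def mse(w):
    # 整个数据集上的均方误差
    return sum(loss(x, y, w) for x, y in zip(x_data, y_data)) / len(x_data)

# 与正文相同的枚举方式:在 [0, 4] 上以 0.1 为步长找 MSE 最小的 w
best_w = min((i / 10 for i in range(41)), key=mse)
```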
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
<entry>
<title>《PyTorch深度学习实践》1.Overview 笔记</title>
<link href="https://www.mhlwsk.com/archives/5957.html"/>
<id>https://www.mhlwsk.com/archives/5957.html</id>
<published>2020-07-30T16:27:18.000Z</published>
<updated>2020-07-31T14:41:10.232Z</updated>
<content type="html"><![CDATA[<h1 id="Overview"><a href="#Overview" class="headerlink" title="Overview"></a>Overview</h1><h2 id="几种思维方式"><a href="#几种思维方式" class="headerlink" title="几种思维方式"></a>几种思维方式</h2><h3 id="算法课"><a href="#算法课" class="headerlink" title="算法课"></a>算法课</h3><p>计算过程来自人工设计</p><ul><li>穷举法</li><li>贪心法</li><li>分治法</li><li>动态规划</li><li>…</li></ul><h3 id="机器学习"><a href="#机器学习" class="headerlink" title="机器学习"></a>机器学习</h3><p>计算过程来自数据,基于统计,方法如最大似然、最大后验</p><ul><li>Data</li><li>DataSet</li></ul><a id="more"></a><h2 id="人工智能领域分类(出自Deep-learning-by-Bengio)"><a href="#人工智能领域分类(出自Deep-learning-by-Bengio)" class="headerlink" title="人工智能领域分类(出自Deep learning by Bengio)"></a>人工智能领域分类(出自<em>Deep learning by Bengio</em>)</h2><p><img src="/images/1. Overview 笔记.assets/image-20200727220619808.png" alt="image-20200727220619808"></p><p>注意这里面的表示学习,早期方法如shallow autoencoders</p><h3 id="基于规则的系统(早期人工智能方案)"><a href="#基于规则的系统(早期人工智能方案)" class="headerlink" title="基于规则的系统(早期人工智能方案)"></a>基于规则的系统(早期人工智能方案)</h3><p><img src="/images/1. Overview 笔记.assets/image-20200727221448608.png" alt="image-20200727221448608"></p><h3 id="传统机器学习"><a href="#传统机器学习" class="headerlink" title="传统机器学习"></a>传统机器学习</h3><p><img src="/images/1. Overview 笔记.assets/image-20200727221550455.png" alt="image-20200727221550455"></p><p>这里强调Hand-designed feature,特征需人工提取。曾经较火的模型如SVM。</p><p><img src="/images/1. Overview 笔记.assets/image-20200727223034331.png" alt="image-20200727223034331"></p><h4 id="局限性"><a href="#局限性" class="headerlink" title="局限性"></a>局限性</h4><ul><li>手动设计的特征有限</li><li>SVM对大数据集处理能力有限</li><li>许多应用需要处理无结构数据,机器学习对这类数据需要手动构建特征</li></ul><h3 id="表示学习(Representation-learning)"><a href="#表示学习(Representation-learning)" class="headerlink" title="表示学习(Representation learning)"></a>表示学习(Representation learning)</h3><blockquote><p>补充:维度诅咒</p><p>feature增加,对样本数要求非线性增加<br>对应的需求:压缩维度</p></blockquote><p>表示学习希望能用低维空间表示高维空间的信息。方法如<em>Manifold流形</em>。</p><p><img src="/images/1. 
Overview 笔记.assets/image-20200727222431377.png" alt="image-20200727222431377"></p><p>深度学习所利用的特征更加简单了,不需要手动提取feature。Deep learning的训练过程是end-to-end的<br>目前的神经网络,来源于神经科学,但应用上更接近于数学和工程学<br>神经网络最重要的算法是back-propagation,基于计算图</p><h3 id="深度学习近几年的发展"><a href="#深度学习近几年的发展" class="headerlink" title="深度学习近几年的发展"></a>深度学习近几年的发展</h3><ul><li>算法更新</li><li>数据增多</li><li>算力变强</li></ul><p>课程来源:<a href="https://www.bilibili.com/video/BV1Y7411d7Ys?p=1" rel="external nofollow noopener noreferrer" target="_blank">《PyTorch深度学习实践》完结合集</a></p>]]></content>
<summary type="html">
<h1 id="Overview"><a href="#Overview" class="headerlink" title="Overview"></a>Overview</h1><h2 id="几种思维方式"><a href="#几种思维方式" class="headerlink" title="几种思维方式"></a>几种思维方式</h2><h3 id="算法课"><a href="#算法课" class="headerlink" title="算法课"></a>算法课</h3><p>计算过程来自人工设计</p>
<ul>
<li>穷举法</li>
<li>贪心法</li>
<li>分治法</li>
<li>动态规划</li>
<li>…</li>
</ul>
<h3 id="机器学习"><a href="#机器学习" class="headerlink" title="机器学习"></a>机器学习</h3><p>计算过程来自数据,基于统计,方法如最大似然、最大后验</p>
<ul>
<li>Data</li>
<li>DataSet</li>
</ul>
</summary>
<category term="深度学习" scheme="https://www.mhlwsk.com/tags/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0/"/>
<category term="PyTorch" scheme="https://www.mhlwsk.com/tags/PyTorch/"/>
</entry>
<entry>
<title>[2019春软件构造]期末总结(思维导图)</title>
<link href="https://www.mhlwsk.com/archives/19868.html"/>
<id>https://www.mhlwsk.com/archives/19868.html</id>
<published>2019-06-24T13:33:44.000Z</published>
<updated>2020-09-18T09:09:13.653Z</updated>
<content type="html"><![CDATA[<p>图片较大,请右键另存为查看。<br><a id="more"></a></p><p><img src="/images/2019-software-construction/software-construction.png" alt="pic1"></p>]]></content>
<summary type="html">
<p>图片较大,请右键另存为查看。<br>
</summary>
<category term="Java" scheme="https://www.mhlwsk.com/tags/Java/"/>
<category term="软件构造" scheme="https://www.mhlwsk.com/tags/%E8%BD%AF%E4%BB%B6%E6%9E%84%E9%80%A0/"/>
</entry>
<entry>
<title>[2019春软件构造]优化笔记:我是如何将实验五的建图操作压缩到1.5s的</title>
<link href="https://www.mhlwsk.com/archives/3309.html"/>
<id>https://www.mhlwsk.com/archives/3309.html</id>
<published>2019-05-31T16:49:42.000Z</published>
<updated>2020-09-18T09:08:55.321Z</updated>
<content type="html"><![CDATA[<p>本文使用JGraphT存储Social Network中的人际关系图结构,使用JGraphT中的预设算法压缩建图时间。</p><p>注:这里的建图时间包含最短路求解与轨道插入,下文重点优化最短路求解时间。</p><h2 id="背景"><a href="#背景" class="headerlink" title="背景"></a>背景</h2><p>之前写完软件构造Lab2时,rainywang有建议过将Lab2的Graph ADT进行封装以便之后的使用。</p><p>在写Lab3时的Social Network时,为了保存和处理关系图,一开始我直接迁移了Lab2的代码,将Lab2的ADT搬到Lab3并添加了数个API。但看到标签为Larger的几个文件,心里还是对Lab2中<strong>naïve</strong>的ADT实现感到担忧。仅仅一个返回邻接顶点的操作时间复杂度都要O(n),明显它是不能够胜任Lab3或者之后的Lab5的性能要求的。</p><p>rainywang在Lab2课程结束时,不仅建议过封装Lab2的ADT,也提出了这样一个疑问:网上是否已经有相关图算法ADT了?这个既是疑问又带有明显暗示的说法让人感到不安,显然图算法这个轮子已经被人重造过很多次了,我当然有理由相信有人已经为之写出了不错的java库。</p><p>抱着找找看的心理,在Google上尝试搜索了下”java graph library”,在返回的第一个结果发现了<a href="https://jgrapht.org/" rel="external nofollow noopener noreferrer" target="_blank">JGraphT</a>,它的开源许可证为 <code>Eclipse Public License - v 2.0</code>。<br><a id="more"></a></p><p><img src="https://jgrapht.org/img/logo.png" alt="JGraphT Logo"></p><blockquote><h3 id="a-Java-library-of-graph-theory-data-structures-and-algorithms"><a href="#a-Java-library-of-graph-theory-data-structures-and-algorithms" class="headerlink" title="a Java library of graph theory data structures and algorithms"></a>a Java library of graph theory data structures and algorithms</h3><h3 id="flexible"><a href="#flexible" class="headerlink" title="flexible"></a><em>flexible</em></h3><h5 id="any-object-can-be-used-for-vertex-and-edge-types-with-full-type-safety-via-generics"><a href="#any-object-can-be-used-for-vertex-and-edge-types-with-full-type-safety-via-generics" class="headerlink" title="any object can be used for vertex and edge types, with full type safety via generics"></a><strong>any object</strong> can be used for vertex and edge types, with full <strong>type safety</strong> via generics</h5><h5 id="edges-can-be-directed-or-undirected-weighted-or-unweighted"><a href="#edges-can-be-directed-or-undirected-weighted-or-unweighted" class="headerlink" title="edges can be directed or undirected, weighted or unweighted"></a>edges can 
be <strong>directed</strong> or <strong>undirected</strong>, <strong>weighted</strong> or <strong>unweighted</strong></h5><h5 id="simple-graphs-multigraphs-and-pseudographs"><a href="#simple-graphs-multigraphs-and-pseudographs" class="headerlink" title="simple graphs, multigraphs, and pseudographs"></a><strong>simple graphs</strong>, <strong>multigraphs</strong>, and <strong>pseudographs</strong></h5><h3 id="powerful"><a href="#powerful" class="headerlink" title="powerful"></a><em>powerful</em></h3><h5 id="specialized-iterators-for-graph-traversal-DFS-BFS-etc"><a href="#specialized-iterators-for-graph-traversal-DFS-BFS-etc" class="headerlink" title="specialized iterators for graph traversal (DFS, BFS, etc)"></a>specialized <strong>iterators</strong> for graph traversal (<strong>DFS</strong>, <strong>BFS</strong>, etc)</h5><h5 id="algorithms-for-path-finding-clique-detection-isomorphism-detection-coloring-common-ancestors-tours-connectivity-matching-cycle-detection-partitions-cuts-flows-centrality-spanning-and-the-list-goes-on"><a href="#algorithms-for-path-finding-clique-detection-isomorphism-detection-coloring-common-ancestors-tours-connectivity-matching-cycle-detection-partitions-cuts-flows-centrality-spanning-and-the-list-goes-on" class="headerlink" title="algorithms for path finding, clique detection, isomorphism detection, coloring, common ancestors, tours, connectivity, matching, cycle detection, partitions, cuts, flows, centrality, spanning, and the list goes on"></a><strong>algorithms</strong> for path finding, clique detection, isomorphism detection, coloring, common ancestors, tours, connectivity, matching, cycle detection, partitions, cuts, flows, centrality, spanning, <strong>and the list goes on</strong></h5><h3 id="efficient"><a href="#efficient" class="headerlink" title="efficient"></a><em>efficient</em></h3><h5 id="designed-for-performance-with-near-native-speed-in-many-cases"><a href="#designed-for-performance-with-near-native-speed-in-many-cases" 
class="headerlink" title="designed for performance, with near-native speed in many cases"></a>designed for performance, with <strong>near-native</strong> speed in many cases</h5><h5 id="adapters-for-memory-optimized-fastutil-representation"><a href="#adapters-for-memory-optimized-fastutil-representation" class="headerlink" title="adapters for memory-optimized fastutil representation"></a>adapters for memory-optimized <strong>fastutil</strong> representation</h5></blockquote><p>支持带权有向边、无向边,实现好的BFS和最短路算法,以及<strong>高效率</strong>。满足了Lab3的所有要求,同时也许能顺便满足Lab5的性能需求。此时我还有些犹豫,因为我不确定在Lab3 deadline将至的情况下,JGraphT的学习成本是否可以平衡掉之后代码的调试时间。不过考虑到JGraphT的算法实现较全可以比较容易的满足需求的变化,以及著名的<em>Don’t reinvent the wheel</em>原则,我还是花了两个小时来阅读文档,事后证明这是一个没有让我后悔的决定。</p><h2 id="实现"><a href="#实现" class="headerlink" title="实现"></a>实现</h2><p>JGraphT的Graph ADT默认支持两种边:带权边<code>DefaultWeightedEdge</code>和无权边<code>DefaultEdge</code>。Social Network需要保存人与人之间的关系以及亲密度,且<code>312change</code>中人的关系是有向的。因此我创建了两张有向图,分别用于保存人与人之间的关系以及亲密度:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">protected</span> Graph<String, DefaultWeightedEdge> intimacy = <span class="keyword">new</span> SimpleDirectedWeightedGraph<>(DefaultWeightedEdge<span class="class">.<span class="keyword">class</span>)</span>; <span class="comment">//亲密度图</span></span><br><span class="line"><span class="keyword">protected</span> Graph<String, DefaultEdge> relation = <span class="keyword">new</span> SimpleDirectedGraph<>(DefaultEdge<span class="class">.<span class="keyword">class</span>)</span>; <span class="comment">//关系图</span></span><br></pre></td></tr></table></figure><p>对节点和边的操作类似于Lab2中定义的API:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td 
class="code"><pre><span class="line">intimacy.addVertex(name1); <span class="comment">//插入节点</span></span><br><span class="line">DefaultWeightedEdge e = intimacy.addEdge(name1, name2); <span class="comment">//插入边</span></span><br><span class="line">intimacy.setEdgeWeight(e, intimacy); <span class="comment">//设置边权</span></span><br><span class="line">relation.removeEdge(name1, name2); <span class="comment">//删除边</span></span><br></pre></td></tr></table></figure><p>计算Friend所在的轨道需要得到Friend与中心点User的最短距离。担心之后可能出现奇怪的需求(如要求轨道以亲密度的最短距离来定义),我一开始没有根据<strong>关系没有边权</strong>或者说<strong>图边权相等</strong>这一条件采用BFS,而采用了在最短路问题中适用度更高的Dijkstra算法。JGraphT中的Dijkstra算法实现疑似使用了Decorator模式,调用十分简洁:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br></pre></td><td class="code"><pre><span class="line">DijkstraShortestPath<String, DefaultEdge> dijkstraAlg = <span class="keyword">new</span> DijkstraShortestPath<>(relation); <span class="comment">//定义算法</span></span><br><span class="line">SingleSourcePaths<String, DefaultEdge> iPaths = dijkstraAlg.getPaths(centralName); <span class="comment">//求以centralName为起点的单源最短路</span></span><br><span class="line"></span><br><span class="line">GraphPath<String, DefaultEdge> path = iPaths.getPath(name); <span class="comment">//获取name节点的最短路</span></span><br><span class="line"><span class="keyword">if</span>(path != <span class="keyword">null</span>) {</span><br><span class="line"> <span class="keyword">int</span> track = path.getLength(); <span class="comment">//得到最短路距离</span></span><br><span 
class="line">}</span><br></pre></td></tr></table></figure><p>此时对Lab3的Larger文件进行读取和建图操作时间已经压缩到了3s左右。</p><blockquote><p>这里还有一个有趣的插曲:在写完了Lab3不久,在一节形式语言与自动机课上,老师打趣地布置了一个画有2k个点的DFA的任务,并开玩笑说画出这个图要花相当长的时间,能画出这个图的学生期末加5分。JGraphT支持伪图,非常适合用来表示自动机,而且自带GraphViz导出API。结果在下午老师布置那个任务之后的3个小时,我就画出了那张图。</p><p>这个项目放在<a href="https://github.com/mhlwsk/DFA" rel="external nofollow noopener noreferrer" target="_blank">https://github.com/mhlwsk/DFA</a></p><p><del>果然软件构造助力数学课程的学习</del>。</p></blockquote><p>Lab5中的数据量急剧扩大,虽然Dijkstra的速度非常快(时间复杂度为O(E·log V),单纯建图只需要数秒),但仍有优化的余地。考虑到<strong>关系边权相等</strong>这一条件,我把Lab3中采用的Dijkstra换成了时间复杂度为O(V+E)的BFS,JGraphT中的BFS采用<strong>迭代器模式</strong>实现,它的调用也非常简洁:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">BreadthFirstIterator<String, DefaultEdge> bfsIterator = </span><br><span class="line"> <span class="keyword">new</span> BreadthFirstIterator<>(relation, centralName); <span class="comment">// 定义BFS序迭代器,以centralName为遍历起点</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">while</span> (bfsIterator.hasNext()) {</span><br><span class="line"> String name = bfsIterator.next();</span><br><span class="line"><span class="comment">// ...</span></span><br><span class="line"> <span class="keyword">int</span> distance = bfsIterator.getDepth(name); <span class="comment">// 获取BFS深度</span></span><br><span class="line"><span class="comment">// ...</span></span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>此时Lab5中的<code>SocialNetworkCircle.txt</code>文件建图时间已经压缩到了1.5s左右。</p><blockquote><p>这里有一个插曲:搜索JGraphT的API时,我发现一个<a 
href="https://jgrapht.org/javadoc-SNAPSHOT/org/jgrapht/alg/shortestpath/BFSShortestPath.html" rel="external nofollow noopener noreferrer" target="_blank">BFSShortestPath<V,E></a>类,它的API与之前用到的<code>DijkstraShortestPath<V,E></code>一致。一开始我尝试直接把后者改成前者,但编译器提示<code>BFSShortestPath<V,E></code>不存在。在确认了不是包导入的问题后,我查看了GitHub上BFSShortestPath的<a href="https://github.com/jgrapht/jgrapht/blob/master/jgrapht-core/src/main/java/org/jgrapht/alg/shortestpath/BFSShortestPath.java" rel="external nofollow noopener noreferrer" target="_blank">源码</a>,发现这个文件是<code>13 Feb</code>创建的。而截至此文写作时(<code>1 June</code>)JGraphT最新的<code>release 1.3.0</code>是在<code>13 Nov 2018</code>发布的。也即JGraphT的doc比release都要新。显然这是Javadoc基于最新的源码自动生成的,这里不得不感叹一下Javadoc的强大。</p><h2 id="结语"><a href="#结语" class="headerlink" title="结语"></a>结语</h2><p><strong>不要重新发明轮子</strong>,当面对一个具体问题时,优先考虑是否已经有较好的实现,使用它们可能有助于减轻代码实现与调试导致的焦虑、脱发与偏头痛,并且能够获得较好的可靠性与效率。</p><p>不过,使用JGraphT似乎与“避免过早优化”的原则相抵触:</p><blockquote><p><strong>Premature optimization is the root of all evil</strong> – Donald Knuth</p></blockquote><p>我的理解是,如果优化使得软件的其它各项指标(如可变性)急剧下降,那么滞后优化是必要的;而采用JGraphT作为一项优化使得应用的changeability反而有所增加,此时就不必拘泥教条。毕竟,软件开发的过程也是软件的各项指标相互折衷的过程。</p>]]></content>
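上文利用关系边权相等这一条件,用 BFS 代替 Dijkstra 求最短距离。作为补充,下面给出一个不依赖 JGraphT 的纯 Java BFS 距离计算草图(假设性示例,类名 BfsDistance 与 Map 邻接表表示均为演示而设):BFS 按层扩展,首次访问某点时的层数即它到源点的最短距离,这也正是正文中 getDepth 返回的含义。

```java
import java.util.*;

// 纯Java的BFS最短距离草图(演示用,不依赖JGraphT)
class BfsDistance {
    // adj: 邻接表表示的无权图;source: 源点
    // 返回每个可达节点到source的最短距离;不可达的节点不出现在结果中
    static Map<String, Integer> distances(Map<String, List<String>> adj, String source) {
        Map<String, Integer> dist = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        dist.put(source, 0);
        queue.add(source);
        while (!queue.isEmpty()) {
            String u = queue.poll();
            for (String v : adj.getOrDefault(u, List.of())) {
                if (!dist.containsKey(v)) {      // BFS按层扩展,首次访问即最短
                    dist.put(v, dist.get(u) + 1);
                    queue.add(v);
                }
            }
        }
        return dist;
    }
}
```

在边权相等的图上,这一过程对每个点和每条边各处理一次,因此整体时间复杂度为O(V+E)。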
<summary type="html">
<p>本文使用JGraphT存储Social Network中的人际关系图结构,使用JGraphT中的预设算法压缩建图时间。</p>
<p>注:这里的建图时间包含最短路求解与轨道插入,下文重点优化最短路求解时间。</p>
<h2 id="背景"><a href="#背景" class="headerlink" title="背景"></a>背景</h2><p>之前写完软件构造Lab2时,rainywang有建议过将Lab2的Graph ADT进行封装以便之后的使用。</p>
<p>在写Lab3时的Social Network时,为了保存和处理关系图,一开始我直接迁移了Lab2的代码,将Lab2的ADT搬到Lab3并添加了数个API。但看到标签为Larger的几个文件,心里还是对Lab2中<strong>naïve</strong>的ADT实现感到担忧。仅仅一个返回邻接顶点的操作时间复杂度都要O(n),明显它是不能够胜任Lab3或者之后的Lab5的性能要求的。</p>
<p>rainywang在Lab2课程结束时,不仅建议过封装Lab2的ADT,也提出了这样一个疑问:网上是否已经有相关图算法ADT了?这个既是疑问又带有明显暗示的说法让人感到不安,显然图算法这个轮子已经被人重造过很多次了,我当然有理由相信有人已经为之写出了不错的java库。</p>
<p>抱着找找看的心理,在Google上尝试搜索了下”java graph library”,在返回的第一个结果发现了<a href="https://jgrapht.org/" rel="external nofollow noopener noreferrer" target="_blank">JGraphT</a>,它的开源许可证为 <code>Eclipse Public License - v 2.0</code>。<br>
</summary>
<category term="Java" scheme="https://www.mhlwsk.com/tags/Java/"/>
<category term="软件构造" scheme="https://www.mhlwsk.com/tags/%E8%BD%AF%E4%BB%B6%E6%9E%84%E9%80%A0/"/>
</entry>
<entry>
<title>[2019春软件构造]面向可复用的软件构造</title>
<link href="https://www.mhlwsk.com/archives/4111.html"/>
<id>https://www.mhlwsk.com/archives/4111.html</id>
<published>2019-05-26T13:51:33.000Z</published>
<updated>2020-09-18T09:10:23.468Z</updated>
<content type="html"><![CDATA[<h2 id="设计可复用的类"><a href="#设计可复用的类" class="headerlink" title="设计可复用的类"></a>设计可复用的类</h2><h3 id="Liskov替换原则-LSP"><a href="#Liskov替换原则-LSP" class="headerlink" title="Liskov替换原则(LSP)"></a>Liskov替换原则(LSP)</h3><blockquote><p>Let q(x) be a property provable about objects x of type T, then q(y) should be provable for objects y of type S where S is a subtype of T.</p><p>– Barbara Liskov</p></blockquote><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">Animal a = <span class="keyword">new</span> Animal();</span><br><span class="line">Animal c1 = <span class="keyword">new</span> Cat();</span><br><span class="line">Cat c2 = <span class="keyword">new</span> Cat();</span><br></pre></td></tr></table></figure><p>在可以使用a的场景,都可以用c1和c2代替而不会有任何问题。<br><a id="more"></a></p><p>Liskov替换原则是衍生子类需要遵循的重要原则,它使得客户端可用统一的方式处理不同类型的对象,LSP要求子类相对于父类具有:</p><ul><li>更强的不变量</li><li>更弱的前置条件</li><li>更强的后置条件</li></ul><p>注:更强的后置条件要求子类型中不能产生新的异常。</p><p>父类与子类的衔接类似于一个漏斗。</p><h2 id="委派-Delegation-与组合-Composition"><a href="#委派-Delegation-与组合-Composition" class="headerlink" title="委派(Delegation)与组合(Composition)"></a>委派(Delegation)与组合(Composition)</h2><h3 id="A-simple-Delegation-example"><a href="#A-simple-Delegation-example" class="headerlink" title="A simple Delegation example"></a>A simple Delegation example</h3><p>B类为对A类的一个委派,在B类中的私有字段a绑定了一个A类,B类的foo操作被委派给绑定的a来做:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span 
class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">A</span> </span>{</span><br><span class="line"> <span class="function"><span class="keyword">void</span> <span class="title">foo</span><span class="params">()</span> </span>{</span><br><span class="line"> <span class="keyword">this</span>.bar();</span><br><span class="line"> }</span><br><span class="line"> <span class="function"><span class="keyword">void</span> <span class="title">bar</span><span class="params">()</span> </span>{</span><br><span class="line"> print(<span class="string">"a.bar"</span>);</span><br><span class="line"> }</span><br><span class="line">}</span><br><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">B</span> </span>{</span><br><span class="line"> <span class="keyword">private</span> A a; <span class="comment">// delegation link</span></span><br><span class="line"> <span class="function"><span class="keyword">public</span> <span class="title">B</span><span class="params">(A a)</span> </span>{</span><br><span class="line"> <span class="keyword">this</span>.a = a;</span><br><span class="line"> }</span><br><span class="line"> <span class="function"><span class="keyword">void</span> <span class="title">foo</span><span class="params">()</span> </span>{</span><br><span class="line"> a.foo(); <span class="comment">// call foo() on the a-instance</span></span><br><span class="line"> }</span><br><span class="line"> <span class="function"><span class="keyword">void</span> <span class="title">bar</span><span class="params">()</span> </span>{</span><br><span class="line"> print(<span class="string">"b.bar"</span>);</span><br><span 
class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>建立绑定关系:</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">A a = <span class="keyword">new</span> A();</span><br><span class="line">B b = <span class="keyword">new</span> B(a); <span class="comment">// establish delegation between two objects</span></span><br></pre></td></tr></table></figure><h3 id="Delegation-vs-Inheritance"><a href="#Delegation-vs-Inheritance" class="headerlink" title="Delegation vs Inheritance"></a>Delegation vs Inheritance</h3><p>From <a href="https://www.geeksforgeeks.org/delegation-vs-inheritance-java/" rel="external nofollow noopener noreferrer" target="_blank">GeeksforGeeks</a></p><blockquote><p><strong>Delegation</strong> is simply passing a duty off to someone/something else.</p><ul><li>Delegation can be an alternative to inheritance.</li><li>Delegation means that you use an object of another class as an instance variable, and forward messages to the instance.</li><li>It is better than inheritance for many cases because it makes you to think about each message you forward, because the instance is of a known class, rather than a new class, and because it doesn’t force you to accept all the methods of the super class: you can provide only the methods that really make sense.</li><li>Delegation can be viewed as a relationship between objects where one object forwards certain method calls to another object, called its delegate.</li><li>The primary advantage of delegation is run-time flexibility – the delegate can easily be changed at run-time. 
But unlike inheritance, delegation is not directly supported by most popular object-oriented languages, and it doesn’t facilitate <a href="https://www.geeksforgeeks.org/dynamic-method-dispatch-runtime-polymorphism-java/" rel="external nofollow noopener noreferrer" target="_blank">dynamic polymorphism</a>.</li></ul></blockquote><p>Delegation可以作为继承的一个替代品,它最主要的优点是<strong>灵活</strong>。相比于继承,它不需要继承另一个类的所有方法,而是可以通过委派机制调用部分方法;同时它支持类的<strong>动态绑定</strong>。</p><p><strong>Delegation发生在object层面,Inheritance发生在class层面。</strong></p><h2 id="几类常见Delegation"><a href="#几类常见Delegation" class="headerlink" title="几类常见Delegation"></a>几类常见Delegation</h2><h3 id="Dependency-临时性的delegation"><a href="#Dependency-临时性的delegation" class="headerlink" title="Dependency: 临时性的delegation"></a>Dependency: 临时性的delegation</h3><p>Dependency在调用函数时传入delegation类,并将操作委派给传入的类处理,是一种临时的delegation关系。</p><p>类与被委派的类之间的关系为<strong>uses-a</strong>。</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">Flyable f = <span class="keyword">new</span> FlyWithWings();</span><br><span class="line">Duck d = <span class="keyword">new</span> Duck();</span><br><span class="line">d.fly(f);</span><br></pre></td></tr></table></figure><p>Duck类的定义如下:<br><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">Duck</span> </span>{</span><br><span class="line"> <span class="function"><span class="keyword">void</span> <span class="title">fly</span><span class="params">(Flyable f)</span> </span>{</span><br><span class="line"> f.fly();</span><br><span class="line"> }</span><br><span 
class="line">}</span><br></pre></td></tr></table></figure></p><h3 id="Association-永久性的delegation"><a href="#Association-永久性的delegation" class="headerlink" title="Association: 永久性的delegation"></a>Association: 永久性的delegation</h3><p>Association是一种永久的delegation关系,delegation关系通过将被委派的类存入字段的方式来实现。</p><p>类与被委派的类之间的关系为<strong>has-a</strong>。</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">Flyable f = <span class="keyword">new</span> FlyWithWings();</span><br><span class="line">Duck d = <span class="keyword">new</span> Duck(f);</span><br><span class="line">d.fly();</span><br><span class="line">d.setFlyBehavior(<span class="keyword">new</span> CannotFly());</span><br><span class="line">d.fly();</span><br></pre></td></tr></table></figure><p>Duck类的定义如下:<br><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">Duck</span> </span>{</span><br><span class="line"> Flyable f;</span><br><span class="line"> <span class="function"><span class="keyword">void</span> <span class="title">Duck</span><span class="params">(Flyable f)</span> </span>{</span><br><span class="line"> <span class="keyword">this</span>.f = f;</span><br><span class="line"> }</span><br><span class="line"> <span class="function"><span class="keyword">void</span> <span 
class="title">setFlyBehavior</span><span class="params">(Flyable f)</span> </span>{</span><br><span class="line"> <span class="keyword">this</span>.f = f;</span><br><span class="line"> }</span><br><span class="line"> <span class="function"><span class="keyword">void</span> <span class="title">fly</span><span class="params">()</span> </span>{</span><br><span class="line"> f.fly();</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><h3 id="Composition-更强的association"><a href="#Composition-更强的association" class="headerlink" title="Composition: 更强的association"></a>Composition: 更强的association</h3><p>Composition与Association类似,均将delegation关系存入字段。但不同之处在于Composition将委派关系的绑定写入了代码,在运行时委派关系不可变。</p><p>被委派的类与类之间的关系为<strong>is_part_of</strong></p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">Duck d = <span class="keyword">new</span> Duck();</span><br><span class="line">d.fly();</span><br></pre></td></tr></table></figure><p>Duck类的定义如下:<br><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">Duck</span> </span>{</span><br><span class="line"> Flyable f = <span class="keyword">new</span> FlyWithWings();</span><br><span class="line"> <span class="function"><span class="keyword">void</span> <span class="title">fly</span><span class="params">()</span> </span>{</span><br><span class="line"> f.fly();</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><h3 id="Aggregation-更弱的-association"><a 
href="#Aggregation-更弱的-association" class="headerlink" title="Aggregation: 更弱的 association"></a>Aggregation: 更弱的 association</h3><p>Aggregation与Composition类似,均将delegation关系存入字段。与Composition的不同之处在于委派关系在运行时可变。</p><p>Aggregation与Composition还具有以下不同:Aggregation关系中的两个类不具有相互依存关系,而Composition中的两个类具有依附关系,即如果A has-a B,那么B不能脱离A存在。</p>]]></content>
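上面几节的委派关系可以浓缩成一个可独立编译的最小草图(假设性示例,沿用正文的 Flyable/Duck 命名;为便于观察结果,这里让 fly() 返回字符串而不是打印):委派对象存入字段,并可在运行时通过 setter 替换,这正是正文所说 delegation 相对于继承的运行时灵活性。

```java
// Association式委派的最小草图:Duck把fly动作转发给字段中的Flyable对象
interface Flyable {
    String fly();
}

class FlyWithWings implements Flyable {
    public String fly() { return "flap"; }    // 会飞的行为
}

class CannotFly implements Flyable {
    public String fly() { return "stay"; }    // 不会飞的行为
}

class Duck {
    private Flyable f;                        // delegation link,存入字段(has-a)
    Duck(Flyable f) { this.f = f; }
    void setFlyBehavior(Flyable f) { this.f = f; }  // 运行时可替换被委派对象
    String fly() { return f.fly(); }          // 将调用转发给被委派对象
}
```

相比继承,这种写法无须接受父类的全部方法,且行为可以在运行时切换:同一只 Duck 先用 FlyWithWings、再换成 CannotFly,客户端代码不变。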
<summary type="html">
<h2 id="设计可复用的类"><a href="#设计可复用的类" class="headerlink" title="设计可复用的类"></a>设计可复用的类</h2><h3 id="Liskov替换原则-LSP"><a href="#Liskov替换原则-LSP" class="headerlink" title="Liskov替换原则(LSP)"></a>Liskov替换原则(LSP)</h3><blockquote>
<p>Let q(x) be a property provable about objects x of type T, then q(y) should be provable for objects y of type S where S is a subtype of T.</p>
<p>– Barbara Liskov</p>
</blockquote>
<figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br></pre></td><td class="code"><pre><span class="line">Animal a = <span class="keyword">new</span> Animal();</span><br><span class="line">Animal c1 = <span class="keyword">new</span> Cat();</span><br><span class="line">Cat c2 = <span class="keyword">new</span> Cat();</span><br></pre></td></tr></table></figure>
<p>在可以使用a的场景,都可以用c1和c2代替而不会有任何问题。<br>
</summary>
<category term="Java" scheme="https://www.mhlwsk.com/tags/Java/"/>
<category term="软件构造" scheme="https://www.mhlwsk.com/tags/%E8%BD%AF%E4%BB%B6%E6%9E%84%E9%80%A0/"/>
</entry>
<entry>
<title>[2018 HIT CSAPP大作业] 程序人生 Hello’s P2P</title>
<link href="https://www.mhlwsk.com/archives/44834.html"/>
<id>https://www.mhlwsk.com/archives/44834.html</id>
<published>2018-12-29T18:19:42.000Z</published>
<updated>2020-07-30T13:07:23.779Z</updated>
<content type="html"><![CDATA[<h1 id="摘-要"><a href="#摘-要" class="headerlink" title="摘 要"></a>摘 要</h1><p>计算机系统是由硬件和软件组成的,它们共同工作来运行应用程序。即使是最简单的一个应用程序,也需要计算机系统中的每个主要组成部分协调工作。本文以hello程序的运行为切入点,解释了当在系统上运行hello程序时,系统发生的编译、链接、加载、进程管理、存储管理等过程,以及它们的运行机制。<br><strong>关键词</strong>:操作系统;计算机组成原理;汇编<br><a id="more"></a></p><h1 id="第1章-概述"><a href="#第1章-概述" class="headerlink" title="第1章 概述"></a>第1章 概述</h1><h2 id="1-1-Hello简介"><a href="#1-1-Hello简介" class="headerlink" title="1.1 Hello简介"></a>1.1 Hello简介</h2><p><strong>Hello的P2P(Program to process)和020(From Zero to Zero)过程</strong><br>用户在文本编辑器中编写代码得到hello.c。hello.c经过预处理(cpp)变成hello.i(修改了的源程序),经过编译(ccl)生成hello.s(汇编程序),经过汇编生成hello.o(可重定位目标程序),经过链接(ld)生成hello(可执行目标程序)。<br>用户键入命令,bash自行fork一个process,并在这个process中调用execve执行hello。execve加载hello,并调用_start函数,不久控制权被转移到hello的main函数。<br>hello调用write等系统函数在屏幕打印信息,随后退出,接下来终止的hello进程被父进程bash回收。<br>实验中所用的hello.c代码如下:<br><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">// 大作业的 hello.c 程序</span></span><br><span class="line"><span 
class="comment">// gcc -m64 -no-pie -fno-PIC hello.c -o hello</span></span><br><span class="line"><span class="comment">// 程序运行过程中可以按键盘,如不停乱按,包括回车,Ctrl-Z,Ctrl-C等。</span></span><br><span class="line"><span class="comment">// 可以 运行 ps jobs pstree fg 等命令</span></span><br><span class="line"></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><stdio.h></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><unistd.h> </span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><stdlib.h></span></span></span><br><span class="line"></span><br><span class="line"><span class="keyword">int</span> sleepsecs=<span class="number">2.5</span>;</span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">main</span><span class="params">(<span class="keyword">int</span> argc,<span class="keyword">char</span> *argv[])</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"><span class="keyword">int</span> i;</span><br><span class="line"></span><br><span class="line"><span class="keyword">if</span>(argc!=<span class="number">3</span>)</span><br><span class="line">{</span><br><span class="line"><span class="built_in">printf</span>(<span class="string">"Usage: Hello 学号 姓名!\n"</span>);</span><br><span class="line"><span class="built_in">exit</span>(<span class="number">1</span>);</span><br><span class="line">}</span><br><span class="line"><span class="keyword">for</span>(i=<span class="number">0</span>;i<<span class="number">10</span>;i++)</span><br><span class="line">{</span><br><span class="line"><span class="built_in">printf</span>(<span class="string">"Hello %s %s\n"</span>,argv[<span class="number">1</span>],argv[<span 
class="number">2</span>]);</span><br><span class="line">sleep(sleepsecs);</span><br><span class="line">}</span><br><span class="line">getchar();</span><br><span class="line"><span class="keyword">return</span> <span class="number">0</span>;</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><h2 id="1-2-环境与工具"><a href="#1-2-环境与工具" class="headerlink" title="1.2 环境与工具"></a>1.2 环境与工具</h2><p>硬件环境:Intel(R) Core(TM) i5-3320M CPU;8.00GB RAM<br>软件环境:Windows 10 64位;Vmware Workstation 14 Pro;Ubuntu 16.04 LTS 64位<br>开发工具:CodeBlocks 64位;Visual Studio Code;GCC 5.4.0;objdump;EDB;readelf;hexedit</p><h2 id="1-3-中间结果"><a href="#1-3-中间结果" class="headerlink" title="1.3 中间结果"></a>1.3 中间结果</h2><p>为编写本论文,生成的中间结果文件的名字以及文件的作用。</p><table><thead><tr><th>文件名称</th><th>文件作用</th></tr></thead><tbody><tr><td>hello.i</td><td>hello预处理之后的文本文件</td></tr><tr><td>hello.s</td><td>hello编译之后的汇编文件</td></tr><tr><td>hello.o</td><td>hello汇编之后的可重定位目标文件</td></tr><tr><td>hello</td><td>hello链接之后的可执行目标文件</td></tr><tr><td>hello_o.objdump</td><td>hello.o的反汇编代码</td></tr><tr><td>hello_o.elf</td><td>hello.o的ELF文件信息</td></tr><tr><td>hello.objdump</td><td>hello的反汇编代码</td></tr><tr><td>hello.elf</td><td>hello的ELF文件信息</td></tr><tr><td>test.c</td><td>测试用代码</td></tr></tbody></table><h2 id="1-4-本章小结"><a href="#1-4-本章小结" class="headerlink" title="1.4 本章小结"></a>1.4 本章小结</h2><p>本章简要介绍了hello的P2P,O2O过程,并列出了本次实验的环境和中间结果。</p><h1 id="第2章-预处理"><a href="#第2章-预处理" class="headerlink" title="第2章 预处理"></a>第2章 预处理</h1><h2 id="2-1-预处理的概念与作用"><a href="#2-1-预处理的概念与作用" class="headerlink" title="2.1 预处理的概念与作用"></a>2.1 预处理的概念与作用</h2><p>预处理器(cpp)根据以字符#开头的命令,修改原始的C程序。比如hello.c中第1行的#include <stdio.h>命令告诉预处理器读取系统头文件stdio.h的内容,并把它直接插入程序文本中。结果就得到了另一个C程序,通常是以.i作为文件拓展名。<br>这个过程为接下来的编译过程“简化”了代码。</p><h2 id="2-2在Ubuntu下预处理的命令"><a href="#2-2在Ubuntu下预处理的命令" class="headerlink" title="2.2在Ubuntu下预处理的命令"></a>2.2在Ubuntu下预处理的命令</h2><p><strong>预处理命令:cpp hello.c > hello.i</strong><br><img src="/images/2018-HIT-CSAPP-hello/t1.png" 
alt="pic1"></p><h2 id="2-3-Hello的预处理结果解析"><a href="#2-3-Hello的预处理结果解析" class="headerlink" title="2.3 Hello的预处理结果解析"></a>2.3 Hello的预处理结果解析</h2><p><img src="/images/2018-HIT-CSAPP-hello/t2.png" alt="pic2"><br>经过预处理的hello代码被展开,在main之前插入了大量代码。这些代码是根据#include从stdio.h、unistd.h、stdlib.h中提取的,其中包含了printf的声明等。<br><img src="/images/2018-HIT-CSAPP-hello/t3.png" alt="pic3"></p><h2 id="2-4-本章小结"><a href="#2-4-本章小结" class="headerlink" title="2.4 本章小结"></a>2.4 本章小结</h2><p>hello.c在编译之前需要经过预处理步骤,该步骤会根据hello.c中以#开头的命令展开相应代码并修改原始的C程序,以便下一步编译。</p><h1 id="第3章-编译"><a href="#第3章-编译" class="headerlink" title="第3章 编译"></a>第3章 编译</h1><h2 id="3-1-编译的概念与作用"><a href="#3-1-编译的概念与作用" class="headerlink" title="3.1 编译的概念与作用"></a>3.1 编译的概念与作用</h2><p>编译器(cc1)将文本文件hello.i翻译成文本文件hello.s,它包含一个汇编语言程序。</p><h2 id="3-2-在Ubuntu下编译的命令"><a href="#3-2-在Ubuntu下编译的命令" class="headerlink" title="3.2 在Ubuntu下编译的命令"></a>3.2 在Ubuntu下编译的命令</h2><p><strong>编译命令:gcc -S hello.i -o hello.s</strong><br><img src="/images/2018-HIT-CSAPP-hello/t4.png" alt="pic4"></p><h2 id="3-3-Hello的编译结果解析"><a href="#3-3-Hello的编译结果解析" class="headerlink" title="3.3 Hello的编译结果解析"></a>3.3 Hello的编译结果解析</h2><h3 id="3-3-1-hello-s中出现的标识"><a href="#3-3-1-hello-s中出现的标识" class="headerlink" title="3.3.1 hello.s中出现的标识"></a>3.3.1 hello.s中出现的标识</h3><ul><li>.file 源文件</li><li>.data 数据段</li><li>.globl 全局标识符</li><li>.string 字符串类型</li><li>.long 32位整数数据</li><li>.text 代码段</li></ul><h3 id="3-3-2-数据类型"><a href="#3-3-2-数据类型" class="headerlink" title="3.3.2 数据类型"></a>3.3.2 数据类型</h3><p>hello.c中出现的数据类型有整数类型(int)、字符串、字符指针数组。</p><h4 id="整数类型"><a href="#整数类型" class="headerlink" title="整数类型"></a>整数类型</h4><p>hello.c中的整数类型有全局变量int sleepsecs,main的参数int argc,局部变量int i。<br>全局变量int sleepsecs的定义如下,可以看到第六行为其分配大小4字节,第八行为其赋初值2。<br><img src="/images/2018-HIT-CSAPP-hello/t5.png" alt="pic5"><br>对sleepsecs的调用采用了PC相对寻址:<br><img src="/images/2018-HIT-CSAPP-hello/t6.png" alt="pic6"><br>参数int argc,局部变量i出现在main的栈帧中,它们没有标识符,也不需要被声明,而是直接使用。<br><img src="/images/2018-HIT-CSAPP-hello/t7.png" 
alt="pic7"><br><img src="/images/2018-HIT-CSAPP-hello/t8.png" alt="pic8"></p><h4 id="字符串和字符指针数组"><a href="#字符串和字符指针数组" class="headerlink" title="字符串和字符指针数组"></a>字符串和字符指针数组</h4><p>两个printf语句中的格式字符串出现在.rodata段。<br><img src="/images/2018-HIT-CSAPP-hello/t9.png" alt="pic9"><br>作为main参数的<code>char *argv[]</code>则出现在栈帧中。<br><img src="/images/2018-HIT-CSAPP-hello/t10.png" alt="pic10"></p><h3 id="3-3-3运算与操作"><a href="#3-3-3运算与操作" class="headerlink" title="3.3.3运算与操作"></a>3.3.3运算与操作</h3><h4 id="赋值操作"><a href="#赋值操作" class="headerlink" title="赋值操作"></a>赋值操作</h4><p>源程序21行对i赋值为零的操作使用mov语句实现的。<br><img src="/images/2018-HIT-CSAPP-hello/t11.png" alt="pic11"></p><h4 id="比较操作"><a href="#比较操作" class="headerlink" title="比较操作"></a>比较操作</h4><p>hello.c中的两个比较操作被解析为cmpl操作。<br><strong>argc!=3</strong><br><img src="/images/2018-HIT-CSAPP-hello/t12.png" alt="pic12"><br><strong>i<10</strong><br><img src="/images/2018-HIT-CSAPP-hello/t13.png" alt="pic13"></p><h4 id="算术运算"><a href="#算术运算" class="headerlink" title="算术运算"></a>算术运算</h4><p>for循环中的i++采用addl来实现。<br><img src="/images/2018-HIT-CSAPP-hello/t14.png" alt="pic14"></p><h4 id="数组操作"><a href="#数组操作" class="headerlink" title="数组操作"></a>数组操作</h4><p>argv[1]:首先从-32(%rbp)读取argv地址存入rax,接下来rax增加8个字节,此时rax中存放的是&(argv[1]),读取此地址指向的argv[1]放入rax,最后存入rsi。<br><img src="/images/2018-HIT-CSAPP-hello/t15.png" alt="pic15"><br>argv[2]:首先从-32(%rbp)读取argv地址存入rax,接下来rax增加16个字节,此时rax中存放的是&(argv[2]),读取此地址指向的argv[2]放入rdx。<br><img src="/images/2018-HIT-CSAPP-hello/t16.png" alt="pic16"></p><h3 id="3-3-4控制转移"><a href="#3-3-4控制转移" class="headerlink" title="3.3.4控制转移"></a>3.3.4控制转移</h3><h4 id="if语句"><a href="#if语句" class="headerlink" title="if语句"></a>if语句</h4><p><strong>if(argc!=3) {}</strong><br>比较argc与3的大小,然后通过条件跳转je,实现若argc==3,则跳过if语句的代码块。<br><img src="/images/2018-HIT-CSAPP-hello/t17.png" alt="pic17"><br><img src="/images/2018-HIT-CSAPP-hello/t18.png" alt="pic18"></p><h4 id="for循环"><a href="#for循环" class="headerlink" title="for循环"></a>for循环</h4><p>.L2初始化<br><img 
src="/images/2018-HIT-CSAPP-hello/t19.png" alt="pic19"><br>.L3判断循环条件</p><p><img src="/images/2018-HIT-CSAPP-hello/t20.png" alt="pic20"><br>.L4循环块(51行为迭代i)<br><img src="/images/2018-HIT-CSAPP-hello/t21.png" alt="pic21"><br>首先给i赋值为0,然后跳转到.L3以比较i是否小于等于9,如果小于等于9,则跳转到循环块.L4,否则继续执行循环外的语句getchar。而循环块执行到末尾会继续执行循环判断条件.L3,重复以上步骤直至循环结束。</p><h3 id="3-3-5-函数调用"><a href="#3-3-5-函数调用" class="headerlink" title="3.3.5 函数调用"></a>3.3.5 函数调用</h3><p>对printf的调用,参数被存放在寄存器传递。以printf(“Hello %s %s\n”,argv[1],argv[2]);为例,格式化字符串被存放在edi传递,argv[1]被放在rsi,argv[2]被放在rdx。使用call来调用printf,而printf的返回值则会被存入eax返回。<br><img src="/images/2018-HIT-CSAPP-hello/t22.png" alt="pic22"><br>对exit函数的调用,参数被存放在edi传递,然后使用call调用exit。<br><img src="/images/2018-HIT-CSAPP-hello/t23.png" alt="pic23"><br>对sleep的调用,参数被存放在edi传递,然后使用call调用sleep。<br><img src="/images/2018-HIT-CSAPP-hello/t24.png" alt="pic24"><br>对getchar的调用直接使用了call。<br>main函数的返回值放在eax传递。<br><img src="/images/2018-HIT-CSAPP-hello/t25.png" alt="pic25"></p><h2 id="3-4-本章小结"><a href="#3-4-本章小结" class="headerlink" title="3.4 本章小结"></a>3.4 本章小结</h2><p>本章主要阐述了汇编操作是怎样处理源程序中的数据、各种操作、控制转移、函数调用的。<br>编译器将C语言代码转换成汇编代码,并最终转换生成机器码。这个转换过程中需要对原始代码中的数据和操作进行映射得到相应汇编代码下的解决方案,而由于全局变量引用等因素的影响,这个过程不是简单的一一映射关系。</p><h1 id="第4章-汇编"><a href="#第4章-汇编" class="headerlink" title="第4章 汇编"></a>第4章 汇编</h1><h2 id="4-1-汇编的概念与作用"><a href="#4-1-汇编的概念与作用" class="headerlink" title="4.1 汇编的概念与作用"></a>4.1 汇编的概念与作用</h2><p>汇编器(as)将hello.s翻译成机器指令,把这些指令打包成可重定位目标程序(relocatable object program)的格式,并将结果保存在目标文件hello.o中。hello.o文件是一个二进制文件,它包含的17个字节是函数main的指令编码。</p><h2 id="4-2-在Ubuntu下汇编的命令"><a href="#4-2-在Ubuntu下汇编的命令" class="headerlink" title="4.2 在Ubuntu下汇编的命令"></a>4.2 在Ubuntu下汇编的命令</h2><p><strong>汇编命令as hello.s -o hello.o</strong></p><p><img src="/images/2018-HIT-CSAPP-hello/t26.png" alt="pic26"></p><h2 id="4-3-可重定位目标elf格式"><a href="#4-3-可重定位目标elf格式" class="headerlink" title="4.3 可重定位目标elf格式"></a>4.3 
可重定位目标elf格式</h2><p>分析hello.o的ELF格式,用readelf等列出其各节的基本信息,特别是重定位项目分析。</p><table><thead><tr><th>名称</th><th>作用</th></tr></thead><tbody><tr><td>ELF头</td><td>描述了生成该文件的系统的大小和字节顺序以及帮助链接器语法分析和解释目标文件的信息</td></tr><tr><td>.text</td><td>已编译的程序的机器代码</td></tr><tr><td>.rodata</td><td>只读数据</td></tr><tr><td>.data</td><td>已初始化的全局和静态C变量</td></tr><tr><td>.bss</td><td>未初始化的全局和静态C变量</td></tr><tr><td>.symtab</td><td>一个符号表,存放在程序中定义和引用的函数和全局变量的信息</td></tr><tr><td>.rel.text</td><td>.text节的重定位记录表</td></tr><tr><td>.rel.data</td><td>被模块引用或定义的所有全局变量的重定位信息</td></tr><tr><td>.debug</td><td>一个调试符号表</td></tr><tr><td>.line</td><td>原始C源程序的行号和.text节中机器指令之间的映射</td></tr><tr><td>.strtab</td><td>一个字符串表</td></tr><tr><td>节头部表</td><td>每个节的偏移量大小</td></tr></tbody></table><p>ELF头(ELF header)以一个16字节的序列开始,这个序列描述了生成该文件的系统的大小和字节顺序。ELF头剩下的部分包含帮助链接器语法分析和解释目标文件的信息。其中包括ELF头的大小、目标文件的类型(如可重定位、可执行或者共享的)、机器类型(如x86-64)、节头部表(section header table)的文件偏移,以及节头部表中条目的大小和数量。<br><img src="/images/2018-HIT-CSAPP-hello/t27.png" alt="pic27"><br>不同节的位置和大小是由节头部表描述的,其中目标文件中每个节都有一个固定大小的条目。<br><img src="/images/2018-HIT-CSAPP-hello/t28.png" alt="pic28"><br>.rela.text 一个.text节中位置的列表,当链接器把这个目标文件和其它文件组合时,需要修改这些位置。一般而言,任何调用外部函数或者引用全局变量的指令都需要修改。另一方面,调用本地函数的指令则不需要修改。<br>如下图中有八个重定位记录。重定位记录的结构如下所示:<br><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">typedef</span> <span class="class"><span class="keyword">struct</span> {</span></span><br><span class="line"><span class="keyword">int</span> offset; <span class="comment">/* Offset of the reference to relocate */</span> </span><br><span class="line"><span class="keyword">int</span> symbol:<span class="number">24</span>, <span class="comment">/* Symbol of the reference should point to */</span></span><br><span class="line">type:<span class="number">8</span>; <span 
class="comment">/* Relocation type */</span></span><br><span class="line">} Elf32_Rel;</span><br></pre></td></tr></table></figure></p><p>以下图对sleepsecs的重定位记录为例。它的offset为0x5c,即需要修改的位置是.text段偏移量0x5c处;symbol为0x9,对应.symtab中第9号索引(sleepsecs);type是0x2,即类型为重定位PC相对引用。<br><img src="/images/2018-HIT-CSAPP-hello/t29.png" alt="pic29"><br><img src="/images/2018-HIT-CSAPP-hello/t30.png" alt="pic30"><br>.symtab 一个符号表,它存放在程序中定义和引用的函数和全局变量的信息。和编译器中的符号表不同,.symtab符号表不包含局部变量的条目。</p><h2 id="4-4-Hello-o的结果解析"><a href="#4-4-Hello-o的结果解析" class="headerlink" title="4.4 Hello.o的结果解析"></a>4.4 Hello.o的结果解析</h2><p><img src="/images/2018-HIT-CSAPP-hello/t31.png" alt="pic31"><br>hello.o的反汇编与hello.s的差别总体不大,主要体现在以下几方面:</p><ol><li>全局变量引用 hello.o反汇编采用的是offset(%rip)的形式,而hello.s采用的是symbol(%rip)的形式。</li><li>函数调用 hello.o反汇编采用的是call offset的形式,而hello.s采用call symbol的形式。</li><li>分支转移 hello.o反汇编采用的是jmp offset的形式,而hello.s采用jmp Label的形式。</li><li>栈帧大小不同。<br>机器指令由指令指示符、(寄存器指示符)、(常数字)组成。</li></ol><p>机器语言与汇编语言大致具有一一对应的关系。但有些特殊情况,比如:</p><ol><li>转移控制 汇编语言中的jmp指令有直接跳转(在hello.o的反汇编中这个地址为绝对地址)和间接跳转,而转换成机器码后跳转指令会有几种不同的编码,最常用的是PC相对的,还有给出绝对地址的。而汇编器和链接器会选择适当的跳转目的编码。</li><li>一条指令可能有多个汇编码中的别名,例如jle和jng。</li><li>函数调用,在hello.o的反汇编文件中,call的地址是下一条指令的地址,而对应机器码中的操作数为0。这是因为hello.c中调用的函数都是共享库中的函数,在链接后才能确定函数的最终地址。因而在hello.o中只是将call的地址置为下一条指令的地址,而机器码的操作数则为目标位置(这里为下一条指令)相对于下一条指令的偏移,即0。</li></ol><h2 id="4-5-本章小结"><a href="#4-5-本章小结" class="headerlink" title="4.5 本章小结"></a>4.5 本章小结</h2><p>本章阐述了hello从hello.s到hello.o的汇编过程。分析了hello.o的ELF格式,并通过查看比较反汇编代码和汇编代码分析了汇编语言与机器码的关系。<br>汇编过程将汇编语言转换为机器码,生成可重定位目标文件,这个文件根据ELF格式对机器码进行打包,并为接下来的链接过程做好了准备。</p><h1 id="第5章-链接"><a href="#第5章-链接" class="headerlink" title="第5章 链接"></a>第5章 链接</h1><h2 id="5-1-链接的概念与作用"><a href="#5-1-链接的概念与作用" class="headerlink" title="5.1 链接的概念与作用"></a>5.1 链接的概念与作用</h2><p>链接是将各种代码和数据片段收集并组合成一个单一文件的过程,这个文件可以被加载(复制)到内存执行。链接可以执行于编译时,也可以执行于加载时,甚至执行于运行时。<br>链接器使得分离编译成为可能。我们不用将一个大型的应用程序组织为一个巨大的源文件,而是可以把它分解为更小、更好管理的模块,可以独立地修改和编译这些模块。当我们改变这些模块中的其中一个时,只需简单地重新编译它,并重新链接应用,而不必重新编译其它文件。</p><h2 
id="5-2-在Ubuntu下链接的命令"><a href="#5-2-在Ubuntu下链接的命令" class="headerlink" title="5.2 在Ubuntu下链接的命令"></a>5.2 在Ubuntu下链接的命令</h2><p><code>ld -dynamic-linker /lib64/ld-linux-x86-64.so.2 /usr/lib/x86_64-linux-gnu/crt1.o /usr/lib/x86_64-linux-gnu/crti.o /usr/lib/gcc/x86_64-linux-gnu/5/crtbegin.o hello.o -lc /usr/lib/gcc/x86_64-linux-gnu/5/crtend.o /usr/lib/x86_64-linux-gnu/crtn.o -z relro -o hello.out</code><br><img src="/images/2018-HIT-CSAPP-hello/t32.png" alt="pic32"></p><h2 id="5-3-可执行目标文件hello的格式"><a href="#5-3-可执行目标文件hello的格式" class="headerlink" title="5.3 可执行目标文件hello的格式"></a>5.3 可执行目标文件hello的格式</h2><p>Linux的ELF文件格式如下:</p><table><thead><tr><th>名称</th><th>作用</th></tr></thead><tbody><tr><td>ELF头</td><td>描述文件的总体格式,包括程序的入口点</td></tr><tr><td>段头部表</td><td>将连续的文件映射到运行时内存段</td></tr><tr><td>.init</td><td>定义了一个小函数<code>_init</code></td></tr><tr><td>.text</td><td>已编译的程序的机器代码</td></tr><tr><td>.rodata</td><td>只读数据</td></tr><tr><td>.data</td><td>已初始化的全局和静态C变量</td></tr><tr><td>.bss</td><td>未初始化的全局和静态C变量</td></tr><tr><td>.symtab</td><td>一个符号表,存放在程序中定义和引用的函数和全局变量的信息</td></tr><tr><td>.debug</td><td>一个调试符号表</td></tr><tr><td>.line</td><td>原始C源程序的行号和.text节中机器指令之间的映射</td></tr><tr><td>.strtab</td><td>一个字符串表</td></tr><tr><td>节头部表</td><td>每个节的偏移量大小</td></tr></tbody></table><h4 id="hello的ELF头"><a href="#hello的ELF头" class="headerlink" title="hello的ELF头"></a>hello的ELF头</h4><p><img src="/images/2018-HIT-CSAPP-hello/t33.png" alt="pic33"><br>节头部表 节头部表对hello中所有的节进行了声明,其中Size是每个节的大小,Offset是每个节在程序中的偏移量,Address是程序被载入后各段的虚拟地址。<br><img src="/images/2018-HIT-CSAPP-hello/t34.png" alt="pic34"></p><h4 id="段头部表"><a href="#段头部表" class="headerlink" title="段头部表"></a>段头部表</h4><p><img src="/images/2018-HIT-CSAPP-hello/t35.png" alt="pic35"></p><h2 id="5-4-hello的虚拟地址空间"><a href="#5-4-hello的虚拟地址空间" class="headerlink" title="5.4 hello的虚拟地址空间"></a>5.4 hello的虚拟地址空间</h2><p><strong>.plt 位于代码段的plt表</strong><br><img src="/images/2018-HIT-CSAPP-hello/t36.png" alt="pic36"><br><strong>.text 代码段 
如图为hello!_start函数</strong><br><img src="/images/2018-HIT-CSAPP-hello/t37.png" alt="pic37"><br><strong>.rodata 只读数据段</strong><br><img src="/images/2018-HIT-CSAPP-hello/t38.png" alt="pic38"><br><strong>.data 数据段</strong><br><img src="/images/2018-HIT-CSAPP-hello/t39.png" alt="pic39"><br><strong>.bss 未初始化和被初始化为零的数据段</strong><br><img src="/images/2018-HIT-CSAPP-hello/t40.png" alt="pic40"><br><strong>.got 初始化前的got表</strong><br><img src="/images/2018-HIT-CSAPP-hello/t41.png" alt="pic41"></p><h2 id="5-5-链接的重定位过程分析"><a href="#5-5-链接的重定位过程分析" class="headerlink" title="5.5 链接的重定位过程分析"></a>5.5 链接的重定位过程分析</h2><p>hello相比hello.o多了许多节,如:</p><table><thead><tr><th>节</th><th>作用</th></tr></thead><tbody><tr><td>.interp</td><td>保存ld.so的路径</td></tr><tr><td>.rela.plt</td><td>.plt的重定位项目</td></tr><tr><td>.init</td><td>初始化代码</td></tr><tr><td>.plt</td><td>动态链接过程链接表</td></tr><tr><td>.got</td><td>动态链接全局偏移量表,用于存放变量</td></tr><tr><td>.got.plt</td><td>动态链接全局偏移量表,用于存放函数</td></tr></tbody></table><p>hello.o的objdump与hello的objdump主要有以下几点不同:</p><ol><li>hello.o的objdump没有<code>_init</code>函数、<code>_start</code>函数、plt表等。</li><li>hello.o的objdump中对全局变量的引用地址均为0,函数调用的地址也只是当前指令的下一条指令的地址。</li></ol><p>hello的重定位记录有两种,分别是PC相对地址的引用和绝对地址的引用。<br>进行重定位时,hello根据.rela.text和.rela.data中的重定位记录,在.symtab中查找需要修改的记录的符号,并结合符号与重定位记录中的位置信息对目标位置进行修改。如果需要修改的符号是本地符号,则计算偏移量并修改目标位置;如果是共享库中的符号,则创建.got表项(如果是函数还需创建.plt项),并创建新的重定位记录指向.got表项。</p><h2 id="5-6-hello的执行流程"><a href="#5-6-hello的执行流程" class="headerlink" title="5.6 hello的执行流程"></a>5.6 
hello的执行流程</h2><table><thead><tr><th>hello执行过程中调用的函数</th><th>函数的地址</th></tr></thead><tbody><tr><td><code>_dl_start</code></td><td>0x7fb78d93ac38</td></tr><tr><td><code>_dl_init</code></td><td>0x7fb78d9424e0</td></tr><tr><td><code>_start</code></td><td>0x400550</td></tr><tr><td><code>__libc_start_main@plt</code></td><td>0x7fb78d590740</td></tr><tr><td><code>__libc_csu_init</code></td><td>0x4006d0</td></tr><tr><td><code>init</code></td><td>0x4004a8</td></tr><tr><td><code>main</code></td><td>0x400646</td></tr><tr><td><code>__GI_exit</code></td><td>0x7fb78d5aa030</td></tr><tr><td><code>__run_exit_handlers</code></td><td>0x7fb78d5a9f10</td></tr><tr><td><code>_dl_fini</code></td><td>0x7fb78d94aab0</td></tr><tr><td><code>_IO_cleanup</code></td><td>0x7fb78d5ec310</td></tr><tr><td><code>_IO_flush_all_lockp</code></td><td>0x7fb785ec020</td></tr></tbody></table><h2 id="5-7-Hello的动态链接分析"><a href="#5-7-Hello的动态链接分析" class="headerlink" title="5.7 Hello的动态链接分析"></a>5.7 Hello的动态链接分析</h2><p>无论在内存中的何处加载一个目标模块(包括共享目标模块),数据段与代码段的距离总是保持不变。因此,代码段中任何指令和数据段中任何变量之间的距离都是一个运行时常量,与代码段和数据段的绝对内存位置是无关的。<br>而要生成对全局变量PIC引用的编译器利用了这个事实,它在数据段开始的地方创建了一个表,叫做全局偏移量表(GOT)。在GOT中,每个被这个目标模块引用的全局数据目标(过程或全局变量)都有一个8字节条目。编译器还为GOT中每个条目生成 一个重定位记录。在加载时,动态链接器会重定位GOT中的每个条目,使得它包含目标的正确的绝对地址。<br>hello中对.got的初始化是由_dl_start函数执行的。下面的四张图片反应了这一过程:</p><p><img src="/images/2018-HIT-CSAPP-hello/t42.png" alt="pic42"><br>.got <code>_dl_start</code>执行前<br><img src="/images/2018-HIT-CSAPP-hello/t43.png" alt="pic43"><br>.got.plt <code>_dl_start</code>执行前<br><img src="/images/2018-HIT-CSAPP-hello/t44.png" alt="pic44"><br>.got <code>_dl_start</code>执行后<br><img src="/images/2018-HIT-CSAPP-hello/t45.png" alt="pic45"><br>.got.plt <code>_dl_start</code>执行后</p><p>hello要调取由共享库定义的函数puts,printf,而程序调用一个由共享库定义的函数,编译器没有办法预测这个函数的运行地址,因为定义它的共享模块在运行时可以加载到任何位置。为了解决这个问题,GNU编译系统使用了延迟绑定技术:<br>当hello尝试调用puts时,不直接调用puts,而是调用进入puts对应的PLT条目。这个条目会尝试利用GOT项进行间接跳转。<br><img src="/images/2018-HIT-CSAPP-hello/t46.png" 
alt="pic46"><br>第一次被调用时,GOT项的值为PLT条目中的下一条指令地址,因而接下来会跳回PLT条目,在把puts的ID 0压入栈后,会转到PLT[0]的位置,PLT[0]通过GOT[1]间接地把动态链接器的一个参数压入栈中,然后通过GOT[2]跳转进动态链接器中。动态链接器使用两个栈条目来确定puts的运行时位置,用这个地址重写puts的GOT项,再把控制传递给puts。<br><img src="/images/2018-HIT-CSAPP-hello/t47.png" alt="pic47"><br>在下一次执行到puts对应的PLT条目时,GOT项已经被修改,因此利用GOT项进行的间接跳转会直接跳转到puts函数。<br><img src="/images/2018-HIT-CSAPP-hello/t48.png" alt="pic48"></p><h2 id="5-8-本章小结"><a href="#5-8-本章小结" class="headerlink" title="5.8 本章小结"></a>5.8 本章小结</h2><p>本章讨论了hello的链接过程。链接过程可以发生在编译时,也可以发生在加载时,甚至可以发生在程序执行时。静态链接直接将目标文件和库文件打包至一个可执行文件中,而动态链接则只在可执行目标文件中添加相应重定向记录,并通过GOT表项和延迟绑定的方法实现对目标模块中符号的引用。</p><h1 id="第6章-hello进程管理"><a href="#第6章-hello进程管理" class="headerlink" title="第6章 hello进程管理"></a>第6章 hello进程管理</h1><h2 id="6-1-进程的概念与作用"><a href="#6-1-进程的概念与作用" class="headerlink" title="6.1 进程的概念与作用"></a>6.1 进程的概念与作用</h2><p><strong>概念</strong>:进程是一个执行中的程序的实例,系统中的每个程序都运行在某个进程的上下文中。上下文是由程序正确运行所需的状态组成的。这个状态包括存放在内存中的程序的代码和数据,它的栈、通用目的寄存器的内容、程序计数器、环境变量以及打开文件描述符的集合。</p><p><strong>作用</strong>:进程提供给应用程序两个关键抽象:</p><ol><li>逻辑控制流<br>a) 每个程序似乎独占地使用CPU<br>b) 通过OS内核的上下文切换机制提供</li><li>私有地址空间<br>a) 每个程序似乎独占地使用内存系统<br>b) OS内核的虚拟内存机制提供</li></ol><h2 id="6-2-简述壳Shell-bash的作用与处理流程"><a href="#6-2-简述壳Shell-bash的作用与处理流程" class="headerlink" title="6.2 简述壳Shell-bash的作用与处理流程"></a>6.2 简述壳Shell-bash的作用与处理流程</h2><p><strong>概念</strong>:shell是一个交互型的应用级程序,它代表用户运行其它程序。它执行一系列的读/求值步骤,然后终止。其中读步骤读取来自用户的一个命令行,求值步骤解析命令行,并代表用户运行程序。</p><p><strong>处理流程</strong>:读取来自用户的命令行并解析,如果是内部命令则直接执行内部命令,否则fork一个shell进程,并在这个进程中用execve加载目标程序,按照命令中的参数决定在前台或者后台运行目标程序,当前台程序结束时用waitpid回收进程。</p><h2 id="6-3-Hello的fork进程创建过程"><a href="#6-3-Hello的fork进程创建过程" class="headerlink" title="6.3 Hello的fork进程创建过程"></a>6.3 Hello的fork进程创建过程</h2><p>在bash中输入 <code>./hello 1******* ***</code> 并敲击回车后,bash解析此条命令,发现./hello不是bash内置命令,于是在当前目录尝试寻找并执行hello文件。此时bash使用fork函数创建一个子进程(这个子进程得到与父进程用户级虚拟地址空间相同但是独立的一份副本),并更改这个子进程的进程组编号。并准备在这个子进程执行execve。</p><h2 id="6-4-Hello的execve过程"><a href="#6-4-Hello的execve过程" class="headerlink" title="6.4 
Hello的execve过程"></a>6.4 Hello的execve过程</h2><p>在新创建的子进程中,execve函数加载并运行hello,且带参数列表argv和环境变量envp。在execve加载了hello之后,它调用<code>_start</code>,<code>_start</code>设置栈,并将控制传递给新程序的主函数。</p><h2 id="6-5-Hello的进程执行"><a href="#6-5-Hello的进程执行" class="headerlink" title="6.5 Hello的进程执行"></a>6.5 Hello的进程执行</h2><p>在输入合适参数执行hello程序之后,hello进程一开始运行在用户模式。内核为hello维持一个上下文,它由一系列的值组成,这些对象包括通用目的寄存器、浮点寄存器、程序计数器、用户栈、状态寄存器、内核栈和各种内核数据结构(比如页表、进程表、文件表)。</p><p>在hello运行时,也有一些其它进程在并行地运行,这些进程的逻辑流的执行时间与hello的逻辑流重叠,称为并发流。而一个进程和其它进程轮流运行的概念叫作多任务,一个进程执行它的控制流的一部分的每一时间段叫做时间片。因此,多任务也叫做时间分片。<br><img src="/images/2018-HIT-CSAPP-hello/t49.png" alt="pic49"><br>不久hello调用printf与sleep,这两个函数引发系统调用,系统调用使得进程从用户模式变为内核模式,处理程序运行在内核模式中,当它返回到应用程序代码时,处理器就把模式从内核模式改回到用户模式,而执行sleep系统调用时,内核可能会执行上下文切换而非将控制返回给hello进程。在切换的第一部分中,内核代表hello在内核模式下执行指令,然后在某一时刻,它开始代表另一个进程在内核模式下执行指令,在切换之后,内核代表那个进程在用户模式下执行指令。<br>而这个切换过程可以分为三个步骤<br>1) 保存当前进程的上下文<br>2) 恢复某个先前被抢占的进程被保存的上下文<br>3) 将控制传递给这个新恢复的进程。</p><p><img src="/images/2018-HIT-CSAPP-hello/t50.png" alt="pic50"><br>这时我们说内核调度了一个新的进程,在内核调度了一个新的进程后,它就抢占了当前进程。<br>不仅仅是系统调用会导致上下文切换,中断也会。当hello执行了一段时间(通常是1-10ms)后,定时器引发的中断也会导致内核执行上下文切换并调度一个新的进程。<br>接下来的十秒中,内核继续执行上下文切换,轮流运行hello与其它进程,十次循环结束后,hello返回,程序终止。</p><h2 id="6-6-hello的异常与信号处理"><a href="#6-6-hello的异常与信号处理" class="headerlink" title="6.6 hello的异常与信号处理"></a>6.6 hello的异常与信号处理</h2><h3 id="6-6-1-hello在运行时可能会出现的异常"><a href="#6-6-1-hello在运行时可能会出现的异常" class="headerlink" title="6.6.1 hello在运行时可能会出现的异常"></a>6.6.1 hello在运行时可能会出现的异常</h3><p><strong>故障</strong>:缺页异常 加载完成后hello进程的页表被映射到hello文件,但还未将实际代码拷贝至内存,在执行到相应地址的代码时会引发缺页异常,拷贝相关代码。<br><strong>中断</strong>:如来自键盘的信号(见下文) 定时器中断<br><strong>陷阱</strong>:系统调用造成,如sleep函数<br><strong>终止</strong>:不可恢复的错误,如hello执行时硬件被物理伤害造成奇偶校验错误。</p><h3 id="6-6-2-hello在运行时可能会接受到的信号"><a href="#6-6-2-hello在运行时可能会接受到的信号" class="headerlink" title="6.6.2 hello在运行时可能会接受到的信号"></a>6.6.2 hello在运行时可能会接受到的信号</h3><p><strong>SIGINT 中断信号</strong> 当用户键入ctrl+c时会产生这个信号,接受这个信号,程序默认终止,如果有已经定义的handler,则会执行handler。<br><img 
src="/images/2018-HIT-CSAPP-hello/t51.png" alt="pic51"><br><strong>SIGTSTP 停止信号</strong> 当用户键入ctrl+z时会产生这个信号,接收这个信号的默认行为是中止(挂起)程序,与SIGSTOP不同,这个默认行为可以通过安装信号处理程序来更改。<br><img src="/images/2018-HIT-CSAPP-hello/t52.png" alt="pic52"><br><strong>SIGKILL 终止信号</strong> 使用kill -9向hello发出这个信号,接收这个信号的默认行为是终止程序,这个默认行为不可更改。<br><img src="/images/2018-HIT-CSAPP-hello/t53.png" alt="pic53"><br><strong>SIGCHLD 子进程信号</strong> hello终止或中止时会向父进程(bash)发出这个信号。如果hello中止,则bash会将其标记为中止状态;如果hello已经终止,则bash会回收hello进程。</p><h2 id="6-7本章小结"><a href="#6-7本章小结" class="headerlink" title="6.7本章小结"></a>6.7本章小结</h2><p>本章介绍了进程的概念与作用,并通过hello程序演示了进程的执行过程。<br>简要介绍了shell的工作流程,并分析了linux下的异常处理机制,介绍了应用程序的信号处理。</p><p>shell执行程序是通过fork函数以及execve创建新的进程并执行程序的。<br>程序运行中可能会遇到异常,异常分为中断、陷阱、故障、终止四类,由异常处理子程序来处理,信号作为一种特殊的异常,实现了对程序运行终止等操作的控制。</p><h1 id="第7章-hello的存储管理"><a href="#第7章-hello的存储管理" class="headerlink" title="第7章 hello的存储管理"></a>第7章 hello的存储管理</h1><h2 id="7-1-hello的存储器地址空间"><a href="#7-1-hello的存储器地址空间" class="headerlink" title="7.1 hello的存储器地址空间"></a>7.1 hello的存储器地址空间</h2><p><strong>逻辑地址空间</strong>:段地址:偏移地址<br>23:8048000 段寄存器(CS等16位):偏移地址(16/32/64)</p><ul><li>实模式下:逻辑地址CS:EA=物理地址CS * 16 + EA</li><li>保护模式下:以段描述符作为下标,到GDT/LDT表查表获得段地址,<br>段地址+偏移地址=线性地址。</li></ul><p><strong>线性地址空间</strong>: 非负整数地址的有序集合:{0, 1, 2, 3 … }<br><strong>虚拟地址空间</strong>: N = 2^n 个虚拟地址的集合=线性地址空间<br>{0, 1, 2, 3, …, N-1}<br><strong>物理地址空间</strong>: M = 2^m 个物理地址的集合<br> {0, 1, 2, 3, …, M-1}<br>Intel采用段页式存储管理(MMU实现)<br><strong>段式管理</strong>: 逻辑地址->线性地址==虚拟地址<br><strong>页式管理</strong>: 虚拟地址->物理地址</p><h2 id="7-2-Intel逻辑地址到线性地址的变换-段式管理"><a href="#7-2-Intel逻辑地址到线性地址的变换-段式管理" class="headerlink" title="7.2 Intel逻辑地址到线性地址的变换-段式管理"></a>7.2 Intel逻辑地址到线性地址的变换-段式管理</h2><p>实模式下:逻辑地址CS:EA=物理地址CS * 16 + EA<br>保护模式下:以段描述符作为下标,到GDT/LDT表查表获得段地址,<br>段地址+偏移地址=线性地址。<br><img src="/images/2018-HIT-CSAPP-hello/t54.png" alt="pic54"></p><h2 id="7-3-Hello的线性地址到物理地址的变换-页式管理"><a href="#7-3-Hello的线性地址到物理地址的变换-页式管理" class="headerlink" title="7.3 Hello的线性地址到物理地址的变换-页式管理"></a>7.3 
Hello的线性地址到物理地址的变换-页式管理</h2><p>hello的线性地址到物理地址的变换需要查询页表得出,hello的线性地址被分成两个部分,第一部分虚拟页号VPN用于在页表查询物理页号PPN,而第二部分虚拟页偏移量VPO则与查询到的物理页号PPN一起组成物理地址。<br><img src="/images/2018-HIT-CSAPP-hello/t55.png" alt="pic55"></p><h2 id="7-4-TLB与四级页表支持下的VA到PA的变换"><a href="#7-4-TLB与四级页表支持下的VA到PA的变换" class="headerlink" title="7.4 TLB与四级页表支持下的VA到PA的变换"></a>7.4 TLB与四级页表支持下的VA到PA的变换</h2><p><img src="/images/2018-HIT-CSAPP-hello/t56.png" alt="pic56"><br>虚拟地址VA被分成VPN和VPO两部分,VPN被分为TLBT和TLBI用于在TLB中查询。根据TLBI确定TLB中的组索引,并根据TLBT判断PPN是否已被缓存到TLB中,如果TLB命中,则直接返回PPN,否则会到页表中查询PPN。在页表中查询PPN时,VPN会被分为四个部分,分别用作一二三四级页表的索引,而前三级页表的查询结果为下一级页表的基地址,第四级页表的查询结果为PPN。将查询到的PPN与VPO组合,得到物理地址PA。</p><h2 id="7-5-三级Cache支持下的物理内存访问"><a href="#7-5-三级Cache支持下的物理内存访问" class="headerlink" title="7.5 三级Cache支持下的物理内存访问"></a>7.5 三级Cache支持下的物理内存访问</h2><p>MMU发送物理地址PA给L1缓存,L1缓存从物理地址中抽取出缓存偏移CO、缓存组索引CI以及缓存标记CT。高速缓存根据CI找到缓存中的一组,并通过CT判断是否已经缓存地址对应的数据,若缓存命中,则根据偏移量直接从缓存中读取数据并返回;若缓存不命中,则继续从L2、L3缓存中查询,若仍未命中,则从主存中读取数据。</p><h2 id="7-6-hello进程fork时的内存映射"><a href="#7-6-hello进程fork时的内存映射" class="headerlink" title="7.6 hello进程fork时的内存映射"></a>7.6 hello进程fork时的内存映射</h2><p>当fork函数被当前进程调用时,内核为新进程创建各种数据结构,并分配给它唯一的PID。为了给这个新进程创建虚拟内存,它创建了当前进程的mm_struct、区域结构和页表的原样副本。它将两个进程中的每个页面都标记为只读,并将两个进程中的每个区域结构都标记为私有的写时复制。</p><p>当fork在新进程中返回时,新进程现在的虚拟内存刚好和调用fork时存在的虚拟内存相同。当这两个进程中的任一个后来进行写操作时,写时复制机制就会创建新页面,因此,也就为每个进程保持了私有地址空间的抽象概念。</p><h2 id="7-7-hello进程execve时的内存映射"><a href="#7-7-hello进程execve时的内存映射" class="headerlink" title="7.7 hello进程execve时的内存映射"></a>7.7 hello进程execve时的内存映射</h2><p>execve函数在当前进程中加载并运行包含在可执行文件hello中的程序,加载并运行hello时出现的内存映射有:</p><ol><li>映射私有区域 为新程序的代码、数据、bss和栈区域创建新的区域结构。所有这些新的区域都是私有的、写时复制的。代码和数据区域被映射为hello文件中的.text和.data区。bss区域是请求二进制零的,映射到匿名文件,其大小包含在hello中。栈和堆区域也是请求二进制零的,初始长度为零。</li><li>映射共享区域 如果hello程序与共享对象(或目标)链接,比如C库libc.so,那么这些对象都是动态链接到这个程序的,然后再映射到用户虚拟地址空间中的共享区域内。<br><img src="/images/2018-HIT-CSAPP-hello/t57.png" alt="pic57"></li></ol><h2 id="7-8-缺页故障与缺页中断处理"><a href="#7-8-缺页故障与缺页中断处理" class="headerlink" title="7.8 缺页故障与缺页中断处理"></a>7.8 
缺页故障与缺页中断处理</h2><p><strong>缺页故障</strong>:虚拟内存中的字不在物理内存中(DRAM缓存不命中)<br>如下图,VP3已经被映射到页表中,但却没有被缓存到物理内存中,此时对VP3的引用会引发缺页故障。<br><img src="/images/2018-HIT-CSAPP-hello/t58.png" alt="pic58"><br>缺页会引发一个缺页中断,而缺页异常处理程序会选择一个牺牲页(如下图选择了VP4,将VP4从内存交换到磁盘,并从磁盘读取VP3交换到物理内存)。<br><img src="/images/2018-HIT-CSAPP-hello/t59.png" alt="pic59"><br>此时令导致缺页的指令重新启动,就可以使得页面命中了。</p><h2 id="7-9动态存储分配管理"><a href="#7-9动态存储分配管理" class="headerlink" title="7.9动态存储分配管理"></a>7.9动态存储分配管理</h2><p>printf会调用malloc,接下来提一下动态内存分配的基本原理。</p><p>动态内存分配器维护着一个进程的虚拟内存区域,称为堆。系统之间细节不同,但是不失通用性,假设堆是一个请求二进制零的区域,它紧接在未初始化的数据区域后开始,并向上生长。对于每个进程,内核维护着一个变量brk,它指向堆的顶部。分配器将堆视为一组不同大小的块的集合来维护,每个块就是一个连续的虚拟内存片,要么是已分配的,要么是空闲的。已分配的块显式地保留为供应用程序使用。空闲块可以用来分配。空闲块保持空闲,直到它显式地被应用所分配。一个已分配的块保持已分配状态,直到它被释放,这种释放要么是应用程序显式执行的,要么是内存分配器自身隐式执行的。</p><p>两种堆的数据结构组织形式:</p><h4 id="带标签的隐式空闲链表"><a href="#带标签的隐式空闲链表" class="headerlink" title="带标签的隐式空闲链表"></a>带标签的隐式空闲链表</h4><p>带标签的隐式空闲链表的数据组织方式如下图:<br><img src="/images/2018-HIT-CSAPP-hello/t60.png" alt="pic60"><br>空闲块是通过头部中的大小字段隐含地连接着的。分配器可以通过遍历堆中所有的块,从而间接地遍历整个空闲块的集合。</p><h4 id="显式空闲链表"><a href="#显式空闲链表" class="headerlink" title="显式空闲链表"></a>显式空闲链表</h4><p>显式空闲链表将链表的指针存放在空闲块的主体里面。堆被组织成一个双向空闲链表,在每个空闲块中,都包含一个pred和succ指针,如下图所示:<br><img src="/images/2018-HIT-CSAPP-hello/t61.png" alt="pic61"></p><h2 id="7-10本章小结"><a href="#7-10本章小结" class="headerlink" title="7.10本章小结"></a>7.10本章小结</h2><p>现代操作系统多采用虚拟内存系统,访存时地址需要从逻辑地址翻译到虚拟地址并进一步翻译成物理地址。<br>操作系统通过地址的页式管理来实现对磁盘的缓存、内存管理、内存保护等功能。<br>虚拟内存为便捷的加载、进程管理提供了可能。<br>程序运行过程中往往涉及动态内存分配,动态内存分配通过动态内存分配器完成。</p><h1 id="第8章-hello的IO管理"><a href="#第8章-hello的IO管理" class="headerlink" title="第8章 hello的IO管理"></a>第8章 hello的IO管理</h1><h2 id="8-1-Linux的IO设备管理方法"><a href="#8-1-Linux的IO设备管理方法" class="headerlink" title="8.1 Linux的IO设备管理方法"></a>8.1 Linux的IO设备管理方法</h2><p>一个linux文件就是一个m个字节的序列:<br> <strong>B0, B1, … Bk, …, Bm-1</strong></p><p>所有的I/O设备(例如网络、磁盘和终端)都被模型化为文件,而所有的输入和输出都被当做对相应文件的读和写来执行。这种将设备优雅地映射为文件的方式,允许linux内核引出一个简单、低级的应用接口,称为Unix I/O,这使得所有的输入和输出都能以一种统一且一致的方式来执行。</p><h2 
id="8-2-简述Unix-IO接口及其函数"><a href="#8-2-简述Unix-IO接口及其函数" class="headerlink" title="8.2 简述Unix IO接口及其函数"></a>8.2 简述Unix IO接口及其函数</h2><h4 id="打开和关闭文件"><a href="#打开和关闭文件" class="headerlink" title="打开和关闭文件"></a>打开和关闭文件</h4><p><code>int open(char *filename, int flags, mode_t mode);</code><br>open函数将filename转换为一个文件描述符,并返回描述符数字。返回的描述符总是在进程中当前没有打开的最小描述符。flags参数指明了进程打算如何访问这个文件,mode参数指定了新文件的访问权限位。<br><code>int close(int fd);</code><br>进程通过调用close关闭一个打开的文件。</p><h4 id="读和写文件"><a href="#读和写文件" class="headerlink" title="读和写文件"></a>读和写文件</h4><p><code>ssize_t read(int fd, void *buf, size_t n);</code><br>read函数从描述符为fd的当前文件位置复制最多n个字节到内存位置buf。返回值-1表示一个错误,而返回值0表示EOF。否则,返回值表示的是实际传送的字节数量。<br><code>ssize_t write(int fd, const void *buf, size_t n);</code><br>write函数从内存位置buf复制至多n个字节到描述符fd的当前文件位置。<br><code>DIR *opendir(const char *name);</code><br>函数opendir以路径名为参数,返回指向目录流的指针。流是对条目有序列表的抽象,在这里是指目录项的列表。<br><code>struct dirent *readdir(DIR *dirp);</code><br>每次对readdir的调用返回的都是指向流dirp中下一个目录项的指针,或者,如果没有更多目录项则返回NULL。<br><code>int closedir(DIR *dirp);</code><br>函数closedir关闭流并释放其所有的资源。</p><h4 id="I-O重定向"><a href="#I-O重定向" class="headerlink" title="I/O重定向"></a>I/O重定向</h4><p><code>int dup2(int oldfd, int newfd);</code><br>dup2函数复制描述符表表项oldfd到描述符表项newfd,覆盖描述符表表项newfd以前的内容。如果newfd已经打开了,dup2会在复制oldfd之前关闭newfd。</p><h2 id="8-3-printf的实现分析"><a href="#8-3-printf的实现分析" class="headerlink" title="8.3 printf的实现分析"></a>8.3 printf的实现分析</h2><p>printf函数的实现大致与下面代码一致:<br><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">printf</span><span 
class="params">(<span class="keyword">const</span> <span class="keyword">char</span> *fmt, ...)</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"> <span class="keyword">int</span> i;</span><br><span class="line"> <span class="keyword">char</span> buf[<span class="number">256</span>];</span><br><span class="line"></span><br><span class="line"> va_list arg = (va_list)((<span class="keyword">char</span> *)(&fmt) + <span class="number">4</span>);</span><br><span class="line"> i = <span class="built_in">vsprintf</span>(buf, fmt, arg);</span><br><span class="line"> write(buf, i);</span><br><span class="line"></span><br><span class="line"> <span class="keyword">return</span> i;</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>其中va_list arg是变长参数列表中的第一个参数的地址,vsprintf的作用是以fmt作为格式字符串,根据arg中的参数,向buf中输出格式化后的字符串。write则是Unix I/O接口,它将栈中参数存入寄存器,并由它来进行系统调用。write的实现大致如下,其中ecx是字符个数,ebx存放第一个字符地址:<br>write:<br> <figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line"> mov eax, _NR_write</span><br><span class="line"> mov ebx, [esp + 4]</span><br><span class="line"> mov ecx, [esp + 8]</span><br><span class="line">int INT_VECTOR_SYS_CALL</span><br></pre></td></tr></table></figure></p><p>接下来syscall将字符串从寄存器中通过总线复制到显卡显存中。字符显示驱动子程序通过ASCII码在字模库中找到点阵信息并将其存储到vram中。接下来显示芯片按照刷新频率逐行读取vram,并通过信号线向液晶显示器传输每一个点(RGB分量)。此时字符串就被打印到了屏幕上。</p><h2 id="8-4-getchar的实现分析"><a href="#8-4-getchar的实现分析" class="headerlink" title="8.4 getchar的实现分析"></a>8.4 getchar的实现分析</h2><p>getchar函数的大致实现如下:<br><figure class="highlight c"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span 
class="line"><span class="function"><span class="keyword">int</span> <span class="title">getchar</span><span class="params">(<span class="keyword">void</span>)</span></span></span><br><span class="line"><span class="function"></span>{</span><br><span class="line"> <span class="keyword">char</span> c;</span><br><span class="line"> <span class="keyword">return</span> (read(<span class="number">0</span>,&c,<span class="number">1</span>)==<span class="number">1</span>)?(<span class="keyword">unsigned</span> <span class="keyword">char</span>)c:EOF;</span><br><span class="line">}</span><br></pre></td></tr></table></figure></p><p>getchar函数通过调用read函数来读取字符。read函数有三个参数,第一个参数为文件描述符fd,fd为0表示标准输入;第二个参数为输入内容的指针;第三个参数为读入字符的个数。read函数的返回值是读入字符的个数,若出错则返回-1。</p><p>当用户按键时,键盘接口会产生一个键盘扫描码和一个中断请求,中断处理程序会从键盘接口取得按键扫描码并把它转换成ASCII码,保存到系统的键盘缓冲区。</p><p>read执行一个系统调用,按照系统调用从键盘缓冲区读取按键ASCII码,直到接收到回车键才返回。</p><h2 id="8-5本章小结"><a href="#8-5本章小结" class="headerlink" title="8.5本章小结"></a>8.5本章小结</h2><p>I/O是在主存和外部设备之间复制数据的过程。在Linux中,I/O的实现是通过Unix I/O函数来执行的。Linux把所有的I/O设备模型化为文件,并提供统一的Unix I/O接口,这使得所有的输入输出都能以一种统一且一致的方式来执行。</p><h1 id="结论"><a href="#结论" class="headerlink" title="结论"></a>结论</h1><h1 id="hello的一生"><a href="#hello的一生" class="headerlink" title="hello的一生"></a>hello的一生</h1><ol><li>用户从键盘输入,得到hello.c源文件。</li><li>编译器和汇编器对hello.c进行预处理,然后对其进行编译和汇编,得到可重定位目标文件hello.o。</li><li>链接器对hello.o进行链接,并得到可执行目标文件hello,此时hello已经可以被操作系统加载和执行。</li><li>bash执行hello,首先bash会fork一个进程,然后在这个新的进程中execve hello,execve会清空当前进程的数据并加载hello,然后把rip指向hello的程序入口,把控制权交给hello。</li><li>hello与许多进程并行执行,执行过程中由于系统调用或者计时器中断,会导致上下文切换,内核会选择另一个进程进行调度,并抢占当前的hello进程。</li><li>hello执行的过程中可能收到来自键盘或者其它进程的信号,当收到信号时hello会调用信号处理程序来进行处理,可能出现的行为有停止、终止、忽略等。</li><li>hello进行输入输出时需要调用printf和getchar,而printf和getchar的实现需要调用Unix 
I/O中的write和read函数,而它们的实现需要借助系统调用。</li><li>hello中的访存操作,需要经历逻辑地址到线性地址最后到物理地址的变换,而访问物理地址的数据可能已被缓存至高速缓冲区,也可能位于主存中,也可能位于磁盘中等待被交换到主存。</li><li>hello结束进程后,bash作为hello的父进程会回收hello进程。</li></ol><p>在Kernighan和Ritchie的关于C编程语言的经典教材中,他们通过一个简单的hello程序来向读者介绍C。尽管hello非常简单,但是为了让它实现运行,系统的每个主要组成部分都需要协调工作。从某种意义上来说,计算机系统课程的学习,就是让我们了解当在系统上执行hello程序时,系统发生了什么以及为什么会这样。</p>]]></content>
<summary type="html">
<h1 id="摘-要"><a href="#摘-要" class="headerlink" title="摘 要"></a>摘 要</h1><p>计算机系统是由硬件和软件组成的,它们共同工作来运行应用程序。即使是最简单的一个应用程序,也需要计算机系统中的每个主要组成部分协调工作。本文以hello程序的运行为切入点,解释了当在系统上运行hello程序时,系统发生的编译、链接、加载、进程管理、存储管理等过程,以及它们的运行机制。<br><strong>关键词</strong>:操作系统;计算机组成原理;汇编<br>
</summary>
<category term="计算机组成原理" scheme="https://www.mhlwsk.com/tags/%E8%AE%A1%E7%AE%97%E6%9C%BA%E7%BB%84%E6%88%90%E5%8E%9F%E7%90%86/"/>
<category term="汇编" scheme="https://www.mhlwsk.com/tags/%E6%B1%87%E7%BC%96/"/>
</entry>
<entry>
<title>计蒜客31445 (Made In Heaven)[A*,第K最短路]</title>
<link href="https://www.mhlwsk.com/archives/20320.html"/>
<id>https://www.mhlwsk.com/archives/20320.html</id>
<published>2018-09-09T02:20:15.000Z</published>
<updated>2020-07-30T13:07:23.802Z</updated>
<content type="html"><![CDATA[<p><strong>题目链接:</strong><a href="https://nanti.jisuanke.com/t/31445" rel="external nofollow noopener noreferrer" target="_blank">https://nanti.jisuanke.com/t/31445</a></p><p>题目大意:给定一张图,问第K最短路的长度是否小于给定值T<br>求第K最短路,令$f=g+h$,$g$为当前已经走过的距离,$h$为当前点到终点的最短距离<br>从起点开始拓展,每次选择$f$最短的点进行拓展(入队),每次出队时检查是否走到终点并统计终点出队次数,当终点出队次数为$k$时当前点的$g$就是答案<br>可以这样考虑:每次严格按照估价函数进行拓展,则第一次走到终点的方案一定是最短路,第二次就是第2最短路,第k次就是第k最短路<br>这样先用spfa预处理出每个节点的$h$再用A*拓展即可</p><p>这题还有一个需要注意的地方:A*用的优先队列很有可能在使用完后没有被清空(因为函数可能提前退出了),此时如果直接用会MLE,而如果用一个一个pop的方式清空会直接TLE,所以比较好的解决方法是直接把优先队列写在函数体内部,这样就不用清空了</p><a id="more"></a><figure class="highlight cpp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span 
class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br><span class="line">78</span><br><span class="line">79</span><br><span class="line">80</span><br><span class="line">81</span><br><span class="line">82</span><br><span class="line">83</span><br><span class="line">84</span><br><span class="line">85</span><br><span class="line">86</span><br><span class="line">87</span><br><span class="line">88</span><br><span class="line">89</span><br><span class="line">90</span><br><span class="line">91</span><br><span class="line">92</span><br><span class="line">93</span><br><span class="line">94</span><br><span class="line">95</span><br><span class="line">96</span><br><span class="line">97</span><br><span class="line">98</span><br><span class="line">99</span><br><span class="line">100</span><br><span class="line">101</span><br><span class="line">102</span><br><span class="line">103</span><br><span 
class="line">104</span><br><span class="line">105</span><br><span class="line">106</span><br><span class="line">107</span><br><span class="line">108</span><br><span class="line">109</span><br><span class="line">110</span><br><span class="line">111</span><br><span class="line">112</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><cstdio></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><cstring></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><algorithm></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><vector></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><stack></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><queue></span></span></span><br><span class="line"><span class="keyword">using</span> <span class="keyword">namespace</span> <span class="built_in">std</span>;</span><br><span class="line"><span class="keyword">const</span> <span class="keyword">int</span> maxn=<span class="number">1009</span>;</span><br><span class="line"><span class="keyword">const</span> <span class="keyword">int</span> maxm=<span class="number">10009</span>;</span><br><span class="line"><span class="keyword">int</span> s,t,k,up,n,m;</span><br><span class="line"><span class="keyword">int</span> head[maxn],revhead[maxn];</span><br><span class="line"><span class="class"><span class="keyword">struct</span> <span class="title">Edge</span>{</span></span><br><span class="line"> <span class="keyword">int</span> v,w,next;</span><br><span class="line"></span><br><span 
class="line">}ed[maxm],reved[maxm];</span><br><span class="line"><span class="keyword">int</span> ne1=<span class="number">0</span>,ne2=<span class="number">0</span>;</span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">init</span><span class="params">()</span> </span>{</span><br><span class="line"> <span class="built_in">memset</span>(head,<span class="number">-1</span>,<span class="keyword">sizeof</span>(head));</span><br><span class="line"> <span class="built_in">memset</span>(revhead,<span class="number">-1</span>,<span class="keyword">sizeof</span>(revhead));</span><br><span class="line"> ne1=<span class="number">0</span>;</span><br><span class="line"> ne2=<span class="number">0</span>;</span><br><span class="line"></span><br><span class="line">}</span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">add</span><span class="params">(<span class="keyword">int</span> u,<span class="keyword">int</span> v,<span class="keyword">int</span> w)</span> </span>{</span><br><span class="line"> ed[ne1].v=v;ed[ne1].w=w;ed[ne1].next=head[u];head[u]=ne1++;</span><br><span class="line"> reved[ne2].v=u;reved[ne2].w=w;reved[ne2].next=revhead[v];revhead[v]=ne2++;</span><br><span class="line"></span><br><span class="line">}</span><br><span class="line"><span class="keyword">int</span> h[maxn];</span><br><span class="line"><span class="keyword">bool</span> vis[maxn];</span><br><span class="line"></span><br><span class="line"><span class="built_in">stack</span><<span class="keyword">int</span>> q;</span><br><span class="line"><span class="function"><span class="keyword">void</span> <span class="title">spfa</span><span class="params">()</span> </span>{</span><br><span class="line"> <span class="keyword">for</span>(<span class="keyword">int</span> i=<span class="number">1</span>;i<=n;i++) h[i]=<span class="number">0x3f3f3f3f</span>;</span><br><span class="line"> <span 
class="built_in">memset</span>(vis,<span class="number">0</span>,<span class="keyword">sizeof</span>(vis));</span><br><span class="line"> <span class="keyword">while</span>(!q.empty()) q.pop();</span><br><span class="line"> h[t]=<span class="number">0</span>;</span><br><span class="line"> vis[t]=<span class="literal">true</span>;</span><br><span class="line"> q.push(t);</span><br><span class="line"><span class="keyword">while</span>(!q.empty()) {</span><br><span class="line"> <span class="keyword">int</span> now=q.top(); q.pop();</span><br><span class="line"><span class="keyword">for</span>(<span class="keyword">int</span> i=revhead[now],v;~i;i=reved[i].next) {</span><br><span class="line"> v=reved[i].v;</span><br><span class="line"><span class="keyword">if</span>(h[v]>h[now]+reved[i].w) {</span><br><span class="line"> h[v]=h[now]+reved[i].w;</span><br><span class="line"><span class="keyword">if</span>(!vis[v]) {</span><br><span class="line"> vis[v]=<span class="literal">true</span>;</span><br><span class="line"> q.push(v);</span><br><span class="line"> </span><br><span class="line">}</span><br><span class="line"> </span><br><span class="line">}</span><br><span class="line"> </span><br><span class="line">}</span><br><span class="line"> vis[now]=<span class="literal">false</span>;</span><br><span class="line"> </span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">}</span><br><span class="line"><span class="class"><span class="keyword">struct</span> <span class="title">Node</span> {</span></span><br><span class="line"> <span class="keyword">int</span> f,g,v;</span><br><span class="line"> Node(<span class="keyword">int</span> _f,<span class="keyword">int</span> _g,<span class="keyword">int</span> _v):f(_f),g(_g),v(_v) {}</span><br><span class="line"><span class="keyword">bool</span> <span class="keyword">operator</span> <(<span class="keyword">const</span> Node &y) <span class="keyword">const</span> {</span><br><span class="line"> 
<span class="keyword">if</span>(f==y.f) <span class="keyword">return</span> g>y.g;</span><br><span class="line"> <span class="keyword">return</span> f>y.f;</span><br><span class="line"> </span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">};</span><br><span class="line"><span class="function"><span class="keyword">bool</span> <span class="title">astar</span><span class="params">()</span> </span>{</span><br><span class="line"> priority_queue<Node> pq;</span><br><span class="line"> <span class="keyword">while</span>(!pq.empty()) pq.pop();</span><br><span class="line"> <span class="keyword">int</span> cnt=<span class="number">0</span>;</span><br><span class="line"> <span class="keyword">if</span>(h[s]==<span class="number">0x3f3f3f3f</span>) <span class="keyword">return</span> <span class="literal">false</span>;<span class="comment">//</span></span><br><span class="line"> pq.push(Node(<span class="number">0</span>+h[s],<span class="number">0</span>,s));</span><br><span class="line"> <span class="keyword">if</span>(s==t) k++;</span><br><span class="line"><span class="keyword">while</span>(!pq.empty()) {</span><br><span class="line"> Node now=pq.top(); pq.pop();</span><br><span class="line"> <span class="keyword">if</span>(now.v==t) ++cnt;</span><br><span class="line"><span class="keyword">if</span>(cnt==k) {</span><br><span class="line"> <span class="keyword">if</span>(now.g<=up) <span class="keyword">return</span> <span class="literal">true</span>;</span><br><span class="line"> <span class="keyword">else</span> <span class="keyword">return</span> <span class="literal">false</span>;</span><br><span class="line"> </span><br><span class="line">}</span><br><span class="line"><span class="keyword">for</span>(<span class="keyword">int</span> i=head[now.v],v;~i;i=ed[i].next) {</span><br><span class="line"> <span class="comment">//if(i&1) continue;</span></span><br><span class="line"> v=ed[i].v;</span><br><span class="line"> 
pq.push(Node(now.g+ed[i].w+h[v],now.g+ed[i].w,v));</span><br><span class="line"> </span><br><span class="line">}</span><br><span class="line"> </span><br><span class="line">}</span><br><span class="line"> <span class="keyword">return</span> <span class="literal">false</span>;</span><br><span class="line"></span><br><span class="line">}</span><br><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">main</span><span class="params">()</span> </span>{</span><br><span class="line"> <span class="keyword">int</span> u,v,w;</span><br><span class="line"> <span class="comment">//printf("%d",0x3f3f3f3f);</span></span><br><span class="line"><span class="keyword">while</span>(~<span class="built_in">scanf</span>(<span class="string">"%d%d"</span>,&n,&m)) {</span><br><span class="line"> <span class="built_in">scanf</span>(<span class="string">"%d%d%d%d"</span>,&s,&t,&k,&up);</span><br><span class="line"> init();</span><br><span class="line"><span class="keyword">for</span>(<span class="keyword">int</span> i=<span class="number">0</span>;i<m;i++) {</span><br><span class="line"> <span class="built_in">scanf</span>(<span class="string">"%d%d%d"</span>,&u,&v,&w);</span><br><span class="line"> add(u,v,w);</span><br><span class="line"> </span><br><span class="line">}</span><br><span class="line"> spfa();</span><br><span class="line"> <span class="built_in">printf</span>(astar()?<span class="string">"yareyaredawa\n"</span>:<span class="string">"Whitesnake!\n"</span>);</span><br><span class="line"> </span><br><span class="line">}</span><br><span class="line"></span><br><span class="line">}</span><br></pre></td></tr></table></figure>]]></content>
<summary type="html">
<p><strong>Problem link:</strong><a href="https://nanti.jisuanke.com/t/31445" rel="external nofollow noopener noreferrer" target="_blank">https://nanti.jisuanke.com/t/31445</a></p>
<p>Problem statement: given a graph, determine whether the length of the k-th shortest path is at most a given value T.<br>To find the k-th shortest path, let $f=g+h$, where $g$ is the distance traveled so far and $h$ is the shortest distance from the current node to the destination.<br>Expand from the source, each time popping the node with the smallest $f$ and pushing its successors; on every pop, check whether the node is the destination and count how many times the destination has been popped. When the destination is popped for the $k$-th time, that state's $g$ is the answer.<br>The intuition: since expansion strictly follows the evaluation function, the first walk to reach the destination must be the shortest path, the second is the 2nd shortest, and the $k$-th is the $k$-th shortest.<br>So first precompute $h$ for every node with SPFA, then expand with A*.</p>
<p>One more pitfall: the priority queue used by A* may well be left non-empty after use (the function can return early). Reusing it directly across test cases then causes MLE, while clearing it by popping one element at a time causes TLE. A clean fix is to declare the priority queue inside the function body, so it never needs explicit clearing.</p>
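The procedure above can be sketched as a minimal self-contained example (not the submitted solution: it uses Dijkstra on the reversed graph instead of SPFA for the $h$ precomputation, and the names `kth_shortest`, `Edge`, and the toy graph are all made up for illustration):

```cpp
#include <climits>
#include <functional>
#include <queue>
#include <tuple>
#include <vector>
using namespace std;

struct Edge { int v, w; };

// Length of the k-th shortest walk from s to t, or -1 if fewer than k exist.
// g: forward adjacency lists; rg: the same graph with every edge reversed.
int kth_shortest(const vector<vector<Edge>>& g, const vector<vector<Edge>>& rg,
                 int s, int t, int k) {
    int n = (int)g.size();
    // Step 1: h[v] = exact shortest distance from v to t,
    // computed by Dijkstra on the reversed graph.
    vector<int> h(n, INT_MAX);
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<>> dq;
    h[t] = 0; dq.push({0, t});
    while (!dq.empty()) {
        auto [d, u] = dq.top(); dq.pop();
        if (d > h[u]) continue;           // stale entry
        for (const Edge& e : rg[u])
            if (h[u] + e.w < h[e.v]) { h[e.v] = h[u] + e.w; dq.push({h[e.v], e.v}); }
    }
    if (h[s] == INT_MAX) return -1;       // t is unreachable from s

    // Step 2: A* ordered by f = g + h; the k-th pop of t is the answer.
    // The queue is local to the function, so it never needs explicit clearing.
    priority_queue<tuple<int,int,int>,
                   vector<tuple<int,int,int>>, greater<>> pq; // (f, g, v)
    pq.push({h[s], 0, s});
    int cnt = 0;
    while (!pq.empty()) {
        auto [f, dist, u] = pq.top(); pq.pop();
        if (u == t && ++cnt == k) return dist;
        for (const Edge& e : g[u])
            if (h[e.v] != INT_MAX)        // skip dead ends that cannot reach t
                pq.push({dist + e.w + h[e.v], dist + e.w, e.v});
    }
    return -1;                            // fewer than k walks reach t
}
```

Note the original code additionally bumps `k` when `s == t`, because the zero-length stay-at-the-source walk is popped first and usually does not count as a path.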
</summary>
<category term="A\*" scheme="https://www.mhlwsk.com/tags/A/"/>
<category term="最短路" scheme="https://www.mhlwsk.com/tags/%E6%9C%80%E7%9F%AD%E8%B7%AF/"/>
</entry>
<entry>
<title>CodeForces 100342 (Triatrip)[bitset]</title>
<link href="https://www.mhlwsk.com/archives/55288.html"/>
<id>https://www.mhlwsk.com/archives/55288.html</id>
<published>2018-09-01T02:45:18.000Z</published>
<updated>2020-07-30T13:07:23.782Z</updated>
<content type="html"><![CDATA[<p><strong>Problem link:</strong><a href="http://codeforces.com/gym/100342/attachments" rel="external nofollow noopener noreferrer" target="_blank">http://codeforces.com/gym/100342/attachments</a></p><p>Problem statement: given a directed graph, count how many directed triangles (3-cycles) it contains.</p><blockquote><p>The bound is only 1500, so n^2 works: brute-force over pairs of vertices forming an edge, say A->B, and precompute which vertices have an edge into A and which vertices B has an edge to. That gives two sets; intersect them and count the elements of the intersection.<br>Quoted from <a href="https://www.cnblogs.com/qscqesze/p/4708878.html" rel="external nofollow noopener noreferrer" target="_blank">qscqesze</a></p></blockquote><a id="more"></a><figure class="highlight cpp"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><cstdio></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span
class="meta-string"><cstring></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><algorithm></span></span></span><br><span class="line"><span class="meta">#<span class="meta-keyword">include</span> <span class="meta-string"><bitset></span></span></span><br><span class="line"><span class="keyword">using</span> <span class="keyword">namespace</span> <span class="built_in">std</span>;</span><br><span class="line"><span class="keyword">const</span> <span class="keyword">int</span> maxn=<span class="number">1509</span>;</span><br><span class="line"><span class="keyword">typedef</span> <span class="keyword">long</span> <span class="keyword">long</span> ll;</span><br><span class="line"><span class="built_in">bitset</span><maxn> g[maxn];</span><br><span class="line"><span class="keyword">char</span> s[maxn];</span><br><span class="line"><span class="function"><span class="keyword">int</span> <span class="title">main</span><span class="params">()</span> </span>{</span><br><span class="line"> freopen(<span class="string">"triatrip.in"</span>,<span class="string">"r"</span>,<span class="built_in">stdin</span>);</span><br><span class="line"> freopen(<span class="string">"triatrip.out"</span>,<span class="string">"w"</span>,<span class="built_in">stdout</span>);</span><br><span class="line"><span class="keyword">int</span> n;</span><br><span class="line"><span class="built_in">scanf</span>(<span class="string">"%d"</span>,&n);</span><br><span class="line"><span class="keyword">for</span>(<span class="keyword">int</span> i=<span class="number">0</span>;i<n;i++) {</span><br><span class="line"><span class="built_in">scanf</span>(<span class="string">"%s"</span>,s);</span><br><span class="line"><span class="keyword">for</span>(<span class="keyword">int</span> j=<span class="number">0</span>;s[j]!=<span class="string">'\0'</span>;j++) </span><br><span class="line"><span 
class="keyword">if</span>(s[j]==<span class="string">'+'</span>) g[i].<span class="built_in">set</span>(j);</span><br><span class="line">}</span><br><span class="line">ll ans=<span class="number">0</span>;</span><br><span class="line"><span class="function"><span class="built_in">bitset</span><maxn> <span class="title">tmp</span><span class="params">(<span class="number">0</span>)</span></span>;</span><br><span class="line"><span class="keyword">for</span>(<span class="keyword">int</span> i=<span class="number">0</span>;i<n;i++) {</span><br><span class="line">tmp.reset();</span><br><span class="line"><span class="keyword">for</span>(<span class="keyword">int</span> j=<span class="number">0</span>;j<n;j++) {</span><br><span class="line"><span class="keyword">if</span>(i==j) <span class="keyword">continue</span>;</span><br><span class="line"><span class="keyword">if</span>(g[j].test(i)) tmp.<span class="built_in">set</span>(j);</span><br><span class="line">}</span><br><span class="line"><span class="keyword">for</span>(<span class="keyword">int</span> j=<span class="number">0</span>;j<n;j++) {</span><br><span class="line"><span class="keyword">if</span>(i==j) <span class="keyword">continue</span>;</span><br><span class="line"><span class="keyword">if</span>(g[i].test(j)) ans+=(tmp&g[j]).count();</span><br><span class="line">}</span><br><span class="line">}</span><br><span class="line"><span class="built_in">printf</span>(<span class="string">"%lld"</span>,ans/<span class="number">3</span>);</span><br><span class="line">}</span><br></pre></td></tr></table></figure>]]></content>
<summary type="html">
<p><strong>Problem link:</strong><a href="http://codeforces.com/gym/100342/attachments" rel="external nofollow noopener noreferrer" target="_blank">http://codeforces.com/gym/100342/attachments</a></p>
<p>Problem statement: given a directed graph, count how many directed triangles (3-cycles) it contains.</p>
<blockquote>
<p>The bound is only 1500, so n^2 works: brute-force over pairs of vertices forming an edge, say A->B, and precompute which vertices have an edge into A and which vertices B has an edge to. That gives two sets; intersect them and count the elements of the intersection.<br>Quoted from <a href="https://www.cnblogs.com/qscqesze/p/4708878.html" rel="external nofollow noopener noreferrer" target="_blank">qscqesze</a></p>
</blockquote>
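The quoted idea maps directly onto `std::bitset`. Below is a minimal sketch (the fixed size `N = 4`, the toy adjacency matrix, and the name `count_triangles` are all made up for illustration; the contest code instead reads a 1500x1500 '+'/'-' matrix from file):

```cpp
#include <bitset>
using namespace std;

const int N = 4; // assumed tiny size for the example

// Count directed 3-cycles: for each edge i -> j, the vertices k that close
// a cycle i -> j -> k -> i are exactly (successors of j) AND (predecessors of i).
long long count_triangles(const bitset<N> adj[N]) {
    long long total = 0;
    bitset<N> pred; // pred[k] == 1 iff there is an edge k -> i
    for (int i = 0; i < N; ++i) {
        pred.reset();
        for (int k = 0; k < N; ++k)
            if (k != i && adj[k][i]) pred.set(k);
        for (int j = 0; j < N; ++j)
            if (i != j && adj[i][j])
                total += (pred & adj[j]).count(); // intersect and popcount
    }
    return total / 3; // each 3-cycle is counted once per each of its 3 edges
}
```

The whole inner intersection is a single word-parallel AND plus a popcount, which is what brings the n^3 triple loop down to roughly n^3/64 operations.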
</summary>
<category term="STL" scheme="https://www.mhlwsk.com/tags/STL/"/>
</entry>
</feed>