{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 6.7 门控循环单元GRU\n",
"## 6.7.2 读取数据集"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1.2.0 cpu\n"
]
}
],
"source": [
"import numpy as np\n",
"import torch\n",
"from torch import nn, optim\n",
"import torch.nn.functional as F\n",
"\n",
"import sys\n",
"sys.path.append(\"..\") \n",
"import d2lzh_pytorch as d2l\n",
"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
"\n",
"(corpus_indices, char_to_idx, idx_to_char, vocab_size) = d2l.load_data_jay_lyrics()\n",
"print(torch.__version__, device)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6.7.3 从零开始实现\n",
"### 6.7.3.1 初始化模型参数"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"will use cpu\n"
]
}
],
"source": [
"num_inputs, num_hiddens, num_outputs = vocab_size, 256, vocab_size\n",
"print('will use', device)\n",
"\n",
"def get_params():\n",
" def _one(shape):\n",
" ts = torch.tensor(np.random.normal(0, 0.01, size=shape), device=device, dtype=torch.float32)\n",
" return torch.nn.Parameter(ts, requires_grad=True)\n",
" def _three():\n",
" return (_one((num_inputs, num_hiddens)),\n",
" _one((num_hiddens, num_hiddens)),\n",
" torch.nn.Parameter(torch.zeros(num_hiddens, device=device, dtype=torch.float32), requires_grad=True))\n",
" \n",
" W_xz, W_hz, b_z = _three() # 更新门参数\n",
" W_xr, W_hr, b_r = _three() # 重置门参数\n",
" W_xh, W_hh, b_h = _three() # 候选隐藏状态参数\n",
" \n",
" # 输出层参数\n",
" W_hq = _one((num_hiddens, num_outputs))\n",
" b_q = torch.nn.Parameter(torch.zeros(num_outputs, device=device, dtype=torch.float32), requires_grad=True)\n",
" return nn.ParameterList([W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q])"
]
},
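{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small optional check (added here for illustration, not part of the original notebook), the next cell counts the parameters returned by `get_params`: three weight/bias groups for the update gate, reset gate, and candidate hidden state, plus the output layer."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional, hypothetical check: total number of trainable GRU parameters\n",
"params = get_params()\n",
"print(sum(p.numel() for p in params), 'trainable parameters')"
]
},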
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 6.7.3.2 定义模型"
]
},
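{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference (this summary cell is an addition that simply mirrors the `gru` function defined below), at each time step $t$ the GRU computes, with $\\sigma$ the sigmoid and $\\odot$ elementwise multiplication:\n",
"\n",
"$$Z_t = \\sigma(X_t W_{xz} + H_{t-1} W_{hz} + b_z)$$\n",
"$$R_t = \\sigma(X_t W_{xr} + H_{t-1} W_{hr} + b_r)$$\n",
"$$\\tilde{H}_t = \\tanh(X_t W_{xh} + (R_t \\odot H_{t-1}) W_{hh} + b_h)$$\n",
"$$H_t = Z_t \\odot H_{t-1} + (1 - Z_t) \\odot \\tilde{H}_t$$\n",
"$$Y_t = H_t W_{hq} + b_q$$"
]
},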
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def init_gru_state(batch_size, num_hiddens, device):\n",
" return (torch.zeros((batch_size, num_hiddens), device=device), )"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"def gru(inputs, state, params):\n",
" W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q = params\n",
" H, = state\n",
" outputs = []\n",
" for X in inputs:\n",
" Z = torch.sigmoid(torch.matmul(X, W_xz) + torch.matmul(H, W_hz) + b_z)\n",
" R = torch.sigmoid(torch.matmul(X, W_xr) + torch.matmul(H, W_hr) + b_r)\n",
" H_tilda = torch.tanh(torch.matmul(X, W_xh) + torch.matmul(R * H, W_hh) + b_h)\n",
" H = Z * H + (1 - Z) * H_tilda\n",
" Y = torch.matmul(H, W_hq) + b_q\n",
" outputs.append(Y)\n",
" return outputs, (H,)"
]
},
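{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check (this cell is an addition, not part of the original notebook), the sketch below builds a tiny batch of one-hot inputs by hand and verifies that `gru` returns one output of shape `(batch_size, vocab_size)` per time step. It assumes the cells above have been run and uses only `F.one_hot` from PyTorch."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical shape check for the from-scratch GRU\n",
"X = torch.arange(10, device=device).view(2, 5)  # batch_size=2, num_steps=5\n",
"inputs = [F.one_hot(X[:, t], vocab_size).float() for t in range(X.shape[1])]\n",
"state = init_gru_state(X.shape[0], num_hiddens, device)\n",
"outputs, new_state = gru(inputs, state, get_params())\n",
"print(len(outputs), outputs[0].shape, new_state[0].shape)  # 5, (2, vocab_size), (2, num_hiddens)"
]
},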
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 6.7.3.3 训练模型并创作歌词"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"num_epochs, num_steps, batch_size, lr, clipping_theta = 160, 35, 32, 1e2, 1e-2\n",
"pred_period, pred_len, prefixes = 40, 50, ['分开', '不分开']"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 40, perplexity 150.963116, time 1.11 sec\n",
" - 分开 我想你 我不你 我不你 我不你 我不你 我不你 我不你 我不你 我不你 我不你 我不你 我不你 我\n",
" - 不分开 我想你 我不你 我不你 我不你 我不你 我不你 我不你 我不你 我不你 我不你 我不你 我不你 我\n",
"epoch 80, perplexity 31.683252, time 1.16 sec\n",
" - 分开 我想要你的微笑 一定 \n",
" - 不分开 不知不觉 我不要再想 我不要再想 我不 我不 我不 我不 我不 我不 我不 我不 我不 我不 我不\n",
"epoch 120, perplexity 5.855305, time 1.49 sec\n",
" - 分开我 想要你这样打我妈妈 难道你手不会痛吗 我想你这样打我妈妈 难道你手 你怎么在我想 说散 你说我久\n",
" - 不分开 没有你在我有多烦熬多烦恼 没有你烦 我有多烦恼 没有你在我有多难熬多难多 没有你烦 我有多\n",
"epoch 160, perplexity 1.815359, time 1.04 sec\n",
" - 分开 我想要这样牵 对你依依不舍 连隔壁邻居都猜到我现在的感受 河边的风 在吹着头发飘动 牵着你的手 一\n",
" - 不分开 是后过风 迷不知蒙 我给再这样活 我该好好生活 不知不觉 你已经离开我 不知不觉 我跟了这节奏 \n"
]
}
],
"source": [
"d2l.train_and_predict_rnn(gru, get_params, init_gru_state, num_hiddens,\n",
" vocab_size, device, corpus_indices, idx_to_char,\n",
" char_to_idx, False, num_epochs, num_steps, lr,\n",
" clipping_theta, batch_size, pred_period, pred_len,\n",
" prefixes)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6.7.4 简洁实现"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 40, perplexity 1.018485, time 0.79 sec\n",
" - 分开的快乐是你 想你想的都会笑 没有你在 我有多难熬 没有你在我有多难熬多烦恼 没有你烦 我有多烦恼\n",
" - 不分开不 我不 我不要再想你 爱情来的太快就像龙卷风 离不开暴风圈来不及逃 我不能再想 我不能再想 我不 \n",
"epoch 80, perplexity 1.028805, time 0.74 sec\n",
" - 分开始想像 爸和妈当年的模样 说著一口吴侬软语的姑娘缓缓走过外滩 消失的 旧时光 一九四三 回头看 的片\n",
" - 不分开不 我不 我不 我不要再想你 爱情来的太快就像龙卷风 离不开暴风圈来不及逃 我不能再想 我不能再想 \n",
"epoch 120, perplexity 1.012296, time 0.73 sec\n",
" - 分开的话像语言暴力 我已无能为力再提起 决定中断熟悉 然后在这里 不限日期 然后将过去 慢慢温习 让我爱\n",
" - 不分开不 我不 我不能 爱情走的太快就像龙卷风 不能承受我已无处可躲 我不要再想 我不要再想 我不 我不 \n",
"epoch 160, perplexity 1.184842, time 0.74 sec\n",
" - 分开的快乐是你 想我想大声宣布 对你依依不舍 连隔壁邻居都猜到我现在的感受 河边的风 在吹着头发飘动 牵\n",
" - 不分开 快使用双截棍 哼哼哈兮 如果我有轻功 飞檐走壁 为人耿直不屈 一身正气 他们儿子我习惯 从小就耳濡\n"
]
}
],
"source": [
"lr = 1e-2\n",
"gru_layer = nn.GRU(input_size=vocab_size, hidden_size=num_hiddens)\n",
"model = d2l.RNNModel(gru_layer, vocab_size).to(device)\n",
"d2l.train_and_predict_rnn_pytorch(model, num_hiddens, vocab_size, device,\n",
" corpus_indices, idx_to_char, char_to_idx,\n",
" num_epochs, num_steps, lr, clipping_theta,\n",
" batch_size, pred_period, pred_len, prefixes)"
]
},
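{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the sketch below (an addition, not part of the original notebook) feeds a dummy batch through `gru_layer` to show the tensor layout `nn.GRU` expects: inputs of shape `(num_steps, batch_size, input_size)` and a hidden state of shape `(num_layers, batch_size, hidden_size)`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical illustration of nn.GRU input/output shapes\n",
"X = torch.rand(num_steps, batch_size, vocab_size, device=device)  # (num_steps, batch_size, input_size)\n",
"state = torch.zeros(1, batch_size, num_hiddens, device=device)    # (num_layers, batch_size, hidden_size)\n",
"Y, state_new = gru_layer(X, state)\n",
"print(Y.shape, state_new.shape)  # (num_steps, batch_size, num_hiddens), (1, batch_size, num_hiddens)"
]
},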
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.2"
}
},
"nbformat": 4,
"nbformat_minor": 2
}