bitsoko services / models · Commits

Commit 3bbc5d2f authored Apr 03, 2017 by Neal Wu
parent 29881fb4

Make the README for textsum a little clearer

Showing 1 changed file (textsum/README.md) with 29 additions and 31 deletions

textsum/README.md (view file @ 3bbc5d2f)
@@ -70,9 +70,7 @@ vocabulary size: Most frequent 200k words from dataset's article and summaries.
 <b>
 How To Run
 </b>
-Pre-requesite:
-Install TensorFlow and Bazel.
+Prerequisite: install TensorFlow and Bazel.

 ```shell
 # cd to your workspace
@@ -83,7 +81,7 @@ Install TensorFlow and Bazel.
 # If your data files have different names, update the --data_path.
 # If you don't have data but want to try out the model, copy the toy
 # data from the textsum/data/data to the data/ directory in the workspace.
-ls -R
+$ ls -R
 .:
 data textsum WORKSPACE
@@ -97,38 +95,38 @@ data.py seq2seq_attention_decode.py seq2seq_attention.py seq2seq_lib.py
 ./textsum/data:
 data vocab

-bazel build -c opt --config=cuda textsum/...
+$ bazel build -c opt --config=cuda textsum/...

 # Run the training.
-bazel-bin/textsum/seq2seq_attention \
-  --mode=train \
-  --article_key=article \
-  --abstract_key=abstract \
-  --data_path=data/training-* \
-  --vocab_path=data/vocab \
-  --log_root=textsum/log_root \
-  --train_dir=textsum/log_root/train
+$ bazel-bin/textsum/seq2seq_attention \
+  --mode=train \
+  --article_key=article \
+  --abstract_key=abstract \
+  --data_path=data/training-* \
+  --vocab_path=data/vocab \
+  --log_root=textsum/log_root \
+  --train_dir=textsum/log_root/train

 # Run the eval. Try to avoid running on the same machine as training.
-bazel-bin/textsum/seq2seq_attention \
-  --mode=eval \
-  --article_key=article \
-  --abstract_key=abstract \
-  --data_path=data/validation-* \
-  --vocab_path=data/vocab \
-  --log_root=textsum/log_root \
-  --eval_dir=textsum/log_root/eval
+$ bazel-bin/textsum/seq2seq_attention \
+  --mode=eval \
+  --article_key=article \
+  --abstract_key=abstract \
+  --data_path=data/validation-* \
+  --vocab_path=data/vocab \
+  --log_root=textsum/log_root \
+  --eval_dir=textsum/log_root/eval

 # Run the decode. Run it when the model is mostly converged.
-bazel-bin/textsum/seq2seq_attention \
-  --mode=decode \
-  --article_key=article \
-  --abstract_key=abstract \
-  --data_path=data/test-* \
-  --vocab_path=data/vocab \
-  --log_root=textsum/log_root \
-  --decode_dir=textsum/log_root/decode \
-  --beam_size=8
+$ bazel-bin/textsum/seq2seq_attention \
+  --mode=decode \
+  --article_key=article \
+  --abstract_key=abstract \
+  --data_path=data/test-* \
+  --vocab_path=data/vocab \
+  --log_root=textsum/log_root \
+  --decode_dir=textsum/log_root/decode \
+  --beam_size=8
 ```
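The --data_path flags above use shell-style globs (data/training-*, data/validation-*, data/test-*) to select sharded input files per split. A minimal sketch of the workspace layout those globs assume, using empty placeholder file names rather than real dataset shards:

```shell
# Recreate the directory layout shown by `ls -R` above, with empty
# placeholder files (the real shards come from the dataset).
workspace=$(mktemp -d)
mkdir -p "$workspace/data" "$workspace/textsum/data"
touch "$workspace/data/vocab"
touch "$workspace/data/training-0" "$workspace/data/training-1"
touch "$workspace/data/validation-0" "$workspace/data/test-0"

# The glob passed as --data_path=data/training-* expands to the shards:
cd "$workspace"
ls data/training-*
```

With a layout like this, each mode's --data_path glob selects only its own split, so train, eval, and decode can share a single data/ directory.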
@@ -157,7 +155,7 @@ article: the european court of justice ( ecj ) recently ruled in lock v british
 abstract: will british gas ecj ruling fuel holiday pay hike ?
-decode: eu law requires worker 's statutory holiday pay
+decode: eu law requires worker 's statutory holiday pay
 ======================================
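Decode results such as the article/abstract/decode triplet above are written as plain text under --decode_dir. One way to skim only the generated summaries; the temp file here is a hypothetical stand-in for a real output file under textsum/log_root/decode:

```shell
# Build a small stand-in decode output with one triplet like the
# example above; real runs append many such records.
out=$(mktemp)
cat > "$out" <<'EOF'
article: the european court of justice ( ecj ) recently ruled in lock v british gas ...
abstract: will british gas ecj ruling fuel holiday pay hike ?
decode: eu law requires worker 's statutory holiday pay
EOF

# Keep only the model's generated summaries.
grep '^decode:' "$out"
```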