machine-translation
Here are 197 public repositories matching this topic...
From the code (input_pipeline.py) I can see that the ParallelTextInputPipeline automatically generates the SEQUENCE_START and SEQUENCE_END tokens (which means the input text does not need to contain those special tokens).
Does ParallelTextInputPipeline also perform **padding**?
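To make the question concrete, here is a minimal standalone sketch of how special-token insertion and padding typically interact in a seq2seq input pipeline. The token strings and helper names are assumptions for illustration, not the actual ParallelTextInputPipeline code:

```python
# Hypothetical illustration of special-token insertion and batch padding,
# modeled on typical seq2seq pipelines (not the real ParallelTextInputPipeline).
SEQUENCE_START = "<s>"
SEQUENCE_END = "</s>"
PAD = "<pad>"

def add_special_tokens(tokens):
    """Wrap a token list with start/end markers, as the pipeline does."""
    return [SEQUENCE_START] + tokens + [SEQUENCE_END]

def pad_batch(sequences, pad_token=PAD):
    """Right-pad every sequence in a batch to the length of the longest one."""
    max_len = max(len(seq) for seq in sequences)
    return [seq + [pad_token] * (max_len - len(seq)) for seq in sequences]

batch = [add_special_tokens(s.split()) for s in ["hello world", "a b c d"]]
padded = pad_batch(batch)
```

If the real pipeline behaves like this sketch, padding happens at batching time, after the start/end tokens are added.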
When positional encoding is disabled, the embedding scaling is also disabled, even though the two operations are independent:
https://github.com/OpenNMT/OpenNMT-py/blob/1.0.0/onmt/modules/embeddings.py#L48
As a consequence, Transformer models with relative position representations do not follow the reference implementation, which scales the embedding [by default](https://github.com/tensorflow/tensor
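The issue can be illustrated with a small numpy sketch in which the two operations are controlled by independent flags. The sqrt(d_model) factor follows the original Transformer formulation; the function and flag names here are illustrative, not OpenNMT-py's actual API:

```python
import math
import numpy as np

def embed(token_ids, table, scale_embeddings=True, add_position_encoding=True):
    """Look up embeddings; scaling by sqrt(d_model) is controlled
    independently of whether sinusoidal positional encodings are added."""
    d_model = table.shape[1]
    emb = table[token_ids]
    if scale_embeddings:  # should NOT be tied to add_position_encoding
        emb = emb * math.sqrt(d_model)
    if add_position_encoding:
        positions = np.arange(len(token_ids))[:, None]
        dims = np.arange(d_model)[None, :]
        angles = positions / np.power(10000.0, (2 * (dims // 2)) / d_model)
        emb = emb + np.where(dims % 2 == 0, np.sin(angles), np.cos(angles))
    return emb

table = np.ones((10, 4))
# A relative-position model disables the sinusoidal encoding but should
# still scale: with scaling alone, every value here is 1 * sqrt(4) = 2.
out = embed([1, 2, 3], table, scale_embeddings=True, add_position_encoding=False)
```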
Documentation
The current documentation in the README explains how to install the toolkit and how to run the examples. However, I don't think this is enough for users who want to modify the existing recipes or create their own. In that case, one needs to understand what run.sh does step by step, but docs for that are missing at the moment. It would be great to provide documentation for:
- Add a CI test for building the documentation (do not ignore warnings, and add a spellcheck).
- Fix docstrings with incorrect or inconsistent Sphinx format. Currently, such issues are only treated as warnings in the docs build.
Hi,
Is it possible to add benchmarks of some models to the documentation for comparison purposes?
Run times would also be helpful. For example, 1M iterations take a weekend on a GTX 1080.
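For the run-time side of this request, a simple sketch of how per-iteration timing could be collected and extrapolated to figures like "1M iterations on a GTX 1080". The helper is illustrative and not part of any toolkit:

```python
import time

def benchmark(step_fn, iterations=1000):
    """Time a training-step callable and report seconds per iteration,
    plus an extrapolated estimate for 1M iterations."""
    start = time.perf_counter()
    for _ in range(iterations):
        step_fn()
    per_iter = (time.perf_counter() - start) / iterations
    return {
        "seconds_per_iter": per_iter,
        "est_hours_per_1m_iters": per_iter * 1_000_000 / 3600,
    }

# Example with a trivial stand-in for a training step.
stats = benchmark(lambda: sum(range(100)), iterations=100)
```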
TransformerDecoder.forward: where does `self.training` come from?
https://github.com/asyml/texar-pytorch/blob/d17d502b50da1d95cb70435ed21c6603370ce76d/texar/torch/modules/decoders/transformer_decoders.py#L448-L449
All arguments should state their types explicitly in the docstring. E.g., what is the type of `infer_mode`? The [method signature](https://texar-pytorch.readthedocs.
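On the first question: in PyTorch, `self.training` is a boolean attribute defined on `torch.nn.Module` and toggled by `module.train()` / `module.eval()`, so any subclass inherits it. A dependency-free sketch of that mechanism (mimicking the relevant part of `nn.Module` rather than importing torch):

```python
class Module:
    """Minimal stand-in for torch.nn.Module's training-mode machinery."""

    def __init__(self):
        self.training = True  # modules start in training mode

    def train(self, mode=True):
        self.training = mode
        return self

    def eval(self):
        return self.train(False)

class TransformerDecoder(Module):
    def forward(self):
        # Inherited flag: this is where `self.training` "comes from".
        return "train" if self.training else "infer"

dec = TransformerDecoder()
```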
Based on this line of code:
https://github.com/ufal/neuralmonkey/blob/master/neuralmonkey/decoders/output_projection.py#L125
The current implementation isn't flexible enough: if we train a "submodel" (e.g. a decoder without attention, i.e. without any ctx_tensors), we cannot use the trained variables to initialize a model with attention defined, because the size of the dense layer's input matrix becomes different.
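A small numpy sketch of the shape mismatch being described: when attention context vectors are concatenated into the projection input, the dense layer's kernel needs a larger input dimension than in the attention-free submodel, so the trained variables cannot be reused. The dimensions here are made up for illustration:

```python
import numpy as np

def projection_input(decoder_state, ctx_tensors):
    """Concatenate the decoder state with any attention context vectors,
    as an output projection typically does before its dense layer."""
    return np.concatenate([decoder_state] + list(ctx_tensors), axis=-1)

state = np.zeros((1, 512))  # decoder hidden state (illustrative size)
ctx = np.zeros((1, 256))    # one attention context vector (illustrative size)

no_attn = projection_input(state, [])       # dense input: 512 features
with_attn = projection_input(state, [ctx])  # dense input: 768 features
# A kernel trained with shape (512, vocab) cannot initialize one
# that now needs shape (768, vocab).
```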
Environment
- Python 3.7.6
- tensorflow==1.14.0
Log
$ python build_vocab.py data/monument_300/data_300.en > data/monument_300/vocab.en
WARNING:tensorflow:From build_vocab.py:44: VocabularyProcessor.__init__ (from tensorflow.contrib.learn.python.learn.preprocessing.text) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tensorfl
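Since `VocabularyProcessor` lived in the deprecated `tf.contrib.learn` package, the vocabulary-building step can be sketched without TensorFlow at all. Whitespace tokenization is an assumption here; the actual build_vocab.py may tokenize differently:

```python
from collections import Counter

def build_vocab(lines, min_count=1):
    """Count whitespace-separated tokens and return them sorted by
    frequency, mirroring what a vocab-building script typically emits."""
    counts = Counter(tok for line in lines for tok in line.split())
    return [tok for tok, n in counts.most_common() if n >= min_count]

# Tiny stand-in for reading data/monument_300/data_300.en line by line.
vocab = build_vocab(["the monument stands", "the monument"])
```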
Description
I am wondering when "Assessing the Factual Accuracy of Generated Text" (https://github.com/tensorflow/tensor2tensor/tree/master/tensor2tensor/data_generators/wikifact) will be publicly available, since it has already been 6 months. @bengoodrich