
question about backward model #14

@Nuyoah003

Description


backward_encoder = EncoderRNN(hidden_size, embedding, encoder_n_layers, dropout)
backward_decoder = LuongAttnDecoderRNN(attn_model, embedding, hidden_size, voc.num_words, decoder_n_layers, dropout)

I have a question: why isn't the backward model initialized from the pre-trained seq2seq model? What is the purpose of defining a new backward model from scratch?
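For concreteness, here is roughly what I mean by initializing from a pre-trained model. This is only a minimal sketch: `TinyEncoder` is a hypothetical stand-in for the repo's `EncoderRNN` (the real class takes `hidden_size, embedding, n_layers, dropout`), and `forward_encoder` is assumed to hold already-trained weights.

```python
import torch
import torch.nn as nn

# Hypothetical minimal encoder standing in for the repo's EncoderRNN.
class TinyEncoder(nn.Module):
    def __init__(self, hidden_size, embedding):
        super().__init__()
        self.embedding = embedding
        self.gru = nn.GRU(hidden_size, hidden_size)

    def forward(self, x):
        # Embed token indices, then run the GRU over the sequence.
        return self.gru(self.embedding(x))

hidden_size = 8
vocab_size = 100

# "Pre-trained" forward-direction encoder (weights assumed trained already).
forward_encoder = TinyEncoder(hidden_size, nn.Embedding(vocab_size, hidden_size))

# Freshly constructed backward encoder, then initialized by copying the
# forward encoder's weights instead of keeping its random initialization.
backward_encoder = TinyEncoder(hidden_size, nn.Embedding(vocab_size, hidden_size))
backward_encoder.load_state_dict(forward_encoder.state_dict())
```

After `load_state_dict`, the backward model starts from the forward model's parameters rather than random values, which is the kind of warm start I am asking about.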
