checkpointing #1

@cabjudo

Description

In the documentation for tf.train.Saver there's an option (keep_checkpoint_every_n_hours):

"// How often to keep an additional checkpoint. If not specified, only the last
// "max_to_keep" checkpoints are kept; if specified, in addition to keeping
// the last "max_to_keep" checkpoints, an additional checkpoint will be kept
// for every n hours of training."
https://github.com/tensorflow/tensorflow/blob/r1.7/tensorflow/core/protobuf/saver.proto

Incorporating this option could potentially improve the logger's checkpointing behavior.
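For reference, the retention policy described in the quoted proto comment can be sketched in plain Python. This is a hypothetical illustration of the semantics, not TensorFlow's actual implementation; the function name `prune_checkpoints` and the `(path, timestamp)` representation are assumptions made for the example:

```python
def prune_checkpoints(checkpoints, max_to_keep=5, keep_every_n_hours=None):
    """Return the checkpoints to retain, mimicking the documented
    tf.train.Saver semantics.

    `checkpoints` is a list of (path, timestamp_in_seconds) tuples,
    oldest first. The newest `max_to_keep` entries are always kept;
    if `keep_every_n_hours` is set, one additional checkpoint is
    kept for every n hours of training.
    """
    # Always keep the most recent max_to_keep checkpoints.
    keep = {path for path, _ in checkpoints[-max_to_keep:]}
    if keep_every_n_hours is not None:
        interval = keep_every_n_hours * 3600
        last_kept = None
        for path, ts in checkpoints:
            # Keep the first checkpoint, then one per elapsed interval.
            if last_kept is None or ts - last_kept >= interval:
                keep.add(path)
                last_kept = ts
    return [path for path, _ in checkpoints if path in keep]
```

In TensorFlow itself this behavior is enabled directly on the saver, e.g. `tf.train.Saver(max_to_keep=5, keep_checkpoint_every_n_hours=2)`.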
