Description
Hello! Sorry to bother you. I'm trying to use the LeakGAN model with my own dataset, but when the program reaches the generator MLE training, this error pops up. Following some advice online, I've tried to copy/clone the tensors, but so far I've had no luck. Do you have any suggestions?
/homenfs/s.corbara/.conda/envs/gan_aug/lib/python3.9/site-packages/torch/autograd/__init__.py:173: UserWarning: Error detected in AddmmBackward0. Traceback of forward call that caused the error:
File "/homenfs/s.corbara/GAN_aug/src2/main.py", line 41, in
TextGAN(author_file)
File "/homenfs/s.corbara/GAN_aug/src2/TextGAN_PyTorch/textGAN_main.py", line 170, in TextGAN
inst._run()
File "/homenfs/s.corbara/GAN_aug/src2/TextGAN_PyTorch/instructor/real_data/leakgan_instructor.py", line 56, in _run
self.pretrain_generator(cfg.MLE_train_epoch)
File "/homenfs/s.corbara/GAN_aug/src2/TextGAN_PyTorch/instructor/real_data/leakgan_instructor.py", line 105, in pretrain_generator
mana_loss, work_loss = self.gen.pretrain_loss(target, self.dis)
File "/homenfs/s.corbara/GAN_aug/src2/TextGAN_PyTorch/models/LeakGAN_G.py", line 134, in pretrain_loss
_, feature_array, goal_array, leak_out_array = self.forward_leakgan(target, dis, if_sample=False, no_log=False,
File "/homenfs/s.corbara/GAN_aug/src2/TextGAN_PyTorch/models/LeakGAN_G.py", line 308, in forward_leakgan
out, cur_goal, work_hidden, mana_hidden = self.forward(i, leak_inp, work_hidden, mana_hidden, feature,
File "/homenfs/s.corbara/GAN_aug/src2/TextGAN_PyTorch/models/LeakGAN_G.py", line 74, in forward
mana_out = self.mana2goal(torch.permute(mana_out, (1, 0, 2))).clone() # batch_size * 1 * goal_out_size
File "/homenfs/s.corbara/.conda/envs/gan_aug/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/homenfs/s.corbara/.conda/envs/gan_aug/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
(Triggered internally at /opt/conda/conda-bld/pytorch_1659484809662/work/torch/csrc/autograd/python_anomaly_mode.cpp:102.)
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
Traceback (most recent call last):
File "/homenfs/s.corbara/GAN_aug/src2/main.py", line 41, in
TextGAN(author_file)
File "/homenfs/s.corbara/GAN_aug/src2/TextGAN_PyTorch/textGAN_main.py", line 170, in TextGAN
inst._run()
File "/homenfs/s.corbara/GAN_aug/src2/TextGAN_PyTorch/instructor/real_data/leakgan_instructor.py", line 56, in _run
self.pretrain_generator(cfg.MLE_train_epoch)
File "/homenfs/s.corbara/GAN_aug/src2/TextGAN_PyTorch/instructor/real_data/leakgan_instructor.py", line 106, in pretrain_generator
self.optimize_multi(self.gen_opt, [mana_loss, work_loss])
File "/homenfs/s.corbara/GAN_aug/src2/TextGAN_PyTorch/instructor/real_data/instructor.py", line 184, in optimize_multi
loss.backward(retain_graph=True if i < len(opts) - 1 else False)
File "/homenfs/s.corbara/.conda/envs/gan_aug/lib/python3.9/site-packages/torch/_tensor.py", line 396, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/homenfs/s.corbara/.conda/envs/gan_aug/lib/python3.9/site-packages/torch/autograd/__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [32, 1720]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
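A possible cause, judging from the traceback: `optimize_multi` calls `loss.backward(retain_graph=True)` and then `opt.step()` for the first loss before running backward on the second. The optimizer step updates the `Linear` weights in place, so the retained graph sees the weights at version 2 when the second backward expects version 1. Below is a minimal sketch of that pattern and a workaround (accumulating both gradients before a single step); the names `lin`, `loss_a`, `loss_b` are hypothetical stand-ins, not code from the repository.

```python
import torch
import torch.nn as nn

lin = nn.Linear(4, 4)
opt = torch.optim.SGD(lin.parameters(), lr=0.1)
x = torch.randn(2, 4)

out = lin(x)
loss_a, loss_b = out.sum(), (out ** 2).sum()

# Failing pattern (sketch of optimize_multi):
#   opt.zero_grad(); loss_a.backward(retain_graph=True); opt.step()
#   opt.zero_grad(); loss_b.backward(); opt.step()
# The first step() mutates lin.weight in place, so the second backward
# raises "one of the variables needed for gradient computation has been
# modified by an inplace operation".

# Working pattern: accumulate gradients from both losses, then step once.
opt.zero_grad()
(loss_a + loss_b).backward()
opt.step()
```

If the two losses must be stepped separately (as LeakGAN's manager/worker losses may need to be), another option is to recompute the forward pass before the second backward instead of retaining the graph.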