Description
Hi, in MetaOptNet, inappropriate dependency versioning constraints can introduce risks.
Below are the dependencies and version constraints that the project currently uses:
torch==1.0.1.post2
torchvision==0.2.2.post2
qpth==0.0.13
torchnet
tqdm
The version constraint == introduces a risk of dependency conflicts because the allowed version range is too strict.
Constraints with no upper bound (or *) introduce a risk of missing-API errors, because the latest version of a dependency may remove APIs that the project calls.
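To illustrate the difference in pip requirement syntax, using tqdm as an example (the exact version numbers here are only illustrative):

tqdm==4.36.0            # exact pin: conflicts with any other installed package that needs a different tqdm release
tqdm                    # no upper bound: a future tqdm release may remove an API this project calls
tqdm>=4.36.0,<=4.64.0   # bounded range: accepts newer, tested releases while excluding untested ones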
After further analysis of this project,
the version constraint of the dependency tqdm can be changed to >=4.36.0,<=4.64.0.
This modification reduces the risk of dependency conflicts as much as possible,
while adopting the newest versions that do not raise call errors in the project.
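For reference, the dependency list with the suggested change applied would read as follows (assuming the project keeps these pins in a requirements.txt; only the tqdm line changes):

torch==1.0.1.post2
torchvision==0.2.2.post2
qpth==0.0.13
torchnet
tqdm>=4.36.0,<=4.64.0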
The project's invocations include all of the following methods.
The calling methods from tqdm (see the usage sketch after the method list below):
tqdm.tqdm
The calling methods from all methods:
self._compute_block_mask torch.gesv utils.check_dir torch.save torch.optim.SGD.zero_grad utils.count_accuracy.item time.time numpy.asarray log_prb.smoothed_one_hot.sum torch.nn.BatchNorm2d self.block_size.torch.zeros.cuda.long binv Y_support_reshaped.unsqueeze.expand.unsqueeze self.get_iterator batched_kronecker b_mat.size.b_mat.new_ones.diag n_query.n_support.tasks_per_batch.torch.ones.cuda torch.nn.MaxPool2d label.view.view n_way.torch.arange.long Y_support.reshape.transpose self.block_size.self.block_size.self.block_size.torch.arange.view.expand.reshape torch.nn.AvgPool2d utils.count_accuracy x.float os.mkdir nr_blocks.offsets.repeat.view self.block_size.torch.arange.view.expand torch.nn.functional.log_softmax enumerate self.bn1 b_mat.new_ones torch.nn.ReLU x.view.size support.size round self.block_size.self.block_size.torch.arange.repeat.self.block_size.self.block_size.self.block_size.torch.arange.view.expand.reshape.torch.stack.t.cuda pickle.load self.block_size.width.self.block_size.height.channels.batch_size.bernoulli.sample.cuda matrix1.reshape.unsqueeze models.classification_heads.ClassificationHead label.label2inds.append torch.Tensor.Variable.cuda self._make_layer models.dropblock.DropBlock self.ResNet.super.__init__ pickle._Unpickler models.ResNet12_embedding.resnet12.cuda torch.Size self.bn2 self.encoder.view torch.cat self.block_size.torch.arange.repeat self.block1 query.query.sum.expand_as qp_sol.float.unsqueeze.expand.reshape torch.stack self.DropBlock format self.ClassificationHead.super.__init__ data.CIFAR_FS.CIFAR_FS torch.Tensor.Variable.cuda.detach n_support.n_way.tasks_per_batch.torch.zeros.cuda data_support.reshape n_support.n_way.tasks_per_batch.torch.ones.cuda os.path.exists emb_support.reshape.reshape float self.encoder support_labels_one_hot.transpose.sum one_hot load_data torch.autograd.Variable get_dataset A_i.unsqueeze.expand numpy.mean n_support.n_way.torch.eye.expand self.sample_test_examples_for_base_categories self.sample_train_and_test_examples_for_novel_categories numpy.concatenate dloader_val self.block_size.width.self.block_size.height.channels.batch_size.bernoulli.sample.cuda.nonzero matrix2.size.list.matrix1.size.list.matrix1.size.matrix2_flatten.unsqueeze.matrix1_flatten.unsqueeze.torch.bmm.reshape.permute.reshape torch.nn.init.constant_ torchvision.transforms.Compose matrix2.reshape block_kernel_matrix_inter.repeat utils.Timer models.protonet_embedding.ProtoNetEmbedding.cuda list self.block_size.torch.arange.view self.layer2 torch.zeros models.ResNet12_embedding.resnet12 self._compute_block_mask.sum encoded_indicies.scatter_.scatter_ Y_support.reshape.view data_novel.buildLabelIndex.keys computeGramMatrix cls_head torch.nn.functional.dropout.size torch.sum labels_query.reshape utils.log torch.autograd.Variable.detach embedding_net numpy.random.choice n_way.n_support.n_support.n_way.torch.arange.long.repeat.reshape.transpose.reshape computeGramMatrix.repeat self.createExamplesTensorData random.sample numpy.unique self.sample_base_and_novel_categories m.weight.data.normal_ str prototypes.prototypes.sum utils.Timer.measure self.layer3 n_way.n_support.tasks_per_batch.torch.ones.cuda m.weight.data.fill_ A_i.unsqueeze.expand.unsqueeze query.size data.mini_imagenet.MiniImageNet indices.view f.write matrix2.reshape.unsqueeze torch.nn.Dropout label.pred.eq.float labels_train_transposed.sum.expand_as query.query.sum cls_head.reshape n_support.n_way.n_support.n_way.tasks_per_batch.n_support.n_way.torch.eye.expand.cuda data_query.reshape torch.ones 
torch.nn.Parameter torch.optim.lr_scheduler.LambdaLR.step matrix2.size.list.matrix1.size.list.matrix1.size.matrix2_flatten.unsqueeze.matrix1_flatten.unsqueeze.torch.bmm.reshape.permute conv3x3 x_entropy.mean len argparse.ArgumentParser.parse_args random.seed self.DropBlock.super.__init__ embedding_net.state_dict Y_support.reshape.reshape support_labels_one_hot.reshape.reshape torch.optim.SGD models.protonet_embedding.ProtoNetEmbedding torchvision.transforms.Normalize support_labels_one_hot.reshape.transpose n_way.n_way.tasks_per_batch.n_way.torch.eye.expand.cuda self.ProtoNetEmbedding.super.__init__ max torch.LongTensor logits.torch.argmax.view.eq set ResNet models.classification_heads.ClassificationHead.cuda torchvision.transforms.ColorJitter self.block4 train_accuracies.append self.modules self.block3.size offsets.long.long torch.nn.CrossEntropyLoss block qp_sol.float.unsqueeze.expand.permute emb_query.reshape.reshape compatibility.unsqueeze.expand.float int non_zero_idxs.repeat.repeat self.BasicBlock.super.__init__ ValueError qpth.qp.QPFunction torch.nn.Sequential.add_module self.block_size.torch.zeros.cuda torch.distributions.Bernoulli.sample utils.set_gpu self.block.add_module self.block3.view numpy.array self.layer4 data_base.buildLabelIndex.keys torchnet.dataset.ListDataset.parallel kernel_matrix_mask_y.kernel_matrix_mask_x.float torch.arange self.conv2 numpy.arange query.dim torch.eye torch.nn.init.kaiming_normal_ self.block_size.self.block_size.torch.arange.repeat.self.block_size.self.block_size.self.block_size.torch.arange.view.expand.reshape.torch.stack.t x.cuda logits.torch.argmax.view pickle._Unpickler.load kernel_matrix_mask_second_term.float.float models.R2D2_embedding.R2D2Embedding R2D2_conv_block self.relu B.dim matrix1.size n_support.n_support.tasks_per_batch.torch.ones.cuda range torch.optim.SGD.step torch.sum.transpose val_accuracies.append torch.optim.lr_scheduler.LambdaLR matrix2.size self.block torch.nn.functional.pad print math.sqrt torch.argmax prototypes.div.div n_way.tasks_per_batch.torch.ones.cuda vars tqdm.tqdm numpy.std prototypes.prototypes.sum.reshape.expand_as torch.autograd.Variable.dim offsets.long.repeat torch.ones.detach torch.distributions.Bernoulli matrix2_flatten.unsqueeze.matrix1_flatten.unsqueeze.torch.bmm.reshape n_support.n_support.tasks_per_batch.n_support.torch.eye.expand.cuda torch.sum.view x.float.cuda dloader_train os.path.join numpy.random.seed torch.nn.LeakyReLU matrix1.reshape depth.torch.Size.indices.size.torch.zeros.cuda b_mat.size.b_mat.new_ones.diag.expand_as n_way.torch.eye.expand x.train b_mat.b_mat.size.b_mat.new_ones.diag.expand_as.cuda val_losses.append self.R2D2Embedding.super.__init__ n_support.tasks_per_batch.support_labels.reshape.expand f.flush self.sampleCategories label.pred.eq.float.mean torch.nn.Conv2d torch.nn.Sequential self.block2 data.tiered_imagenet.tieredImageNet qp_sol.float.unsqueeze n_way.n_support.n_support.n_way.torch.arange.long.repeat.reshape.transpose indices.size self.sampleImageIdsFrom qp_sol.float.unsqueeze.expand data.FC100.FC100 self.block4.size self.encoder.size x.eval torchnet.dataset.ListDataset support_labels.view support_labels_one_hot.reshape.view compatibility.unsqueeze.expand.unsqueeze PIL.Image.fromarray n_support.torch.eye.expand support_labels.reshape buildLabelIndex torchvision.transforms.RandomCrop self.downsample models.classification_heads.ClassificationHead.cuda.state_dict numpy.load argparse.ArgumentParser.add_argument open prototypes.prototypes.sum.reshape 
models.R2D2_embedding.R2D2Embedding.cuda B.transpose x.view.view n_way.n_support.n_support.n_way.torch.arange.long.repeat.reshape.transpose.reshape.repeat support.dim isinstance self.head b_mat.size qp_sol.float.unsqueeze.expand.float self.ConvBlock.super.__init__ self.bn3 self.transform zip self.avgpool n_support.n_way.n_support.n_support.n_way.torch.arange.long.repeat.reshape.transpose.reshape.repeat.cuda x_entropy x.double.cuda random.shuffle self.sample_episode data_loader self._compute_block_mask.size support.transpose self.label2ind.keys maxIter.QPFunction torch.nn.DataParallel self.layer1 torch.Tensor ConvBlock self.maxpool torch.FloatTensor Y_support_reshaped.unsqueeze.expand argparse.ArgumentParser torch.bmm embedding_net.parameters sorted train_losses.append x_entropy.backward x_entropy.item self.conv3 layers.append B.size numpy.sqrt torch.nn.functional.dropout x.double m.bias.data.zero_ n_way.torch.arange.long.repeat super V.V.computeGramMatrix.detach models.classification_heads.ClassificationHead.cuda.parameters self.conv1 self.block4.view compatibility.unsqueeze.expand torch.autograd.Variable.size torchvision.transforms.ToTensor get_model n_way.kernel_matrix.repeat.kernel_matrix_mask_second_term.repeat n_support.n_way.torch.arange.long.repeat.reshape torchvision.transforms.RandomHorizontalFlip self.block3
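Since tqdm.tqdm is the only tqdm entry point the project invokes, any release inside the suggested range should behave the same. A minimal Python sketch of the assumed usage pattern (the loop and its body are placeholders, not the project's actual training code):

from tqdm import tqdm

# Hypothetical episode loop; range(1000) stands in for the project's torchnet-based data loader.
for i, batch in enumerate(tqdm(range(1000), desc="Train"), 1):
    pass  # forward pass, loss computation, and optimizer step would go here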
@developer
Could you please help me check this issue?
May I submit a pull request to fix it?
Thank you very much.