{"query_id": "q-en-pytorch-9ccbc97eb054bf88772f5b4fbbd720390ddaa6da2f0faf966bf9060c32b47da9", "query": "This happens when the index tensor contains duplicate elements. If this is not allowed indexcopy should raise an exception when this happens and we should fix the test. Otherwise we need to fix the autograd op. Here's an example: cc\nWhy would you with repeated indices? Sounds for me like UB is a reasonable. We should check that in C\nI ran into this in and fixed the test for now. I'm not sure how to interpret last comment: did you mean UB behavior is \"unreasonable\" rather than \"a reasonable\"? The last sentence reads that way.\nI'd be ok with saying that it's UB in such case. There's no natural choice, and it can decrease performance.\nIs this issue still worth working on? If adding the additional check can decrease performance, can we add another argument specifying that indexes contain duplicate elements?", "positive_passages": [{"docid": "doc-en-pytorch-b7efc1be2e54d5adbcd168d9ea6a2d0637592ce4b5a16edbc8abc376fab9abf3", "text": "(Addcdiv, (), ((S, S), (S, S), torch.rand(S, S) + 1e-2) ), (Addcdiv, (0.6,), ((S, S), (S, S), torch.rand(S, S) + 1e-2), 'scale'), (IndexAdd, (0,), ((S, S), index_variable(2, S), (2, S)) ), (IndexCopy, (0,), ((S, S), index_variable(2, S), (2, S)) ), # (IndexCopy, (0,), ((S, S), index_variable(2, S), (2, S)) ), (IndexFill, (0, 2), ((S, S), index_variable(2, S)) ), (IndexSelect, (0,), ((S, S), index_variable(2, S)) ), (Gather, (0,), ((M, S), gather_variable((S, S), 1, M)) ),", "commid": "pytorch_pr_474"}], "negative_passages": []} {"query_id": "q-en-pytorch-a6ca36a1cfee2b3454534f6eb50dc9348a5301ff7878b05af2534c06aa42a1da", "query": "As Arthur Szlam reports, fb-internal cudnn is still lagging behind, and giving batch-size 1024 with batchnorm is raising an error from cudnn. 
Need to check for compile-time version and disable this codepath\nThe 1024 limitation was removed in 5.1.10", "positive_passages": [{"docid": "doc-en-pytorch-a5509dbbbfa858e8c6cb55e89fc0a00c1506c539c221b4e583c7ec8963caac7e", "text": " import torch from torch.autograd.function import Function from torch._thnn import type2backend import torch.backends.cudnn as cudnn class BatchNorm(Function): def __init__(self, running_mean, running_var, training, momentum, eps): super(BatchNorm, self).__init__() self.running_mean = running_mean self.running_var = running_var self.training = training self.momentum = momentum self.eps = eps def forward(self, input, weight=None, bias=None): self.save_for_backward(input, weight, bias) # don't use cuDNN for half inputs because cuDNN requires the weight and # bias tensors to be floats, unlike THCUNN which requires half tensors. self.use_cudnn = (cudnn.is_acceptable(input) and cudnn.version() > 5110 and weight is not None and bias is not None and not isinstance(input, torch.cuda.HalfTensor)) # temporary buffers used in forward and backward num_features = input.size(1) _save_mean = input.new(num_features) _save_std = input.new(num_features) output = input.new(input.size()) if self.use_cudnn: torch._C._cudnn_batch_norm_forward( input, output, weight, bias, self.running_mean, self.running_var, _save_mean, _save_std, self.training, self.momentum, self.eps) else: backend = type2backend[type(input)] backend.BatchNormalization_updateOutput( backend.library_state, input, output, weight, bias, self.running_mean, self.running_var, _save_mean, _save_std, self.training, self.momentum, self.eps) if self.requires_grad: self._save_mean = _save_mean self._save_std = _save_std return output def backward(self, grad_output): input, weight, bias = self.saved_tensors grad_input, grad_weight, grad_bias = None, None, None if self.needs_input_grad[0] or self.use_cudnn: grad_input = input.new(input.size()) if (len(self.needs_input_grad) > 1 and 
self.needs_input_grad[1]) or self.use_cudnn: grad_weight = weight.new(weight.size()).zero_() if (len(self.needs_input_grad) > 1 and self.needs_input_grad[2]) or self.use_cudnn: grad_bias = bias.new(bias.size()).zero_() if self.use_cudnn and self.training: # cudnn does not support backward in evaluate mode torch._C._cudnn_batch_norm_backward( input, grad_output, grad_input, grad_weight, grad_bias, weight, self.running_mean, self.running_var, self._save_mean, self._save_std, self.training, self.eps) else: grad_output = grad_output.contiguous() backend = type2backend[type(input)] backend.BatchNormalization_backward( backend.library_state, input, grad_output, grad_input, grad_weight, grad_bias, weight, self.running_mean, self.running_var, self._save_mean, self._save_std, self.training, 1.0, self.eps) return grad_input, grad_weight, grad_bias ", "commid": "pytorch_pr_1199"}], "negative_passages": []} {"query_id": "q-en-pytorch-a6ca36a1cfee2b3454534f6eb50dc9348a5301ff7878b05af2534c06aa42a1da", "query": "As Arthur Szlam reports, fb-internal cudnn is still lagging behind, and giving batch-size 1024 with batchnorm is raising an error from cudnn. Need to check for compile-time version and disable this codepath\nThe 1024 limitation was removed in 5.1.10", "positive_passages": [{"docid": "doc-en-pytorch-cadd920c3290a7738a8ce57c915804e69be3a64f7d66ff825142f5499221a79d", "text": "def _initialize_backend(): from .._functions.thnn import _all_functions as _thnn_functions from .._functions.linear import Linear from .._functions.batchnorm import BatchNorm from .._functions.conv import ConvNd from .._functions.rnn import RNN, RNNTanhCell, RNNReLUCell, GRUCell, LSTMCell", "commid": "pytorch_pr_1199"}], "negative_passages": []} {"query_id": "q-en-pytorch-a6ca36a1cfee2b3454534f6eb50dc9348a5301ff7878b05af2534c06aa42a1da", "query": "As Arthur Szlam reports, fb-internal cudnn is still lagging behind, and giving batch-size 1024 with batchnorm is raising an error from cudnn. 
Need to check for compile-time version and disable this codepath\nThe 1024 limitation was removed in 5.1.10", "positive_passages": [{"docid": "doc-en-pytorch-0e85e2da78bb6403e368fc842d283f6c2f8947167f185d1229142870d43226f1", "text": "HingeEmbeddingLoss, MarginRankingLoss backend.register_function('Linear', Linear) backend.register_function('BatchNorm', BatchNorm) backend.register_function('ConvNd', ConvNd) backend.register_function('RNN', RNN) backend.register_function('RNNTanhCell', RNNTanhCell)", "commid": "pytorch_pr_1199"}], "negative_passages": []} {"query_id": "q-en-pytorch-815f7a4bc9bd150a96577f3e8c7eaed378af052f98845620366116b43d873a7c", "query": "Hi, I need to pass input through one nn.Module, then argmax the output, and then pass that output to a second nn.Module. I want to backprop through the argmax back to the weights of the first module. That's impossible right now, though, it seems. When I loop through the gradient of the weights of my first module (with the following code): firstmodel.zerograd() secondmodel.zerograd() loss.backward() # retainvariables=True for param in secondmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) for param in firstmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) I get the following error, stating that those gradients can no longer be viewed/exist: -main()\nargmax is not differentiable (and even if you try to think about it as a function differentiable almost everywhere, its derivative is always 0, so you won't get anything meaningful).", "positive_passages": [{"docid": "doc-en-pytorch-070db6c4fc6f6f907a67f5eff63dda2759f93a2cf19670bccc99c9ab7375a073", "text": "| :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides for :attr:`padding` number of points | :attr:`dilation` controls the spacing between the kernel points. 
It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. for :attr:`padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the \u00e0 trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). .. note::", "commid": "pytorch_pr_1602"}], "negative_passages": []} {"query_id": "q-en-pytorch-815f7a4bc9bd150a96577f3e8c7eaed378af052f98845620366116b43d873a7c", "query": "Hi, I need to pass input through one nn.Module, then argmax the output, and then pass that output to a second nn.Module. I want to backprop through the argmax back to the weights of the first module. That's impossible right now, though, it seems. 
When I loop through the gradient of the weights of my first module (with the following code): firstmodel.zerograd() secondmodel.zerograd() loss.backward() # retainvariables=True for param in secondmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) for param in firstmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) I get the following error, stating that those gradients can no longer be viewed/exist: -main()\nargmax is not differentiable (and even if you try to think about it as a function differentiable almost everywhere, its derivative is always 0, so you won't get anything meaningful).", "positive_passages": [{"docid": "doc-en-pytorch-f7031e660f7387bb371e278ab1f5e0deba5828cc04c655b5e42a5cb6a283bf7f", "text": "| :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides for :attr:`padding` number of points | :attr:`dilation` controls the spacing between the kernel points. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. for :attr:`padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the \u00e0 trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). 
The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`dilation` can either be:", "commid": "pytorch_pr_1602"}], "negative_passages": []} {"query_id": "q-en-pytorch-815f7a4bc9bd150a96577f3e8c7eaed378af052f98845620366116b43d873a7c", "query": "Hi, I need to pass input through one nn.Module, then argmax the output, and then pass that output to a second nn.Module. I want to backprop through the argmax back to the weights of the first module. That's impossible right now, though, it seems. When I loop through the gradient of the weights of my first module (with the following code): firstmodel.zerograd() secondmodel.zerograd() loss.backward() # retainvariables=True for param in secondmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) for param in firstmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) I get the following error, stating that those gradients can no longer be viewed/exist: -main()\nargmax is not differentiable (and even if you try to think about it as a function differentiable almost everywhere, its derivative is always 0, so you won't get anything meaningful).", "positive_passages": [{"docid": "doc-en-pytorch-ca8a9e7cde8b3e302e9883c007c0b71c952ec4d2b1b23b639294d25fe54d87c8", "text": "composed of several input planes. This module can be seen as the gradient of Conv1d with respect to its input. It is sometimes (but incorrectly) refered to as a deconvolutional operation. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation). | :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides for :attr:`padding` number of points. | If :attr:`output_padding` is non-zero, then the output is implicitly zero-padded on one side for :attr:`output_padding` number of points. 
| :attr:`dilation` controls the spacing between the kernel points; also known as the \u00e0 trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). .. note::", "commid": "pytorch_pr_1602"}], "negative_passages": []} {"query_id": "q-en-pytorch-815f7a4bc9bd150a96577f3e8c7eaed378af052f98845620366116b43d873a7c", "query": "Hi, I need to pass input through one nn.Module, then argmax the output, and then pass that output to a second nn.Module. I want to backprop through the argmax back to the weights of the first module. That's impossible right now, though, it seems. 
When I loop through the gradient of the weights of my first module (with the following code): firstmodel.zerograd() secondmodel.zerograd() loss.backward() # retainvariables=True for param in secondmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) for param in firstmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) I get the following error, stating that those gradients can no longer be viewed/exist: -main()\nargmax is not differentiable (and even if you try to think about it as a function differentiable almost everywhere, its derivative is always 0, so you won't get anything meaningful).", "positive_passages": [{"docid": "doc-en-pytorch-2a4353f25e6b66254df4b7bf1a55ef91eb3c6280d482b7114d6ee0c1c2abfae1", "text": "output_padding (int or tuple, optional): Zero-padding added to one side of the output groups (int, optional): Number of blocked connections from input channels to output channels bias (bool, optional): If True, adds a learnable bias to the output dilation (int or tuple, optional): Spacing between kernel elements Shape: - Input: :math:`(N, C_{in}, L_{in})`", "commid": "pytorch_pr_1602"}], "negative_passages": []} {"query_id": "q-en-pytorch-815f7a4bc9bd150a96577f3e8c7eaed378af052f98845620366116b43d873a7c", "query": "Hi, I need to pass input through one nn.Module, then argmax the output, and then pass that output to a second nn.Module. I want to backprop through the argmax back to the weights of the first module. That's impossible right now, though, it seems. 
When I loop through the gradient of the weights of my first module (with the following code): firstmodel.zerograd() secondmodel.zerograd() loss.backward() # retainvariables=True for param in secondmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) for param in firstmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) I get the following error, stating that those gradients can no longer be viewed/exist: -main()\nargmax is not differentiable (and even if you try to think about it as a function differentiable almost everywhere, its derivative is always 0, so you won't get anything meaningful).", "positive_passages": [{"docid": "doc-en-pytorch-29428830a26059a849496a55d799332d3518e5869a3f91cc1c2fde212359f495", "text": "composed of several input planes. This module can be seen as the gradient of Conv2d with respect to its input. It is sometimes (but incorrectly) refered to as a deconvolutional operation. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation). | :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides for :attr:`padding` number of points for :attr:`padding` number of points. | If :attr:`output_padding` is non-zero, then the output is implicitly zero-padded on one side for :attr:`output_padding` number of points | :attr:`dilation` controls the spacing between the kernel points. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. for :attr:`output_padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the \u00e0 trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. | :attr:`groups` controls the connections between inputs and outputs. 
`in_channels` and `out_channels` must both be divisible by `groups`. | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`output_padding` can either be: - a single ``int`` -- in which case the same value is used for the height and width dimension - a single ``int`` -- in which case the same value is used for the height and width dimensions - a ``tuple`` of two ints -- in which case, the first `int` is used for the height dimension, and the second `int` for the width dimension", "commid": "pytorch_pr_1602"}], "negative_passages": []} {"query_id": "q-en-pytorch-815f7a4bc9bd150a96577f3e8c7eaed378af052f98845620366116b43d873a7c", "query": "Hi, I need to pass input through one nn.Module, then argmax the output, and then pass that output to a second nn.Module. I want to backprop through the argmax back to the weights of the first module. That's impossible right now, though, it seems. 
When I loop through the gradient of the weights of my first module (with the following code): firstmodel.zerograd() secondmodel.zerograd() loss.backward() # retainvariables=True for param in secondmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) for param in firstmodel.parameters(): print( 'data: ', ) print( 'grad data: ', ) I get the following error, stating that those gradients can no longer be viewed/exist: -main()\nargmax is not differentiable (and even if you try to think about it as a function differentiable almost everywhere, its derivative is always 0, so you won't get anything meaningful).", "positive_passages": [{"docid": "doc-en-pytorch-89807e395c83aa297144688018ba65eda0cbd3a26f5906aad71ea90d8b1422e8", "text": "The transposed convolution operator multiplies each input value element-wise by a learnable kernel, and sums over the outputs from all input feature planes. **This module can be seen as the exact reverse of Conv3d**. It is sometimes (but incorrectly) refered to as a deconvolutional operation. This module can be seen as the gradient of Conv3d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation). | :attr:`stride` controls the stride for the cross-correlation. | If :attr:`padding` is non-zero, then the input is implicitly zero-padded on both sides for :attr:`padding` number of points for :attr:`padding` number of points. | If :attr:`output_padding` is non-zero, then the output is implicitly zero-padded on one side for :attr:`output_padding` number of points | :attr:`groups` controls the connections between inputs and outputs. for :attr:`output_padding` number of points. | :attr:`dilation` controls the spacing between the kernel points; also known as the \u00e0 trous algorithm. It is harder to describe, but this `link`_ has a nice visualization of what :attr:`dilation` does. 
| :attr:`groups` controls the connections between inputs and outputs. `in_channels` and `out_channels` must both be divisible by `groups`. | At groups=1, all inputs are convolved to all outputs. | At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated. At groups=`in_channels`, each input channel is convolved with its own set of filters (of size `out_channels // in_channels`). The parameters :attr:`kernel_size`, :attr:`stride`, :attr:`padding`, :attr:`output_padding` can either be: - a single ``int`` -- in which case the same value is used for the height and width dimension - a single ``int`` -- in which case the same value is used for the depth, height and width dimensions - a ``tuple`` of three ints -- in which case, the first `int` is used for the depth dimension, the second `int` for the width dimension and the third `int` for the width dimension", "commid": "pytorch_pr_1602"}], "negative_passages": []} {"query_id": "q-en-pytorch-76085f668823e1e14d3db20f6e407e0fc43a0e781d2a486c75c60b2bf0c64322", "query": "This is how I implement the decoder of a sequence to sequence model I'm not sure about the new autograd mechanics but this worked in the previous version. If I didn't make the unrolling codes a function it will work. It will also work with CPU. I compiled from source with Python 2.7, CUDA 8.0 and Cudnn 6\nThanks for the minimal reproduction! It broke OpenNMT and a big internal model but we weren't looking forward to isolating what parts of those had the issue.\nThat's great, I'll take a look today. Thanks!\nI've pushed a fix to .\nHow does the inference work in this case? 
I mean, once you train it, how do you test it in the test phase?", "positive_passages": [{"docid": "doc-en-pytorch-e603c4dc93ed38afc2495b61d9ec1a735327d801673d781f612664d7b18db9f0", "text": "{ PyObject_GC_UnTrack(self); THPFunction_clear(self); self->cdata_ptr.~weak_ptr(); self->cdata.~PyFunction(); Py_TYPE(self)->tp_free((PyObject*)self); }", "commid": "pytorch_pr_1454"}], "negative_passages": []} {"query_id": "q-en-pytorch-76085f668823e1e14d3db20f6e407e0fc43a0e781d2a486c75c60b2bf0c64322", "query": "This is how I implement the decoder of a sequence-to-sequence model. I'm not sure about the new autograd mechanics, but this worked in the previous version. If I didn't make the unrolling code a function, it would work. It will also work with CPU. 
I compiled from source with Python 2.7, CUDA 8.0 and Cudnn 6\nThanks for the minimal reproduction! It broke OpenNMT and a big internal model but we weren't looking forward to isolating what parts of those had the issue.\nThat's great, I'll take a look today. Thanks!\nI've pushed a fix to .\nHow does the inference work in this case? I mean, once you train it how do you test it in the test phase?", "positive_passages": [{"docid": "doc-en-pytorch-795b2ab192cab92f70f1710949659bdbab7d96fb947e1eb04e3bd1a080cfaf4a", "text": "} }; // Similar to shared_from_this. There's a problem that the Python object // and its cdata depend on each other being alive, so we can't keep // shared_ptrs as members, but we'd like to be able to manage the lifetime of // the objects using shared_ptrs in the C++ graph. The only way to get a new // shared_ptr that references them is through THPFunction_asFunction. When // called for the first time it will allocate a new shared_ptr and save a // weak_ptr in cdata_ptr attr. Later, when we try to take another reference, // we'll try to lock cdata_ptr and return its value if successful. Otherwise it // means that all shared_ptrs returned previously have been freed, so we can // create a new one. This ensures that this object is managed by at most one // shared_ptr control block at any time - a guarantee we depend on in other places // (e.g. we use weak_ptrs in SavedVariable because we know it won't go out of scope). 
std::shared_ptr THPFunction_asFunction(THPFunction* self) { if (!self) { return std::shared_ptr(); } Py_INCREF((PyObject*)self); return std::shared_ptr(&self->cdata, Decref()); auto ptr = self->cdata_ptr.lock(); if (ptr) return ptr; ptr = std::shared_ptr(&self->cdata, Decref()); self->cdata_ptr = ptr; return ptr; }", "commid": "pytorch_pr_1454"}], "negative_passages": []} {"query_id": "q-en-pytorch-76085f668823e1e14d3db20f6e407e0fc43a0e781d2a486c75c60b2bf0c64322", "query": "This is how I implement the decoder of a sequence to sequence model I'm not sure about the new autograd mechanics but this worked in the previous version. If I didn't make the unrolling codes a function it will work. It will also work with CPU. I compiled from source with Python 2.7, CUDA 8.0 and Cudnn 6\nThanks for the minimal reproduction! It broke OpenNMT and a big internal model but we weren't looking forward to isolating what parts of those had the issue.\nThat's great, I'll take a look today. Thanks!\nI've pushed a fix to .\nHow does the inference work in this case? I mean, once you train it how do you test it in the test phase?", "positive_passages": [{"docid": "doc-en-pytorch-4c76b9791fe29abeb43ad1113e76e402f5a05f42386d55c8621cc65114c5bec7", "text": "std::vector *is_variable_input; char has_freed_buffers; // See a comment in THPFucntion_asFunction for details about this field. std::weak_ptr cdata_ptr; torch::autograd::PyFunction cdata; };", "commid": "pytorch_pr_1454"}], "negative_passages": []} {"query_id": "q-en-pytorch-76085f668823e1e14d3db20f6e407e0fc43a0e781d2a486c75c60b2bf0c64322", "query": "This is how I implement the decoder of a sequence to sequence model I'm not sure about the new autograd mechanics but this worked in the previous version. If I didn't make the unrolling codes a function it will work. It will also work with CPU. I compiled from source with Python 2.7, CUDA 8.0 and Cudnn 6\nThanks for the minimal reproduction! 
It broke OpenNMT and a big internal model but we weren't looking forward to isolating what parts of those had the issue.\nThat's great, I'll take a look today. Thanks!\nI've pushed a fix to .\nHow does the inference work in this case? I mean, once you train it how do you test it in the test phase?", "positive_passages": [{"docid": "doc-en-pytorch-6217b780b346a24b481b52fa16ca32bab553e766bdf03a23a5beeb3ab9e75d8e", "text": "// should have saved the grad accumulator. Even if the Variable no longer // alive, the accumulator should be kept alive by the references in the graph). if (requires_grad && !grad_fn && weak_grad_fn.expired() && grad_accumulator.expired()) throw std::logic_error(\"No grad accumulator for a saved leaf!\"); throw std::logic_error(\"No grad accumulator for a saved leaf!\"); new_var->grad_accumulator = grad_accumulator; return new_var;", "commid": "pytorch_pr_1454"}], "negative_passages": []} {"query_id": "q-en-pytorch-76085f668823e1e14d3db20f6e407e0fc43a0e781d2a486c75c60b2bf0c64322", "query": "This is how I implement the decoder of a sequence to sequence model I'm not sure about the new autograd mechanics but this worked in the previous version. If I didn't make the unrolling codes a function it will work. It will also work with CPU. I compiled from source with Python 2.7, CUDA 8.0 and Cudnn 6\nThanks for the minimal reproduction! It broke OpenNMT and a big internal model but we weren't looking forward to isolating what parts of those had the issue.\nThat's great, I'll take a look today. Thanks!\nI've pushed a fix to .\nHow does the inference work in this case? 
I mean, once you train it how do you test it in the test phase?", "positive_passages": [{"docid": "doc-en-pytorch-b62bb85e91269872389a86680d531177e7ea2d3df3439640328fac38c35f4ace", "text": "LDFLAGS=\"-L$INSTALL_DIR/lib \" LD_POSTFIX=\".so.1\" LD_POSTFIX_UNVERSIONED=\".so\" if [[ $(uname) == 'Darwin' ]]; then if [[ $(uname) == 'Darwin' ]]; then LDFLAGS=\"$LDFLAGS -Qunused-arguments -Wl,-rpath,@loader_path\" LD_POSTFIX=\".1.dylib\" LD_POSTFIX_UNVERSIONED=\".dylib\"", "commid": "pytorch_pr_1454"}], "negative_passages": []} {"query_id": "q-en-pytorch-76085f668823e1e14d3db20f6e407e0fc43a0e781d2a486c75c60b2bf0c64322", "query": "This is how I implement the decoder of a sequence to sequence model I'm not sure about the new autograd mechanics but this worked in the previous version. If I didn't make the unrolling codes a function it will work. It will also work with CPU. I compiled from source with Python 2.7, CUDA 8.0 and Cudnn 6\nThanks for the minimal reproduction! It broke OpenNMT and a big internal model but we weren't looking forward to isolating what parts of those had the issue.\nThat's great, I'll take a look today. Thanks!\nI've pushed a fix to .\nHow does the inference work in this case? I mean, once you train it how do you test it in the test phase?", "positive_passages": [{"docid": "doc-en-pytorch-786f69c0bf8a2720026de6b219dea5dc3377a7d91fd6be8247ffec9c42571275", "text": "-DCMAKE_CXX_FLAGS=\"$C_FLAGS $CPP_FLAGS\" make install cp \"lib/libnccl.so.1\" \"${INSTALL_DIR}/lib/libnccl.so.1\" ln -s \"${INSTALL_DIR}/lib/libnccl.so.1\" \"${INSTALL_DIR}/lib/libnccl.so\" if [ ! -f \"${INSTALL_DIR}/lib/libnccl.so\" ]; then ln -s \"${INSTALL_DIR}/lib/libnccl.so.1\" \"${INSTALL_DIR}/lib/libnccl.so\" fi cd ../.. }", "commid": "pytorch_pr_1454"}], "negative_passages": []} {"query_id": "q-en-pytorch-f356b8e87796ce8c604f40a17e0ccd34ef326a6487927d997013a1f527bca839", "query": "We should check that the version from cudnn.h matches the version from the loaded shared library. 
Really weird things happen if they don't match.", "positive_passages": [{"docid": "doc-en-pytorch-e088ba7eb06236af95ffbf20892258359ddcc1f965fc7285966c25650f6bbb84", "text": "if arg['name'] in ['self', 'state', 'dataType', 'handle']: arg['ignore_check'] = True declaration['options'] = self.filter_unique_options(declaration['options']) return declarations return [d for d in declarations if not d.get('only_register', False)] def filter_unique_options(self, options): def signature(option):", "commid": "pytorch_pr_1586"}], "negative_passages": []} {"query_id": "q-en-pytorch-f356b8e87796ce8c604f40a17e0ccd34ef326a6487927d997013a1f527bca839", "query": "We should check that the version from cudnn.h matches the version from the loaded shared library. Really weird things happen if they don't match.", "positive_passages": [{"docid": "doc-en-pytorch-ee807a8e8ebf0fd8542ccbaf50e041a6ab1d275e3125a10ba59b21252b2e1b13", "text": "if hasattr(lib, 'cudnnGetErrorString'): lib.cudnnGetErrorString.restype = ctypes.c_char_p __cudnn_version = lib.cudnnGetVersion() compile_version = torch._C._cudnn_version() # Check that cuDNN major and minor versions match if (__cudnn_version // 100) != (compile_version // 100): raise RuntimeError( 'cuDNN version mismatch: PyTorch was compiled against {} ' 'but linked against {}'.format(compile_version, __cudnn_version)) else: lib = None return lib", "commid": "pytorch_pr_1586"}], "negative_passages": []} {"query_id": "q-en-pytorch-f356b8e87796ce8c604f40a17e0ccd34ef326a6487927d997013a1f527bca839", "query": "We should check that the version from cudnn.h matches the version from the loaded shared library. 
Really weird things happen if they don't match.", "positive_passages": [{"docid": "doc-en-pytorch-279f897a3c8e0683bf1478390a961f1ad962cef1822c20e026c9288bcad6f801", "text": "- THTensor* output - std::vector pad - std::vector stride - std::vector dilation - std::vector dilation - int groups - bool benchmark ]]", "commid": "pytorch_pr_1586"}], "negative_passages": []} {"query_id": "q-en-pytorch-f356b8e87796ce8c604f40a17e0ccd34ef326a6487927d997013a1f527bca839", "query": "We should check that the version from cudnn.h matches the version from the loaded shared library. Really weird things happen if they don't match.", "positive_passages": [{"docid": "doc-en-pytorch-fdc3cf1a5a94c106ca38b27943089513289aa388eb7a092c77a71e0dcce94438", "text": "- bool training - double epsilon ]] [[ name: cudnn_version only_register: True ]] static PyObject * THCUDNN_cudnn_version(PyObject *self, PyObject *args) { return PyLong_FromLong(CUDNN_VERSION); } ", "commid": "pytorch_pr_1586"}], "negative_passages": []} {"query_id": "q-en-pytorch-89ff28c4a5771497cf71252d9105dbaeb0e85eecfd1e3aa3339959ef1fc89104", "query": "Hi I think that in lines 28 and 30: if self.transA: a = a.transpose(2, 3) if self.transB: b = b.transpose(2, 3) should be: if self.transA: a = a.transpose(1, 2) if self.transB: b = b.transpose(1, 2) Indeed in that branch the tensor has 3 dimensions and thus the code crashes. Maybe an indexing error when translating from Lua? Thank you very much,\nYes, you're right. Could you send a PR with a fix? 
Thanks!\nYes of course, just done, thank you very much.", "positive_passages": [{"docid": "doc-en-pytorch-3a4ab99ab856219ac85201364e4bc4edc86046263a15d798b21c81b505236150", "text": "torch.mm(a, b, out=self.output) else: if self.transA: a = a.transpose(2, 3) a = a.transpose(1, 2) if self.transB: b = b.transpose(2, 3) b = b.transpose(1, 2) self.output.resize_(a.size(0), a.size(1), b.size(2)) torch.bmm(a, b, out=self.output)", "commid": "pytorch_pr_1617"}], "negative_passages": []} {"query_id": "q-en-pytorch-77fbdb92efba00b171fda95ef72ad1556da96d83b890858d72fb01d6583272cb", "query": "as shown in source code, this parameter is not used. Any reason that we keep it?\nthis can be cleaned up.", "positive_passages": [{"docid": "doc-en-pytorch-14ae05be02173d399ec555be9e0cbceb1564c3c390e31fc761ea536caad90432", "text": "\"\"\" return self._apply(lambda t: t.cuda(device_id)) def cpu(self, device_id=None): def cpu(self): \"\"\"Moves all model parameters and buffers to the CPU.\"\"\" return self._apply(lambda t: t.cpu())", "commid": "pytorch_pr_2073"}], "negative_passages": []} {"query_id": "q-en-pytorch-77fbdb92efba00b171fda95ef72ad1556da96d83b890858d72fb01d6583272cb", "query": "as shown in source code, this parameter is not used. Any reason that we keep it?\nthis can be cleaned up.", "positive_passages": [{"docid": "doc-en-pytorch-b934b16f7e71a839229d7249bd3f6ee13378fc4decd660b53578b16485160f19", "text": "optimizer (Optimizer): Wrapped optimizer. step_size (int): Period of learning rate decay. gamma (float): Multiplicative factor of learning rate decay. Default: -0.1. Default: 0.1. last_epoch (int): The index of last epoch. Default: -1. Example:", "commid": "pytorch_pr_2280"}], "negative_passages": []} {"query_id": "q-en-pytorch-77fbdb92efba00b171fda95ef72ad1556da96d83b890858d72fb01d6583272cb", "query": "as shown in source code, this parameter is not used. 
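The pytorch_pr_1617 fix above changes `transpose(2, 3)` to `transpose(1, 2)` because in the batched-matmul branch the tensors are 3-D, shaped (batch, rows, cols): valid dimension indices are 0-2, and the two to swap for a transposed operand are 1 and 2. A pure-Python shape sketch of that index arithmetic (`transpose_shape` is a hypothetical helper, not a torch API; no torch required):

```python
def transpose_shape(shape, dim0, dim1):
    """Swap two dims of a shape tuple, raising (as torch.transpose would)
    when an index is out of range for the tensor's rank."""
    if not (0 <= dim0 < len(shape) and 0 <= dim1 < len(shape)):
        raise IndexError("dimension out of range for %d-D shape" % len(shape))
    s = list(shape)
    s[dim0], s[dim1] = s[dim1], s[dim0]
    return tuple(s)

# In the batched branch a and b are 3-D: (batch, rows, cols).
batched = (4, 2, 3)
assert transpose_shape(batched, 1, 2) == (4, 3, 2)  # the corrected call
# transpose_shape(batched, 2, 3) raises IndexError -- the reported crash,
# since dim 3 does not exist on a 3-D tensor.
```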
Any reason that we keep it?\nthis can be cleaned up.", "positive_passages": [{"docid": "doc-en-pytorch-8a512103206f66edc9a58f9968e1199ec6631cce0dc5816e4d83e5be73846071", "text": "optimizer (Optimizer): Wrapped optimizer. milestones (list): List of epoch indices. Must be increasing. gamma (float): Multiplicative factor of learning rate decay. Default: -0.1. Default: 0.1. last_epoch (int): The index of last epoch. Default: -1. Example:", "commid": "pytorch_pr_2280"}], "negative_passages": []} {"query_id": "q-en-pytorch-63eb661e07e3fadddd1e2c8c3e1932066a1b6e25b49c780bbf420bf912a21e4a", "query": "Consider following code, and run it with multi gpus (e.g. 4): It will output: That is, the default device would always be zero even in DataParallel. My pytorch version is 0.1.122. I think this is not the desired behavior. It would cause some troubles. I tried to insert my own cuda kernel into backward to calculate the gradients, it become very slow, and I fixed it by (grad.getdevice()). Anyway, I think currentdevice in forward and backward should be the same, or could anyone explain to me why they are different?\nthis seems like something we should fix: ?\n
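For context on the `gamma` parameter whose documented default the passages above correct from -0.1 to 0.1: a StepLR-style schedule multiplies the learning rate by `gamma` once every `step_size` epochs, so a positive fraction like 0.1 is the sensible default. A minimal standalone sketch of that schedule (a hypothetical helper illustrating the formula, not the actual torch.optim.lr_scheduler implementation):

```python
def step_lr(base_lr, epoch, step_size, gamma=0.1):
    # Learning rate after `epoch` epochs under a StepLR-style schedule:
    # decayed by a factor of `gamma` once every `step_size` epochs.
    return base_lr * gamma ** (epoch // step_size)

assert step_lr(0.1, 0, step_size=30) == 0.1                   # no decay yet
assert abs(step_lr(0.1, 30, step_size=30) - 0.01) < 1e-12     # one decay
assert abs(step_lr(0.1, 65, step_size=30) - 0.001) < 1e-12    # two decays
```

MultiStepLR (the second passage) generalizes this by decaying at an explicit, increasing list of milestone epochs instead of a fixed period.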