diff --git a/README.md b/README.md
index b71c458..0084aca 100644
--- a/README.md
+++ b/README.md
@@ -3,11 +3,161 @@
 CUDA Stream Compaction
 **University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**
-* (TODO) YOUR NAME HERE
-* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
+* Yi Guo
+* Tested on: Windows 8.1, Intel(R) Core(TM) i5-4200M CPU @ 2.50GHz, 8GB, NVIDIA GeForce 840M (personal notebook)
+
+## Description
+In this project, I implemented the parallel stream compaction algorithm. For more details, see `INSTRUCTION.md`.
+
+## Screenshots
+These are the test results of all the methods I implemented.
+![](./img/result1.png)
+
+![](./img/result2.png)
+
+## Performance Analysis
+
+* **Block Size**
+
+I compared the time cost of the scan function under different block sizes. The results are shown in the graph below.
+
+![](./img/blocksizeComparison.png)
+
+Changing the block size makes no significant difference. One detail does matter, though: when we sweep up or sweep down the array, the grid size should be recomputed on every loop iteration. Since each level of the tree touches only a fraction of the array, adjusting the launch configuration per level avoids wasting compute resources. So it should be something like:
+
+    for (int d = 0; d < layer; d++)
+    {
+        int nodeNum = 1 << (layer - 1 - d);
+        blocknum = nodeNum / threadPerBlock + 1;
+        KernUpSweep<<<blocknum, threadPerBlock>>>(d, dev_Scatter, nodeNum);
+    }
+
+instead of:
+
+    blocknum = oLength / threadPerBlock + 1;
+    for (int d = 0; d < layer; d++)
+    {
+        int nodeNum = 1 << (layer - 1 - d);
+        KernUpSweep<<<blocknum, threadPerBlock>>>(d, dev_Scatter, nodeNum);
+    }
+
+* **Efficiency of different scan methods**
+
+I compared the efficiency of the different scan methods and plotted the results below.
+![](./img/ScanComparison.png)
+
+As the plot shows, when the array is small, `cpu scan` is slightly faster than all the GPU methods. But when the array is very large, the work-efficient GPU scan is much faster than `cpu scan`. From an algorithmic perspective, a GPU scan should always beat a CPU scan: `cpu scan` takes O(n) sequential steps, while on the GPU the number of parallel steps drops to O(log n). From an architectural perspective, however, the GPU pays a high latency on every global-memory access (I keep the data in global memory in this project; this could be optimized with shared memory). When processing a huge amount of data, massive parallelism hides that latency, but for small inputs the GPU has no clear advantage over the CPU and can even be slower.
+
+One result that surprised me at first is that `naive scan` is the slowest method on very large arrays. The likely reason is total work: the naive algorithm performs O(n log n) additions across its passes, compared to O(n) additions for both the CPU scan and the work-efficient scan, so its smaller step count is outweighed by the extra work once n is large.
+
+* **Thrust scan**
+
+As the plot above shows, `thrust::exclusive_scan` is more efficient than the scan methods I implemented on the GPU. I can think of two possible reasons. One is that Thrust likely stages data in shared memory instead of reading and writing global memory at every step, which reduces memory latency. The other is that Thrust may apply further algorithmic optimizations; one hint is that its running time drops sharply when the array size is not a power of two, which suggests a power-of-two size is close to the worst case for whatever strategy it uses.
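For reference, the O(n) CPU baseline discussed above is just a single dependent loop. This is an illustrative sketch, not the project's exact `StreamCompaction::CPU::scan`:

```cpp
#include <cstddef>
#include <vector>

// Exclusive prefix sum in O(n): out[i] = sum of in[0..i-1].
// Each iteration depends on the previous one, so this loop cannot be
// parallelized directly -- that is exactly what the GPU scans work around.
std::vector<int> cpuExclusiveScan(const std::vector<int>& in) {
    std::vector<int> out(in.size());
    int running = 0;
    for (std::size_t i = 0; i < in.size(); ++i) {
        out[i] = running;   // everything strictly before position i
        running += in[i];
    }
    return out;
}
```

On the first four elements of the test input above (`34 28 17 4`), this yields `0 34 62 79`, matching the `cpu scan` output shown in the test results.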
+
+* **Test Result**
+
+```
+****************
+** SCAN TESTS **
+****************
+    [  34  28  17   4   6  42  43  24  15  44  27  19  13 ...  43   0 ]
+==== cpu scan, power-of-two ====
+   elapsed time: 0ms    (std::chrono Measured)
+    [   0  34  62  79  83  89 131 174 198 213 257 284 303 ... 24338 24381 ]
+==== cpu scan, non-power-of-two ====
+   elapsed time: 0ms    (std::chrono Measured)
+    [   0  34  62  79  83  89 131 174 198 213 257 284 303 ... 24197 24245 ]
+    passed
+==== naive scan, power-of-two ====
+   elapsed time: 0.057184ms    (CUDA Measured)
+    passed
+==== naive scan, non-power-of-two ====
+   elapsed time: 0.057216ms    (CUDA Measured)
+    passed
+==== work-efficient scan, power-of-two ====
+   elapsed time: 0.157728ms    (CUDA Measured)
+    passed
+==== work-efficient scan, non-power-of-two ====
+   elapsed time: 0.153376ms    (CUDA Measured)
+    passed
+==== thrust scan, power-of-two ====
+   elapsed time: 0.156192ms    (CUDA Measured)
+    passed
+==== thrust scan, non-power-of-two ====
+   elapsed time: 0.023776ms    (CUDA Measured)
+    passed
+
+*****************************
+** STREAM COMPACTION TESTS **
+*****************************
+    [   2   0   1   2   2   0   1   2   1   0   3   1   1 ...   3   0 ]
+==== cpu compact without scan, power-of-two ====
+   elapsed time: 0.003695ms    (std::chrono Measured)
+    [   2   1   2   2   1   2   1   3   1   1   1   2   1 ...   1   3 ]
+    passed
+==== cpu compact without scan, non-power-of-two ====
+   elapsed time: 0.004105ms    (std::chrono Measured)
+    [   2   1   2   2   1   2   1   3   1   1   1   2   1 ...   2   2 ]
+    passed
+==== cpu compact with scan ====
+   elapsed time: 0.009853ms    (std::chrono Measured)
+    [   2   1   2   2   1   2   1   3   1   1   1   2   1 ...   1   3 ]
+    passed
+==== work-efficient compact, power-of-two ====
+   elapsed time: 0.212384ms    (CUDA Measured)
+    passed
+==== work-efficient compact, non-power-of-two ====
+   elapsed time: 0.219104ms    (CUDA Measured)
+    passed
+```
+## Extra Credit
+
+* **Efficient scan optimization**
+
+Compared to the basic algorithm, I optimized the `kernUpsweep` and `kernDownsweep` kernel functions by reducing the branching inside them.
Instead of checking whether the current index is a power of 2, each thread directly computes the indices it needs to touch.
+
+```
+__global__ void KernUpSweep(int d, int *idata, int nodeNum)
+{
+    int idx = (blockIdx.x * blockDim.x) + threadIdx.x;
+    if (idx >= nodeNum) return;
+    idata[(idx + 1) * (1 << (d + 1)) - 1] += idata[idx * (1 << (d + 1)) + (1 << d) - 1];
+}
+
+__global__ void KernDownSweep(int d, int *idata, int nodeNum)
+{
+    int idx = (blockIdx.x * blockDim.x) + threadIdx.x;
+    if (idx >= nodeNum) return;
+    int nodeIdx = idx * (1 << (d + 1)) + (1 << d) - 1;
+    int temp = idata[nodeIdx];
+    idata[nodeIdx] = idata[nodeIdx + (1 << d)];
+    idata[nodeIdx + (1 << d)] += temp;
+}
+```
+
+Calling the kernels:
+```
+for (int d = 0; d < layer; d++)
+{
+    int nodeNum = 1 << (layer - 1 - d);
+    int blocknum = nodeNum / threadPerBlock + 1;
+    KernUpSweep<<<blocknum, threadPerBlock>>>(d, dev_Data, nodeNum);
+}
+cudaMemset(dev_Data + oLength - 1, 0, sizeof(int));
+checkCUDAError("cudaMemset failed!");
+for (int d = layer - 1; d >= 0; d--)
+{
+    int nodeNum = 1 << (layer - 1 - d);
+    int blocknum = nodeNum / threadPerBlock + 1;
+    KernDownSweep<<<blocknum, threadPerBlock>>>(d, dev_Data, nodeNum);
+}
+```
+
-### (TODO: Your README)
-Include analysis, etc. (Remember, this is public, so don't put
-anything here that you don't want to share with the world.)
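The indexing above can be sanity-checked on the host. The sketch below is a hypothetical CPU re-creation of the two kernels (array length assumed to be a power of two): at level `d` it visits exactly `nodeNum = 2^(layer-1-d)` nodes, and the final result is an exclusive scan.

```cpp
#include <vector>

// Host-side re-creation of the KernUpSweep/KernDownSweep indexing.
// data.size() must be a power of two; data is replaced by its exclusive scan.
void blellochScanHost(std::vector<int>& data) {
    int layer = 0;
    while ((1u << layer) < data.size()) ++layer;
    // Up-sweep: level d has nodeNum active "threads", no divergence branch.
    for (int d = 0; d < layer; ++d) {
        int nodeNum = 1 << (layer - 1 - d);
        for (int idx = 0; idx < nodeNum; ++idx)
            data[(idx + 1) * (1 << (d + 1)) - 1] +=
                data[idx * (1 << (d + 1)) + (1 << d) - 1];
    }
    data.back() = 0;  // clear the root before the down-sweep
    // Down-sweep mirrors the same traversal in reverse.
    for (int d = layer - 1; d >= 0; --d) {
        int nodeNum = 1 << (layer - 1 - d);
        for (int idx = 0; idx < nodeNum; ++idx) {
            int nodeIdx = idx * (1 << (d + 1)) + (1 << d) - 1;
            int t = data[nodeIdx];
            data[nodeIdx] = data[nodeIdx + (1 << d)];
            data[nodeIdx + (1 << d)] += t;
        }
    }
}
```

Running this on `{1,2,3,4,5,6,7,8}` produces the exclusive scan `{0,1,3,6,10,15,21,28}`, which is how I convinced myself the compacted index math is equivalent to the branchy version.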
diff --git a/img/ScanComparison.png b/img/ScanComparison.png new file mode 100644 index 0000000..7f48b46 Binary files /dev/null and b/img/ScanComparison.png differ diff --git a/img/blocksizeComparison.png b/img/blocksizeComparison.png new file mode 100644 index 0000000..58a8e1b Binary files /dev/null and b/img/blocksizeComparison.png differ diff --git a/img/result1.png b/img/result1.png new file mode 100644 index 0000000..8b034f5 Binary files /dev/null and b/img/result1.png differ diff --git a/img/result2.png b/img/result2.png new file mode 100644 index 0000000..2d0102f Binary files /dev/null and b/img/result2.png differ diff --git a/src/main.cpp b/src/main.cpp index 7305641..c335103 100644 --- a/src/main.cpp +++ b/src/main.cpp @@ -13,7 +13,7 @@ #include #include "testing_helpers.hpp" -const int SIZE = 1 << 8; // feel free to change the size of array +const int SIZE = 1 << 10; // feel free to change the size of array const int NPOT = SIZE - 3; // Non-Power-Of-Two int a[SIZE], b[SIZE], c[SIZE]; diff --git a/stream_compaction/common.cu b/stream_compaction/common.cu index 8fc0211..ae96953 100644 --- a/stream_compaction/common.cu +++ b/stream_compaction/common.cu @@ -1,39 +1,46 @@ #include "common.h" void checkCUDAErrorFn(const char *msg, const char *file, int line) { - cudaError_t err = cudaGetLastError(); - if (cudaSuccess == err) { - return; - } + cudaError_t err = cudaGetLastError(); + if (cudaSuccess == err) { + return; + } - fprintf(stderr, "CUDA error"); - if (file) { - fprintf(stderr, " (%s:%d)", file, line); - } - fprintf(stderr, ": %s: %s\n", msg, cudaGetErrorString(err)); - exit(EXIT_FAILURE); + fprintf(stderr, "CUDA error"); + if (file) { + fprintf(stderr, " (%s:%d)", file, line); + } + fprintf(stderr, ": %s: %s\n", msg, cudaGetErrorString(err)); + exit(EXIT_FAILURE); } namespace StreamCompaction { - namespace Common { + namespace Common { - /** - * Maps an array to an array of 0s and 1s for stream compaction. 
Elements - * which map to 0 will be removed, and elements which map to 1 will be kept. - */ - __global__ void kernMapToBoolean(int n, int *bools, const int *idata) { - // TODO - } + /** + * Maps an array to an array of 0s and 1s for stream compaction. Elements + * which map to 0 will be removed, and elements which map to 1 will be kept. + */ + __global__ void kernMapToBoolean(int n, int *bools, const int *idata) { + // TODO + int idx = (blockIdx.x * blockDim.x) + threadIdx.x; + if (idx >= n) return; + bools[idx] = idata[idx] ? 1 : 0; + } - /** - * Performs scatter on an array. That is, for each element in idata, - * if bools[idx] == 1, it copies idata[idx] to odata[indices[idx]]. - */ - __global__ void kernScatter(int n, int *odata, - const int *idata, const int *bools, const int *indices) { - // TODO - } + /** + * Performs scatter on an array. That is, for each element in idata, + * if bools[idx] == 1, it copies idata[idx] to odata[indices[idx]]. + */ + __global__ void kernScatter(int n, int *odata, + const int *idata, const int *bools, const int *indices) { + // TODO + int idx = (blockIdx.x * blockDim.x) + threadIdx.x; + if (idx >= n)return; + if (bools[idx]) + odata[indices[idx]] = idata[idx]; + } - } + } } diff --git a/stream_compaction/cpu.cu b/stream_compaction/cpu.cu index 05ce667..bad1976 100644 --- a/stream_compaction/cpu.cu +++ b/stream_compaction/cpu.cu @@ -1,50 +1,89 @@ #include #include "cpu.h" - -#include "common.h" +#include +#include "common.h" namespace StreamCompaction { - namespace CPU { - using StreamCompaction::Common::PerformanceTimer; - PerformanceTimer& timer() - { - static PerformanceTimer timer; - return timer; - } - - /** - * CPU scan (prefix sum). - * For performance analysis, this is supposed to be a simple for loop. - * (Optional) For better understanding before starting moving to GPU, you can simulate your GPU scan in this function first. 
- */ - void scan(int n, int *odata, const int *idata) { - timer().startCpuTimer(); - // TODO - timer().endCpuTimer(); - } - - /** - * CPU stream compaction without using the scan function. - * - * @returns the number of elements remaining after compaction. - */ - int compactWithoutScan(int n, int *odata, const int *idata) { - timer().startCpuTimer(); - // TODO - timer().endCpuTimer(); - return -1; - } - - /** - * CPU stream compaction using scan and scatter, like the parallel version. - * - * @returns the number of elements remaining after compaction. - */ - int compactWithScan(int n, int *odata, const int *idata) { - timer().startCpuTimer(); - // TODO - timer().endCpuTimer(); - return -1; - } - } + namespace CPU { + using StreamCompaction::Common::PerformanceTimer; + PerformanceTimer& timer() + { + static PerformanceTimer timer; + return timer; + } + + /** + * CPU scan (prefix sum). + * For performance analysis, this is supposed to be a simple for loop. + * (Optional) For better understanding before starting moving to GPU, you can simulate your GPU scan in this function first. + */ + void scan(int n, int *odata, const int *idata) { + + if (n <= 0) return; + memcpy(odata, idata, n * sizeof(int)); + int layer = ilog2ceil(n); + int oLength = 1 << layer; + + // Uncomment the timer here if you want to test the efficiency of scan function + //timer().startCpuTimer(); + for (int d = 0; d < layer; d++) { + for (int k = 0; k < oLength; k += (1 << (d + 1))) { + + odata[k + (1 << (d + 1)) - 1] += odata[k + (1 << d) - 1]; + } + } + odata[oLength - 1] = 0; + for (int d = layer - 1; d >= 0; d--) { + for (int k = 0; k < oLength; k += (1 << (d + 1))) { + int nodeIdx = k + (1 << d) - 1; + int temp = odata[nodeIdx]; + odata[nodeIdx] = odata[nodeIdx + (1 << d)]; + odata[nodeIdx + (1 << d)] += temp; + } + } + //timer().endCpuTimer(); + } + + /** + * CPU stream compaction without using the scan function. + * + * @returns the number of elements remaining after compaction. 
+	 */
+	int compactWithoutScan(int n, int *odata, const int *idata) {
+		if (n <= 0) return -1;
+		int num = 0;
+		timer().startCpuTimer();
+		for (int i = 0; i < n; i++) {
+			if (idata[i])
+				odata[num++] = idata[i];
+		}
+		timer().endCpuTimer();
+		return num;
+	}
+
+	/**
+	 * CPU stream compaction using scan and scatter, like the parallel version.
+	 *
+	 * @returns the number of elements remaining after compaction.
+	 */
+	int compactWithScan(int n, int *odata, const int *idata) {
+		if (n <= 0) return -1;
+		int num = 0;
+		timer().startCpuTimer();
+		// Map each element to 0/1 in place, then scan, then scatter.
+		for (int i = 0; i < n; i++) {
+			odata[i] = idata[i] ? 1 : 0;
+		}
+		scan(n, odata, odata);
+		// scan() is exclusive, so odata[n - 1] counts only the elements before
+		// the last one; include the last input element if it is nonzero.
+		num = odata[n - 1] + (idata[n - 1] ? 1 : 0);
+		for (int i = 0; i < n; i++) {
+			if (idata[i])
+				odata[odata[i]] = idata[i];
+		}
+		timer().endCpuTimer();
+		return num;
+	}
+	}
 }
diff --git a/stream_compaction/efficient.cu b/stream_compaction/efficient.cu
index 36c5ef2..3ee98f8 100644
--- a/stream_compaction/efficient.cu
+++ b/stream_compaction/efficient.cu
@@ -4,37 +4,136 @@
 #include "efficient.h"
 namespace StreamCompaction {
-	namespace Efficient {
-		using StreamCompaction::Common::PerformanceTimer;
-		PerformanceTimer& timer()
-		{
-			static PerformanceTimer timer;
-			return timer;
-		}
-
-		/**
-		 * Performs prefix-sum (aka scan) on idata, storing the result into odata.
-		 */
-		void scan(int n, int *odata, const int *idata) {
-			timer().startGpuTimer();
-			// TODO
-			timer().endGpuTimer();
-		}
-
-		/**
-		 * Performs stream compaction on idata, storing the result into odata.
-		 * All zeroes are discarded.
-		 *
-		 * @param n The number of elements in idata.
-		 * @param odata The array into which to store elements.
-		 * @param idata The array of elements to compact.
-		 * @returns The number of elements remaining after compaction.
- */ - int compact(int n, int *odata, const int *idata) { - timer().startGpuTimer(); - // TODO - timer().endGpuTimer(); - return -1; - } - } + namespace Efficient { + using StreamCompaction::Common::PerformanceTimer; + PerformanceTimer& timer() + { + static PerformanceTimer timer; + return timer; + } + int threadPerBlock = 256; + int* dev_Data; + int *dev_Map; + int *dev_Scatter; + int *dev_oData; + int *dev_total; + + __global__ void KernUpSweep(int d, int *idata, int nodeNum) + { + int idx = (blockIdx.x * blockDim.x) + threadIdx.x; + if (idx >= nodeNum) return; + idata[(idx + 1)*(1 << (d + 1)) - 1] += idata[idx*(1 << (d + 1)) + (1 << d) - 1]; + } + + __global__ void KernDownSweep(int d, int *idata, int nodeNum) + { + int idx = (blockIdx.x * blockDim.x) + threadIdx.x; + if (idx >= nodeNum) return; + int nodeIdx = idx*(1 << (d + 1)) + (1 << d) - 1; + int temp = idata[nodeIdx]; + idata[nodeIdx] = idata[nodeIdx + (1 << d)]; + idata[nodeIdx + (1 << d)] += temp; + } + /** + * Performs prefix-sum (aka scan) on idata, storing the result into odata. 
+	 */
+	void scan(int n, int *odata, const int *idata) {
+		int layer = ilog2ceil(n);
+		int oLength = 1 << layer;
+		cudaMalloc((void**)&dev_Data, oLength * sizeof(int));
+		checkCUDAError("cudaMalloc failed!");
+		// Zero the padding first, then copy only the n valid elements;
+		// copying oLength elements would read past the end of idata.
+		cudaMemset(dev_Data, 0, oLength * sizeof(int));
+		cudaMemcpy(dev_Data, idata, sizeof(int) * n, cudaMemcpyHostToDevice);
+		checkCUDAError("cudaMemcpy to device failed!");
+
+		timer().startGpuTimer();
+		for (int d = 0; d < layer; d++)
+		{
+			int nodeNum = 1 << (layer - 1 - d);
+			int blocknum = nodeNum / threadPerBlock + 1;
+			KernUpSweep<<<blocknum, threadPerBlock>>>(d, dev_Data, nodeNum);
+		}
+		cudaMemset(dev_Data + oLength - 1, 0, sizeof(int));
+		checkCUDAError("cudaMemset failed!");
+		for (int d = layer - 1; d >= 0; d--)
+		{
+			int nodeNum = 1 << (layer - 1 - d);
+			int blocknum = nodeNum / threadPerBlock + 1;
+			KernDownSweep<<<blocknum, threadPerBlock>>>(d, dev_Data, nodeNum);
+		}
+		timer().endGpuTimer();
+		cudaMemcpy(odata, dev_Data, sizeof(int) * n, cudaMemcpyDeviceToHost);
+		checkCUDAError("cudaMemcpy to host failed!");
+		cudaFree(dev_Data);
+	}
+
+	/**
+	 * Performs stream compaction on idata, storing the result into odata.
+	 * All zeroes are discarded.
+	 *
+	 * @param n The number of elements in idata.
+	 * @param odata The array into which to store elements.
+	 * @param idata The array of elements to compact.
+	 * @returns The number of elements remaining after compaction.
+	 */
+	int compact(int n, int *odata, const int *idata) {
+		if (n <= 0) return -1;
+		int layer = ilog2ceil(n);
+		int oLength = 1 << layer;
+		cudaMalloc((void**)&dev_Data, oLength * sizeof(int));
+		cudaMalloc((void**)&dev_Scatter, oLength * sizeof(int));
+		cudaMalloc((void**)&dev_Map, oLength * sizeof(int));
+		cudaMalloc((void**)&dev_oData, n * sizeof(int));
+		checkCUDAError("cudaMalloc failed!");
+		// Zero the padding first, then copy only the n valid elements;
+		// copying oLength elements would read past the end of idata.
+		cudaMemset(dev_Data, 0, oLength * sizeof(int));
+		cudaMemcpy(dev_Data, idata, n * sizeof(int), cudaMemcpyHostToDevice);
+		checkCUDAError("cudaMemcpy to device failed!");
+
+		int blocknum = oLength / threadPerBlock + 1;
+		timer().startGpuTimer();
+		Common::kernMapToBoolean<<<blocknum, threadPerBlock>>>(oLength, dev_Map, dev_Data);
+
+		// I reimplement the scan part here because the main function times scan
+		// and compaction separately, and I don't want to allocate device memory twice.
+		cudaMemcpy(dev_Scatter, dev_Map, oLength * sizeof(int), cudaMemcpyDeviceToDevice);
+		checkCUDAError("cudaMemcpy device to device failed!");
+
+		for (int d = 0; d < layer; d++)
+		{
+			int nodeNum = 1 << (layer - 1 - d);
+			blocknum = nodeNum / threadPerBlock + 1;
+			KernUpSweep<<<blocknum, threadPerBlock>>>(d, dev_Scatter, nodeNum);
+		}
+
+		cudaMemset(dev_Scatter + oLength - 1, 0, sizeof(int));
+		checkCUDAError("cudaMemset failed!");
+		for (int d = layer - 1; d >= 0; d--)
+		{
+			int nodeNum = 1 << (layer - 1 - d);
+			blocknum = nodeNum / threadPerBlock + 1;
+			KernDownSweep<<<blocknum, threadPerBlock>>>(d, dev_Scatter, nodeNum);
+		}
+
+		blocknum = n / threadPerBlock + 1;
+		Common::kernScatter<<<blocknum, threadPerBlock>>>(n, dev_oData, dev_Data, dev_Map, dev_Scatter);
+		timer().endGpuTimer();
+
+		cudaMemcpy(odata, dev_oData, sizeof(int) * n, cudaMemcpyDeviceToHost);
+		checkCUDAError("cudaMemcpy to host failed!");
+		int count, end;
+		cudaMemcpy(&count, dev_Scatter + n - 1, sizeof(int), cudaMemcpyDeviceToHost);
+		cudaMemcpy(&end, dev_Map + n - 1, sizeof(int), cudaMemcpyDeviceToHost);
+		checkCUDAError("cudaMemcpy to host failed!");
+		cudaFree(dev_Data);
cudaFree(dev_Scatter); + cudaFree(dev_Map); + cudaFree(dev_oData); + + return end ? count + 1 : count; + } + + + + } } diff --git a/stream_compaction/naive.cu b/stream_compaction/naive.cu index 9218f8e..8b21cb9 100644 --- a/stream_compaction/naive.cu +++ b/stream_compaction/naive.cu @@ -4,22 +4,51 @@ #include "naive.h" namespace StreamCompaction { - namespace Naive { - using StreamCompaction::Common::PerformanceTimer; - PerformanceTimer& timer() - { - static PerformanceTimer timer; - return timer; - } - // TODO: __global__ + namespace Naive { + using StreamCompaction::Common::PerformanceTimer; + PerformanceTimer& timer() + { + static PerformanceTimer timer; + return timer; + } + int threadPerBlock = 256; + int *dev_0, *dev_1; + // TODO: + __global__ void NaiveScan(int d, int *idata, int *odata, int oLength) { + int idx = (blockDim.x*blockIdx.x) + threadIdx.x; + if (idx >= oLength) return; + int flag = 1 << d; + odata[idx] = idx >= flag ? idata[idx] + idata[idx - flag] : idata[idx]; + } - /** - * Performs prefix-sum (aka scan) on idata, storing the result into odata. - */ - void scan(int n, int *odata, const int *idata) { - timer().startGpuTimer(); - // TODO - timer().endGpuTimer(); - } - } + /** + * Performs prefix-sum (aka scan) on idata, storing the result into odata. 
+	 */
+	void scan(int n, int *odata, const int *idata) {
+		int layer = ilog2ceil(n);
+		int oLength = 1 << layer;
+		cudaMalloc((void**)&dev_0, oLength * sizeof(int));
+		cudaMalloc((void**)&dev_1, oLength * sizeof(int));
+		checkCUDAError("cudaMalloc failed!");
+		// Zero the padding first, then copy only the n valid elements;
+		// copying oLength elements would read past the end of idata.
+		cudaMemset(dev_0, 0, oLength * sizeof(int));
+		cudaMemcpy(dev_0, idata, n * sizeof(int), cudaMemcpyHostToDevice);
+		checkCUDAError("cudaMemcpy to device failed!");
+		int blocknum = oLength / threadPerBlock + 1;
+
+		timer().startGpuTimer();
+		for (int d = 0; d < layer; d++) {
+			NaiveScan<<<blocknum, threadPerBlock>>>(d, dev_0, dev_1, oLength);
+			std::swap(dev_0, dev_1);
+		}
+		timer().endGpuTimer();
+
+		// NaiveScan produces an inclusive scan; shift right by one to make it exclusive.
+		odata[0] = 0;
+		cudaMemcpy(odata + 1, dev_0, (n - 1) * sizeof(int), cudaMemcpyDeviceToHost);
+		checkCUDAError("cudaMemcpy to host failed!");
+
+		cudaFree(dev_0);
+		cudaFree(dev_1);
+	}
+	}
 }
diff --git a/stream_compaction/thrust.cu b/stream_compaction/thrust.cu
index 36b732d..707b515 100644
--- a/stream_compaction/thrust.cu
+++ b/stream_compaction/thrust.cu
@@ -7,22 +7,26 @@
 #include "thrust.h"
 namespace StreamCompaction {
-	namespace Thrust {
-		using StreamCompaction::Common::PerformanceTimer;
-		PerformanceTimer& timer()
-		{
-			static PerformanceTimer timer;
-			return timer;
-		}
-		/**
-		 * Performs prefix-sum (aka scan) on idata, storing the result into odata.
-		 */
-		void scan(int n, int *odata, const int *idata) {
-			timer().startGpuTimer();
-			// TODO use `thrust::exclusive_scan`
-			// example: for device_vectors dv_in and dv_out:
-			// thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin());
-			timer().endGpuTimer();
-		}
-	}
+	namespace Thrust {
+		using StreamCompaction::Common::PerformanceTimer;
+		PerformanceTimer& timer()
+		{
+			static PerformanceTimer timer;
+			return timer;
+		}
+		/**
+		 * Performs prefix-sum (aka scan) on idata, storing the result into odata.
+		 */
+		void scan(int n, int *odata, const int *idata) {
+			// Stage the input in device memory before timing the scan itself.
+			thrust::device_vector<int> dev_in(idata, idata + n);
+			thrust::device_vector<int> dev_out(n);
+			timer().startGpuTimer();
+			thrust::exclusive_scan(dev_in.begin(), dev_in.end(), dev_out.begin());
+			timer().endGpuTimer();
+			thrust::copy(dev_out.begin(), dev_out.end(), odata);
+		}
+	}
 }