160 changes: 155 additions & 5 deletions README.md
@@ -3,11 +3,161 @@ CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* Yi Guo
* Tested on: Windows 8.1, Intel(R) Core(TM) i5-4200M CPU @ 2.50GHz, 8GB RAM, NVIDIA GeForce 840M (personal notebook)

## Description
In this project, I implemented the parallel stream compaction algorithm: given an input array, it removes the elements that fail a predicate (here, the zeros) while preserving the order of the rest, so compacting `[2, 0, 1, 0, 3]` yields `[2, 1, 3]`. For more details, see `INSTRUCTION.md`.

## Screenshots
These are the test results of all the methods I implemented.

![](./img/result1.png)

![](./img/result2.png)

## Performance Analysis

* **Block Size**

I compared the time cost of the scan function under different block sizes. The results are shown in the graph below.

![](./img/blocksizeComparison.png)

Changing the block size makes no significant difference. There is one thing we should do, though: when we sweep up or sweep down the array, we should recompute the number of blocks (the grid size) on every iteration. Since each iteration only touches a subset of the elements, adjusting the grid size per iteration avoids wasting compute resources. So it should be something like:

```
for (int d = 0; d < layer; d++)
{
    int nodeNum = 1 << (layer - 1 - d);
    blocknum = nodeNum / threadPerBlock + 1;
    KernUpSweep<<<blocknum, threadPerBlock>>>(d, dev_Scatter, nodeNum);
}
```

instead of:

```
blocknum = oLength / threadPerBlock + 1;
for (int d = 0; d < layer; d++)
{
    int nodeNum = 1 << (layer - 1 - d);
    KernUpSweep<<<blocknum, threadPerBlock>>>(d, dev_Scatter, nodeNum);
}
```

* **Efficiency of different scan methods**

I compared the efficiency of the different scan methods and plotted the results below.
![](./img/ScanComparison.png)

As the plot shows, when the array is small, `cpu scan` is a little faster than all of the GPU methods. But when the array is very large, `efficient scan` on the GPU is much faster than `cpu scan`. From the algorithm perspective, a GPU scan should always beat a CPU scan in the number of parallel steps: `cpu scan` takes O(n) sequential steps, while the GPU version needs only O(log n) parallel steps (its total work is still O(n)). From the architecture perspective, however, the GPU pays a large latency on every global memory access (I store the data in global memory in this project; this could be optimized by using shared memory). With a huge amount of data, massive parallelism hides that memory latency. With only a small amount of data, the GPU has no obvious advantage over the CPU, and can even be less efficient.
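
For reference, the straightforward O(n) CPU exclusive scan is just a single loop. A minimal sketch, my own illustration rather than the simulated-GPU version used in this repo:

```
void scanSimple(int n, int *odata, const int *idata) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        odata[i] = sum;   // exclusive scan: output excludes idata[i]
        sum += idata[i];
    }
}
```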

One thing I don't quite understand is that `naive scan` takes the most time when the array is very large. I expected `naive scan` to be more efficient than `cpu scan`. A likely explanation is that the naive scan performs O(n log n) additions in total, versus O(n) for the CPU scan, so its extra work and global memory traffic can outweigh the benefit of parallelism for large arrays.
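
To make that work count concrete, here is a sketch of one step of the naive (Hillis-Steele) scan; the kernel name and signature are illustrative, not the repo's actual code. Each of the ilog2ceil(n) passes launches a thread per element, which is where the O(n log n) additions come from (and the result is an inclusive scan, so a final shift is needed to make it exclusive):

```
// One of ilog2ceil(n) passes; idata and odata must be separate (ping-pong) buffers.
__global__ void kernNaiveScanStep(int n, int d, int *odata, const int *idata) {
    int k = (blockIdx.x * blockDim.x) + threadIdx.x;
    if (k >= n) return;
    if (k >= (1 << (d - 1)))
        odata[k] = idata[k - (1 << (d - 1))] + idata[k];  // add element 2^(d-1) to the left
    else
        odata[k] = idata[k];                              // nothing to add yet
}
```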

* **Thrust scan**

As the plot above shows, `thrust::scan` is more efficient than the scan methods I implemented on the GPU. I can think of two possible reasons. One is that `thrust::scan` probably stages data in shared memory instead of reading and writing global memory at every step, so it pays the global memory latency far fewer times. The other is that thrust may apply further optimizations to the underlying scan algorithm. One hint is that `thrust::scan` is much cheaper when the array size is not a power of two, which suggests that a power-of-two size may be the worst case for its algorithm.
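
For comparison, calling Thrust directly is only a few lines. A minimal usage sketch; the wrapper function name is my own illustration, while `thrust::exclusive_scan` itself is the real API:

```
#include <thrust/copy.h>
#include <thrust/device_vector.h>
#include <thrust/scan.h>

void thrustScanExample(int n, int *odata, const int *idata) {
    thrust::device_vector<int> dv_in(idata, idata + n);  // copy host -> device
    thrust::device_vector<int> dv_out(n);
    thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin());
    thrust::copy(dv_out.begin(), dv_out.end(), odata);   // copy device -> host
}
```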

* **Test Results**

```
****************
** SCAN TESTS **
****************
[ 34 28 17 4 6 42 43 24 15 44 27 19 13 ... 43 0 ]
==== cpu scan, power-of-two ====
elapsed time: 0ms (std::chrono Measured)
[ 0 34 62 79 83 89 131 174 198 213 257 284 303 ... 24338 24381 ]
==== cpu scan, non-power-of-two ====
elapsed time: 0ms (std::chrono Measured)
[ 0 34 62 79 83 89 131 174 198 213 257 284 303 ... 24197 24245 ]
passed
==== naive scan, power-of-two ====
elapsed time: 0.057184ms (CUDA Measured)
passed
==== naive scan, non-power-of-two ====
elapsed time: 0.057216ms (CUDA Measured)
passed
==== work-efficient scan, power-of-two ====
elapsed time: 0.157728ms (CUDA Measured)
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 0.153376ms (CUDA Measured)
passed
==== thrust scan, power-of-two ====
elapsed time: 0.156192ms (CUDA Measured)
passed
==== thrust scan, non-power-of-two ====
elapsed time: 0.023776ms (CUDA Measured)
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
[ 2 0 1 2 2 0 1 2 1 0 3 1 1 ... 3 0 ]
==== cpu compact without scan, power-of-two ====
elapsed time: 0.003695ms (std::chrono Measured)
[ 2 1 2 2 1 2 1 3 1 1 1 2 1 ... 1 3 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 0.004105ms (std::chrono Measured)
[ 2 1 2 2 1 2 1 3 1 1 1 2 1 ... 2 2 ]
passed
==== cpu compact with scan ====
elapsed time: 0.009853ms (std::chrono Measured)
[ 2 1 2 2 1 2 1 3 1 1 1 2 1 ... 1 3 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 0.212384ms (CUDA Measured)
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 0.219104ms (CUDA Measured)
passed
```
## Extra Credit

* **Efficient scan optimization**

Compared to the basic algorithm, I optimized the `KernUpSweep` and `KernDownSweep` kernel functions by reducing the branching inside them. Instead of having every thread check whether its index is a multiple of a power of two, each active thread computes the index it needs to update directly.

```
__global__ void KernUpSweep(int d, int *idata, int nodeNum)
{
    int idx = (blockIdx.x * blockDim.x) + threadIdx.x;
    if (idx >= nodeNum) return;
    // Each active thread adds the left child's partial sum into the right child;
    // no divisibility check is needed, so there is no branch divergence.
    idata[(idx + 1) * (1 << (d + 1)) - 1] += idata[idx * (1 << (d + 1)) + (1 << d) - 1];
}

__global__ void KernDownSweep(int d, int *idata, int nodeNum)
{
    int idx = (blockIdx.x * blockDim.x) + threadIdx.x;
    if (idx >= nodeNum) return;
    int nodeIdx = idx * (1 << (d + 1)) + (1 << d) - 1;
    // Swap the left child with the parent and accumulate the prefix on the way down.
    int temp = idata[nodeIdx];
    idata[nodeIdx] = idata[nodeIdx + (1 << d)];
    idata[nodeIdx + (1 << d)] += temp;
}
```
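
For example, at depth d = 1 on a full array of n = 2^layer elements, only nodeNum = n/4 threads are launched, and thread idx updates element 4*idx + 3 by adding element 4*idx + 1; no active thread diverges, whereas the unoptimized version would launch n threads and mask most of them off with a modulo check.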

Calling the kernel functions:
```
// Up-sweep: at depth d only nodeNum = 2^(layer-1-d) nodes are active,
// so the grid size is recomputed every iteration.
for (int d = 0; d < layer; d++)
{
    int nodeNum = 1 << (layer - 1 - d);
    int blocknum = nodeNum / threadPerBlock + 1;
    KernUpSweep<<<blocknum, threadPerBlock>>>(d, dev_Data, nodeNum);
}
// Zero the root (last element) on the device before the down-sweep.
cudaMemset(dev_Data + oLength - 1, 0, sizeof(int));
checkCUDAError("cudaMemset failed!");
for (int d = layer - 1; d >= 0; d--)
{
    int nodeNum = 1 << (layer - 1 - d);
    int blocknum = nodeNum / threadPerBlock + 1;
    KernDownSweep<<<blocknum, threadPerBlock>>>(d, dev_Data, nodeNum);
}
```


Binary file added img/ScanComparison.png
Binary file added img/blocksizeComparison.png
Binary file added img/result1.png
Binary file added img/result2.png
2 changes: 1 addition & 1 deletion src/main.cpp
@@ -13,7 +13,7 @@
#include <stream_compaction/thrust.h>
#include "testing_helpers.hpp"

const int SIZE = 1 << 8; // feel free to change the size of array
const int SIZE = 1 << 10; // feel free to change the size of array
const int NPOT = SIZE - 3; // Non-Power-Of-Two
int a[SIZE], b[SIZE], c[SIZE];

61 changes: 34 additions & 27 deletions stream_compaction/common.cu
@@ -1,39 +1,46 @@
#include "common.h"

void checkCUDAErrorFn(const char *msg, const char *file, int line) {
    cudaError_t err = cudaGetLastError();
    if (cudaSuccess == err) {
        return;
    }

    fprintf(stderr, "CUDA error");
    if (file) {
        fprintf(stderr, " (%s:%d)", file, line);
    }
    fprintf(stderr, ": %s: %s\n", msg, cudaGetErrorString(err));
    exit(EXIT_FAILURE);
}


namespace StreamCompaction {
    namespace Common {

        /**
         * Maps an array to an array of 0s and 1s for stream compaction. Elements
         * which map to 0 will be removed, and elements which map to 1 will be kept.
         */
        __global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
            int idx = (blockIdx.x * blockDim.x) + threadIdx.x;
            if (idx >= n) return;
            bools[idx] = idata[idx] ? 1 : 0;
        }

        /**
         * Performs scatter on an array. That is, for each element in idata,
         * if bools[idx] == 1, it copies idata[idx] to odata[indices[idx]].
         */
        __global__ void kernScatter(int n, int *odata,
                const int *idata, const int *bools, const int *indices) {
            int idx = (blockIdx.x * blockDim.x) + threadIdx.x;
            if (idx >= n) return;
            if (bools[idx])
                odata[indices[idx]] = idata[idx];
        }

    }
}
129 changes: 84 additions & 45 deletions stream_compaction/cpu.cu
@@ -1,50 +1,89 @@
#include <cstdio>
#include <cstring>
#include <iostream>
#include "cpu.h"
#include "common.h"

namespace StreamCompaction {
    namespace CPU {
        using StreamCompaction::Common::PerformanceTimer;
        PerformanceTimer& timer()
        {
            static PerformanceTimer timer;
            return timer;
        }

        /**
         * CPU scan (prefix sum).
         * For performance analysis, this is supposed to be a simple for loop.
         * This version instead simulates the GPU work-efficient scan (up-sweep
         * then down-sweep) over the next power-of-two range, so odata must
         * have room for (1 << ilog2ceil(n)) elements.
         */
        void scan(int n, int *odata, const int *idata) {
            if (n <= 0) return;
            int layer = ilog2ceil(n);
            int oLength = 1 << layer;
            memcpy(odata, idata, n * sizeof(int));
            // Zero the padding so the partial sums in the tree are well defined.
            for (int i = n; i < oLength; i++) odata[i] = 0;

            // Uncomment the timer here if you want to test the efficiency of the scan function
            //timer().startCpuTimer();
            // Up-sweep: build partial sums bottom-up.
            for (int d = 0; d < layer; d++) {
                for (int k = 0; k < oLength; k += (1 << (d + 1))) {
                    odata[k + (1 << (d + 1)) - 1] += odata[k + (1 << d) - 1];
                }
            }
            // Down-sweep: zero the root, then propagate prefixes top-down.
            odata[oLength - 1] = 0;
            for (int d = layer - 1; d >= 0; d--) {
                for (int k = 0; k < oLength; k += (1 << (d + 1))) {
                    int nodeIdx = k + (1 << d) - 1;
                    int temp = odata[nodeIdx];
                    odata[nodeIdx] = odata[nodeIdx + (1 << d)];
                    odata[nodeIdx + (1 << d)] += temp;
                }
            }
            //timer().endCpuTimer();
        }

        /**
         * CPU stream compaction without using the scan function.
         *
         * @returns the number of elements remaining after compaction.
         */
        int compactWithoutScan(int n, int *odata, const int *idata) {
            if (n <= 0) return -1;
            int num = 0;
            timer().startCpuTimer();
            for (int i = 0; i < n; i++) {
                if (idata[i])
                    odata[num++] = idata[i];
            }
            timer().endCpuTimer();
            return num;
        }

        /**
         * CPU stream compaction using scan and scatter, like the parallel version.
         *
         * @returns the number of elements remaining after compaction.
         */
        int compactWithScan(int n, int *odata, const int *idata) {
            if (n <= 0) return -1;
            timer().startCpuTimer();
            // Map to 0/1 flags.
            for (int i = 0; i < n; i++) {
                odata[i] = idata[i] ? 1 : 0;
            }
            // Exclusive scan of the flags, in place.
            scan(n, odata, odata);
            // The exclusive scan excludes the last element, so count it
            // separately if it is kept.
            int num = odata[n - 1] + (idata[n - 1] ? 1 : 0);
            // Scatter in place: odata[i] is the output index for element i.
            // Iterating in increasing i is safe because the exclusive-scan
            // index never exceeds i, so no unread entry is overwritten.
            for (int i = 0; i < n; i++) {
                if (idata[i])
                    odata[odata[i]] = idata[i];
            }
            timer().endCpuTimer();
            return num;
        }
    }
}