**README.md** (154 additions & 5 deletions)

# CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* LINSHEN XIAO
* Tested on: Windows 10, Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz, 16.0GB, NVIDIA GeForce GTX 970M (Personal computer)

## Features

Parts 1–4 are implemented:

* CPU Scan
* CPU Stream Compaction
* Naive GPU Scan
* Work-Efficient GPU Scan
* Work-Efficient GPU Stream Compaction
* Thrust implementation (a minimal sketch of the call is shown after this list)
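
The Thrust version is essentially a single library call. Below is a minimal sketch of what such a wrapper does (the function name here is illustrative; the repository's actual code lives in stream_compaction/thrust.cu):

```
#include <thrust/device_vector.h>
#include <thrust/scan.h>
#include <thrust/copy.h>

// Sketch: copy the input to the device, let Thrust run the exclusive
// scan, then copy the prefix sums back to the host.
void thrustScan(int n, int *odata, const int *idata) {
    thrust::device_vector<int> dv_in(idata, idata + n);
    thrust::device_vector<int> dv_out(n);
    thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin());
    thrust::copy(dv_out.begin(), dv_out.end(), odata);
}
```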

Extra Credit:

* Part 5 implemented. When I first compared the work-efficient method with the others, it was even slower than the CPU version, because the branch in each kernel left large numbers of launched threads unused. I removed those branches in new functions; check **newkernScanUp**, **newkernScanDown** and **newkernScan** (compared with **kernScanUp**, **kernScanDown** and **kernScan**) in efficient.cu for details, and see the sketch after this list.
* The following analysis uses the new, branch-free work-efficient method, so that its "efficiency" shows properly.
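
The core of the fix, as a simplified sketch (names and signatures here are illustrative, not the repository's exact code; the real kernels in efficient.cu also cover the downsweep):

```
// Old upsweep: one thread per array element, with a branch that leaves
// all but n/stride threads idle at each depth of the reduction tree.
__global__ void kernScanUpBranchy(int n, int stride, int *data) {
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    if (index >= n) return;
    if (index % stride == 0) {  // most threads fail this test and do nothing
        data[index + stride - 1] += data[index + stride / 2 - 1];
    }
}

// New upsweep: launch exactly numActive = n / stride threads and map each
// compact thread id to the element it owns, so no thread is wasted.
__global__ void kernScanUpCompact(int numActive, int stride, int *data) {
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    if (tid >= numActive) return;
    int index = tid * stride;  // every surviving thread does real work
    data[index + stride - 1] += data[index + stride / 2 - 1];
}
```

On the host, depth d launches the compact kernel with numActive = n >> (d + 1) threads, so the grid shrinks as the reduction tree narrows instead of keeping n mostly-idle threads in flight.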


## Performance Analysis

### Comparison of GPU Scan (Naive, Work-Efficient) across different block sizes (times in ms)

![](img/scanblocksize.png)

| Block size | Naive scan, power-of-two | Naive scan, non-power-of-two | Work-efficient scan, power-of-two | Work-efficient scan, non-power-of-two |
|:--------------|:----------------------------|:--------------------------------|:-------------------------------------|:-----------------------------------------|
| 32 | 2.31232 | 2.30061 | 1.0649 | 1.03421 |
| 64 | 1.75728 | 1.75683 | 1.02925 | 1.06506 |
| 128 | 1.76538 | 1.77981 | 1.02992 | 1.04938 |
| 256 | 1.79238 | 1.76445 | 1.03142 | 1.03942 |
| 512 | 1.75421 | 1.75085 | 1.10819 | 1.03946 |
| 1024 | 1.75885 | 1.75379 | 1.06323 | 1.05293 |

The array size is 2^20 in the tests above. The data shows that once the block size is larger than 32, there is no major performance difference between block sizes, so I use a block size of 1024 in the following tests.

### Comparison of GPU Scan (Naive, Work-Efficient & Thrust) and the CPU version across array sizes (times in ms)

![](img/scanarraysize.png)

| Array size | CPU scan, power-of-two | CPU scan, non-power-of-two | Naive scan, power-of-two | Naive scan, non-power-of-two | Work-efficient scan, power-of-two | Work-efficient scan, non-power-of-two | Thrust scan, power-of-two | Thrust scan, non-power-of-two |
|:------------|:------------------------|:----------------------------|:--------------------------|:------------------------------|:-----------------------------------|:---------------------------------------|:---------------------------|:-------------------------------|
| 2^8 | 0.00079 | 0.000395 | 0.046112 | 0.035296 | 0.106176 | 0.074688 | 0.03664 | 0.013728 |
| 2^10 | 0.001975 | 0.001975 | 0.047552 | 0.055584 | 0.106112 | 0.098848 | 0.023136 | 0.013696 |
| 2^12 | 0.007111 | 0.007506 | 0.067552 | 0.065088 | 0.120032 | 0.182144 | 0.029984 | 0.020512 |
| 2^14 | 0.030419 | 0.061234 | 0.074976 | 0.064032 | 0.145696 | 0.140928 | 0.05952 | 0.047424 |
| 2^16 | 0.118124 | 0.120889 | 0.116256 | 0.115424 | 0.185984 | 0.18752 | 0.153856 | 0.163648 |
| 2^18 | 1.34716 | 0.503309 | 0.408064 | 0.397536 | 0.313952 | 0.298272 | 0.228928 | 0.36768 |
| 2^20 | 2.25462 | 2.55684 | 1.7671 | 1.75427 | 1.08269 | 1.07862 | 0.310208 | 0.406784 |
| 2^22 | 8.39032 | 8.32909 | 7.86064 | 7.81309 | 3.96986 | 3.93744 | 0.865472 | 0.864512 |
| 2^24 | 37.5455 | 35.584 | 34.8642 | 34.8712 | 15.5057 | 15.5434 | 2.62048 | 2.70346 |

### Comparison of GPU Stream Compaction (Work-Efficient) and the CPU versions across array sizes (times in ms)

![](img/compactarraysize.png)

| Array size | CPU compact without scan, power-of-two | CPU compact without scan, non-power-of-two | CPU compact with scan | Work-efficient compact, power-of-two | Work-efficient compact, non-power-of-two |
|:------------|:----------------------------------------|:--------------------------------------------|:-----------------------|:--------------------------------------|:------------------------------------------|
| 2^8 | 0.001185 | 0.001186 | 0.003951 | 0.094752 | 0.093824 |
| 2^10 | 0.00395 | 0.00316 | 0.024493 | 0.176832 | 0.10768 |
| 2^12 | 0.011851 | 0.012247 | 0.076246 | 0.174656 | 0.202976 |
| 2^14 | 0.047407 | 0.063605 | 0.126025 | 0.20528 | 0.160288 |
| 2^16 | 0.193975 | 0.185284 | 0.463013 | 0.211744 | 0.243264 |
| 2^18 | 1.04138 | 0.776692 | 2.61413 | 0.368416 | 0.360832 |
| 2^20 | 3.66025 | 3.52553 | 8.02608 | 1.46659 | 1.40083 |
| 2^22 | 18.9922 | 12.979 | 39.1909 | 5.3247 | 5.31274 |
| 2^24 | 50.8401 | 57.8019 | 135.726 | 20.4587 | 20.4935 |

We can tell from the data above that, for both scan and stream compaction:

* When the array size is below about 2^16, the CPU method is still faster than the GPU versions. Elapsed time: CPU < Thrust < Naive < Work-efficient.
* When the array size is above 2^16, the GPU's parallelism wins, and the advantage grows with the number of elements. Elapsed time: Thrust < Work-efficient < Naive < CPU.

## Questions

* Roughly optimize the block sizes of each of your implementations for minimal run time on your GPU.

Check [**Performance Analysis**](#performance-analysis) for graph comparison.

As mentioned above, there is no major performance difference between block sizes once the block size exceeds 32.

* Compare all of these GPU Scan implementations (Naive, Work-Efficient, and Thrust) to the serial CPU version of Scan. Plot a graph of the comparison (with array size on the independent axis).

Check [**Performance Analysis**](#performance-analysis) for graph comparison.

In conclusion, when the array size is small or moderate, the CPU implementation beats the GPU versions; when the array is very large, the GPU's parallel computation is faster. The Thrust method is also very stable no matter how the array size changes.

As for the Thrust implementation, I suspect it uses shared memory, which would make it more efficient than my other GPU methods because shared-memory access is much faster than global-memory access.
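
I have not checked Thrust's source, but as an illustration of why shared memory helps, here is a minimal single-block Hillis-Steele scan (an inclusive-scan sketch, not Thrust's actual code): each input is read from global memory once and each output written once, while every intermediate step stays in fast on-chip memory.

```
// Single-block inclusive scan in shared memory (Hillis-Steele),
// double-buffered to avoid read/write races. Launch for n <= 1024 as:
//   kernSharedScan<<<1, 1024, 2 * 1024 * sizeof(int)>>>(n, dev_out, dev_in);
__global__ void kernSharedScan(int n, int *odata, const int *idata) {
    extern __shared__ int temp[];  // 2 * blockDim.x ints
    int tid = threadIdx.x;
    int pout = 0, pin = 1;
    temp[tid] = (tid < n) ? idata[tid] : 0;  // one global read per element
    __syncthreads();
    for (int offset = 1; offset < blockDim.x; offset *= 2) {
        pout = 1 - pout;  // ping-pong between the two buffer halves
        pin = 1 - pout;
        if (tid >= offset)
            temp[pout * blockDim.x + tid] =
                temp[pin * blockDim.x + tid] + temp[pin * blockDim.x + tid - offset];
        else
            temp[pout * blockDim.x + tid] = temp[pin * blockDim.x + tid];
        __syncthreads();
    }
    if (tid < n) odata[tid] = temp[pout * blockDim.x + tid];  // one global write
}
```

A full-size scan would run this per block, scan the per-block sums, then add those back to each block; that multi-level structure is roughly what optimized libraries do.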

* Write a brief explanation of the phenomena you see here.

The performance bottlenecks differ between methods. The CPU implementation has no device-memory overhead, so it is faster when the array is small, but it falls behind the GPU versions when the array is very large because it computes on a single thread.

Among the GPU implementations, the work-efficient method makes more kernel launches and passes than the naive method, so it can be slower when the array is small. The old work-efficient kernels were far slower still, because their branches left large numbers of threads unused; performance improved only after the branches were removed. Finally, unlike the Thrust implementation, my kernels use no shared memory, which would further improve performance.

* Paste the output of the test program into a triple-backtick block in your README.

Check [**Output**](#output).

## Output

Array size: 2^20; Block size: 1024
```
****************
** SCAN TESTS **
****************
[ 25 26 46 21 5 19 20 6 5 10 15 2 47 ... 13 0 ]
==== cpu scan, power-of-two ====
elapsed time: 2.19694ms (std::chrono Measured)
[ 0 25 51 97 118 123 142 162 168 173 183 198 200 ... 25692588 25692601 ]
==== cpu scan, non-power-of-two ====
elapsed time: 2.12583ms (std::chrono Measured)
[ 0 25 51 97 118 123 142 162 168 173 183 198 200 ... 25692495 25692534 ]
passed
==== naive scan, power-of-two ====
elapsed time: 1.75926ms (CUDA Measured)
passed
==== naive scan, non-power-of-two ====
elapsed time: 1.74765ms (CUDA Measured)
passed
==== work-efficient scan, power-of-two ====
elapsed time: 1.07485ms (CUDA Measured)
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 1.09978ms (CUDA Measured)
passed
==== thrust scan, power-of-two ====
elapsed time: 0.440448ms (CUDA Measured)
passed
==== thrust scan, non-power-of-two ====
elapsed time: 0.343456ms (CUDA Measured)
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
[ 0 2 0 0 0 3 3 0 1 0 1 2 2 ... 2 0 ]
==== cpu compact without scan, power-of-two ====
elapsed time: 4.24494ms (std::chrono Measured)
[ 2 3 3 1 1 2 2 2 2 2 2 3 3 ... 3 2 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 3.14983ms (std::chrono Measured)
[ 2 3 3 1 1 2 2 2 2 2 2 3 3 ... 1 3 ]
passed
==== cpu compact with scan ====
elapsed time: 9.90341ms (std::chrono Measured)
[ 2 3 3 1 1 2 2 2 2 2 2 3 3 ... 3 2 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 1.45267ms (CUDA Measured)
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 1.38931ms (CUDA Measured)
passed
```
Binary file added img/compactarraysize.png
Binary file added img/scanarraysize.png
Binary file added img/scanblocksize.png
**src/main.cpp** (4 additions & 1 deletion)

#include <stream_compaction/thrust.h>
#include "testing_helpers.hpp"

const int SIZE = 1 << 20; // feel free to change the size of array (was 1 << 8)
const int NPOT = SIZE - 3; // Non-Power-Of-Two
int a[SIZE], b[SIZE], c[SIZE];

// ... inside main(int argc, char* argv[]), around the Thrust scan test:
printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);
//std::cout << "a[0]=" << b[0] << ",b[0]=" << c[0] << std::endl;
//std::cout << "a[1]=" << b[1] << ",b[1]=" << c[1] << std::endl;
//std::cout << "a[2]=" << b[2] << ",b[2]=" << c[2] << std::endl;

zeroArray(SIZE, c);
printDesc("thrust scan, non-power-of-two");
**stream_compaction/common.cu** (22 additions)

namespace StreamCompaction {
/**
* Maps an array to an array of 0s and 1s for stream compaction. Elements
* which map to 0 will be removed, and elements which map to 1 will be kept.
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n)
{
return;
}
// flag non-zero elements with 1, zeros with 0
bools[index] = idata[index] ? 1 : 0;
}

/**
* Performs scatter on an array. That is, for each element in idata,
* if bools[idx] == 1, it copies idata[idx] to odata[indices[idx]].
*/
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices) {
// write each flagged element to its compacted output position
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n)
{
return;
}
if (bools[index])
{
odata[indices[index]] = idata[index];
}
}

}
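
These two kernels are the map and scatter stages of GPU compaction. Below is a sketch of how the work-efficient compact presumably glues them together with scan (scanOnDevice is an assumed wrapper around the scan kernels, not the repository's exact code; all dev_* pointers are device buffers of n ints, and blockSize comes from common.h):

```
int compactOnDevice(int n, int *dev_odata, const int *dev_idata,
                    int *dev_bools, int *dev_indices) {
    dim3 fullBlocksPerGrid((n + blockSize - 1) / blockSize);
    // 1. map: dev_bools[i] = 1 if dev_idata[i] != 0, else 0
    StreamCompaction::Common::kernMapToBoolean<<<fullBlocksPerGrid, blockSize>>>(
        n, dev_bools, dev_idata);
    // 2. scan: exclusive prefix sum of the flags = output index per element
    scanOnDevice(n, dev_indices, dev_bools);
    // 3. scatter: each flagged element lands at its scanned index
    StreamCompaction::Common::kernScatter<<<fullBlocksPerGrid, blockSize>>>(
        n, dev_odata, dev_idata, dev_bools, dev_indices);
    // compacted length = last scanned index + last flag
    int lastIndex = 0, lastFlag = 0;
    cudaMemcpy(&lastIndex, dev_indices + n - 1, sizeof(int), cudaMemcpyDeviceToHost);
    cudaMemcpy(&lastFlag, dev_bools + n - 1, sizeof(int), cudaMemcpyDeviceToHost);
    return lastIndex + lastFlag;
}
```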
**stream_compaction/common.h** (2 additions)

#define FILENAME (strrchr(__FILE__, '/') ? strrchr(__FILE__, '/') + 1 : __FILE__)
#define checkCUDAError(msg) checkCUDAErrorFn(msg, FILENAME, __LINE__)

#define blockSize 1024

/**
* Check for CUDA errors; print and exit if there was a problem.
*/
**stream_compaction/cpu.cu** (59 additions & 2 deletions)

namespace StreamCompaction {
/**
* CPU scan (exclusive prefix sum); for example, [1, 2, 3, 4] scans to [0, 1, 3, 6].
* For performance analysis, this is supposed to be a simple for loop.
* (Optional) For better understanding before starting moving to GPU, you can simulate your GPU scan in this function first.
*/
void scan(int n, int *odata, const int *idata) {
if (n == 0)
{
return;
}
timer().startCpuTimer();
// exclusive scan: each output is the sum of all preceding inputs
odata[0] = 0;
for (int i = 1; i < n; i++)
{
odata[i] = idata[i-1] + odata[i-1];
}

timer().endCpuTimer();
}

// Same exclusive scan without the timer, for use inside compactWithScan
// while its CPU timer is already running.
void scannotimer(int n, int *odata, const int *idata) {
if (n == 0)
{
return;
}
odata[0] = 0;
for (int i = 1; i < n; i++)
{
odata[i] = idata[i - 1] + odata[i - 1];
}
}

/**
* CPU stream compaction without using the scan function.
*
* @returns the number of elements remaining after compaction.
*/
int compactWithoutScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// copy each non-zero element forward to the next free slot
int num = 0;
for (int i = 0; i < n; i++)
{
if (idata[i])
{
odata[num] = idata[i];
num++;
}
}
timer().endCpuTimer();
return num;
}

/**
* CPU stream compaction using scan and scatter, like the parallel version.
*
* @returns the number of elements remaining after compaction.
*/
int compactWithScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// map: flag non-zero elements with 1, zeros with 0
int *mapped = (int*)malloc(n * sizeof(int));
for (int i = 0; i < n; i++)
{
mapped[i] = idata[i] ? 1 : 0;
}
// scan: exclusive prefix sum of the flags gives each kept element's output index
int *scanned = (int*)malloc(n * sizeof(int));
scannotimer(n, scanned, mapped);
// scatter: copy each flagged element to its scanned index
int num = 0;
for (int i = 0; i < n; i++)
{
if (mapped[i])
{
odata[scanned[i]] = idata[i];
num++;
}
}
timer().endCpuTimer();
free(mapped);
free(scanned);
return num;
}
}
}