88 changes: 83 additions & 5 deletions README.md
@@ -3,11 +3,89 @@ CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* (TODO) YOUR NAME HERE
* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
* Wenli Zhao
* Tested on: Windows 7, i7-6700 CPU @ 3.40GHz, NVIDIA Quadro K620 (Moore 100C Lab)

### (TODO: Your README)
### README
This project implements GPU stream compaction in CUDA. The implemented features include:
1. CPU scan and stream compaction.
   * Used primarily as a performance baseline.
2. Naive GPU scan algorithm.
3. Work-efficient GPU scan and stream compaction algorithm.
4. A call into Thrust's scan implementation.

Include analysis, etc. (Remember, this is public, so don't put
anything here that you don't want to share with the world.)
Analysis
========
In order to analyze the performance of stream compaction, I first found the largest power of two for which my program ran correctly and then tuned the block size. A block size of 256 appeared optimal for my GPU implementations on 2^17 elements. With those settings, I collected and analyzed the runtimes of the scan implementations.
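
For reference, the launch configuration I tuned looks roughly like the following. This is a minimal sketch; `kernFill` is a toy kernel used only for illustration, not one of the project's kernels.

```
#include <cuda_runtime.h>

// Toy kernel used only to illustrate the launch configuration.
__global__ void kernFill(int n, int *data, int value) {
    int index = threadIdx.x + (blockIdx.x * blockDim.x);
    if (index >= n) {
        return;
    }
    data[index] = value;
}

void launchExample(int n, int *dev_data) {
    const int blockSize = 256;  // chosen after timing runs at 2^17 elements
    dim3 fullBlocksPerGrid((n + blockSize - 1) / blockSize);  // round up so every element gets a thread
    kernFill<<<fullBlocksPerGrid, blockSize>>>(n, dev_data, 0);
}
```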

### Figure 1
![](img/chart.png)

### Figure 2
#### Data corresponding to Figure 1
![](img/image.png)

Figure 1 shows array size vs. the runtime of each implementation in ms. Unfortunately, the results were not what I hoped for: my work-efficient implementation is slower than my naive implementation, which in turn is slower than my CPU implementation. Several factors could explain this. One is the amount of global memory access performed in my work-efficient implementation; as the array size increases, those accesses become more and more costly.
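
To make the memory-access point concrete, here is a sketch of the general upsweep pattern of a work-efficient (Blelloch) scan, not necessarily my exact kernel; the names `kernUpSweep`, `blockSize`, and `dev_data` are illustrative. Every level launches a kernel whose reads and writes all go through global memory.

```
#include <cuda_runtime.h>

// One upsweep level of a Blelloch-style scan (illustrative sketch).
__global__ void kernUpSweep(int n, int stride, int *data) {
    int index = threadIdx.x + (blockIdx.x * blockDim.x);
    int k = index * stride;             // each thread owns one node at this level
    if (k + stride - 1 >= n) {
        return;
    }
    // Two global reads and one global write per node, repeated at every level.
    data[k + stride - 1] += data[k + stride / 2 - 1];
}

// Host side: one kernel launch per level, log2(n) launches in total:
// for (int stride = 2; stride <= n; stride *= 2) {
//     int nodes = n / stride;
//     dim3 blocks((nodes + blockSize - 1) / blockSize);
//     kernUpSweep<<<blocks, blockSize>>>(n, stride, dev_data);
// }
```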

It is hard to say definitively, but the work-efficient runtime appears to be leveling off, whereas the naive and CPU runtimes trend upward. The work-efficient implementation might pull ahead at larger array sizes, but my implementation is limited to 2^17 elements.

The Thrust implementation is relatively efficient but shows seemingly arbitrary spikes in runtime. I think this is Thrust-specific behavior: something underlying Thrust makes the first invocation of a scan slower. If I call the scan on the same array twice, the second call runs faster. Perhaps Thrust performs one-time setup, or caches allocations, that speeds up later invocations.
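
For context, my Thrust timing wraps a call shaped roughly like the one below (a minimal sketch, not my exact wrapper); calling the scan once as a warm-up before timing removes most of the first-call overhead.

```
#include <thrust/device_vector.h>
#include <thrust/scan.h>
#include <thrust/copy.h>

void thrustScanExample(int n, int *odata, const int *idata) {
    thrust::device_vector<int> dev_in(idata, idata + n);   // copy input to the device
    thrust::device_vector<int> dev_out(n);
    thrust::exclusive_scan(dev_in.begin(), dev_in.end(), dev_out.begin());
    thrust::copy(dev_out.begin(), dev_out.end(), odata);   // copy result back to the host
}
```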

Although my GPU scan implementations are slower than the CPU scan, the work-efficient compaction is faster than the CPU compact-with-scan.
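
For orientation, the GPU compaction follows the same map → scan → scatter shape as the CPU compact-with-scan. The sketch below is hedged: the buffer names, the local `blockSize` constant, and the `scanOnDevice` helper are stand-ins rather than the project's actual API, and `common.h` is assumed to declare the mapping and scatter kernels.

```
#include <cuda_runtime.h>
#include "common.h"   // assumed to declare kernMapToBoolean and kernScatter

// Stand-in for an exclusive scan that operates directly on device buffers
// (the project's scan() in main.cpp takes host pointers instead).
void scanOnDevice(int n, int *dev_indices, const int *dev_bools);

// Hedged sketch of the GPU compaction pipeline; all pointers are device memory,
// error checking omitted.
int compactSketch(int n, int *dev_odata, const int *dev_idata,
                  int *dev_bools, int *dev_indices) {
    const int blockSize = 256;
    dim3 blocks((n + blockSize - 1) / blockSize);

    // 1. Map each element to a 0/1 flag.
    StreamCompaction::Common::kernMapToBoolean<<<blocks, blockSize>>>(n, dev_bools, dev_idata);

    // 2. Exclusive-scan the flags; each kept element's scanned flag is its output index.
    scanOnDevice(n, dev_indices, dev_bools);

    // 3. Scatter kept elements to their computed indices.
    StreamCompaction::Common::kernScatter<<<blocks, blockSize>>>(n, dev_odata, dev_idata, dev_bools, dev_indices);

    // The scan is exclusive, so the count is the last index plus the last flag.
    int lastFlag = 0, lastIndex = 0;
    cudaMemcpy(&lastFlag, dev_bools + n - 1, sizeof(int), cudaMemcpyDeviceToHost);
    cudaMemcpy(&lastIndex, dev_indices + n - 1, sizeof(int), cudaMemcpyDeviceToHost);
    return lastIndex + lastFlag;
}
```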


```

****************
** SCAN TESTS **
****************
[ 28 17 26 2 41 12 6 34 18 12 12 33 23 ... 21 0 ]
==== cpu scan, power-of-two ====
elapsed time: 0.012919ms (std::chrono Measured)
[ 0 28 45 71 73 114 126 132 166 184 196 208 241 ... 200656 200677 ]
==== cpu scan, non-power-of-two ====
elapsed time: 0.012919ms (std::chrono Measured)
[ 0 28 45 71 73 114 126 132 166 184 196 208 241 ... 200635 200635 ]
passed
==== naive scan, power-of-two ====
elapsed time: 0.180576ms (CUDA Measured)
passed
==== naive scan, non-power-of-two ====
elapsed time: 0.171488ms (CUDA Measured)
passed
==== work-efficient scan, power-of-two ====
elapsed time: 0.698912ms (CUDA Measured)
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 1.96842ms (CUDA Measured)
passed
==== thrust scan, power-of-two ====
elapsed time: 0.080352ms (CUDA Measured)
passed
==== thrust scan, non-power-of-two ====
elapsed time: 0.030464ms (CUDA Measured)
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
==== cpu compact without scan, power-of-two ====
elapsed time: 0.034251ms (std::chrono Measured)
[ 0 28 45 71 73 114 126 132 166 184 196 208 241 ... 151975 152019 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 0.019829ms (std::chrono Measured)
[ 1 1 3 1 3 2 3 1 2 2 3 2 1 ... 3 3 ]
passed
==== cpu compact with scan ====
elapsed time: 0.055282ms (std::chrono Measured)
[ 1 1 3 1 3 2 3 1 2 2 3 2 1 ... 3 2 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 0.047008ms (CUDA Measured)
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 0.048704ms (CUDA Measured)
passed
Press any key to continue . . .
```

Binary file added img/Capture.PNG
Binary file added img/chart.png
Binary file added img/image.png
32 changes: 25 additions & 7 deletions src/main.cpp
@@ -13,8 +13,8 @@
#include <stream_compaction/thrust.h>
#include "testing_helpers.hpp"

const int SIZE = 1 << 8; // feel free to change the size of array
const int NPOT = SIZE - 3; // Non-Power-Of-Two
const long SIZE = 1 << 8; // feel free to change the size of array
const long NPOT = SIZE - 3; // Non-Power-Of-Two
int a[SIZE], b[SIZE], c[SIZE];

int main(int argc, char* argv[]) {
@@ -38,6 +38,7 @@ int main(int argc, char* argv[]) {
printElapsedTime(StreamCompaction::CPU::timer().getCpuElapsedTimeForPreviousOperation(), "(std::chrono Measured)");
printArray(SIZE, b, true);


zeroArray(SIZE, c);
printDesc("cpu scan, non-power-of-two");
StreamCompaction::CPU::scan(NPOT, c, a);
@@ -49,30 +50,38 @@ int main(int argc, char* argv[]) {
printDesc("naive scan, power-of-two");
StreamCompaction::Naive::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
//printArray(SIZE, a, true);
//printArray(SIZE, b, true);
//printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

zeroArray(SIZE, c);
printDesc("naive scan, non-power-of-two");
StreamCompaction::Naive::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
//printArray(SIZE, a, true);
//printArray(SIZE, b, true);
//printArray(SIZE, c, true);
printCmpResult(NPOT, b, c);


zeroArray(SIZE, c);
printDesc("work-efficient scan, power-of-two");
StreamCompaction::Efficient::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, b, true);
//printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

zeroArray(SIZE, c);
printDesc("work-efficient scan, non-power-of-two");
StreamCompaction::Efficient::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(NPOT, c, true);
//printArray(SIZE, b, true);
//printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);


zeroArray(SIZE, c);
printDesc("thrust scan, power-of-two");
StreamCompaction::Thrust::scan(SIZE, c, a);
@@ -87,6 +96,7 @@ int main(int argc, char* argv[]) {
//printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);


printf("\n");
printf("*****************************\n");
printf("** STREAM COMPACTION TESTS **\n");
@@ -96,7 +106,7 @@ int main(int argc, char* argv[]) {

genArray(SIZE - 1, a, 4); // Leave a 0 at the end to test that edge case
a[SIZE - 1] = 0;
printArray(SIZE, a, true);
//printArray(SIZE, a, true);

int count, expectedCount, expectedNPOT;

@@ -107,7 +117,7 @@ int main(int argc, char* argv[]) {
count = StreamCompaction::CPU::compactWithoutScan(SIZE, b, a);
printElapsedTime(StreamCompaction::CPU::timer().getCpuElapsedTimeForPreviousOperation(), "(std::chrono Measured)");
expectedCount = count;
printArray(count, b, true);
printArray(count, c, true);
printCmpLenResult(count, expectedCount, b, b);

zeroArray(SIZE, c);
@@ -139,5 +149,13 @@ int main(int argc, char* argv[]) {
//printArray(count, c, true);
printCmpLenResult(count, expectedNPOT, b, c);

//zeroArray(6, c);
//int d[7] = { 0,1,2,0,2,0,1 };
//int f[4] = { 1,2,2,1 };
//printDesc("Work efficient compact, SMALL TEST CASE");
//count = StreamCompaction::Efficient::compact(7, c, d);
//printArray(count, c, true);
//printCmpLenResult(count, 4, f, c);

system("pause"); // stop Win32 console from closing on exit
}
17 changes: 17 additions & 0 deletions stream_compaction/common.cu
@@ -24,6 +24,16 @@ namespace StreamCompaction {
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
// TODO
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n) {
return;
}
if (idata[index] != 0) {
bools[index] = 1;
}
else {
bools[index] = 0;
}
}

/**
@@ -33,6 +43,13 @@ namespace StreamCompaction {
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices) {
// TODO
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n) {
return;
}
if (bools[index] != 0) {
odata[indices[index]] = idata[index];
}
}

}
62 changes: 48 additions & 14 deletions stream_compaction/cpu.cu
@@ -1,15 +1,15 @@
#include <cstdio>
#include "cpu.h"

#include "common.h"
#include "common.h"

namespace StreamCompaction {
namespace CPU {
using StreamCompaction::Common::PerformanceTimer;
PerformanceTimer& timer()
{
static PerformanceTimer timer;
return timer;
using StreamCompaction::Common::PerformanceTimer;
PerformanceTimer& timer()
{
static PerformanceTimer timer;
return timer;
}

/**
@@ -18,21 +18,36 @@ namespace StreamCompaction {
* (Optional) For better understanding before starting moving to GPU, you can simulate your GPU scan in this function first.
*/
void scan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
timer().endCpuTimer();
timer().startCpuTimer();
//TODO
scanImplementation(n, odata, idata);
timer().endCpuTimer();

}

void scanImplementation(int n, int *odata, const int *idata) {
odata[0] = 0;
for (int i = 1; i < n; ++i) {
odata[i] = idata[i - 1] + odata[i - 1];
}
}

/**
* CPU stream compaction without using the scan function.
*
* @returns the number of elements remaining after compaction.
*/
int compactWithoutScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int count = 0;
for (int i = 0; i < n; i++) {
if (idata[i] != 0) {
odata[count] = idata[i];
count++;
}
}
timer().endCpuTimer();
return -1;
return count;
}

/**
@@ -41,10 +56,29 @@ namespace StreamCompaction {
* @returns the number of elements remaining after compaction.
*/
int compactWithScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
timer().startCpuTimer();
int *temp = new int[n];
int *temp2 = new int[n];
for (int i = 0; i < n; i++) {
if (idata[i] != 0) {
temp[i] = 1;
}
else {
temp[i] = 0;
}
temp2[i] = 0;
}
scanImplementation(n, temp2, temp);
for (int i = 0; i < n; i++) { // i < n: i <= n would read past the end of temp
if (temp[i] == 1) {
odata[temp2[i]] = idata[i];
}
}
int count = temp2[n - 1] + temp[n - 1]; // exclusive scan, so add the last flag in case the final element is nonzero
delete[] temp;
delete[] temp2;
timer().endCpuTimer();
return -1;
return count;
}
}
}
2 changes: 2 additions & 0 deletions stream_compaction/cpu.h
@@ -8,6 +8,8 @@ namespace StreamCompaction {

void scan(int n, int *odata, const int *idata);

void scanImplementation(int n, int *odata, const int *idata);

int compactWithoutScan(int n, int *odata, const int *idata);

int compactWithScan(int n, int *odata, const int *idata);