109 changes: 103 additions & 6 deletions README.md
@@ -3,12 +3,109 @@ CUDA Stream Compaction

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* (TODO) YOUR NAME HERE
* (TODO) [LinkedIn](), [personal website](), [twitter](), etc.
* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
* Ruijun(Daniel) Zhong
* [LinkedIn](https://www.linkedin.com/in/daniel-z-73158b152/)
* [Personal Website](https://www.danielzhongportfolio.com/)
* Tested on: Windows 11 Pro, 12th Gen Intel(R) Core(TM) i7-12700K 3.61 GHz, 32.0 GB RAM, NVIDIA GeForce RTX 3070 Ti (personal computer)

### (TODO: Your README)
# Analysis

Include analysis, etc. (Remember, this is public, so don't put
anything here that you don't want to share with the world.)
### Why is My GPU Approach So Slow? (Extra Credit)

1. The UpSweep and DownSweep kernels are launched with a fixed number of blocks and threads, but not all threads are utilized in every iteration; whether a thread does any work depends on the depth `d` and the thread's index.

2. I don't think early termination for threads would be efficient, because the 'lazy' threads still need to check the condition based on their index and the depth `d`. If the condition is not met, a thread simply does no work; that does not mean it has terminated early. It remains active and continues to consume GPU resources.

3. The solution is to change the number of blocks and threads dynamically, based on how many threads are actually active at each depth.
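A host-side sketch of that fix (hypothetical helper names; it assumes the standard compacted-indexing trick, where depth `d` of a sweep over `n` elements, `n` a power of two, only touches `n >> (d + 1)` nodes):

```cpp
#include <cassert>

// At depth d of an up-sweep or down-sweep over n elements (n a power
// of two), only n >> (d + 1) tree nodes are modified, so with a
// compacted index mapping we can launch exactly that many threads.
int activeThreads(int n, int d) {
    return n >> (d + 1);
}

// Launch just enough blocks to cover the active threads at this depth,
// instead of a fixed grid sized for all n elements.
int blocksNeeded(int n, int d, int blockSize) {
    int active = activeThreads(n, d);
    int blocks = (active + blockSize - 1) / blockSize;  // round up
    return blocks > 0 ? blocks : 1;                     // at least one block
}
```

For `n = 1 << 20` and a block size of 256, depth 0 needs 2048 blocks while depth 11 needs only one; the kernel would then map its compacted thread index `i` to the array position `(i + 1) * (1 << (d + 1)) - 1` rather than testing `index % stride` and returning.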

### Block Size Impact (Array Size 2^26)
![](img/BlockSize.png)
1. For all three algorithms, as the block size increases from 2 to 32, execution time generally decreases.

After a block size of 32, the behavior starts to diverge:
* Naive Scan: execution time continues to decrease slightly until a block size of 128, after which it starts to increase. It starts out as the slowest algorithm at small block sizes and improves significantly as the block size grows, but it becomes slower again at the largest block sizes.
* Efficient Scan: execution time decreases until a block size of 128, then starts to increase, with a sudden jump at a block size of 1024. It improves dramatically at first and is the fastest algorithm for block sizes between 64 and 512, but its performance drops significantly at 1024.
* Efficient Compact: follows a similar trend to Efficient Scan but is consistently slightly slower at every block size.

2. Even though Efficient Scan and Efficient Compact are faster than Naive Scan, there may be trade-offs such as memory consumption and algorithmic complexity.

### Array Size Impact (Block Size 256)
![](img/ArraySize.png)
1. For smaller array sizes, the GPU scans are slower than the CPU scan, probably because of the overhead of setting up and launching GPU kernels. As the array size grows, however, the CPU scan becomes slower and slower.
2. The chart shows that Efficient Scan is faster than Naive Scan, and the gap widens as the array size grows. The Thrust scan is much faster than my implementations because I have not yet applied any optimizations to either Naive Scan or Efficient Scan.
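For context, the "(std::chrono Measured)" numbers in the output below wrap a plain loop with host timers; a minimal sketch of that measurement pattern (hypothetical function name, scan body mirroring the CPU implementation in cpu.cu):

```cpp
#include <chrono>
#include <vector>

// Exclusive prefix sum on the CPU, timed the same way as the
// "(std::chrono Measured)" results: start timer, run loop, stop timer.
// Returns the elapsed time in milliseconds.
double timedCpuScan(const std::vector<int>& in, std::vector<int>& out) {
    auto start = std::chrono::high_resolution_clock::now();
    out[0] = 0;
    for (size_t k = 1; k < in.size(); ++k) {
        out[k] = out[k - 1] + in[k - 1];
    }
    auto end = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```

Note the timer brackets only the loop itself, so host-side allocation is excluded; the GPU timings are bracketed the same way, which is why the `cudaMalloc`/`cudaMemcpy` setup cost does not appear in the CUDA numbers either.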

### Thrust Scan Analysis
![](img/TrustScan.png)
1. Based on Nsight analysis, the Thrust scan launches three kernels:
* _kernel_agent (uninitialized_fill functor): runs for 1.910 ms with 524288 blocks and 256 threads per block. No shared memory is allocated, and local memory is 42.5 MB. Occupancy is 100 percent, which indicates optimal resource usage. This kernel initializes memory with default values.
* DeviceScanInitKernel: runs for 2.336 μs with 1093 blocks of 128 threads each. No shared memory is allocated, and local memory is 42.5 MB. Occupancy is also 100 percent. Given how briefly it runs, this looks like another preparatory step for the scan operation.
* DeviceScanKernel: runs for 3.995 ms with 7696 bytes of shared memory. It uses 56 registers and 139811 blocks with 128 threads per block. Occupancy is 75 percent. Based on the name and the run time, this is the main scan operation.

### My Scans vs. Thrust Scan
* Thrust Scan

Kernel: 2.8%
Memory: 97.2%

* My Scans (both naive and work-efficient):

Kernel: 43.3%
Memory: 56.7%

For the Thrust scan, the majority of the time is spent on memory operations, which means its performance is limited primarily by memory access speed rather than by computation. In contrast, my scans spend a much larger fraction of their time in kernels, so the kernel operations are what need to be optimized.
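A rough back-of-the-envelope check of the memory-bound claim: assuming the scan moves each 4-byte element through global memory once on read and once on write (the minimum for an out-of-place scan), the Thrust time below (4.82509 ms for 2^28 ints) implies an effective bandwidth that is a large fraction of the RTX 3070 Ti's roughly 600 GB/s peak:

```cpp
#include <cassert>

// Effective bandwidth in GB/s: total bytes moved divided by elapsed time.
// Assumes one global-memory read and one write per 4-byte element,
// which is the minimum traffic for an out-of-place scan.
double effectiveBandwidthGBs(long long elements, double ms) {
    double bytes = 2.0 * elements * sizeof(int);  // read + write
    return bytes / (ms * 1e-3) / 1e9;
}
```

`effectiveBandwidthGBs(1LL << 28, 4.82509)` comes out to roughly 445 GB/s, consistent with a kernel that is limited by memory traffic rather than computation.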

### Output Example:


****************
** SCAN TESTS **
****************
[ 47 29 36 32 13 29 42 36 37 30 12 47 28 ... 16 0 ]
==== cpu scan, power-of-two ====
elapsed time: 416.305ms (std::chrono Measured)
[ 0 47 76 112 144 157 186 228 264 301 331 343 390 ... -2015623572 -2015623556 ]
==== cpu scan, non-power-of-two ====
elapsed time: 413.905ms (std::chrono Measured)
[ 0 47 76 112 144 157 186 228 264 301 331 343 390 ... -2015623656 -2015623615 ]
passed
==== naive scan, power-of-two ====
elapsed time: 190.653ms (CUDA Measured)
passed
==== naive scan, non-power-of-two ====
elapsed time: 194.022ms (CUDA Measured)
passed
==== work-efficient scan, power-of-two ====
elapsed time: 142.127ms (CUDA Measured)
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 140.197ms (CUDA Measured)
passed
==== thrust scan, power-of-two ====
elapsed time: 4.82509ms (CUDA Measured)
passed
==== thrust scan, non-power-of-two ====
elapsed time: 5.13229ms (CUDA Measured)
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
[ 2 2 1 1 3 0 1 3 2 3 1 1 2 ... 0 0 ]
==== cpu compact without scan, power-of-two ====
elapsed time: 554.655ms (std::chrono Measured)
[ 2 2 1 1 3 1 3 2 3 1 1 2 2 ... 1 2 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 547.575ms (std::chrono Measured)
[ 2 2 1 1 3 1 3 2 3 1 1 2 2 ... 1 2 ]
passed
==== cpu compact with scan ====
elapsed time: 966.209ms (std::chrono Measured)
[ 2 2 1 1 3 1 3 2 3 1 1 2 2 ... 1 2 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 163.552ms (CUDA Measured)
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 156.472ms (CUDA Measured)
passed
Binary file added img/ArraySize.png
Binary file added img/BlockSize.png
Binary file added img/EfficientScan.png
Binary file added img/NaiveScan.png
Binary file added img/TrustScan.png
2 changes: 1 addition & 1 deletion src/main.cpp
@@ -13,7 +13,7 @@
#include <stream_compaction/thrust.h>
#include "testing_helpers.hpp"

const int SIZE = 1 << 8; // feel free to change the size of array
const int SIZE = 1 << 28; // feel free to change the size of array
const int NPOT = SIZE - 3; // Non-Power-Of-Two
int *a = new int[SIZE];
int *b = new int[SIZE];
12 changes: 12 additions & 0 deletions stream_compaction/common.cu
@@ -24,6 +24,11 @@ namespace StreamCompaction {
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
// TODO
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n) {
return;
}
bools[index] = (idata[index] == 0) ? 0 : 1;
}

/**
@@ -33,6 +38,13 @@ namespace StreamCompaction {
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices) {
// TODO
int index = blockIdx.x * blockDim.x + threadIdx.x;
if (index >= n) {
return;
}
if (bools[index] != 0) {
odata[indices[index]] = idata[index];
}
}

}
46 changes: 44 additions & 2 deletions stream_compaction/cpu.cu
@@ -17,9 +17,14 @@ namespace StreamCompaction {
* For performance analysis, this is supposed to be a simple for loop.
* (Optional) For better understanding before starting moving to GPU, you can simulate your GPU scan in this function first.
*/
//cited Lecture slide
void scan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
odata[0] = 0;
for (int k = 1; k < n; ++k) {
odata[k] = odata[k - 1] + idata[k-1];
}
timer().endCpuTimer();
}

@@ -31,8 +36,15 @@
int compactWithoutScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int j = 0;
for (int k = 0; k < n; k++) {
if (idata[k] != 0) {
odata[j] = idata[k];
j++;
}
}
timer().endCpuTimer();
return -1;
return j;
}

/**
@@ -43,8 +55,38 @@
int compactWithScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int* temp = new int[n];
int* scan = new int[n];
int oindex = 0;
// Map each element to a boolean: 1 if kept, 0 if removed.
for (int i = 0; i < n; i++) {
temp[i] = (idata[i] == 0) ? 0 : 1;
}

// Exclusive prefix sum of the booleans gives each kept element's output index.
scan[0] = 0;
for (int i = 1; i < n; i++) {
scan[i] = scan[i - 1] + temp[i - 1];
}

// Scatter: every kept element lands at its scanned index.
for (int i = 0; i < n; i++) {
if (temp[i] == 1) {
odata[scan[i]] = idata[i];
oindex++;
}
}
delete[] temp;
delete[] scan;
timer().endCpuTimer();
return -1;
return oindex;
}
}
}
143 changes: 139 additions & 4 deletions stream_compaction/efficient.cu
@@ -2,7 +2,7 @@
#include <cuda_runtime.h>
#include "common.h"
#include "efficient.h"

#define BlockSize 256
namespace StreamCompaction {
namespace Efficient {
using StreamCompaction::Common::PerformanceTimer;
@@ -11,14 +11,98 @@ namespace StreamCompaction {
static PerformanceTimer timer;
return timer;
}
__global__ void kernUpSweep(int n, int* odata, int d) {
int index = threadIdx.x + (blockIdx.x * blockDim.x);
// Integer shifts replace powf: exact and cheaper than float math.
int stride = 1 << (d + 1);
if (index >= n || (index % stride != 0)) {
return;
}

odata[index + stride - 1] += odata[index + (1 << d) - 1];

}
__global__ void kernUpCopy(int n, int* idata, int* odata) {
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n) {
return;
}

// Unconditional copy: the compare-before-write already reads odata,
// so the branch saves nothing.
odata[index] = idata[index];

}

__global__ void kernZero(const int n, int* data) {
data[n - 1] = 0;
}

__global__ void kernDownSweep(int n, int* odata, int d) {
int index = threadIdx.x + (blockIdx.x * blockDim.x);
int stride = 1 << (d + 1);
if (index >= n || (index % stride != 0)) {
return;
}

int half = 1 << d;
int t = odata[index + half - 1];
odata[index + half - 1] = odata[index + stride - 1];
odata[index + stride - 1] += t;
}




/**
* Performs prefix-sum (aka scan) on idata, storing the result into odata.
*/
int nextPowerOf2(int n) {
if (n <= 0) return 0;

int power = 1;
while (power < n) {
power *= 2;
}

return power;
}

void scan(int n, int *odata, const int *idata) {
int* device_A;

int paddedSize = nextPowerOf2(n);
cudaMalloc((void**)&device_A, paddedSize * sizeof(int));
checkCUDAError("cudaMalloc device_A failed!");


cudaMemset(device_A + n, 0, (paddedSize - n) * sizeof(int));
checkCUDAError("device_A cudaMemset failed!");



cudaMemcpy(device_A, idata, n * sizeof(int), cudaMemcpyHostToDevice);
checkCUDAError("cudaMemcpy cudaMemcpyHostToDevice device_A to idata failed!");

dim3 blocksPerGrid((paddedSize + BlockSize - 1) / BlockSize);

timer().startGpuTimer();
// TODO

for (int d = 0; d <= ilog2ceil(paddedSize) - 1; d++) { //Upsweep
kernUpSweep << <blocksPerGrid, BlockSize >> > (paddedSize, device_A, d);
}

kernZero << <1, 1 >> > (paddedSize, device_A);

for (int d = ilog2ceil(paddedSize) - 1; d >= 0; d--) { //Downsweep
kernDownSweep << <blocksPerGrid, BlockSize >> > (paddedSize, device_A, d);
}


timer().endGpuTimer();


cudaMemcpy(odata, device_A, n * sizeof(int), cudaMemcpyDeviceToHost);
checkCUDAError("cudaMemcpy cudaMemcpyDeviceToHost odata to device_A failed!");

cudaFree(device_A);
checkCUDAError("cudaFree device_A failed!");
}

/**
@@ -31,10 +115,61 @@ namespace StreamCompaction {
* @returns The number of elements remaining after compaction.
*/
int compact(int n, int *odata, const int *idata) {
int* device_idata, * device_odata, * device_bool, * device_scan;

int paddedSize = nextPowerOf2(n);
cudaMalloc((void**)&device_idata, paddedSize * sizeof(int));
checkCUDAError("cudaMalloc device_idata failed!");
cudaMalloc((void**)&device_odata, paddedSize * sizeof(int));
checkCUDAError("cudaMalloc device_odata failed!");
cudaMalloc((void**)&device_bool, paddedSize * sizeof(int));
checkCUDAError("cudaMalloc device_bool failed!");
cudaMalloc((void**)&device_scan, paddedSize * sizeof(int));
checkCUDAError("cudaMalloc device_scan failed!");

cudaMemcpy(device_idata, idata, n * sizeof(int), cudaMemcpyHostToDevice);
checkCUDAError("cudaMemcpy cudaMemcpyHostToDevice device_idata to idata failed!");


cudaMemset(device_idata + n, 0, (paddedSize - n) * sizeof(int));
checkCUDAError("device_idata cudaMemset failed!");
cudaMemset(device_bool + n, 0, (paddedSize - n) * sizeof(int));
checkCUDAError("device_bool cudaMemset failed!");
cudaMemset(device_scan + n, 0, (paddedSize - n) * sizeof(int));
checkCUDAError("device_scan cudaMemset failed!");

dim3 blocksPerGrid((paddedSize + BlockSize - 1) / BlockSize);

timer().startGpuTimer();
// TODO
StreamCompaction::Common::kernMapToBoolean << <blocksPerGrid, BlockSize >> > (n, device_bool, device_idata);
kernUpCopy << <blocksPerGrid, BlockSize >> > (n, device_bool, device_scan);


for (int d = 0; d <= ilog2ceil(paddedSize) - 1; d++) { //Upsweep
kernUpSweep << <blocksPerGrid, BlockSize >> > (paddedSize, device_scan, d);
}

kernZero << <1, 1 >> > (paddedSize, device_scan);

for (int d = ilog2ceil(paddedSize) - 1; d >= 0; d--) { //Downsweep
kernDownSweep << <blocksPerGrid, BlockSize >> > (paddedSize, device_scan, d);
}

StreamCompaction::Common::kernScatter << <blocksPerGrid, BlockSize >> > (paddedSize, device_odata, device_idata, device_bool, device_scan);
timer().endGpuTimer();
return -1;

cudaMemcpy(odata, device_odata, n * sizeof(int), cudaMemcpyDeviceToHost);
checkCUDAError("cudaMemcpy cudaMemcpyDeviceToHost odata from device_odata failed!");
int lastScan = 0, lastBool = 0;
cudaMemcpy(&lastScan, device_scan + paddedSize - 1, sizeof(int), cudaMemcpyDeviceToHost);
cudaMemcpy(&lastBool, device_bool + paddedSize - 1, sizeof(int), cudaMemcpyDeviceToHost);
checkCUDAError("cudaMemcpy finalSize components from device failed!");
// The exclusive scan's last entry excludes the final element, so add its boolean back in.
int finalSize = lastScan + lastBool;

cudaFree(device_idata);
checkCUDAError("cudaFree device_idata failed!");
cudaFree(device_odata);
checkCUDAError("cudaFree device_odata failed!");
cudaFree(device_bool);
checkCUDAError("cudaFree device_bool failed!");
cudaFree(device_scan);
checkCUDAError("cudaFree device_scan failed!");
return finalSize;
}
}
}