diff --git a/README.md b/README.md
index 0e38ddb..ac2cf77 100644
--- a/README.md
+++ b/README.md
@@ -3,12 +3,109 @@ CUDA Stream Compaction
 **University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**
-* (TODO) YOUR NAME HERE
-  * (TODO) [LinkedIn](), [personal website](), [twitter](), etc.
-* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)
+* Ruijun (Daniel) Zhong
+  * [LinkedIn](https://www.linkedin.com/in/daniel-z-73158b152/)
+  * [Personal Website](https://www.danielzhongportfolio.com/)
+* Tested on: Windows 11 Pro, 12th Gen Intel(R) Core(TM) i7-12700K @ 3.61 GHz, 32.0 GB RAM, NVIDIA GeForce RTX 3070 Ti (personal computer)
-### (TODO: Your README)
+# Analysis
-Include analysis, etc. (Remember, this is public, so don't put
-anything here that you don't want to share with the world.)
+
+### Why is My GPU Approach So Slow? (Extra Credit)
+1. The UpSweep and DownSweep kernels are launched with a fixed number of blocks and threads, but not all threads are utilized in every iteration: whether a thread does any work depends on the depth `d` and the thread index.
+
+2. I don't think implementing early termination for threads would be efficient, because the "lazy" threads still need to evaluate the condition based on the thread index and depth `d`. If the condition is not met, the thread simply does no work; that does not mean it has terminated early — it remains active and still consumes GPU resources.
+
+3. The solution is to change the number of blocks and threads dynamically based on the number of active threads at each depth.
+
+### Block Size Impact (Array Size 2^26)
+![](img/BlockSize.png)
+1. For all three algorithms, as the block size increases from 2 to 32, the runtime generally decreases.
+
+After a block size of 32, the behavior starts to diverge between the algorithms:
+* Naive Scan: the runtime continues to decrease slightly until a block size of 128, after which it starts to increase.
+It starts off as the slowest algorithm at small block sizes, improves significantly as the block size increases, but becomes slower again at the largest block sizes.
+* Efficient Scan: the runtime decreases until a block size of 128, then starts to increase, with a sudden jump at a block size of 1024. Its performance improves dramatically with increasing block size at first, and it is the fastest algorithm for block sizes between 64 and 512, but at a block size of 1024 its performance drops significantly.
+* Efficient Compact: it closely mirrors the trend of the Efficient Scan but is consistently slightly slower at every block size.
+
+2. Even though Efficient Scan and Efficient Compact are faster than Naive Scan, there may be trade-offs such as memory consumption and algorithm complexity.
+
+### Array Size Impact (Block Size 256)
+![](img/ArraySize.png)
+1. For smaller array sizes, the GPU scans are slower than the CPU scan, probably because of the overhead of setting up and launching GPU kernels. But as the array size grows larger and larger, the CPU scan becomes slower and slower.
+2. Based on the chart, Efficient Scan is faster than Naive Scan, and the gap widens as the array size grows. The Thrust scan is much faster than my implementations because I have not applied any optimization to either the Naive Scan or the Efficient Scan yet.
+
+### Analyzing the Thrust Scan
+![](img/TrustScan.png)
+1. Based on Nsight analysis, the Thrust scan launches 3 kernels:
+* _kernel_agent for the uninitialized_fill functor: it runs for 1.910 ms with 524288 blocks and 256 threads per block. No shared memory is allocated and local memory is 42.5 MB. The occupancy is 100 percent, which indicates optimal resource usage. This kernel sets up memory with default values.
+* DeviceScanInitKernel: this kernel runs for 2.336 μs with 1093 blocks and 128 threads each. No shared memory is allocated and local memory is 42.5 MB. Occupancy is also 100 percent. Given how briefly it runs, it looks like another preparatory step for the scan operation.
+* DeviceScanKernel: the run time is 3.995 ms with 7696 bytes of shared memory. It uses 56 registers and 139811 blocks with 128 threads per block. The occupancy is 75 percent. Based on the name and the run time, this looks like the main scan operation.
+
+### My Scans Compared with the Thrust Scan
+* Thrust Scan
+
+  Kernel: 2.8%
+  Memory: 97.2%
+
+* My Scans (both naive and work-efficient):
+
+  Kernel: 43.3%
+  Memory: 56.7%
+
+For the Thrust scan, the majority of the time is spent on memory operations, which means its performance is primarily limited by memory access speed rather than computation. In contrast, my scans spend a much larger share of time in kernels, so the kernel code itself is the bottleneck I need to optimize.
+
+### Output Example:
+
+
+    ****************
+    ** SCAN TESTS **
+    ****************
+    [ 47 29 36 32 13 29 42 36 37 30 12 47 28 ... 16 0 ]
+    ==== cpu scan, power-of-two ====
+    elapsed time: 416.305ms (std::chrono Measured)
+    [ 0 47 76 112 144 157 186 228 264 301 331 343 390 ... -2015623572 -2015623556 ]
+    ==== cpu scan, non-power-of-two ====
+    elapsed time: 413.905ms (std::chrono Measured)
+    [ 0 47 76 112 144 157 186 228 264 301 331 343 390 ... -2015623656 -2015623615 ]
+    passed
+    ==== naive scan, power-of-two ====
+    elapsed time: 190.653ms (CUDA Measured)
+    passed
+    ==== naive scan, non-power-of-two ====
+    elapsed time: 194.022ms (CUDA Measured)
+    passed
+    ==== work-efficient scan, power-of-two ====
+    elapsed time: 142.127ms (CUDA Measured)
+    passed
+    ==== work-efficient scan, non-power-of-two ====
+    elapsed time: 140.197ms (CUDA Measured)
+    passed
+    ==== thrust scan, power-of-two ====
+    elapsed time: 4.82509ms (CUDA Measured)
+    passed
+    ==== thrust scan, non-power-of-two ====
+    elapsed time: 5.13229ms (CUDA Measured)
+    passed
+
+    *****************************
+    ** STREAM COMPACTION TESTS **
+    *****************************
+    [ 2 2 1 1 3 0 1 3 2 3 1 1 2 ... 0 0 ]
+    ==== cpu compact without scan, power-of-two ====
+    elapsed time: 554.655ms (std::chrono Measured)
+    [ 2 2 1 1 3 1 3 2 3 1 1 2 2 ... 1 2 ]
+    passed
+    ==== cpu compact without scan, non-power-of-two ====
+    elapsed time: 547.575ms (std::chrono Measured)
+    [ 2 2 1 1 3 1 3 2 3 1 1 2 2 ... 1 2 ]
+    passed
+    ==== cpu compact with scan ====
+    elapsed time: 966.209ms (std::chrono Measured)
+    [ 2 2 1 1 3 1 3 2 3 1 1 2 2 ... 1 2 ]
+    passed
+    ==== work-efficient compact, power-of-two ====
+    elapsed time: 163.552ms (CUDA Measured)
+    passed
+    ==== work-efficient compact, non-power-of-two ====
+    elapsed time: 156.472ms (CUDA Measured)
+    passed
\ No newline at end of file
diff --git a/img/ArraySize.png b/img/ArraySize.png
new file mode 100644
index 0000000..925553c
Binary files /dev/null and b/img/ArraySize.png differ
diff --git a/img/BlockSize.png b/img/BlockSize.png
new file mode 100644
index 0000000..b07d38e
Binary files /dev/null and b/img/BlockSize.png differ
diff --git a/img/EfficientScan.png b/img/EfficientScan.png
new file mode 100644
index 0000000..f993e8b
Binary files /dev/null and b/img/EfficientScan.png differ
diff --git a/img/NaiveScan.png b/img/NaiveScan.png
new file mode 100644
index 0000000..670233d
Binary files /dev/null and b/img/NaiveScan.png differ
diff --git a/img/TrustScan.png b/img/TrustScan.png
new file mode 100644
index 0000000..28ab987
Binary files /dev/null and b/img/TrustScan.png differ
diff --git a/src/main.cpp b/src/main.cpp
index 896ac2b..9cbe151 100644
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -13,7 +13,7 @@
 #include
 #include "testing_helpers.hpp"
-const int SIZE = 1 << 8; // feel free to change the size of array
+const int SIZE = 1 << 28; // feel free to change the size of array
 const int NPOT = SIZE - 3; // Non-Power-Of-Two
 int *a = new int[SIZE];
 int *b = new int[SIZE];
diff --git a/stream_compaction/common.cu b/stream_compaction/common.cu
index 2ed6d63..1765aed 100644
--- a/stream_compaction/common.cu
+++ b/stream_compaction/common.cu
@@ -24,6 +24,11 @@ namespace StreamCompaction {
          */
         __global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
             // TODO
+            int index = threadIdx.x + (blockIdx.x * blockDim.x);
+            if (index >= n) {
+                return;
+            }
+            bools[index] = (idata[index] == 0) ? 0 : 1;
         }

         /**
@@ -33,6 +38,13 @@ namespace StreamCompaction {
         __global__ void kernScatter(int n, int *odata, const int *idata,
                 const int *bools, const int *indices) {
             // TODO
+            int index = blockIdx.x * blockDim.x + threadIdx.x;
+            if (index >= n) {
+                return;
+            }
+            if (bools[index] != 0) {
+                odata[indices[index]] = idata[index];
+            }
         }
     }
 }
diff --git a/stream_compaction/cpu.cu b/stream_compaction/cpu.cu
index 719fa11..1a4ef11 100644
--- a/stream_compaction/cpu.cu
+++ b/stream_compaction/cpu.cu
@@ -17,9 +17,14 @@ namespace StreamCompaction {
          * For performance analysis, this is supposed to be a simple for loop.
          * (Optional) For better understanding before starting moving to GPU, you can simulate your GPU scan in this function first.
          */
+        // cited: lecture slides
         void scan(int n, int *odata, const int *idata) {
             timer().startCpuTimer();
             // TODO
+            // Exclusive prefix sum.
+            odata[0] = 0;
+            for (int k = 1; k < n; ++k) {
+                odata[k] = odata[k - 1] + idata[k - 1];
+            }
             timer().endCpuTimer();
         }

@@ -31,8 +36,15 @@ namespace StreamCompaction {
         int compactWithoutScan(int n, int *odata, const int *idata) {
             timer().startCpuTimer();
             // TODO
+            int j = 0;
+            for (int k = 0; k < n; k++) {
+                if (idata[k] != 0) {
+                    odata[j] = idata[k];
+                    j++;
+                }
+            }
             timer().endCpuTimer();
-            return -1;
+            return j;
         }

         /**
@@ -43,8 +55,38 @@ namespace StreamCompaction {
         int compactWithScan(int n, int *odata, const int *idata) {
             timer().startCpuTimer();
             // TODO
+            int* temp = new int[n];
+            int* scan = new int[n];
+            int k = 0;
+            int oindex = 0;
+            // Map each element to a boolean: 1 = keep, 0 = remove.
+            for (int i = 0; i < n; i++) {
+                temp[i] = (idata[i] == 0) ? 0 : 1;
+            }
+
+            // Exclusive scan of the boolean array.
+            for (int i = 0; i < n; i++) {
+                scan[i] = k;
+                if (temp[i] == 1) {
+                    k++;
+                }
+            }
+
+            // Scatter: each kept element goes to its scanned index.
+            for (int i = 0; i < n; i++) {
+                if (temp[i] == 1) {
+                    odata[scan[i]] = idata[i];
+                    oindex++;
+                }
+            }
+            delete[] temp;
+            delete[] scan;
             timer().endCpuTimer();
-            return -1;
+            return oindex;
         }
     }
 }
diff --git a/stream_compaction/efficient.cu b/stream_compaction/efficient.cu
index 2db346e..9054261 100644
--- a/stream_compaction/efficient.cu
+++ b/stream_compaction/efficient.cu
@@ -2,7 +2,7 @@
 #include
 #include "common.h"
 #include "efficient.h"
-
+#define BlockSize 256
 namespace StreamCompaction {
     namespace Efficient {
         using StreamCompaction::Common::PerformanceTimer;
@@ -11,14 +11,98 @@ namespace StreamCompaction {
             static PerformanceTimer timer;
             return timer;
         }
+        __global__ void kernUpSweep(int n, int* odata, int d) {
+            int index = threadIdx.x + (blockIdx.x * blockDim.x);
+            if (index >= n || (index % (int)powf(2, d + 1) != 0)) {
+                return;
+            }
+
+            odata[index + (int)powf(2, d + 1) - 1] += odata[index + (int)powf(2, d) - 1];
+        }
+
+        __global__ void kernUpCopy(int n, int* idata, int* odata) {
+            int index = threadIdx.x + (blockIdx.x * blockDim.x);
+            if (index >= n) {
+                return;
+            }
+
+            if (odata[index] != idata[index]) {
+                odata[index] = idata[index];
+            }
+        }
+
+        __global__ void kernZero(const int n, int* data) {
+            data[n - 1] = 0;
+        }
+
+        __global__ void kernDownSweep(int n, int* odata, int d) {
+            int index = threadIdx.x + (blockIdx.x * blockDim.x);
+            if (index >= n || (index % (int)powf(2, d + 1) != 0)) {
+                return;
+            }
+
+            int t = odata[index + (int)powf(2, d) - 1];
+            odata[index + (int)powf(2, d) - 1] = odata[index + (int)powf(2, d + 1) - 1];
+            odata[index + (int)powf(2, d + 1) - 1] += t;
+        }
+
         /**
          * Performs prefix-sum (aka scan) on idata, storing the result into odata.
          */
+        int nextPowerOf2(int n) {
+            if (n <= 0) return 0;
+
+            int power = 1;
+            while (power < n) {
+                power *= 2;
+            }
+            return power;
+        }
+
         void scan(int n, int *odata, const int *idata) {
+            int* device_A;
+
+            int paddedSize = nextPowerOf2(n);
+            cudaMalloc((void**)&device_A, paddedSize * sizeof(int));
+            checkCUDAError("cudaMalloc device_A failed!");
+
+            // Zero the padding beyond n so it does not affect the scan.
+            cudaMemset(device_A + n, 0, (paddedSize - n) * sizeof(int));
+            checkCUDAError("device_A cudaMemset failed!");
+
+            cudaMemcpy(device_A, idata, n * sizeof(int), cudaMemcpyHostToDevice);
+            checkCUDAError("cudaMemcpy idata to device_A failed!");
+
+            dim3 blocksPerGrid((paddedSize + BlockSize - 1) / BlockSize);
+
             timer().startGpuTimer();
-            // TODO
+            for (int d = 0; d <= ilog2ceil(paddedSize) - 1; d++) { // Upsweep
+                kernUpSweep<<<blocksPerGrid, BlockSize>>>(paddedSize, device_A, d);
+            }
+
+            kernZero<<<1, 1>>>(paddedSize, device_A);
+
+            for (int d = ilog2ceil(paddedSize) - 1; d >= 0; d--) { // Downsweep
+                kernDownSweep<<<blocksPerGrid, BlockSize>>>(paddedSize, device_A, d);
+            }
+            timer().endGpuTimer();
+
+            cudaMemcpy(odata, device_A, n * sizeof(int), cudaMemcpyDeviceToHost);
+            checkCUDAError("cudaMemcpy device_A to odata failed!");
+
+            cudaFree(device_A);
+            checkCUDAError("cudaFree device_A failed!");
         }

         /**
@@ -31,10 +115,61 @@ namespace StreamCompaction {
          * @returns The number of elements remaining after compaction.
          */
         int compact(int n, int *odata, const int *idata) {
+            int* device_idata, * device_odata, * device_bool, * device_scan;
+
+            int paddedSize = nextPowerOf2(n);
+            cudaMalloc((void**)&device_idata, paddedSize * sizeof(int));
+            checkCUDAError("cudaMalloc device_idata failed!");
+            cudaMalloc((void**)&device_odata, paddedSize * sizeof(int));
+            checkCUDAError("cudaMalloc device_odata failed!");
+            cudaMalloc((void**)&device_bool, paddedSize * sizeof(int));
+            checkCUDAError("cudaMalloc device_bool failed!");
+            cudaMalloc((void**)&device_scan, paddedSize * sizeof(int));
+            checkCUDAError("cudaMalloc device_scan failed!");
+
+            cudaMemcpy(device_idata, idata, n * sizeof(int), cudaMemcpyHostToDevice);
+            checkCUDAError("cudaMemcpy idata to device_idata failed!");
+
+            // Zero the padding so padded elements are treated as removed.
+            cudaMemset(device_idata + n, 0, (paddedSize - n) * sizeof(int));
+            checkCUDAError("device_idata cudaMemset failed!");
+            cudaMemset(device_bool + n, 0, (paddedSize - n) * sizeof(int));
+            checkCUDAError("device_bool cudaMemset failed!");
+            cudaMemset(device_scan + n, 0, (paddedSize - n) * sizeof(int));
+            checkCUDAError("device_scan cudaMemset failed!");
+
+            dim3 blocksPerGrid((paddedSize + BlockSize - 1) / BlockSize);
+
             timer().startGpuTimer();
-            // TODO
+            StreamCompaction::Common::kernMapToBoolean<<<blocksPerGrid, BlockSize>>>(n, device_bool, device_idata);
+            kernUpCopy<<<blocksPerGrid, BlockSize>>>(n, device_bool, device_scan);
+
+            for (int d = 0; d <= ilog2ceil(paddedSize) - 1; d++) { // Upsweep
+                kernUpSweep<<<blocksPerGrid, BlockSize>>>(paddedSize, device_scan, d);
+            }
+
+            kernZero<<<1, 1>>>(paddedSize, device_scan);
+
+            for (int d = ilog2ceil(paddedSize) - 1; d >= 0; d--) { // Downsweep
+                kernDownSweep<<<blocksPerGrid, BlockSize>>>(paddedSize, device_scan, d);
+            }
+
+            StreamCompaction::Common::kernScatter<<<blocksPerGrid, BlockSize>>>(paddedSize, device_odata, device_idata, device_bool, device_scan);
             timer().endGpuTimer();
-            return -1;
+
+            cudaMemcpy(odata, device_odata, n * sizeof(int), cudaMemcpyDeviceToHost);
+            // Element count = last exclusive-scan value plus the last boolean,
+            // so the final element is not dropped when n is a power of two.
+            int lastScan, lastBool;
+            cudaMemcpy(&lastScan, device_scan + paddedSize - 1, sizeof(int), cudaMemcpyDeviceToHost);
+            cudaMemcpy(&lastBool, device_bool + paddedSize - 1, sizeof(int), cudaMemcpyDeviceToHost);
+            checkCUDAError("cudaMemcpy device_odata to odata failed!");
+            int finalSize = lastScan + lastBool;
+
+            cudaFree(device_idata);
+            checkCUDAError("cudaFree device_idata failed!");
+            cudaFree(device_odata);
+            checkCUDAError("cudaFree device_odata failed!");
+            cudaFree(device_bool);
+            checkCUDAError("cudaFree device_bool failed!");
+            cudaFree(device_scan);
+            checkCUDAError("cudaFree device_scan failed!");
+            return finalSize;
         }
     }
 }
diff --git a/stream_compaction/naive.cu b/stream_compaction/naive.cu
index 4308876..fd21484 100644
--- a/stream_compaction/naive.cu
+++ b/stream_compaction/naive.cu
@@ -2,6 +2,7 @@
 #include
 #include "common.h"
 #include "naive.h"
+#define BlockSize 256

 namespace StreamCompaction {
     namespace Naive {
@@ -12,14 +13,78 @@
             return timer;
         }
         // TODO: __global__
+        __global__ void kernScan(int n, int* odata, const int* idata, int d) {
+            int index = threadIdx.x + (blockIdx.x * blockDim.x);
+            if (index >= n) {
+                return;
+            }
+            if (odata[index] != idata[index]) {
+                odata[index] = idata[index];
+            }
+
+            if (index >= (int)powf(2, d - 1)) {
+                odata[index] = idata[index - (int)powf(2, d - 1)] + idata[index];
+            }
+        }
+
+        __global__ void kernelIncToExc(const int n, int* odata, const int* idata) {
+            int index = (blockIdx.x * blockDim.x) + threadIdx.x;
+            if (index >= n) {
+                return;
+            }
+
+            // Shift right by one to convert an inclusive scan to an exclusive scan.
+            if (index == 0) {
+                odata[0] = 0;
+            }
+            else {
+                odata[index] = idata[index - 1];
+            }
+        }
+
         /**
          * Performs prefix-sum (aka scan) on idata, storing the result into odata.
          */
         void scan(int n, int *odata, const int *idata) {
+            int* device_A;
+            int* device_B;
+
+            cudaMalloc((void**)&device_A, n * sizeof(int));
+            checkCUDAError("cudaMalloc device_A failed!");
+            cudaMalloc((void**)&device_B, n * sizeof(int));
+            checkCUDAError("cudaMalloc device_B failed!");
+
+            cudaMemcpy(device_A, idata, n * sizeof(int), cudaMemcpyHostToDevice);
+            checkCUDAError("cudaMemcpy idata to device_A failed!");
+
+            dim3 blocksPerGrid((n + BlockSize - 1) / BlockSize);
+
             timer().startGpuTimer();
             // TODO
+            int* temp;
+            for (int d = 1; d <= ilog2ceil(n); d++) {
+                kernScan<<<blocksPerGrid, BlockSize>>>(n, device_B, device_A, d);
+                // Ping-pong the buffers so the next pass reads this pass's output.
+                temp = device_A;
+                device_A = device_B;
+                device_B = temp;
+            }
+
+            kernelIncToExc<<<blocksPerGrid, BlockSize>>>(n, device_B, device_A);
+
             timer().endGpuTimer();
+
+            cudaMemcpy(odata, device_B, n * sizeof(int), cudaMemcpyDeviceToHost);
+            checkCUDAError("cudaMemcpy device_B to odata failed!");
+
+            cudaFree(device_A);
+            checkCUDAError("cudaFree device_A failed!");
+            cudaFree(device_B);
+            checkCUDAError("cudaFree device_B failed!");
         }
     }
 }
diff --git a/stream_compaction/thrust.cu b/stream_compaction/thrust.cu
index 1def45e..a4a1b7e 100644
--- a/stream_compaction/thrust.cu
+++ b/stream_compaction/thrust.cu
@@ -18,11 +18,16 @@ namespace StreamCompaction {
          * Performs prefix-sum (aka scan) on idata, storing the result into odata.
          */
         void scan(int n, int *odata, const int *idata) {
+            thrust::host_vector<int> h_in(idata, idata + n);
+            thrust::device_vector<int> dv_in = h_in;
+            thrust::device_vector<int> dv_out(n);
+
             timer().startGpuTimer();
             // TODO use `thrust::exclusive_scan`
             // example: for device_vectors dv_in and dv_out:
-            // thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin());
+            thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin());
             timer().endGpuTimer();
+            thrust::copy(dv_out.begin(), dv_out.end(), odata);
        }
    }
}
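A note on the dynamic-launch fix proposed in the README analysis: instead of launching one thread per array element and masking most of them off with `index % 2^(d+1) == 0`, each sweep level only needs `n >> (d + 1)` threads, with thread `t` mapped to the offset `(t + 1) * 2^(d+1) - 1`. The following host-side C++ sketch (not code from this repo; `exclusiveScan` and its loop structure are illustrative) shows the compact indexing that such kernels would use:

```cpp
#include <vector>

// Host-side sketch of the work-efficient exclusive scan with compact
// thread indexing: level d uses exactly n >> (d + 1) "threads" instead
// of n mostly-idle ones. Input size is assumed to be a power of two.
std::vector<int> exclusiveScan(std::vector<int> data) {
    int n = (int)data.size();
    int levels = 0;
    while ((1 << levels) < n) levels++;

    for (int d = 0; d < levels; d++) {       // Upsweep: build partial sums
        int active = n >> (d + 1);           // threads a kernel would launch
        for (int t = 0; t < active; t++) {
            int right = (t + 1) * (1 << (d + 1)) - 1;
            int left = right - (1 << d);
            data[right] += data[left];
        }
    }

    data[n - 1] = 0;                         // set root to the identity

    for (int d = levels - 1; d >= 0; d--) {  // Downsweep: distribute sums
        int active = n >> (d + 1);
        for (int t = 0; t < active; t++) {
            int right = (t + 1) * (1 << (d + 1)) - 1;
            int left = right - (1 << d);
            int tmp = data[left];
            data[left] = data[right];
            data[right] += tmp;
        }
    }
    return data;
}
```

On the GPU this would correspond to launching `kernUpSweep`/`kernDownSweep` with `active` threads per level and computing the same `right`/`left` offsets from the thread index, so no launched thread bails out without doing work.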