diff --git a/README.md b/README.md index 0e38ddb..4c07d1f 100644 --- a/README.md +++ b/README.md @@ -3,12 +3,112 @@ CUDA Stream Compaction **University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2** -* (TODO) YOUR NAME HERE - * (TODO) [LinkedIn](), [personal website](), [twitter](), etc. -* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab) +* Daniel Gerhardt + * https://www.linkedin.com/in/daniel-gerhardt-bb012722b/ +* Tested on: Windows 23H2, AMD Ryzen 9 7940HS @ 4GHz 32GB, RTX 4070 8 GB (Personal Laptop) -### (TODO: Your README) +### Stream Compaction -Include analysis, etc. (Remember, this is public, so don't put -anything here that you don't want to share with the world.) +## Description +Stream compaction is the process of removing unwanted elements from an array. In this project, that means removing 0s from an array. Five different implementations are compared here. + +1. Compaction without scan on the CPU. This runs a simple O(n) for loop that checks whether each input element is 0 and, if it is not, appends it to the output. +2. Compaction with scan on the CPU. Scan is a "prefix sum" algorithm that outputs an array containing, at location i, the sum of all elements before element i. Scan can be used for compaction by first building an array of 1s and 0s parallel to the original input data, where a 1 means the element at the same index will appear in the output and a 0 means it will not. Scanning that boolean array accumulates the 1s, so at each kept element the scan value is exactly that element's final index in the output. +3. Naive compaction on the GPU. 
The following figure shows what the approach looks like: +![](img/figure-39-2.jpg) + By adding in parallel over only a logarithmic number of iterations, this algorithm reduces the number of parallel steps to O(log n). However, it still performs O(n log n) adds, since in the worst case there are O(n) adds per iteration. The work-efficient solution seeks to reduce this factor. +4. Work efficient compaction on the GPU. The following figure shows what the latter stage of the approach looks like: ![](img/figure-39-4.jpg) +This is a much less intuitive approach. The algorithm involves two phases, the upsweep and the downsweep, pictured above. The upsweep is the same as the parallel reduction from method 3, except the algorithm occurs "in place" on the input data. Then, by treating the array as a tree and doing some clever summation, the amount of work can be reduced by filtering the sums down the "tree". This is done by setting the "root" -- the last element -- to zero, and then at each pass giving each left child the parent's value and setting the right child to the sum of the previous left child's value and the parent's value. The upsweep has O(n) adds and the downsweep has O(n) adds and O(n) swaps, which reduces the total work compared to method 3. +5. A wrapper for thrust's implementation of stream compaction for the sake of performance comparison. + +# Sample Output + +Run with 2^18 elements, this is what a sample program output looks like: + +``` +**************** +** SCAN TESTS ** +**************** + [ 46 5 33 20 5 42 38 23 28 29 9 43 18 ... 43 0 ] +==== cpu scan, power-of-two ==== + elapsed time: 0.0677ms (std::chrono Measured) + [ 0 46 51 84 104 109 151 189 212 240 269 278 321 ... 6416721 6416764 ] +==== cpu scan, non-power-of-two ==== + elapsed time: 0.063ms (std::chrono Measured) + [ 0 46 51 84 104 109 151 189 212 240 269 278 321 ... 
6416683 6416688 ] + passed +==== naive scan, power-of-two ==== + elapsed time: 7.26304ms (CUDA Measured) + passed +==== naive scan, non-power-of-two ==== + elapsed time: 0.243776ms (CUDA Measured) + passed +==== work-efficient scan, power-of-two ==== + elapsed time: 0.407616ms (CUDA Measured) + passed +==== work-efficient scan, non-power-of-two ==== + elapsed time: 0.318944ms (CUDA Measured) + passed +==== thrust scan, power-of-two ==== + elapsed time: 0.083872ms (CUDA Measured) + passed +==== thrust scan, non-power-of-two ==== + elapsed time: 0.03472ms (CUDA Measured) + passed + +***************************** +** STREAM COMPACTION TESTS ** +***************************** + [ 0 1 1 2 1 2 0 1 0 3 3 1 2 ... 1 0 ] +==== cpu compact without scan, power-of-two ==== + elapsed time: 0.349ms (std::chrono Measured) + [ 1 1 2 1 2 1 3 3 1 2 3 2 3 ... 2 1 ] + passed +==== cpu compact without scan, non-power-of-two ==== + elapsed time: 0.3535ms (std::chrono Measured) + [ 1 1 2 1 2 1 3 3 1 2 3 2 3 ... 3 1 ] + passed +==== cpu compact with scan ==== + elapsed time: 0.7794ms (std::chrono Measured) + [ 1 1 2 1 2 1 3 3 1 2 3 2 3 ... 2 1 ] + passed +==== work-efficient compact, power-of-two ==== + elapsed time: 7.19795ms (CUDA Measured) + passed +==== work-efficient compact, non-power-of-two ==== + elapsed time: 0.545376ms (CUDA Measured) + passed +``` + +## Performance Analysis + +The following data was collected in release mode with a block size of 128, using CUDA event timers and std::chrono to gather timing information. Memory transfers between the CPU and GPU were excluded where possible, to focus the running-time analysis on the algorithms themselves. The block size of 128 was chosen from several tested sizes because it performed best on my GPU. Any reference to a power-of-2 or non-power-of-2 algorithm refers to the input size used for the algorithm. 
This tests whether each approach handles power-of-2 input sizes better or worse than non-power-of-2 sizes. It is particularly relevant for the GPU approaches, since the algorithms are tree-based and often rely on a power-of-2 length to properly touch all leaves. + +# Charts + +The following is a chart displaying how the running time of the various methods changes with input size for the scan algorithm. + +![](img/scan_graph.png) + +The following is a chart displaying how the running time of the various methods changes with input size for the compaction algorithm. + +![](img/compaction_graph.png) + +# Observations + +In scan, the CPU outperformed my GPU implementations, but thrust proved to be the fastest at large array sizes, with its running time growing much more slowly than the other approaches. The CPU is marginally faster than thrust at small array sizes, because the parallelism does not yet offset the heavier cost of transferring memory to and from the GPU. The thrust algorithm's dominance shows that truly harnessing the GPU takes more than simply implementing an algorithm: it also takes smart use of shared memory, data prefetching, and other strategies that exploit the parallel hardware. See more about the thrust algorithm at the bottom of the readme. + +In stream compaction, the GPU performs much better than the CPU at larger array sizes. Note that the power-of-two and non-power-of-two CPU runs without scan overlap on the chart due to their negligible performance difference. + +In scan, there is interesting behavior between the power-of-two and non-power-of-two naive GPU algorithms. The gap between the non-power-of-two and power-of-two runs grew as the input became larger, likely because the overhead of padding zeroes increases with size. 
The same is not true of the compaction algorithms, likely because the main bottleneck there is global memory reads, which are present even when the input is a power of two. This effect also does not appear in the work-efficient algorithm, whose power-of-two and non-power-of-two lines are so close together that the power-of-two line is not visible. + +Across all algorithms and tests, running time grows roughly linearly with input size. This is likely because the running time is so closely tied to the array input size that any additional overhead is dwarfed by the O(n) nature of the algorithmic approaches. + +# Investigation of Thrust using Nsight + +The following is an image of the report from Nsight. + +![](img/nsight_thrust.png) + +The green and red blocks are memory reads and writes, respectively. They appear to happen only once, at the beginning and end of each test (this image was taken from two consecutive thrust tests). That leads me to believe that the data is loaded into shared memory, and clever caching is used to avoid heavy traffic to global memory, which is a major bottleneck in GPU programming. 
\ No newline at end of file diff --git a/img/compaction_graph.png b/img/compaction_graph.png new file mode 100644 index 0000000..4e070b8 Binary files /dev/null and b/img/compaction_graph.png differ diff --git a/img/nsight_thrust.png b/img/nsight_thrust.png new file mode 100644 index 0000000..7fab80c Binary files /dev/null and b/img/nsight_thrust.png differ diff --git a/img/scan_graph.png b/img/scan_graph.png new file mode 100644 index 0000000..6aa3701 Binary files /dev/null and b/img/scan_graph.png differ diff --git a/src/main.cpp b/src/main.cpp index 896ac2b..883d9fb 100644 --- a/src/main.cpp +++ b/src/main.cpp @@ -13,7 +13,7 @@ #include #include "testing_helpers.hpp" -const int SIZE = 1 << 8; // feel free to change the size of array +const int SIZE = 1 << 18; // feel free to change the size of array const int NPOT = SIZE - 3; // Non-Power-Of-Two int *a = new int[SIZE]; int *b = new int[SIZE]; diff --git a/stream_compaction/common.cu b/stream_compaction/common.cu index 2ed6d63..35ea7d5 100644 --- a/stream_compaction/common.cu +++ b/stream_compaction/common.cu @@ -23,7 +23,11 @@ namespace StreamCompaction { * which map to 0 will be removed, and elements which map to 1 will be kept. */ __global__ void kernMapToBoolean(int n, int *bools, const int *idata) { - // TODO + int k = (blockDim.x * blockIdx.x) + threadIdx.x; + if (k >= n) { + return; + } + bools[k] = (idata[k] == 0) ? 0 : 1; } /** @@ -32,8 +36,13 @@ namespace StreamCompaction { */ __global__ void kernScatter(int n, int *odata, const int *idata, const int *bools, const int *indices) { - // TODO + int k = (blockDim.x * blockIdx.x) + threadIdx.x; + if (k >= n) { + return; + } + if (bools[k] == 1) { + odata[indices[k]] = idata[k]; + } } - } } diff --git a/stream_compaction/cpu.cu b/stream_compaction/cpu.cu index 719fa11..9bd8659 100644 --- a/stream_compaction/cpu.cu +++ b/stream_compaction/cpu.cu @@ -17,9 +17,15 @@ namespace StreamCompaction { * For performance analysis, this is supposed to be a simple for loop. 
* (Optional) For better understanding before starting moving to GPU, you can simulate your GPU scan in this function first. */ - void scan(int n, int *odata, const int *idata) { + void scan(int n, int* odata, const int* idata) { timer().startCpuTimer(); - // TODO + + int current_sum = 0; + for (int i = 0; i < n; i++) { + odata[i] = current_sum; + current_sum += idata[i]; + } + timer().endCpuTimer(); } @@ -28,11 +34,30 @@ namespace StreamCompaction { * * @returns the number of elements remaining after compaction. */ - int compactWithoutScan(int n, int *odata, const int *idata) { + int compactWithoutScan(int n, int* odata, const int* idata) { timer().startCpuTimer(); - // TODO + + int o_idx = 0; + for (int i = 0; i < n; i++) { + if (idata[i] != 0) { + odata[o_idx] = idata[i]; + o_idx++; + } + } + timer().endCpuTimer(); - return -1; + return o_idx; + } + + int scatter(const int* idata, int* odata, int* scan_result, int* to_insert, int n) { + int num_elts = 0; + for (int i = 0; i < n; i++) { + if (to_insert[i] == 1) { + odata[scan_result[i]] = idata[i]; + num_elts++; + } + } + return num_elts; } /** @@ -40,11 +65,39 @@ namespace StreamCompaction { * * @returns the number of elements remaining after compaction. */ - int compactWithScan(int n, int *odata, const int *idata) { + int compactWithScan(int n, int* odata, const int* idata) { + + int* to_insert = new int[n]; + int* scan_result = new int[n]; + timer().startCpuTimer(); - // TODO + + //scan + int current_sum = 0; + for (int i = 0; i < n; i++) { + scan_result[i] = current_sum; + current_sum += idata[i]; + } + + //set to_insert to 0 if the element is going to be removed and 1 otherwise + for (int i = 0; i < n; i++) { + to_insert[i] = (idata[i] == 0) ? 
0 : 1; + } + + //adjust scan result for proper indices + for (int i = 1; i < n; i++) { + scan_result[i] = to_insert[i - 1] + scan_result[i - 1]; + } + + //scatter and get number of elements to return + int num_elts = scatter(idata, odata, scan_result, to_insert, n); + timer().endCpuTimer(); - return -1; + + delete[] to_insert; + delete[] scan_result; + + return num_elts; } } } diff --git a/stream_compaction/efficient.cu b/stream_compaction/efficient.cu index 2db346e..972c08f 100644 --- a/stream_compaction/efficient.cu +++ b/stream_compaction/efficient.cu @@ -12,13 +12,89 @@ namespace StreamCompaction { return timer; } + __global__ void kernUpsweep(int n, int d, int* data) { + int k = (blockDim.x * blockIdx.x) + threadIdx.x; + int pow_d_plus_one = (1 << d + 1); + + if (k >= n / pow_d_plus_one) { + return; + } + + k *= pow_d_plus_one; + + int pow_d = (1 << d); + + data[k + pow_d_plus_one - 1] += data[k + pow_d - 1]; + } + + __global__ void kernDownsweep(int n, int d, int* data) { + int k = (blockDim.x * blockIdx.x) + threadIdx.x; + int pow_d_plus_one = (1 << d + 1); + + if (k >= n / pow_d_plus_one) { + return; + } + + k *= pow_d_plus_one; + + int curr_left_idx = k + (1 << d) - 1; + int left_child_val = data[curr_left_idx]; // Save left child + int curr_node_idx = k + pow_d_plus_one - 1; + data[curr_left_idx] = data[curr_node_idx]; // Set left child to this node’s value + data[curr_node_idx] += left_child_val; // Set right child to old left value + this node’s value + } + + __global__ void kernPadZeroes(int original_length, int start_point, int* data) { + int k = (blockDim.x * blockIdx.x) + threadIdx.x; + if (k >= original_length && k < start_point) { + data[k] = 0; + } + } + /** * Performs prefix-sum (aka scan) on idata, storing the result into odata. 
*/ void scan(int n, int *odata, const int *idata) { + int log = ilog2ceil(n); + int power_two_len = (1 << log); + + int* data; + cudaMalloc((void**)&data, sizeof(int) * power_two_len); + cudaMemcpy(data, idata, sizeof(int) * n, cudaMemcpyHostToDevice); + timer().startGpuTimer(); - // TODO + + int blockSize = 128; + dim3 blockDim = dim3((power_two_len + blockSize - 1) / blockSize); + kernPadZeroes<<<blockDim, blockSize>>>(n, power_two_len, data); + + int threads_to_launch; + + //upsweep + for (int d = 0; d < log; d++) { + threads_to_launch = power_two_len / (1 << (d + 1)); + + blockDim = dim3((threads_to_launch + blockSize - 1) / blockSize); + + kernUpsweep<<<blockDim, blockSize>>>(power_two_len, d, data); + } + + //set root to 0 + cudaMemset(data + power_two_len - 1, 0, sizeof(int)); + + //downsweep + for (int d = log - 1; d >= 0; d--) { + threads_to_launch = power_two_len / (1 << (d + 1)); + + blockDim = dim3((threads_to_launch + blockSize - 1) / blockSize); + + kernDownsweep<<<blockDim, blockSize>>>(power_two_len, d, data); + } + timer().endGpuTimer(); + + cudaMemcpy(odata, data, sizeof(int) * n, cudaMemcpyDeviceToHost); + cudaFree(data); } /** @@ -31,10 +107,82 @@ namespace StreamCompaction { * @returns The number of elements remaining after compaction. 
*/ int compact(int n, int *odata, const int *idata) { + + int num_elts = -1; + + int log = ilog2ceil(n); + int power_two_len = (1 << log); + + int blockSize = 128; + dim3 blockDim((n + blockSize - 1) / blockSize); + + int* in_data; + cudaMalloc((void**)&in_data, sizeof(int) * n); + cudaMemcpy(in_data, idata, sizeof(int) * n, cudaMemcpyHostToDevice); + + int* bools; + cudaMalloc((void**)&bools, sizeof(int) * power_two_len); + + int* out_data; + cudaMalloc((void**)&out_data, sizeof(int) * n); + timer().startGpuTimer(); - // TODO + + int scan_blockSize = 128; + dim3 scan_blockDim = dim3((power_two_len + scan_blockSize - 1) / scan_blockSize); + + kernPadZeroes<<<scan_blockDim, scan_blockSize>>>(n, power_two_len, bools); + + StreamCompaction::Common::kernMapToBoolean<<<blockDim, blockSize>>>(n, bools, in_data); + + //START SCAN (input data is now bools) + + int* scan_data; + cudaMalloc((void**)&scan_data, sizeof(int) * power_two_len); + cudaMemcpy(scan_data, bools, sizeof(int) * power_two_len, cudaMemcpyDeviceToDevice); + + int threads_to_launch; + + //upsweep + for (int d = 0; d < log; d++) { + threads_to_launch = power_two_len / (1 << (d + 1)); + + scan_blockDim = dim3((threads_to_launch + scan_blockSize - 1) / scan_blockSize); + + kernUpsweep<<<scan_blockDim, scan_blockSize>>>(power_two_len, d, scan_data); + } + + //set root to 0 + cudaMemset(scan_data + power_two_len - 1, 0, sizeof(int)); + + //downsweep + for (int d = log - 1; d >= 0; d--) { + threads_to_launch = power_two_len / (1 << (d + 1)); + + scan_blockDim = dim3((threads_to_launch + scan_blockSize - 1) / scan_blockSize); + + kernDownsweep<<<scan_blockDim, scan_blockSize>>>(power_two_len, d, scan_data); + } + + //END SCAN -- scan is in place so scan_data has the output + + StreamCompaction::Common::kernScatter<<<blockDim, blockSize>>>(n, out_data, in_data, bools, scan_data); + timer().endGpuTimer(); - return -1; + + cudaMemcpy(odata, out_data, sizeof(int) * n, cudaMemcpyDeviceToHost); + cudaMemcpy(&num_elts, scan_data + power_two_len - 1, sizeof(int), 
cudaMemcpyDeviceToHost); + + int last_bool; + cudaMemcpy(&last_bool, bools + power_two_len - 1, sizeof(int), cudaMemcpyDeviceToHost); + if (last_bool == 1) num_elts++; + + cudaFree(bools); + cudaFree(in_data); + cudaFree(out_data); + cudaFree(scan_data); + + return num_elts; } } } diff --git a/stream_compaction/naive.cu b/stream_compaction/naive.cu index 4308876..884e083 100644 --- a/stream_compaction/naive.cu +++ b/stream_compaction/naive.cu @@ -11,15 +11,65 @@ namespace StreamCompaction { static PerformanceTimer timer; return timer; } - // TODO: __global__ + + __global__ void kernScan(int n, int d, int* odata, const int* idata) { + int k = (blockDim.x * blockIdx.x) + threadIdx.x; + if (k >= n) { + return; + } + int pow_res = (1 << (d - 1)); + if (k >= pow_res) { + int idx = k - pow_res; + odata[k] = idata[idx] + idata[k]; + } + else { + odata[k] = idata[k]; + } + } + + __global__ void kernMakeScanExclusive(int n, int* odata, int* idata) { + int k = (blockDim.x * blockIdx.x) + threadIdx.x; + if (k >= n) { + return; + } + odata[k] = k == 0 ? 0 : idata[k - 1]; + } /** * Performs prefix-sum (aka scan) on idata, storing the result into odata. */ void scan(int n, int *odata, const int *idata) { + + int* data_a; + int* data_b; + cudaMalloc((void**)&data_a, sizeof(int) * n); + cudaMalloc((void**)&data_b, sizeof(int) * n); + + cudaMemcpy(data_a, idata, sizeof(int) * n, cudaMemcpyHostToDevice); + timer().startGpuTimer(); - // TODO + + int log = ilog2ceil(n); + + int blockSize = 128; + dim3 blockDim((n + blockSize - 1) / blockSize); + + for (int d = 1; d <= log; d++) { + // k = n? does not seem necessary, can prob find a log val that we can limit k to. 
then do [k, n] + kernScan<<<blockDim, blockSize>>>(n, d, data_b, data_a); + int* temp = data_a; + data_a = data_b; + data_b = temp; + } + + kernMakeScanExclusive<<<blockDim, blockSize>>>(n, data_b, data_a); + timer().endGpuTimer(); + + cudaMemcpy(odata, data_b, sizeof(int) * n, cudaMemcpyDeviceToHost); + + cudaFree(data_a); + cudaFree(data_b); } } } diff --git a/stream_compaction/thrust.cu b/stream_compaction/thrust.cu index 1def45e..e86c19a 100644 --- a/stream_compaction/thrust.cu +++ b/stream_compaction/thrust.cu @@ -18,11 +18,26 @@ namespace StreamCompaction { * Performs prefix-sum (aka scan) on idata, storing the result into odata. */ void scan(int n, int *odata, const int *idata) { + + int* dev_in_arr; + cudaMalloc((void**)&dev_in_arr, sizeof(int) * n); + cudaMemcpy(dev_in_arr, idata, sizeof(int) * n, cudaMemcpyHostToDevice); + + thrust::device_ptr<int> dev_in_ptr(dev_in_arr); + thrust::device_vector<int> dev_in_vec(dev_in_ptr, dev_in_ptr + n); + thrust::device_vector<int> dev_out_vec(n); + timer().startGpuTimer(); // TODO use `thrust::exclusive_scan` // example: for device_vectors dv_in and dv_out: // thrust::exclusive_scan(dv_in.begin(), dv_in.end(), dv_out.begin()); + + thrust::exclusive_scan(dev_in_vec.begin(), dev_in_vec.end(), dev_out_vec.begin()); + timer().endGpuTimer(); + + thrust::copy(dev_out_vec.begin(), dev_out_vec.end(), odata); + + cudaFree(dev_in_arr); } } }