From c0bf9803a7298331a9aeaef611a47867d50562b9 Mon Sep 17 00:00:00 2001
From: Peter Ovchyn
Date: Thu, 14 Aug 2025 17:33:14 +0200
Subject: [PATCH 1/3] feat: Update CODEOWNERS and sync submodules

- Add @vasilyyaremchuk as code reviewer to CODEOWNERS
- Update .cursor/rules/common submodule to latest commit
- Add tmp/PR_CONVERSATIONS.md with the PR #207 review conversation log

---
 .cursor/rules/common    |   2 +-
 .github/CODEOWNERS      |   4 +-
 tmp/PR_CONVERSATIONS.md | 220 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 223 insertions(+), 3 deletions(-)
 create mode 100644 tmp/PR_CONVERSATIONS.md

diff --git a/.cursor/rules/common b/.cursor/rules/common
index 402558c9..0048a591 160000
--- a/.cursor/rules/common
+++ b/.cursor/rules/common
@@ -1 +1 @@
-Subproject commit 402558c9424d32857338fd40a8c52f63050e424b
+Subproject commit 0048a59175d3774a9ed517c1fe8ade681d8cea42
diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS
index f1664d4e..6c852ee6 100644
--- a/.github/CODEOWNERS
+++ b/.github/CODEOWNERS
@@ -1,6 +1,6 @@
 # This is a CODEOWNERS file for the repository
-# @killev will be requested for review when someone opens a pull request
+# @killev and @vasilyyaremchuk will be requested for review when someone opens a pull request
 # affecting any file in the repository
 
 # Default owner for everything in the repo
-* @killev
+* @killev @vasilyyaremchuk
diff --git a/tmp/PR_CONVERSATIONS.md b/tmp/PR_CONVERSATIONS.md
new file mode 100644
index 00000000..0fbe4ab6
--- /dev/null
+++ b/tmp/PR_CONVERSATIONS.md
@@ -0,0 +1,220 @@
+# All Conversations for PR #207:

## ❌ OUTDATED (Fixed by previous changes):

### **website/scripts/mongo-migrate.js:null**
Id: PRRT_kwDOOXKRxs5VQm9e
Author: copilot-pull-request-reviewer
Description: Suggested replacing forEach with for...of loop to avoid anti-pattern in parseArguments function
----
Using forEach for control flow with early returns or exceptions is an anti-pattern. Consider using a for...of loop which allows proper control flow.
```suggestion
  for (const arg of args) {
    if (arg.startsWith('--env=')) options.envPath = arg.split('=')[1];
    else if (arg === '--help' || arg === '-h') options.help = true;
    else if (arg === '--dry-run') options.dryRun = true;
    else if (arg === '--verbose' || arg === '-v') options.verbose = true;
    else if (arg.startsWith('--')) throw new Error(`Unknown argument: ${arg}`);
  }
```
----
Status: OUTDATED - The forEach loop has been replaced with a for...of loop as suggested

### **website/scripts/mongo-migrate.js:null**
Id: PRRT_kwDOOXKRxs5VQqYd
Author: coderabbitai
Description: Fix error counting in batch processing when insertMany fails with some successful insertions
----
**Fix error counting in batch processing.**

When `insertMany` fails with `ordered: false`, some documents may still be inserted successfully. The current implementation counts all documents in the batch as errors, which is incorrect.
+ +```diff + try { +- await targetCol.insertMany(batch, { ordered: false }); ++ const result = await targetCol.insertMany(batch, { ordered: false }); + processed += batch.length; + + if (verbose) { + process.stdout.write(`\r Progress: ${processed}/${count}`); + } + } catch (error) { +- errors += batch.length; +- if (verbose) console.error(`\n⚠️ Batch error: ${error.message}`); ++ // For BulkWriteError, writeErrors contains the actual failed documents ++ if (error.name === 'BulkWriteError' && error.result) { ++ const successCount = error.result.nInserted || 0; ++ processed += successCount; ++ errors += batch.length - successCount; ++ if (verbose) console.error(`\n⚠️ Batch partially failed: ${successCount}/${batch.length} inserted`); ++ } else { ++ errors += batch.length; ++ if (verbose) console.error(`\n⚠️ Batch error: ${error.message}`); ++ } + } +``` +---- +Status: OUTDATED - This suggestion refers to old code that has been reformatted, and the insertMany batch error handling is now outdated + +## ✅ STILL RELEVANT (Need to be fixed): + +### **website/scripts/s3-copy.js:42** +Id: PRRT_kwDOOXKRxs5VQm-G +Author: copilot-pull-request-reviewer +Description: Temporal coupling issue where dotenv.config() call depends on environment variables before they are loaded +---- +The dotenv.config() call occurs after the config object is defined but before environment variables are accessed. This creates a temporal coupling issue where the config getters depend on environment variables that may not be loaded yet. +```suggestion +const envPath = getEnvPath(); +require('dotenv').config({ path: envPath }); +``` +---- +Status: RELEVANT - The dotenv configuration is still called after the config object definition on line 42, creating a temporal coupling issue where environment variables might not be loaded when getters are accessed +Recommendation: Move the dotenv.config() call to execute before the config object definition to ensure environment variables are available when needed +Decision: Fix this + +### **website/scripts/s3-copy.js:48** +Id: PRRT_kwDOOXKRxs5VQm-W +Author: copilot-pull-request-reviewer +Description: Magic number 3 for COPY_BATCH_SIZE should be documented or made configurable via environment variable +---- +[nitpick] The magic number 3 for COPY_BATCH_SIZE should be documented or made configurable. This value significantly impacts performance and should explain the reasoning behind this specific choice. +```suggestion + * Number of files to copy concurrently per batch. + * Default value is 3, which balances performance and resource usage for typical workloads. + * Users can override this value by setting the environment variable COPY_BATCH_SIZE. + */ +const COPY_BATCH_SIZE = parseInt(process.env.COPY_BATCH_SIZE, 10) || 3; +``` +---- +Status: RELEVANT - COPY_BATCH_SIZE is still hardcoded as 3 with basic documentation but no environment variable configurability +Recommendation: Add environment variable support to make the batch size configurable for different environments and performance needs +Decision: Resolve this, at the moment hardcoded COPY_BATCH_SIZE is fine, no need to change it. + +### **website/scripts/mongo-migrate.js:5** +Id: PRRT_kwDOOXKRxs5VQm-j +Author: copilot-pull-request-reviewer +Description: Magic number 1000 for BATCH_SIZE should be documented or made configurable via environment variable +---- +[nitpick] The magic number 1000 for BATCH_SIZE should be documented or made configurable. 
Different environments may benefit from different batch sizes based on memory constraints and network conditions. +```suggestion +const BATCH_SIZE = parseInt(process.env.BATCH_SIZE, 10) || 1000; +``` +---- +Status: RELEVANT - BATCH_SIZE is still hardcoded as 1000 without environment variable configurability +Recommendation: Make the batch size configurable via environment variable to allow optimization for different environments +Decision: Resolve this, at the moment hardcoded BATCH_SIZE is fine, no need to change it. + +### **website/package.json:41** +Id: PRRT_kwDOOXKRxs5VQqYR +Author: coderabbitai +Description: Invalid AWS SDK version 3.850.0 that does not exist on npm registry +---- +**Invalid @aws-sdk/client-s3 version in package.json** + +The declared dependency `"@aws-sdk/client-s3": "^3.850.0"` does not exist on npm. +- As of July 22, 2025, npm's latest published version is **3.830.0**. +- Snyk reports version **3.846.0** with no known vulnerabilities. + +Please update your dependency to a valid, up-to-date release, for example: +```diff +– "@aws-sdk/client-s3": "^3.850.0", ++ "@aws-sdk/client-s3": "^3.830.0", # or ^3.846.0 for the Snyk-verified latest +``` +This will ensure your S3 client resolves correctly and includes the most recent security fixes. +---- +Status: RELEVANT - The package.json still contains the invalid version "^3.850.0" for @aws-sdk/client-s3 +Recommendation: Update to a valid version such as "^3.846.0" or the latest available stable version +Decision: Fix this, find correct most recent version + +### **website/scripts/s3-copy.js:39** +Id: PRRT_kwDOOXKRxs5VQqYo +Author: coderabbitai +Description: AWS credentials should be validated before use to prevent exposure in error messages +---- +**Add credential validation to prevent exposure in error messages.** + +AWS credentials should be validated before use to avoid exposing them in error stack traces. + +```diff + get credentials() { ++ const validateCredentials = (creds, type) => { ++ if (!creds.accessKeyId || !creds.secretAccessKey) { ++ throw new Error(`Missing AWS ${type} credentials`); ++ } ++ if (creds.accessKeyId.length < 16 || creds.secretAccessKey.length < 20) { ++ throw new Error(`Invalid AWS ${type} credential format`); ++ } ++ return creds; ++ }; + + return { +- source: { ++ source: validateCredentials({ + region: process.env.AWS_SOURCE_REGION || 'us-east-1', + accessKeyId: process.env.AWS_SOURCE_ACCESS_KEY_ID, + secretAccessKey: process.env.AWS_SOURCE_SECRET_ACCESS_KEY, +- }, +- dest: { ++ }, 'source'), ++ dest: validateCredentials({ + region: process.env.AWS_DEST_REGION || 'us-east-1', + accessKeyId: process.env.AWS_DEST_ACCESS_KEY_ID, + secretAccessKey: process.env.AWS_DEST_SECRET_ACCESS_KEY, +- }, ++ }, 'destination'), + }; + }, +``` +---- +Status: RELEVANT - The credentials getter still returns raw credentials without validation, potentially exposing sensitive information in error messages +Recommendation: Add credential validation to ensure credentials are present and valid before returning them +Decision: Fix this + +### **website/scripts/s3-copy.js:131** +Id: PRRT_kwDOOXKRxs5VQqYx +Author: coderabbitai +Description: streamToBuffer function lacks protection against large data and indefinite streaming without timeout +---- +**Add timeout and size limits for stream processing.** + +The `streamToBuffer` function could consume excessive memory for large objects and has no timeout protection. 

```diff
-streamToBuffer(stream) {
+streamToBuffer(stream, maxSize = 5 * 1024 * 1024 * 1024) { // 5GB default
 return new Promise((resolve, reject) => {
 const chunks = [];
+ let totalSize = 0;
+ const timeout = setTimeout(() => {
+ stream.destroy();
+ reject(new Error('Stream processing timeout'));
+ }, 300000); // 5 minute timeout
+
 stream.on('data', (chunk) => {
+ totalSize += chunk.length;
+ if (totalSize > maxSize) {
+ stream.destroy();
+ clearTimeout(timeout);
+ reject(new Error(`Object size exceeds limit of ${maxSize} bytes`));
+ return;
+ }
 chunks.push(chunk)
 });
- stream.on('end', () => resolve(Buffer.concat(chunks)));
- stream.on('error', reject);
+ stream.on('end', () => {
+ clearTimeout(timeout);
+ resolve(Buffer.concat(chunks));
+ });
+ stream.on('error', (err) => {
+ clearTimeout(timeout);
+ reject(err);
+ });
 });
 },
```
----
Status: RELEVANT - The streamToBuffer function on line 124 still lacks timeout and memory protection mechanisms
Recommendation: Add timeout handling and memory size limits to prevent excessive memory usage and hanging streams
Decision: Resolve this. It works well enough, and a mistake (even if it happened) wouldn't cause any damage.
\ No newline at end of file
From 194c820571b8995aae2b7ea9a5bf7f2c4cbbf5f6 Mon Sep 17 00:00:00 2001
From: Peter Ovchyn
Date: Thu, 14 Aug 2025 17:44:28 +0200
Subject: [PATCH 2/3] chore: Remove tmp/PR_CONVERSATIONS.md and ignore tmp/ directory

- Remove temporary PR_CONVERSATIONS.md file
- Add tmp/ directory to .gitignore to prevent future temporary files from being tracked

---
 .gitignore              |   3 +
 tmp/PR_CONVERSATIONS.md | 220 ----------------------------------------
 2 files changed, 3 insertions(+), 220 deletions(-)
 delete mode 100644 tmp/PR_CONVERSATIONS.md

diff --git a/.gitignore b/.gitignore
index d8aacc0c..31e32601 100644
--- a/.gitignore
+++ b/.gitignore
@@ -144,3 +144,6 @@ website/public/apos-frontend
 dump.archive
 .prettierrc
 aposUsersSafe.json
+
+# Temporary files directory
+tmp/
diff --git a/tmp/PR_CONVERSATIONS.md b/tmp/PR_CONVERSATIONS.md
deleted file mode 100644
index 0fbe4ab6..00000000
--- a/tmp/PR_CONVERSATIONS.md
+++ /dev/null
@@ -1,220 +0,0 @@
-# All Conversations for PR #207:
-
-## ❌ OUTDATED (Fixed by previous changes):
-
-### **website/scripts/mongo-migrate.js:null**
-Id: PRRT_kwDOOXKRxs5VQm9e
-Author: copilot-pull-request-reviewer
-Description: Suggested replacing forEach with for...of loop to avoid anti-pattern in parseArguments function
-----
-Using forEach for control flow with early returns or exceptions is an anti-pattern. Consider using a for...of loop which allows proper control flow.
-```suggestion
- for (const arg of args) {
- if (arg.startsWith('--env=')) options.envPath = arg.split('=')[1];
- else if (arg === '--help' || arg === '-h') options.help = true;
- else if (arg === '--dry-run') options.dryRun = true;
- else if (arg === '--verbose' || arg === '-v') options.verbose = true;
- else if (arg.startsWith('--')) throw new Error(`Unknown argument: ${arg}`);
- }
-```
-----
-Status: OUTDATED - The forEach loop has been replaced with a for...of loop as suggested
-
-### **website/scripts/mongo-migrate.js:null**
-Id: PRRT_kwDOOXKRxs5VQqYd
-Author: coderabbitai
-Description: Fix error counting in batch processing when insertMany fails with some successful insertions
-----
-**Fix error counting in batch processing.**
-
-When `insertMany` fails with `ordered: false`, some documents may still be inserted successfully.
The current implementation counts all documents in the batch as errors, which is incorrect. - -```diff - try { -- await targetCol.insertMany(batch, { ordered: false }); -+ const result = await targetCol.insertMany(batch, { ordered: false }); - processed += batch.length; - - if (verbose) { - process.stdout.write(`\r Progress: ${processed}/${count}`); - } - } catch (error) { -- errors += batch.length; -- if (verbose) console.error(`\n⚠️ Batch error: ${error.message}`); -+ // For BulkWriteError, writeErrors contains the actual failed documents -+ if (error.name === 'BulkWriteError' && error.result) { -+ const successCount = error.result.nInserted || 0; -+ processed += successCount; -+ errors += batch.length - successCount; -+ if (verbose) console.error(`\n⚠️ Batch partially failed: ${successCount}/${batch.length} inserted`); -+ } else { -+ errors += batch.length; -+ if (verbose) console.error(`\n⚠️ Batch error: ${error.message}`); -+ } - } -``` ----- -Status: OUTDATED - This suggestion refers to old code that has been reformatted, and the insertMany batch error handling is now outdated - -## ✅ STILL RELEVANT (Need to be fixed): - -### **website/scripts/s3-copy.js:42** -Id: PRRT_kwDOOXKRxs5VQm-G -Author: copilot-pull-request-reviewer -Description: Temporal coupling issue where dotenv.config() call depends on environment variables before they are loaded ----- -The dotenv.config() call occurs after the config object is defined but before environment variables are accessed. This creates a temporal coupling issue where the config getters depend on environment variables that may not be loaded yet. -```suggestion -const envPath = getEnvPath(); -require('dotenv').config({ path: envPath }); -``` ----- -Status: RELEVANT - The dotenv configuration is still called after the config object definition on line 42, creating a temporal coupling issue where environment variables might not be loaded when getters are accessed -Recommendation: Move the dotenv.config() call to execute before the config object definition to ensure environment variables are available when needed -Decision: Fix this - -### **website/scripts/s3-copy.js:48** -Id: PRRT_kwDOOXKRxs5VQm-W -Author: copilot-pull-request-reviewer -Description: Magic number 3 for COPY_BATCH_SIZE should be documented or made configurable via environment variable ----- -[nitpick] The magic number 3 for COPY_BATCH_SIZE should be documented or made configurable. This value significantly impacts performance and should explain the reasoning behind this specific choice. -```suggestion - * Number of files to copy concurrently per batch. - * Default value is 3, which balances performance and resource usage for typical workloads. - * Users can override this value by setting the environment variable COPY_BATCH_SIZE. - */ -const COPY_BATCH_SIZE = parseInt(process.env.COPY_BATCH_SIZE, 10) || 3; -``` ----- -Status: RELEVANT - COPY_BATCH_SIZE is still hardcoded as 3 with basic documentation but no environment variable configurability -Recommendation: Add environment variable support to make the batch size configurable for different environments and performance needs -Decision: Resolve this, at the moment hardcoded COPY_BATCH_SIZE is fine, no need to change it. 
- -### **website/scripts/mongo-migrate.js:5** -Id: PRRT_kwDOOXKRxs5VQm-j -Author: copilot-pull-request-reviewer -Description: Magic number 1000 for BATCH_SIZE should be documented or made configurable via environment variable ----- -[nitpick] The magic number 1000 for BATCH_SIZE should be documented or made configurable. Different environments may benefit from different batch sizes based on memory constraints and network conditions. -```suggestion -const BATCH_SIZE = parseInt(process.env.BATCH_SIZE, 10) || 1000; -``` ----- -Status: RELEVANT - BATCH_SIZE is still hardcoded as 1000 without environment variable configurability -Recommendation: Make the batch size configurable via environment variable to allow optimization for different environments -Decision: Resolve this, at the moment hardcoded BATCH_SIZE is fine, no need to change it. - -### **website/package.json:41** -Id: PRRT_kwDOOXKRxs5VQqYR -Author: coderabbitai -Description: Invalid AWS SDK version 3.850.0 that does not exist on npm registry ----- -**Invalid @aws-sdk/client-s3 version in package.json** - -The declared dependency `"@aws-sdk/client-s3": "^3.850.0"` does not exist on npm. -- As of July 22, 2025, npm's latest published version is **3.830.0**. -- Snyk reports version **3.846.0** with no known vulnerabilities. - -Please update your dependency to a valid, up-to-date release, for example: -```diff -– "@aws-sdk/client-s3": "^3.850.0", -+ "@aws-sdk/client-s3": "^3.830.0", # or ^3.846.0 for the Snyk-verified latest -``` -This will ensure your S3 client resolves correctly and includes the most recent security fixes. ----- -Status: RELEVANT - The package.json still contains the invalid version "^3.850.0" for @aws-sdk/client-s3 -Recommendation: Update to a valid version such as "^3.846.0" or the latest available stable version -Decision: Fix this, find correct most recent version - -### **website/scripts/s3-copy.js:39** -Id: PRRT_kwDOOXKRxs5VQqYo -Author: coderabbitai -Description: AWS credentials should be validated before use to prevent exposure in error messages ----- -**Add credential validation to prevent exposure in error messages.** - -AWS credentials should be validated before use to avoid exposing them in error stack traces. 
-
-```diff
- get credentials() {
-+ const validateCredentials = (creds, type) => {
-+ if (!creds.accessKeyId || !creds.secretAccessKey) {
-+ throw new Error(`Missing AWS ${type} credentials`);
-+ }
-+ if (creds.accessKeyId.length < 16 || creds.secretAccessKey.length < 20) {
-+ throw new Error(`Invalid AWS ${type} credential format`);
-+ }
-+ return creds;
-+ };
-
- return {
-- source: {
-+ source: validateCredentials({
- region: process.env.AWS_SOURCE_REGION || 'us-east-1',
- accessKeyId: process.env.AWS_SOURCE_ACCESS_KEY_ID,
- secretAccessKey: process.env.AWS_SOURCE_SECRET_ACCESS_KEY,
-- },
-- dest: {
-+ }, 'source'),
-+ dest: validateCredentials({
- region: process.env.AWS_DEST_REGION || 'us-east-1',
- accessKeyId: process.env.AWS_DEST_ACCESS_KEY_ID,
- secretAccessKey: process.env.AWS_DEST_SECRET_ACCESS_KEY,
-- },
-+ }, 'destination'),
- };
- },
-```
-----
-Status: RELEVANT - The credentials getter still returns raw credentials without validation, potentially exposing sensitive information in error messages
-Recommendation: Add credential validation to ensure credentials are present and valid before returning them
-Decision: Fix this
-
-### **website/scripts/s3-copy.js:131**
-Id: PRRT_kwDOOXKRxs5VQqYx
-Author: coderabbitai
-Description: streamToBuffer function lacks protection against large data and indefinite streaming without timeout
-----
-**Add timeout and size limits for stream processing.**
-
-The `streamToBuffer` function could consume excessive memory for large objects and has no timeout protection.
-
-```diff
--streamToBuffer(stream) {
-+streamToBuffer(stream, maxSize = 5 * 1024 * 1024 * 1024) { // 5GB default
- return new Promise((resolve, reject) => {
- const chunks = [];
-+ let totalSize = 0;
-+ const timeout = setTimeout(() => {
-+ stream.destroy();
-+ reject(new Error('Stream processing timeout'));
-+ }, 300000); // 5 minute timeout
-+
- stream.on('data', (chunk) => {
-+ totalSize += chunk.length;
-+ if (totalSize > maxSize) {
-+ stream.destroy();
-+ clearTimeout(timeout);
-+ reject(new Error(`Object size exceeds limit of ${maxSize} bytes`));
-+ return;
-+ }
- chunks.push(chunk)
- });
-- stream.on('end', () => resolve(Buffer.concat(chunks)));
-- stream.on('error', reject);
-+ stream.on('end', () => {
-+ clearTimeout(timeout);
-+ resolve(Buffer.concat(chunks));
-+ });
-+ stream.on('error', (err) => {
-+ clearTimeout(timeout);
-+ reject(err);
-+ });
- });
- },
-```
-----
-Status: RELEVANT - The streamToBuffer function on line 124 still lacks timeout and memory protection mechanisms
-Recommendation: Add timeout handling and memory size limits to prevent excessive memory usage and hanging streams
-Decision: Resolve this. It works well enough, and a mistake (even if it happened) wouldn't cause any damage.
\ No newline at end of file
From 1bc8872e4306790ffbf98bdd13d32e217991a3b4 Mon Sep 17 00:00:00 2001
From: Peter Ovchyn
Date: Thu, 14 Aug 2025 17:45:17 +0200
Subject: [PATCH 3/3] chore: Update website-pages submodule

- Sync website-pages submodule to latest commit (e059fa4)
- Remove aposDocs.json and add export directory structure

---
 website-pages | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website-pages b/website-pages
index 57f1d394..e059fa49 160000
--- a/website-pages
+++ b/website-pages
@@ -1 +1 @@
-Subproject commit 57f1d3948967fb0a3e3ea8177f15d49fdb76ead9
+Subproject commit e059fa49b8024ef8d94f90553c2b48afcccc4531