18 changes: 18 additions & 0 deletions .apiiro-test/README.md
@@ -0,0 +1,18 @@
# APIIRO Security Scanner Test Fixtures

**⚠️ DO NOT MERGE TO PRODUCTION**

This folder contains intentionally vulnerable code for testing APIIRO security scanning.

| Severity | Vulnerability Type | File |
|----------|-------------------|------|
| **HIGH** | Hardcoded secrets/API keys | All |
| **HIGH** | SQL Injection | All |
| **HIGH** | eval() with user input | vulnerabilities.ts |
| **MEDIUM** | Insecure random (Math.random) | vulnerabilities.ts |
| **MEDIUM** | Sensitive data in logs | All |
| **LOW** | Weak password validation | All |
| **LOW** | Insecure HTTP URL | vulnerabilities.ts |
| **LOW** | TODO/FIXME in code | vulnerabilities.ts |

Remove this folder before releasing to production.
8 changes: 8 additions & 0 deletions .apiiro-test/vulnerabilities.kt
@@ -0,0 +1,8 @@
/**
* INTENTIONAL VULNERABILITIES FOR APIIRO TESTING
* DO NOT use in production. Remove before release.
*/
const val HARDCODED_SECRET = "sk_live_abc123xyz789secretkey"

Suggestion: A long-lived secret key is hardcoded directly in source, so if this code is ever built or the repo is leaked, the key is exposed; instead keep only a non-sensitive placeholder here and load the real value from secure configuration at runtime. [security]

Severity Level: Critical 🚨
- ❌ Secret key exposed to anyone with repo or build access.
- ❌ Enables unauthorized use of external API tied to key.
Suggested change
const val HARDCODED_SECRET = "sk_live_abc123xyz789secretkey"
const val HARDCODED_SECRET = "REPLACE_WITH_SECURE_SECRET_FROM_CONFIG"
Steps of Reproduction ✅
1. Open the repository and inspect `.apiiro-test/vulnerabilities.kt` as shown in the PR
"Final File State"; at line 5 the constant `HARDCODED_SECRET` is defined with value
`"sk_live_abc123xyz789secretkey"`.

2. Build or package the project including `.apiiro-test/vulnerabilities.kt`; the Kotlin
compiler will embed the literal string value of `HARDCODED_SECRET` into the generated
bytecode/constants.

3. Anyone with read access to the source repository or to the built artifact (e.g., by
decompiling the library/app) can recover the exact secret value from
`.apiiro-test/vulnerabilities.kt:5` or from the compiled constant pool.

4. If this corresponds to a real external service key (e.g., payment/Stripe-style key
suggested by the `sk_live_` pattern), an attacker can immediately use the leaked key to
call that service as the application, independent of whether any other code in the repo
currently references `HARDCODED_SECRET` (confirmed via Grep search over
`/workspace/react-native-sdk` returning no usages, meaning the exposure is purely via
storage of the secret itself).
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.kt
**Line:** 5:5
**Comment:**
	*Security: A long-lived secret key is hardcoded directly in source, so if this code is ever built or the repo is leaked, the key is exposed; instead keep only a non-sensitive placeholder here and load the real value from secure configuration at runtime.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.

fun sqlInjectionVulnerable(userInput: String) = "SELECT * FROM users WHERE id = '$userInput'"

Suggestion: Concatenating userInput directly into the SQL string enables SQL injection if the caller ever passes untrusted data; using a parameter placeholder prevents untrusted data from altering the query structure. [security]

Severity Level: Critical 🚨
- ❌ `sqlInjectionVulnerable` returns a trivially injectable SQL query.
- ❌ When used against a DB, attackers can bypass filters.
Suggested change
fun sqlInjectionVulnerable(userInput: String) = "SELECT * FROM users WHERE id = '$userInput'"
fun sqlInjectionVulnerable(userInput: String) = "SELECT * FROM users WHERE id = ?"
Steps of Reproduction ✅
1. Locate the function `sqlInjectionVulnerable` in `.apiiro-test/vulnerabilities.kt:6` as
defined in the PR "Final File State".

2. Call `sqlInjectionVulnerable("1' OR '1'='1")` from any Kotlin code in this project
(there are no current callers found via Grep in `/workspace/react-native-sdk`, but this
direct call is sufficient to exercise the function).

3. Observe that the returned string is `SELECT * FROM users WHERE id = '1' OR '1'='1'`,
meaning the untrusted `userInput` has altered the structure of the WHERE clause instead of
being safely parameterized.

4. If this returned string is then used as-is with any SQL execution API (e.g., JDBC
`Statement.executeQuery()`), the database will treat the injected condition as part of the
query, potentially returning all rows from the `users` table or enabling further injection
attacks.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.kt
**Line:** 6:6
**Comment:**
	*Security: Concatenating `userInput` directly into the SQL string enables SQL injection if the caller ever passes untrusted data; using a parameter placeholder prevents untrusted data from altering the query structure.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.

fun debugWithSensitiveData(password: String) { android.util.Log.d("Auth", "Password: $password") }

Suggestion: Logging the raw password exposes sensitive credentials in log files, which can be read by other processes or users; log only generic messages or redacted values instead. [security]

Severity Level: Critical 🚨
- ❌ Plaintext passwords exposed in Android debug logs.
- ⚠️ Logs can leak credentials to testers or attackers.
Suggested change
fun debugWithSensitiveData(password: String) { android.util.Log.d("Auth", "Password: $password") }
fun debugWithSensitiveData(password: String) { android.util.Log.d("Auth", "Password provided for authentication") }
Steps of Reproduction ✅
1. Inspect `.apiiro-test/vulnerabilities.kt:7` where `debugWithSensitiveData` is defined
to call `android.util.Log.d("Auth", "Password: $password")`.

2. From any Android code path in this project, call
`debugWithSensitiveData("MySecretP@ss")` (no existing callers were found in
`/workspace/react-native-sdk` via Grep, but invoking this function directly is trivial).

3. Run the app or test harness on a device or emulator; open `logcat` and filter by the
`"Auth"` tag.

4. Observe a debug log entry containing the full plaintext password, e.g., `Password:
MySecretP@ss`, which can be read by anyone with access to the device logs, test logs, or
aggregated logging infrastructure.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.kt
**Line:** 7:7
**Comment:**
	*Security: Logging the raw password exposes sensitive credentials in log files, which can be read by other processes or users; log only generic messages or redacted values instead.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
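The "redacted values" approach mentioned in the comment above applies to all three fixture languages; here is a minimal sketch in TypeScript (chosen since the surrounding repo is a React Native SDK). The helper name `redactForLog` and the list of sensitive field names are illustrative assumptions, not part of the SDK:

```typescript
// Sketch: mask known-sensitive fields before a log payload is emitted,
// so call sites cannot accidentally log raw credentials.
const SENSITIVE_KEYS = new Set(['password', 'token', 'secret', 'apiKey']);

export function redactForLog(fields: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(fields)) {
    out[key] = SENSITIVE_KEYS.has(key) ? '[REDACTED]' : value;
  }
  return out;
}
```

A call site would then log `redactForLog({ user, password })` instead of interpolating the raw credential into the message.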

fun weakPasswordCheck(password: String) = password.length >= 4

Suggestion: Accepting any password of length 4 or greater is far too weak and makes brute-force attacks trivial; at minimum, increase the required length to a more secure threshold. [security]

Severity Level: Critical 🚨
- ❌ Password policy allows trivially short four-character passwords.
- ⚠️ Greatly increases risk of successful brute-force attacks.
Suggested change
fun weakPasswordCheck(password: String) = password.length >= 4
fun weakPasswordCheck(password: String) = password.length >= 8
Steps of Reproduction ✅
1. Inspect `.apiiro-test/vulnerabilities.kt:8` where `weakPasswordCheck` is defined as
`password.length >= 4`.

2. From any authentication or validation code in this project (none currently reference
this function per Grep over `/workspace/react-native-sdk`, but it can be called directly),
invoke `weakPasswordCheck("1234")`.

3. Observe that `weakPasswordCheck("1234")` returns `true`, meaning a trivially guessable
4-character numeric password is treated as acceptable.

4. In any real login or account creation flow that relies on this check, attackers can
choose or brute-force extremely short passwords, significantly reducing the effort needed
to compromise accounts.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.kt
**Line:** 8:8
**Comment:**
	*Security: Accepting any password of length 4 or greater is far too weak and makes brute-force attacks trivial; at minimum, increase the required length to a more secure threshold.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
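Beyond raising the length threshold as the suggested change does, a fuller policy can combine length with character-class variety. A sketch in TypeScript; the thresholds are illustrative, not a mandated policy:

```typescript
// Sketch: require a minimum length plus at least three of four
// character classes, instead of a bare length >= 4 check.
export function strongPasswordCheck(password: string): boolean {
  if (password.length < 12) return false;
  const classes = [/[a-z]/, /[A-Z]/, /[0-9]/, /[^A-Za-z0-9]/];
  return classes.filter((re) => re.test(password)).length >= 3;
}
```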

8 changes: 8 additions & 0 deletions .apiiro-test/vulnerabilities.swift
@@ -0,0 +1,8 @@
/**
* INTENTIONAL VULNERABILITIES FOR APIIRO TESTING
* DO NOT use in production. Remove before release.
*/
let hardcodedSecret = "sk_live_abc123xyz789secretkey"

Suggestion: A live secret key is hardcoded in the binary, so any compromise of the app bundle or repository exposes it; instead keep only a non-sensitive placeholder here and inject the real secret from secure storage or configuration. [security]

Severity Level: Critical 🚨
- [CRITICAL] Secret key exposed to anyone with repo access.
- [WARNING] Compiled artifacts may also embed the same secret.
Suggested change
let hardcodedSecret = "sk_live_abc123xyz789secretkey"
let hardcodedSecret = "REPLACE_WITH_SECURE_SECRET_FROM_CONFIG"
Steps of Reproduction ✅
1. Clone the repository containing this PR and open the file
`/workspace/react-native-sdk/.apiiro-test/vulnerabilities.swift`.

2. Observe at line 5 (per Read output) the declaration `let hardcodedSecret =
"sk_live_abc123xyz789secretkey"` directly in source.

3. Note that lines 1–4 explicitly mark this file as "INTENTIONAL VULNERABILITIES FOR
APIIRO TESTING" but do not prevent access to the literal key value in the file.

4. Any person or system with read access to the source repository or any compiled artifact
that includes this constant can retrieve the full secret value directly from code, meaning
the secret is effectively exposed at rest without needing to execute any particular code
path.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.swift
**Line:** 5:5
**Comment:**
	*Security: A live secret key is hardcoded in the binary, so any compromise of the app bundle or repository exposes it; instead keep only a non-sensitive placeholder here and inject the real secret from secure storage or configuration.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.

func sqlInjectionVulnerable(userInput: String) -> String { "SELECT * FROM users WHERE id = '\(userInput)'" }

Suggestion: Directly interpolating userInput into the SQL string allows an attacker to inject arbitrary SQL; switching to a parameter placeholder avoids untrusted data changing the query structure. [security]

Severity Level: Critical 🚨
- [CRITICAL] Query builder produces SQL vulnerable to injection.
- [WARNING] Future use with user input risks data exposure.
Suggested change
func sqlInjectionVulnerable(userInput: String) -> String { "SELECT * FROM users WHERE id = '\(userInput)'" }
func sqlInjectionVulnerable(userInput: String) -> String { "SELECT * FROM users WHERE id = ?" }
Steps of Reproduction ✅
1. Open `/workspace/react-native-sdk/.apiiro-test/vulnerabilities.swift` and locate
`sqlInjectionVulnerable` defined at line 6.

2. See that the function returns `"SELECT * FROM users WHERE id = '\(userInput)'"`,
directly interpolating the `userInput` string inside single quotes.

3. Manually substitute an attacker-style input, for example `userInput = "1' OR '1'='1"`,
into this expression: the returned SQL becomes `SELECT * FROM users WHERE id = '1' OR
'1'='1'`, which changes the query logic.

4. Any calling code that executes this returned string against a database as-is (no
parameterization or escaping) would thus execute attacker-controlled SQL, demonstrating a
SQL injection risk inherent in the function's current construction, even though no callers
are present in the non-hidden portions of the repository (verified by Grep over
`/workspace/react-native-sdk`).
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.swift
**Line:** 6:6
**Comment:**
	*Security: Directly interpolating `userInput` into the SQL string allows an attacker to inject arbitrary SQL; switching to a parameter placeholder avoids untrusted data changing the query structure.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.

func debugWithSensitiveData(password: String) { print("User password: \(password)") }

Suggestion: Printing the user's password directly to stdout or logs exposes credentials to anyone with access to console output; log only generic messages without including secrets. [security]

Severity Level: Critical 🚨
- [CRITICAL] Plaintext passwords exposed in application logs.
- [WARNING] Log retention increases duration of credential exposure.
Suggested change
func debugWithSensitiveData(password: String) { print("User password: \(password)") }
func debugWithSensitiveData(password: String) { print("User attempted authentication") }
Steps of Reproduction ✅
1. Open `/workspace/react-native-sdk/.apiiro-test/vulnerabilities.swift` and inspect the
`debugWithSensitiveData` function at line 7.

2. Observe that the implementation calls `print("User password: \(password)")`, directly
embedding the provided `password` value into the log message.

3. For any invocation of `debugWithSensitiveData` with a real user password, the full
plaintext password would be written to standard output or whichever logging sink captures
`print` output in the runtime environment.

4. Anyone with access to console output, log files, or aggregated logging infrastructure
could then read and reuse these passwords, compromising credential confidentiality, even
though no current callers are found in the non-hidden parts of the repository via Grep.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.swift
**Line:** 7:7
**Comment:**
	*Security: Printing the user's password directly to stdout or logs exposes credentials to anyone with access to console output; log only generic messages without including secrets.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.

func weakPasswordCheck(password: String) -> Bool { password.count >= 4 }
34 changes: 34 additions & 0 deletions .apiiro-test/vulnerabilities.ts
@@ -0,0 +1,34 @@
/**
* INTENTIONAL VULNERABILITIES FOR APIIRO TESTING
* DO NOT use in production. Remove before release.
*/

// ============ HIGH SEVERITY ============
export const HARDCODED_SECRET = 'sk_live_abc123xyz789secretkey';

Suggestion: A live secret key is hardcoded into the exported constant, so it will be bundled and easily discoverable; replace it with a non-sensitive placeholder and load the actual value from environment or secure config at runtime. [security]

Severity Level: Critical 🚨
- ❌ Repository exposes live-looking secret in test file.
- ⚠️ Secret usable even without runtime invocation.
Suggested change
export const HARDCODED_SECRET = 'sk_live_abc123xyz789secretkey';
export const HARDCODED_SECRET = 'REPLACE_WITH_SECURE_SECRET_FROM_CONFIG';
Steps of Reproduction ✅
1. Open `.apiiro-test/vulnerabilities.ts` and observe the hardcoded secret constant at
line 7: `export const HARDCODED_SECRET = 'sk_live_abc123xyz789secretkey';`.

2. Note that this file is committed to the repository and the constant is exported, so
anyone with read access to the repo (e.g., a public Git host or compromised developer
machine) can see the value without needing any code execution.

3. Confirm via Grep that `HARDCODED_SECRET` only appears in definition files
(`.apiiro-test/vulnerabilities.*`) and is not currently imported elsewhere in
`/workspace/react-native-sdk`, meaning the exposure is through source control rather than
runtime usage.

4. In a real-world scenario where this value represents a live Stripe-like secret, an
attacker can clone the repo, read `.apiiro-test/vulnerabilities.ts:7`, and immediately use
the copied key against the external service.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.ts
**Line:** 7:7
**Comment:**
	*Security: A live secret key is hardcoded into the exported constant, so it will be bundled and easily discoverable; replace it with a non-sensitive placeholder and load the actual value from environment or secure config at runtime.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
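The placeholder-plus-runtime-configuration pattern recommended above can be sketched as follows. The environment variable name `UC_API_SECRET` and the `getApiSecret` helper are hypothetical, not part of the SDK:

```typescript
// Hypothetical sketch: resolve the secret from the environment at
// runtime instead of committing it to source. Failing fast on a missing
// or placeholder value prevents shipping a build with a dummy secret.
export function getApiSecret(
  env: Record<string, string | undefined> = (globalThis as any).process?.env ?? {}
): string {
  const secret = env.UC_API_SECRET;
  if (!secret || secret.startsWith('REPLACE_WITH_')) {
    throw new Error('UC_API_SECRET is not configured; refusing to continue');
  }
  return secret;
}
```

With this shape, the committed source never contains a usable credential, and a misconfigured environment surfaces immediately instead of silently using the placeholder.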

export const API_KEY = 'AIzaSyB1234567890abcdefghijklmnop';

Suggestion: An API key is hardcoded into the client bundle, meaning it can be trivially extracted and abused; instead keep only a placeholder here and inject the real key from secure configuration. [security]

Severity Level: Critical 🚨
- ❌ API key exposed via committed test source file.
- ⚠️ Key abuse possible independent of app runtime.
Suggested change
export const API_KEY = 'AIzaSyB1234567890abcdefghijklmnop';
export const API_KEY = 'REPLACE_WITH_SECURE_API_KEY_FROM_CONFIG';
Steps of Reproduction ✅
1. Open `.apiiro-test/vulnerabilities.ts` and locate the exported `API_KEY` constant at
line 8: `export const API_KEY = 'AIzaSyB1234567890abcdefghijklmnop';`.

2. Because the value is hardcoded in source, anyone with access to the repository (or any
built bundle that includes this file) can read the API key directly from the file, without
needing to trigger any application code.

3. Grep over `/workspace/react-native-sdk` shows `API_KEY` only in the
`.apiiro-test/vulnerabilities.*` files and no imports of
`.apiiro-test/vulnerabilities.ts`, confirming this is currently a static exposure rather
than a runtime call path.

4. In a realistic deployment where this key is valid for a Google API, an attacker can
clone the repo, copy the key from line 8, and use it to call the underlying API until rate
limits or abuse detection intervene.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.ts
**Line:** 8:8
**Comment:**
	*Security: An API key is hardcoded into the client bundle, meaning it can be trivially extracted and abused; instead keep only a placeholder here and inject the real key from secure configuration.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.


Comment on lines +7 to +9


Action required

1. Hardcoded api secrets in ts 📘 Rule violation ⛨ Security

The code introduces hardcoded secret values (HARDCODED_SECRET, API_KEY) which can be extracted
from source control and abused. This violates secure data handling requirements for sensitive
credentials.
Agent Prompt
## Issue description
Hardcoded secrets/API keys were added to source code, which risks credential leakage and misuse.

## Issue Context
Even if intended as fixtures, these values can be harvested from the repo and violate secure data handling requirements.

## Fix Focus Areas
- .apiiro-test/vulnerabilities.ts[7-9]


export function sqlInjectionVulnerable(userInput: string): string {
return `SELECT * FROM users WHERE id = '${userInput}'`;

Suggestion: Constructing an SQL query by interpolating userInput directly into the string allows an attacker to inject arbitrary SQL; using a parameter placeholder prevents untrusted data from modifying the query. [security]

Severity Level: Major ⚠️
- ❌ Exported helper produces injection-prone SQL query strings.
- ⚠️ Future consumers may adopt insecure query pattern.
Suggested change
return `SELECT * FROM users WHERE id = '${userInput}'`;
return 'SELECT * FROM users WHERE id = ?';
Steps of Reproduction ✅
1. Inspect `.apiiro-test/vulnerabilities.ts` and find `sqlInjectionVulnerable` defined at
lines 10–12 returning an SQL string with `userInput` directly interpolated: `... WHERE id
= '${userInput}'`.

2. Confirm via Grep that `sqlInjectionVulnerable` is only defined in
`.apiiro-test/vulnerabilities.*` and has no callers elsewhere in
`/workspace/react-native-sdk`, so it is currently unused but exported and available for
future imports.

3. In any consumer module (e.g., a new service file), import this function: `import {
sqlInjectionVulnerable } from './.apiiro-test/vulnerabilities';` and call it with
attacker-controlled input: `sqlInjectionVulnerable("1' OR '1'='1")`.

4. Observe that the returned query string is `SELECT * FROM users WHERE id = '1' OR
'1'='1'`, which, when executed by a database client using the string as-is, would be
vulnerable to SQL injection and could return all user records.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.ts
**Line:** 11:11
**Comment:**
	*Security: Constructing an SQL query by interpolating `userInput` directly into the string allows an attacker to inject arbitrary SQL; using a parameter placeholder prevents untrusted data from modifying the query.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
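The parameterized approach the suggestion points at can be sketched by returning the SQL text and the bound values separately, so the driver performs the substitution. The `ParamQuery` shape and `findUserById` name are illustrative; real drivers (JDBC, node-postgres, sqlite) each accept text plus parameters in their own form:

```typescript
// Sketch: keep untrusted input out of the SQL text entirely by
// returning a fixed placeholder query plus a parameter list.
interface ParamQuery {
  text: string;
  params: unknown[];
}

export function findUserById(userInput: string): ParamQuery {
  return { text: 'SELECT * FROM users WHERE id = ?', params: [userInput] };
}
```

Even with the classic `"1' OR '1'='1"` payload, the query text stays fixed and the payload travels as an inert bound value.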

}
Comment on lines +10 to +12


Action required

2. Sql injection via string interpolation 📘 Rule violation ⛨ Security

The SQL query is built by concatenating/interpolating untrusted input, enabling SQL injection. This
violates the requirement to validate and safely handle external inputs to prevent injection
vulnerabilities.
Agent Prompt
## Issue description
SQL is constructed using untrusted input via string interpolation, enabling SQL injection.

## Issue Context
The compliance checklist requires proper parameterization and validation for external inputs.

## Fix Focus Areas
- .apiiro-test/vulnerabilities.ts[10-12]



export function evalVulnerable(userInput: string): unknown {
return eval(userInput);

Suggestion: Passing untrusted userInput directly to eval allows arbitrary JavaScript execution, which is a critical remote code execution risk; using safe parsing (for example JSON parsing) instead avoids running attacker-controlled code. [security]

Severity Level: Critical 🚨
- ❌ Exported helper executes attacker-controlled JavaScript via eval.
- ⚠️ Future adopters may copy unsafe eval pattern.
Suggested change
return eval(userInput);
try {
return JSON.parse(userInput);
} catch {
return userInput;
}
Steps of Reproduction ✅
1. Open `.apiiro-test/vulnerabilities.ts` and locate `evalVulnerable` at lines 14–16,
which directly calls `eval(userInput)` on its argument.

2. Verify with Grep that `evalVulnerable` is only defined in
`.apiiro-test/vulnerabilities.ts` and has no other references in
`/workspace/react-native-sdk`, indicating it is currently unused but exported.

3. In a consumer module (for example, a new handler file), import and call the function
with attacker-controlled data: `evalVulnerable("process.exit(1)")` in Node.js or
`evalVulnerable("alert('xss')")` in a browser context.

4. When that consumer is executed, the `eval` call in `.apiiro-test/vulnerabilities.ts:15`
will execute the supplied string as code, demonstrating arbitrary code execution risk.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.ts
**Line:** 15:15
**Comment:**
	*Security: Passing untrusted `userInput` directly to `eval` allows arbitrary JavaScript execution, which is a critical remote code execution risk; using safe parsing (for example JSON parsing) instead avoids running attacker-controlled code.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and keep it concise.
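The `JSON.parse` fallback in the suggested change still echoes unparseable input back to the caller. A slightly stricter sketch rejects anything that is not valid JSON outright; the `parseUntrusted` name is illustrative:

```typescript
// Sketch: treat untrusted input strictly as data. Valid JSON is parsed;
// anything else is rejected instead of being returned (or executed).
export function parseUntrusted(userInput: string): unknown {
  try {
    return JSON.parse(userInput);
  } catch {
    throw new Error('Input is not valid JSON');
  }
}
```

A payload like `"process.exit(1)"` is not valid JSON, so it is rejected rather than ever reaching an evaluator.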

}
Comment on lines +14 to +16


Action required

3. eval(userinput) remote code risk 📘 Rule violation ⛨ Security

Passing user-controlled input into eval() enables arbitrary code execution. This violates
security-first input handling and dramatically increases attack surface.
Agent Prompt
## Issue description
`eval()` is called on user-controlled input, enabling arbitrary code execution.

## Issue Context
Compliance requires secure handling of external inputs and avoiding injection-like vulnerabilities.

## Fix Focus Areas
- .apiiro-test/vulnerabilities.ts[14-16]


Comment on lines +10 to +16


Action required

5. Vulnerable fixtures unguarded 🐞 Bug ⛨ Security

The PR adds intentionally vulnerable code (SQL injection string building, eval(userInput),
sensitive logging) into the repo with only a README warning. There is no technical enforcement to
prevent accidental linting/scanning noise or an unintended merge/release with these fixtures
present.
Agent Prompt
### Issue description
`.apiiro-test/` contains intentionally vulnerable code (e.g., `eval(userInput)`, SQL injection string interpolation, sensitive logging) and is only guarded by documentation. Tooling currently lints the entire repo (`eslint .`) and `.eslintignore` does not exclude `.apiiro-test/`, so these fixtures can create ongoing noise/failures if `yarn lint` is run or linting is added to CI.

### Issue Context
Although npm publishing is restricted by `package.json.files`, these files still live in the main git repo and can be consumed via git installs/forks/mirrors, and can affect developer tooling.

### Fix Focus Areas
- .apiiro-test/README.md[1-18]
- .apiiro-test/vulnerabilities.ts[10-34]
- package.json[32-36]
- .eslintignore[1-3]
- .github/workflows/release.yml[19-22]

### Suggested changes
- Add `.apiiro-test/` to `.eslintignore` (and any other relevant ignore files such as Prettier if present).
- Add a release/CI guard step (e.g., in `release.yml` before publish) that fails if `.apiiro-test/` exists, or deletes it before packaging.
- Prefer moving these fixtures to a dedicated test-fixtures repo/branch so they never land on `master`/release branches.

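The release-guard idea in the suggested changes can be sketched as a small Node check run before packaging. The `fixturesAbsent` helper and its wiring into a `prepack` script are assumptions, not existing project code:

```typescript
// Sketch: a prepack/CI guard can call this and exit non-zero when the
// fixture directory is still present in the working tree.
import { existsSync } from 'node:fs';

export function fixturesAbsent(dir = '.apiiro-test'): boolean {
  return !existsSync(dir);
}
```

A `prepack` script would then run something like `if (!fixturesAbsent()) process.exit(1)` so a release containing the fixtures fails fast.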

Comment on lines +14 to +16

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Step 1: Locate Biome and package config ==="
fd -H 'biome\.jsonc?$|package\.json$' 2>/dev/null | head -20

echo
echo "=== Step 2: Check Biome configuration for .apiiro-test exclusion ==="
fd -H 'biome\.jsonc?$' 2>/dev/null | while read -r f; do
  echo "--- Found: $f ---"
  cat -n "$f"
  echo
done

echo
echo "=== Step 3: Verify vulnerabilities.ts exists and check current content ==="
if [ -f ".apiiro-test/vulnerabilities.ts" ]; then
  echo "--- .apiiro-test/vulnerabilities.ts ---"
  cat -n ".apiiro-test/vulnerabilities.ts"
else
  echo "File not found at .apiiro-test/vulnerabilities.ts"
fi

Repository: Usercentrics/react-native-sdk

Length of output: 3952


Add scoped suppression to prevent noGlobalEval lint failures.

The .apiiro-test directory is not excluded in biome.json and noGlobalEval is enforced as part of the recommended ruleset. This intentional vulnerability fixture will fail linting in CI. Add the suggested suppression comment:

Required fix
 export function evalVulnerable(userInput: string): unknown {
+  // biome-ignore lint/security/noGlobalEval: intentional vulnerability fixture for scanner testing
   return eval(userInput);
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
export function evalVulnerable(userInput: string): unknown {
return eval(userInput);
}
export function evalVulnerable(userInput: string): unknown {
// biome-ignore lint/security/noGlobalEval: intentional vulnerability fixture for scanner testing
return eval(userInput);
}
🧰 Tools
🪛 Biome (2.4.4)

[error] 15-15: eval() exposes to security risks and performance issues.

(lint/security/noGlobalEval)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.apiiro-test/vulnerabilities.ts around lines 14 - 16, add a scoped lint
suppression for the noGlobalEval rule immediately above the evalVulnerable
function: insert the Biome suppression comment (e.g. `// biome-ignore
lint/security/noGlobalEval: <explanation>`) directly above export function evalVulnerable(userInput:
string): unknown { ... } so the intentional eval usage is ignored by the linter
while keeping the rest of the file checked; remove or scope the suppression to
just this function if your suppression style requires re-enabling the rule after
the function.


```ts
// ============ MEDIUM SEVERITY ============
export function insecureRandomToken(): string {
  return Math.random().toString(36).substring(2);
}
```

Suggestion: Using `Math.random` to generate security tokens produces predictable values that attackers can guess; using a cryptographically secure random source when available greatly reduces the risk of token prediction. [security]

Severity Level: Major ⚠️
- ⚠️ Exported helper suggests non-cryptographic token generation.
- ⚠️ Future security features may inherit weak randomness.

Suggested change
```diff
-  return Math.random().toString(36).substring(2);
+  const cryptoObj = (globalThis as any).crypto;
+  if (cryptoObj && typeof cryptoObj.getRandomValues === 'function') {
+    const array = new Uint32Array(4);
+    cryptoObj.getRandomValues(array);
+    return Array.from(array, (value) => value.toString(36)).join('');
+  }
+  return Math.random().toString(36).substring(2); // fallback so all code paths return; still insecure
```

Steps of Reproduction ✅
1. Inspect `.apiiro-test/vulnerabilities.ts` and find `insecureRandomToken` at lines 19–21 returning `Math.random().toString(36).substring(2)`.
2. Confirm with Grep that `insecureRandomToken` appears only in `.apiiro-test/vulnerabilities.ts` and is not called elsewhere in `/workspace/react-native-sdk`, so it is currently unused but exportable.
3. In a consumer module, import and use it as a security token generator, for example `const token = insecureRandomToken();` for password reset links.
4. Because `Math.random` is not cryptographically secure, an attacker who can observe a few generated tokens can approximate the PRNG state and significantly narrow the search space to guess or brute-force other tokens.

Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.ts
**Line:** 20:20
**Comment:**
	*Security: Using `Math.random` to generate security tokens produces predictable values that attackers can guess; using a cryptographically secure random source when available greatly reduces the risk of token prediction.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
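For consumers that run on Node, the direction above can also be sketched with Node's built-in `crypto` module (the helper name is illustrative, not part of this SDK):

```typescript
import { randomBytes } from 'crypto';

// Derive a URL-safe token from a cryptographically secure source.
// 16 random bytes ≈ 128 bits of entropy, unlike the guessable Math.random stream.
export function secureRandomToken(byteLength: number = 16): string {
  return randomBytes(byteLength).toString('base64url');
}
```

Unlike the `getRandomValues` variant, this sketch assumes a Node runtime; inside React Native itself a polyfill such as `react-native-get-random-values` is typically needed.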

```ts
export function debugWithSensitiveData(user: { password: string }) {
  console.log('User auth:', user);
}
```
Comment on lines +23 to +25


Action required

4. Logs include user password object 📘 Rule violation ⛨ Security

The code logs an object containing a password, risking credential exposure in logs. This violates
secure logging requirements prohibiting sensitive data in log output.
Agent Prompt
## Issue description
Sensitive credentials (password) are logged, which can leak secrets via log pipelines.

## Issue Context
Compliance requires logs to contain no PII/secrets at any log level.

## Fix Focus Areas
- .apiiro-test/vulnerabilities.ts[23-25]


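One common remediation for the finding above, sketched under the assumption that log payloads are plain objects (the field list and helper names are illustrative):

```typescript
const SENSITIVE_KEYS = new Set(['password', 'token', 'secret']);

// Shallow-copy a payload with sensitive fields masked before it reaches any logger.
export function redact(payload: Record<string, unknown>): Record<string, unknown> {
  const clean: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    clean[key] = SENSITIVE_KEYS.has(key) ? '[REDACTED]' : value;
  }
  return clean;
}

export function debugWithoutSensitiveData(user: { name?: string; password: string }) {
  console.log('User auth:', redact(user)); // password is masked, other fields pass through
}
```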

```ts
// ============ LOW SEVERITY ============
// TODO: Security fix needed
// FIXME: Add validation
export function weakPasswordCheck(password: string): boolean {
  return password.length >= 4;
}
```

Suggestion: Allowing any password with length at least 4 is extremely weak and makes brute-force attacks easy; increasing the minimum length strengthens password security. [security]

Severity Level: Major ⚠️
- ⚠️ Exported helper encodes an extremely weak password policy.
- ⚠️ Future auth features may adopt the insecure minimum length.

Suggested change
```diff
-  return password.length >= 4;
+  return password.length >= 8;
```

Steps of Reproduction ✅
1. Open `.apiiro-test/vulnerabilities.ts` and locate `weakPasswordCheck` at lines 30–32, which currently returns `true` for any password with length at least 4.
2. Use Grep to verify `weakPasswordCheck` only appears in `.apiiro-test/vulnerabilities.*` and is not used in other files, confirming it is exported but not yet part of any authentication flow.
3. In a hypothetical authentication module, import and rely on `weakPasswordCheck` for policy enforcement, e.g. `if (!weakPasswordCheck(password)) reject();`.
4. Supply a simple password like `"1234"` or `"test"`, which passes the check (`true`), demonstrating that the current minimum length allows trivially guessable passwords.

Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.ts
**Line:** 31:31
**Comment:**
	*Security: Allowing any password with length at least 4 is extremely weak and makes brute-force attacks easy; increasing the minimum length strengthens password security.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
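Beyond raising the minimum length, a policy check can also require some character variety; the thresholds below are illustrative assumptions rather than an established standard:

```typescript
// Require at least 8 characters plus at least one letter and one digit.
// Real policies should follow published guidance (e.g. NIST SP 800-63B).
export function strongerPasswordCheck(password: string): boolean {
  return password.length >= 8 && /[a-zA-Z]/.test(password) && /[0-9]/.test(password);
}
```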

```ts
export const INSECURE_URL = 'http://api.example.com/data';
```

Suggestion: Using plain HTTP for an API endpoint exposes requests to eavesdropping and tampering; switching the URL to HTTPS ensures transport-level encryption. [security]

Severity Level: Major ⚠️
- ⚠️ Exported constant encourages use of non-TLS HTTP endpoint.
- ⚠️ Future HTTP calls may transmit data unencrypted.
Suggested change
```diff
-export const INSECURE_URL = 'http://api.example.com/data';
+export const INSECURE_URL = 'https://api.example.com/data';
```
Steps of Reproduction ✅
1. Inspect `.apiiro-test/vulnerabilities.ts` and find `INSECURE_URL` at line 34 set to
`http://api.example.com/data`.

2. Verify with Grep that `INSECURE_URL` is only defined in
`.apiiro-test/vulnerabilities.*` and not referenced elsewhere in
`/workspace/react-native-sdk`, so it is currently unused but available for import.

3. In a consumer module, import and use `INSECURE_URL` with `fetch(INSECURE_URL, {
credentials: 'include' })` or a similar HTTP client call to send sensitive data.

4. Because the URL uses plain HTTP, any intermediary on the network path (e.g., local
Wi‑Fi attacker or proxy) can eavesdrop on and tamper with the request and response
traffic.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** .apiiro-test/vulnerabilities.ts
**Line:** 34:34
**Comment:**
	*Security: Using plain HTTP for an API endpoint exposes requests to eavesdropping and tampering; switching the URL to HTTPS ensures transport-level encryption.

Validate the correctness of the flagged issue. If correct, how can I resolve this? If you propose a fix, implement it and please make it concise.
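A defensive complement to fixing the constant, sketched with an illustrative helper, is to normalize URLs at the call site so plain-HTTP endpoints never reach the network layer:

```typescript
// Upgrade http:// to https:// and reject any other non-HTTPS scheme.
export function enforceHttps(url: string): string {
  const parsed = new URL(url);
  if (parsed.protocol === 'http:') {
    parsed.protocol = 'https:'; // transparent upgrade for legacy constants
  }
  if (parsed.protocol !== 'https:') {
    throw new Error(`Refusing non-HTTPS URL: ${url}`);
  }
  return parsed.toString();
}
```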
