
Set up initial test suite #2

Merged
jreakin merged 2 commits into main from initial-test-suite
Jan 29, 2026

Conversation

@jreakin
Member

@jreakin jreakin commented Jan 28, 2026

Summary by CodeRabbit

  • New Features

    • Integrated Sentry for error monitoring and performance tracking via DSN configuration.
  • Documentation

    • Added comprehensive testing documentation and CI/CD pipeline setup.
    • Added CI/CD badges and Sentry configuration references to README.
  • Tests

    • Implemented extensive test suite covering unit, integration, end-to-end, and stateful testing with automated coverage reporting.


@coderabbitai

coderabbitai bot commented Jan 28, 2026

Caution

Review failed

The pull request is closed.

Walkthrough

This pull request introduces a comprehensive testing infrastructure and Sentry error tracking integration. It adds Sentry configuration to environment variables and application startup, implements a complete test suite spanning unit, integration, end-to-end, and stateful tests with property-based testing via Hypothesis, establishes a GitHub Actions CI workflow with coverage reporting, and includes test documentation and planning resources.

Changes

Cohort / File(s) — Summary
Sentry Integration
.env.example, README.md, src/config.py, src/main.py
Adds SENTRY_DSN and SENTRY_TRACES_SAMPLE_RATE environment configuration, initializes Sentry SDK with FastAPI/MCP integrations on application startup, documents new configuration in README with CI/coverage badges.
Testing Infrastructure
pyproject.toml, .github/workflows/ci.yml
Adds testing dependencies (pytest-cov, pytest-mock, hypothesis, respx, faker), configures pytest with markers and coverage settings, establishes GitHub Actions CI workflow with dependency installation, test execution, and Codecov reporting.
Test Configuration & Fixtures
tests/conftest.py, tests/__init__.py
Provides reusable pytest fixtures for HTTP mocking (respx), service mocking (CopilotService, Supabase), FastAPI TestClient, sample data (BotConfiguration, AgentContext), and mock settings for deterministic testing across all test categories.
Test Documentation
.cursor/plans/comprehensive_test_implementation_plan_483dd72f.plan.md, TESTING.md, REFACTORING_ASSESSMENT.md
Outlines comprehensive test strategy across unit/integration/e2e/stateful categories, documents testing patterns, Hypothesis property-based testing approach, test execution workflows, and assessment of test suite improvements and resource cleanup requirements.
Unit Tests
tests/unit/test_*.py
Comprehensive unit test suite covering individual components: configuration, models (messenger, agent, config), services (copilot, facebook, agent, reference doc), database client, repository operations, scraper, CLI setup, with extensive property-based testing via Hypothesis for invariant validation.
Integration Tests
tests/integration/test_*.py
Tests component interactions: agent with copilot service, scraper with copilot for document building, repository database operations with mocked Supabase client, validating end-to-end flows and data persistence patterns.
End-to-End Tests
tests/e2e/test_*.py
API endpoint validation via FastAPI TestClient: health endpoint, application initialization, root endpoint, CORS middleware, router registration, webhook verification and message flow handling with realistic request/response patterns.
Stateful Tests
tests/stateful/test_*.py
Stateful test scenarios simulating multi-step workflows: conversation context maintenance across agent interactions, bot configuration lifecycle operations (CRUD), message history accumulation, using mocked services to validate state invariants.
Code Maintenance
src/cli/setup_cli.py
Adds _run_async_with_cleanup() helper function to properly manage asyncio event loops and resource cleanup when running async operations, applied to website scraping and reference document building in setup flow.
Package Structure & Generated Artifacts
src/__init__.py, src/api/__init__.py, src/cli/__init__.py, src/db/__init__.py, src/models/__init__.py, src/services/__init__.py, tests/unit/__init__.py, tests/integration/__init__.py, tests/e2e/__init__.py, tests/stateful/__init__.py, .hypothesis/constants/*, coverage.xml, junit.xml
Establishes package initializers across src and test modules, includes Hypothesis-generated test constants and coverage/test execution reports (generated artifacts from test runs).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Poem

🐰 Whiskers twitch with testing glee
Sentry tracks where errors be
Hypothesis checks properties true
From unit tests to e2e blue
Coverage blooms, workflows flow
The codebase strengthens, don't you know!


Member Author

jreakin commented Jan 28, 2026

@jreakin jreakin marked this pull request as ready for review January 28, 2026 04:44
@jreakin jreakin mentioned this pull request Jan 28, 2026
Copilot AI review requested due to automatic review settings January 28, 2026 04:44
@sentry

sentry bot commented Jan 28, 2026

Welcome to Codecov 🎉

Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests.

ℹ️ You can also turn on project coverage checks and project coverage reporting on Pull Request comment

Thanks for integrating Codecov - We've got you covered ☂️


Copilot AI left a comment


Pull request overview

This PR sets up a comprehensive initial test suite for a Facebook Messenger AI Bot application, introducing unit, integration, end-to-end, and property-based tests using Hypothesis. The test suite covers core functionality including scraping, agent services, database operations, and webhook handling.

Changes:

  • Added comprehensive test coverage across unit, integration, e2e, and stateful test categories
  • Configured pytest with Hypothesis for property-based testing
  • Set up CI/CD pipeline with GitHub Actions for automated testing and code coverage
  • Added detailed TESTING.md documentation outlining testing strategies

Reviewed changes

Copilot reviewed 51 out of 54 changed files in this pull request and generated 48 comments.

Show a summary per file
File — Description
tests/unit/*.py (12 files) Unit tests for individual services, models, and utilities with Hypothesis property-based tests
tests/integration/*.py (4 files) Integration tests for service combinations and cross-component interactions
tests/e2e/*.py (5 files) End-to-end tests for complete webhook flows and API endpoints
tests/stateful/*.py (3 files) Stateful tests using Hypothesis for conversation flows and configuration management
tests/conftest.py Shared pytest fixtures including mocks for Copilot, Supabase, and test clients
pyproject.toml Pytest and coverage configuration with Hypothesis settings
TESTING.md Comprehensive testing documentation with examples and best practices
.github/workflows/ci.yml CI/CD pipeline for automated testing and coverage reporting
.hypothesis/* Hypothesis cache files (should not be committed)


Comment on lines +49 to +53
# Mock scraping
mock_scrape.return_value = ["chunk1", "chunk2", "chunk3"]

# Mock reference doc building
mock_build_ref.return_value = ("# Reference Document", "hash123")

Copilot AI Jan 28, 2026


The mock for scrape_website and build_reference_doc should return coroutines since these are async functions called with asyncio.run() in the actual code. The current setup with mock_scrape.return_value = ["chunk1", "chunk2", "chunk3"] will not work correctly because asyncio.run() expects a coroutine object, not a direct return value.

Consider using AsyncMock or wrapping the return value in a coroutine:

async def async_return(value):
    return value

mock_scrape.return_value = async_return(["chunk1", "chunk2", "chunk3"])

Or use AsyncMock:

mock_scrape = AsyncMock(return_value=["chunk1", "chunk2", "chunk3"])

Copilot uses AI. Check for mistakes.
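As a minimal, self-contained illustration of the AsyncMock approach (the scrape_website name here is just a stand-in for the real function, not the project's actual implementation):

```python
import asyncio
from unittest.mock import AsyncMock

# Stand-in for the real async scrape_website. AsyncMock makes each
# call return a fresh coroutine, which is what asyncio.run() expects.
scrape_website = AsyncMock(return_value=["chunk1", "chunk2", "chunk3"])

chunks = asyncio.run(scrape_website("https://example.com"))
print(chunks)  # → ['chunk1', 'chunk2', 'chunk3']
```

Unlike wrapping the value in a hand-rolled coroutine, AsyncMock also works when the mocked function is awaited more than once, since a new coroutine is produced per call.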
Comment on lines +58 to +59
"error",
"ignore::DeprecationWarning",

Copilot AI Jan 28, 2026


The filterwarnings configuration is set to treat all warnings as errors with "error", then ignores DeprecationWarning. This might be overly strict and could cause test failures from third-party library warnings. Consider using a more lenient approach such as:

filterwarnings = [
    "ignore::DeprecationWarning",
    "ignore::PendingDeprecationWarning",
]

Or selectively upgrading specific warnings to errors that are relevant to your codebase.

Suggested change:

- "error",
- "ignore::DeprecationWarning",
+ "ignore::DeprecationWarning",
+ "ignore::PendingDeprecationWarning",

run: uv sync --extra dev

- name: Run tests with coverage
run: uv run pytest --cov --cov-branch --cov-report=xml --junitxml=junit.xml -o junit_family=legacy

Copilot AI Jan 28, 2026


The -o junit_family=legacy option may be unnecessary here: the junit_family default changed to xunit2 in pytest 6.1, and the legacy value simply maps to the old xunit1 format. Unless the Codecov test-results tooling specifically requires the legacy format, consider dropping the flag.

Suggested change:

- run: uv run pytest --cov --cov-branch --cov-report=xml --junitxml=junit.xml -o junit_family=legacy
+ run: uv run pytest --cov --cov-branch --cov-report=xml --junitxml=junit.xml

Comment on lines +182 to +184
mock_scrape.return_value = ["chunk1"]
mock_build_ref.return_value = ("# Doc", "hash")
mock_create_ref_doc.side_effect = Exception("Database error")

Copilot AI Jan 28, 2026


Async mocking issue: these mocks need to return coroutines for the async functions scrape_website and build_reference_doc.

Comment on lines +274 to +277
mock_scrape.return_value = ["chunk1"]
mock_build_ref.return_value = ("# Doc", "hash")
mock_create_ref_doc.return_value = "doc-123"
mock_create_bot.return_value = MagicMock()

Copilot AI Jan 28, 2026


Async mocking issue present in this test as well with scrape_website, build_reference_doc, and create_reference_document mocks.

@@ -0,0 +1,150 @@
"""End-to-end tests for webhook message processing flow."""

import pytest

Copilot AI Jan 28, 2026


Import of 'pytest' is not used.

from unittest.mock import patch

from src.main import app
from fastapi.testclient import TestClient

Copilot AI Jan 28, 2026


Import of 'TestClient' is not used.

@@ -0,0 +1,137 @@
"""End-to-end tests for webhook verification."""

import pytest

Copilot AI Jan 28, 2026


Import of 'pytest' is not used.

from unittest.mock import patch

from src.main import app
from fastapi.testclient import TestClient

Copilot AI Jan 28, 2026


Import of 'TestClient' is not used.

@given(text=st.text(min_size=0, max_size=1000))
def test_whitespace_normalization_property(self, text: str):
"""Property: Whitespace normalization should preserve content."""
import re

Copilot AI Jan 28, 2026


This import of module re is redundant, as it was previously imported on line 5.

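For context, the property exercised above can be sketched with a plain stdlib version of the normalization (the normalize_whitespace helper here is hypothetical; the project's real implementation may differ):

```python
import re

def normalize_whitespace(text: str) -> str:
    # Collapse any run of whitespace into a single space and trim the ends
    return re.sub(r"\s+", " ", text).strip()

# Property: normalization may change spacing, but never the
# non-whitespace content itself.
for sample in ["  hello   world ", "a\n\tb\r\nc", ""]:
    assert "".join(normalize_whitespace(sample).split()) == "".join(sample.split())
```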
Comment on lines +32 to +39
pending = [task for task in asyncio.all_tasks(loop) if not task.done()]
for task in pending:
task.cancel()
# Wait for cancellation to complete, ignoring exceptions
if pending:
loop.run_until_complete(
asyncio.gather(*pending, return_exceptions=True)
)

This comment was marked as outdated.

@jreakin jreakin force-pushed the initial-test-suite branch 4 times, most recently from 7f0002b to 13d9387 Compare January 28, 2026 05:26

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 58

🤖 Fix all issues with AI agents
In @.cursor/plans/comprehensive_test_implementation_plan_483dd72f.plan.md:
- Around line 380-381: This file is missing a trailing newline at EOF; open the
file containing the final list item "12. Verify all tests pass and coverage
meets goals" and add a single newline character after the last line so the file
ends with one trailing newline (POSIX-compliant).
- Around line 81-111: The fenced code block that displays the tests/ tree
(starts with ``` and the content beginning "tests/ ├── __init__.py") lacks a
language specifier; update the opening fence to include a language (e.g., change
``` to ```text) so the block becomes ```text and ensure the rest of the tree
content remains unchanged.

In @.env.example:
- Around line 26-29: The .env example currently embeds a real-looking Sentry
DSN; update the SENTRY_DSN entry (symbol: SENTRY_DSN) to a non-sensitive
placeholder (e.g., "your_sentry_dsn_here" or "<SENTRY_DSN>") instead of the live
value, leaving SENTRY_TRACES_SAMPLE_RATE as-is; ensure only the value for
SENTRY_DSN in the .env.example is replaced so the key name remains unchanged.

In @.github/workflows/ci.yml:
- Around line 31-44: Gate both Codecov steps on the presence of the
CODECOV_TOKEN secret and ensure uploads don't fail the CI: add an if condition
checking the token (e.g., for "Upload coverage reports to Codecov" and "Upload
test results to Codecov" use if: ${{ secrets.CODECOV_TOKEN != '' }} and for the
test-results step combine with existing cancellation guard like if: ${{
secrets.CODECOV_TOKEN != '' && !cancelled() }}), and add fail_ci_if_error: false
to the "Upload test results to Codecov" step to match the coverage step.
- Around line 12-23: Replace floating action tags and "latest" uv version with
pinned commit SHAs and explicit version numbers: change uses:
actions/checkout@v4 to the full commit SHA for the desired v4 release (e.g.,
actions/checkout@<commit-sha>), change uses: astral-sh/setup-uv@v4 to the
specific commit SHA for that release and set with: version to the exact uv
release (e.g., "0.9.27"), and change uses: actions/setup-python@v5 to the full
commit SHA for the chosen v5 release and keep python-version: "3.12"; also
consider replacing any runner "latest" labels with explicit runner labels (e.g.,
ubuntu-24.04) and enable Dependabot for future updates.
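The token-gated Codecov steps described above can be sketched as follows (the action versions and step names are assumptions; the secrets context is available in step-level if conditions):

```yaml
- name: Upload coverage reports to Codecov
  if: ${{ secrets.CODECOV_TOKEN != '' }}
  uses: codecov/codecov-action@v4
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    fail_ci_if_error: false

- name: Upload test results to Codecov
  if: ${{ secrets.CODECOV_TOKEN != '' && !cancelled() }}
  uses: codecov/test-results-action@v1
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    fail_ci_if_error: false
```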

In @.hypothesis/constants/398f56ebff5c3504:
- Around line 1-4: Remove the committed Hypothesis cache artifact
(.hypothesis/constants/398f56ebff5c3504) from the repo and add the
`.hypothesis/` directory to `.gitignore` to prevent leaking local developer
paths; specifically, delete the file from the commit (or use git rm --cached if
you need to preserve locally) and update .gitignore to include a line
`.hypothesis/`, then commit the changes.

In @.hypothesis/constants/5a4f7d06b9c8205f:
- Around line 1-4: The committed file .hypothesis/constants/5a4f7d06b9c8205f
contains a generated Hypothesis artifact and exposes a local filesystem path
(PII); remove that file from the repository, add a .hypothesis/ entry to
.gitignore to prevent re-adding generated artifacts, and make a commit that
deletes the file and updates .gitignore; ensure no other .hypothesis/* files
remain tracked (use git rm --cached if needed) before committing.

In @.hypothesis/constants/5fa0cb67541f36b0:
- Around line 1-4: Remove the committed Hypothesis cache by deleting the
.hypothesis directory from the repository (including the file
.hypothesis/constants/5fa0cb67541f36b0) and stop tracking it, then add a
.hypothesis/ entry to .gitignore so it is not re-committed; ensure you untrack
the files in git (so local cache stays but repo no longer contains absolute
paths), commit the removal and .gitignore change, and verify tests still run
locally.

In @.hypothesis/constants/842e41bc3d902d92:
- Around line 1-4: Remove the generated Hypothesis cache file
(.hypothesis/constants/842e41bc3d902d92) from version control and prevent future
commits by adding the .hypothesis/ directory to .gitignore; specifically delete
the tracked file (the entry shown with the absolute path and test run metadata)
and update .gitignore to contain a line ".hypothesis/" so the folder and its
files (like the constants file) are ignored going forward, then commit the
.gitignore change and the removal.

In @.hypothesis/constants/8b50f535c13f84d4:
- Around line 1-4: Repository contains committed Hypothesis artifacts under the
.hypothesis directory which should be removed from version control; remove all
tracked .hypothesis/* entries (e.g., git rm -r --cached .hypothesis or remove
them from the index), add a `.hypothesis` entry to `.gitignore`, and commit both
changes in a single commit (or two logically grouped commits) so the directory
is no longer tracked but remains available locally for Hypothesis to regenerate
when tests run.

In @.hypothesis/constants/99413fd42c41f490:
- Around line 1-4: Remove the committed Hypothesis database file
(.hypothesis/constants/99413fd42c41f490) and stop tracking the .hypothesis
directory: delete that file from the repo, add an entry ".hypothesis/" to
.gitignore, and if already committed use git rm --cached on the tracked
.hypothesis files to untrack them; confirm no sensitive local paths remain in
src/api/__init__.py or other files and, if you need reproducible examples,
configure Hypothesis to use a repository-local (relative) database instead of
committing .hypothesis.

In @.hypothesis/constants/a31f4e8ec06e60ab:
- Around line 1-4: Add the ".hypothesis/" directory to .gitignore and remove any
already-tracked hypothesis cache files from Git so they stop being version
controlled; specifically, add the line ".hypothesis/" to your .gitignore, then
untrack the existing file shown (e.g., .hypothesis/constants/a31f4e8ec06e60ab)
from the repository index and commit the change so future runs won't reintroduce
these machine-specific cache files.

In @.hypothesis/constants/b4f90a2ba85e3776:
- Around line 1-4: Remove the generated Hypothesis cache file (.hypothesis/*)
from the repository and add .hypothesis/ to .gitignore so it doesn’t get
committed again: delete the file shown in the diff (the .hypothesis cache file
that contains the absolute local path), update .gitignore to include the entry
".hypothesis/" (or ".hypothesis" ) and commit both changes together with a
message like "Remove Hypothesis cache and ignore .hypothesis/".

In @.hypothesis/constants/c14e13baf0ad84d6:
- Around line 1-4: The repo is tracking Hypothesis cache files (the .hypothesis/
directory, e.g., .hypothesis/constants/c14e13baf0ad84d6) which should be
untracked; remove all .hypothesis/* entries from version control and add a rule
for .hypothesis/ to .gitignore, then commit the changes. Specifically: delete
the 26 committed .hypothesis files from the index (git rm --cached or
equivalent), add ".hypothesis/" to .gitignore, and commit the removal so the
directory is no longer tracked while remaining on local machines.

In @.hypothesis/constants/ce580a5b100a39d7:
- Around line 1-4: The repo has committed Hypothesis example DB files under the
.hypothesis directory (e.g., files showing absolute paths like /Users/...), so
add the directory to .gitignore and stop tracking those files: add a line
".hypothesis/" to .gitignore, remove the tracked .hypothesis files from git's
index (e.g., git rm --cached for those files), and commit the change so future
Hypothesis artifacts are not committed; verify by running git status to ensure
the files are ignored.

In `@coverage.xml`:
- Around line 1-7: The committed coverage.xml embeds a local absolute path in
the <source> element which leaks environment details; remove coverage.xml from
the repository, add coverage.xml (or the coverage output directory) to
.gitignore, and ensure CI/artifacts regenerate and store coverage reports
instead of committing; alternatively, if a committed coverage file is required,
replace the absolute path in the <source> element with a relative path or a
placeholder before committing.

In `@junit.xml`:
- Line 1: Remove the committed test artifact junit.xml from version control and
prevent future commits by adding junit.xml to .gitignore; run git rm --cached
junit.xml (or the equivalent) to stop tracking the file and commit that removal
and the updated .gitignore. Also audit the repo for other generated test outputs
(e.g., coverage reports) and add their filenames/patterns to .gitignore as
needed so CI-produced artifacts (like junit.xml and coverage files) are never
committed again.

In `@README.md`:
- Around line 1-5: Move the top-level heading "# Facebook Messenger AI Bot" to
be the very first line of README.md (above the badge lines) so the H1 appears
before the badges and satisfies MD041; update the file so the badges follow the
H1 in the existing order.

In `@REFACTORING_ASSESSMENT.md`:
- Around line 49-54: The markdown headings "Immediate Actions" and "Future
Considerations (Only if needed)" are missing required blank lines after them
(MD022); open REFACTORING_ASSESSMENT.md and insert a single blank line
immediately after the "### Immediate Actions" and the "### Future Considerations
(Only if needed)" headings so each heading is followed by an empty line before
the next content block.

In `@src/config.py`:
- Around line 65-73: The sentry_traces_sample_rate Field has no bounds even
though the description says 0.0–1.0; update the Field call for
sentry_traces_sample_rate (in src/config.py) to include ge=0.0 and le=1.0 so
Pydantic validates values on instantiation (keep the existing default and
description).
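The bounded field being suggested can be sketched like this (the field name comes from the review comment; the default value and description text are assumptions):

```python
from pydantic import BaseModel, Field, ValidationError

class Settings(BaseModel):
    # ge/le make Pydantic reject out-of-range values at instantiation
    sentry_traces_sample_rate: float = Field(
        default=0.1,
        ge=0.0,
        le=1.0,
        description="Fraction of transactions sampled for tracing (0.0-1.0)",
    )

print(Settings(sentry_traces_sample_rate=0.5).sentry_traces_sample_rate)
try:
    Settings(sentry_traces_sample_rate=1.5)
except ValidationError:
    print("out-of-range value rejected")
```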

In `@src/main.py`:
- Around line 24-33: The code currently hard-codes send_default_pii=True in
sentry_sdk.init; make PII capture configurable and default to False by adding a
new Settings field (e.g., sentry_send_default_pii: bool = Field(default=False,
...)) to the Settings class in src/config.py, then change the sentry
initialization in sentry_sdk.init (in main.py) to use
settings.sentry_send_default_pii instead of the literal True so PII capture is
opt-in (refer to settings.sentry_dsn, sentry_sdk.init, and send_default_pii).

In `@TESTING.md`:
- Around line 1-5: Add a top-level heading to the very start of the document
above the existing line "Testing strategies, evaluation sets, and CI/CD
integration." (e.g., "# Testing") so the file begins with an H1 for proper
document structure and accessibility; ensure the heading is the first line and
keep the existing content unchanged below it.
- Around line 64-76: The docs/tests use the non-existent Hypothesis strategy
st.urls(); replace st.urls() with the project's custom url_strategy() wherever
it appears (e.g., in the test function test_scrape_website_properties that calls
scrape_website) and update the other occurrences of st.urls() in the file to
url_strategy() so examples match the actual test implementation.

In `@tests/conftest.py`:
- Around line 170-185: The fixture mock_facebook_api currently declares an
unused monkeypatch parameter; remove the unused parameter from the function
signature (change def mock_facebook_api(monkeypatch): to def
mock_facebook_api():) and ensure any tests or other fixtures that reference
mock_facebook_api are unaffected; mirror the same pattern used by
mock_httpx_client by updating the fixture declaration and leaving the internal
async mock_post implementation intact.
- Around line 75-103: The fixture mock_httpx_client declares an unused
monkeypatch parameter; remove monkeypatch from the fixture signature so it
becomes def mock_httpx_client(): and leave the body (mock_get, mock_post,
mock_close, mock_client setup, __aenter__/__aexit__) unchanged; update any tests
that explicitly request the fixture with the old signature only if they relied
on monkeypatch injection (they shouldn't), and ensure the fixture name
mock_httpx_client remains referenced where used.
- Around line 117-118: Replace uses of datetime.utcnow() for the "created_at"
and "updated_at" fields with timezone-aware datetimes: call
datetime.now(timezone.utc).isoformat() instead of datetime.utcnow().isoformat(),
and ensure timezone is imported from datetime (e.g., add timezone to the import
that provides datetime) so the timestamps are explicit UTC-aware.
- Around line 106-126: The sample_bot_config fixture includes unexpected keys
that cause Pydantic validation to fail when instantiating BotConfiguration;
remove the facebook_page_access_token and facebook_verify_token entries from the
dict returned by sample_bot_config so that BotConfiguration(**sample_bot_config)
in the sample_bot_configuration fixture only receives the fields defined on the
BotConfiguration model.
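The timezone-aware replacement called for in several of these comments is a one-line change:

```python
from datetime import datetime, timezone

# Naive and deprecated since Python 3.12:
#   datetime.utcnow()
# Timezone-aware replacement:
now = datetime.now(timezone.utc)

assert now.tzinfo is timezone.utc
print(now.isoformat())  # ends with "+00:00" instead of carrying no offset
```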

In `@tests/e2e/test_main.py`:
- Around line 97-101: test_app_metadata duplicates assertions already checked in
test_app_initialization (app.title and app.version); either merge the two tests
into one consolidated test_app_initialization that asserts title, version, and
description, or remove the overlapping assertions from test_app_metadata and
make it assert only the distinct property (e.g., app.description) and rename the
test to reflect its focused intent; update assertions in test_app_metadata
and/or test_app_initialization accordingly so each test covers unique behavior.
- Around line 58-95: The test currently only asserts lifespan exists; instead
run the actual ASGI lifespan context to exercise startup: import lifespan from
src.main and enter the async context (e.g., using pytest-asyncio or anyio) so
the startup code runs, then assert expected side effects such as
get_supabase_client (mock_get_supabase) being called, CopilotService being
instantiated (mock_copilot_service) and its is_available awaited, and any
settings-dependent behavior on the mocked Settings; finally exit the context to
trigger shutdown and assert cleanup calls if applicable.

In `@tests/e2e/test_webhook_message_flow.py`:
- Around line 112-150: The test currently asserts properties of the local
payload rather than verifying the webhook handler extracted and forwarded the
message; update test_webhook_message_extraction to either remove the redundant
assertions about payload structure or (preferred) patch the actual message
handler that /webhook calls (e.g., patch src.api.webhook.handle_message) and
assert that this mocked handler was called once with the expected extracted
message content (sender id "user-789", recipient "page-123", text "Test message
text", timestamp 1234567890) after calling test_client.post("/webhook",
json=payload); keep mock_get_settings and use the mocked handler call assertions
in place of the payload checks.
- Around line 10-56: Replace the duplicated Settings instantiation in
test_webhook_message_processing by using the existing mock_settings fixture from
tests/conftest.py: add mock_settings as a test parameter (in addition to
mock_get_settings and test_client), remove the local Settings(...) construction,
and have mock_get_settings.return_value = mock_settings so the test reuses the
shared fixture and avoids duplication; keep the rest of
test_webhook_message_processing (payload, POST, assertions) unchanged.

In `@tests/e2e/test_webhook_verification.py`:
- Around line 18-24: Extract the repeated Settings construction into a shared
pytest fixture or helper function (e.g., a fixture named settings_fixture or
helper make_test_settings) and have tests use that fixture/helper instead of
re-creating Settings inline; update places that set
mock_get_settings.return_value to use the fixture (e.g., replace
mock_get_settings.return_value = mock_settings with
mock_get_settings.return_value = settings_fixture or make_test_settings()) and
ensure the fixture returns a Settings(...) instance with
facebook_page_access_token, facebook_verify_token, supabase_url, and
supabase_service_key populated as in the diff so all tests reuse the single
source of truth.

In `@tests/integration/test_agent_integration.py`:
- Around line 77-110: In test_agent_with_recent_messages move the inline "import
json" out of the test body and add "import json" to the module-level imports at
the top of tests/integration/test_agent_integration.py, then remove the inline
import on the line where payload is built (the one before payload =
json.loads(request.read())); keep using json.loads(request.read()) unchanged and
run tests to ensure imports resolve correctly.

In `@tests/integration/test_repository_db.py`:
- Line 27: Replace the deprecated naive timestamp creation used in the test (the
assignment to now that calls datetime.utcnow()) with a timezone-aware timestamp
by using datetime.now(timezone.utc); update the import to include timezone from
datetime if not already imported and adjust any subsequent comparisons or
assertions involving the variable now (or functions that consume it) to handle
an aware datetime rather than a naive one.
- Around line 19-64: The test test_bot_configuration_lifecycle fails to mock the
internal call to link_reference_document_to_bot invoked by
create_bot_configuration; patch or mock link_reference_document_to_bot in the
test (e.g., using `@patch` or monkeypatch targeting
src.db.repository.link_reference_document_to_bot) and make it return a suitable
MagicMock/None so it does not perform Supabase operations, ensuring the rest of
the Supabase table mocks remain the only external interactions exercised by
create_bot_configuration and get_bot_configuration_by_page_id.

In `@tests/stateful/test_agent_conversation.py`:
- Around line 82-87: The loop assigns the result of agent.respond(...) to an
unused variable response which triggers Ruff F841; to indicate the value is
intentionally unused, rename the variable to _response (or use _ = await
agent.respond(...)) inside the loop where agent.respond(context, f"Message {i}")
is called, leaving the rest of the logic (appending to context.recent_messages
and trimming) unchanged.

In `@tests/stateful/test_bot_configuration.py`:
- Around line 10-11: The test class TestBotConfigurationStateful is not using
Hypothesis stateful testing; either rename/move this file/class to tests/unit
(e.g., TestBotConfiguration) to reflect it's a plain unit test, or convert it to
a real Hypothesis stateful test by subclassing
hypothesis.stateful.RuleBasedStateMachine (or RuleBasedStateMachine), implement
state variables and `@rule` methods for operations currently exercised, add
`@invariant` checks for expected properties, and expose it via
hypothesis.stateful.run_state_machine_as_test or a test class that inherits from
the state machine; update imports and test harness accordingly so the test
runner executes the new stateful machine instead of plain unit methods.
- Line 19: Replace uses of the deprecated naive datetime.utcnow() with
timezone-aware datetime.now(timezone.utc) in test_bot_configuration.py: update
occurrences of datetime.utcnow() (e.g., where variable now is assigned) to
datetime.now(timezone.utc) and add/import timezone (from datetime import
timezone or reference datetime.timezone) so the tests produce timezone-aware
datetimes; apply the same change to all other occurrences noted (the other
datetime.utcnow() calls in the file).
- Line 16: In test_config_operations_basic remove the unused local variable
deleted_configs (or if it was intended to be used, implement its usage) to
resolve the Ruff F841 warning; locate the declaration of deleted_configs in the
test_config_operations_basic function and either delete that line or replace it
with the intended logic that uses deleted_configs.
- Line 4: The import of MagicMock is unused in this test file; remove MagicMock
from the import statement (the "from unittest.mock import MagicMock" entry) so
the file only imports symbols that are actually used, or replace it with the
correct mock import if you intended to use it in the tests that reference mocks.
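A minimal Hypothesis state machine along the lines suggested above might look like the following. The class, rule, and state names are illustrative, not taken from the codebase; only the RuleBasedStateMachine/`@rule`/`@invariant`/TestCase shape is the point:

```python
from hypothesis import settings
from hypothesis import strategies as st
from hypothesis.stateful import RuleBasedStateMachine, invariant, rule


class BotConfigMachine(RuleBasedStateMachine):
    """Illustrative sketch: store/delete configs keyed by page id."""

    def __init__(self):
        super().__init__()
        self.configs: dict[str, str] = {}

    @rule(page_id=st.text(min_size=1, max_size=8), prompt=st.text(max_size=20))
    def add_config(self, page_id, prompt):
        self.configs[page_id] = prompt

    @rule(page_id=st.text(min_size=1, max_size=8))
    def delete_config(self, page_id):
        # Deleting a missing key is a no-op, mirroring idempotent deletes.
        self.configs.pop(page_id, None)

    @invariant()
    def keys_are_non_empty(self):
        assert all(k for k in self.configs)


# Expose to pytest/unittest: the generated TestCase runs the machine as a test.
TestBotConfigStateful = BotConfigMachine.TestCase
TestBotConfigStateful.settings = settings(max_examples=10, stateful_step_count=10)
```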

In `@tests/unit/test_config.py`:
- Around line 103-112: The test expects get_settings to be cacheable but the
implementation lacks caching; update the implementation in src/config.py by
importing functools and decorating the get_settings function with
`@functools.lru_cache()` so get_settings.cache_clear() and cached instance
behavior work as the test expects; ensure the function signature remains the
same so callers/tests still match.
- Around line 3-16: Replace the broad Exception assertion with Pydantic's
ValidationError: add an import for ValidationError from pydantic at the top and
change pytest.raises(Exception) to pytest.raises(ValidationError) in the test
methods (e.g., test_settings_required_fields and the other occurrence around
lines 84-91) so the tests assert the specific pydantic.ValidationError raised by
the Settings model validation.
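The caching change described above is a one-liner on the implementation side. A sketch of the pattern, using a stand-in Settings class for illustration (the real one is a pydantic model in src/config.py):

```python
from functools import lru_cache


class Settings:
    """Stand-in for the real pydantic Settings; illustration only."""

    def __init__(self):
        self.sentry_dsn = "placeholder"


@lru_cache
def get_settings() -> Settings:
    # Cached: repeated calls return the same instance until cache_clear().
    return Settings()


first = get_settings()
assert get_settings() is first          # cached instance is reused

get_settings.cache_clear()
assert get_settings() is not first      # fresh instance after clearing
```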

In `@tests/unit/test_copilot_service.py`:
- Around line 101-124: The test test_chat_with_available_copilot currently
checks request payload by reading raw bytes and searching for byte substrings;
instead parse the request JSON and assert on the parsed dict for robustness.
Locate the respx.calls.last.request in the test_chat_with_available_copilot and
replace the raw byte checks (request.read() and b"system_prompt"/b"messages"
assertions) with code that decodes/parses the request body (e.g., request.json()
or json.loads(request.content.decode())) and then assert "system_prompt" and
"messages" are keys in the resulting dict to verify the request format sent by
CopilotService.chat.
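Parsing the captured body instead of scanning raw bytes might look like this; the literal byte payload stands in for what respx's `request.read()` would return in the real test:

```python
import json

# Stand-in for respx.calls.last.request.read() in the real test.
raw_body = b'{"system_prompt": "You are helpful.", "messages": []}'

payload = json.loads(raw_body.decode())

# Assert on parsed keys rather than byte substrings: immune to key ordering,
# whitespace, and unicode escaping in the serialized body.
assert "system_prompt" in payload
assert "messages" in payload
```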

In `@tests/unit/test_facebook_service.py`:
- Around line 131-160: The test_send_message_properties test is flaky because
the `@respx.mock` decorator leaves respx.calls accumulating across Hypothesis
examples; replace the decorator usage by opening a respx.mock context manager
inside the test (e.g., with respx.mock(assert_all_called=False) as mock:) then
register the POST mock on mock (mock.post(...).mock(...)), call send_message,
and assert against mock.calls or mock.calls.last to ensure only the current
example's calls are inspected; alternatively, if you prefer to keep the
decorator, limit Hypothesis to one example by adding `@settings(max_examples=1)`
to the test to avoid accumulation.
- Around line 1-9: The test file imports json repeatedly inside multiple test
functions; move a single "import json" to the top-level imports alongside
pytest/hypothesis/httpx/respx and keep the existing "from
src.services.facebook_service import send_message" import as-is, then remove the
in-function "import json" statements (the occurrences inside the test methods)
so all tests use the single top-level json import.

In `@tests/unit/test_hypothesis.py`:
- Around line 227-243: The test_whitespace_normalization_property function
contains a redundant local import "import re" that shadows the module-level re
import; remove that inner import so the function uses the top-level re module
(used by the call re.sub in test_whitespace_normalization_property) and keep the
rest of the normalization/assertions unchanged.
- Around line 149-163: The test test_hash_collision_resistance contains a
tautological assertion (`assert hash1 != hash2 or content1 == content2`) so it
never actually tests the property; fix it by importing and using
hypothesis.assume to require unequal inputs (e.g., add "from hypothesis import
assume" and call assume(content1 != content2) at the start of the test) and then
replace the assertion with a direct check "assert hash1 != hash2" to enforce
that different content yields different SHA256 digests.
- Around line 84-103: The test_message_non_empty_property contains redundant
assume() and will fail for whitespace-only inputs; update the generator for
test_message_non_empty_property (or filter at the given `@given`) to exclude
whitespace-only strings (e.g., use a .filter(lambda s: s.strip()) on the
st.text(...) generator) and remove the assume(len(message) > 0) line, so the
assertion assert len(message.strip()) > 0 only runs for strings that are
non-whitespace; similarly, simplify test_message_whitespace_handling by
explicitly checking stripped == "" vs non-empty cases if needed.

In `@tests/unit/test_models.py`:
- Around line 21-37: The url_strategy function is duplicated across test
modules; fix it by extracting the url_strategy function into a shared test
fixture module (e.g., conftest) and import it into both test modules instead of
redefining it, ensure the shared function keeps the same implementation and
imports hypothesis.strategies as st so existing tests that reference
url_strategy continue to work.
- Line 257: Replace the naive timestamp creation that uses datetime.utcnow() for
the variable now with a timezone-aware UTC datetime; locate the usage of
datetime.utcnow() in the test (the now assignment in tests/unit/test_models.py)
and change it to use datetime.now(datetime.UTC) (or datetime.now(timezone.utc)
if timezone imported) so now is timezone-aware.

In `@tests/unit/test_reference_doc.py`:
- Around line 47-70: Replace the early return in test_content_hash_uniqueness
with hypothesis.assume(content1 != content2) to express the precondition (use
assume from hypothesis); additionally, change both tests to call the real
function build_reference_doc (or the module function that produces the content
hash) and compare its returned/derived hash values instead of directly invoking
hashlib.sha256 so the tests exercise the function's encoding/normalization
behavior (ensure you pass content strings through build_reference_doc or call
whatever function computes the hash in that module and assert
equality/inequality accordingly).

In `@tests/unit/test_repository.py`:
- Line 91: Remove the unused local variable now in the test (the assignment now
= datetime.utcnow()); locate the test function in tests/unit/test_repository.py
where now is assigned and simply delete that line (and if datetime is only
imported for this now assignment, also remove the unused import to avoid linter
warnings).
- Line 152: Replace the deprecated call now = datetime.utcnow() with a
timezone-aware timestamp: import timezone from datetime (or use
datetime.timezone) and set now = datetime.now(timezone.utc); update any similar
occurrences (e.g., the analogous spot referenced in test_models.py) so tests use
timezone-aware datetimes consistently.

In `@tests/unit/test_scraper.py`:
- Around line 207-221: The test test_scrape_website_follows_redirects currently
mocks only the final URL, so it doesn't exercise redirect handling; modify the
test to mock an initial URL (e.g., "https://example.com/redirect") returning a
3xx response with a Location header pointing to the final URL, and also mock the
final destination ("https://example.com/final") returning 200 with the HTML
body, then call scrape_website with the redirect URL and assert the final
content is returned; reference the test function
test_scrape_website_follows_redirects and the scrape_website function when
locating the code to change.
- Around line 142-164: Replace the loop+reset pattern in
test_scrape_website_chunking_properties with a parametrized pytest case: remove
the for loop and respx_lib.reset(), add `@pytest.mark.parametrize` over the
word_count values, use the respx fixture/mocked client (respx_mock or respx_lib
mock fixture) to register the single GET response per test, and call
scrape_website("https://example.com") directly; this keeps isolation per
parameter and avoids manual respx_lib.reset() — update references in the test to
use the function name test_scrape_website_chunking_properties and the
scrape_website call accordingly.
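For the redirect test above, the mechanics being exercised are simple: the client resolves each Location header against the URL that produced the 3xx response. A stdlib sketch of that resolution step (URLs are illustrative, matching the ones suggested in the comment):

```python
from urllib.parse import urljoin

# "Follow redirects" mechanically: resolve the Location header against the
# URL that returned the 3xx response. Location may be relative per RFC 9110.
start_url = "https://example.com/redirect"
location_header = "/final"

final_url = urljoin(start_url, location_header)
assert final_url == "https://example.com/final"
```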

In `@tests/unit/test_setup_cli.py`:
- Around line 19-26: The setup_method currently defines an unused parameter
`method` and calls `asyncio.set_event_loop(None)`, which can conflict with
pytest-asyncio; either remove the entire `setup_method` if no setup is required,
or simplify it by dropping the unused `method` parameter and replacing manual
loop clearing with pytest-asyncio's built-in fixtures (i.e., rely on the
event_loop fixture) instead of calling `asyncio.set_event_loop(None)` in the
`setup_method` function.
- Around line 1-9: Remove the unused top-level import of warnings and keep the
local import used inside teardown_method; specifically, delete the module-level
"import warnings" at the top of the file so only the warnings import inside the
teardown_method helper remains (locate teardown_method to confirm the local
import is the one used).
- Around line 27-109: The teardown_method is overly complex and should be
simplified: remove the large mock-inspection block and the manual event-loop
cancellation/closing, eliminate time.sleep() and repeated gc.collect() passes,
and avoid broad except Exception: pass blocks; instead implement a minimal
teardown_method that only performs any targeted, explicit cleanup needed (e.g.,
call asyncio.set_event_loop(None) inside a narrow try/except or clear specific
test attributes), leaving event loop management to pytest-asyncio and mock
cleanup to `@patch`; locate the current teardown_method in
tests/unit/test_setup_cli.py and replace its body accordingly, referencing
teardown_method and any uses of
asyncio.get_running_loop/get_event_loop/loop.shutdown_asyncgens to remove.
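A minimal teardown along the lines suggested above might be as small as this sketch (the class body is a placeholder shell; the point is that loop and mock lifecycles belong to pytest-asyncio and `@patch`):

```python
import asyncio


class TestSetupCLI:
    """Illustrative shell: only the teardown shape matters here."""

    def teardown_method(self):
        # Targeted cleanup only: detach any loop a test may have installed.
        # No sleeps, no gc passes, no mock inspection -- pytest-asyncio owns
        # the loop lifecycle and @patch owns mock cleanup.
        try:
            asyncio.set_event_loop(None)
        except RuntimeError:
            pass


# Safe to call even when no loop was ever set.
TestSetupCLI().teardown_method()
```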

Comment on lines +81 to +111
```
tests/
├── __init__.py
├── conftest.py # Shared fixtures and pytest configuration
├── unit/ # Fast, isolated unit tests
│ ├── __init__.py
│ ├── test_config.py
│ ├── test_models.py
│ ├── test_scraper.py
│ ├── test_copilot_service.py
│ ├── test_agent_service.py
│ ├── test_facebook_service.py
│ ├── test_reference_doc.py
│ ├── test_db_client.py
│ ├── test_repository.py
│ └── test_hypothesis.py # Property-based tests with Hypothesis
├── integration/ # Service integration tests
│ ├── __init__.py
│ ├── test_agent_integration.py
│ ├── test_scraper_copilot.py
│ └── test_repository_db.py
├── e2e/ # End-to-end API tests
│ ├── __init__.py
│ ├── test_webhook_verification.py
│ ├── test_webhook_message_flow.py
│ └── test_health_endpoint.py
└── stateful/ # Hypothesis stateful tests
├── __init__.py
├── test_agent_conversation.py
└── test_bot_configuration.py
```

🧹 Nitpick | 🔵 Trivial

Add language specifier to fenced code block.

The code block showing the test directory structure lacks a language specifier. While this is a text/tree representation, adding a specifier improves rendering consistency.

📝 Suggested fix
-```
+```text
 tests/
 ├── __init__.py
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

81-81: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In @.cursor/plans/comprehensive_test_implementation_plan_483dd72f.plan.md around
lines 81 - 111, The fenced code block that displays the tests/ tree (starts with
``` and the content beginning "tests/ ├── __init__.py") lacks a language
specifier; update the opening fence to include a language (e.g., change ``` to
```text) so the block becomes ```text and ensure the rest of the tree content
remains unchanged.

Comment on lines +380 to +381
11. Configure pytest and coverage reporting
12. Verify all tests pass and coverage meets goals

🧹 Nitpick | 🔵 Trivial

Add trailing newline at end of file.

Files should end with a single newline character for POSIX compliance and better compatibility with various tools.

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

381-381: Files should end with a single newline character

(MD047, single-trailing-newline)

🤖 Prompt for AI Agents
In @.cursor/plans/comprehensive_test_implementation_plan_483dd72f.plan.md around
lines 380 - 381, This file is missing a trailing newline at EOF; open the file
containing the final list item "12. Verify all tests pass and coverage meets
goals" and add a single newline character after the last line so the file ends
with one trailing newline (POSIX-compliant).

Comment on lines +26 to +29
# Sentry Configuration
# Get your DSN from https://sentry.io/settings/[org]/projects/[project]/keys/
SENTRY_DSN=https://fe5546772383e3c8af3fd8d64b85a0d0@o4510326272950272.ingest.us.sentry.io/4510786365030400
SENTRY_TRACES_SAMPLE_RATE=1.0

⚠️ Potential issue | 🟠 Major

Replace the real Sentry DSN with a placeholder.

Line 28 includes a real-looking DSN; example files should not embed live DSNs to avoid abuse or spam. Use a placeholder instead.

🔐 Suggested edit
-SENTRY_DSN=https://fe5546772383e3c8af3fd8d64b85a0d0@o4510326272950272.ingest.us.sentry.io/4510786365030400
+SENTRY_DSN=https://examplePublicKey@o0.ingest.sentry.io/0
🤖 Prompt for AI Agents
In @.env.example around lines 26 - 29, The .env example currently embeds a
real-looking Sentry DSN; update the SENTRY_DSN entry (symbol: SENTRY_DSN) to a
non-sensitive placeholder (e.g., "your_sentry_dsn_here" or "<SENTRY_DSN>")
instead of the live value, leaving SENTRY_TRACES_SAMPLE_RATE as-is; ensure only
the value for SENTRY_DSN in the .env.example is replaced so the key name remains
unchanged.

Comment on lines +12 to +23
- name: Checkout code
uses: actions/checkout@v4

- name: Install uv
uses: astral-sh/setup-uv@v4
with:
version: "latest"

- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.12"

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

GitHub Actions security best practices pin commit SHA reproducibility

💡 Result:

Why pin to a full commit SHA (security + reproducibility)

  • Tags (e.g., @v4, @main) can move, so an attacker (or a maintainer) could change what code you run without changing your workflow file. GitHub explicitly recommends pinning actions to a full-length commit SHA as the way to get an immutable reference. (docs.github.com)
  • Pinning to a SHA also improves build reproducibility because the workflow will execute the exact same action code on every run until you intentionally update it. (docs.github.com)

Example:

- uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7

(Keeping the semver in a comment is a common practice for readability and maintenance.) (wellarchitected.github.com)


Best-practice workflow for SHA pinning (without falling behind)

  • Pin to SHAs that correspond to a release tag (not an arbitrary commit), so updates are intentional and reviewable. (wellarchitected.github.com)
  • Turn on Dependabot version updates for GitHub Actions so you get PRs that bump the pinned reference when new versions are available. (docs.github.com)
  • If you use the “# vX.Y.Z” comment next to the SHA, Dependabot can update that comment too, keeping things readable. (github.blog)

Enforce it (so it doesn’t regress)

  • GitHub admins can enforce SHA pinning via “allowed actions and reusable workflows” policy. (As of Aug 15, 2025, GitHub also added explicit policy support to enforce SHA pinning there.) (github.blog)

Don’t forget the other reproducibility footgun: runner images

  • runs-on: ubuntu-latest is a moving target (it can change OS versions during migrations). For stability/reproducibility, use an explicit label like ubuntu-24.04 or ubuntu-22.04. (github.com)

Complementary hardening (when you’re doing SHA pinning anyway)

  • Treat third-party actions as high-trust code: audit what they do, and apply least privilege (especially for GITHUB_TOKEN permissions), because a compromised action can access secrets and write to your repo depending on permissions. (docs.github.com)



🌐 Web query:

actions/checkout@v4 latest commit SHA 2024 2025

💡 Result:

  • actions/checkout@v4 (moving tag) current target (latest as of Nov 13, 2025): 34e114876b0b11c390a56381ad16ebd13914f8d5 (github.com)
  • Latest v4.x release in 2024 (v4.2.2, Oct 23, 2024): 11bd71901bbe5b1630ceea73d27597364c9af683 (github.com)
  • Latest v4.x release in 2025 (v4.3.1, Nov 13, 2025): 34e114876b0b11c390a56381ad16ebd13914f8d5 (github.com)



🌐 Web query:

actions/setup-python@v5 latest commit SHA 2024 2025

💡 Result:

  • Latest actions/setup-python@v5 in 2024: v5.3.0 (Oct 24, 2024) → 0b93645e9fea7318ecaed2b359559ac225c90a2b [2][3]
  • Latest actions/setup-python@v5 in 2025: v5.6.0 (Apr 24, 2025) → a26af69be951a213d495a4c3e4e4022e16d87065 [1][2]

As of Jan 28, 2026, @v5 still points to v5.6.0 (same SHA) [1][3].

Sources: [1] [2] [3]


🌐 Web query:

astral-sh/setup-uv@v4 latest commit SHA 2024 2025

💡 Result:

astral-sh/setup-uv@v4 currently points to commit 38f3f104447c67c051c4a08e39b64a148898af3a (tagged Nov 30, 2024). [1][2]

Sources:
[1] https://github.com/astral-sh/setup-uv/tags?after=v5.2.1
[2] astral-sh/setup-uv@38f3f10


🌐 Web query:

uv package manager latest stable version January 2026

💡 Result:

As of January 28, 2026, the latest stable release of the uv package manager is v0.9.27 (released January 26, 2026). [1][2]


🏁 Script executed:

cat -n .github/workflows/ci.yml | sed -n '12,23p'

Repository: Abstract-Data/go-crea-fb-msg-fast-api



Pin GitHub Actions and uv to commit SHAs and specific versions for supply-chain security and reproducibility.

Floating tags (@v4, @v5, and version: "latest") can change without warning, creating security and build-reproducibility risks. GitHub's official security hardening guidance recommends pinning all actions to full-length commit SHAs. Update as follows:

- name: Checkout code
  uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1

- name: Install uv
  uses: astral-sh/setup-uv@38f3f104447c67c051c4a08e39b64a148898af3a # v4 (from Nov 30, 2024)
  with:
    version: "0.9.27"

- name: Set up Python
  uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
  with:
    python-version: "3.12"

Additionally, consider enabling Dependabot to manage version updates, and use explicit runner labels (e.g., runs-on: ubuntu-24.04) instead of -latest suffixes.

🤖 Prompt for AI Agents
In @.github/workflows/ci.yml around lines 12 - 23, Replace floating action tags
and "latest" uv version with pinned commit SHAs and explicit version numbers:
change uses: actions/checkout@v4 to the full commit SHA for the desired v4
release (e.g., actions/checkout@<commit-sha>), change uses:
astral-sh/setup-uv@v4 to the specific commit SHA for that release and set with:
version to the exact uv release (e.g., "0.9.27"), and change uses:
actions/setup-python@v5 to the full commit SHA for the chosen v5 release and
keep python-version: "3.12"; also consider replacing any runner "latest" labels
with explicit runner labels (e.g., ubuntu-24.04) and enable Dependabot for
future updates.

Comment on lines +31 to +44
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
slug: Abstract-Data/go-crea-fb-msg-fast-api
files: ./coverage.xml
fail_ci_if_error: false

- name: Upload test results to Codecov
if: ${{ !cancelled() }}
uses: codecov/test-results-action@v1
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./junit.xml

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

codecov/test-results-action@v1 fail_ci_if_error parameter documentation

💡 Result:

In codecov/test-results-action@v1, fail_ci_if_error is an optional boolean input that makes the action exit with a non‑zero status if the Codecov upload step errors, causing the GitHub Actions job/workflow to fail (default is false). This behavior is documented as: “On error, exit with non-zero code.” for Codecov’s GitHub Actions uploader inputs. [1]

Example:

- uses: codecov/test-results-action@v1
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    fail_ci_if_error: true

This input is used in real codecov/test-results-action@v1 workflows. [2]

Sources:
[1] codecov/codecov-action README input table (“fail_ci_if_error”). (github.com)
[2] Example workflow using codecov/test-results-action@v1 with fail_ci_if_error. (github.com)



🌐 Web query:

codecov/test-results-action@v1 CODECOV_TOKEN missing behavior forked PR

💡 Result:

On forked pull requests, secrets.CODECOV_TOKEN is typically not available, so codecov/test-results-action@v1 will run with an empty/missing token. In that case:

  • Private repos: the upload will fail (uploads require a token).
  • Public repos: the upload may still work tokenless only if Codecov considers it eligible (e.g., commits on an "unprotected" branch name like forkname:branch, or if the org has disabled token enforcement for public uploads).
  • Even when tokenless is allowed, it can fail intermittently due to tokenless/GitHub rate limits, producing errors telling you to "upload using a token".
  • Many Codecov upload steps do not fail the CI by default (so you can get a green workflow but no uploaded results); you need to check the step logs for upload errors.

Practical options:

  1. Skip the upload on forked PRs (common) and only upload on push to branches in the main repo.
  2. For OSS where you want fork PR uploads: ensure Codecov is configured to allow tokenless fork uploads (public repo + unprotected branch handling / token setting).
  3. If you must upload from PRs reliably, use a token where safe/appropriate (note: don't expose it to untrusted fork code).

🌐 Web query:

codecov/codecov-action@v5 documentation parameters

💡 Result:

codecov/codecov-action@v5 supports these with: inputs (all optional) [1]:

  • base_sha – base SHA for pr-base-picking
  • binary – path to pre-downloaded CLI (bypasses integrity checking)
  • codecov_yml_path – path to codecov.yml (currently only for Automated Test Selection)
  • commit_parent – parent commit SHA (40 chars)
  • directory – directory to search for coverage files (default: current)
  • env_vars – environment variables to pass through (comma-separated)
  • fail_ci_if_error – fail the CI if upload fails
  • files – coverage file(s) list (comma-separated; supports exclusions like !./cache)
  • flags – flag(s) to group uploads (comma-separated)
  • force – only used for empty-upload
  • git_service – override git service (e.g. github_enterprise)
  • gcov_args – extra args for gcov
  • gcov_executable – gcov executable (default: gcov)
  • gcov_ignore – paths to ignore during gcov gathering
  • gcov_include – paths to include during gcov gathering
  • handle_no_reports_found – don’t error if no reports are found
  • job_code – job identifier used by Codecov
  • name – upload name shown in Codecov UI
  • network_filter – include only paths starting with this filter in “network” section
  • network_prefix – prefix for “network” paths (e.g. test_results, coverage)
  • root_dir – root dir for network paths (default: current)
  • run_command – CLI command: upload-coverage (default), empty-upload, pr-base-picking, send-notifications
  • skip_validation – skip CLI integrity checking (not recommended)
  • slug – owner/repo slug (required in some enterprise/org-token setups)
  • swift_project – Swift project name (optimization)
  • token – Codecov upload token
  • url – Codecov instance URL (Dedicated Enterprise Cloud)
  • use_legacy_upload_endpoint – use legacy upload endpoint
  • use_oidc – authenticate via OIDC (ignores token)
  • use_pypi – use CLI from PyPI (bypasses integrity checking)
  • verbose – verbose logging
  • version – Codecov CLI version (default: latest, must start with v, e.g. v10.0.1)
  • working-directory – working dir to run the uploader script

Source: Codecov Action README “Arguments” table for v5 [1].


Gate Codecov uploads on token presence to prevent failed upload attempts on forked PRs.

On forked pull requests, secrets.CODECOV_TOKEN is unavailable, causing Codecov uploads to fail and clutter logs. Gate the test-results-action step on token presence and set fail_ci_if_error: false (supported by both actions):

🔧 Suggested gating
-      - name: Upload test results to Codecov
-        if: ${{ !cancelled() }}
+      - name: Upload test results to Codecov
+        if: ${{ !cancelled() && secrets.CODECOV_TOKEN != '' }}
         uses: codecov/test-results-action@v1
         with:
           token: ${{ secrets.CODECOV_TOKEN }}
           files: ./junit.xml
+          fail_ci_if_error: false
🤖 Prompt for AI Agents
In @.github/workflows/ci.yml around lines 31 - 44, Gate both Codecov steps on
the presence of the CODECOV_TOKEN secret and ensure uploads don't fail the CI:
add an if condition checking the token (e.g., for "Upload coverage reports to
Codecov" and "Upload test results to Codecov" use if: ${{ secrets.CODECOV_TOKEN
!= '' }} and for the test-results step combine with existing cancellation guard
like if: ${{ secrets.CODECOV_TOKEN != '' && !cancelled() }}), and add
fail_ci_if_error: false to the "Upload test results to Codecov" step to match
the coverage step.

is_active: bool
):
"""Property: BotConfiguration should maintain invariants."""
now = datetime.utcnow()

🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

🏁 Script executed:

find . -name "test_models.py" -type f | head -20



🏁 Script executed:

# Search for the file and examine lines 257 and 277
fd test_models.py -x cat -n {} | sed -n '250,280p'



🌐 Web query:

Python 3.12 datetime.utcnow() deprecation datetime.UTC

💡 Result:

In Python 3.12, datetime.datetime.utcnow() (and utcfromtimestamp()) is deprecated because it returns a naive datetime (no tzinfo). The recommended replacement is to create a timezone-aware UTC datetime by passing the UTC tzinfo: datetime.now(tz=datetime.UTC) (or fromtimestamp(..., tz=datetime.UTC)). [1]

datetime.UTC is a convenience constant (an alias for datetime.timezone.utc) that was added in Python 3.11. [2]

What to use instead

from datetime import datetime, UTC

dt = datetime.now(tz=UTC)          # replaces datetime.utcnow()
dt2 = datetime.fromtimestamp(ts, tz=UTC)  # replaces datetime.utcfromtimestamp(ts)

If you need Python < 3.11 compatibility

from datetime import datetime, timezone

dt = datetime.now(tz=timezone.utc)

Sources: Python 3.12 “What’s New” (datetime deprecations) [1]; datetime module docs (datetime.UTC alias + deprecation note) [2].


🏁 Script executed:

# Check imports at the top of test_models.py
head -30 ./tests/unit/test_models.py



🏁 Script executed:

# Check Python version requirements
fd -e "setup.py" -e "pyproject.toml" -e ".python-version" -e "requirements.txt" | head -5 | xargs cat

Repository: Abstract-Data/go-crea-fb-msg-fast-api

Length of output: 63


🏁 Script executed:

# Search for project configuration files more thoroughly
find . -maxdepth 2 -type f \( -name "pyproject.toml" -o -name "setup.py" -o -name "setup.cfg" -o -name "tox.ini" -o -name ".python-version" -o -name "Dockerfile" \) 2>/dev/null

Repository: Abstract-Data/go-crea-fb-msg-fast-api

Length of output: 115


🏁 Script executed:

# Also check for requirements files
find . -maxdepth 2 -type f -name "*requirements*.txt" 2>/dev/null

Repository: Abstract-Data/go-crea-fb-msg-fast-api

Length of output: 63


🏁 Script executed:

# Check Python version requirement
cat ./.python-version

Repository: Abstract-Data/go-crea-fb-msg-fast-api

Length of output: 87


🏁 Script executed:

# Check pyproject.toml for Python version and dependencies
cat ./pyproject.toml

Repository: Abstract-Data/go-crea-fb-msg-fast-api

Length of output: 1829


Use timezone-aware datetime.now(datetime.UTC) instead of deprecated datetime.utcnow().

datetime.datetime.utcnow() is deprecated in Python 3.12+ and returns a naive datetime without timezone information. Use datetime.now(datetime.UTC) to create timezone-aware UTC timestamps.

♻️ Suggested fix
-from datetime import datetime
+from datetime import datetime, UTC

# Lines 257 and 277
-        now = datetime.utcnow()
+        now = datetime.now(UTC)
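The difference is easy to verify with a quick stdlib check (a minimal sketch; `timezone.utc` is used here instead of the `UTC` alias so it also runs on Python < 3.11):

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # deprecated since 3.12; tzinfo is None
aware = datetime.now(timezone.utc)  # recommended replacement; tzinfo is set

print(naive.tzinfo)  # None
print(aware.tzinfo)  # UTC
```

Note that comparing a naive and an aware datetime raises TypeError, which is another reason to use timezone-aware timestamps consistently across tests.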
🧰 Tools
🪛 Ruff (0.14.14)

257-257: datetime.datetime.utcnow() used

(DTZ003)

🤖 Prompt for AI Agents
In `@tests/unit/test_models.py` at line 257, Replace the naive timestamp creation
that uses datetime.utcnow() for the variable now with a timezone-aware UTC
datetime; locate the usage of datetime.utcnow() in the test (the now assignment
in tests/unit/test_models.py) and change it to use datetime.now(datetime.UTC)
(or datetime.now(timezone.utc) if timezone imported) so now is timezone-aware.

def test_create_bot_configuration_valid_inputs(self, mock_get_client, mock_link):
    """Test create_bot_configuration() with valid inputs."""
    mock_client = MagicMock()
    now = datetime.utcnow()

🧹 Nitpick | 🔵 Trivial

Remove unused variable now.

The variable now is assigned but never used in this test method.

🧹 Suggested fix
     def test_create_bot_configuration_valid_inputs(self, mock_get_client, mock_link):
         """Test create_bot_configuration() with valid inputs."""
         mock_client = MagicMock()
-        now = datetime.utcnow()
         
         # The mock will return data based on what's inserted
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
now = datetime.utcnow()
def test_create_bot_configuration_valid_inputs(self, mock_get_client, mock_link):
"""Test create_bot_configuration() with valid inputs."""
mock_client = MagicMock()
# The mock will return data based on what's inserted
🧰 Tools
🪛 Ruff (0.14.14)

91-91: Local variable now is assigned to but never used

Remove assignment to unused variable now

(F841)


91-91: datetime.datetime.utcnow() used

(DTZ003)

🤖 Prompt for AI Agents
In `@tests/unit/test_repository.py` at line 91, Remove the unused local variable
now in the test (the assignment now = datetime.utcnow()); locate the test
function in tests/unit/test_repository.py where now is assigned and simply
delete that line (and if datetime is only imported for this now assignment, also
remove the unused import to avoid linter warnings).

def test_get_bot_configuration_by_page_id_found(self, mock_get_client):
    """Test get_bot_configuration_by_page_id() when configuration is found."""
    mock_client = MagicMock()
    now = datetime.utcnow()

🧹 Nitpick | 🔵 Trivial

Use timezone-aware datetime.now(datetime.UTC) instead of deprecated datetime.utcnow().

Same issue as in test_models.py - datetime.utcnow() is deprecated.

♻️ Suggested fix
-from datetime import datetime
+from datetime import datetime, UTC

# Line 152
-        now = datetime.utcnow()
+        now = datetime.now(UTC)
🧰 Tools
🪛 Ruff (0.14.14)

152-152: datetime.datetime.utcnow() used

(DTZ003)

🤖 Prompt for AI Agents
In `@tests/unit/test_repository.py` at line 152, Replace the deprecated call now =
datetime.utcnow() with a timezone-aware timestamp: import timezone from datetime
(or use datetime.timezone) and set now = datetime.now(timezone.utc); update any
similar occurrences (e.g., the analogous spot referenced in test_models.py) so
tests use timezone-aware datetimes consistently.

Comment on lines +142 to +164
@pytest.mark.asyncio
@respx_lib.mock
async def test_scrape_website_chunking_properties(self):
    """Property: Chunking should always return list of non-empty strings."""
    # Test with various word counts
    for word_count in [100, 500, 1000, 2000]:
        words = ["word"] * word_count
        html_content = f"<html><body><p>{' '.join(words)}</p></body></html>"

        respx_lib.get("https://example.com").mock(return_value=httpx.Response(
            200,
            text=html_content
        ))

        chunks = await scrape_website("https://example.com")

        # Invariants
        assert isinstance(chunks, list)
        assert all(isinstance(chunk, str) for chunk in chunks)
        assert all(len(chunk) > 0 for chunk in chunks)  # No empty chunks
        assert all(len(chunk.split()) > 0 for chunk in chunks)  # All chunks have words

    respx_lib.reset()

🧹 Nitpick | 🔵 Trivial

Consider using parametrized tests instead of loops with mock reset.

Using loops with respx_lib.reset() inside async tests can lead to subtle issues. Parametrized tests provide better isolation and clearer test output per case.

♻️ Example for test_scrape_website_various_html_structures
@pytest.mark.asyncio
@pytest.mark.parametrize("html_structure", [
    "<div>Simple div</div>",
    "<p>Paragraph with <strong>bold</strong> text</p>",
    "<ul><li>Item 1</li><li>Item 2</li></ul>",
    "<table><tr><td>Cell</td></tr></table>",
])
async def test_scrape_website_various_html_structures(self, respx_mock, html_structure):
    """Test that scraping handles various HTML structures."""
    html_content = f"<html><body>{html_structure}</body></html>"
    
    respx_mock.get("https://example.com").mock(return_value=httpx.Response(
        200,
        text=html_content
    ))
    
    chunks = await scrape_website("https://example.com")
    
    assert isinstance(chunks, list)
    assert all(isinstance(chunk, str) for chunk in chunks)

Also applies to: 223-248

🤖 Prompt for AI Agents
In `@tests/unit/test_scraper.py` around lines 142 - 164, Replace the loop+reset
pattern in test_scrape_website_chunking_properties with a parametrized pytest
case: remove the for loop and respx_lib.reset(), add `@pytest.mark.parametrize`
over the word_count values, use the respx fixture/mocked client (respx_mock or
respx_lib mock fixture) to register the single GET response per test, and call
scrape_website("https://example.com") directly; this keeps isolation per
parameter and avoids manual respx_lib.reset() — update references in the test to
use the function name test_scrape_website_chunking_properties and the
scrape_website call accordingly.

Comment on lines +207 to +221
@pytest.mark.asyncio
@respx_lib.mock
async def test_scrape_website_follows_redirects(self):
    """Test that redirects are followed."""
    # Mock the final destination (httpx with follow_redirects handles this)
    respx_lib.get("https://example.com/final").mock(return_value=httpx.Response(
        200,
        text="<html><body><p>Final content</p></body></html>"
    ))

    chunks = await scrape_website("https://example.com/final")

    # Should get content from final URL
    combined_text = " ".join(chunks)
    assert "Final content" in combined_text

⚠️ Potential issue | 🟡 Minor

Test doesn't actually verify redirect following behavior.

This test mocks the final URL directly and fetches from it. To properly test redirect following, you should mock an initial URL that returns a 3xx redirect response, then verify the client follows it to the final destination.

💡 Suggested improvement
@pytest.mark.asyncio
@respx_lib.mock
async def test_scrape_website_follows_redirects(self):
    """Test that redirects are followed."""
    # Mock initial URL with redirect
    respx_lib.get("https://example.com/redirect").mock(
        return_value=httpx.Response(
            301,
            headers={"Location": "https://example.com/final"}
        )
    )
    # Mock final destination
    respx_lib.get("https://example.com/final").mock(
        return_value=httpx.Response(
            200,
            text="<html><body><p>Final content</p></body></html>"
        )
    )
    
    chunks = await scrape_website("https://example.com/redirect")
    
    combined_text = " ".join(chunks)
    assert "Final content" in combined_text
🤖 Prompt for AI Agents
In `@tests/unit/test_scraper.py` around lines 207 - 221, The test
test_scrape_website_follows_redirects currently mocks only the final URL, so it
doesn't exercise redirect handling; modify the test to mock an initial URL
(e.g., "https://example.com/redirect") returning a 3xx response with a Location
header pointing to the final URL, and also mock the final destination
("https://example.com/final") returning 200 with the HTML body, then call
scrape_website with the redirect URL and assert the final content is returned;
reference the test function test_scrape_website_follows_redirects and the
scrape_website function when locating the code to change.

@coderabbitai coderabbitai bot left a comment

Review continued from previous batch...

Comment on lines +75 to +103
@pytest.fixture
def mock_httpx_client(monkeypatch):
    """Mock httpx.AsyncClient for testing with proper cleanup simulation."""
    async def mock_get(*args, **kwargs):
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.text = "<html><body>Test content</body></html>"
        mock_response.raise_for_status = Mock()
        return mock_response

    async def mock_post(*args, **kwargs):
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.json.return_value = {"content": "Test response"}
        mock_response.raise_for_status = Mock()
        return mock_response

    async def mock_close():
        """Simulate closing the HTTP client to clean up resources."""
        return None

    mock_client = MagicMock()
    mock_client.get = AsyncMock(side_effect=mock_get)
    mock_client.post = AsyncMock(side_effect=mock_post)
    mock_client.close = AsyncMock(side_effect=mock_close)
    mock_client.__aenter__ = AsyncMock(return_value=mock_client)
    mock_client.__aexit__ = AsyncMock(return_value=None)  # This closes properly

    return mock_client

🧹 Nitpick | 🔵 Trivial

Remove unused monkeypatch parameter.

The monkeypatch parameter is declared but never used in this fixture. The fixture constructs and returns a mock client without patching anything.

♻️ Proposed fix
 @pytest.fixture
-def mock_httpx_client(monkeypatch):
+def mock_httpx_client():
     """Mock httpx.AsyncClient for testing with proper cleanup simulation."""
🧰 Tools
🪛 Ruff (0.14.14)

76-76: Unused function argument: monkeypatch

(ARG001)


78-78: Missing return type annotation for private function mock_get

(ANN202)


78-78: Missing type annotation for *args

(ANN002)


78-78: Unused function argument: args

(ARG001)


78-78: Missing type annotation for **kwargs

(ANN003)


78-78: Unused function argument: kwargs

(ARG001)


85-85: Missing return type annotation for private function mock_post

(ANN202)


85-85: Missing type annotation for *args

(ANN002)


85-85: Unused function argument: args

(ARG001)


85-85: Missing type annotation for **kwargs

(ANN003)


85-85: Unused function argument: kwargs

(ARG001)


92-92: Missing return type annotation for private function mock_close

Add return type annotation: None

(ANN202)

🤖 Prompt for AI Agents
In `@tests/conftest.py` around lines 75 - 103, The fixture mock_httpx_client
declares an unused monkeypatch parameter; remove monkeypatch from the fixture
signature so it becomes def mock_httpx_client(): and leave the body (mock_get,
mock_post, mock_close, mock_client setup, __aenter__/__aexit__) unchanged;
update any tests that explicitly request the fixture with the old signature only
if they relied on monkeypatch injection (they shouldn't), and ensure the fixture
name mock_httpx_client remains referenced where used.

Comment on lines +106 to +126
@pytest.fixture
def sample_bot_config():
    """Sample bot configuration dict for testing."""
    return {
        "id": "bot-123",
        "page_id": "page-123",
        "website_url": "https://example.com",
        "reference_doc_id": "doc-123",
        "tone": "professional",
        "facebook_page_access_token": "token-123",
        "facebook_verify_token": "verify-123",
        "created_at": datetime.utcnow().isoformat(),
        "updated_at": datetime.utcnow().isoformat(),
        "is_active": True
    }


@pytest.fixture
def sample_bot_configuration(sample_bot_config):
    """Sample BotConfiguration model instance."""
    return BotConfiguration(**sample_bot_config)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n src/models/config_models.py | head -40

Repository: Abstract-Data/go-crea-fb-msg-fast-api

Length of output: 1163


🏁 Script executed:

rg -A 15 "class BotConfiguration" src/models/config_models.py

Repository: Abstract-Data/go-crea-fb-msg-fast-api

Length of output: 371


sample_bot_config fixture contains fields not present in BotConfiguration model.

The BotConfiguration model (src/models/config_models.py, lines 26-35) does not include facebook_page_access_token or facebook_verify_token fields. When this dict is passed to BotConfiguration(**sample_bot_config) in the sample_bot_configuration fixture, Pydantic will raise a validation error for these unexpected fields.

Remove these two fields from the fixture:

Proposed fix
 @pytest.fixture
 def sample_bot_config():
     """Sample bot configuration dict for testing."""
     return {
         "id": "bot-123",
         "page_id": "page-123",
         "website_url": "https://example.com",
         "reference_doc_id": "doc-123",
         "tone": "professional",
-        "facebook_page_access_token": "token-123",
-        "facebook_verify_token": "verify-123",
         "created_at": datetime.utcnow().isoformat(),
         "updated_at": datetime.utcnow().isoformat(),
         "is_active": True
     }
🧰 Tools
🪛 Ruff (0.14.14)

117-117: datetime.datetime.utcnow() used

(DTZ003)


118-118: datetime.datetime.utcnow() used

(DTZ003)

🤖 Prompt for AI Agents
In `@tests/conftest.py` around lines 106 - 126, The sample_bot_config fixture
includes unexpected keys that cause Pydantic validation to fail when
instantiating BotConfiguration; remove the facebook_page_access_token and
facebook_verify_token entries from the dict returned by sample_bot_config so
that BotConfiguration(**sample_bot_config) in the sample_bot_configuration
fixture only receives the fields defined on the BotConfiguration model.

Comment on lines +117 to +118
"created_at": datetime.utcnow().isoformat(),
"updated_at": datetime.utcnow().isoformat(),

🧹 Nitpick | 🔵 Trivial

Use datetime.now(timezone.utc) instead of deprecated datetime.utcnow().

datetime.utcnow() is deprecated in Python 3.12+ and returns a naive datetime. Use datetime.now(timezone.utc) for explicit UTC timezone awareness.

♻️ Proposed fix
+from datetime import datetime, timezone

 @pytest.fixture
 def sample_bot_config():
     """Sample bot configuration dict for testing."""
     return {
         "id": "bot-123",
         "page_id": "page-123",
         "website_url": "https://example.com",
         "reference_doc_id": "doc-123",
         "tone": "professional",
-        "created_at": datetime.utcnow().isoformat(),
-        "updated_at": datetime.utcnow().isoformat(),
+        "created_at": datetime.now(timezone.utc).isoformat(),
+        "updated_at": datetime.now(timezone.utc).isoformat(),
         "is_active": True
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"created_at": datetime.utcnow().isoformat(),
"updated_at": datetime.utcnow().isoformat(),
from datetime import datetime, timezone
@pytest.fixture
def sample_bot_config():
"""Sample bot configuration dict for testing."""
return {
"id": "bot-123",
"page_id": "page-123",
"website_url": "https://example.com",
"reference_doc_id": "doc-123",
"tone": "professional",
"created_at": datetime.now(timezone.utc).isoformat(),
"updated_at": datetime.now(timezone.utc).isoformat(),
"is_active": True
}
🧰 Tools
🪛 Ruff (0.14.14)

117-117: datetime.datetime.utcnow() used

(DTZ003)


118-118: datetime.datetime.utcnow() used

(DTZ003)

🤖 Prompt for AI Agents
In `@tests/conftest.py` around lines 117 - 118, Replace uses of datetime.utcnow()
for the "created_at" and "updated_at" fields with timezone-aware datetimes: call
datetime.now(timezone.utc).isoformat() instead of datetime.utcnow().isoformat(),
and ensure timezone is imported from datetime (e.g., add timezone to the import
that provides datetime) so the timestamps are explicit UTC-aware.

Comment on lines +170 to +185
@pytest.fixture
def test_client():
    """FastAPI TestClient for E2E tests."""
    return TestClient(app)


@pytest.fixture
def mock_facebook_api(monkeypatch):
    """Mock Facebook Graph API responses."""
    async def mock_post(*args, **kwargs):
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.raise_for_status = Mock()
        return mock_response

    return mock_post

🧹 Nitpick | 🔵 Trivial

Remove unused monkeypatch parameter from mock_facebook_api.

Similar to mock_httpx_client, the monkeypatch parameter is unused.

♻️ Proposed fix
 @pytest.fixture
-def mock_facebook_api(monkeypatch):
+def mock_facebook_api():
     """Mock Facebook Graph API responses."""
🧰 Tools
🪛 Ruff (0.14.14)

177-177: Unused function argument: monkeypatch

(ARG001)


179-179: Missing return type annotation for private function mock_post

(ANN202)


179-179: Missing type annotation for *args

(ANN002)


179-179: Unused function argument: args

(ARG001)


179-179: Missing type annotation for **kwargs

(ANN003)


179-179: Unused function argument: kwargs

(ARG001)

🤖 Prompt for AI Agents
In `@tests/conftest.py` around lines 170 - 185, The fixture mock_facebook_api
currently declares an unused monkeypatch parameter; remove the unused parameter
from the function signature (change def mock_facebook_api(monkeypatch): to def
mock_facebook_api():) and ensure any tests or other fixtures that reference
mock_facebook_api are unaffected; mirror the same pattern used by
mock_httpx_client by updating the fixture declaration and leaving the internal
async mock_post implementation intact.

Comment on lines +58 to +95
@patch('src.main.get_supabase_client')
@patch('src.main.CopilotService')
@patch('src.main.get_settings')
def test_lifespan_startup(
    self,
    mock_get_settings,
    mock_copilot_service,
    mock_get_supabase
):
    """Test application lifespan startup."""
    from src.config import Settings

    # Mock settings
    mock_settings = Settings(
        facebook_page_access_token="test-token",
        facebook_verify_token="test-verify",
        supabase_url="https://test.supabase.co",
        supabase_service_key="test-key",
        copilot_cli_host="http://localhost:5909",
        copilot_enabled=True
    )
    mock_get_settings.return_value = mock_settings

    # Mock Supabase
    mock_supabase = MagicMock()
    mock_get_supabase.return_value = mock_supabase

    # Mock Copilot
    mock_copilot = MagicMock()
    mock_copilot.is_available = AsyncMock(return_value=True)
    mock_copilot_service.return_value = mock_copilot

    # Test lifespan context manager
    from src.main import lifespan

    # This would normally be called by FastAPI
    # We can test the structure, but full async context testing is complex
    assert lifespan is not None

🧹 Nitpick | 🔵 Trivial

Lifespan test doesn't exercise the context manager.

The test sets up mocks but only asserts lifespan is not None, which doesn't validate the startup behavior. Consider actually running the lifespan context manager to verify startup logic.

♻️ Suggested improvement to exercise lifespan
     @patch('src.main.get_supabase_client')
     @patch('src.main.CopilotService')
     @patch('src.main.get_settings')
     def test_lifespan_startup(
         self,
         mock_get_settings,
         mock_copilot_service,
         mock_get_supabase
     ):
         """Test application lifespan startup."""
         from src.config import Settings
         
         # Mock settings
         mock_settings = Settings(
             facebook_page_access_token="test-token",
             facebook_verify_token="test-verify",
             supabase_url="https://test.supabase.co",
             supabase_service_key="test-key",
             copilot_cli_host="http://localhost:5909",
             copilot_enabled=True
         )
         mock_get_settings.return_value = mock_settings
         
         # Mock Supabase
         mock_supabase = MagicMock()
         mock_get_supabase.return_value = mock_supabase
         
         # Mock Copilot
         mock_copilot = MagicMock()
         mock_copilot.is_available = AsyncMock(return_value=True)
         mock_copilot_service.return_value = mock_copilot
         
-        # Test lifespan context manager
-        from src.main import lifespan
-        
-        # This would normally be called by FastAPI
-        # We can test the structure, but full async context testing is complex
-        assert lifespan is not None
+        # Test lifespan with TestClient which invokes the context manager
+        from src.main import app
+        with TestClient(app) as client:
+            # Verify startup executed
+            mock_get_settings.assert_called()
+            mock_get_supabase.assert_called()
+            mock_copilot_service.assert_called_once()
🧰 Tools
🪛 Ruff (0.14.14)

72-72: Possible hardcoded password assigned to argument: "facebook_page_access_token"

(S106)


73-73: Possible hardcoded password assigned to argument: "facebook_verify_token"

(S106)

🤖 Prompt for AI Agents
In `@tests/e2e/test_main.py` around lines 58 - 95, The test currently only asserts
lifespan exists; instead run the actual ASGI lifespan context to exercise
startup: import lifespan from src.main and enter the async context (e.g., using
pytest-asyncio or anyio) so the startup code runs, then assert expected side
effects such as get_supabase_client (mock_get_supabase) being called,
CopilotService being instantiated (mock_copilot_service) and its is_available
awaited, and any settings-dependent behavior on the mocked Settings; finally
exit the context to trigger shutdown and assert cleanup calls if applicable.
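For context, an @asynccontextmanager-style lifespan can be driven directly with asyncio.run, without needing a TestClient. A minimal sketch with a stand-in lifespan (the dict-based app and the started/stopped keys are hypothetical placeholders, not from src.main):

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def lifespan(app):
    # Startup work runs before the yield
    app["started"] = True
    yield
    # Shutdown work runs after the yield
    app["stopped"] = True

async def exercise():
    app = {}
    async with lifespan(app):
        assert app["started"]        # startup ran
        assert "stopped" not in app  # shutdown has not run yet
    assert app["stopped"]            # shutdown ran on exit

asyncio.run(exercise())
print("lifespan startup and shutdown exercised")
```

Entering the context executes the startup branch and exiting it executes the shutdown branch, which is exactly what the suggested TestClient-based version triggers through the ASGI lifespan protocol.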

Comment on lines +131 to +160
@pytest.mark.asyncio
@given(
    recipient_id=st.text(min_size=1, max_size=100),
    text=st.text(min_size=1, max_size=2000)
)
@respx.mock
async def test_send_message_properties(
    self,
    recipient_id: str,
    text: str
):
    """Property: send_message() should handle various inputs."""
    respx.post("https://graph.facebook.com/v18.0/me/messages").mock(
        return_value=httpx.Response(200)
    )

    # Should not raise exception for valid inputs
    await send_message(
        page_access_token="test-token",
        recipient_id=recipient_id,
        text=text
    )

    # Verify request was made
    assert len(respx.calls) == 1
    request = respx.calls.last.request
    import json
    payload = json.loads(request.read())
    assert payload["recipient"]["id"] == recipient_id
    assert payload["message"]["text"] == text

⚠️ Potential issue | 🟠 Major

Hypothesis + respx.mock interaction may cause flaky tests.

Combining @given (Hypothesis) with @respx.mock can be problematic because:

  1. respx.mock creates a context that may not reset properly between Hypothesis examples
  2. The assertion assert len(respx.calls) == 1 at line 155 will fail on the second example since respx.calls accumulates

Consider using respx.mock(assert_all_called=False) as a context manager inside the test, or use settings(max_examples=1) for this specific test to limit to a single example.

🐛 Proposed fix using context manager
     @pytest.mark.asyncio
     @given(
         recipient_id=st.text(min_size=1, max_size=100),
         text=st.text(min_size=1, max_size=2000)
     )
-    @respx.mock
     async def test_send_message_properties(
         self,
         recipient_id: str,
         text: str
     ):
         """Property: send_message() should handle various inputs."""
-        respx.post("https://graph.facebook.com/v18.0/me/messages").mock(
-            return_value=httpx.Response(200)
-        )
-        
-        # Should not raise exception for valid inputs
-        await send_message(
-            page_access_token="test-token",
-            recipient_id=recipient_id,
-            text=text
-        )
-        
-        # Verify request was made
-        assert len(respx.calls) == 1
-        request = respx.calls.last.request
-        import json
-        payload = json.loads(request.read())
-        assert payload["recipient"]["id"] == recipient_id
-        assert payload["message"]["text"] == text
+        async with respx.mock:
+            respx.post("https://graph.facebook.com/v18.0/me/messages").mock(
+                return_value=httpx.Response(200)
+            )
+            
+            # Should not raise exception for valid inputs
+            await send_message(
+                page_access_token="test-token",
+                recipient_id=recipient_id,
+                text=text
+            )
+            
+            # Verify request was made
+            assert len(respx.calls) == 1
+            request = respx.calls.last.request
+            import json
+            payload = json.loads(request.read())
+            assert payload["recipient"]["id"] == recipient_id
+            assert payload["message"]["text"] == text
🧰 Tools
🪛 Ruff (0.14.14)

149-149: Possible hardcoded password assigned to argument: "page_access_token"

(S106)

🤖 Prompt for AI Agents
In `@tests/unit/test_facebook_service.py` around lines 131 - 160, The
test_send_message_properties test is flaky because the `@respx.mock` decorator
leaves respx.calls accumulating across Hypothesis examples; replace the
decorator usage by opening a respx.mock context manager inside the test (e.g.,
with respx.mock(assert_all_called=False) as mock:) then register the POST mock
on mock (mock.post(...).mock(...)), call send_message, and assert against
mock.calls or mock.calls.last to ensure only the current example's calls are
inspected; alternatively, if you prefer to keep the decorator, limit Hypothesis
to one example by adding `@settings`(max_examples=1) to the test to avoid
accumulation.

Comment on lines +47 to +70
@given(content=st.text(min_size=1, max_size=10000))
def test_content_hash_determinism(self, content: str):
    """Property: Content hash should be deterministic (same input = same hash)."""
    hash1 = hashlib.sha256(content.encode()).hexdigest()
    hash2 = hashlib.sha256(content.encode()).hexdigest()

    # Same content should produce same hash
    assert hash1 == hash2

@given(
    content1=st.text(min_size=1, max_size=10000),
    content2=st.text(min_size=1, max_size=10000)
)
def test_content_hash_uniqueness(self, content1: str, content2: str):
    """Property: Different content should produce different hashes."""
    # Skip if content is the same
    if content1 == content2:
        return

    hash1 = hashlib.sha256(content1.encode()).hexdigest()
    hash2 = hashlib.sha256(content2.encode()).hexdigest()

    # Different content should produce different hash
    assert hash1 != hash2

🧹 Nitpick | 🔵 Trivial

Use assume() and consider testing through the actual function.

  1. The return statement on line 64 should use hypothesis.assume(content1 != content2) for clearer intent.
  2. These tests verify hashlib.sha256 directly rather than build_reference_doc. While useful, they don't cover the function's encoding behavior.
♻️ Proposed fix for assume()
+from hypothesis import given, strategies as st, assume
-from hypothesis import given, strategies as st

 @given(
     content1=st.text(min_size=1, max_size=10000),
     content2=st.text(min_size=1, max_size=10000)
 )
 def test_content_hash_uniqueness(self, content1: str, content2: str):
     """Property: Different content should produce different hashes."""
-    # Skip if content is the same
-    if content1 == content2:
-        return
+    assume(content1 != content2)
     
     hash1 = hashlib.sha256(content1.encode()).hexdigest()
     hash2 = hashlib.sha256(content2.encode()).hexdigest()
     
     # Different content should produce different hash
     assert hash1 != hash2
🤖 Prompt for AI Agents
In `@tests/unit/test_reference_doc.py` around lines 47 - 70, Replace the early
return in test_content_hash_uniqueness with hypothesis.assume(content1 !=
content2) to express the precondition (use assume from hypothesis);
additionally, change both tests to call the real function build_reference_doc
(or the module function that produces the content hash) and compare its
returned/derived hash values instead of directly invoking hashlib.sha256 so the
tests exercise the function's encoding/normalization behavior (ensure you pass
content strings through build_reference_doc or call whatever function computes
the hash in that module and assert equality/inequality accordingly).

Comment on lines +1 to +9
"""Tests for setup CLI."""

import pytest
import asyncio
import gc
import time
import warnings
from unittest.mock import patch, MagicMock, AsyncMock
import typer

🧹 Nitpick | 🔵 Trivial

Remove unused imports.

warnings is imported at line 7 but also re-imported inside teardown_method at line 104. The top-level import is unused.

♻️ Proposed fix
 import pytest
 import asyncio
 import gc
 import time
-import warnings
 from unittest.mock import patch, MagicMock, AsyncMock
 import typer
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

 """Tests for setup CLI."""
 
 import pytest
 import asyncio
 import gc
 import time
-import warnings
 from unittest.mock import patch, MagicMock, AsyncMock
 import typer
🤖 Prompt for AI Agents
In `@tests/unit/test_setup_cli.py` around lines 1-9: remove the unused
top-level import of warnings and keep the local import used inside
teardown_method. Specifically, delete the module-level `import warnings` at
the top of the file so only the warnings import inside teardown_method
remains.

Comment on lines +19 to +26
def setup_method(self, method):
"""Ensure clean state before each test."""
# Clear any existing event loop
try:
asyncio.set_event_loop(None)
except Exception:
pass


🧹 Nitpick | 🔵 Trivial

Simplify setup_method or remove if unnecessary.

The method parameter is unused. If clearing the event loop is required, consider using pytest-asyncio's built-in event loop management instead of manual manipulation. Setting asyncio.set_event_loop(None) can cause issues with some pytest-asyncio configurations.

♻️ Proposed simplification
-    def setup_method(self, method):
-        """Ensure clean state before each test."""
-        # Clear any existing event loop
-        try:
-            asyncio.set_event_loop(None)
-        except Exception:
-            pass
+    def setup_method(self):
+        """Ensure clean state before each test."""
+        pass  # pytest-asyncio handles event loop management

Or remove the method entirely if no setup is needed.

🧰 Tools
🪛 Ruff (0.14.14)

19-19: Unused method argument: method

(ARG002)


24-25: try-except-pass detected, consider logging the exception

(S110)


24-24: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
In `@tests/unit/test_setup_cli.py` around lines 19-26: setup_method currently
declares an unused `method` parameter and calls
`asyncio.set_event_loop(None)`, which can conflict with pytest-asyncio.
Either remove `setup_method` entirely if no setup is required, or simplify it
by dropping the unused parameter and relying on pytest-asyncio's built-in
event loop management instead of clearing the loop manually.
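To illustrate why the manual loop clearing is unnecessary (a sketch with a hypothetical coroutine, not code from this PR): each async test already runs in a fresh event loop that is closed afterwards, whether driven by pytest-asyncio or by plain asyncio.run:

```python
import asyncio


async def run_setup() -> str:
    """Hypothetical coroutine standing in for the CLI work under test."""
    await asyncio.sleep(0)
    return "ok"


def test_run_setup() -> None:
    # asyncio.run() creates a fresh event loop, runs the coroutine, and
    # closes the loop on exit -- the same per-test isolation pytest-asyncio
    # provides, with no asyncio.set_event_loop(None) bookkeeping required.
    assert asyncio.run(run_setup()) == "ok"
```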

Comment on lines +27 to +109
def teardown_method(self, method):
"""Clean up event loops and async resources after each test to prevent resource warnings."""
# First: Close any mock HTTP clients that might have open sockets
try:
# Check for any mock HTTP clients from CopilotService or Supabase mocks
# These are created via @patch decorators, so they're in the test method's locals
# We'll clean them up by ensuring their async context managers are properly closed
for attr_name in dir(self):
if 'mock' in attr_name.lower():
try:
mock_obj = getattr(self, attr_name, None)
if mock_obj is not None and hasattr(mock_obj, 'reset_mock'):
# Reset the mock to clear any state
mock_obj.reset_mock()
# If it's an async context manager mock, ensure __aexit__ is called
if hasattr(mock_obj, '__aenter__') and hasattr(mock_obj, '__aexit__'):
try:
# Simulate proper async context manager cleanup
if hasattr(mock_obj.__aexit__, 'return_value'):
mock_obj.__aexit__.return_value = None
except Exception:
pass
except Exception:
pass
except Exception:
pass

# Enhanced cleanup with better error handling
try:
# Try to get running loop first
try:
loop = asyncio.get_running_loop()
# If we have a running loop, we can't close it from here
return
except RuntimeError:
loop = None

# If no running loop, try to get the event loop
if loop is None:
try:
loop = asyncio.get_event_loop()
except RuntimeError:
loop = None

# Clean up the loop if it exists and is not closed
if loop is not None and not loop.is_closed():
# Check if loop is running - if so, we can't close it
if loop.is_running():
return

try:
# Cancel all pending tasks
tasks = [t for t in asyncio.all_tasks(loop) if not t.done()]
for t in tasks:
t.cancel()
if tasks:
loop.run_until_complete(asyncio.gather(*tasks, return_exceptions=True))
# Shutdown async generators to close any open resources (sockets, etc.)
loop.run_until_complete(loop.shutdown_asyncgens())
loop.close()
except (RuntimeError, Exception):
# Ignore errors during cleanup
pass
except Exception:
# Ignore any errors during cleanup
pass
finally:
# Always clear the event loop reference
try:
asyncio.set_event_loop(None)
except Exception:
pass
# Add a small delay to allow async cleanup to complete before garbage collection
# This helps ensure that async generators and context managers have time to close
time.sleep(0.01) # 10ms delay to allow async cleanup
# Use gc.collect() defensively, but suppress ResourceWarnings during collection
# as they may come from mocked resources that don't need real cleanup
import warnings
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=ResourceWarning)
for _ in range(3):
gc.collect()
time.sleep(0.002) # Small delay between collection passes

🛠️ Refactor suggestion | 🟠 Major

Overly complex teardown logic — consider simplifying significantly.

This teardown method is ~80 lines of defensive cleanup code with multiple nested try-except blocks that silently swallow exceptions. Key concerns:

  1. Mock cleanup (lines 30-52): Unnecessary — @patch decorators automatically clean up mocks after each test.
  2. Event loop cleanup (lines 54-98): pytest-asyncio manages event loops; manual intervention can cause conflicts.
  3. time.sleep() calls (lines 101, 109): Add ~16ms latency per test, which accumulates.
  4. Bare except Exception: pass: Silences real errors that could help debug test failures.

pytest-asyncio and the @patch decorator handle resource cleanup automatically. If you're seeing ResourceWarnings, the root cause is likely elsewhere (e.g., unclosed httpx clients in the code under test).

♻️ Proposed simplification
-    def teardown_method(self, method):
-        """Clean up event loops and async resources after each test to prevent resource warnings."""
-        # First: Close any mock HTTP clients that might have open sockets
-        try:
-            # Check for any mock HTTP clients from CopilotService or Supabase mocks
-            ... # 80+ lines of cleanup
-        finally:
-            ...
+    def teardown_method(self):
+        """Clean up after each test."""
+        # pytest-asyncio and `@patch` handle most cleanup automatically.
+        # If ResourceWarnings persist, investigate the source (e.g., unclosed clients).
+        gc.collect()
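If the goal is to find the leak rather than mask it, one option (a sketch, not part of the review's diff; `exercise_code_under_test` is a hypothetical placeholder) is to escalate ResourceWarning to an error around the code under test, so a warning raised there surfaces at its source instead of being silenced in teardown:

```python
import warnings


def exercise_code_under_test() -> None:
    """Hypothetical placeholder for the test body."""


def test_no_unclosed_resources() -> None:
    with warnings.catch_warnings():
        # Escalate any ResourceWarning raised inside this block to an
        # exception, so an unclosed client or socket fails the test at
        # the point it is reported rather than being swallowed later.
        warnings.simplefilter("error", ResourceWarning)
        exercise_code_under_test()
```

Note the caveat that warnings emitted later, during garbage collection of a dropped object, may not be caught inside the block; this pattern is most useful for warnings raised synchronously by the code under test.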
🧰 Tools
🪛 Ruff (0.14.14)

27-27: Unused method argument: method

(ARG002)


47-48: try-except-pass detected, consider logging the exception

(S110)


47-47: Do not catch blind exception: Exception

(BLE001)


49-50: try-except-pass detected, consider logging the exception

(S110)


49-49: Do not catch blind exception: Exception

(BLE001)


51-52: try-except-pass detected, consider logging the exception

(S110)


51-51: Do not catch blind exception: Exception

(BLE001)


60-60: Consider moving this statement to an else block

(TRY300)


87-89: try-except-pass detected, consider logging the exception

(S110)


87-87: Do not catch blind exception: Exception

(BLE001)


90-92: try-except-pass detected, consider logging the exception

(S110)


90-90: Do not catch blind exception: Exception

(BLE001)


97-98: try-except-pass detected, consider logging the exception

(S110)


97-97: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
In `@tests/unit/test_setup_cli.py` around lines 27-109: teardown_method is
overly complex and should be simplified. Remove the large mock-inspection
block and the manual event-loop cancellation/closing, drop the time.sleep()
calls and repeated gc.collect() passes, and avoid broad `except Exception:
pass` blocks that silence real errors. Replace the body with a minimal
teardown that performs only targeted, explicit cleanup, leaving event loop
management to pytest-asyncio and mock cleanup to `@patch`.

Member Author

jreakin commented Jan 29, 2026

Merge activity

  • Jan 29, 11:55 PM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Jan 29, 11:56 PM UTC: Graphite rebased this pull request as part of a merge.
  • Jan 29, 11:57 PM UTC: @jreakin merged this pull request with Graphite.

@jreakin jreakin changed the base branch from initial-setup to graphite-base/2 January 29, 2026 23:55
@jreakin jreakin changed the base branch from graphite-base/2 to main January 29, 2026 23:55
@jreakin jreakin force-pushed the initial-test-suite branch from 13d9387 to 4b180c4 Compare January 29, 2026 23:56
@jreakin jreakin merged commit f0013d6 into main Jan 29, 2026
5 of 6 checks passed
@jreakin jreakin deleted the initial-test-suite branch January 30, 2026 00:19