Update #88
Changes from all commits
.env.production.example

@@ -4,9 +4,9 @@
 # Copy this file to .env and update with your production values
 #
 # Usage:
-#   cp .env.production.example .env
-#   nano .env   # Edit with your values
-#   docker compose -f docker-compose.production.yml up -d
+#   cp .env.production.example .env
+#   nano .env   # Edit with your values
+#   docker compose -f docker-compose.production.yml up -d

 # =============================================================================
 # FLASK CONFIGURATION (Required)

@@ -15,7 +15,7 @@ FLASK_ENV=production
 DEBUG=False

 # Generate a new secret key with:
-#   python -c 'import secrets; print(secrets.token_hex(32))'
+#   python -c 'import secrets; print(secrets.token_hex(32))'
 SECRET_KEY=87d27fd71f96abf9e24e998f0717912bd15185a7f82370202272cc2ad59123f2

 # =============================================================================

@@ -25,7 +25,7 @@ SECRET_KEY=87d27fd71f96abf9e24e998f0717912bd15185a7f82370202272cc2ad59123f2
 POSTGRES_HOST=192.168.5.71
 POSTGRES_PORT=5432
 POSTGRES_DB=writebot
-POSTGRES_USER=writebot_user
+POSTGRES_USER=postgres
 POSTGRES_PASSWORD=Writebot01

 # Alternative: Set DATABASE_URL directly (overrides above settings)

@@ -48,7 +48,8 @@ CUDA_VISIBLE_DEVICES=0
 # DOCKER BUILD ARGUMENTS
 # =============================================================================
 # These can be customized at build time
-CUDA_VERSION=12.3.1
+CUDA_VERSION=12.3.2

Suggested change:
-CUDA_VERSION=12.3.2
+CUDA_VERSION=13.0.1
Copilot AI (Jan 7, 2026):
The UBUNTU_VERSION specified here (22.04) is inconsistent with the Ubuntu version specified in Dockerfile.gpu (24.04). This mismatch could lead to compatibility issues or unexpected behavior during deployment. Both files should specify the same Ubuntu version to ensure consistency across the environment configuration.
Suggested change:
-UBUNTU_VERSION=22.04
+UBUNTU_VERSION=24.04
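To keep the two files from drifting again, a small consistency check could run in CI. This is only a sketch and not part of the PR; the filenames .env.production.example and Dockerfile.gpu come from the diff, everything else (the script itself, its location, the exact key list) is hypothetical:

import re
from pathlib import Path

# Hypothetical check: the build arguments in .env.production.example should match
# the ARG defaults declared in Dockerfile.gpu.
KEYS = ("CUDA_VERSION", "CUDNN_VERSION", "UBUNTU_VERSION")

def read_values(path: str) -> dict:
    """Collect KEY=VALUE pairs, accepting an optional leading 'ARG '."""
    values = {}
    for line in Path(path).read_text().splitlines():
        match = re.match(r"^(?:ARG\s+)?(\w+)=(\S+)", line.strip())
        if match and match.group(1) in KEYS:
            # Keep the first occurrence; Dockerfile.gpu re-declares ARGs per stage.
            values.setdefault(match.group(1), match.group(2))
    return values

env_values = read_values(".env.production.example")
docker_values = read_values("Dockerfile.gpu")

for key in KEYS:
    if key in env_values and key in docker_values and env_values[key] != docker_values[key]:
        raise SystemExit(f"{key} mismatch: {env_values[key]} (.env) vs {docker_values[key]} (Dockerfile.gpu)")
print("Build arguments are consistent.")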
Dockerfile.gpu

@@ -1,13 +1,13 @@
 # GPU-enabled Dockerfile for WriteBot
 # Optimized for NVIDIA RTX 50 series (Blackwell) and RTX 40/30 series GPUs
-# Uses CUDA 12.x for best performance with modern GPUs
+# Uses CUDA 13.x for RTX 50 series (Blackwell) support

 #=============================================================================
 # BUILD ARGUMENTS - Customize these at build time
 #=============================================================================
-ARG CUDA_VERSION=12.3.2
+ARG CUDA_VERSION=13.0.1
 ARG CUDNN_VERSION=9
-ARG UBUNTU_VERSION=22.04
+ARG UBUNTU_VERSION=24.04
 ARG PYTHON_VERSION=3.11
 ARG GUNICORN_WORKERS=2
 ARG GUNICORN_THREADS=4

@@ -52,16 +52,17 @@ RUN python -m pip install --upgrade pip
 # Copy requirements and install Python dependencies
 COPY requirements.txt .

-# Install TensorFlow with CUDA support
-RUN pip install --no-cache-dir --user tensorflow[and-cuda]>=2.15.0 tensorflow-probability>=0.23.0
+# Install TensorFlow 2.18+ with CUDA 13.0 support for RTX 50 series (Blackwell)
+# Also install tf-keras for Keras 2 compatibility (TF 2.16+ defaults to Keras 3)
+RUN pip install --no-cache-dir --user "tensorflow[and-cuda]>=2.18.0" "tensorflow-probability>=0.25.0" "tf-keras>=2.18.0"

 # Install remaining dependencies
 RUN pip install --no-cache-dir --user -r requirements.txt

 # Production stage
-ARG CUDA_VERSION=12.3.2
+ARG CUDA_VERSION=13.0.1
 ARG CUDNN_VERSION=9
-ARG UBUNTU_VERSION=22.04
+ARG UBUNTU_VERSION=24.04
 FROM nvidia/cuda:${CUDA_VERSION}-cudnn${CUDNN_VERSION}-runtime-ubuntu${UBUNTU_VERSION}

 # Re-declare ARGs for production stage

@@ -116,6 +117,8 @@ ENV CUDA_VISIBLE_DEVICES=0
 ENV TF_FORCE_GPU_ALLOW_GROWTH=true
 # Enable TensorFloat-32 for RTX 30/40/50 series
 ENV TF_ENABLE_TF32=1
+# Use Keras 2 (tf-keras) instead of Keras 3 for TF1 compat code compatibility
+ENV TF_USE_LEGACY_KERAS=1
 ENV NVIDIA_VISIBLE_DEVICES=all
 ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
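After the image is built, the new CUDA and Keras settings can be sanity-checked from inside the container. A minimal sketch, assuming the tensorflow[and-cuda] and tf-keras packages installed above; it is not part of the PR:

import os

# TF_USE_LEGACY_KERAS must be set before TensorFlow is imported; the Dockerfile
# already exports it via ENV, this mirrors that for an interactive check.
os.environ.setdefault("TF_USE_LEGACY_KERAS", "1")

import tensorflow as tf

build = tf.sysconfig.get_build_info()
print("TensorFlow:", tf.__version__)
print("Built against CUDA:", build.get("cuda_version"), "cuDNN:", build.get("cudnn_version"))
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))

# With tf-keras installed and TF_USE_LEGACY_KERAS=1, tf.keras resolves to Keras 2.x,
# which the TF1-compat code touched by this PR expects.
assert tf.keras.__version__.startswith("2."), "Keras 3 is active; set TF_USE_LEGACY_KERAS=1"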
@@ -8,12 +8,45 @@
 from tensorflow.python.ops import math_ops
 from tensorflow.python.ops import tensor_array_ops
 from tensorflow.python.ops import variable_scope as vs
-from tensorflow.python.ops.rnn import _maybe_tensor_shape_from_tensor
-from tensorflow.python.ops.rnn_cell_impl import _concat, assert_like_rnncell
 from tensorflow.python.util import is_in_graph_mode
 from tensorflow.python.util import nest


+def _maybe_tensor_shape_from_tensor(shape):
+    """Convert tensor or TensorShape to TensorShape for compatibility."""
+    if isinstance(shape, ops.Tensor):
+        return tensor_shape.TensorShape(None)
+    return tensor_shape.TensorShape(shape)
+
+
+def _concat(prefix, suffix, static=False):
+    """Concat prefix and suffix, handling both static and dynamic shapes."""
| """Concat prefix and suffix, handling both static and dynamic shapes.""" | |
| """Concat prefix and suffix, handling both static and dynamic shapes.""" | |
| # The `static` argument is currently kept for API compatibility only and has | |
| # no effect on the behavior of this helper. | |
| _ = static |
Copilot AI (Jan 7, 2026):
The conditional logic for converting prefix to a tensor is incorrect when prefix is a list. The expression wraps the list in another list with tf.constant([prefix], dtype=tf.int32), which will create the wrong structure. It should be tf.constant(prefix, dtype=tf.int32) (without the extra brackets) to match the suffix handling on line 30. The correct logic should be: if prefix is a Tensor, use it as-is; otherwise convert it to a constant regardless of whether it's a scalar or list.
Suggested change:
-    p = tf.constant(prefix, dtype=tf.int32) if not isinstance(prefix, list) else tf.constant([prefix], dtype=tf.int32)
-    if isinstance(suffix, ops.Tensor):
-        s = suffix
-    else:
-        s = tf.constant(suffix, dtype=tf.int32) if not isinstance(suffix, list) else tf.constant(suffix, dtype=tf.int32)
+    p = tf.constant(prefix, dtype=tf.int32)
+    if isinstance(suffix, ops.Tensor):
+        s = suffix
+    else:
+        s = tf.constant(suffix, dtype=tf.int32)
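To see why the extra brackets matter, here is a quick standalone illustration of the two conversions for a list prefix (hypothetical example values, not taken from the PR):

import tensorflow as tf

prefix = [2, 3]  # a list-valued shape prefix, the case flagged in the comment above

wrapped = tf.constant([prefix], dtype=tf.int32)   # extra brackets: shape (1, 2)
unwrapped = tf.constant(prefix, dtype=tf.int32)   # matches the suffix handling: shape (2,)

print(wrapped.shape)    # (1, 2)
print(unwrapped.shape)  # (2,)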
Copilot AI (Jan 7, 2026):
The condition hasattr(cell, "__call__") or callable(cell) is redundant because callable(cell) already checks for the presence of a __call__ method. The condition can be simplified to just callable(cell) for cleaner and more idiomatic code.
Suggested change:
-    hasattr(cell, "__call__") or callable(cell),
+    callable(cell),
Changing POSTGRES_USER to the default postgres superuser means the application will connect as a database administrator by default, greatly increasing the impact of any compromise (e.g., SQL injection or remote code execution leading to full database takeover). Use a dedicated application role with only the minimum privileges required for the app, and reserve the postgres account strictly for administrative tasks managed outside the application environment.
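One way to enforce this is a startup guard that refuses to run when the configured role has superuser rights. A minimal sketch, assuming psycopg2 and the POSTGRES_* variables from .env.production.example; it is not part of this PR:

import os
import psycopg2

# Connect with the same settings the application reads from the environment.
conn = psycopg2.connect(
    host=os.environ["POSTGRES_HOST"],
    port=os.environ.get("POSTGRES_PORT", "5432"),
    dbname=os.environ["POSTGRES_DB"],
    user=os.environ["POSTGRES_USER"],
    password=os.environ["POSTGRES_PASSWORD"],
)
with conn, conn.cursor() as cur:
    # pg_roles.rolsuper is true for superuser roles such as the default `postgres`.
    cur.execute("SELECT rolsuper FROM pg_roles WHERE rolname = current_user")
    (is_superuser,) = cur.fetchone()

if is_superuser:
    raise SystemExit("Refusing to start: application is connecting as a PostgreSQL superuser")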