29 changes: 15 additions & 14 deletions .env
@@ -4,9 +4,9 @@
# Copy this file to .env and update with your production values
#
# Usage:
-#   cp .env.production.example .env
-#   nano .env  # Edit with your values
-#   docker compose -f docker-compose.production.yml up -d
+# cp .env.production.example .env
+# nano .env # Edit with your values
+# docker compose -f docker-compose.production.yml up -d

# =============================================================================
# FLASK CONFIGURATION (Required)
@@ -15,7 +15,7 @@ FLASK_ENV=production
DEBUG=False

# Generate a new secret key with:
-#   python -c 'import secrets; print(secrets.token_hex(32))'
+# python -c 'import secrets; print(secrets.token_hex(32))'
SECRET_KEY=87d27fd71f96abf9e24e998f0717912bd15185a7f82370202272cc2ad59123f2

# =============================================================================
@@ -25,7 +25,7 @@ SECRET_KEY=87d27fd71f96abf9e24e998f0717912bd15185a7f82370202272cc2ad59123f2
POSTGRES_HOST=192.168.5.71
POSTGRES_PORT=5432
POSTGRES_DB=writebot
-POSTGRES_USER=writebot_user
+POSTGRES_USER=postgres
Copilot AI Jan 7, 2026

Changing POSTGRES_USER to the default postgres superuser means the application will connect as a database administrator by default, greatly increasing the impact of any compromise (e.g., SQL injection or remote code execution leading to full database takeover). Use a dedicated application role with only the minimum privileges required for the app, and reserve the postgres account strictly for administrative tasks managed outside the application environment.

Suggested change:
-POSTGRES_USER=postgres
+POSTGRES_USER=writebot
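A minimal sketch of the dedicated-role provisioning this comment suggests, run once by an administrator rather than by the app. The role name writebot, the admin DSN, and both passwords below are illustrative assumptions, not values taken from this repository:

import psycopg2

# Admin-side, one-off setup: create a least-privilege application role.
ADMIN_DSN = "host=192.168.5.71 port=5432 dbname=writebot user=postgres password=admin-password"

ddl = [
    "CREATE ROLE writebot LOGIN PASSWORD %(pw)s",
    "GRANT CONNECT ON DATABASE writebot TO writebot",
    "GRANT USAGE ON SCHEMA public TO writebot",
    # Row-level CRUD only: no CREATE/DROP/ALTER, and never superuser.
    "GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO writebot",
]

conn = psycopg2.connect(ADMIN_DSN)
conn.autocommit = True  # run each DDL statement outside an explicit transaction
with conn.cursor() as cur:
    for stmt in ddl:
        cur.execute(stmt, {"pw": "a-strong-generated-password"})
conn.close()

The app would then connect as POSTGRES_USER=writebot, and the postgres account stays reserved for maintenance.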
POSTGRES_PASSWORD=Writebot01

# Alternative: Set DATABASE_URL directly (overrides above settings)
@@ -48,7 +48,8 @@ CUDA_VISIBLE_DEVICES=0
# DOCKER BUILD ARGUMENTS
# =============================================================================
# These can be customized at build time
-CUDA_VERSION=12.3.1
+CUDA_VERSION=12.3.2
Copilot AI Jan 7, 2026

The CUDA version specified here (12.3.2) is inconsistent with the CUDA version specified in Dockerfile.gpu (13.0.1). This mismatch could lead to unexpected behavior or build failures if these build arguments are used. Both files should specify the same CUDA version to maintain consistency across the environment configuration.

Suggested change:
-CUDA_VERSION=12.3.2
+CUDA_VERSION=13.0.1
CUDNN_VERSION=9
UBUNTU_VERSION=22.04
Copilot AI Jan 7, 2026

The UBUNTU_VERSION specified here (22.04) is inconsistent with the Ubuntu version specified in Dockerfile.gpu (24.04). This mismatch could lead to compatibility issues or unexpected behavior during deployment. Both files should specify the same Ubuntu version to ensure consistency across the environment configuration.

Suggested change:
-UBUNTU_VERSION=22.04
+UBUNTU_VERSION=24.04
PYTHON_VERSION=3.11

@@ -83,9 +84,9 @@ SESSION_COOKIE_SAMESITE=Lax
# The tunnel will connect to localhost:${APP_PORT}
#
# Setup commands:
-#   cloudflared tunnel login
-#   cloudflared tunnel create writebot
-#   cloudflared tunnel route dns writebot your-domain.com
+# cloudflared tunnel login
+# cloudflared tunnel create writebot
+# cloudflared tunnel route dns writebot your-domain.com
#
# Tunnel token (if using token-based auth instead of cert)
# CLOUDFLARE_TUNNEL_TOKEN=your-tunnel-token
@@ -101,11 +102,11 @@ APP_VERSION=1.0.0
# =============================================================================
# Default network layout (customize in proxmox-setup.sh):
#
-# Component        | VMID | IP Address    | Purpose
+# Component | VMID | IP Address | Purpose
# -----------------|------|---------------|------------------
-# Docker VM        | 100  | 10.10.10.10   | WriteBot app + GPU
-# PostgreSQL LXC   | 101  | 10.10.10.11   | Database server
-# Redis LXC        | 102  | 10.10.10.12   | Cache/rate limiting
-# Gateway          | -    | 10.10.10.1    | Network gateway
+# Docker VM | 100 | 10.10.10.10 | WriteBot app + GPU
+# PostgreSQL LXC | 101 | 10.10.10.11 | Database server
+# Redis LXC | 102 | 10.10.10.12 | Cache/rate limiting
+# Gateway | - | 10.10.10.1 | Network gateway
#
# Network: vmbr0 (10.10.10.0/24)
17 changes: 10 additions & 7 deletions Dockerfile.gpu
@@ -1,13 +1,13 @@
# GPU-enabled Dockerfile for WriteBot
# Optimized for NVIDIA RTX 50 series (Blackwell) and RTX 40/30 series GPUs
-# Uses CUDA 12.x for best performance with modern GPUs
+# Uses CUDA 13.x for RTX 50 series (Blackwell) support
Copilot AI Jan 7, 2026

The comment states "Uses CUDA 13.x for RTX 50 series (Blackwell) support" but CUDA 13.x does not exist as of January 2025. The actual CUDA versions available are in the 12.x series. This comment should be updated to reflect the correct CUDA version that actually supports the target hardware.

#=============================================================================
# BUILD ARGUMENTS - Customize these at build time
#=============================================================================
-ARG CUDA_VERSION=12.3.2
+ARG CUDA_VERSION=13.0.1
ARG CUDNN_VERSION=9
-ARG UBUNTU_VERSION=22.04
+ARG UBUNTU_VERSION=24.04
ARG PYTHON_VERSION=3.11
ARG GUNICORN_WORKERS=2
ARG GUNICORN_THREADS=4
@@ -52,16 +52,17 @@ RUN python -m pip install --upgrade pip
# Copy requirements and install Python dependencies
COPY requirements.txt .

-# Install TensorFlow with CUDA support
-RUN pip install --no-cache-dir --user tensorflow[and-cuda]>=2.15.0 tensorflow-probability>=0.23.0
+# Install TensorFlow 2.18+ with CUDA 13.0 support for RTX 50 series (Blackwell)
Copilot AI Jan 7, 2026

The comment mentions "CUDA 13.0 support" but CUDA 13.0 does not exist as of January 2025. This comment should be updated to reflect the actual CUDA version being used once the correct version is determined.
+# Also install tf-keras for Keras 2 compatibility (TF 2.16+ defaults to Keras 3)
+RUN pip install --no-cache-dir --user "tensorflow[and-cuda]>=2.18.0" "tensorflow-probability>=0.25.0" "tf-keras>=2.18.0"

# Install remaining dependencies
RUN pip install --no-cache-dir --user -r requirements.txt

# Production stage
-ARG CUDA_VERSION=12.3.2
+ARG CUDA_VERSION=13.0.1
ARG CUDNN_VERSION=9
-ARG UBUNTU_VERSION=22.04
+ARG UBUNTU_VERSION=24.04
FROM nvidia/cuda:${CUDA_VERSION}-cudnn${CUDNN_VERSION}-runtime-ubuntu${UBUNTU_VERSION}

# Re-declare ARGs for production stage
@@ -116,6 +117,8 @@ ENV CUDA_VISIBLE_DEVICES=0
ENV TF_FORCE_GPU_ALLOW_GROWTH=true
# Enable TensorFloat-32 for RTX 30/40/50 series
ENV TF_ENABLE_TF32=1
+# Use Keras 2 (tf-keras) instead of Keras 3 for TF1 compat code compatibility
+ENV TF_USE_LEGACY_KERAS=1
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
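A quick way to verify the tf-keras switch actually takes effect, as a hedged sketch (assumes tensorflow>=2.16 and tf-keras are installed; the variable must be set before TensorFlow is first imported):

import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"  # must precede the first tensorflow import

import tensorflow as tf

# With the flag set and tf-keras installed, tf.keras resolves to Keras 2,
# which keeps TF1-style compat code (tf.compat.v1 graphs, custom RNN cells) usable.
print(tf.keras.__version__)  # expect a 2.x version string, not 3.x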

2 changes: 1 addition & 1 deletion handwriting_synthesis/rnn/LSTMAttentionCell.py
@@ -226,7 +226,7 @@ def termination_condition(self, state):
        past_final_char = char_idx >= self.attention_values_lengths
        output = self.output_function(state)
        es = tf.cast(output[:, 2], tf.int32)
-        is_eos = tf.equal(es, tf.experimental.numpy.ones_like(es))
+        is_eos = tf.equal(es, tf.ones_like(es))
        return tf.logical_or(tf.logical_and(final_char, is_eos), past_final_char)

    def _parse_parameters(self, gmm_params, eps=1e-8, sigma_eps=1e-4):
42 changes: 38 additions & 4 deletions handwriting_synthesis/rnn/operations.py
@@ -8,12 +8,45 @@
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import tensor_array_ops
from tensorflow.python.ops import variable_scope as vs
-from tensorflow.python.ops.rnn import _maybe_tensor_shape_from_tensor
-from tensorflow.python.ops.rnn_cell_impl import _concat, assert_like_rnncell
-from tensorflow.python.util import is_in_graph_mode
from tensorflow.python.util import nest


+def _maybe_tensor_shape_from_tensor(shape):
+    """Convert tensor or TensorShape to TensorShape for compatibility."""
+    if isinstance(shape, ops.Tensor):
+        return tensor_shape.TensorShape(None)
+    return tensor_shape.TensorShape(shape)


+def _concat(prefix, suffix, static=False):
+    """Concat prefix and suffix, handling both static and dynamic shapes."""
Copilot AI Jan 7, 2026

The static parameter is declared but never used in the function body. If this parameter is intended to control behavior (e.g., force static vs dynamic shape handling), it should be implemented. Otherwise, it should be removed to avoid confusion and maintain a clean API.

Suggested change:
    """Concat prefix and suffix, handling both static and dynamic shapes."""
+    # The `static` argument is currently kept for API compatibility only and has
+    # no effect on the behavior of this helper.
+    _ = static
+    if isinstance(prefix, ops.Tensor):
+        p = prefix
+    else:
+        p = tf.constant(prefix, dtype=tf.int32) if not isinstance(prefix, list) else tf.constant([prefix], dtype=tf.int32)
+    if isinstance(suffix, ops.Tensor):
+        s = suffix
+    else:
+        s = tf.constant(suffix, dtype=tf.int32) if not isinstance(suffix, list) else tf.constant(suffix, dtype=tf.int32)
Comment on lines +26 to +30

Copilot AI Jan 7, 2026

The conditional logic for converting prefix to a tensor is incorrect when prefix is a list. The expression wraps the list in another list with tf.constant([prefix], dtype=tf.int32), which will create the wrong structure. It should be tf.constant(prefix, dtype=tf.int32) (without the extra brackets) to match the suffix handling on line 30. The correct logic should be: if prefix is a Tensor, use it as-is; otherwise convert it to a constant regardless of whether it's a scalar or list.

Suggested change:
-        p = tf.constant(prefix, dtype=tf.int32) if not isinstance(prefix, list) else tf.constant([prefix], dtype=tf.int32)
-    if isinstance(suffix, ops.Tensor):
-        s = suffix
-    else:
-        s = tf.constant(suffix, dtype=tf.int32) if not isinstance(suffix, list) else tf.constant(suffix, dtype=tf.int32)
+        p = tf.constant(prefix, dtype=tf.int32)
+    if isinstance(suffix, ops.Tensor):
+        s = suffix
+    else:
+        s = tf.constant(suffix, dtype=tf.int32)
+    # Ensure both are at least 1D
+    if len(p.shape) == 0:
+        p = tf.expand_dims(p, 0)
+    if len(s.shape) == 0:
+        s = tf.expand_dims(s, 0)
+    return tf.concat([p, s], axis=0)
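To make the rank problem flagged above concrete, a small eager-mode illustration (an editor's sketch, not part of the PR):

import tensorflow as tf

prefix = [2, 3]                                   # the list case the comment flags
wrapped = tf.constant([prefix], dtype=tf.int32)   # shape (1, 2): nested one level too deep
flat = tf.constant(prefix, dtype=tf.int32)        # shape (2,): what the suffix branch produces

suffix = tf.constant([4], dtype=tf.int32)         # shape (1,)
print(tf.concat([flat, suffix], axis=0))          # [2 3 4], as intended
# tf.concat([wrapped, suffix], axis=0) raises an error: rank 2 vs rank 1 mismatch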


+def assert_like_rnncell(name, cell):
+    """Check that cell behaves like an RNNCell (compatibility shim for TF 2.x)."""
+    conditions = [
+        hasattr(cell, "output_size"),
+        hasattr(cell, "state_size"),
+        hasattr(cell, "__call__") or callable(cell),
Copilot AI Jan 7, 2026

The condition hasattr(cell, "__call__") or callable(cell) is redundant because callable(cell) already checks for the presence of a __call__ method. The condition can be simplified to just callable(cell) for cleaner and more idiomatic code.

Suggested change:
-        hasattr(cell, "__call__") or callable(cell),
+        callable(cell),
+    ]
+    if not all(conditions):
+        raise TypeError(f"{name} is not an RNNCell: {type(cell)}")
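A rough usage sketch for the shim (assumes TF 2.x; not from the PR): anything duck-typed like an RNN cell passes, everything else raises.

import tensorflow as tf

cell = tf.keras.layers.LSTMCell(8)     # exposes state_size, output_size, __call__
assert_like_rnncell("cell", cell)      # passes silently

assert_like_rnncell("not_a_cell", 42)  # raises TypeError: not_a_cell is not an RNNCell: <class 'int'>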


def raw_rnn(cell, loop_fn, parallel_iterations=None, swap_memory=False, scope=None):
"""
Computes a recurrent neural network where inputs can be fed adaptively.
@@ -56,7 +89,8 @@ def raw_rnn(cell, loop_fn, parallel_iterations=None, swap_memory=False, scope=None):
    # determined by the parent scope, or is set to place the cached
    # Variable using the same placement as for the rest of the RNN.
    with vs.variable_scope(scope or "rnn") as varscope:
-        if is_in_graph_mode.IS_IN_GRAPH_MODE():
+        # TF 2.x compatibility: check if we're in graph mode (not eager)
+        if not tf.executing_eagerly():
            if varscope.caching_device is None:
                varscope.set_caching_device(lambda op: op.device)
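For context on the replacement condition, a tiny sketch (assumed, not part of the PR) of how tf.executing_eagerly() separates the two modes in TF 2.x:

import tensorflow as tf

print(tf.executing_eagerly())  # True at the top level in TF 2.x

@tf.function
def traced_fn():
    # Tracing builds a graph, so executing_eagerly() is False here -- the same
    # condition that enables the caching-device branch in raw_rnn above.
    print("eager during tracing:", tf.executing_eagerly())
    return tf.constant(0)

traced_fn()  # trace-time output: eager during tracing: False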
