Merged

40 commits
126c4f7
Add album_artist column to data layer across all providers
rendyhd Jan 31, 2026
44846c1
Add Docker test stacks and album_artist validation guide
claude Feb 1, 2026
27d356c
Switch AudioMuse test stack to local NVIDIA build
claude Feb 1, 2026
3752779
Use bind mounts under ./providers/ for provider storage
claude Feb 1, 2026
95b68bf
Merge pull request #1 from rendyhd/claude/docker-test-guide-Pa9uc
rendyhd Feb 1, 2026
1d23a29
updated test files
rendyhd Feb 1, 2026
1a4378d
Merge branch 'NeptuneHub:main' into feature/album-artist-support
rendyhd Feb 1, 2026
969cae2
missing base_image
rendyhd Feb 1, 2026
b13914b
wrong lyrion port in test compose for audiomuse
rendyhd Feb 1, 2026
a1913c9
Fix album_artist API call
rendyhd Feb 1, 2026
4c4f612
Clean test stack
rendyhd Feb 1, 2026
c6f2273
Added Year, Rating (for Lyrion and Navidrome), and File Path
rendyhd Feb 2, 2026
c7825f6
changed identifier towards Navidrome from version to "AudioMuse" to r…
rendyhd Feb 2, 2026
9b58e3a
clean-up provider test stack
rendyhd Feb 2, 2026
3dd1a9a
clean-up provider test stack
rendyhd Feb 2, 2026
aa8ec44
Merge branch 'feature/album-artist-support' of https://github.com/ren…
rendyhd Feb 2, 2026
c86778a
Merge branch 'main' into feature/album-artist-support
NeptuneHub Feb 3, 2026
674aff6
fallback logic for DD-MM-YYYY, Rating to 5 star schema, album_name fo…
rendyhd Feb 3, 2026
8204a2f
Merge branch 'main' into feature/album-artist-support
NeptuneHub Feb 11, 2026
6893550
Cherry-pick AI instant playlist overhaul + album support from multi-p…
rendyhd Mar 1, 2026
af11f41
AI instant playlist improvements, unit tests, and gitignore testing_s…
rendyhd Mar 3, 2026
96fb5aa
Enforce rating and genre filters strictly in instant playlist
rendyhd Mar 4, 2026
2097f80
updated navidrome identifier
rendyhd Mar 4, 2026
c2f11e5
Merge pull request #6 from rendyhd/feature/ai-instant-playlist-upgrade
rendyhd Mar 4, 2026
2b2efe0
Merge branch 'main' into feature/album-artist-support
NeptuneHub Mar 5, 2026
8290bb4
Merge branch 'main' into feature/album-artist-support
NeptuneHub Mar 7, 2026
9e1b317
Fix hardcoded local path in test_playlist_ordering.py
rendyhd Mar 9, 2026
aaf9f86
Merge branch 'main' into feature/album-artist-support
rendyhd Mar 9, 2026
27f2c09
Fix genre filter test to match new SUBSTRING-based SQL pattern
rendyhd Mar 9, 2026
e6f1183
Index fix
NeptuneHub Mar 10, 2026
88b9c99
Prompt improvement
NeptuneHub Mar 10, 2026
09ca1fe
Unit and Integration test fix
NeptuneHub Mar 10, 2026
ccde60b
Merge branch 'main' into feature/album-artist-support
NeptuneHub Mar 10, 2026
70e0a1e
Merge branch 'main' into feature/album-artist-support
NeptuneHub Mar 10, 2026
0b745ad
Merge branch 'main' into feature/album-artist-support
NeptuneHub Mar 12, 2026
ceead32
Fix year filter, strict filter fidelity, and progressive artist cap r…
rendyhd Mar 12, 2026
6f4d5b0
Merge branch 'main' into feature/album-artist-support
NeptuneHub Mar 12, 2026
aa241ab
Merge branch 'main' into feature/album-artist-support
NeptuneHub Mar 14, 2026
d4c30ea
Improve Ollama instant playlist: fix timeout, thinking models, and pr…
rendyhd Mar 14, 2026
f59e506
Improve playlist quality: scale routing, genre coherence, iteration c…
rendyhd Mar 14, 2026
5 changes: 5 additions & 0 deletions .gitignore
@@ -14,6 +14,7 @@ env/
# IMPORTANT: Never commit your .env file with secrets!
.env*
!.env.example
!.env.test.example
# You can add an exception for an example file if you create one
# !.env.example

@@ -35,6 +36,7 @@ env/
# Test artifacts
.pytest_cache/
htmlcov/
nul

# Large model files in query folder
/query/*.pt
@@ -64,3 +66,6 @@ student_clap/models/*.onnx
student_clap/config.local.yaml
student_clap/models/FMA_SONGS_LICENSE.md
student_clap/models/FMA_SONGS_2247_LICENSE.md

# Testing suite
testing_suite/
2 changes: 1 addition & 1 deletion Dockerfile
@@ -403,4 +403,4 @@ ENV PYTHONPATH=/usr/local/lib/python3/dist-packages:/app
EXPOSE 8000

WORKDIR /workspace
CMD ["bash", "-c", "if [ -n \"$TZ\" ] && [ -f \"/usr/share/zoneinfo/$TZ\" ]; then ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone; elif [ -n \"$TZ\" ]; then echo \"Warning: timezone '$TZ' not found in /usr/share/zoneinfo\" >&2; fi; if [ \"$SERVICE_TYPE\" = \"worker\" ]; then echo 'Starting worker processes via supervisord...' && /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf; else echo 'Starting web service...' && gunicorn --bind 0.0.0.0:8000 --workers 1 --timeout 120 app:app; fi"]
CMD ["bash", "-c", "if [ -n \"$TZ\" ] && [ -f \"/usr/share/zoneinfo/$TZ\" ]; then ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone; elif [ -n \"$TZ\" ]; then echo \"Warning: timezone '$TZ' not found in /usr/share/zoneinfo\" >&2; fi; if [ \"$SERVICE_TYPE\" = \"worker\" ]; then echo 'Starting worker processes via supervisord...' && /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf; else echo 'Starting web service...' && gunicorn --bind 0.0.0.0:8000 --workers 1 --timeout 300 app:app; fi"]
2 changes: 1 addition & 1 deletion Dockerfile-noavx2
@@ -397,4 +397,4 @@ ENV PYTHONPATH=/usr/local/lib/python3/dist-packages:/app
EXPOSE 8000

WORKDIR /workspace
CMD ["bash", "-c", "if [ \"$SERVICE_TYPE\" = \"worker\" ]; then echo 'Starting worker processes via supervisord...' && /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf; else echo 'Starting web service...' && gunicorn --bind 0.0.0.0:8000 --workers 1 --timeout 120 app:app; fi"]
CMD ["bash", "-c", "if [ \"$SERVICE_TYPE\" = \"worker\" ]; then echo 'Starting worker processes via supervisord...' && /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf; else echo 'Starting web service...' && gunicorn --bind 0.0.0.0:8000 --workers 1 --timeout 300 app:app; fi"]
601 changes: 339 additions & 262 deletions ai_mcp_client.py

Large diffs are not rendered by default.

546 changes: 407 additions & 139 deletions app_chat.py

Large diffs are not rendered by default.

106 changes: 97 additions & 9 deletions app_helper.py
@@ -87,7 +87,7 @@ def init_db():
cur.execute('CREATE EXTENSION IF NOT EXISTS unaccent')
cur.execute('CREATE EXTENSION IF NOT EXISTS pg_trgm')
# Create 'score' table
cur.execute("CREATE TABLE IF NOT EXISTS score (item_id TEXT PRIMARY KEY, title TEXT, author TEXT, album TEXT, tempo REAL, key TEXT, scale TEXT, mood_vector TEXT)")
cur.execute("CREATE TABLE IF NOT EXISTS score (item_id TEXT PRIMARY KEY, title TEXT, author TEXT, album TEXT, album_artist TEXT, tempo REAL, key TEXT, scale TEXT, mood_vector TEXT)")
# Add 'energy' column if not exists
cur.execute("SELECT EXISTS (SELECT 1 FROM information_schema.columns WHERE table_name = 'score' AND column_name = 'energy')")
if not cur.fetchone()[0]:
@@ -103,6 +103,26 @@ def init_db():
if not cur.fetchone()[0]:
logger.info("Adding 'album' column to 'score' table.")
cur.execute("ALTER TABLE score ADD COLUMN album TEXT")
# Add 'album_artist' column if not exists
cur.execute("SELECT EXISTS (SELECT 1 FROM information_schema.columns WHERE table_name = 'score' AND column_name = 'album_artist')")
if not cur.fetchone()[0]:
logger.info("Adding 'album_artist' column to 'score' table.")
cur.execute("ALTER TABLE score ADD COLUMN album_artist TEXT")
# Add 'year' column if not exists
cur.execute("SELECT EXISTS (SELECT 1 FROM information_schema.columns WHERE table_name = 'score' AND column_name = 'year')")
if not cur.fetchone()[0]:
logger.info("Adding 'year' column to 'score' table.")
cur.execute("ALTER TABLE score ADD COLUMN year INTEGER")
# Add 'rating' column if not exists
cur.execute("SELECT EXISTS (SELECT 1 FROM information_schema.columns WHERE table_name = 'score' AND column_name = 'rating')")
if not cur.fetchone()[0]:
logger.info("Adding 'rating' column to 'score' table.")
cur.execute("ALTER TABLE score ADD COLUMN rating INTEGER")
# Add 'file_path' column if not exists
cur.execute("SELECT EXISTS (SELECT 1 FROM information_schema.columns WHERE table_name = 'score' AND column_name = 'file_path')")
if not cur.fetchone()[0]:
logger.info("Adding 'file_path' column to 'score' table.")
cur.execute("ALTER TABLE score ADD COLUMN file_path TEXT")

# Add 'search_u' column if not exists (helps search)
cur.execute("SELECT EXISTS (SELECT 1 FROM information_schema.columns WHERE table_name = 'score' AND column_name = 'search_u')")
@@ -441,7 +461,7 @@ def track_exists(item_id):
cur.close()
return row is not None

def save_track_analysis_and_embedding(item_id, title, author, tempo, key, scale, moods, embedding_vector, energy=None, other_features=None, album=None):
def save_track_analysis_and_embedding(item_id, title, author, tempo, key, scale, moods, embedding_vector, energy=None, other_features=None, album=None, album_artist=None, year=None, rating=None, file_path=None):
"""Saves track analysis and embedding in a single transaction."""

def _sanitize_string(s, max_length=1000, field_name="field"):
@@ -479,19 +499,83 @@ def _sanitize_string(s, max_length=1000, field_name="field"):
title = _sanitize_string(title, max_length=500, field_name="title")
author = _sanitize_string(author, max_length=200, field_name="author")
album = _sanitize_string(album, max_length=200, field_name="album")
album_artist = _sanitize_string(album_artist, max_length=200, field_name="album_artist")
key = _sanitize_string(key, max_length=10, field_name="key")
scale = _sanitize_string(scale, max_length=10, field_name="scale")
other_features = _sanitize_string(other_features, max_length=2000, field_name="other_features")

# year: parse from various date formats and validate
def _parse_year_from_date(year_value):
"""
Parse year from various date formats.
Supports: YYYY, YYYY-MM-DD, MM-DD-YYYY, DD-MM-YYYY (with - or / separators)
"""
if year_value is None:
return None

year_str = str(year_value).strip()
if not year_str:
return None

# Try parsing as pure integer first (YYYY)
try:
year = int(year_str)
if 1000 <= year <= 2100:
return year
except (ValueError, TypeError):
pass

# Normalize separators
normalized = year_str.replace('/', '-')
parts = normalized.split('-')

if len(parts) == 3:
try:
# YYYY-MM-DD format
if len(parts[0]) == 4:
year = int(parts[0])
if 1000 <= year <= 2100:
return year

# MM-DD-YYYY or DD-MM-YYYY format
if len(parts[2]) == 4:
year = int(parts[2])
if 1000 <= year <= 2100:
return year

# 2-digit year (MM-DD-YY)
if len(parts[2]) == 2:
year = int(parts[2])
year += 2000 if year < 30 else 1900
if 1000 <= year <= 2100:
return year
except (ValueError, TypeError, IndexError):
pass

return None

year = _parse_year_from_date(year)

# rating: validate as integer 0-5 (5-star rating system)
if rating is not None:
try:
rating = int(rating)
if rating < 0 or rating > 5:
rating = None
except (ValueError, TypeError):
rating = None

file_path = _sanitize_string(file_path, max_length=1000, field_name="file_path")

mood_str = ','.join(f"{k}:{v:.3f}" for k, v in moods.items())

conn = get_db() # This now calls the function within this file
cur = conn.cursor()
try:
# Save analysis to score table
cur.execute("""
INSERT INTO score (item_id, title, author, tempo, key, scale, mood_vector, energy, other_features, album)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
INSERT INTO score (item_id, title, author, tempo, key, scale, mood_vector, energy, other_features, album, album_artist, year, rating, file_path)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
ON CONFLICT (item_id) DO UPDATE SET
title = EXCLUDED.title,
author = EXCLUDED.author,
@@ -501,8 +585,12 @@ def _sanitize_string(s, max_length=1000, field_name="field"):
mood_vector = EXCLUDED.mood_vector,
energy = EXCLUDED.energy,
other_features = EXCLUDED.other_features,
album = EXCLUDED.album
""", (item_id, title, author, tempo, key, scale, mood_str, energy, other_features, album))
album = EXCLUDED.album,
album_artist = EXCLUDED.album_artist,
year = EXCLUDED.year,
rating = EXCLUDED.rating,
file_path = EXCLUDED.file_path
""", (item_id, title, author, tempo, key, scale, mood_str, energy, other_features, album, album_artist, year, rating, file_path))

# Save embedding
if isinstance(embedding_vector, np.ndarray) and embedding_vector.size > 0:
@@ -589,7 +677,7 @@ def get_all_tracks():
conn = get_db() # This now calls the function within this file
cur = conn.cursor(cursor_factory=DictCursor)
cur.execute("""
SELECT s.item_id, s.title, s.author, s.tempo, s.key, s.scale, s.mood_vector, s.energy, s.other_features, e.embedding
SELECT s.item_id, s.title, s.author, s.tempo, s.key, s.scale, s.mood_vector, s.energy, s.other_features, s.year, s.rating, s.file_path, e.embedding
FROM score s
LEFT JOIN embedding e ON s.item_id = e.item_id
""")
@@ -620,7 +708,7 @@ def get_tracks_by_ids(item_ids_list):
item_ids_str = [str(item_id) for item_id in item_ids_list]

query = """
SELECT s.item_id, s.title, s.author, s.album, s.tempo, s.key, s.scale, s.mood_vector, s.energy, s.other_features, e.embedding
SELECT s.item_id, s.title, s.author, s.album, s.album_artist, s.tempo, s.key, s.scale, s.mood_vector, s.energy, s.other_features, s.year, s.rating, s.file_path, e.embedding
FROM score s
LEFT JOIN embedding e ON s.item_id = e.item_id
WHERE s.item_id IN %s
@@ -648,7 +736,7 @@ def get_score_data_by_ids(item_ids_list):
conn = get_db() # This now calls the function within this file
cur = conn.cursor(cursor_factory=DictCursor)
query = """
SELECT s.item_id, s.title, s.author, s.album, s.tempo, s.key, s.scale, s.mood_vector, s.energy, s.other_features
SELECT s.item_id, s.title, s.author, s.album, s.album_artist, s.tempo, s.key, s.scale, s.mood_vector, s.energy, s.other_features, s.year, s.rating, s.file_path
FROM score s
WHERE s.item_id IN %s
"""
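The date-fallback logic added to `save_track_analysis_and_embedding` can be exercised standalone. Below is a sketch mirroring the diff's parsing rules (the function body follows the PR; the sample inputs are illustrative):

```python
def parse_year_from_date(year_value):
    """Parse a 4-digit year out of YYYY, YYYY-MM-DD, MM-DD-YYYY,
    DD-MM-YYYY, or MM-DD-YY strings ('-' or '/' separators)."""
    if year_value is None:
        return None
    year_str = str(year_value).strip()
    if not year_str:
        return None
    # Bare integer year (YYYY)
    try:
        year = int(year_str)
        if 1000 <= year <= 2100:
            return year
    except (ValueError, TypeError):
        pass
    # Normalize separators and try the three-part date formats
    parts = year_str.replace('/', '-').split('-')
    if len(parts) == 3:
        try:
            if len(parts[0]) == 4:        # YYYY-MM-DD
                year = int(parts[0])
            elif len(parts[2]) == 4:      # MM-DD-YYYY or DD-MM-YYYY
                year = int(parts[2])
            elif len(parts[2]) == 2:      # MM-DD-YY: 2-digit pivot at 30
                year = int(parts[2])
                year += 2000 if year < 30 else 1900
            else:
                return None
            if 1000 <= year <= 2100:
                return year
        except (ValueError, TypeError):
            pass
    return None

print(parse_year_from_date("2005-03-12"))   # 2005
print(parse_year_from_date("12/03/2005"))   # 2005
print(parse_year_from_date("03-12-99"))     # 1999
print(parse_year_from_date("not a date"))   # None
```

Note the ambiguity is deliberate: for three-part dates only the 4-digit (or 2-digit last) field is trusted as the year, so MM-DD-YYYY and DD-MM-YYYY both resolve correctly without guessing day/month order.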
7 changes: 5 additions & 2 deletions app_voyager.py
@@ -113,7 +113,8 @@ def search_tracks_endpoint():
'item_id': r.get('item_id'),
'title': r.get('title'),
'author': r.get('author'),
'album': album
'album': album,
'album_artist': (r.get('album_artist') or '').strip() or 'unknown'
})
else:
results.append({'item_id': None, 'title': None, 'author': None, 'album': 'unknown'})
@@ -256,6 +257,7 @@ def get_similar_tracks_endpoint():
"title": track_info['title'],
"author": track_info['author'],
"album": (track_info.get('album') or 'unknown'),
"album_artist": (track_info.get('album_artist') or 'unknown'),
"distance": distance_map[neighbor_id]
})

@@ -314,7 +316,8 @@ def get_track_endpoint():
"item_id": d.get('item_id'),
"title": d.get('title'),
"author": d.get('author'),
"album": (d.get('album') or 'unknown')
"album": (d.get('album') or 'unknown'),
"album_artist": (d.get('album_artist') or 'unknown')
}), 200
except Exception as e:
logger.error(f"Unexpected error fetching track {item_id}: {e}", exc_info=True)
5 changes: 5 additions & 0 deletions config.py
@@ -465,6 +465,11 @@
# }
ENABLE_PROXY_FIX = os.environ.get("ENABLE_PROXY_FIX", "False").lower() == "true"

# --- Instant Playlist Optimization ---
# Max songs from a single artist in the instant playlist (diversity enforcement)
MAX_SONGS_PER_ARTIST_PLAYLIST = int(os.environ.get("MAX_SONGS_PER_ARTIST_PLAYLIST", "5"))
# Enable energy-arc shaping for playlist ordering (gentle start -> peak -> cool down)
PLAYLIST_ENERGY_ARC = os.environ.get("PLAYLIST_ENERGY_ARC", "False").lower() == "true"
# --- Authentication ---
# Set all three to enable authentication. Leave any blank to disable (legacy mode).
AUDIOMUSE_USER = os.environ.get("AUDIOMUSE_USER", "")
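The two new settings follow the repo's existing env-override pattern: an `int` cast for the per-artist cap and a lowercase string compare for the boolean flag. A minimal sketch of how they resolve (the override value here is illustrative):

```python
import os

# Illustrative override, as a deployment would set via docker-compose.
os.environ["MAX_SONGS_PER_ARTIST_PLAYLIST"] = "3"

# Same resolution pattern as config.py: env var wins, else the default string.
max_per_artist = int(os.environ.get("MAX_SONGS_PER_ARTIST_PLAYLIST", "5"))
energy_arc = os.environ.get("PLAYLIST_ENERGY_ARC", "False").lower() == "true"

print(max_per_artist)  # 3
print(energy_arc)      # False (unset, so the default "False" applies)
```

One caveat of this pattern: any value other than exactly `true` (case-insensitive), including `1` or `yes`, leaves the flag disabled.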
8 changes: 8 additions & 0 deletions deployment/docker-compose-nvidia-local.yaml
@@ -29,6 +29,8 @@ services:
build:
context: ..
dockerfile: Dockerfile
args:
BASE_IMAGE: nvidia/cuda:12.8.1-cudnn-runtime-ubuntu24.04
image: audiomuse-ai:local-nvidia
container_name: audiomuse-ai-flask-app
ports:
@@ -52,6 +54,8 @@
OPENAI_MODEL_NAME: "${OPENAI_MODEL_NAME}"
GEMINI_API_KEY: "${GEMINI_API_KEY}"
MISTRAL_API_KEY: "${MISTRAL_API_KEY}"
OLLAMA_SERVER_URL: "${OLLAMA_SERVER_URL:-http://192.168.1.71:11434/api/generate}"
OLLAMA_MODEL_NAME: "${OLLAMA_MODEL_NAME:-qwen3:1.7b}"
CLAP_ENABLED: "${CLAP_ENABLED:-true}"
TEMP_DIR: "/app/temp_audio"
# Authentication (optional) – leave blank to disable
@@ -78,6 +82,8 @@
build:
context: ..
dockerfile: Dockerfile
args:
BASE_IMAGE: nvidia/cuda:12.8.1-cudnn-runtime-ubuntu24.04
image: audiomuse-ai:local-nvidia
container_name: audiomuse-ai-worker-instance
environment:
@@ -126,5 +132,7 @@
volumes:
  redis-data:
  postgres-data:
    external: true
    name: deployment_postgres-data
  temp-audio-flask:
  temp-audio-worker:
6 changes: 3 additions & 3 deletions tasks/analysis.py
@@ -881,7 +881,7 @@ def get_missing_mulan_track_ids(track_ids):
logger.info(f" - Other Features: {other_features}")

# Save MusiCNN score+embedding first (creates the 'score' row)
save_track_analysis_and_embedding(item['Id'], item['Name'], item.get('AlbumArtist', 'Unknown'), musicnn_analysis['tempo'], musicnn_analysis['key'], musicnn_analysis['scale'], top_moods, musicnn_embedding, energy=musicnn_analysis['energy'], other_features=other_features, album=item.get('Album', None))
save_track_analysis_and_embedding(item['Id'], item['Name'], item.get('AlbumArtist', 'Unknown'), musicnn_analysis['tempo'], musicnn_analysis['key'], musicnn_analysis['scale'], top_moods, musicnn_embedding, energy=musicnn_analysis['energy'], other_features=other_features, album=item.get('Album', None), album_artist=item.get('OriginalAlbumArtist', None), year=item.get('Year'), rating=item.get('Rating'), file_path=item.get('FilePath'))

# Save CLAP embedding AFTER score row exists (FK: clap_embedding.item_id → score.item_id)
if clap_embedding_for_track is not None and needs_clap:
@@ -1210,9 +1210,9 @@ def monitor_and_clear_jobs():
track_id_str = str(item['Id'])
try:
with get_db() as conn, conn.cursor() as cur:
cur.execute("UPDATE score SET album = %s WHERE item_id = %s", (album.get('Name'), track_id_str))
cur.execute("UPDATE score SET album = %s, album_artist = %s, year = %s, rating = %s, file_path = %s WHERE item_id = %s", (album.get('Name'), item.get('OriginalAlbumArtist'), item.get('Year'), item.get('Rating'), item.get('FilePath'), track_id_str))
conn.commit()
logger.info(f"[MainAnalysisTask] Updated album name for track '{item['Name']}' to '{album.get('Name')}' (main task)")
logger.info(f"[MainAnalysisTask] Updated album/album_artist/year/rating/file_path for track '{item['Name']}' to '{album.get('Name')}' (main task)")
except Exception as e:
logger.warning(f"[MainAnalysisTask] Failed to update album name for '{item['Name']}': {e}")
albums_skipped += 1
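Both write paths above stay idempotent: the save path upserts via `ON CONFLICT`, and the album sweep later overwrites the new metadata columns in place, so re-analysis never duplicates rows. The pattern can be sketched with SQLite standing in for the PR's PostgreSQL `score` table (schema trimmed to a few columns; the sample values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE score (
    item_id TEXT PRIMARY KEY, title TEXT,
    album_artist TEXT, year INTEGER, rating INTEGER)""")

upsert = """
INSERT INTO score (item_id, title, album_artist, year, rating)
VALUES (?, ?, ?, ?, ?)
ON CONFLICT (item_id) DO UPDATE SET
    title = excluded.title,
    album_artist = excluded.album_artist,
    year = excluded.year,
    rating = excluded.rating
"""
# First analysis pass: no metadata available yet.
cur.execute(upsert, ("t1", "Song", None, None, None))
# Re-analysis after the album sweep fills the new columns in place.
cur.execute(upsert, ("t1", "Song", "Some Artist", 2005, 4))

print(cur.execute("SELECT album_artist, year, rating FROM score").fetchone())
# ('Some Artist', 2005, 4)
```

SQLite uses the same `excluded.*` pseudo-table as PostgreSQL here, which is why the sketch transfers almost verbatim.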
5 changes: 4 additions & 1 deletion tasks/chat_manager.py
Owner:

You added Year and Rating to the prompt here, but a few lines above there is a contradictory line that says:

The database has NO YEAR COLUMN

The result is that a query like

Give me all the songs of year 2025

does not read the year from the database, even with gemini-2.5-flash, which is fast enough. Please refactor ALL the prompts to use this new data and have them prefer it over the "AI BRAINSTORMING" tool.

Contributor Author:

Thanks! My focus was completely on the data schema; I remember removing it there and completely missed it above. I'd be happy to spend some time looking at the complete prompt. I can make a similar test script, iterate through variations, and document performance across models.

Have you ever looked at using it for specific song suggestions? I know that Plexamp used that approach. I'd also like a killer prompt that suggests new albums.

Owner:

If you can improve the AI prompt by running multiple tests, that would be really appreciated. I have the impression that we are not using the Instant Playlist's potential at all.

I think the key idea is not using the AI as the source of information, but having it help you search the information you already have in your database (querying the database, running APIs, and so on).

Contributor Author:

I've been diving into this one in #311

Owner:

I don't understand: will PR #311 include this change, so that I can wait for it to be finished and test everything together?
Thanks!
@@ -1247,11 +1247,14 @@ def generate_final_sql_query(intent, strategy_info, found_artists, found_keyword
- item_id (text)
- title (text)
- author (text)
- album (text)
- album_artist (text)
- tempo (numeric 40-200)
- mood_vector (text, format: 'pop:0.8,rock:0.3')
- other_features (text, format: 'danceable:0.7,party:0.6')
- energy (numeric 0-0.15, higher = more energetic)
- **NOTE: NO YEAR OR DATE COLUMN EXISTS**
- year (integer, e.g. 2005, NULL if unknown)
- rating (integer 0-5, NULL if unrated, represents 5-star rating)
**PROGRESSIVE FILTERING STRATEGY - CRITICAL:**
The goal is to return EXACTLY {target_count} songs. Start with minimal filters and add more ONLY if needed.
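With `year` and `rating` restored to the schema description, the generated SQL can finally answer the reviewer's "songs of year 2025" complaint directly. A sketch of the kind of filter the model can now emit, run against a SQLite stand-in for the `score` table (sample rows and the 4-star threshold are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE score (
    item_id TEXT PRIMARY KEY, title TEXT, author TEXT,
    year INTEGER, rating INTEGER)""")
cur.executemany(
    "INSERT INTO score VALUES (?, ?, ?, ?, ?)",
    [("a", "Old Hit", "X", 1999, 5),
     ("b", "New Favourite", "Y", 2025, 4),
     ("c", "New But Unrated", "Z", 2025, None)])

# "Give me all the songs of year 2025" with at least a 4-star rating.
rows = cur.execute(
    "SELECT title FROM score WHERE year = 2025 AND rating >= 4"
).fetchall()
print(rows)  # [('New Favourite',)]
```

Note that `rating >= 4` silently drops NULL-rated tracks (SQL three-valued logic), which matches the schema note that NULL means unrated.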