Add ability to add multiple flight plans / operational intents #77

hrishiballal merged 5 commits into master
Conversation
Pull request overview
Adds new bulk submission endpoints to allow sending multiple flight plans / operational intents to Flight Blender in a single request (Issue #76), while refactoring the existing single-item endpoints to reuse shared validation/save and intersection-processing logic.
Changes:
- Refactors the single-item `set_flight_declaration` / `set_operational_intent` endpoints into shared helper functions and a batched `check_intersections` implementation.
- Adds bulk endpoints `set_flight_declarations_bulk` and `set_operational_intents_bulk`, plus URL routes.
- Introduces tests covering the refactored single-item endpoints.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| `flight_declaration_operations/views.py` | Refactors validation/save flow, updates intersection checking to support batches, and adds two new bulk POST endpoints. |
| `flight_declaration_operations/urls.py` | Exposes the new bulk endpoints via URL patterns. |
| `flight_declaration_operations/tests.py` | Adds regression tests for the refactored single-item endpoints. |
| `flight_declaration_operations/data_definitions.py` | Adds a dataclass for bulk submission responses. |
I will create new tests for this in the verification repo: https://github.com/openutm/verification/tree/bulk-add-declaration-operatoinal-intents
Also, the OpenAPI spec at https://github.com/openutm/flight-blender/blob/master/api/flight-blender-server-1.0.0-resolved.yaml needs to be updated manually.
I have now added openutm/verification#98 for this branch, and created openutm/verification#99 to do the same for operational intents.
```python
    state=default_state,
)
flight_declaration.save()
flight_declaration.add_state_history_entry(new_state=0, original_state=0, notes="Created Declaration")
```
The state history entry is created with hardcoded new_state=0 but the declaration is saved with state=default_state. When USSP_NETWORK_ENABLED=0, default_state=1, so the declaration is created with state=1 but the history entry claims it was created with state=0. This creates an inconsistency between the actual state and the state history.
The history entry should use new_state=default_state to accurately reflect the actual state the declaration was created with.
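The suggested fix is simply to pass `default_state` through to the history entry. A minimal stand-in sketch (the real `FlightDeclaration` is a Django model, and the default-state logic shown here is an assumption mirroring the `USSP_NETWORK_ENABLED=0` case described above):

```python
class FlightDeclaration:
    """Stand-in model illustrating only the call pattern under review."""

    def __init__(self, state):
        self.state = state
        self.history = []

    def add_state_history_entry(self, new_state, original_state, notes):
        self.history.append(
            {"new_state": new_state, "original_state": original_state, "notes": notes}
        )


USSP_NETWORK_ENABLED = 0
# Assumption: mirrors the view's behavior where default_state is 1 when
# the USSP network is disabled, 0 otherwise.
default_state = 1 if not USSP_NETWORK_ENABLED else 0

flight_declaration = FlightDeclaration(state=default_state)
# Use default_state instead of a hardcoded 0 so the history entry matches
# the state the declaration was actually created with:
flight_declaration.add_state_history_entry(
    new_state=default_state, original_state=default_state, notes="Created Declaration"
)
```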
```python
my_fd_rtree_helper = FlightDeclarationRTreeIndexFactory(index_name=FLIGHT_DECLARATION_INDEX_BASEPATH)
my_fd_rtree_helper.generate_flight_declaration_index(all_flight_declarations=declaration_list)
all_relevant_declarations = my_fd_rtree_helper.check_flight_declaration_box_intersection(view_box=view_box)
my_fd_rtree_helper.clear_rtree_index()
```
The FlightDeclaration RTree index is created and destroyed once per declaration in the batch (lines 224-227). For a batch of N declarations, this creates O(N) RTree indices. Since the approved_ids set grows dynamically, the queryset changes for each declaration, so the index must be rebuilt each time.
However, this could impact performance for large batches. Consider whether the overhead of creating N indices outweighs the benefits of the optimized intersection checking. If performance becomes an issue with large batches, consider documenting a recommended maximum batch size or exploring alternative approaches.
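One alternative worth exploring is to build the index once and insert each newly approved declaration into it incrementally, rather than rebuilding from scratch per item. The sketch below uses a dummy in-memory index to show the pattern; whether the real RTree factory exposes an incremental insert is an assumption:

```python
class DummyRTreeIndex:
    """Stand-in for the RTree helper; stores (id, box) pairs.

    Boxes are (min_x, min_y, max_x, max_y) tuples.
    """

    def __init__(self):
        self.entries = []

    def insert(self, declaration_id, box):
        self.entries.append((declaration_id, box))

    def intersecting(self, view_box):
        def overlaps(a, b):
            return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

        return [i for i, box in self.entries if overlaps(box, view_box)]


index = DummyRTreeIndex()  # built once for the whole batch
batch = [("fd-1", (0, 0, 2, 2)), ("fd-2", (5, 5, 6, 6))]
approved = []
for declaration_id, box in batch:
    conflicts = index.intersecting(box)
    if not conflicts:
        # Approved: add to the shared index incrementally, so later items
        # in the batch see it without an O(N) rebuild per declaration.
        index.insert(declaration_id, box)
        approved.append(declaration_id)
```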
```diff
 else:
-    if declaration_state == 0 and USSP_NETWORK_ENABLED:
+    if declaration_state == 0 and ussp_network_enabled:
         submit_flight_declaration_to_dss_async.delay(flight_declaration_id=flight_declaration_id)
```
The DSS submission is only triggered when BOTH the declaration state is 0 AND ussp_network_enabled is truthy. If the declaration was rejected (is_approved=False), declaration_state will be 8, so the condition correctly does not trigger. There is a subtle point, however: the condition sits inside the else block that only executes when NOT (all_relevant_fences AND all_relevant_declarations). This means that an approved declaration with no fences or declarations to check against still attempts DSS submission. That appears to be intentional behavior, but the nesting could be clearer.
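One way to make that intent explicit is to flatten the nesting into guard clauses. This is only a sketch: the names mirror the diff, while `has_conflicts` and the `submit` callback are hypothetical stand-ins for the fence/declaration check and the Celery task:

```python
def maybe_submit_to_dss(declaration_state, ussp_network_enabled, has_conflicts, submit):
    """Submit to the DSS only for accepted (state 0) declarations with no
    detected conflicts, and only when the USSP network is enabled."""
    if has_conflicts:
        return False
    if declaration_state != 0:
        return False
    if not ussp_network_enabled:
        return False
    submit()
    return True
```

Each guard names one reason not to submit, so the "no conflicts found" path that currently lives in an `else` branch becomes impossible to misread.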
```python
# Phase 1: validate and save all declarations
saved: dict[int, FlightDeclaration] = {}
results: list[dict] = []
failed_count = 0

for idx, item in enumerate(flight_declarations_list):
    try:
        flight_declaration, error = _validate_and_save_flight_declaration(item, default_state)
        if error or flight_declaration is None:
            failed_count += 1
            error = error or {"message": "Unknown error"}
            results.append({"index": idx, "success": False, "message": error.get("message", "Validation error"), "errors": error.get("errors")})
        else:
            saved[idx] = flight_declaration
    except Exception as e:
        logger.error(f"Error validating flight declaration at index {idx}: {e}")
        failed_count += 1
        results.append({"index": idx, "success": False, "message": str(e)})

# Phase 2: check all intersections at once, then process results
validator = FlightDeclarationRequestValidator()
intersection_results = validator.check_intersections(list(saved.values()), ussp_network_enabled)

submitted_count = 0
for idx, flight_declaration in saved.items():
    fd_id = str(flight_declaration.id)
    creation_response = _process_intersection_result(
        flight_declaration, intersection_results[fd_id], ussp_network_enabled
    )
    submitted_count += 1
    results.append(
        {
            "index": idx,
            "success": True,
            "id": creation_response.id,
            "is_approved": creation_response.is_approved,
            "state": creation_response.state,
        }
    )

results.sort(key=lambda r: r["index"])

bulk_response = BulkFlightDeclarationCreateResponse(
    submitted=submitted_count,
    failed=failed_count,
    results=results,
)
http_status = 200 if failed_count == 0 else 207
return HttpResponse(
    json.dumps(asdict(bulk_response)),
    status=http_status,
    content_type=RESPONSE_CONTENT_TYPE,
)
```
The bulk endpoints lack database transaction handling. Declarations are saved individually in Phase 1 without an atomic transaction wrapper. If an exception occurs during Phase 2 (intersection checking or notification sending), the database will contain partially processed declarations with inconsistent states.
Consider wrapping the entire operation in a transaction.atomic() block, or at minimum wrap Phase 1 so that either all validations succeed and all declarations are saved, or none are saved. This would provide better consistency guarantees and simplify error recovery.
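In Django that would mean wrapping the Phase 1 save loop in `with transaction.atomic():`. The all-or-nothing behavior being asked for can be illustrated with the stdlib `sqlite3` module, which gives real transactions without a Django setup (a sketch only, not the project's actual persistence code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flight_declaration (id TEXT PRIMARY KEY)")

# A duplicate id triggers a failure partway through the batch.
declarations = ["fd-1", "fd-2", "fd-1"]

# Phase 1 wrapped in a single transaction: if any save fails, none persist.
try:
    with conn:  # commits on success, rolls back on exception
        for fd in declarations:
            conn.execute("INSERT INTO flight_declaration (id) VALUES (?)", (fd,))
except sqlite3.IntegrityError:
    pass  # the whole batch was rolled back, leaving the table unchanged

count = conn.execute("SELECT COUNT(*) FROM flight_declaration").fetchone()[0]
```

Note the trade-off: a fully atomic Phase 1 changes the endpoint's semantics from "save what validates, report the rest" to "reject the whole batch on any failure", so the 207 partial-success contract would need revisiting if this is adopted.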
```diff
     is_approved=True,
     start_datetime=start_datetime,
     end_datetime=end_datetime,
     originating_party=originating_party,
     flight_declaration_raw_geojson=json.dumps(flight_declaration_geo_json),
-    state=declaration_state,
+    state=default_state,
 )
 flight_declaration.save()
 flight_declaration.add_state_history_entry(new_state=0, original_state=0, notes="Created Declaration")
```
Declarations are initially saved with is_approved=True and state=default_state before intersection checking occurs. If the declaration is later rejected, _process_intersection_result updates these fields. This creates a brief window where rejected declarations appear as approved in the database.
While this is necessary for batch processing (so declarations can see each other), it could cause race conditions if other processes query for approved declarations during this window. Consider documenting this behavior or adding a transitional state to indicate "pending intersection check" to make the workflow clearer.
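A transitional state could make that window explicit. The sketch below is hypothetical: only states 0, 1, and 8 are mentioned in this PR, and the `PENDING_CHECK` value is invented for illustration:

```python
from enum import IntEnum


class DeclarationState(IntEnum):
    # 0, 1 and 8 appear in this PR; PENDING_CHECK is a hypothetical
    # transitional value meaning "saved but not yet intersection-checked".
    ACCEPTED = 0
    DEFAULT = 1
    REJECTED = 8
    PENDING_CHECK = 99


def initial_state():
    # Save with the transitional state so other processes can exclude
    # declarations still awaiting their intersection check.
    return DeclarationState.PENDING_CHECK


def finalize(is_approved, default_state):
    # After the batch intersection check, move to the real state.
    return default_state if is_approved else DeclarationState.REJECTED
```

Queries for "approved" declarations would then filter out `PENDING_CHECK` rows, closing the race window described above without a transaction.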
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Implement #76