diff --git a/.gitignore b/.gitignore index 3a17345..d601d32 100644 --- a/.gitignore +++ b/.gitignore @@ -12,15 +12,22 @@ .coverage # Python egg metadata, regenerated from source files by setuptools -/*.egg-info -/*.egg +*.egg-info/ +.eggs/ +*.egg # Temporary jupyter files /.ipynb_checkpoints/ *.ipynb_checkpoints # Output folder -/outputs/ +outputs/ + +# Folder with original Matlab code +/Matlab_code/ + +.ai/ +Comparison_scripts/ # All new files coming out of buildInputs inputs/example_inputs/*/optional_inputs.json diff --git a/README.md b/README.md index f5a26c0..0e7f15d 100644 --- a/README.md +++ b/README.md @@ -2,52 +2,73 @@ This is translation of Matlab codebase into Python for quantifying building-specific functional recovery and reoccupancy based on a probabilistic performance-based earthquake engineering framework. ## Requirements -- The `requirements.txt` file defines the Python package dependencies required to run this codebase. Follow the instructions below to install all required depenedancies listed in the 'requirements.txt' file. -- Recommended Python version: `3.9` (the codebase was developed and tested with Python 3.9). -Installation (using a virtual environment is recommended): +- **Python Version**: 3.9 or later (3.9 recommended) +- **Package Manager**: pip (comes with Python) -```powershell -# create a virtual environment +### Installation + +The ATC-138 Functional Recovery Assessment tool is distributed as a Python package. Install it using pip: + + +```bash +# Create and activate a virtual environment (recommended) python -m venv .venv -# activate the virtual environment (PowerShell) +# Activate virtual environment +# On Windows (PowerShell): .\.venv\Scripts\Activate.ps1 +# On macOS/Linux: +source .venv/bin/activate + +# Install the package in editable mode +pip install -e . 
+``` -# upgrade pip (optional but recommended) -python -m pip install --upgrade pip -# install dependencies from requirements.txt -pip install -r requirements.txt +### Verify Installation + +After installation, verify that the CLI is available: + +```bash +atc138 --help ``` -If you prefer conda: +You should see the command help output with available options. + +## Running an Assessment + +An assessment can be run directly from the command line or from within a Python workflow. If `simulated_inputs.json` does not exist, it will be created using default inputs within `src/atc138/data`. Various assessment options can be overridden by placing an `optional_inputs.json` file within the input directory. This file can be customized for each assessment if desired; any options not specified there fall back to their default values. + +### Running from the command line + +With the input directory containing the necessary inputs, perform an assessment by running: ```bash -conda create -n frec python=3.9 -conda activate frec -pip install -r requirements.txt +python -m atc138.cli dir/to/inputs dir/to/outputs ``` -If you run into platform-specific dependency issues, please refer to the package error messages and install any missing system libraries before re-running `pip install -r requirements.txt`. -Original Matlab code is from Dr. Dustin Cook's Github directory https://github.com/OpenPBEE/PBEE-Recovery. +For example, the ICSB example case is run with: + +```bash +python -m atc138.cli ./examples/ICSB ./examples/ICSB/output ```
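The override behavior described above amounts to combining the user's `optional_inputs.json` with the packaged defaults. A minimal sketch of one way such overrides can be layered over defaults (the recursive `deep_merge` helper is illustrative, not the package's actual implementation; the sample values mirror entries in the packaged `default_inputs.json`):

```python
def deep_merge(defaults, overrides):
    """Return a copy of `defaults` with `overrides` applied, recursing into nested dicts."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


# Defaults as in the packaged default_inputs.json
defaults = {"repair_time_options": {"max_workers_building_min": 20, "allow_shoring": 1}}
# A hypothetical user override placed in optional_inputs.json
overrides = {"repair_time_options": {"max_workers_building_min": 10}}

print(deep_merge(defaults, overrides)["repair_time_options"])
```

Options present in the override file win; everything else keeps its default value.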
The method defines the recovery of function and occupancy at the tenant unit level, where a building can be made up of one-to-many tenant units, each with a possible unique set of requirements to regain building function; the recovery state of the building is defined as an aggregation of all the tenant units within the building. The method propagates uncertainty through the assessment using a Monte Carlo simulation. Details of the method are fully described in Cook, Liel, Haselton, and Koliou, 2022. "A Framework for Operationalizing the Assessment of Post Earthquake Functional Recovery of Buildings", Earthquake Spectra. +### Running from a Python script -### Implementation Details -The method is developed as part of the consequence module of the Performance-Based Earthquake Engineering framework and uses simulations of component damage from the FEMA P-58 method as an fundamental input. Therefore, this implementation will not perform a FEMA P-58 assessment, and instead, expects the simulations of component damage, from a FEMA P-58 assessment to be provided as inputs. Along with other information about the building, the buildings tenant units, and some analysis options, this implementation will perform the functional recovery assessment method, and provide simulated recovery times for each realization provided. The implementation runs an assessment for a single building at a single intensity level. The implementation of the method does not handle demo and replace conditions and predicts building function based on component damage simulation and recovery times assuming damage will be repaired in-kind. Building failure, demo, and replacement conditions can be handled as a post-process by either overwriting realizations where global failure occurs or only inputting realizations that are scheduled for repair. +Ensure that the repository root (the directory containing `src/`) is on the Python path of the main script.
Then: -The method is employs Python v 3.9; running this implementation using other versions of Python may not perform as expected. +```python +from src.atc138 import driver -## Running an Assessment - - **Step 1**: Build the inputs json file of simulated inputs. Title the file "simulated_inputs.json" and place it in a directory of the model name within the "inputs" drirectory. This json data file can either be constructed manually following the inputs schema or using the build script as discussed in the _Building the Inputs File section_ below. - - **Step 2**: Open the Python file "driver_PBEErecovery.py" and set the "model_name", "model_dir", and "outputs_dir" variables. - - **Step 3**: Run the script - - **Step 4**: Simulated assessment outputs will be saved as a json file in a directory of your choice +example_dir = './examples/ICSB' +output_dir = './examples/ICSB/output' + +driver.run_analysis(example_dir, output_dir, seed=985) +``` ## Example Inputs -Four example inputs are provided to help illustrate both the construction of the inputs file and the implementation. These files are located in the inputs/example_inputs directory and can be run through the assessment by setting the variable names accordingly in **step 2** above. +Four example inputs are provided to help illustrate both the construction of the inputs file and the implementation. These files are located in the `examples/` directory and can be run through the assessment by setting the variable names accordingly above. ## Definition of I/O A brief description of the various input and output variables are provided below. A detailed schema of all expected input and output subfields is provided in the schema directory. 
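The import `from src.atc138 import driver` resolves because `src` acts as an implicit namespace package (PEP 420), so the directory that must be on `sys.path` is the one *containing* `src/` (the repository root), not `src/` itself. A self-contained sketch of that mechanism using a throwaway stand-in layout (`demo_src`, `demo_pkg`, and its `driver` module are hypothetical names standing in for `src` and `atc138`, chosen to avoid clashing with anything real on the path):

```python
import sys
import tempfile
from pathlib import Path

# Fabricate a repo-like layout: <root>/demo_src/demo_pkg/driver.py
root = Path(tempfile.mkdtemp())
pkg = root / "demo_src" / "demo_pkg"
pkg.mkdir(parents=True)
(pkg / "driver.py").write_text("def run_analysis(*args, **kwargs):\n    return 'ok'\n")

# Put the *root* (not demo_src/ itself) on the path; demo_src then
# resolves as an implicit namespace package with no __init__.py needed
sys.path.insert(0, str(root))
from demo_src.demo_pkg import driver

print(driver.run_analysis())
```

The same reasoning applies to the real layout: with the repository root on the path, `src.atc138` is importable; with the package pip-installed, `atc138` can be imported directly.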
@@ -80,18 +101,8 @@ A brief description of the various input and output variables are provided below - **functionality['impeding_factors']**: Python dictionary containing the simulated impeding factors delaying the start of system repair -## Building the Inputs File -Instead of manually defining the inputs matlab data file based on the inputs schema, the inputs file can be built from a simpler set of building inputs, taking advantage of default assessment assumptions and component, system, and tenant attributes contained within the _static_tables_ directory. - -### Instructions - - **Step 1**: Copy the scripts build_inputs.py and optional_inputs.py from the _Inputs2Copy_ directory to the directory where you want to build the simulated_inputs.json inputs file - - **Step 2**: Add the requried building specific input files listed below to the same directory - - **Step 3**: Modify the optional_inputs.py file as needed and run it before running the build_inputs.py file - - **Step 4**: Make sure the diectory for the static data tables in build_inputs.py is correctly pointing to the location of the _static_tables_ directory under the heading # Load required data tables - - **Step 5**: Run the build script - -#### Option for Customizing Static Data -If you would like to modify the static data tables listed below for a specifc model, simply copy the static data tables listed below to the build script directory, modify the files, and specifiy the path to the location of the modified files (same directory as the build script). +## Manually Building the Inputs File +By default, the inputs file is built from a simpler set of building inputs, taking advantage of default assessment assumptions and component, system, and tenant attributes contained within the _data_ directory. If you would like to manually modify the data tables listed below for a specific model, simply copy the files to the input directory and modify them.
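The customization step described above is just a file copy followed by an edit. A minimal self-contained sketch with stand-in temporary paths (in a real run the source would be the package's _data_ directory and the destination your analysis input directory):

```python
import shutil
import tempfile
from pathlib import Path

# Stand-ins for the packaged data directory and an analysis input directory
data_dir = Path(tempfile.mkdtemp())
input_dir = Path(tempfile.mkdtemp())

# Fabricate a tiny stand-in for one of the bundled tables
(data_dir / "subsystems.csv").write_text("id,name\n1,hvac\n")

# Copy the table next to the building-specific inputs; the local copy
# is the one to modify for a specific model
local = shutil.copy(data_dir / "subsystems.csv", input_dir / "subsystems.csv")
print(Path(local).name)
```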
### Required Building Specific Data Each file listed below contains data specific to the building performance model and simulated damage given for a specific level of shaking. Each file listed will need to be created for each unique assessment and saved in the root directory of the build script. Data are contained in either json or csv format. @@ -101,7 +112,7 @@ Each file listed below contains data specific to the building performance model - story: [int] building story where this tenant unit is located (ground floor is listed at 1) - area: [number] total gross plan area of the tenant unit, in square feet - perim_area: [number] total exterior perimeter area (elevation) of the tenant unit, is square feet - - occupancy_id: [int] foreign key to the _occupancy_id_ attribute of the tenant_function_requirements.csv table in the _static_tables_ directory + - occupancy_id: [int] foreign key to the _occupancy_id_ attribute of the tenant_function_requirements.csv table in the _data_ directory - **comp_ds_list.csv**: Table that lists each component and damage state populated in the building performance model; one row per each component's damage state. This table requires the following attributes: - comp_id: [string] unique FEMA P-58 component identifier - ds_seq_id: [int] interger index of the sequential parent damage state (i.e., damage state 1, 2, 3, 4); @@ -117,15 +128,17 @@ Each file listed below contains data specific to the building performance model - tenant_unit{tu}.num_comps: [array: 1 × damage states] The total number of components associated with each damage state (should be uniform for damage state of the same component stack). ### Optional Building Specific Data -The file(s) listed below contain data that is optional for the assessment. If the files do not exist, the method will make simplifying assumptions to account for the missing data (as noted below). Save in the root directory of the build script. 
+The file(s) listed below contain data that is optional for the assessment. If the files do not exist, the method will make simplifying assumptions to account for the missing data (as noted below). Save in the input directory of your analysis. - **utility_downtime.json**: Regional utility simulated downtimes for gas, water, and electrical power networks. Contains all variables within the _functionality['utilities']_ dictionary defined in the inputs schema. ### Default Optional Inputs -The Python file listed below defines additional assessment inputs based on set of default values. Copy the file from the _Inputs2Copy_ directory, place it in the root directory of the build script, and modify it as you see if (or build the script programmatically) - - **optional_inputs.py**: Defines default variables for the impedance_options, repair_time_options, functionality_options, and regional_impact variables listed in the inputs schema. +The JSON file listed below defines additional assessment inputs based on a set of default values. Place this file in the input directory of your analysis. + - **optional_inputs.json**: Defines default variables for the impedance_options, repair_time_options, functionality_options, and regional_impact variables listed in the inputs schema. + + ### Static Data -The csv tables listed below contain default component, damage state, system, and tenant function attributes that can be used to populate the required assessment inputs according to the methodology. Either in build_inputs.py point to the location of these tables in the _static_tables_ directory, or copy and modify them as you see fit and place them in the root directory of the build script. +The csv tables listed below contain default component, damage state, system, and tenant function attributes that can be used to populate the required assessment inputs according to the methodology.
By default, `input_builder.py` reads these tables from the packaged _data_ directory; to customize them, copy the tables you want to change into the input directory and modify them as you see fit. - **component_attributes.csv**: Attributes of components in the FEMA P-58 fragility database that are required for the functional recovery assessment. - **damage_state_attribute_mapping.csv**: Attributes of damage state in the FEMA P-58 fragility database and their affect on function and reoccupancy. - **subsystems.csv**: Attributes of each default subsystem considered in the method. diff --git a/inputs/example_inputs/ICSB/building_model.json b/examples/ICSB/building_model.json similarity index 100% rename from inputs/example_inputs/ICSB/building_model.json rename to examples/ICSB/building_model.json diff --git a/inputs/example_inputs/ICSB/comp_ds_list.csv b/examples/ICSB/comp_ds_list.csv similarity index 100% rename from inputs/example_inputs/ICSB/comp_ds_list.csv rename to examples/ICSB/comp_ds_list.csv diff --git a/inputs/example_inputs/ICSB/comp_population.csv b/examples/ICSB/comp_population.csv similarity index 100% rename from inputs/example_inputs/ICSB/comp_population.csv rename to examples/ICSB/comp_population.csv diff --git a/inputs/example_inputs/ICSB/damage_consequences.json b/examples/ICSB/damage_consequences.json similarity index 100% rename from inputs/example_inputs/ICSB/damage_consequences.json rename to examples/ICSB/damage_consequences.json diff --git a/inputs/example_inputs/ICSB/simulated_damage.json b/examples/ICSB/simulated_damage.json similarity index 100% rename from inputs/example_inputs/ICSB/simulated_damage.json rename to examples/ICSB/simulated_damage.json diff --git a/inputs/example_inputs/ICSB/tenant_unit_list.csv b/examples/ICSB/tenant_unit_list.csv similarity index 100% rename from inputs/example_inputs/ICSB/tenant_unit_list.csv rename to examples/ICSB/tenant_unit_list.csv diff --git a/inputs/example_inputs/RCSW_1story/building_model.json 
b/examples/RCSW_1story/building_model.json similarity index 100% rename from inputs/example_inputs/RCSW_1story/building_model.json rename to examples/RCSW_1story/building_model.json diff --git a/inputs/example_inputs/RCSW_1story/comp_ds_list.csv b/examples/RCSW_1story/comp_ds_list.csv similarity index 100% rename from inputs/example_inputs/RCSW_1story/comp_ds_list.csv rename to examples/RCSW_1story/comp_ds_list.csv diff --git a/inputs/example_inputs/RCSW_1story/comp_population.csv b/examples/RCSW_1story/comp_population.csv similarity index 100% rename from inputs/example_inputs/RCSW_1story/comp_population.csv rename to examples/RCSW_1story/comp_population.csv diff --git a/inputs/example_inputs/RCSW_1story/damage_consequences.json b/examples/RCSW_1story/damage_consequences.json similarity index 100% rename from inputs/example_inputs/RCSW_1story/damage_consequences.json rename to examples/RCSW_1story/damage_consequences.json diff --git a/inputs/example_inputs/RCSW_1story/simulated_damage.json b/examples/RCSW_1story/simulated_damage.json similarity index 100% rename from inputs/example_inputs/RCSW_1story/simulated_damage.json rename to examples/RCSW_1story/simulated_damage.json diff --git a/inputs/example_inputs/RCSW_1story/tenant_unit_list.csv b/examples/RCSW_1story/tenant_unit_list.csv similarity index 100% rename from inputs/example_inputs/RCSW_1story/tenant_unit_list.csv rename to examples/RCSW_1story/tenant_unit_list.csv diff --git a/inputs/example_inputs/haseltonRCMF_12story/building_model.json b/examples/haseltonRCMF_12story/building_model.json similarity index 100% rename from inputs/example_inputs/haseltonRCMF_12story/building_model.json rename to examples/haseltonRCMF_12story/building_model.json diff --git a/inputs/example_inputs/haseltonRCMF_12story/comp_ds_list.csv b/examples/haseltonRCMF_12story/comp_ds_list.csv similarity index 100% rename from inputs/example_inputs/haseltonRCMF_12story/comp_ds_list.csv rename to 
examples/haseltonRCMF_12story/comp_ds_list.csv diff --git a/inputs/example_inputs/haseltonRCMF_12story/comp_population.csv b/examples/haseltonRCMF_12story/comp_population.csv similarity index 100% rename from inputs/example_inputs/haseltonRCMF_12story/comp_population.csv rename to examples/haseltonRCMF_12story/comp_population.csv diff --git a/inputs/example_inputs/haseltonRCMF_12story/damage_consequences.json b/examples/haseltonRCMF_12story/damage_consequences.json similarity index 100% rename from inputs/example_inputs/haseltonRCMF_12story/damage_consequences.json rename to examples/haseltonRCMF_12story/damage_consequences.json diff --git a/inputs/example_inputs/haseltonRCMF_12story/simulated_damage.json b/examples/haseltonRCMF_12story/simulated_damage.json similarity index 100% rename from inputs/example_inputs/haseltonRCMF_12story/simulated_damage.json rename to examples/haseltonRCMF_12story/simulated_damage.json diff --git a/inputs/example_inputs/haseltonRCMF_12story/tenant_unit_list.csv b/examples/haseltonRCMF_12story/tenant_unit_list.csv similarity index 100% rename from inputs/example_inputs/haseltonRCMF_12story/tenant_unit_list.csv rename to examples/haseltonRCMF_12story/tenant_unit_list.csv diff --git a/inputs/example_inputs/haseltonRCMF_4story/building_model.json b/examples/haseltonRCMF_4story/building_model.json similarity index 100% rename from inputs/example_inputs/haseltonRCMF_4story/building_model.json rename to examples/haseltonRCMF_4story/building_model.json diff --git a/inputs/example_inputs/haseltonRCMF_4story/comp_ds_list.csv b/examples/haseltonRCMF_4story/comp_ds_list.csv similarity index 100% rename from inputs/example_inputs/haseltonRCMF_4story/comp_ds_list.csv rename to examples/haseltonRCMF_4story/comp_ds_list.csv diff --git a/inputs/example_inputs/haseltonRCMF_4story/comp_population.csv b/examples/haseltonRCMF_4story/comp_population.csv similarity index 100% rename from inputs/example_inputs/haseltonRCMF_4story/comp_population.csv rename 
to examples/haseltonRCMF_4story/comp_population.csv diff --git a/inputs/example_inputs/haseltonRCMF_4story/damage_consequences.json b/examples/haseltonRCMF_4story/damage_consequences.json similarity index 100% rename from inputs/example_inputs/haseltonRCMF_4story/damage_consequences.json rename to examples/haseltonRCMF_4story/damage_consequences.json diff --git a/inputs/example_inputs/haseltonRCMF_4story/simulated_damage.json b/examples/haseltonRCMF_4story/simulated_damage.json similarity index 100% rename from inputs/example_inputs/haseltonRCMF_4story/simulated_damage.json rename to examples/haseltonRCMF_4story/simulated_damage.json diff --git a/inputs/example_inputs/haseltonRCMF_4story/tenant_unit_list.csv b/examples/haseltonRCMF_4story/tenant_unit_list.csv similarity index 100% rename from inputs/example_inputs/haseltonRCMF_4story/tenant_unit_list.csv rename to examples/haseltonRCMF_4story/tenant_unit_list.csv diff --git a/inputs/Inputs2Copy/optional_inputs.py b/inputs/Inputs2Copy/optional_inputs.py deleted file mode 100644 index 34cbebe..0000000 --- a/inputs/Inputs2Copy/optional_inputs.py +++ /dev/null @@ -1,105 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Code for generating optional_inputs.json file - -""" - -import json -optional_inputs = { -# impedance Options -"impedance_options" : { - - -"include_impedance": { - "inspection" : True, - "financing" : True, - "permitting" : True, - "engineering" : True, - "contractor" : True, - "long_lead" : False - }, -"system_design_time" : { - "f" : 0.04, - "r" : 200, - "t" : 1.3, - "w" : 8 - }, -"eng_design_min_days" : 14, -"eng_design_max_days" : 365, -"mitigation" : { - "is_essential_facility" : False, - "is_borp_equivalent" : False, - "is_engineer_on_retainer" : False, - "contractor_relationship" : 'good', - "contractor_retainer_time" : 3, - "funding_source" : 'private', - "capital_available_ratio" : 0.02 - }, -"impedance_beta" : 0.6, -"impedance_truncation" : 2, -"default_lead_time" : 182, -"demand_surge": { - 
"include_surge" : 1, - "is_dense_urban_area" : 1, - "site_pga" : 1, - "pga_de": 1 - }, - -"scaffolding_lead_time" : 5, -"scaffolding_erect_time" : 2, -"door_racking_repair_day" : 3, -"flooding_cleanup_day" : 5, -"flooding_repair_day" : 90 - }, - - -# Repir Schedule Options -"repair_time_options" : { - - "max_workers_per_sqft_story" : 0.001, - "max_workers_per_sqft_story_temp_repair" : 0.005, - "max_workers_per_sqft_building" : 0.00025, - "max_workers_building_min" : 20, - "max_workers_building_max" : 260, - "allow_tmp_repairs" : 1, - "allow_shoring" : 1 - }, - -# Functionality Assessment Options -"functionality_options" : { - -"calculate_red_tag" : 1, -"red_tag_clear_time" : 7, -"red_tag_clear_beta" : 0.6, -"red_tag_options" : { - "tag_coupling_beams_over_height" : True, - "ignore_coupling_beam_for_red_tag" : False - }, -"include_local_stability_impact" : 1, -"include_flooding_impact": 1, -"egress_threshold" : 0.5, -"fire_watch" : True, -"local_fire_damamge_threshold" : 0.25, -"min_egress_paths" : 2, -"exterior_safety_threshold" : 0.1, -"interior_safety_threshold" : 0.25, -"door_access_width_ft" : 9, -"habitability_requirements": { - "electrical" : 0, - "water_potable" : 0, - "water_sanitary" : 0, - "hvac_ventilation" : 0, - "hvac_heating" : 0, - "hvac_cooling" : 0, - "hvac_exhaust" : 0 - }, -"water_pressure_max_story" : 4, -"heat_utility" : 'gas', - } - - } - - -with open("optional_inputs.json", "w") as outfile: - json.dump(optional_inputs, outfile) - diff --git a/pyproject.toml b/pyproject.toml new file mode 100644 index 0000000..0fdd6fe --- /dev/null +++ b/pyproject.toml @@ -0,0 +1,36 @@ +[build-system] +requires = ["setuptools>=61.0", "wheel"] +build-backend = "setuptools.build_meta" + +[project] +name = "atc138" +version = "0.1.0" +description = "Functional Recovery Assessment (ATC-138)" +readme = "README.md" +requires-python = ">=3.9" +license = {file = "LICENSE"} +authors = [ + {name = "Dustin Cook", email = "dustin.cook@nist.gov"}, +] +classifiers = [ + 
"Programming Language :: Python :: 3", + "License :: OSI Approved :: BSD License", + "Operating System :: OS Independent", +] +dependencies = [ + "numpy", + "pandas", + "scipy", + "matplotlib", + "seaborn", +] + +[project.scripts] +atc138 = "atc138.cli:main" + +[project.urls] +"Homepage" = "https://github.com/NHERI-SimCenter/Functional-Recovery-Python" +"Bug Tracker" = "https://github.com/NHERI-SimCenter/Functional-Recovery-Python/issues" + +[tool.setuptools.packages.find] +where = ["src"] diff --git a/src/atc138/__init__.py b/src/atc138/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/src/atc138/cli.py b/src/atc138/cli.py new file mode 100644 index 0000000..5934609 --- /dev/null +++ b/src/atc138/cli.py @@ -0,0 +1,26 @@ +import argparse +import sys +import os +from .driver import run_analysis + +def main(): + parser = argparse.ArgumentParser(description="Run ATC-138 Functional Recovery Assessment") + parser.add_argument("input_dir", help="Path to the directory containing input files (e.g., simulated_inputs.json)") + parser.add_argument("output_dir", help="Path to the directory where outputs will be saved") + parser.add_argument("--seed", type=int, help="Random seed for reproducibility", default=None) + + args = parser.parse_args() + + # Validate inputs + if not os.path.isdir(args.input_dir): + print(f"Error: Input directory '{args.input_dir}' does not exist.", file=sys.stderr) + sys.exit(1) + + try: + run_analysis(args.input_dir, args.output_dir, seed=args.seed) + except Exception as e: + print(f"Error running analysis: {e}", file=sys.stderr) + sys.exit(1) + +if __name__ == "__main__": + main() diff --git a/static_tables/README.md b/src/atc138/data/README.md similarity index 100% rename from static_tables/README.md rename to src/atc138/data/README.md diff --git a/static_tables/component_attributes.csv b/src/atc138/data/component_attributes.csv similarity index 100% rename from static_tables/component_attributes.csv rename to 
src/atc138/data/component_attributes.csv diff --git a/static_tables/damage_state_attribute_mapping.csv b/src/atc138/data/damage_state_attribute_mapping.csv similarity index 100% rename from static_tables/damage_state_attribute_mapping.csv rename to src/atc138/data/damage_state_attribute_mapping.csv diff --git a/src/atc138/data/default_inputs.json b/src/atc138/data/default_inputs.json new file mode 100644 index 0000000..1a7d4e6 --- /dev/null +++ b/src/atc138/data/default_inputs.json @@ -0,0 +1,81 @@ +{ + "impedance_options": { + "include_impedance": { + "inspection": true, + "financing": true, + "permitting": true, + "engineering": true, + "contractor": true, + "long_lead": false + }, + "system_design_time": { + "f": 0.04, + "r": 200, + "t": 1.3, + "w": 8 + }, + "eng_design_min_days": 14, + "eng_design_max_days": 365, + "mitigation": { + "is_essential_facility": false, + "is_borp_equivalent": false, + "is_engineer_on_retainer": false, + "contractor_relationship": "good", + "contractor_retainer_time": 3, + "funding_source": "private", + "capital_available_ratio": 0.02 + }, + "impedance_beta": 0.6, + "impedance_truncation": 2, + "default_lead_time": 182, + "demand_surge": { + "include_surge": 1, + "is_dense_urban_area": 1, + "site_pga": 1, + "pga_de": 1 + }, + "scaffolding_lead_time": 5, + "scaffolding_erect_time": 2, + "door_racking_repair_day": 3, + "flooding_cleanup_day": 5, + "flooding_repair_day": 90 + }, + "repair_time_options": { + "max_workers_per_sqft_story": 0.001, + "max_workers_per_sqft_story_temp_repair": 0.005, + "max_workers_per_sqft_building": 0.00025, + "max_workers_building_min": 20, + "max_workers_building_max": 260, + "allow_tmp_repairs": 1, + "allow_shoring": 1 + }, + "functionality_options": { + "calculate_red_tag": 1, + "red_tag_clear_time": 7, + "red_tag_clear_beta": 0.6, + "red_tag_options": { + "tag_coupling_beams_over_height": true, + "ignore_coupling_beam_for_red_tag": false + }, + "include_local_stability_impact": 1, + 
"include_flooding_impact": 1, + "egress_threshold": 0.5, + "fire_watch": true, + "local_fire_damamge_threshold": 0.25, + "min_egress_paths": 2, + "exterior_safety_threshold": 0.1, + "interior_safety_threshold": 0.25, + "door_access_width_ft": 9, + "habitability_requirements": { + "electrical": 0, + "water_potable": 0, + "water_sanitary": 0, + "hvac_ventilation": 0, + "hvac_heating": 0, + "hvac_cooling": 0, + "hvac_exhaust": 0 + }, + "water_pressure_max_story": 4, + "heat_utility": "gas" + } +} \ No newline at end of file diff --git a/static_tables/impeding_factors.csv b/src/atc138/data/impeding_factors.csv similarity index 100% rename from static_tables/impeding_factors.csv rename to src/atc138/data/impeding_factors.csv diff --git a/static_tables/subsystems.csv b/src/atc138/data/subsystems.csv similarity index 100% rename from static_tables/subsystems.csv rename to src/atc138/data/subsystems.csv diff --git a/static_tables/systems.csv b/src/atc138/data/systems.csv similarity index 100% rename from static_tables/systems.csv rename to src/atc138/data/systems.csv diff --git a/static_tables/temp_repair_class.csv b/src/atc138/data/temp_repair_class.csv similarity index 100% rename from static_tables/temp_repair_class.csv rename to src/atc138/data/temp_repair_class.csv diff --git a/static_tables/tenant_function_requirements.csv b/src/atc138/data/tenant_function_requirements.csv similarity index 100% rename from static_tables/tenant_function_requirements.csv rename to src/atc138/data/tenant_function_requirements.csv diff --git a/driver_PBEE_recovery.py b/src/atc138/driver.py similarity index 66% rename from driver_PBEE_recovery.py rename to src/atc138/driver.py index 62b9f1d..651e631 100644 --- a/driver_PBEE_recovery.py +++ b/src/atc138/driver.py @@ -1,4 +1,4 @@ -def run_analysis(model_name, seed=None): +def run_analysis(input_dir, output_dir, seed=None): '''This script facilitates the performance based functional recovery and reoccupancy assessment of a single building 
for a single intensity level
@@ -15,9 +15,11 @@ def run_analysis(model_name, seed=None):
    Parameters
    ----------
-   model_name: string
-       Name of the model. Inputs are expected to be in a directory with this
-       name. Outputs will save to a directory with this name
+   input_dir: string
+       Path to the directory containing the input files (simulated_inputs.json).
+
+   output_dir: string
+       Path to the directory where the output file (recovery_outputs.json) will be saved.

    seed: int
        Random seed to be passed to the Numpy random engine. Default behavior
@@ -42,12 +44,23 @@ def run_analysis(model_name, seed=None):
    from scipy.stats import truncnorm

    ## 2. Define User Inputs
-   model_dir = 'inputs/example_inputs/'+model_name # Directory where the simulated inputs are located
-   outputs_dir = 'outputs/'+model_name # Directory where the assessment outputs are saved
+   # Input/output directories are passed as arguments

    ## 3. Load FEMA P-58 performance model data and simulated damage and loss
-   f = open(os.path.join(os.path.dirname(__file__),model_dir, 'simulated_inputs.json'))
-   simulated_inputs = json.load(f)
+   # Check whether simulated_inputs.json exists; if not, build it from the raw inputs
+   sim_inputs_path = os.path.join(input_dir, 'simulated_inputs.json')
+
+   if os.path.exists(sim_inputs_path):
+       with open(sim_inputs_path) as f:
+           simulated_inputs = json.load(f)
+   else:
+       print(f"simulated_inputs.json not found in {input_dir}. Building from raw inputs...")
+       from .input_builder import build_simulated_inputs
+       simulated_inputs = build_simulated_inputs(input_dir)
+
+       # Save the simulated inputs for reuse on subsequent runs
+       with open(sim_inputs_path, 'w') as f:
+           json.dump(simulated_inputs, f)

    building_model = simulated_inputs['building_model']
    damage = simulated_inputs['damage']
@@ -59,33 +72,52 @@ def run_analysis(model_name, seed=None):
    tenant_units = simulated_inputs['tenant_units']

    # Change story indices in damage['tenant_units'], damage['story'] building_model['comps']['story'] to int from string
+   # (This ensures compatibility if JSON keys were strings)
    damage_ten_units = []
    if ('tenant_units' in damage.keys()) == True:
        for tu in range(len(damage['tenant_units'])):
-           damage_ten_units.append(damage['tenant_units'][str(tu)])
+           # Handle list vs dict containers; the builder produces a list
+           if isinstance(damage['tenant_units'], list): # list
+               damage_ten_units.append(damage['tenant_units'][tu])
+           elif str(tu) in damage['tenant_units']: # string key
+               damage_ten_units.append(damage['tenant_units'][str(tu)])
+           elif tu in damage['tenant_units']: # integer key
+               damage_ten_units.append(damage['tenant_units'][tu])
    damage['tenant_units'] = damage_ten_units

    damage_story = []
    for s in range(len(damage['story'])):
-       damage_story.append(damage['story'][str(s)])
+       if isinstance(damage['story'], list): # list
+           damage_story.append(damage['story'][s])
+       elif str(s) in damage['story']: # string key
+           damage_story.append(damage['story'][str(s)])
+       elif s in damage['story']: # integer key
+           damage_story.append(damage['story'][s])
    damage['story'] = damage_story

    bldg_comps_story = []
    for s in range(len(building_model['comps']['story'])):
-       bldg_comps_story.append(building_model['comps']['story'][str(s)])
+       if isinstance(building_model['comps']['story'], list): # list
+           bldg_comps_story.append(building_model['comps']['story'][s])
+       elif str(s) in building_model['comps']['story']: # string key
+           bldg_comps_story.append(building_model['comps']['story'][str(s)])
+       elif s in building_model['comps']['story']: # integer key
+           bldg_comps_story.append(building_model['comps']['story'][s])
    building_model['comps']['story'] = bldg_comps_story

    ## 4. Load required static data
-   systems = pd.read_csv(os.path.join(os.path.dirname(__file__), 'static_tables', 'systems.csv'))
-   subsystems = pd.read_csv(os.path.join(os.path.dirname(__file__), 'static_tables', 'subsystems.csv'))
-   impeding_factor_medians = pd.read_csv(os.path.join(os.path.dirname(__file__), 'static_tables', 'impeding_factors.csv'))
-   tmp_repair_class = pd.read_csv(os.path.join(os.path.dirname(__file__), 'static_tables', 'temp_repair_class.csv'))
+   # Static data is bundled with the package, so use __file__
+   pkg_dir = os.path.dirname(__file__)
+   systems = pd.read_csv(os.path.join(pkg_dir, 'data', 'systems.csv'))
+   subsystems = pd.read_csv(os.path.join(pkg_dir, 'data', 'subsystems.csv'))
+   impeding_factor_medians = pd.read_csv(os.path.join(pkg_dir, 'data', 'impeding_factors.csv'))
+   tmp_repair_class = pd.read_csv(os.path.join(pkg_dir, 'data', 'temp_repair_class.csv'))

    ## 5. Run Recovery Method
-   from main_PBEE_recovery import main_PBEE_recovery
+   from .engine import main_PBEE_recovery

    # set a seed
    # this seed propagates through the entire subfunctions
@@ -107,11 +139,9 @@ def run_analysis(model_name, seed=None):
                                                functionality_options)

    # 6. Save Outputs
-   # # Define Output path
-   if os.path.exists(os.path.join(os.path.dirname(__file__),'outputs')) == False:
-       os.mkdir(os.path.join(os.path.dirname(__file__),'outputs'))
-   if os.path.exists(os.path.join(os.path.dirname(__file__),'outputs', model_name)) == False:
-       os.mkdir(os.path.join(os.path.dirname(__file__),'outputs', model_name))
+   # Ensure the output directory exists (exist_ok avoids a race between check and create)
+   os.makedirs(output_dir, exist_ok=True)

    # Covert arrays to list for writing to json file
    fnc_keys_1 = list(functionality.keys())
@@ -147,18 +177,12 @@ def run_analysis(model_name, seed=None):

    output_json_object = json.dumps(functionality)

-   with open(os.path.join(os.path.dirname(__file__),outputs_dir, "recovery_outputs.json"), "w") as outfile:
+   with open(os.path.join(output_dir, "recovery_outputs.json"), "w") as outfile:
        outfile.write(output_json_object)

    end_time = time.time()
-   print('Recovery assessment of model ' + model_name + ' complete')
+   print('Recovery assessment complete')  # model_name is no longer available here
    print('time to run '+str(round(end_time - start_time,2))+'s')
-
-
-if __name__ == '__main__':
-
-   model_name = 'ICSB'
-
-   run_analysis(model_name)
diff --git a/main_PBEE_recovery.py b/src/atc138/engine.py
similarity index 94%
rename from main_PBEE_recovery.py
rename to src/atc138/engine.py
index dce3288..d2b455b 100644
--- a/main_PBEE_recovery.py
+++ b/src/atc138/engine.py
@@ -57,11 +57,11 @@ def main_PBEE_recovery(damage, damage_consequences, building_model,
    ## Import Packages
    import numpy as np

-   from preprocessing import main_preprocessing
-   from fn_red_tag import fn_red_tag
-   from impedance import main_impedance_function
-   from repair_schedule import main_repair_schedule
-   from functionality import main_functionality_function
+   from .preprocessing import main_preprocessing
+   from .red_tag import fn_red_tag
+   from .impedance import main_impedance_function
+   from .repair_schedule 
import main_repair_schedule + from .functionality import main_functionality_function ## Combine compoment attributes into recovery filters to expidite recovery assessment damage, tmp_repair_class, damage_consequences = main_preprocessing.main_preprocessing(damage['comp_ds_table'], diff --git a/src/atc138/functionality/__init__.py b/src/atc138/functionality/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/functionality/fn_calculate_functionality.py b/src/atc138/functionality/fn_calculate_functionality.py similarity index 98% rename from functionality/fn_calculate_functionality.py rename to src/atc138/functionality/fn_calculate_functionality.py index a9067f1..27d808d 100644 --- a/functionality/fn_calculate_functionality.py +++ b/src/atc138/functionality/fn_calculate_functionality.py @@ -40,7 +40,7 @@ def fn_calculate_functionality(damage, damage_consequences, utilities, ## Initial Set Up # import packages - from functionality import other_functionality_functions + from . import other_functionality_functions ## Define the day each system becomes functionl - Building level system_operation_day = other_functionality_functions.fn_building_level_system_operation(damage, diff --git a/functionality/fn_calculate_reoccupancy.py b/src/atc138/functionality/fn_calculate_reoccupancy.py similarity index 98% rename from functionality/fn_calculate_reoccupancy.py rename to src/atc138/functionality/fn_calculate_reoccupancy.py index 6af1dff..836ee00 100644 --- a/functionality/fn_calculate_reoccupancy.py +++ b/src/atc138/functionality/fn_calculate_reoccupancy.py @@ -37,7 +37,7 @@ def fn_calculate_reoccupancy(damage, damage_consequences, utilities, import numpy as np # Import packages - from functionality import other_functionality_functions + from . 
import other_functionality_functions ## Stage 1: Quantify the effect that component damage has on the building safety recovery_day={} diff --git a/functionality/fn_check_habitability.py b/src/atc138/functionality/fn_check_habitability.py similarity index 98% rename from functionality/fn_check_habitability.py rename to src/atc138/functionality/fn_check_habitability.py index 3372ee2..17da486 100644 --- a/functionality/fn_check_habitability.py +++ b/src/atc138/functionality/fn_check_habitability.py @@ -24,7 +24,7 @@ def fn_check_habitability( damage, damage_consequences, reoc_meta, func_meta, recovery trajectorires, and contributions from systems and components''' import numpy as np - from functionality import other_functionality_functions + from . import other_functionality_functions num_reals = len(damage_consequences['red_tag']) # Functionality checks to adopt onto reoccupancy requirements for diff --git a/functionality/main_functionality_function.py b/src/atc138/functionality/main_functionality_function.py similarity index 94% rename from functionality/main_functionality_function.py rename to src/atc138/functionality/main_functionality_function.py index 14c1538..ee492d0 100644 --- a/functionality/main_functionality_function.py +++ b/src/atc138/functionality/main_functionality_function.py @@ -34,10 +34,9 @@ def main_functionality(damage, building_model, damage_consequences, contains data on the recovery of tenant- and building-level function, recovery trajectorires, and contributions from systems and components''' - ## Import Packages - from functionality import fn_calculate_reoccupancy - from functionality import fn_calculate_functionality - from functionality import fn_check_habitability + from . import fn_calculate_reoccupancy + from . import fn_calculate_functionality + from . 
import fn_check_habitability ## Calaculate Building Functionality Restoration Curves # Downtime including external delays recovery = {} diff --git a/functionality/other_functionality_functions.py b/src/atc138/functionality/other_functionality_functions.py similarity index 99% rename from functionality/other_functionality_functions.py rename to src/atc138/functionality/other_functionality_functions.py index eb46048..103fede 100644 --- a/functionality/other_functionality_functions.py +++ b/src/atc138/functionality/other_functionality_functions.py @@ -797,7 +797,7 @@ def fn_building_level_system_operation( damage, damage_consequences, # import packages - from functionality import other_functionality_functions + from . import other_functionality_functions import numpy as np system_operation_day = {'building' : {}, 'comp' : {}} @@ -960,7 +960,7 @@ def subsystem_recovery(subsystem, damage, repair_complete_day, initial_damaged, dependancy): # import packages - from functionality import other_functionality_functions + from . import other_functionality_functions # Set variables recovery_day_all = dependancy['recovery_day'].copy() diff --git a/src/atc138/impedance/__init__.py b/src/atc138/impedance/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/impedance/main_impedance_function.py b/src/atc138/impedance/main_impedance_function.py similarity index 99% rename from impedance/main_impedance_function.py rename to src/atc138/impedance/main_impedance_function.py index 9a425f4..0bb3334 100644 --- a/impedance/main_impedance_function.py +++ b/src/atc138/impedance/main_impedance_function.py @@ -55,7 +55,8 @@ def main_impeding_factors(damage, impedance_options, repair_cost_ratio_total, import numpy as np from scipy.stats import truncnorm - from impedance import other_impedance_functions + + from . 
import other_impedance_functions # Initialize parameters num_reals = len(inspection_trigger) diff --git a/impedance/other_impedance_functions.py b/src/atc138/impedance/other_impedance_functions.py similarity index 100% rename from impedance/other_impedance_functions.py rename to src/atc138/impedance/other_impedance_functions.py diff --git a/inputs/Inputs2Copy/build_input.py b/src/atc138/input_builder.py similarity index 56% rename from inputs/Inputs2Copy/build_input.py rename to src/atc138/input_builder.py index f85a66b..181b3d5 100644 --- a/inputs/Inputs2Copy/build_input.py +++ b/src/atc138/input_builder.py @@ -1,28 +1,64 @@ -def build_input(output_path): - # """ - # Code for generating simulated_inputs.json file - - # Parameters - # ---------- - # output_path: string - # Path where the generated input file shall be saved. - - # """ - import numpy as np - import json - import pandas as pd - import os - import re - import sys - - print(os.getcwd()) - +import numpy as np +import json +import pandas as pd +import os +import re +import sys + +def clean_types(obj): + """ + Recursively convert numpy types to native Python types for JSON serialization, + preserving NaN as float('nan') for Numpy compatibility in the engine. + """ + if isinstance(obj, dict): + return {k: clean_types(v) for k, v in obj.items()} + elif isinstance(obj, list): + return [clean_types(i) for i in obj] + elif isinstance(obj, np.ndarray): + return clean_types(obj.tolist()) + elif isinstance(obj, (np.int64, np.int32, int)): + return int(obj) + elif isinstance(obj, (np.float64, np.float32, float)): + # Preserving NaN for numpy compatibility in downstream engine + return float(obj) + elif pd.isna(obj): + # Handle standalone pandas/numpy NaNs/NaTs + return float('nan') + return obj + +def recursive_update(d, u): + """ + Recursively update dictionary d with values from u. 
+ """ + for k, v in u.items(): + if isinstance(v, dict) and k in d and isinstance(d[k], dict): + recursive_update(d[k], v) + else: + d[k] = v + return d + + +def build_simulated_inputs(model_dir): + """ + Generates simulated_inputs dictionary from raw input files in the model directory. + Based on the original build_inputs.py script. + + Parameters + ---------- + model_dir: string + Path to the directory containing raw input files. + + Returns + ------- + simulated_inputs: dict + The complete dictionary of inputs. + """ + ''' PULL STATIC DATA If the location of this directory differs, updat the static_data_dir variable below. ''' - static_data_dir = os.path.join(os.path.dirname(__file__), '..', '..', '..', 'static_tables') - + static_data_dir = os.path.join(os.path.dirname(__file__), 'data') component_attributes = pd.read_csv(os.path.join(static_data_dir, 'component_attributes.csv')) damage_state_attribute_mapping = pd.read_csv(os.path.join(static_data_dir, 'damage_state_attribute_mapping.csv')) @@ -35,25 +71,28 @@ def build_input(output_path): for each assessment. Data is formated as json structures or csv tables''' # 1. 
Building Model: Basic data about the building being assessed - building_model = json.loads(open('building_model.json').read()) + with open(os.path.join(model_dir, 'building_model.json'), 'r') as f: + building_model = json.load(f) # If number of stories is 1, change individual values to lists in order to work with later code if building_model['num_stories'] == 1: for key in ['area_per_story_sf', 'ht_per_story_ft', 'occupants_per_story', 'stairs_per_story', 'struct_bay_area_per_story']: - building_model[key] = [building_model[key]] + if not isinstance(building_model[key], list): + building_model[key] = [building_model[key]] if building_model['num_stories'] == 1: for key in ['edge_lengths']: - building_model[key] = [[building_model[key][0]], [building_model[key][1]]] + if not isinstance(building_model[key][0], list): + building_model[key] = [[building_model[key][0]], [building_model[key][1]]] # 2. List of tenant units within the building and their basic attributes - tenant_unit_list = pd.read_csv('tenant_unit_list.csv') + tenant_unit_list = pd.read_csv(os.path.join(model_dir, 'tenant_unit_list.csv')) # 3. List of component and damage states ids associated with the damage - comp_ds_list = pd.read_csv('comp_ds_list.csv') + comp_ds_list = pd.read_csv(os.path.join(model_dir, 'comp_ds_list.csv')) # 4. 
List of component and damage states in the performance model
-   comp_population = pd.read_csv('comp_population.csv')
+   comp_population = pd.read_csv(os.path.join(model_dir, 'comp_population.csv'))
    comp_header = list(comp_population.columns)
    comp_list = np.array(comp_header[2:len(comp_header)])
    comp_list= np.char.replace(np.array(comp_list),'_','.')
@@ -71,8 +110,24 @@ def build_input(output_path):
    for s in range (building_model['num_stories']):
        building_model['comps']['story'][s] = {}
        for d in range(len(drs)):
-           filt = np.logical_and(np.array(comp_population['story']) == s+1, np.array(comp_population['dir']) == drs[d])
-           building_model['comps']['story'][s]['qty_dir_' + str(drs[d])] = comp_population.to_numpy()[filt,2:len(comp_header)].tolist()[0]
+           # [FIX] Robust key generation and missing data handling
+           current_dir = drs[d]
+           filt = np.logical_and(np.array(comp_population['story']) == s+1, np.array(comp_population['dir']) == current_dir)
+
+           # Format the key using the integer representation of the direction for consistency (e.g. qty_dir_1, not qty_dir_1.0)
+           try:
+               dir_key_suffix = str(int(current_dir))
+           except (ValueError, TypeError):
+               dir_key_suffix = str(current_dir)
+
+           qty_data = comp_population.to_numpy()[filt,2:len(comp_header)]
+
+           if qty_data.shape[0] > 0:
+               building_model['comps']['story'][s]['qty_dir_' + dir_key_suffix] = qty_data.tolist()[0]
+           else:
+               # Missing data for this story/direction; fill with zeros to avoid crashes
+               num_comps = len(comp_header) - 2
+               building_model['comps']['story'][s]['qty_dir_' + dir_key_suffix] = [0] * num_comps

    # Set comp info table
@@ -85,10 +140,12 @@ def build_input(output_path):
        else:
            comp_attr = component_attributes.to_numpy()[comp_attr_filt,:]
            comp_info['comp_id'].append(comp_list[c])
-           comp_info['comp_idx'].append(c) #FZ# or c+1. 
Review in line with latter part of the code - comp_info['structural_system'].append(float(comp_attr[0,[component_attributes.columns.get_loc('structural_system')]])) - comp_info['structural_system_alt'].append(float(comp_attr[0,[component_attributes.columns.get_loc('structural_system_alt')]])) - comp_info['structural_series_id'].append(float(comp_attr[0,[component_attributes.columns.get_loc('structural_series_id')]])) + comp_info['comp_idx'].append(c) + + # [FIX] Scalar extraction: Use scalar indexing [0, col] instead of slicing [0, [col]] to avoid array-to-scalar conversion errors + comp_info['structural_system'].append(float(comp_attr[0, component_attributes.columns.get_loc('structural_system')])) + comp_info['structural_system_alt'].append(float(comp_attr[0, component_attributes.columns.get_loc('structural_system_alt')])) + comp_info['structural_series_id'].append(float(comp_attr[0, component_attributes.columns.get_loc('structural_series_id')])) building_model['comps']['comp_table'] = comp_info @@ -99,12 +156,15 @@ def build_input(output_path): Data is formated as json structures.''' # 1. Simulated damage consequences - various building and story level consequences of simulated data, for each realization of the monte carlo simulation. - damage_consequences = json.loads(open('damage_consequences.json').read()) + with open(os.path.join(model_dir, 'damage_consequences.json'), 'r') as f: + damage_consequences = json.load(f) # 2. Simulated utility downtimes for electrical, water, and gas networks for each realization of the monte carlo simulation. 
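The utility-downtime handling in the hunk that follows uses a load-or-default pattern: read `utility_downtime.json` when it exists, otherwise assume the networks impose no downtime. A minimal standalone sketch of that pattern; the network keys in the fallback dictionary are illustrative assumptions, not the package's actual schema:

```python
import json
import os


def load_or_default(path, num_reals):
    """Load simulated utility downtimes if the file exists; otherwise
    assume zero downtime for every Monte Carlo realization.
    NOTE: the network names below are placeholders for illustration."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    # No data: no consequence of network downtime in any realization
    return {network: [0.0] * num_reals
            for network in ('electrical', 'water', 'gas')}
```

Because the fallback is generated from `num_reals`, it stays consistent with the realization count of `damage_consequences` even when no utility data was simulated.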
# If file exists load it
-   if os.path.exists('utility_downtime.json') == True:
-       functionality = json.loads(open('utility_downtime.json').read())
+   utility_path = os.path.join(model_dir, 'utility_downtime.json')
+   if os.path.exists(utility_path):
+       with open(utility_path, 'r') as f:
+           functionality = json.load(f)
    # else If no data exist, assume there is no consequence of network downtime
    else:
        num_reals = len(damage_consequences["repair_cost_ratio_total"])
@@ -117,7 +177,9 @@ def build_input(output_path):

    # 3. Simulated component damage per tenant unit for each realization of the monte carlo simulation
-   sim_damage = json.loads(open('simulated_damage.json').read())
+   with open(os.path.join(model_dir, 'simulated_damage.json'), 'r') as f:
+       sim_damage = json.load(f)

    # Write in individual dictionaries part of larger 'damage' dictionary
    damage = {'story' : {}, 'tenant_units' : {}}
@@ -136,13 +198,26 @@ def build_input(output_path):
    optional_inputs.json file. This file is expected to be in this input directory. 
This file can be customized for each assessment if desired.''' - optional_inputs = json.load(open("optional_inputs.json")) - functionality_options = optional_inputs['functionality_options'] - impedance_options = optional_inputs['impedance_options'] - repair_time_options = optional_inputs['repair_time_options'] + # Load defaults first, then merge user overrides + pkg_dir = os.path.dirname(__file__) + defaults_path = os.path.join(pkg_dir, 'data', 'default_inputs.json') + with open(defaults_path, 'r') as f: + options = json.load(f) + + user_options_path = os.path.join(model_dir, 'optional_inputs.json') + if os.path.exists(user_options_path): + with open(user_options_path, 'r') as f: + user_options = json.load(f) + options = recursive_update(options, user_options) + + functionality_options = options['functionality_options'] + impedance_options = options['impedance_options'] + repair_time_options = options['repair_time_options'] + + # Preallocate tenant unit table - tenant_units = tenant_unit_list; + tenant_units = tenant_unit_list.copy() # copy to avoid SettingWithCopy if passed dataframe tenant_units['exterior'] = np.zeros(len(tenant_units)) tenant_units['interior'] = np.zeros(len(tenant_units)) tenant_units['occ_per_elev'] = np.zeros(len(tenant_units)) @@ -158,26 +233,32 @@ def build_input(output_path): '''Pull default tenant unit attributes for each tenant unit listed in the tenant_unit_list''' for tu in range(len(tenant_unit_list)): - fnc_requirements_filt = tenant_function_requirements['occupancy_id'] == tenant_units['occupancy_id'][tu] + occ_id = tenant_units.loc[tu, 'occupancy_id'] # Use .loc for pandas safety + fnc_requirements_filt = tenant_function_requirements['occupancy_id'] == occ_id if sum(fnc_requirements_filt) != 1: - sys.exit('error! Tenant Unit Requirements for This Occupancy Not Found') + raise ValueError(f'error! 
Tenant Unit Requirements for Occupancy ID {occ_id} Not Found') - tenant_units['exterior'][tu] = tenant_function_requirements['exterior'][fnc_requirements_filt] - tenant_units['interior'][tu] = tenant_function_requirements['interior'][fnc_requirements_filt] - tenant_units['occ_per_elev'][tu] = tenant_function_requirements['occ_per_elev'][fnc_requirements_filt] - if list(tenant_function_requirements['is_elevator_required'][fnc_requirements_filt] == 1)[0] and list(tenant_function_requirements['max_walkable_story'][fnc_requirements_filt] < tenant_units['story'][tu])[0]: - tenant_units['is_elevator_required'][tu] = 1 + # Accessing filtered rows. Original input builder used filtered Series assignment. + req_row = tenant_function_requirements[fnc_requirements_filt].iloc[0] + + tenant_units.loc[tu, 'exterior'] = req_row['exterior'] + tenant_units.loc[tu, 'interior'] = req_row['interior'] + tenant_units.loc[tu, 'occ_per_elev'] = req_row['occ_per_elev'] + + story = tenant_units.loc[tu, 'story'] + if req_row['is_elevator_required'] == 1 and req_row['max_walkable_story'] < story: + tenant_units.loc[tu, 'is_elevator_required'] = 1 else: - tenant_units['is_elevator_required'][tu] = 0 - - tenant_units['is_electrical_required'][tu] = tenant_function_requirements['is_electrical_required'][fnc_requirements_filt] - tenant_units['is_water_potable_required'][tu] = tenant_function_requirements['is_water_potable_required'][fnc_requirements_filt] - tenant_units['is_water_sanitary_required'][tu] = tenant_function_requirements['is_water_sanitary_required'][fnc_requirements_filt] - tenant_units['is_hvac_ventilation_required'][tu] = tenant_function_requirements['is_hvac_ventilation_required'][fnc_requirements_filt] - tenant_units['is_hvac_heating_required'][tu] = tenant_function_requirements['is_hvac_heating_required'][fnc_requirements_filt] - tenant_units['is_hvac_cooling_required'][tu] = tenant_function_requirements['is_hvac_cooling_required'][fnc_requirements_filt] - 
tenant_units['is_hvac_exhaust_required'][tu] = tenant_function_requirements['is_hvac_exhaust_required'][fnc_requirements_filt]
-       tenant_units['is_data_required'][tu] = tenant_function_requirements['is_data_required'][fnc_requirements_filt]
+       tenant_units.loc[tu, 'is_elevator_required'] = 0
+
+       tenant_units.loc[tu, 'is_electrical_required'] = req_row['is_electrical_required']
+       tenant_units.loc[tu, 'is_water_potable_required'] = req_row['is_water_potable_required']
+       tenant_units.loc[tu, 'is_water_sanitary_required'] = req_row['is_water_sanitary_required']
+       tenant_units.loc[tu, 'is_hvac_ventilation_required'] = req_row['is_hvac_ventilation_required']
+       tenant_units.loc[tu, 'is_hvac_heating_required'] = req_row['is_hvac_heating_required']
+       tenant_units.loc[tu, 'is_hvac_cooling_required'] = req_row['is_hvac_cooling_required']
+       tenant_units.loc[tu, 'is_hvac_exhaust_required'] = req_row['is_hvac_exhaust_required']
+       tenant_units.loc[tu, 'is_data_required'] = req_row['is_data_required']

    '''Pull default component and damage state attributes for each component in the
    comp_ds_list'''
@@ -235,25 +316,27 @@ def build_input(output_path):
        # Find the component attributes of this component
        comp_attr_filt = component_attributes['fragility_id'] == comp_ds_list['comp_id'][c]
        if sum(comp_attr_filt) != 1:
-           sys.exit('error! Could not find component attrubutes')
+           raise ValueError('error! Could not find component attributes')
        else:
-           # comp_attr = component_attributes[comp_attr_filt,:);
-           comp_attr = component_attributes.to_numpy()[comp_attr_filt,:] #FZ# Changed to numpy array to filter out
-           comp_attr = pd.DataFrame(comp_attr, columns = list(component_attributes.columns)) #FZ# Changed back to DataFrame
+           comp_attr = component_attributes[comp_attr_filt].iloc[0] # Changed to Series access for robust scalar extraction

        ds_comp_filt = []
        for frag_reg in range(len(damage_state_attribute_mapping["fragility_id_regex"])):
-
+           regex_str = damage_state_attribute_mapping["fragility_id_regex"][frag_reg]
+           cid = comp_ds_list["comp_id"][c]
+           match = re.search(regex_str, cid)
+
            # Mapping components with attributes - Checks are based on mapping, comp_id, seq_id and sub_id
            # Matching element ID using information contained in damage_state_attribute_mapping ["fragility_id_regex"]
-           if re.search(damage_state_attribute_mapping["fragility_id_regex"][frag_reg], comp_ds_list["comp_id"][c]) == None:
-               ds_comp_filt.append(0)
-           elif (re.search(damage_state_attribute_mapping["fragility_id_regex"][frag_reg], comp_ds_list["comp_id"][c])).string == comp_ds_list["comp_id"][c]:
-               ds_comp_filt.append(1)
+           if match and match.string == cid:
+               ds_comp_filt.append(True)
            else:
-               ds_comp_filt.append(0)
+               ds_comp_filt.append(False)
+       ds_comp_filt = np.array(ds_comp_filt) # Convert to array for boolean indexing
+
        ds_seq_filt = damage_state_attribute_mapping['ds_index'] == comp_ds_list['ds_seq_id'][c]
        if comp_ds_list['ds_sub_id'][c] == 1:
            ds_sub_filt = np.logical_or(damage_state_attribute_mapping['sub_ds_index'] ==1, damage_state_attribute_mapping['sub_ds_index'].isnull())
@@ -263,76 +346,75 @@ def build_input(output_path):

        ds_filt = ds_comp_filt & ds_seq_filt & ds_sub_filt
        if sum(ds_filt) != 1:
-           sys.exit('error!, Could not find damage state attrubutes')
+           raise ValueError('error! Could not find damage state attributes')
        else:
-           ds_attr = 
damage_state_attribute_mapping.to_numpy()[ds_filt,:] #FZ# Changed to numpy array to filter out - ds_attr = pd.DataFrame(ds_attr, columns = list(damage_state_attribute_mapping.columns)) #FZ# Changed back to DataFrame + ds_attr = damage_state_attribute_mapping[ds_filt].iloc[0] # Series access ## Populate data for each damage state # Basic Component and DS identifiers comp_ds_info['comp_id'].append(comp_ds_list['comp_id'][c]) comp_ds_info['comp_type_id'].append(comp_ds_list['comp_id'][c][0:5]) # first 5 characters indicate the type comp_ds_info['comp_idx'].append(c) - comp_ds_info['ds_seq_id'].append(ds_attr['ds_index'][0]) - # comp_ds_info['ds_sub_id'][c] = str2double(strrep(ds_attr.sub_ds_index{1},'NA','1')) - comp_ds_info['ds_sub_id'].append(ds_attr['sub_ds_index'][0]) - if np.isnan(comp_ds_info['ds_sub_id'][c]): - comp_ds_info['ds_sub_id'][c] = 1.0 + comp_ds_info['ds_seq_id'].append(ds_attr['ds_index']) + + sub_id = ds_attr['sub_ds_index'] + if pd.isna(sub_id): sub_id = 1.0 + comp_ds_info['ds_sub_id'].append(sub_id) # Set Component Attributes - comp_ds_info['system'].append(comp_attr['system_id'][0]) - comp_ds_info['subsystem_id'].append(comp_attr['subsystem_id'][0]) - comp_ds_info['structural_system'].append(comp_attr['structural_system'][0]) - comp_ds_info['structural_system_alt'].append(comp_attr['structural_system_alt'][0]) # component_attributes.csv does not have structural_system_alt field - comp_ds_info['structural_series_id'].append(comp_attr['structural_series_id'][0]) - comp_ds_info['unit'].append(comp_attr['unit'][0]) #FZ# Check w.r.t. matlab output - comp_ds_info['unit_qty'].append(comp_attr['unit_qty'][0]) - comp_ds_info['service_location'].append(comp_attr['service_location'][0]) #FZ# Check w.r.t. 
matlab output + comp_ds_info['system'].append(comp_attr['system_id']) + comp_ds_info['subsystem_id'].append(comp_attr['subsystem_id']) + comp_ds_info['structural_system'].append(comp_attr['structural_system']) + comp_ds_info['structural_system_alt'].append(comp_attr['structural_system_alt']) # component_attributes.csv does not have structural_system_alt field + comp_ds_info['structural_series_id'].append(comp_attr['structural_series_id']) + comp_ds_info['unit'].append(comp_attr['unit']) #FZ# Check w.r.t. matlab output + comp_ds_info['unit_qty'].append(comp_attr['unit_qty']) + comp_ds_info['service_location'].append(comp_attr['service_location']) #FZ# Check w.r.t. matlab output # Set Damage State Attributes - comp_ds_info['is_sim_ds'].append(ds_attr['is_sim_ds'][0]) - comp_ds_info['safety_class'].append(ds_attr['safety_class'][0]) - comp_ds_info['affects_envelope_safety'].append(ds_attr['affects_envelope_safety'][0]) - comp_ds_info['ext_falling_hazard'].append(ds_attr['exterior_falling_hazard'][0]) - comp_ds_info['int_falling_hazard'].append(ds_attr['interior_falling_hazard'][0]) - comp_ds_info['global_hazardous_material'].append(ds_attr['global_hazardous_material'][0]) - comp_ds_info['local_hazardous_material'].append(ds_attr['local_hazardous_material'][0]) - comp_ds_info['weakens_fire_break'].append(ds_attr['weakens_fire_break'][0]) - comp_ds_info['affects_access'].append(ds_attr['affects_access'][0]) - comp_ds_info['damages_envelope_seal'].append(ds_attr['damages_envelope_seal'][0]) - comp_ds_info['affects_roof_function'].append(ds_attr['affects_roof_function'][0]) - comp_ds_info['obstructs_interior_space'].append(ds_attr['obstructs_interior_space'][0]) - comp_ds_info['impairs_system_operation'].append(ds_attr['impairs_system_operation'][0]) - comp_ds_info['causes_flooding'].append(ds_attr['causes_flooding'][0]) - comp_ds_info['interior_area_factor'].append(ds_attr['interior_area_factor'][0]) - 
comp_ds_info['interior_area_conversion_type'].append(ds_attr['interior_area_conversion_type'][0]) - comp_ds_info['exterior_surface_area_factor'].append(ds_attr['exterior_surface_area_factor'][0]) - comp_ds_info['exterior_falling_length_factor'].append(ds_attr['exterior_falling_length_factor'][0]) - comp_ds_info['crew_size'].append(ds_attr['crew_size'][0]) - comp_ds_info['permit_type'].append(ds_attr['permit_type'][0]) - comp_ds_info['redesign'].append(ds_attr['redesign'][0]) - comp_ds_info['long_lead_time'].append(impedance_options['default_lead_time'] * ds_attr['long_lead'][0]) - comp_ds_info['requires_shoring'].append(ds_attr['requires_shoring'][0]) - comp_ds_info['resolved_by_scaffolding'].append(ds_attr['resolved_by_scaffolding'][0]) - comp_ds_info['tmp_repair_class'].append(ds_attr['tmp_repair_class'][0]) - comp_ds_info['tmp_repair_time_lower'].append(ds_attr['tmp_repair_time_lower'][0]) - comp_ds_info['tmp_repair_time_upper'].append(ds_attr['tmp_repair_time_upper'][0]) - - if comp_ds_info['tmp_repair_class'][c] > 0: # only grab values for components with temp repair times - time_lower_quantity = ds_attr['time_lower_quantity'][0] - time_upper_quantity = ds_attr['time_upper_quantity'][0] + # Map fields (legacy mapping logic preserved where simple) + comp_ds_info['is_sim_ds'].append(ds_attr['is_sim_ds']) + comp_ds_info['safety_class'].append(ds_attr['safety_class']) + comp_ds_info['affects_envelope_safety'].append(ds_attr['affects_envelope_safety']) + comp_ds_info['ext_falling_hazard'].append(ds_attr['exterior_falling_hazard']) + comp_ds_info['int_falling_hazard'].append(ds_attr['interior_falling_hazard']) + comp_ds_info['global_hazardous_material'].append(ds_attr['global_hazardous_material']) + comp_ds_info['local_hazardous_material'].append(ds_attr['local_hazardous_material']) + comp_ds_info['weakens_fire_break'].append(ds_attr['weakens_fire_break']) + comp_ds_info['affects_access'].append(ds_attr['affects_access']) + 
comp_ds_info['damages_envelope_seal'].append(ds_attr['damages_envelope_seal']) + comp_ds_info['affects_roof_function'].append(ds_attr['affects_roof_function']) + comp_ds_info['obstructs_interior_space'].append(ds_attr['obstructs_interior_space']) + comp_ds_info['impairs_system_operation'].append(ds_attr['impairs_system_operation']) + comp_ds_info['causes_flooding'].append(ds_attr['causes_flooding']) + comp_ds_info['interior_area_factor'].append(ds_attr['interior_area_factor']) + comp_ds_info['interior_area_conversion_type'].append(ds_attr['interior_area_conversion_type']) + comp_ds_info['exterior_surface_area_factor'].append(ds_attr['exterior_surface_area_factor']) + comp_ds_info['exterior_falling_length_factor'].append(ds_attr['exterior_falling_length_factor']) + comp_ds_info['crew_size'].append(ds_attr['crew_size']) + comp_ds_info['permit_type'].append(ds_attr['permit_type']) + comp_ds_info['redesign'].append(ds_attr['redesign']) + comp_ds_info['long_lead_time'].append(impedance_options['default_lead_time'] * ds_attr['long_lead']) + comp_ds_info['requires_shoring'].append(ds_attr['requires_shoring']) + comp_ds_info['resolved_by_scaffolding'].append(ds_attr['resolved_by_scaffolding']) + comp_ds_info['tmp_repair_class'].append(ds_attr['tmp_repair_class']) + comp_ds_info['tmp_repair_time_lower'].append(ds_attr['tmp_repair_time_lower']) + comp_ds_info['tmp_repair_time_upper'].append(ds_attr['tmp_repair_time_upper']) - comp_ds_info['tmp_repair_time_lower_qnty'].append(time_lower_quantity) - comp_ds_info['tmp_repair_time_upper_qnty'].append(time_upper_quantity) + tmp_class = ds_attr['tmp_repair_class'] + if tmp_class > 0: + comp_ds_info['tmp_repair_time_lower_qnty'].append(ds_attr['time_lower_quantity']) + comp_ds_info['tmp_repair_time_upper_qnty'].append(ds_attr['time_upper_quantity']) else: comp_ds_info['tmp_repair_time_lower_qnty'].append(np.nan) comp_ds_info['tmp_repair_time_upper_qnty'].append(np.nan) - 
comp_ds_info['tmp_crew_size'].append(ds_attr['tmp_crew_size'][0])
+        comp_ds_info['tmp_crew_size'].append(ds_attr['tmp_crew_size'])
 
         # Subsystem attributes
-        subsystem_filt = subsystems['id'] == comp_attr['subsystem_id'][0]
-        if comp_ds_info['subsystem_id'][c] == 0:
+        sub_id = comp_attr['subsystem_id']
+        subsystem_filt = subsystems['id'] == sub_id
+        if sub_id == 0:
             # No subsystem
             comp_ds_info['n1_redundancy'].append(0)
             comp_ds_info['parallel_operation'].append(0)
@@ -340,60 +422,38 @@ def build_input(output_path):
         elif sum(subsystem_filt) != 1:
             sys.exit('error! Could not find damage state attributes')
         else:
-            # Set Damage State Attributes
-            comp_ds_info['n1_redundancy'].append(np.array(subsystems['n1_redundancy'])[subsystem_filt][0])
-            comp_ds_info['parallel_operation'].append(np.array(subsystems['parallel_operation'])[subsystem_filt][0])
-            comp_ds_info['redundancy_threshold'].append(np.array(subsystems['redundancy_threshold'])[subsystem_filt][0])
+            sub_row = subsystems[subsystem_filt].iloc[0]
+            comp_ds_info['n1_redundancy'].append(sub_row['n1_redundancy'])
+            comp_ds_info['parallel_operation'].append(sub_row['parallel_operation'])
+            comp_ds_info['redundancy_threshold'].append(sub_row['redundancy_threshold'])
 
     damage['comp_ds_table'] = comp_ds_info
 
     ## Check missing data
     # Engineering Repair Cost Ratio - Assume is the sum of all component repair
     # costs that require redesign
-    if 'repair_cost_ratio_engineering' in damage_consequences.keys() == False:
+    if 'repair_cost_ratio_engineering' not in damage_consequences:
         eng_filt = np.array(damage['comp_ds_table']['redesign']).astype(bool)
-        damage_consequences['repair_cost_ratio_engineering'] = np.zeros(len(damage_consequences['repair_cost_ratio_total']))
-        for s in range(len(sim_damage['story'])):
-            damage_consequences['repair_cost_ratio_engineering'] = 
damage_consequences['repair_cost_ratio_engineering'] + np.sum(sim_damage['story'][s]['repair_cost'][:,eng_filt], axis = 1) + # Re-calc using numpy arrays + costs = np.zeros(len(damage_consequences['repair_cost_ratio_total'])) + if 'story' in sim_damage: + for s in range(len(sim_damage['story'])): + story_costs = np.array(sim_damage['story'][s]['repair_cost']) + costs += np.sum(story_costs[:, eng_filt], axis=1) + damage_consequences['repair_cost_ratio_engineering'] = costs.tolist() - # Covert to Python int and floats for creating .json file - for key in list(damage['comp_ds_table'].keys()): - for i in range(len(damage['comp_ds_table'][key])): - if type(damage['comp_ds_table'][key][i]) == np.int64: - damage['comp_ds_table'][key][i] = int(damage['comp_ds_table'][key][i]) - if type(damage['comp_ds_table'][key][i]) == np.float64: - damage['comp_ds_table'][key][i] = float(damage['comp_ds_table'][key][i]) - - for key in list(tenant_units.keys()): - for i in range(len(tenant_units[key])): - if type(tenant_units[key][i]) == np.int64: - tenant_units[key][i] = int(tenant_units[key][i]) - if type(tenant_units[key][i]) == np.float64: - tenant_units[key][i] = float(tenant_units[key][i]) - # Convert tenant_units dataframe to dictionary - tenant_units_dict = {} - for i in list(tenant_units.columns): - tenant_units_dict[i] = list(tenant_units[i]) - - tenant_units = tenant_units_dict + tenant_units_dict = tenant_units.to_dict(orient='list') # Export output as simulated_inputs.json file - simulated_inputs = {'building_model' : building_model, 'damage' : damage, 'damage_consequences' : damage_consequences, 'functionality' : functionality, 'functionality_options' : functionality_options, 'impedance_options' : impedance_options, 'repair_time_options' : repair_time_options, 'tenant_units' : tenant_units} - - for inp in simulated_inputs: - output_json_object = json.dumps(simulated_inputs) + simulated_inputs = {'building_model' : building_model, 'damage' : damage, 'damage_consequences' : 
damage_consequences, 'functionality' : functionality, 'functionality_options' : functionality_options, 'impedance_options' : impedance_options, 'repair_time_options' : repair_time_options, 'tenant_units' : tenant_units_dict} - # with open("simulated_inputs.json", "w") as outfile: - # outfile.write(output_json_object) + # [FIX] Type cleaning using recursive helper (enables JSON serialization while preserving NaNs) + simulated_inputs = clean_types(simulated_inputs) - with open(output_path, "w") as outfile: - outfile.write(output_json_object) - -if __name__ == '__main__': - - output_path = "simulated_inputs.json" - - build_input(output_path) + return simulated_inputs diff --git a/src/atc138/preprocessing/__init__.py b/src/atc138/preprocessing/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/preprocessing/main_preprocessing.py b/src/atc138/preprocessing/main_preprocessing.py similarity index 97% rename from preprocessing/main_preprocessing.py rename to src/atc138/preprocessing/main_preprocessing.py index ec72aec..28ccaf5 100644 --- a/preprocessing/main_preprocessing.py +++ b/src/atc138/preprocessing/main_preprocessing.py @@ -39,8 +39,8 @@ def main_preprocessing(comp_ds_table, damage, repair_time_options, temp_repair_c damage_consequences: dictionary dictionary containing simulated building consequences, such as red''' - # Import Packages - from preprocessing import preprocessing_fns + import numpy as np + from . 
import preprocessing_fns ## Define simulated damage in each tenant unit if not provided by the user damage = preprocessing_fns.fn_populate_damage_per_tu(damage) diff --git a/preprocessing/preprocessing_fns.py b/src/atc138/preprocessing/preprocessing_fns.py similarity index 100% rename from preprocessing/preprocessing_fns.py rename to src/atc138/preprocessing/preprocessing_fns.py diff --git a/fn_red_tag.py b/src/atc138/red_tag.py similarity index 100% rename from fn_red_tag.py rename to src/atc138/red_tag.py diff --git a/src/atc138/repair_schedule/__init__.py b/src/atc138/repair_schedule/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/repair_schedule/main_repair_schedule.py b/src/atc138/repair_schedule/main_repair_schedule.py similarity index 99% rename from repair_schedule/main_repair_schedule.py rename to src/atc138/repair_schedule/main_repair_schedule.py index 43b2df7..1b788d2 100644 --- a/repair_schedule/main_repair_schedule.py +++ b/src/atc138/repair_schedule/main_repair_schedule.py @@ -56,7 +56,7 @@ def main_repair_schedule(damage, building_model, simulated_red_tags, import math import numpy as np - from repair_schedule import other_repair_schedule_functions + from . 
import other_repair_schedule_functions
 
     ## initial Setup
     # Define the maximum number of workers that can be on site, based on REDI
diff --git a/repair_schedule/other_repair_schedule_functions.py b/src/atc138/repair_schedule/other_repair_schedule_functions.py
similarity index 99%
rename from repair_schedule/other_repair_schedule_functions.py
rename to src/atc138/repair_schedule/other_repair_schedule_functions.py
index d5f27ca..4a1aa41 100644
--- a/repair_schedule/other_repair_schedule_functions.py
+++ b/src/atc138/repair_schedule/other_repair_schedule_functions.py
@@ -435,8 +435,8 @@ def fn_set_repair_constraints(systems, repair_type, conditionTag):
     # Interior Constraints
     if repair_type == 'full':
         # Interiors are delayed by structural repairs
-        interiors_idx = np.where(np.array(systems['name']) == 'interior')[0] #FZ# [0] is done to convert tuple to np array
-        structure_idx = np.where(np.array(systems['name']) == 'structural')[0]
+        interiors_idx = np.where(np.array(systems['name']) == 'interior')[0][0] #FZ# np.where returns a tuple; the first [0] extracts the index array, the second [0] gets the scalar index
+        structure_idx = np.where(np.array(systems['name']) == 'structural')[0][0]
         sys_constraint_matrix[:,interiors_idx] = structure_idx+1 #FZ# +1 is done to replace with the system id which starts with 1, but python indexing starts at 0.