"Conductor" of model construction (frontend api of MDDO demo system).
+ model-conductor/ # https://github.com/ool-mddo/model-conductor (THIS repository)
+ doc/ # class documents (generated w/yard)
+ lib/ # library for application
- Ruby >= 3.1.0 (development under ruby/3.1.0 and bundler/2.3.5)
# If you install gems into project local
# bundle config set --local path 'vendor/bundle'
bundle install

API entrypoints:
- BATFISH_WRAPPER_HOST: batfish-wrapper host
- NETOMOX_EXP_HOST: netomox-exp host
Log level variable:
- MODEL_CONDUCTOR_LOG_LEVEL (default: info): select a value from fatal, error, warn, info and debug
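For example, the backend hosts and log level can be set before starting the server. The host names and port numbers below are assumptions; use the values from your own deployment:

```shell
# Hypothetical endpoints -- point these at your own batfish-wrapper / netomox-exp instances
export BATFISH_WRAPPER_HOST="batfish-wrapper:5000"
export NETOMOX_EXP_HOST="netomox-exp:9292"
# One of: fatal, error, warn, info, debug
export MODEL_CONDUCTOR_LOG_LEVEL="debug"
```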
bundle exec rackup -s webrick -o 0.0.0.0 -p 9292

For development: rerun watches file updates and reloads the server.
Use --force-polling in a container with a volume mount.

rerun [--force-polling] bundle exec rackup -s webrick -o 0.0.0.0 -p 9292

Generate snapshot topology from query data for all snapshots in a network
- POST
/conduct/<network>/<snapshot>/topology
- label: Label (description) of the physical snapshot
- phy_ss_only: [optional] Flag to target only physical snapshots
- use_parallel: [optional] Flag to use parallel processing for the query-to-topology data generation stage
- off_node: [optional] Node name to draw-off
- off_intf_re: [optional] Interface name (regexp match) to draw-off in off_node
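For reference, a minimal model-info.json request body might look like the sketch below. Only the label field is documented above; the optional flags and all values here are illustrative:

```json
{
  "label": "description of the snapshot",
  "phy_ss_only": false,
  "use_parallel": true
}
```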
# model-info.json
# -> { "label": "description of the snapshot", ... }
curl -X POST -H "Content-Type: application/json" -d @model-info.json \
  http://localhost:9292/conduct/pushed_configs/mddo_network/topology

Fetch subsets of a snapshot
- GET
/conduct/<network>/<snapshot>/subsets
curl http://localhost:9292/conduct/pushed_configs/mddo_network/subsets

Fetch subset comparison data between physical and logical snapshots in a network
- GET
/conduct/<network>/<physical-snapshot>/subsets_diff
- min_score: [optional] Ignore comparison data lower than this score (default: 0)
curl http://localhost:9292/conduct/pushed_configs/mddo_network/subsets_diff

Run reachability test with test-pattern definition
- POST
/conduct/<network>/reachability
- snapshots: List of snapshots to test reachability
- test_pattern: Test pattern definition
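The same request can be built from Ruby with the standard net/http library. This is a sketch: the host, port and snapshot name are assumptions, and the test-pattern content is left as a placeholder:

```ruby
require 'json'
require 'net/http'
require 'uri'

# Hypothetical endpoint -- adjust network name, host and port to your environment
uri = URI('http://localhost:9292/conduct/pushed_configs/reachability')

# Mirrors the structure of test_pattern.json shown below
body = {
  'snapshots' => ['mddo_network_linkdown_01'],
  'test_pattern' => {} # placeholder: supply your test-pattern definition here
}

req = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
req.body = JSON.generate(body)

# Uncomment to actually send the request when the service is running:
# res = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(req) }
# puts res.body
```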
# test_pattern.json
# -> { "snapshots": ["mddo_network_linkdown_01", ...], "test_pattern": <test-pattern> }
curl -X POST -H "Content-Type: application/json" -d @test_pattern.json \
  http://localhost:9292/conduct/pushed_configs/reachability

Take snapshot diff in a network
- GET
/conduct/<network>/snapshot_diff/<src_snapshot>/<dst_snapshot>
- upper_layer3: [optional] compare layer3 or upper layers
curl -s "http://localhost:9292/conduct/mddo-ospf/snapshot_diff/emulated_asis/emulated_tobe?upper_layer3=true"Take snapshot diff and write back (overwrite) as destination snapshot
- POST
/conduct/<network>/snapshot_diff/<src_snapshot>/<dst_snapshot>
- upper_layer3: [optional] compare layer3 or upper layers
curl -s -X POST -H 'Content-Type: application/json' \
-d '{ "upper_layer3": true }' \
  http://localhost:9292/conduct/mddo-ospf/snapshot_diff/emulated_asis/emulated_tobe

Convert namespace of source snapshot and post it as destination snapshot
- POST
/conduct/<network>/ns_convert/<src_snapshot>/<dst_snapshot>
- table_origin: [optional] Origin snapshot name to initialize the convert table. Forces an update of the network's convert table if this option is used.
curl -s -X POST -H 'Content-Type: application/json' \
-d '{ "table_origin": "original_asis" }' \
  http://localhost:9292/conduct/mddo-ospf/ns_convert/original_asis/emulated_asis

Splice external topology (external-AS and/or Layer3 pre-allocated resources topology) to snapshot topology.
- POST
/conduct/<network>/<snapshot>/splice_topology
- ext_topology_data: [optional] external topology data (RFC8345 json) to splice
- l3_preallocated_resources: [optional] preallocated L3 resources (RFC8345 json) to splice
- overwrite: [optional] true to write snapshot topology (default: true). If false, it does not modify the snapshot topology (only returns the spliced topology data)
# ext_topology.json : external topology data to splice (RFC8345 json)
curl -s -X POST -H "Content-Type: application/json" \
-d @<(jq -s '{ "l3_preallocated_resources": .[0], "ext_topology_data": .[1] }' l3p_topo.json ext_as_topo.json) \
  http://localhost:9292/conduct/mddo-bgp/original_asis/splice_topology

For manual-steps usecase.
Generate topology data based on the specified command. The target is the original/preallocated(N) snapshot, and it generates the next preallocated(N+1) snapshot.
command:
- connect_link: "shut to no-shut" operation
- shutdown_intf: "no-shut to shut" operation
- POST
/conduct/<network>/topology_ops
- command: command name
- dry_run: [optional] dry-run flag (default: false)
- args: arguments for each command
curl -s -X POST -H "Content-Type: application/json" \
-d '{ "command": "connect_link", "args": { "link": { "source": { "node": "as65550-edge01", "tp": "Ethernet3" }, "destination": { "node": "edge-tk12", "tp": "ge-0/0/0.0" } } } } }}' \
  http://localhost:9292/conduct/mddo-bgp/topology_ops

curl -s -X POST -H "Content-Type: application/json" \
-d '{ "command": "shutdown_intf", "args": { "interface": { "node": "as65550-edge01", "tp": "Ethernet3" } } }' \
  http://localhost:9292/conduct/mddo-bgp/topology_ops

Get current/next target snapshot names of the topology-ops API.
- GET
/conduct/<network>/topology_ops_targets
curl http://localhost:9292/conduct/mddo-bgp/topology_ops_targets

Note
Currently, for bgp-proc only (to "patch" bgp policy or other attribute data). It is used by bgp-policy-parser in the PNI use-case of the copy-to-emulated-env demo.
Add node/term-point attribute.
- POST
/conduct/<network>/<snapshot>/topology/<layer>/policies
- node: node/term-point attribute (RFC8345-json format)
Patch data (bgp-policy-patch.json)
{
  "node": [
    {
      "node-id": "192.168.255.7",
      "ietf-network-topology:termination-point": [
        {
          "tp-id": "peer_192.168.255.2",
          "mddo-topology:bgp-proc-termination-point-attributes": {
            "import-policy": ["ibgp-export"]
          }
        }
      ]
    }
  ]
}

curl -s -X POST -H 'Content-Type: application/json' \
-d @bgp-policy-patch.json \
  http://localhost:9292/conduct/biglobe_deform/original_asis/topology/bgp_proc/policies

Generate candidate configs from original_asis snapshot
- POST
/conduct/<network>/<snapshot>/candidate_topology
- phase_number: Number of search iteration (phase)
- candidate_number: Number of candidate configs
- usecase: Usecase parameters
  - name: Usecase name
  - sources: Data sources for the usecase
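The curl example below omits phase_number; a request body carrying all three documented parameters might look like this sketch (values are illustrative, and the placement of phase_number alongside the others is an assumption based on the parameter list above):

```json
{
  "phase_number": 1,
  "candidate_number": 3,
  "usecase": {
    "name": "pni_te",
    "sources": ["params", "phase_candidate_opts"]
  }
}
```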
curl -s -X POST -H 'Content-Type: application/json' \
-d '{"candidate_number": 3, "usecase": { "name": "pni_te", "sources": ["params", "phase_candidate_opts"]}}' \
  http://localhost:9292/conduct/mddo-bgp/original_asis/candidate_topology

NOTE: "phase_candidate_opts" for multi_region_te or multi_src_as_te usecase
"phase_candidate_opts.yaml" can include flow_data for candidate model generation. If with flow_data, candidate model is generated using the flow_data, prefix-set selection done by flow_data. But without flow_data, candidate model is simple selection, prefix-set selection done by sequential.---
peer_asn: 65550
node: edge-tk01
interface: ge-0/0/1.0
flow_data: flows/event # csv

model-conductor uses the netomox gem, which is published on GitHub Packages.
So it needs authentication to run bundle install when building its container image.
You have to pass an authentication credential via the ghp_credential environment variable as below:
- USERNAME: your GitHub username
- TOKEN: your GitHub personal access token (needs read:packages scope)
ghp_credential="USERNAME:TOKEN" docker buildx build -t model-conductor --secret id=ghp_credential .YARD options are in .yardopts file.
bundle exec rake yard

Run yard document server (access http://localhost:8808/ with a browser)
bundle exec yard server

bundle exec rake rubocop
# or
bundle exec rake rubocop:auto_correct