@ido-shm-uel (Contributor)
Implemented the INVPROP (arXiv:2302.01404v4 [cs.LG]) and Partial Multi-Neuron Relaxation (PMNR) bound-tightening algorithms.

  1. Modified the DeepPoly module to enable storing symbolic bounds in the NLR and to support parameterised SBT for ReLU, LeakyReLU, Sign and Bilinear layers (a minimal ReLU sketch follows this list).
  2. Implemented storing BBPS (arXiv:2405.21063v2 [cs.LG]) branching points and scores in the NLR.
  3. Implemented random, gradient-based (arXiv:1804.10829v3 [cs.AI]) and BBPS-based heuristics for PMNR neuron selection and Polygonal Tightening generation for INVPROP.
  4. Implemented the INVPROP algorithm for tightening Polygonal Tightenings; the implementation supports all layer types.
  5. Implemented branching on selected neurons before INVPROP to derive stronger INVPROP bounds.
  6. Corrected SBT and LP relaxations for Sigmoid, Max, Softmax and Bilinear layers, along with the relevant unit tests.
  7. Wrote unit tests for parameterised SBT, parameterised DeepPoly, gradient- and BBPS-based neuron scores, branching points, symbolic bound maps and the INVPROP/PMNR algorithms.
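
For context on item 1, here is a minimal sketch of what parameterised SBT means for ReLU. The struct and function names are illustrative, not Marabou's actual API: an unfixed ReLU with input range [l, u], l < 0 < u, admits a whole family of sound symbolic lower bounds y >= alpha * x for alpha in [0, 1], while the tightest linear upper bound is the DeepPoly "triangle" chord.

```cpp
#include <algorithm>

// Illustrative sketch (not Marabou's actual API): symbolic bounds for an
// unfixed ReLU neuron y = max(0, x) whose input satisfies l < 0 < u.
struct SymbolicBounds
{
    double lowerCoeff, lowerBias; // y >= lowerCoeff * x + lowerBias
    double upperCoeff, upperBias; // y <= upperCoeff * x + upperBias
};

SymbolicBounds parameterisedReluBounds( double l, double u, double alpha )
{
    SymbolicBounds b;
    // Any alpha in [0, 1] yields a sound lower bound y >= alpha * x.
    // Plain DeepPoly commits to alpha in {0, 1} heuristically; the
    // parameterised variant leaves alpha free so an outer optimizer
    // (here, INVPROP/PMNR) can tune it per neuron.
    b.lowerCoeff = std::clamp( alpha, 0.0, 1.0 );
    b.lowerBias = 0.0;
    // Tightest linear upper bound: the chord from (l, 0) to (u, u).
    b.upperCoeff = u / ( u - l );
    b.upperBias = -u * l / ( u - l );
    return b;
}
```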

Added assignment and store-into-other NLR tests for every activation function.
0-2-0: Implementing layer simulations for every activation function.
0-2-1: Testing layer simulations for every activation function.
0-3-0: Implementing interval arithmetic for all activation functions.
0-3-1: Adding interval arithmetic tests for all activation functions.
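
As an aside on the interval-arithmetic commits: for a monotonically increasing activation, interval propagation is just endpoint evaluation, while non-monotone cases need case splits. A hedged sketch (function names are illustrative):

```cpp
#include <algorithm>
#include <cmath>
#include <utility>

// Illustrative sketch: interval propagation through activations.
// For a monotonically increasing f, f([l, u]) = [f(l), f(u)].
std::pair<double, double> sigmoidInterval( double l, double u )
{
    auto sigmoid = []( double x ) { return 1.0 / ( 1.0 + std::exp( -x ) ); };
    return { sigmoid( l ), sigmoid( u ) };
}

// Non-monotone cases need more care; e.g. for Bilinear y = x1 * x2 the
// exact output interval is the min/max over the four corner products.
std::pair<double, double> bilinearInterval( double l1, double u1,
                                            double l2, double u2 )
{
    double c[] = { l1 * l2, l1 * u2, u1 * l2, u1 * u2 };
    return { *std::min_element( c, c + 4 ), *std::max_element( c, c + 4 ) };
}
```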
0-4-0: Implementing symbolic bound tightening for every activation function.
0-4-1: Adding symbolic bound tightening tests for all activation functions.
0-5-0: Adding rounding constant for LP relaxation propagation.
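
Presumably (an assumption based on the commit title, not on the diff) the rounding constant guards LP-derived bounds against floating-point imprecision in the solver; a sketch, with a hypothetical constant name and value:

```cpp
// Illustrative sketch (constant name and value are hypothetical): after the
// LP solver returns an optimal objective for a neuron's bound, relax it by
// a small constant so solver rounding error cannot make the bound unsound.
const double LP_ROUNDING_CONSTANT = 1e-6;

double soundLowerBound( double lpOptimum )
{
    return lpOptimum - LP_ROUNDING_CONSTANT; // loosen downwards
}

double soundUpperBound( double lpOptimum )
{
    return lpOptimum + LP_ROUNDING_CONSTANT; // loosen upwards
}
```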
0-5-1: Implementing forward-backward LP propagation algorithm for all activation functions.
0-5-2: Adding forward-backward LP propagation tests for all activation functions.
0-6-0: Adding MILP bound types for two bound-backpropagation algorithms; adjusting Engine accordingly.
0-6-1: Adjusting Options for the two new MILP bound types.
0-6-2: Introducing Polygonal Tightenings (output type of two new algorithms).
0-6-3: Adjusting NLR, LayerOwner for Polygonal Tightenings.
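
For intuition, a polygonal tightening is a linear constraint over several neurons, generalising the single-neuron bounds produced by ordinary SBT. A minimal sketch (the field names are illustrative, not the actual Marabou types):

```cpp
#include <map>
#include <utility>

// Illustrative sketch: a polygonal tightening is a constraint of the form
//   sum over (layer, neuron) of coeff * x_neuron >= value   (LB type)
// or the analogous <= constraint (UB type); INVPROP/PMNR optimize `value`.
struct PolygonalTightening
{
    enum Type { LB, UB };
    // (layer index, neuron index) -> coefficient
    std::map<std::pair<unsigned, unsigned>, double> coefficients;
    double value;
    Type type;
};
```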
0-6-4: Introducing parameterised bounds in LPFormulator (two new algorithms will optimize value of parameter).
0-6-5: Current state of PreimageApproximation + cleanup + corrections.
0-6-6: New constants for PreimageApproximation (random seeds).
0-6-7: TEMPORARY CHANGE - use boost v1.85 for Differential Evolution algorithm (gradient-free optimization for PreimageApproximation).
0-6-8: Revert last change.
0-6-9: Adapt DifferentialEvolution from Boost v1.85.
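
For readers unfamiliar with it, differential evolution is a gradient-free population method: each candidate is perturbed by a scaled difference of two other candidates, and the trial replaces it only if it improves the objective. A self-contained DE/rand/1/bin sketch (unrelated to Boost's actual interface; all names and defaults here are illustrative):

```cpp
#include <algorithm>
#include <functional>
#include <random>
#include <vector>

// Illustrative DE/rand/1/bin sketch: minimize f over the box [lo, hi]^dim.
std::vector<double> differentialEvolution(
    const std::function<double( const std::vector<double> & )> &f,
    unsigned dim, double lo, double hi, unsigned popSize = 32,
    unsigned generations = 200, double F = 0.8, double CR = 0.9,
    unsigned seed = 42 )
{
    std::mt19937 rng( seed );
    std::uniform_real_distribution<double> unit( 0.0, 1.0 );
    std::uniform_real_distribution<double> box( lo, hi );
    std::uniform_int_distribution<unsigned> pick( 0, popSize - 1 );
    std::uniform_int_distribution<unsigned> pickDim( 0, dim - 1 );

    // Random initial population and its costs.
    std::vector<std::vector<double>> pop( popSize, std::vector<double>( dim ) );
    std::vector<double> cost( popSize );
    for ( unsigned i = 0; i < popSize; ++i )
    {
        for ( double &x : pop[i] )
            x = box( rng );
        cost[i] = f( pop[i] );
    }

    for ( unsigned g = 0; g < generations; ++g )
    {
        for ( unsigned i = 0; i < popSize; ++i )
        {
            // Pick three distinct members other than i.
            unsigned a, b, c;
            do { a = pick( rng ); } while ( a == i );
            do { b = pick( rng ); } while ( b == i || b == a );
            do { c = pick( rng ); } while ( c == i || c == a || c == b );

            // Mutation + binomial crossover: perturb by a scaled difference.
            std::vector<double> trial = pop[i];
            unsigned forced = pickDim( rng ); // mutate at least one coordinate
            for ( unsigned d = 0; d < dim; ++d )
                if ( d == forced || unit( rng ) < CR )
                    trial[d] = std::clamp(
                        pop[a][d] + F * ( pop[b][d] - pop[c][d] ), lo, hi );

            // Greedy selection: keep the trial only if it improves.
            double trialCost = f( trial );
            if ( trialCost < cost[i] )
            {
                pop[i] = trial;
                cost[i] = trialCost;
            }
        }
    }

    unsigned best = std::min_element( cost.begin(), cost.end() ) - cost.begin();
    return pop[best];
}
```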
0-6-10: Current state of PreimageApproximation (EstimateVolume implemented).
0-6-11: Add constants for PreimageApproximation.
0-6-12: Current state of PreimageApproximation (implement coordinate descent).
0-6-13: Timing PreimageApproximation with coordinate descent (very slow).
0-6-14: Small Fix (same as 0-5---4).
- Apply 0-5--6 fixes (Backward-Forward Algorithm branch) to this branch.
- Add new optimization algorithms for PreimageApproximation: Coordinate Descent, Gradient Descent, Adam Optimizer (the Adam update is sketched below).
- Change the default algorithm (Adam) and its parameters to reduce runtime.
- Apply clang-format to all changed files.
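
For reference, a minimal sketch of the Adam update used as the new default. This is standard Adam (Kingma & Ba, 2014); the hyperparameter values shown are the usual defaults, not necessarily the ones tuned in this PR:

```cpp
#include <cmath>
#include <vector>

// Standard Adam update (Kingma & Ba, 2014). m and v are persistent
// first/second moment estimates; t is the 1-based step count.
void adamStep( std::vector<double> &theta, const std::vector<double> &grad,
               std::vector<double> &m, std::vector<double> &v, unsigned t,
               double lr = 1e-3, double beta1 = 0.9, double beta2 = 0.999,
               double eps = 1e-8 )
{
    for ( size_t i = 0; i < theta.size(); ++i )
    {
        m[i] = beta1 * m[i] + ( 1.0 - beta1 ) * grad[i];
        v[i] = beta2 * v[i] + ( 1.0 - beta2 ) * grad[i] * grad[i];
        double mHat = m[i] / ( 1.0 - std::pow( beta1, t ) ); // bias-corrected
        double vHat = v[i] / ( 1.0 - std::pow( beta2, t ) );
        theta[i] -= lr * mHat / ( std::sqrt( vHat ) + eps );
    }
}
```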
Small fix for bug in SumOfInfeasibilitiesManager::proposePhasePatternU… - Merge idan0610's fork to solve regression test N273 failing due to fixed Sign layer splitting (MarabouError 7).
…ption, global constants, function skeletons and implement countNonlinearLayers, addPolygonalTighteningsToLpRelaxation.
…ounds using DeepPoly module.
  2. Implement OptimizeSingleParameterisedPolygonalTightening and most of selectConstraints, getParameterisdPolygonalTighteningLowerBound.
  3. Move some bound propagation functions and helpers from Layer, LPFormulator to NetworkLevelReasoner.
  4. Delete duplicate helpers from DeepPolySoftmaxElement and replace calls to their copies in Layer.
…or MILP_SOLVER_BOUND_TIGHTENING_TYPE.
  1. Extending initializeSymbolicBoundsMaps using DeepPoly module.
  2. Implement generatePolygonalTightenings, getParameterisdPolygonalTighteningLowerBound and helper functions.
  3. Corrections in Layer, OptionParser.
  4. Creating new test file Test_PMNR.h.
…(in DeepPoly module), required by PMNR's selectConstraints.
  2. Implemented unit tests for parameterisedSymbolicBoundPropagation and initializeSymbolicBounds, in Test_NetworkLevelReasoner.h and nlr/tests/Test_NetworkLevelReasoner.h.
  3. Various corrections to PMNR's generatePolygonalTightening, getParameterisdPolygonalTighteningLowerBound and related functions (in NetworkLevelReasoner.cpp).
  4. Renamings in Layer, LPFormulator, NetworkLevelReasoner for consistency.
  5. A few corrections to nlr/tests/Test_NetworkLevelReasoner.h calculations (documentation).
  6. One correction to Sigmoid regular SBT: changed if ( FloatUtils::areEqual( FloatUtils::round( sourceLb ), FloatUtils::round( sourceUb ) ) ) to if ( FloatUtils::areEqual( sourceLb, sourceUb ) ) in Layer.cpp's computeSymbolicBoundsForSigmoid.
  7. One correction to DeepPoly module: added the line _phaseFixed[i] = phaseFixed; in DeepPolyMaxPoolElement.cpp's execute(), which is required by its storePredecessorSymbolicBounds() (otherwise the map _phaseFixed would be used without being updated).
…unctions in DeepPoly module, restored Test_PMNR.h, removed Gurobi guard for Test_PMNRSelection.h, resumed implementing selectConstraints/generatePolygonalTightenings and related functions.
SBT for Sigmoid: Corrected criterion for fixed Sigmoid neuron.
SBT for Softmax: Symbolic bounds now stored correctly in _symbolicLb, _symbolicUb.
SBT for Bilinear: Fixed calculation of symbolic lower/upper bias; fixed the mistaken assumption that both source neurons come from the same layer.
Parameterized SBT for Bilinear: Fixed calculation of symbolic lower/upper bias; fixed the mistaken assumption that both source neurons come from the same layer.
LP relaxation for Bilinear: Fixed the mistaken assumption that both source neurons come from the same layer.
Parameterized LP relaxation for Bilinear: Fixed the mistaken assumption that both source neurons come from the same layer.
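
For context on these bilinear fixes: the standard linear relaxation of y = x1 * x2 over a box is the McCormick envelope, and nothing in it requires x1 and x2 to come from the same layer. With x1 in [l1, u1] and x2 in [l2, u2], the four valid inequalities (shown with their derivations; this is the textbook envelope, not the Marabou code):

```
y >= l2*x1 + l1*x2 - l1*l2      (from (x1 - l1)(x2 - l2) >= 0)
y >= u2*x1 + u1*x2 - u1*u2      (from (u1 - x1)(u2 - x2) >= 0)
y <= u2*x1 + l1*x2 - l1*u2      (from (x1 - l1)(u2 - x2) >= 0)
y <= l2*x1 + u1*x2 - u1*l2      (from (u1 - x1)(x2 - l2) >= 0)
```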
Minor changes to the PreimageApproximation optimization function: size_t -> unsigned, define sign variable at start of function, use the Vector( double, unsigned ) constructor and std::min/std::max.
…tion#889), plus:
  1. Various corrections in generatePolygonalTightening, Extended INVPROP and hyperparameters, initializeSymbolicBounds (DeepPoly module), the gradient-based PMNR heuristic, and unit tests.
  2. Implemented calculation of branch symbolic bounds in NLR.
  3. Implemented branching after Extended INVPROP to derive stronger bounds.
  4. Implemented detection of some infeasible branch combinations during INVPROP.
  5. Implemented heuristic generation of branching points and tightening loss.
  6. Wrote unit tests for parameterised SBT/DeepPoly in Test_NetworkLevelReasoner/Test_DeepPolyAnalysis.h.
  7. Implemented BBPS-based heuristic for PMNR.
  8. Wrote Test_PMNRSelection.h, a unit test file for symbolic bound maps, branching points and PMNR neuron scores.
  9. Wrote Test_PMNR.h, a unit test file for the backward-invprop, backward-pmnr-(random/gradient/bbps) bound tightening options.
…ulation modified.
  2. Alpha optimization removed from Extended INVPROP.
  3. PreimageApproximation is run before Extended INVPROP.
  4. BBPS branching candidate count increased to 100.
  5. Removed milp-tightening options backward-invprop, backward-pmnr-random, backward-pmnr-gradient; renamed backward-pmnr-bbps to backward-pmnr.
  6. Updated test files Test_PMNRSelection.h, Test_LPRelaxation.h, Test_PMNR.h.
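
To illustrate why branching (items 2-3 of the earlier commit, and the candidates in item 4 here) strengthens bounds: splitting a neuron's range at a branching point, bounding each branch separately, and combining over branches is never looser, and usually strictly tighter, than the unsplit bound. A schematic sketch, where the callback stands in for a full bound computation such as an INVPROP run:

```cpp
#include <algorithm>
#include <functional>

// Schematic sketch: strengthen a lower bound on some objective by splitting
// a selected neuron's range [l, u] at branching point p and bounding each
// branch separately. boundOnBranch stands in for a full bound computation
// (e.g. an INVPROP run) restricted to the given sub-range of that neuron.
double branchedLowerBound( double l, double u, double p,
                           const std::function<double( double, double )> &boundOnBranch )
{
    double left = boundOnBranch( l, p );  // valid when the neuron is in [l, p]
    double right = boundOnBranch( p, u ); // valid when the neuron is in [p, u]
    // The full range is the union of the branches, so the minimum is sound;
    // each branch is computed on a smaller domain, so the combination is
    // never looser than bounding [l, u] directly.
    return std::min( left, right );
}
```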
Manually Sync Fork + Apply Results of PMNR Optimization Benchmarks
…d function declarations from LPFormulator.h. 3. Removing redundant #include "LPFormulator.h" from NetworkLevelReasoner.h to solve Release build failure on clang++.