2 changes: 1 addition & 1 deletion .sdk-version
@@ -1 +1 @@
v1.89.2
v1.89.4
1 change: 1 addition & 0 deletions README.md
@@ -401,6 +401,7 @@ Class | Method | HTTP request | Description
- [MetaModel](docs/MetaModel.md)
- [ModelName](docs/ModelName.md)
- [ModelsResponse](docs/ModelsResponse.md)
- [NameConfidence](docs/NameConfidence.md)
- [NearestNeighbor](docs/NearestNeighbor.md)
- [NetworkOverviewDns](docs/NetworkOverviewDns.md)
- [NetworkOverviewDnsAnswer](docs/NetworkOverviewDnsAnswer.md)
5 changes: 4 additions & 1 deletion docs/AnalysisFunctionMatchingRequest.md
@@ -5,8 +5,11 @@

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**min_similarity** | **float** | Minimum similarity expected for a match, default is 0.9 | [optional] [default to 0.9]
**min_similarity** | **float** | Minimum similarity expected for a match as a percentage, default is 90 | [optional] [default to 90.0]
**filters** | [**FunctionMatchingFilters**](FunctionMatchingFilters.md) | | [optional]
**results_per_function** | **int** | Maximum number of matches to return per function, default is 1, max is 10 | [optional] [default to 1]
**page** | **int** | Page number for paginated results, default is 1 (first page) | [optional] [default to 1]
**page_size** | **int** | Number of functions to return per page, default is 0 (all functions), max is 1000 | [optional] [default to 0]

## Example

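For context, a minimal sketch (not part of this PR) of constructing the updated AnalysisFunctionMatchingRequest with the percentage-scale threshold and the new paging knobs; all values are illustrative:

```python
from revengai.models.analysis_function_matching_request import AnalysisFunctionMatchingRequest

# min_similarity is now a percentage in [0, 100], not a fraction in [0, 1]
request = AnalysisFunctionMatchingRequest(
    min_similarity=92.5,      # roughly 0.925 on the old scale
    results_per_function=5,   # up to 5 candidate matches per function (max 10)
    page=1,                   # first page of paginated results
    page_size=100,            # 100 functions per page; 0 means all functions
)

print(request.to_json())
```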
4 changes: 2 additions & 2 deletions docs/AutoUnstripRequest.md
@@ -5,9 +5,9 @@

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**min_similarity** | **float** | Minimum similarity expected for a match, default is 0.9 | [optional] [default to 0.9]
**min_similarity** | **float** | Minimum similarity expected for a match as a percentage, default is 90 | [optional] [default to 90.0]
**apply** | **bool** | Whether to apply the matched function names to the target binary, default is False | [optional] [default to False]
**confidence_threshold** | **float** | Confidence threshold for applying function names, default is 0.9 | [optional] [default to 0.9]
**confidence_threshold** | **float** | Confidence threshold for applying function names as a percentage, default is 90 | [optional] [default to 90.0]
**min_group_size** | **int** | Minimum number of matching functions required to consider for a match, default is 10 | [optional] [default to 10]

## Example
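A comparable sketch (illustrative values, not part of this PR) for the updated AutoUnstripRequest; both thresholds moved from the 0-1 scale to percentages, so a previous 0.9 becomes 90:

```python
from revengai.models.auto_unstrip_request import AutoUnstripRequest

request = AutoUnstripRequest(
    min_similarity=90.0,        # percentage; equivalent to 0.9 on the old scale
    apply=True,                 # apply matched names to the target binary
    confidence_threshold=95.0,  # only apply names scored at 95% confidence or higher
    min_group_size=10,          # require at least 10 matching functions
)

print(request.to_dict())
```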
2 changes: 2 additions & 0 deletions docs/FunctionMatchingBatchResponse.md
@@ -9,6 +9,8 @@ Name | Type | Description | Notes
**status** | **str** | | [optional]
**total_time** | **int** | | [optional]
**error_message** | **str** | | [optional]
**current_page** | **int** | | [optional]
**total_pages** | **int** | | [optional]
**matches** | [**List[FunctionMatchingResultWithBestMatch]**](FunctionMatchingResultWithBestMatch.md) | |

## Example
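A rough sketch of reading the new pagination fields from a batch response. The payload is hand-written for illustration; fields not touched in this hunk (such as progress) are omitted and assumed optional:

```python
from revengai.models.function_matching_batch_response import FunctionMatchingBatchResponse

# illustrative payload; a real response carries populated match entries
payload = {
    "status": "complete",
    "current_page": 1,
    "total_pages": 3,
    "matches": [
        {"function_id": 42, "matched_functions": []},
    ],
}

response = FunctionMatchingBatchResponse.from_dict(payload)
print(f"page {response.current_page} of {response.total_pages}")
for result in response.matches:
    print(result.function_id, len(result.matched_functions))
```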
5 changes: 4 additions & 1 deletion docs/FunctionMatchingRequest.md
@@ -7,8 +7,11 @@ Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**model_id** | **int** | ID of the model used for function matching, used to determine the embedding model |
**function_ids** | **List[int]** | IDs of functions to find matches for, must be at least one function ID |
**min_similarity** | **float** | Minimum similarity expected for a match, default is 0.9 | [optional] [default to 0.9]
**min_similarity** | **float** | Minimum similarity expected for a match as a percentage, default is 90 | [optional] [default to 90.0]
**filters** | [**FunctionMatchingFilters**](FunctionMatchingFilters.md) | | [optional]
**results_per_function** | **int** | Maximum number of matches to return per function, default is 1, max is 10 | [optional] [default to 1]
**page** | **int** | Page number for paginated results, default is 1 (first page) | [optional] [default to 1]
**page_size** | **int** | Number of functions to return per page, default is 0 (all functions), max is 1000 | [optional] [default to 0]

## Example

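A minimal sketch (not part of this PR) of the per-function request with the new fields; model_id and function_ids are placeholders:

```python
from revengai.models.function_matching_request import FunctionMatchingRequest

request = FunctionMatchingRequest(
    model_id=4,                 # placeholder embedding-model ID
    function_ids=[1001, 1002],  # at least one function ID is required
    min_similarity=90.0,        # percentage scale, 0-100
    results_per_function=3,     # up to 3 candidates per function (max 10)
    page=1,
    page_size=500,              # 0 would return all functions in one page
)

print(request.to_json())
```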
5 changes: 2 additions & 3 deletions docs/FunctionMatchingResultWithBestMatch.md
@@ -6,9 +6,8 @@
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**function_id** | **int** | |
**matched_function** | [**MatchedFunction**](MatchedFunction.md) | |
**suggested_name** | **str** | | [optional]
**suggested_name_confidence** | **float** | | [optional]
**matched_functions** | [**List[MatchedFunction]**](MatchedFunction.md) | |
**confidences** | [**List[NameConfidence]**](NameConfidence.md) | | [optional]

## Example

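Because the single matched_function and suggested_name fields are replaced by the list-valued matched_functions and confidences, callers now iterate instead of reading one field. A sketch with made-up values (not part of this PR):

```python
from revengai.models.function_matching_result_with_best_match import FunctionMatchingResultWithBestMatch
from revengai.models.name_confidence import NameConfidence

# illustrative result; a real one also carries populated MatchedFunction entries
result = FunctionMatchingResultWithBestMatch(
    function_id=42,
    matched_functions=[],
    confidences=[NameConfidence(name="curl_easy_init", confidence=96.5)],
)

# confidences is optional, so guard against None before iterating
for nc in result.confidences or []:
    print(f"{nc.name}: {nc.confidence:.1f}%")

for match in result.matched_functions:
    print(match.sha_256_hash, match.similarity, match.confidence)
```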
1 change: 1 addition & 0 deletions docs/MatchedFunction.md
@@ -15,6 +15,7 @@ Name | Type | Description | Notes
**sha_256_hash** | **str** | |
**analysis_id** | **int** | |
**similarity** | **float** | | [optional]
**confidence** | **float** | | [optional]

## Example

30 changes: 30 additions & 0 deletions docs/NameConfidence.md
@@ -0,0 +1,30 @@
# NameConfidence


## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**name** | **str** | The suggested function name |
**confidence** | **float** | Confidence score as a percentage |

## Example

```python
from revengai.models.name_confidence import NameConfidence

# an illustrative JSON payload; both fields are required
json = '{"name": "suggested_function_name", "confidence": 95.0}'
# create an instance of NameConfidence from a JSON string
name_confidence_instance = NameConfidence.from_json(json)
# print the JSON string representation of the object
print(name_confidence_instance.to_json())

# convert the object into a dict
name_confidence_dict = name_confidence_instance.to_dict()
# create an instance of NameConfidence from a dict
name_confidence_from_dict = NameConfidence.from_dict(name_confidence_dict)
```
[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)


4 changes: 3 additions & 1 deletion revengai/__init__.py
@@ -13,7 +13,7 @@
""" # noqa: E501


__version__ = "v1.89.2"
__version__ = "v1.89.4"

# Define package exports
__all__ = [
@@ -285,6 +285,7 @@
"MetaModel",
"ModelName",
"ModelsResponse",
"NameConfidence",
"NearestNeighbor",
"NetworkOverviewDns",
"NetworkOverviewDnsAnswer",
@@ -633,6 +634,7 @@
from revengai.models.meta_model import MetaModel as MetaModel
from revengai.models.model_name import ModelName as ModelName
from revengai.models.models_response import ModelsResponse as ModelsResponse
from revengai.models.name_confidence import NameConfidence as NameConfidence
from revengai.models.nearest_neighbor import NearestNeighbor as NearestNeighbor
from revengai.models.network_overview_dns import NetworkOverviewDns as NetworkOverviewDns
from revengai.models.network_overview_dns_answer import NetworkOverviewDnsAnswer as NetworkOverviewDnsAnswer
2 changes: 1 addition & 1 deletion revengai/api_client.py
@@ -90,7 +90,7 @@ def __init__(
self.default_headers[header_name] = header_value
self.cookie = cookie
# Set default User-Agent.
self.user_agent = 'OpenAPI-Generator/v1.89.2/python'
self.user_agent = 'OpenAPI-Generator/v1.89.4/python'
self.client_side_validation = configuration.client_side_validation

def __enter__(self):
4 changes: 2 additions & 2 deletions revengai/configuration.py
@@ -529,8 +529,8 @@ def to_debug_report(self) -> str:
return "Python SDK Debug Report:\n"\
"OS: {env}\n"\
"Python Version: {pyversion}\n"\
"Version of the API: v1.89.2\n"\
"SDK Package Version: v1.89.2".\
"Version of the API: v1.89.4\n"\
"SDK Package Version: v1.89.4".\
format(env=sys.platform, pyversion=sys.version)

def get_host_settings(self) -> List[HostSetting]:
1 change: 1 addition & 0 deletions revengai/models/__init__.py
@@ -251,6 +251,7 @@
from revengai.models.meta_model import MetaModel
from revengai.models.model_name import ModelName
from revengai.models.models_response import ModelsResponse
from revengai.models.name_confidence import NameConfidence
from revengai.models.nearest_neighbor import NearestNeighbor
from revengai.models.network_overview_dns import NetworkOverviewDns
from revengai.models.network_overview_dns_answer import NetworkOverviewDnsAnswer
14 changes: 10 additions & 4 deletions revengai/models/analysis_function_matching_request.py
@@ -27,9 +27,12 @@ class AnalysisFunctionMatchingRequest(BaseModel):
"""
AnalysisFunctionMatchingRequest
""" # noqa: E501
min_similarity: Optional[Union[Annotated[float, Field(le=1.0, strict=True, ge=0.0)], Annotated[int, Field(le=1, strict=True, ge=0)]]] = Field(default=0.9, description="Minimum similarity expected for a match, default is 0.9")
min_similarity: Optional[Union[Annotated[float, Field(le=100.0, strict=True, ge=0.0)], Annotated[int, Field(le=100, strict=True, ge=0)]]] = Field(default=90.0, description="Minimum similarity expected for a match as a percentage, default is 90")
filters: Optional[FunctionMatchingFilters] = None
__properties: ClassVar[List[str]] = ["min_similarity", "filters"]
results_per_function: Optional[Annotated[int, Field(le=10, strict=True, ge=1)]] = Field(default=1, description="Maximum number of matches to return per function, default is 1, max is 10")
page: Optional[Annotated[int, Field(strict=True, ge=1)]] = Field(default=1, description="Page number for paginated results, default is 1 (first page)")
page_size: Optional[Annotated[int, Field(le=1000, strict=True, ge=0)]] = Field(default=0, description="Number of functions to return per page, default is 0 (all functions), max is 1000")
__properties: ClassVar[List[str]] = ["min_similarity", "filters", "results_per_function", "page", "page_size"]

model_config = ConfigDict(
populate_by_name=True,
@@ -90,8 +93,11 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]:
return cls.model_validate(obj)

_obj = cls.model_validate({
"min_similarity": obj.get("min_similarity") if obj.get("min_similarity") is not None else 0.9,
"filters": FunctionMatchingFilters.from_dict(obj["filters"]) if obj.get("filters") is not None else None
"min_similarity": obj.get("min_similarity") if obj.get("min_similarity") is not None else 90.0,
"filters": FunctionMatchingFilters.from_dict(obj["filters"]) if obj.get("filters") is not None else None,
"results_per_function": obj.get("results_per_function") if obj.get("results_per_function") is not None else 1,
"page": obj.get("page") if obj.get("page") is not None else 1,
"page_size": obj.get("page_size") if obj.get("page_size") is not None else 0
})
return _obj

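The tightened Field bounds above make the 0-100 scale explicit at validation time; a legacy 0-1 value still validates but now means a fraction of one percent. A quick sketch of the behaviour, assuming the pydantic v2 semantics of the generated model:

```python
from pydantic import ValidationError

from revengai.models.analysis_function_matching_request import AnalysisFunctionMatchingRequest

# accepted: 90 sits inside the new 0-100 bounds
ok = AnalysisFunctionMatchingRequest(min_similarity=90)
print(ok.min_similarity)

# also accepted, but now interpreted as 0.9%, not 90%; callers migrating from
# the old 0-1 scale should multiply their thresholds by 100
legacy = AnalysisFunctionMatchingRequest(min_similarity=0.9)
print(legacy.min_similarity)

# rejected: values above 100 violate the le=100 constraint
try:
    AnalysisFunctionMatchingRequest(min_similarity=150)
except ValidationError as exc:
    print(exc.errors()[0]["type"])
```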
8 changes: 4 additions & 4 deletions revengai/models/auto_unstrip_request.py
@@ -26,9 +26,9 @@ class AutoUnstripRequest(BaseModel):
"""
AutoUnstripRequest
""" # noqa: E501
min_similarity: Optional[Union[Annotated[float, Field(le=1.0, strict=True, ge=0.0)], Annotated[int, Field(le=1, strict=True, ge=0)]]] = Field(default=0.9, description="Minimum similarity expected for a match, default is 0.9")
min_similarity: Optional[Union[Annotated[float, Field(le=100.0, strict=True, ge=0.0)], Annotated[int, Field(le=100, strict=True, ge=0)]]] = Field(default=90.0, description="Minimum similarity expected for a match as a percentage, default is 90")
apply: Optional[StrictBool] = Field(default=False, description="Whether to apply the matched function names to the target binary, default is False")
confidence_threshold: Optional[Union[Annotated[float, Field(le=1.0, strict=True, ge=0.0)], Annotated[int, Field(le=1, strict=True, ge=0)]]] = Field(default=0.9, description="Confidence threshold for applying function names, default is 0.9")
confidence_threshold: Optional[Union[Annotated[float, Field(le=100.0, strict=True, ge=0.0)], Annotated[int, Field(le=100, strict=True, ge=0)]]] = Field(default=90.0, description="Confidence threshold for applying function names as a percentage, default is 90")
min_group_size: Optional[Annotated[int, Field(le=20, strict=True, ge=1)]] = Field(default=10, description="Minimum number of matching functions required to consider for a match, default is 10")
__properties: ClassVar[List[str]] = ["min_similarity", "apply", "confidence_threshold", "min_group_size"]

@@ -83,9 +83,9 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]:
return cls.model_validate(obj)

_obj = cls.model_validate({
"min_similarity": obj.get("min_similarity") if obj.get("min_similarity") is not None else 0.9,
"min_similarity": obj.get("min_similarity") if obj.get("min_similarity") is not None else 90.0,
"apply": obj.get("apply") if obj.get("apply") is not None else False,
"confidence_threshold": obj.get("confidence_threshold") if obj.get("confidence_threshold") is not None else 0.9,
"confidence_threshold": obj.get("confidence_threshold") if obj.get("confidence_threshold") is not None else 90.0,
"min_group_size": obj.get("min_group_size") if obj.get("min_group_size") is not None else 10
})
return _obj
16 changes: 15 additions & 1 deletion revengai/models/function_matching_batch_response.py
@@ -30,8 +30,10 @@ class FunctionMatchingBatchResponse(BaseModel):
status: Optional[StrictStr] = None
total_time: Optional[StrictInt] = None
error_message: Optional[StrictStr] = None
current_page: Optional[StrictInt] = None
total_pages: Optional[StrictInt] = None
matches: List[FunctionMatchingResultWithBestMatch]
__properties: ClassVar[List[str]] = ["progress", "status", "total_time", "error_message", "matches"]
__properties: ClassVar[List[str]] = ["progress", "status", "total_time", "error_message", "current_page", "total_pages", "matches"]

model_config = ConfigDict(
populate_by_name=True,
@@ -94,6 +96,16 @@ def to_dict(self) -> Dict[str, Any]:
if self.error_message is None and "error_message" in self.model_fields_set:
_dict['error_message'] = None

# set to None if current_page (nullable) is None
# and model_fields_set contains the field
if self.current_page is None and "current_page" in self.model_fields_set:
_dict['current_page'] = None

# set to None if total_pages (nullable) is None
# and model_fields_set contains the field
if self.total_pages is None and "total_pages" in self.model_fields_set:
_dict['total_pages'] = None

return _dict

@classmethod
@@ -110,6 +122,8 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]:
"status": obj.get("status"),
"total_time": obj.get("total_time"),
"error_message": obj.get("error_message"),
"current_page": obj.get("current_page"),
"total_pages": obj.get("total_pages"),
"matches": [FunctionMatchingResultWithBestMatch.from_dict(_item) for _item in obj["matches"]] if obj.get("matches") is not None else None
})
return _obj
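With page/page_size on the request and current_page/total_pages on the response, clients can walk large match sets page by page. In the sketch below fetch_page is a stand-in for whichever endpoint call the SDK exposes (the endpoint itself is not part of this diff), IDs are placeholders, and response fields outside this hunk are assumed optional:

```python
from revengai.models.function_matching_batch_response import FunctionMatchingBatchResponse
from revengai.models.function_matching_request import FunctionMatchingRequest


def fetch_page(request: FunctionMatchingRequest) -> FunctionMatchingBatchResponse:
    """Stand-in for the real API call; returns a canned single-page response."""
    return FunctionMatchingBatchResponse(
        status="complete",
        current_page=request.page,
        total_pages=1,
        matches=[],
    )


page = 1
while True:
    request = FunctionMatchingRequest(
        model_id=4,
        function_ids=[1001, 1002],
        min_similarity=90.0,
        page=page,
        page_size=100,
    )
    response = fetch_page(request)
    for result in response.matches:
        print(result.function_id, len(result.matched_functions))

    if not response.total_pages or page >= response.total_pages:
        break
    page += 1
```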
14 changes: 10 additions & 4 deletions revengai/models/function_matching_request.py
@@ -29,9 +29,12 @@ class FunctionMatchingRequest(BaseModel):
""" # noqa: E501
model_id: StrictInt = Field(description="ID of the model used for function matching, used to determine the embedding model")
function_ids: List[StrictInt] = Field(description="IDs of functions to find matches for, must be at least one function ID")
min_similarity: Optional[Union[Annotated[float, Field(le=1.0, strict=True, ge=0.0)], Annotated[int, Field(le=1, strict=True, ge=0)]]] = Field(default=0.9, description="Minimum similarity expected for a match, default is 0.9")
min_similarity: Optional[Union[Annotated[float, Field(le=100.0, strict=True, ge=0.0)], Annotated[int, Field(le=100, strict=True, ge=0)]]] = Field(default=90.0, description="Minimum similarity expected for a match as a percentage, default is 90")
filters: Optional[FunctionMatchingFilters] = None
__properties: ClassVar[List[str]] = ["model_id", "function_ids", "min_similarity", "filters"]
results_per_function: Optional[Annotated[int, Field(le=10, strict=True, ge=1)]] = Field(default=1, description="Maximum number of matches to return per function, default is 1, max is 10")
page: Optional[Annotated[int, Field(strict=True, ge=1)]] = Field(default=1, description="Page number for paginated results, default is 1 (first page)")
page_size: Optional[Annotated[int, Field(le=1000, strict=True, ge=0)]] = Field(default=0, description="Number of functions to return per page, default is 0 (all functions), max is 1000")
__properties: ClassVar[List[str]] = ["model_id", "function_ids", "min_similarity", "filters", "results_per_function", "page", "page_size"]

model_config = ConfigDict(
populate_by_name=True,
@@ -94,8 +97,11 @@ def from_dict(cls, obj: Optional[Dict[str, Any]]) -> Optional[Self]:
_obj = cls.model_validate({
"model_id": obj.get("model_id"),
"function_ids": obj.get("function_ids"),
"min_similarity": obj.get("min_similarity") if obj.get("min_similarity") is not None else 0.9,
"filters": FunctionMatchingFilters.from_dict(obj["filters"]) if obj.get("filters") is not None else None
"min_similarity": obj.get("min_similarity") if obj.get("min_similarity") is not None else 90.0,
"filters": FunctionMatchingFilters.from_dict(obj["filters"]) if obj.get("filters") is not None else None,
"results_per_function": obj.get("results_per_function") if obj.get("results_per_function") is not None else 1,
"page": obj.get("page") if obj.get("page") is not None else 1,
"page_size": obj.get("page_size") if obj.get("page_size") is not None else 0
})
return _obj
