Conversation
```python
return super().__new__(cls, to_float32(float(value)))
```

```python
class Int32(BaseConstrainedInt):
```
We need this not to constrain the data per se (pydantic can deal with that via e.g. `conint`), but to encode to the correct proto type: the information from e.g. `conint` does not carry over to the generated Python `int`, so in many cases you cannot discern which proto integer type it was, or which it needs to be converted into.
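A minimal sketch of the idea being described: subclassing `int` gives a value that behaves like a normal integer but still remembers which proto wire type it maps to. The names `BaseConstrainedInt` and `Int32` come from the diff; the bounds, the `proto_type` attribute, and the validation details here are assumptions for illustration, not the actual implementation.

```python
class BaseConstrainedInt(int):
    """Int subclass that remembers which proto integer type it encodes to."""

    proto_type: str = "int64"  # hypothetical default
    min_value: int = -(2**63)
    max_value: int = 2**63 - 1

    def __new__(cls, value):
        value = int(value)
        if not cls.min_value <= value <= cls.max_value:
            msg = f"{value} out of range for {cls.proto_type}"
            raise ValueError(msg)
        return super().__new__(cls, value)


class Int32(BaseConstrainedInt):
    proto_type = "int32"
    min_value = -(2**31)
    max_value = 2**31 - 1


x = Int32(7)
assert isinstance(x, int)       # behaves like a plain int
assert x.proto_type == "int32"  # but the proto type is still discoverable
```

Because `Int32` is still an `int`, downstream code (and pydantic validation) keeps working, while the proto encoder can branch on `proto_type`.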
```python
class ResolvedType(NamedTuple):
    fully_qualified_name: str
    ast_node: MessageAdapter | ast.Enum | None = None

    @property
    def is_enum(self):
        return isinstance(self.ast_node, ast.Enum)

    @property
    def is_message(self):
        return isinstance(self.ast_node, MessageAdapter)

    def __str__(self):
        return self.fully_qualified_name
```
This facilitates checking what type we're dealing with as we traverse parents.
…est_scaffold_protocol
```python
for cls in BaseModel.__subclasses__():
    cls.model_rebuild()
```
The same goes for this module: the performative models (generated under `tests/performatives.py`) also need to update forward references in order for hypothesis to generate strategies from them correctly using `st.from_type`.
```python
# Point PYTHONPATH to the temporary project root so generated modules are discoverable
env = os.environ.copy()
env["PYTHONPATH"] = str(repo_root)

command = ["adev", "-v", "scaffold", "protocol", str(protocol_spec)]
result = subprocess.run(command, env=env, check=False, text=True, capture_output=True)
if result.returncode != 0:
    msg = f"Protocol scaffolding failed: {result.stderr}"
    raise ValueError(msg)

assert protocol_outpath.exists()

test_dir = protocol_outpath / "tests"
command = ["pytest", str(test_dir), "-vv", "-s", "--tb=long", "-p", "no:warnings"]
result = subprocess.run(
    command,
    env=env,
    check=False,
    text=True,
    capture_output=True,
)

assert result.returncode == 0, f"Failed pytest on generated protocol: {result.stderr}"
```
Before, I passed the env with the custom PYTHONPATH only to the last subprocess call, and the tests passed. However, no code was generated in the proper location (!), because I didn't pass the custom env to the `adev -v scaffold protocol` call. This resulted in a situation where no tests were run in the subprocess, causing this test to pass without actually running any tests. We should find a better way to assert this, e.g. by parsing the subprocess stdout and comparing the reported output to the expected result. For now, though, the `assert protocol_outpath.exists()` ensures that the expected outpath was generated (which fails when one does not pass the env).
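One hedged way to implement the stricter check suggested above, i.e. parsing pytest's summary line instead of trusting the return code alone. The `count_passed` helper is hypothetical (not in the diff), and the regex is an assumption about pytest's human-readable output format:

```python
import re


def count_passed(pytest_stdout: str) -> int:
    """Extract the number of passed tests from pytest's summary line.

    Returns 0 when no tests were collected, which is exactly the silent
    failure mode described above: returncode 0 but nothing actually ran.
    """
    match = re.search(r"(\d+) passed", pytest_stdout)
    return int(match.group(1)) if match else 0


# After the pytest subprocess call, one could then assert:
#   assert count_passed(result.stdout) > 0, "pytest reported no passing tests"
assert count_passed("===== 12 passed in 0.34s =====") == 12
assert count_passed("no tests ran in 0.01s") == 0
```

A more robust alternative would be pytest's machine-readable output (e.g. `--junitxml`), but parsing the summary line already catches the "zero tests collected" case.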
We have the subprocess task helper for this, used quite extensively throughout the repo.
It gives you access to all the env var handling etc. and provides the abstraction.
I think there's a small mix-up in how loading and executing (newly generated) code occurs in the context of an `isolated_filesystem`:

- We run the codegen inside an isolated temp directory (e.g. `/tmp/...`) that isn't on the default `PYTHONPATH`.
- Spawning a separate Python process guarantees zero cached imports; otherwise you'll end up with stale code (cache invalidation truly is one of the two hard things in computer science; more details in the `isolated_filesystem` contextmanager issue #609).
- When you switch into the temp directory, you must also update `PYTHONPATH` (or use `-m` with the correct module path) so that the newly written files become importable in that subprocess.
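The `PYTHONPATH` point can be demonstrated in miniature (the temp-dir module name and its contents here are invented for illustration): a fresh interpreter only sees a generated file if the environment handed to the subprocess is extended to include its directory.

```python
import os
import subprocess
import sys
import tempfile
from pathlib import Path

# Write a throwaway module into a temp dir that is NOT on the default PYTHONPATH,
# then make it importable in a fresh interpreter by extending the env.
with tempfile.TemporaryDirectory() as tmp:
    Path(tmp, "generated_mod.py").write_text("ANSWER = 42\n")

    env = os.environ.copy()
    env["PYTHONPATH"] = tmp  # without this line, the import below fails

    result = subprocess.run(
        [sys.executable, "-c", "import generated_mod; print(generated_mod.ANSWER)"],
        env=env,
        text=True,
        capture_output=True,
        check=False,
    )

assert result.returncode == 0
assert result.stdout.strip() == "42"
```

The fresh interpreter also sidesteps the stale-import problem: nothing from the parent process's module cache leaks into the subprocess.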
…`ProtocolSpec.template_context` cached property
```python
protocol_name = self.protocol.metadata.name
protocol_author = self.protocol.metadata.author
speech_acts = list(self.protocol.metadata.speech_acts)
roles = list(self.protocol.interaction_model.roles)
```
This isn't a key on the original protocol specs?
Well, this is the custom data model for a protocol spec, consisting of 3 YAML parts. I've simply named these individual sections (as per the pydantic model here).
```toml
exclude = "/(\n \\.eggs\n | \\.git\n | \\.hg\n | \\.mypy_cache\n | \\.tox\n | \\.venv\n | _build\n | buck-out\n | build\n | dist\n)/\n"

[tool.poetry.dependencies]
python = ">=3.10,<3.14"
```
We do not want to drop this support.
Please read the comments I leave on my PRs.
```toml
openapi-spec-validator = "0.2.8"
disutils = "^1.4.32.post2"
setuptools = "^75.8.0"
proto-schema-parser = "^1.5.0"
```
No. I think the bump to include 3.13 previously was an error in a previous PR of mine (see git blame)
```python
@pytest.fixture(scope="module")
def module_scoped_dummy_agent_tim() -> Path:
```
So now there are like 3 different agents?
Why?
Because of the time it takes to create this `isolated_filesystem`. See the difference in run times (1 and 3) in this comment.
Why not remove the unused ones?
A re-implementation of the protocol scaffolder:

- based not on the Python `ast`, but on a protobuf `ast` (`protodantic.py`, test cases)
- models mirror the `proto` definition one-to-one, enabling precise bidirectional translation
- `hypothesis` strategy generation for both pydantic models derived from protobuf and protocol performatives
- writes `hypothesis` tests in an isolated filesystem, then launches a `subprocess` to execute the generated tests via `pytest`