
feat: unify numeric operand promotion#139

Open
omsherikar wants to merge 12 commits into arxlang:main from omsherikar:feature/type-unification-135

Conversation

@omsherikar
Contributor

Pull Request description

Centralize numeric operand promotion for binops.
Adds _unify_numeric_operands plus helper tests so ints, floats, and vectors all go through one path before LLVM emission, preventing mismatched widths and scalar/vector handling bugs.

Solves #135.
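
A minimal illustration of the promotion rules this PR centralizes (a sketch using llvmlite types, not code from this PR):

  from llvmlite import ir

  # i32 + float      -> both operands become float (int converted via sitofp)
  # i16 + i32        -> both operands become i32 (narrow int widened via sext)
  # <2 x i32> + i16  -> scalar widened to i32, then splatted to <2 x i32>
  lhs = ir.Constant(ir.IntType(32), 7)
  rhs = ir.Constant(ir.FloatType(), 1.25)
  # after unification, lhs and rhs share the same scalar type and shape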

How to test these changes

  • python -m pytest tests/test_llvmlite_helpers.py -v
  • pre-commit run --files src/irx/builders/llvmliteir.py tests/test_llvmlite_helpers.py

Pull Request checklists

This PR is a:

  • bug-fix
  • new feature
  • maintenance

About this PR:

  • it includes tests.
  • the tests are executed on CI.
  • the tests generate log file(s) (path).
  • pre-commit hooks were executed locally.
  • this PR requires a project documentation update.

Author's checklist:

  • I have reviewed the changes and they contain no misspellings.
  • The code is well commented, especially in the parts that contain more complexity.
  • New and old tests passed locally.

Additional information

N/A

Reviewer's checklist

@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Correctness: signedness is ignored in casts.
    • _cast_value_to_type() always uses sitofp and sext, which is wrong for unsigned integers (will corrupt values). You need sign-awareness from the AST/type system. Suggest adding a signed: bool flag (default True) and using uitofp/zext when unsigned. (L.487)
      def _cast_value_to_type(self, value: ir.Value, target_scalar_ty: ir.Type, signed: bool = True) -> ir.Value:
          """Sign-aware cast of scalars or vectors to the target scalar type."""
          # use builder.uitofp(...) if not signed
          # use builder.zext(...) if not signed
  • Correctness: FP128/X86_FP80 handling is unsafe/incomplete.
    • FP128Type is referenced directly and may be undefined -> NameError at runtime. Use hasattr(ir, "FP128Type") and refer to ir.FP128Type instead. (L.460, L.476)
      def _float_type_from_width(self, width: int) -> ir.Type:
          """Create a float type from a bit width (safe for optional types)."""
          # use hasattr(ir, "FP128Type") and ir.FP128Type()
    • _float_type_from_width() silently falls back to 32-bit for widths in (64,128), e.g., x86_fp80, causing precision loss. Either support ir.X86_FP80Type explicitly or at least fall back to DOUBLE_TYPE instead of FLOAT_TYPE for >64 and <128. Also detect X86_FP80 in _float_bit_width(). (L.460, L.476)
      def _float_bit_width(self, ty: ir.Type) -> int:
          """Bit width including platform extended types."""
          # handle ir.X86_FP80Type if available
  • Behavior change/perf: scalar-vector promotion now widens the vector element type to the widest float (e.g., float op double -> double). This can break downstream typing assumptions and degrade perf on targets without vector f64. Consider preserving the vector element type when one operand is a vector (only cast/splat the scalar), unless your type system specifies “widest wins.” (L.441)
      def _unify_numeric_operands(self, lhs: ir.Value, rhs: ir.Value) -> tuple[ir.Value, ir.Value]:
          """Prefer the vector element type when only one operand is a vector."""
          # if exactly one operand is a vector: target_scalar_ty = that vector's element type



tests/test_llvmlite_helpers.py

  • Potential false positive: test_unify_int_and_float_scalars_returns_float allows widened_int to be any FP type, which can mask a bug where operands are not actually unified to the same type. Tighten the assertions to ensure both operands are the same type and that it matches FLOAT_TYPE as the docstring states. (L.153-L.155)
    Replace:
        assert is_fp_type(widened_int.type)
        assert widened_float.type == visitor._llvm.FLOAT_TYPE
    With:
        assert widened_int.type == visitor._llvm.FLOAT_TYPE
        assert widened_float.type == visitor._llvm.FLOAT_TYPE
        assert widened_int.type == widened_float.type

  • Maintainability risk: tests hinge on private APIs (visitor._unify_numeric_operands and visitor._llvm.*). Consider exposing a small public helper (or alias) to stabilize the contract and avoid brittle coupling to internals. E.g., add a public wrapper in LLVMLiteIRVisitor and update these calls. (L.106, L.130, L.148)
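    A minimal sketch of such a wrapper (the public name is hypothetical; it simply delegates to the existing private helper):

      def unify_numeric_operands(
          self, lhs: ir.Value, rhs: ir.Value
      ) -> tuple[ir.Value, ir.Value]:
          """Public, stable entry point for numeric operand unification."""
          # Delegate to the private helper so tests need not touch internals.
          return self._unify_numeric_operands(lhs, rhs)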


@omsherikar
Contributor Author

@xmnlab @yuvimittal please have a look

@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Correctness: The numeric unification runs before operator classification, so bitwise/shift ops with mixed int/float will silently cast ints to float (sitofp) and later fail or miscompile. Gate unification to arithmetic-only ops. (L.670)
    Suggested change:

      def _should_unify_as_arith(self, node: astx.BinaryOp) -> bool:
          """Decide if numeric unification applies (arithmetic only, not bitwise/shifts)."""
          # implement based on node.op
          return node.op in {astx.Op.ADD, astx.Op.SUB, astx.Op.MUL, astx.Op.DIV, astx.Op.MOD, astx.Op.POW}

    In visit(BinaryOp):

      if self._should_unify_as_arith(node) and self._is_numeric_value(llvm_lhs) and self._is_numeric_value(llvm_rhs):
          llvm_lhs, llvm_rhs = self._unify_numeric_operands(llvm_lhs, llvm_rhs)

  • Correctness: Signedness is ignored during integer widening and int->float casts (sext/sitofp). This will break unsigned semantics (e.g., OR/AND after a sext, or uitofp required). Thread signedness from AST/type info and choose zext/uitofp when appropriate. (L.533, L.557)
    Suggested change:

      def _cast_value_to_type(self, value: ir.Value, target_scalar_ty: ir.Type, *, signed: bool) -> ir.Value:
          """Cast scalars or vectors to the target scalar type with signedness awareness."""
          builder = self._llvm.ir_builder
          value_is_vec = is_vector(value)
          lanes = value.type.count if value_is_vec else None
          current_scalar_ty = value.type.element if value_is_vec else value.type
          target_ty = ir.VectorType(target_scalar_ty, lanes) if value_is_vec else target_scalar_ty

          if current_scalar_ty == target_scalar_ty and value.type == target_ty:
              return value

          current_is_float = is_fp_type(current_scalar_ty)
          target_is_float = is_fp_type(target_scalar_ty)

          if target_is_float:
              if current_is_float:
                  current_bits = self._float_bit_width(current_scalar_ty)
                  target_bits = self._float_bit_width(target_scalar_ty)
                  if current_bits == target_bits:
                      return builder.bitcast(value, target_ty) if value.type != target_ty else value
                  return builder.fpext(value, target_ty, "fpext") if current_bits < target_bits else builder.fptrunc(value, target_ty, "fptrunc")
              return (builder.sitofp if signed else builder.uitofp)(value, target_ty, "itofp")

          if current_is_float:
              raise Exception("Cannot implicitly convert floating-point to integer")

          current_width = getattr(current_scalar_ty, "width", 0)
          target_width = getattr(target_scalar_ty, "width", 0)
          if current_width == target_width:
              return builder.bitcast(value, target_ty) if value.type != target_ty else value
          return (builder.sext if signed else builder.zext)(value, target_ty, "ext") if current_width < target_width else builder.trunc(value, target_ty, "trunc")
    
    • Pass signed=... from AST type info where calling _cast_value_to_type/_unify_numeric_operands.
  • Correctness: i1 booleans are treated as numeric here and may be promoted/splat, which can corrupt logical ops. Exclude i1 from _is_numeric_value. (L.435)
    Suggested change:

      def _is_numeric_value(self, value: ir.Value) -> bool:
          """Return True if value represents an int/float scalar or vector (excluding i1)."""
          if is_vector(value):
              elem_ty = value.type.element
              if isinstance(elem_ty, ir.IntType) and getattr(elem_ty, "width", 0) == 1:
                  return False
              return isinstance(elem_ty, ir.IntType) or is_fp_type(elem_ty)
          base_ty = value.type
          if isinstance(base_ty, ir.IntType) and getattr(base_ty, "width", 0) == 1:
              return False
          return isinstance(base_ty, ir.IntType) or is_fp_type(base_ty)

  • Portability: FP128Type() may not be legal on all targets even if the class exists. Prefer a target-aware handle if available (e.g., self._llvm.FP128_TYPE) and only select if the module/target supports it; otherwise fall back to DOUBLE. (L.470)
    Suggested change:

      def _float_type_from_width(self, width: int) -> ir.Type:
          """Select a usable float type for the current target."""
          if width <= FLOAT16_BITS and hasattr(self._llvm, "FLOAT16_TYPE"):
              return self._llvm.FLOAT16_TYPE
          if width <= FLOAT32_BITS:
              return self._llvm.FLOAT_TYPE
          if width <= FLOAT64_BITS:
              return self._llvm.DOUBLE_TYPE
          if hasattr(self._llvm, "FP128_TYPE"):
              return self._llvm.FP128_TYPE
          return self._llvm.DOUBLE_TYPE


tests/test_llvmlite_helpers.py

LGTM!


@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Possible NameError on FP128Type: you reference FP128Type directly without importing it. Use ir.FP128Type to avoid runtime errors. Update both checks and constructors. (L.472, L.486)

    • Replace:
      • if FP128Type is not None and width >= FLOAT128_BITS:
      • if FP128Type is not None and isinstance(ty, FP128Type):
    • With:
      • if hasattr(ir, "FP128Type") and width >= FLOAT128_BITS:
      • if hasattr(ir, "FP128Type") and isinstance(ty, ir.FP128Type):
  • Incorrect downcast of non-64/128 FP types: _float_type_from_width falls back to 32-bit float for widths >64 and <128 (e.g., x86 80-bit). Add explicit support for X86_FP80 to prevent precision loss. (L.468)

    • Suggested change:
      def _float_type_from_width(self, width: int) -> ir.Type:
          """Select float type by bit width."""
          if width <= FLOAT16_BITS and hasattr(self._llvm, "FLOAT16_TYPE"):
              return self._llvm.FLOAT16_TYPE
          if width <= FLOAT32_BITS:
              return self._llvm.FLOAT_TYPE
          if width <= FLOAT64_BITS:
              return self._llvm.DOUBLE_TYPE
          if hasattr(ir, "X86_FP80Type") and width <= 80:
              return ir.X86_FP80Type()
          if hasattr(ir, "FP128Type") and width >= FLOAT128_BITS:
              return ir.FP128Type()
          return self._llvm.FLOAT_TYPE
  • Signedness bugs in integer promotions/casts:

    • Widening ints uses sext, which is wrong for unsigned/boolean operands; and int->float uses sitofp, which is wrong for unsigned. At minimum, treat i1 as unsigned to avoid -1 for True. (L.520, L.536)
    • Suggested changes:
      def _cast_value_to_type(self, value: ir.Value, target_scalar_ty: ir.Type) -> ir.Value:
          """Cast scalars or vectors to the target scalar type."""
          ...
          if target_is_float:
              if current_is_float:
                  ...
              # int -> float
              if isinstance(current_scalar_ty, ir.IntType) and getattr(current_scalar_ty, "width", 0) == 1:
                  return builder.uitofp(value, target_ty, "uitofp")  # bools are unsigned
              return builder.sitofp(value, target_ty, "sitofp")
          ...
          # int -> wider int
          if current_width < target_width:
              if current_width == 1:
                  return builder.zext(value, target_ty, "zext")  # preserve boolean semantics
              return builder.sext(value, target_ty, "sext")
  • Vector element width selection: when neither operand is float, you pick max(lhs_width, rhs_width, 1). If either width lookup fails (returns 0), this can produce i1 and silently narrow. Consider asserting widths > 0 for ints to avoid accidental i1. (L.449)

    • Suggested guard:
      def _unify_numeric_operands(self, lhs: ir.Value, rhs: ir.Value) -> tuple[ir.Value, ir.Value]:
          """Ensure numeric operands share shape and scalar type."""
          ...
          if not (isinstance(lhs_base_ty, ir.IntType) and isinstance(rhs_base_ty, ir.IntType)):
              ...
          lhs_width = getattr(lhs_base_ty, "width", 0)
          rhs_width = getattr(rhs_base_ty, "width", 0)
          if lhs_width <= 0 or rhs_width <= 0:
              raise Exception("Unsupported integer type without width")
          target_scalar_ty = ir.IntType(max(lhs_width, rhs_width))

tests/test_llvmlite_helpers.py

LGTM!


Copilot AI review requested due to automatic review settings on January 19, 2026 at 08:27.
@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

ChatGPT was not able to review the file. Error: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

tests/test_llvmlite_helpers.py

ChatGPT was not able to review the file. Error: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}


Copilot AI left a comment


Pull request overview

This PR centralizes numeric operand promotion for binary operations by introducing _unify_numeric_operands and related helper methods, replacing scattered scalar-vector promotion logic with a unified approach that handles ints, floats, and vectors consistently before LLVM emission.

Changes:

  • Added _unify_numeric_operands method and supporting helpers (_select_float_type, _float_type_from_width, _float_bit_width, _cast_value_to_type, _is_numeric_value) to standardize numeric type promotion
  • Replaced 60+ lines of duplicated scalar-vector promotion logic with calls to the new unified method
  • Added comprehensive unit tests covering scalar-to-vector, float-to-double, and int-to-float promotion scenarios

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 7 comments.

File                              Description
src/irx/builders/llvmliteir.py    Adds centralized numeric promotion infrastructure with helper methods and integrates it into the BinaryOp visitor, removing the old scattered promotion logic
tests/test_llvmlite_helpers.py    Adds three unit tests validating scalar-to-vector promotion, float type widening, and mixed int-float promotion
Comments suppressed due to low confidence (2)

src/irx/builders/llvmliteir.py:711

  • After calling _unify_numeric_operands, vector operands are guaranteed to have matching counts and element types. However, the subsequent checks on lines 703-711 duplicate these validations. Since _unify_numeric_operands already ensures vector size and element type consistency (lines 475-478 check vector size mismatch, and the promotion logic ensures matching element types), these redundant checks could be removed or moved into a separate validation function to improve code clarity and reduce duplication.
            if llvm_lhs.type.count != llvm_rhs.type.count:
                raise Exception(
                    f"Vector size mismatch: {llvm_lhs.type} vs {llvm_rhs.type}"
                )
            if llvm_lhs.type.element != llvm_rhs.type.element:
                raise Exception(
                    f"Vector element type mismatch: "
                    f"{llvm_lhs.type.element} vs {llvm_rhs.type.element}"
                )
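
    A sketch of the suggested extraction (the helper name is hypothetical):

      def _validate_vector_operands(self, lhs: ir.Value, rhs: ir.Value) -> None:
          """Raise if two vector operands disagree in lane count or element type."""
          if lhs.type.count != rhs.type.count:
              raise Exception(
                  f"Vector size mismatch: {lhs.type} vs {rhs.type}"
              )
          if lhs.type.element != rhs.type.element:
              raise Exception(
                  f"Vector element type mismatch: "
                  f"{lhs.type.element} vs {rhs.type.element}"
              )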

src/irx/builders/llvmliteir.py:792

  • Scalar numeric operands are being promoted twice: first by _unify_numeric_operands (lines 698-700), and then again by promote_operands (line 792). This redundant promotion is inefficient. Consider either: (1) skipping _unify_numeric_operands for scalar-only operations, or (2) removing the promote_operands call for numeric types since they've already been unified. The behavior should be correct since both methods use compatible promotion strategies, but the double work is unnecessary.
        if self._is_numeric_value(llvm_lhs) and self._is_numeric_value(
            llvm_rhs
        ):
            llvm_lhs, llvm_rhs = self._unify_numeric_operands(
                llvm_lhs, llvm_rhs
            )
        # If both operands are LLVM vectors, handle as vector ops
        if is_vector(llvm_lhs) and is_vector(llvm_rhs):
            if llvm_lhs.type.count != llvm_rhs.type.count:
                raise Exception(
                    f"Vector size mismatch: {llvm_lhs.type} vs {llvm_rhs.type}"
                )
            if llvm_lhs.type.element != llvm_rhs.type.element:
                raise Exception(
                    f"Vector element type mismatch: "
                    f"{llvm_lhs.type.element} vs {llvm_rhs.type.element}"
                )
            is_float_vec = is_fp_type(llvm_lhs.type.element)
            op = node.op_code
            set_fast = is_float_vec and getattr(node, "fast_math", False)
            if op == "*" and is_float_vec and getattr(node, "fma", False):
                if not hasattr(node, "fma_rhs"):
                    raise Exception("FMA requires a third operand (fma_rhs)")
                self.visit(node.fma_rhs)
                llvm_fma_rhs = safe_pop(self.result_stack)
                if llvm_fma_rhs.type != llvm_lhs.type:
                    raise Exception(
                        f"FMA operand type mismatch: "
                        f"{llvm_lhs.type} vs {llvm_fma_rhs.type}"
                    )
                if set_fast:
                    self.set_fast_math(True)
                try:
                    result = self._emit_fma(llvm_lhs, llvm_rhs, llvm_fma_rhs)
                finally:
                    if set_fast:
                        self.set_fast_math(False)
                self.result_stack.append(result)
                return
            if set_fast:
                self.set_fast_math(True)
            try:
                if op == "+":
                    if is_float_vec:
                        result = self._llvm.ir_builder.fadd(
                            llvm_lhs, llvm_rhs, name="vfaddtmp"
                        )
                        self._apply_fast_math(result)
                    else:
                        result = self._llvm.ir_builder.add(
                            llvm_lhs, llvm_rhs, name="vaddtmp"
                        )
                elif op == "-":
                    if is_float_vec:
                        result = self._llvm.ir_builder.fsub(
                            llvm_lhs, llvm_rhs, name="vfsubtmp"
                        )
                        self._apply_fast_math(result)
                    else:
                        result = self._llvm.ir_builder.sub(
                            llvm_lhs, llvm_rhs, name="vsubtmp"
                        )
                elif op == "*":
                    if is_float_vec:
                        result = self._llvm.ir_builder.fmul(
                            llvm_lhs, llvm_rhs, name="vfmultmp"
                        )
                        self._apply_fast_math(result)
                    else:
                        result = self._llvm.ir_builder.mul(
                            llvm_lhs, llvm_rhs, name="vmultmp"
                        )
                elif op == "/":
                    if is_float_vec:
                        result = self._llvm.ir_builder.fdiv(
                            llvm_lhs, llvm_rhs, name="vfdivtmp"
                        )
                        self._apply_fast_math(result)
                    else:
                        unsigned = getattr(node, "unsigned", None)
                        if unsigned is None:
                            raise Exception(
                                "Cannot infer integer division signedness "
                                "for vector op"
                            )
                        result = emit_int_div(
                            self._llvm.ir_builder, llvm_lhs, llvm_rhs, unsigned
                        )
                else:
                    raise Exception(f"Vector binop {op} not implemented.")
            finally:
                if set_fast:
                    self.set_fast_math(False)
            self.result_stack.append(result)
            return

        # Scalar Fallback: Original scalar promotion logic
        llvm_lhs, llvm_rhs = self.promote_operands(llvm_lhs, llvm_rhs)
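
    A sketch of option (2) from the comment above, skipping the legacy promotion once unification has run (the guard shape is an assumption, not the PR's code):

      # Only fall back to the old scalar promotion when the operands were
      # not already unified as numeric values above.
      if not (self._is_numeric_value(llvm_lhs) and self._is_numeric_value(llvm_rhs)):
          llvm_lhs, llvm_rhs = self.promote_operands(llvm_lhs, llvm_rhs)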


      def _unify_numeric_operands(
          self, lhs: ir.Value, rhs: ir.Value
      ) -> tuple[ir.Value, ir.Value]:
          """Ensure numeric operands share shape and scalar type."""

Copilot AI Jan 19, 2026


The _unify_numeric_operands method would benefit from more detailed documentation. The current docstring "Ensure numeric operands share shape and scalar type" is minimal. Consider documenting: (1) the promotion rules (e.g., int promotes to float, narrower types promote to wider), (2) parameter types and constraints, (3) return value guarantees, (4) what exceptions can be raised, and (5) examples of transformations. This is a critical function for type safety and clear documentation would help maintainers understand the promotion semantics.

Suggested change:

  Replace:
      """Ensure numeric operands share shape and scalar type."""
  With:
      """
      Normalize two numeric LLVM values to a common scalar type and shape.

      This helper is used before emitting arithmetic or comparison
      instructions so that both operands are type-compatible. It supports
      scalar and vector integer / floating-point values and performs both
      scalar type promotion and optional scalar-to-vector splatting.

      Promotion rules
      ---------------
      * Shape:
        - If both operands are vectors, they must have the same number of
          lanes; otherwise an Exception is raised.
        - If exactly one operand is a vector, its lane count is used as the
          target shape and the scalar operand is splatted to a vector of the
          same lane count after type promotion.
        - If both operands are scalars, the result operands remain scalars.
      * Scalar type:
        - If either operand has a floating-point scalar type, both operands
          are promoted to a common floating-point type selected via
          ``self._select_float_type`` from the floating-point candidates.
        - If both operands have integer scalar types, both are promoted to an
          integer type with ``width = max(lhs.width, rhs.width)`` (at least
          1 bit), preserving signedness semantics as implemented by
          ``_cast_value_to_type``.

      Parameters
      ----------
      lhs : llvmlite.ir.Value
          Left-hand numeric operand. May be a scalar or vector of integer or
          floating-point type.
      rhs : llvmlite.ir.Value
          Right-hand numeric operand. May be a scalar or vector of integer or
          floating-point type.

      Returns
      -------
      (llvmlite.ir.Value, llvmlite.ir.Value)
          A pair ``(lhs', rhs')`` where ``lhs'.type`` and ``rhs'.type`` have
          the same scalar element type, and, if either operand is a vector,
          both results are vectors with the same lane count.

      Raises
      ------
      Exception
          If both operands are vectors and their lane counts (``.count``)
          differ. Any exception raised by ``_cast_value_to_type`` may also
          propagate if the operands cannot be safely cast to the selected
          target type.

      Examples
      --------
      * ``i32 + i64`` -> both operands promoted to ``i64``.
      * ``float + i32`` -> both operands promoted to ``float``.
      * ``<4 x i16> + i32`` -> scalar ``i32`` cast to ``i32`` then splatted
        to ``<4 x i32>`` to match the vector operand.
      """

              float_candidates = [
                  ty for ty in (lhs_base_ty, rhs_base_ty) if is_fp_type(ty)
              ]
              target_scalar_ty = self._select_float_type(float_candidates)

Copilot AI Jan 19, 2026


When mixing integer and floating-point operands, the integer width is not considered when selecting the target float type. For example, an int64 combined with a float32 will promote both to float32, which can cause precision loss since float32 cannot accurately represent all int64 values. Consider promoting to at least double (float64) when the integer operand has width > 32 bits, or document this behavior if the precision loss is acceptable for your use case.

Suggested change:

  Replace:
      target_scalar_ty = self._select_float_type(float_candidates)
  With:
      target_scalar_ty = self._select_float_type(float_candidates)
      # If we are mixing an integer with a floating-point value, ensure that
      # wide integers (> 32 bits) are promoted to at least double precision
      # to avoid excessive precision loss when the selected float type is
      # narrower than 64 bits.
      if lhs_is_float != rhs_is_float:
          int_base_ty = lhs_base_ty if not lhs_is_float else rhs_base_ty
          int_width = getattr(int_base_ty, "width", 0)
          # Determine the bit width of the selected floating-point type.
          float_bits = 0
          if isinstance(target_scalar_ty, HalfType):
              float_bits = FLOAT16_BITS
          elif isinstance(target_scalar_ty, FloatType):
              float_bits = FLOAT32_BITS
          elif isinstance(target_scalar_ty, DoubleType):
              float_bits = FLOAT64_BITS
          elif FP128Type is not None and isinstance(target_scalar_ty, FP128Type):
              float_bits = FLOAT128_BITS
          # Upgrade to double precision when combining a wide integer with
          # a float type that is narrower than 64 bits.
          if int_width > 32 and float_bits and float_bits < FLOAT64_BITS:
              target_scalar_ty = DoubleType()


          if lhs_is_vec and rhs_is_vec and lhs.type.count != rhs.type.count:
              raise Exception(
                  f"Vector size mismatch: {lhs.type.count} vs {rhs.type.count}"

Copilot AI Jan 19, 2026


The error message could be more informative by including the operation context. Instead of just "Vector size mismatch: X vs Y", consider including information about what operation was being attempted (e.g., "Binary operation '+' requires matching vector sizes, but got X vs Y"). This would help developers debug issues more quickly.

Suggested change:

  Replace:
      f"Vector size mismatch: {lhs.type.count} vs {rhs.type.count}"
  With:
      "Numeric operation requires matching vector sizes, "
      f"but got {lhs.type} (size {lhs.type.count}) vs "
      f"{rhs.type} (size {rhs.type.count})"

Comment on lines +468 to +512
      def _unify_numeric_operands(
          self, lhs: ir.Value, rhs: ir.Value
      ) -> tuple[ir.Value, ir.Value]:
          """Ensure numeric operands share shape and scalar type."""
          lhs_is_vec = is_vector(lhs)
          rhs_is_vec = is_vector(rhs)

          if lhs_is_vec and rhs_is_vec and lhs.type.count != rhs.type.count:
              raise Exception(
                  f"Vector size mismatch: {lhs.type.count} vs {rhs.type.count}"
              )

          target_lanes = None
          if lhs_is_vec:
              target_lanes = lhs.type.count
          elif rhs_is_vec:
              target_lanes = rhs.type.count

          lhs_base_ty = lhs.type.element if lhs_is_vec else lhs.type
          rhs_base_ty = rhs.type.element if rhs_is_vec else rhs.type

          lhs_is_float = is_fp_type(lhs_base_ty)
          rhs_is_float = is_fp_type(rhs_base_ty)

          if lhs_is_float or rhs_is_float:
              float_candidates = [
                  ty for ty in (lhs_base_ty, rhs_base_ty) if is_fp_type(ty)
              ]
              target_scalar_ty = self._select_float_type(float_candidates)
          else:
              lhs_width = getattr(lhs_base_ty, "width", 0)
              rhs_width = getattr(rhs_base_ty, "width", 0)
              target_scalar_ty = ir.IntType(max(lhs_width, rhs_width, 1))

          lhs = self._cast_value_to_type(lhs, target_scalar_ty)
          rhs = self._cast_value_to_type(rhs, target_scalar_ty)

          if target_lanes:
              vec_ty = ir.VectorType(target_scalar_ty, target_lanes)
              if not is_vector(lhs):
                  lhs = splat_scalar(self._llvm.ir_builder, lhs, vec_ty)
              if not is_vector(rhs):
                  rhs = splat_scalar(self._llvm.ir_builder, rhs, vec_ty)

          return lhs, rhs

Copilot AI Jan 19, 2026


The new unification logic changes the type promotion behavior compared to the removed code. Previously, when combining a float vector with a double scalar, the scalar would be truncated (fptrunc) to match the vector's element type. Now, both are promoted to the wider type (double). This is generally better for precision, but represents a behavior change that could affect existing code relying on the old behavior. Ensure this is intentional and documented, especially since it could impact numerical precision in existing computations.

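An illustration of the behavior change described above (a sketch, not code from the PR):

      # Old behavior: the double scalar was narrowed to the vector's element type.
      #   <4 x float> op double -> fptrunc the scalar to float, compute in <4 x float>
      # New behavior: both operands are promoted to the widest float type.
      #   <4 x float> op double -> fpext the vector to <4 x double>, splat the scalar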
      )

      assert is_fp_type(widened_int.type)
      assert widened_float.type == visitor._llvm.FLOAT_TYPE

Copilot AI Jan 19, 2026


Missing blank line before function definition. According to PEP 8, there should be two blank lines before top-level function definitions to maintain consistency with the rest of the file.

Suggested change:

  Replace:
      assert widened_float.type == visitor._llvm.FLOAT_TYPE
  With:
      assert widened_float.type == visitor._llvm.FLOAT_TYPE
      # followed by two blank lines before the next function definition
Comment on lines +101 to +156
      def test_unify_promotes_scalar_int_to_vector() -> None:
          """Scalar ints splat to match vector operands and widen width."""
          visitor = LLVMLiteIRVisitor()
          _prime_builder(visitor)

          vec_ty = ir.VectorType(ir.IntType(32), 2)
          vec = ir.Constant(vec_ty, [ir.Constant(ir.IntType(32), 1)] * 2)
          scalar = ir.Constant(ir.IntType(16), 5)

          promoted_vec, promoted_scalar = visitor._unify_numeric_operands(
              vec, scalar
          )

          assert isinstance(promoted_vec.type, ir.VectorType)
          assert isinstance(promoted_scalar.type, ir.VectorType)
          assert promoted_vec.type == vec_ty
          assert promoted_scalar.type == vec_ty


      def test_unify_vector_float_rank_matches_double() -> None:
          """Float vectors upgrade to match double scalars."""
          visitor = LLVMLiteIRVisitor()
          _prime_builder(visitor)

          float_vec_ty = ir.VectorType(visitor._llvm.FLOAT_TYPE, 2)
          float_vec = ir.Constant(
              float_vec_ty,
              [
                  ir.Constant(visitor._llvm.FLOAT_TYPE, 1.0),
                  ir.Constant(visitor._llvm.FLOAT_TYPE, 2.0),
              ],
          )
          double_scalar = ir.Constant(visitor._llvm.DOUBLE_TYPE, 4.0)

          widened_vec, widened_scalar = visitor._unify_numeric_operands(
              float_vec, double_scalar
          )

          assert widened_vec.type.element == visitor._llvm.DOUBLE_TYPE
          assert widened_scalar.type.element == visitor._llvm.DOUBLE_TYPE


      def test_unify_int_and_float_scalars_returns_float() -> None:
          """Scalar int + float promotes to float for both operands."""
          visitor = LLVMLiteIRVisitor()
          _prime_builder(visitor)

          int_scalar = ir.Constant(visitor._llvm.INT32_TYPE, 7)
          float_scalar = ir.Constant(visitor._llvm.FLOAT_TYPE, 1.25)

          widened_int, widened_float = visitor._unify_numeric_operands(
              int_scalar, float_scalar
          )

          assert is_fp_type(widened_int.type)
          assert widened_float.type == visitor._llvm.FLOAT_TYPE

Copilot AI Jan 19, 2026


Test coverage is missing several important edge cases for _unify_numeric_operands: (1) two vectors with mismatched element types (e.g., int32 vector vs float vector), (2) truncation scenarios where a wider type needs to be narrowed to match another operand, (3) FP128 type handling if available, (4) error case where vectors have different sizes, and (5) scalar-to-scalar integer promotion with different widths. Consider adding tests for these scenarios to ensure the unification logic handles all cases correctly.

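A sketch of one missing case, the mismatched-lane-count error path (assuming pytest and the existing test helpers; the test name is hypothetical):

      import pytest

      def test_unify_vector_size_mismatch_raises() -> None:
          """Vectors with different lane counts must be rejected."""
          visitor = LLVMLiteIRVisitor()
          _prime_builder(visitor)

          i32 = ir.IntType(32)
          vec2 = ir.Constant(ir.VectorType(i32, 2), [ir.Constant(i32, 1)] * 2)
          vec4 = ir.Constant(ir.VectorType(i32, 4), [ir.Constant(i32, 1)] * 4)

          with pytest.raises(Exception, match="Vector size mismatch"):
              visitor._unify_numeric_operands(vec2, vec4)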
Comment on lines +551 to +555
              lanes = value.type.count
              current_scalar_ty = value.type.element
              target_ty = ir.VectorType(target_scalar_ty, lanes)
          else:
              lanes = None

Copilot AI Jan 19, 2026


Variable lanes is not used.

Suggested change:

  Replace:
              lanes = value.type.count
              current_scalar_ty = value.type.element
              target_ty = ir.VectorType(target_scalar_ty, lanes)
          else:
              lanes = None
  With:
              current_scalar_ty = value.type.element
              target_ty = ir.VectorType(target_scalar_ty, value.type.count)
          else:

@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

ChatGPT was not able to review the file. Error: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

tests/test_llvmlite_helpers.py

ChatGPT was not able to review the file. Error: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Scalar-vector promotion now widens the vector element type to the “widest float/int” (via _unify_numeric_operands), which changes the IR type and likely breaks prior semantics and ABI. Previous logic preserved the vector’s element type and only cast/splat the scalar. Suggest preserving the vector element type when exactly one operand is a vector. (L.480-L.505)
    Suggested change:

      def _unify_numeric_operands(self, lhs: ir.Value, rhs: ir.Value) -> tuple[ir.Value, ir.Value]:
          """Fix: preserve the vector element type on scalar-vector ops."""
          lhs_is_vec = is_vector(lhs)
          rhs_is_vec = is_vector(rhs)
          if lhs_is_vec and rhs_is_vec and lhs.type.count != rhs.type.count:
              raise Exception(f"Vector size mismatch: {lhs.type.count} vs {rhs.type.count}")

          target_lanes = lhs.type.count if lhs_is_vec else (rhs.type.count if rhs_is_vec else None)
          lhs_base_ty = lhs.type.element if lhs_is_vec else lhs.type
          rhs_base_ty = rhs.type.element if rhs_is_vec else rhs.type

          # If exactly one operand is a vector, preserve that vector's element type
          if lhs_is_vec ^ rhs_is_vec:
              target_scalar_ty = lhs_base_ty if lhs_is_vec else rhs_base_ty
          else:
              lhs_is_float = is_fp_type(lhs_base_ty)
              rhs_is_float = is_fp_type(rhs_base_ty)
              if lhs_is_float or rhs_is_float:
                  float_candidates = [ty for ty in (lhs_base_ty, rhs_base_ty) if is_fp_type(ty)]
                  target_scalar_ty = self._select_float_type(float_candidates)
              else:
                  lhs_width = getattr(lhs_base_ty, "width", 0)
                  rhs_width = getattr(rhs_base_ty, "width", 0)
                  target_scalar_ty = ir.IntType(max(lhs_width, rhs_width, 1))

          lhs = self._cast_value_to_type(lhs, target_scalar_ty)
          rhs = self._cast_value_to_type(rhs, target_scalar_ty)

          if target_lanes:
              vec_ty = ir.VectorType(target_scalar_ty, target_lanes)
              if not is_vector(lhs):
                  lhs = splat_scalar(self._llvm.ir_builder, lhs, vec_ty)
              if not is_vector(rhs):
                  rhs = splat_scalar(self._llvm.ir_builder, rhs, vec_ty)
          return lhs, rhs
    
  • Integer signedness is ignored during casting: using sext for widening and sitofp for int->float will produce wrong results for unsigned values. At minimum, make signedness explicit so you can choose zext/uitofp when needed. Backward-compatible API: add a signed: bool = True parameter. (L.548, L.581, L.600)
    Suggested change:

      def _cast_value_to_type(self, value: ir.Value, target_scalar_ty: ir.Type, signed: bool = True) -> ir.Value:
          """Fix: respect integer signedness when casting."""
          ...
          if target_is_float:
              if current_is_float:
                  ...
              return (builder.sitofp if signed else builder.uitofp)(value, target_ty, "itofp")
          ...
          if current_width < target_width:
              return (builder.sext if signed else builder.zext)(value, target_ty, "ext")

  • Float width selection can silently narrow unknown float types:

    • _float_bit_width returns 0 for unhandled types (e.g., x86_fp80), which can lead _select_float_type to pick float16/float32. (L.536-L.545)
    • _float_type_from_width falls back to FLOAT_TYPE even when width > 64 and FP128 is unavailable, narrowing from 80/128 to 32. (L.519-L.530)
      Suggestions:
    • Handle x86_fp80 explicitly if available. (L.540)
      Suggested change:
      def _float_bit_width(self, ty: ir.Type) -> int:
          """Fix: support x86_fp80 and avoid 0-width fallbacks."""
          if isinstance(ty, DoubleType):
              return FLOAT64_BITS
          if isinstance(ty, FloatType):
              return FLOAT32_BITS
          if isinstance(ty, HalfType):
              return FLOAT16_BITS
          X86FP80 = getattr(ir, "X86_FP80Type", None)
          if X86FP80 is not None and isinstance(ty, X86FP80):
              return 80
          if FP128Type is not None and isinstance(ty, FP128Type):
              return FLOAT128_BITS
          return 0  # force caller to avoid narrowing on unknowns
    • In _float_type_from_width, if width > 64 and FP128 is unavailable, prefer DOUBLE_TYPE (or preserve existing candidate) rather than FLOAT_TYPE. (L.526)
      Suggested change:
      def _float_type_from_width(self, width: int) -> ir.Type:
          """Fix: avoid narrowing fallback for wide floats."""
          if width <= FLOAT16_BITS and hasattr(self._llvm, "FLOAT16_TYPE"):
              return self._llvm.FLOAT16_TYPE
          if width <= FLOAT32_BITS:
              return self._llvm.FLOAT_TYPE
          if width <= FLOAT64_BITS:
              return self._llvm.DOUBLE_TYPE
          if FP128Type is not None and width >= FLOAT128_BITS:
              return FP128Type()
          return self._llvm.DOUBLE_TYPE  # safest non-narrowing fallback

tests/test_llvmlite_helpers.py

  • In test_unify_vector_float_rank_matches_double, you only check element types. This would still pass if the vector lane count changed. Add explicit vector and lane-count assertions before the element-type checks (L.135):

    • assert isinstance(widened_vec.type, ir.VectorType)
    • assert isinstance(widened_scalar.type, ir.VectorType)
    • assert widened_vec.type.count == 2
    • assert widened_scalar.type.count == 2
  • Tests rely on private APIs (LLVMLiteIRVisitor._unify_numeric_operands at L.107/L.131/L.146 and visitor._llvm.* at L.121/L.125/L.126/L.129/L.143/L.144). This couples tests to internals and may hinder refactors. Consider exercising this via a public helper or using llvmlite’s public ir.FloatType()/ir.DoubleType() in tests.


@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Correctness: Integer signedness is ignored in _cast_value_to_type. Using sext for widening and sitofp for int->float will miscompile unsigned operands. Please branch on signedness and use zext/uitofp when the source is unsigned. If you don’t have that info here, add a signed: bool parameter and plumb it from ASTx type info. (L.545, L.569)
    def _cast_value_to_type(self, value: ir.Value, target_scalar_ty: ir.Type, signed: bool = True) -> ir.Value:
    def _unify_numeric_operands(self, lhs: ir.Value, rhs: ir.Value) -> tuple[ir.Value, ir.Value]:

  • Robustness: Direct references to FP128Type can raise NameError if not imported. Use getattr(ir, "FP128Type", None) locally instead of the bare symbol in both places. (L.507, L.521)
    def _float_type_from_width(self, width: int) -> ir.Type:
    def _float_bit_width(self, ty: ir.Type) -> int:

  • Safety: _float_bit_width falls back to width=0 for unknown types, which then selects FLOAT16 in _float_type_from_width. Better to raise early on unknown float types to avoid silent wrong casts. (L.527)
    def _float_bit_width(self, ty: ir.Type) -> int:
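
    A minimal sketch of the raise-early variant (assuming the width constants already defined in this module):

      def _float_bit_width(self, ty: ir.Type) -> int:
          """Return the bit width of a known float type, or raise."""
          if isinstance(ty, ir.DoubleType):
              return FLOAT64_BITS
          if isinstance(ty, ir.FloatType):
              return FLOAT32_BITS
          if isinstance(ty, ir.HalfType):
              return FLOAT16_BITS
          # Unknown float types must fail loudly instead of defaulting to 0.
          raise Exception(f"Unknown floating-point type: {ty}")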


tests/test_llvmlite_helpers.py

  • test_unify_int_and_float_scalars_returns_float doesn’t verify both operands end up with the same type. This could let a mismatch (e.g., double vs float) slip through. Add a stricter equality check:

    • Add: assert widened_int.type == widened_float.type == visitor._llvm.FLOAT_TYPE (L.159)
  • test_unify_vector_float_rank_matches_double only checks element types; it doesn’t assert the scalar was splatted to a vector and that both operands have identical vector types. Strengthen with:

    • Add: assert isinstance(widened_scalar.type, ir.VectorType) (L.142)
    • Add: assert widened_scalar.type == widened_vec.type (L.143)
  • Tests depend on private internals (_unify_numeric_operands and visitor._llvm.*). This is brittle and may break with internal refactors; consider routing through a public helper if available.


@yuvimittal
Member

@omsherikar, please rebase your branch; the CI is also failing, so please look into that as well.

@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Correctness: integer signedness is not preserved.

    • Widening uses sext and int->float uses sitofp unconditionally. This is wrong for unsigned ints (and i1/booleans). You need to choose zext/uitofp for unsigned cases. Consider threading signedness from the AST/type layer into these helpers. For example (L.556):

      def _cast_value_to_type(self, value: ir.Value, target_scalar_ty: ir.Type, *, signed: bool = True) -> ir.Value:
          """Cast scalars or vectors to the target scalar type."""
          ...

          if target_is_float:
              if current_is_float:
                  ...
              # int -> float
              return (builder.sitofp if signed else builder.uitofp)(value, target_ty, "i2fp")

          if current_is_float:
              raise Exception("Cannot implicitly convert floating-point to integer")

          ...
          if current_width < target_width:
              # int widening
              return (builder.sext if signed else builder.zext)(value, target_ty, "i_widen")
          ...
      
    • Then pass signedness from the caller, e.g. extend _unify_numeric_operands to accept lhs_signed/rhs_signed and forward to _cast_value_to_type (L.503):

      def _unify_numeric_operands(self, lhs: ir.Value, rhs: ir.Value, *, lhs_signed: bool = True, rhs_signed: bool = True) -> tuple[ir.Value, ir.Value]:
          """Ensure numeric operands share shape and scalar type."""
          ...
          lhs = self._cast_value_to_type(lhs, target_scalar_ty, signed=lhs_signed)
          rhs = self._cast_value_to_type(rhs, target_scalar_ty, signed=rhs_signed)
          ...

    • In visit(BinaryOp) supply accurate signedness from AST types if available (L.724).

  • Runtime bug: unresolved type class names.

    • _float_bit_width and _float_type_from_width reference DoubleType, FloatType, HalfType, FP128Type without importing/defining them; this will raise NameError. Use the ir-qualified classes and guard FP128Type via getattr (L.489 and L.516):

      FP128Type = getattr(ir, "FP128Type", None) # at module scope (L.33)

      In _float_bit_width (replace isinstance checks):

      if isinstance(ty, ir.DoubleType): ...
      if isinstance(ty, ir.FloatType): ...
      if hasattr(ir, "HalfType") and isinstance(ty, ir.HalfType): ...
      if FP128Type is not None and isinstance(ty, FP128Type): ...

      In _float_type_from_width:

      if FP128Type is not None and width >= FLOAT128_BITS:
          return FP128Type()


tests/test_llvmlite_helpers.py

  • Tests exercise private APIs: _unify_numeric_operands and visitor._llvm.{FLOAT_TYPE, DOUBLE_TYPE, INT32_TYPE}. This tightly couples tests to internals and will break with refactors. Prefer asserting against llvmlite.ir types (ir.FloatType(), ir.DoubleType(), ir.IntType(32)) and expose a public helper for operand unification (or test via a public operation that uses it). This is a maintainability risk that will hinder future changes.

  • Consider adding coverage for integer width/sign extension semantics when unifying (e.g., i8 negative value with i32) to ensure correct sext/zext behavior, and mixed vector-int with scalar-float to confirm both FP promotion and vector splatting happen together.
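
    A sketch of the suggested sign-extension coverage (test name and setup are hypothetical, following the existing helper tests):

      def test_unify_widens_negative_i8_to_i32() -> None:
          """A negative i8 unified with an i32 should widen to i32 via sext."""
          visitor = LLVMLiteIRVisitor()
          _prime_builder(visitor)

          small = ir.Constant(ir.IntType(8), -1)
          wide = ir.Constant(ir.IntType(32), 7)

          widened_small, widened_wide = visitor._unify_numeric_operands(
              small, wide
          )

          assert widened_small.type == ir.IntType(32)
          assert widened_wide.type == ir.IntType(32)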


@omsherikar force-pushed the feature/type-unification-135 branch from ec8dd8a to 7930365 on February 27, 2026 at 15:13.
@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Correctness: integer signedness is ignored during unification/casts. You always use sitofp and sext, which will corrupt values for unsigned operands (e.g., bool/int1, or any unsigned ints) and mixed int→float conversions. Please parameterize cast with signedness (and plumb it from ASTx types), and select uitofp/zext accordingly (L.590, L.628, L.661).

Suggested changes:

  def _cast_value_to_type(self, value: ir.Value, target_scalar_ty: ir.Type, *, signed: bool = True) -> ir.Value:
      """Cast scalars or vectors to the target scalar type."""
      ...
      if target_is_float:
          if current_is_float:
              ...
          return builder.sitofp(value, target_ty, "sitofp") if signed else builder.uitofp(value, target_ty, "uitofp")
      ...
      if current_width < target_width:
          return builder.sext(value, target_ty, "sext") if signed else builder.zext(value, target_ty, "zext")
      ...

  def _int_is_signed(self, v: ir.Value) -> bool:
      """Return True if v was typed as a signed integer in ASTx."""
      ...

And in _unify_numeric_operands, pass signed from ASTx for each int operand (e.g., signed=self._int_is_signed(lhs) for lhs casts). (L.520)

  • Correctness: unsupported/wider float widths silently narrow to float32. For example, x86 80-bit floats (X86_FP80Type) will be treated as 32-bit. Either map explicitly or raise. Also prefer strict equality for fp128 detection, not >=. (L.566-L.572)

Suggested changes:

  def _float_type_from_width(self, width: int) -> ir.Type:
      """Map a float bit width to a supported LLVM float type."""
      if width <= FLOAT16_BITS and hasattr(self._llvm, "FLOAT16_TYPE"):
          return self._llvm.FLOAT16_TYPE
      if width <= FLOAT32_BITS:
          return self._llvm.FLOAT_TYPE
      if width <= FLOAT64_BITS:
          return self._llvm.DOUBLE_TYPE
      if hasattr(ir, "X86_FP80Type") and width == 80:
          return ir.X86_FP80Type()
      if FP128Type is not None and width == FLOAT128_BITS:
          return FP128Type()
      raise Exception(f"Unsupported floating-point width: {width}")

  def _float_bit_width(self, ty: ir.Type) -> int:
      """Return the bit width for known LLVM float types."""
      if isinstance(ty, DoubleType):
          return FLOAT64_BITS
      if isinstance(ty, FloatType):
          return FLOAT32_BITS
      if isinstance(ty, HalfType):
          return FLOAT16_BITS
      if hasattr(ir, "X86_FP80Type") and isinstance(ty, ir.X86_FP80Type):
          return 80
      if FP128Type is not None and isinstance(ty, FP128Type):
          return FLOAT128_BITS
      return getattr(ty, "width", 0)


tests/test_llvmlite_helpers.py

  • test_unify_vector_float_rank_matches_double: You access .type.element on widened_scalar without first asserting it’s a vector. If a regression returns a scalar, this will raise AttributeError instead of a clear test failure. Add explicit vector-type checks before the element assertions (L.136-L.137):

    • assert isinstance(widened_vec.type, ir.VectorType)
    • assert isinstance(widened_scalar.type, ir.VectorType)
  • test_unify_int_and_float_scalars_returns_float: The docstring says “promotes to float for both operands,” but the test allows widened_int to be any FP type. Tighten the assertion (L.145):

    • Replace: assert is_fp_type(widened_int.type)
    • With: assert widened_int.type == visitor._llvm.FLOAT_TYPE
  • Tests call the private method _unify_numeric_operands and rely on internal _llvm types. This couples tests to internals and will break on refactors. Consider exercising this via a public operation that triggers unification, or expose a minimal public adapter solely for testing.


@omsherikar
Contributor Author

@omsherikar, please rebase your branch; the CI is also failing, so please look into that as well.

@yuvimittal I have fixed it; please take a look.

@github-actions

OSL ChatGPT Reviewer

NOTE: This is generated by an AI program, so some comments may not make sense.

src/irx/builders/llvmliteir.py

  • Correctness: Integer signedness is ignored in widening and int→float casts. You always use sext and sitofp, which is wrong for unsigned ints and i1. Example: i1 true becomes -1 with sext, and large u32 values converted via sitofp can become negative. Derive signedness from the ASTx type and use zext/uitofp for unsigned, and always zext i1 (in _cast_value_to_type). This affects lines where sext/sitofp are called in _cast_value_to_type (around L.600–L.635).

  • Correctness: For equal-width floating-point “casts” you fall back to bitcast (in _cast_value_to_type). Bitcasting between distinct float types of the same width is invalid in LLVM IR and can miscompile if new float types (e.g., bfloat16) are introduced. If widths match and both are FP, require the exact same type and otherwise perform a semantic cast or no-op; do not bitcast floats (around L.590–L.610). See the sketch after this list.

  • Behavior change: _unify_numeric_operands now implicitly promotes mixed int/float ops to float. If the language previously disallowed or required explicit casts, this is a breaking change. Please confirm this is intentional and consistent across all binary ops (visit(BinaryOp) around L.724).
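
    A sketch of the equal-width guard suggested above (variable names follow the quoted _cast_value_to_type and are otherwise assumptions):

      # FP -> FP path inside _cast_value_to_type:
      if current_is_float and target_is_float:
          current_bits = self._float_bit_width(current_scalar_ty)
          target_bits = self._float_bit_width(target_scalar_ty)
          if current_bits == target_bits:
              if current_scalar_ty != target_scalar_ty:
                  # Never bitcast between distinct same-width float types.
                  raise Exception(
                      f"Cannot implicitly convert {current_scalar_ty} "
                      f"to {target_scalar_ty}"
                  )
              return value  # no-op: already the requested type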


tests/test_llvmlite_helpers.py

  • The tests exercise private/internal APIs: LLVMLiteIRVisitor._unify_numeric_operands and visitor._llvm.* types. This tightly couples test stability to internals and will break on refactors even if public behavior is unchanged. Consider testing this via a public API or promoting a small, documented helper for operand unification to public scope.
  • test_unify_vector_float_rank_matches_double implicitly assumes scalar->vector splat by accessing .type.element on widened_scalar. If the implementation later chooses a different strategy, this will hard-fail. Ensure this is an intentional, contract-level guarantee before locking it in tests.

