Add support for custom compiler flags via FUSILLI_EXTRA_COMPILER_FLAGS #115
Conversation
Force-pushed a7292b7 to aeb017b
The td spec produced by the tuner:

```mlir
module attributes {iree_codegen.tuning_spec_with_default_entrypoint, transform.with_named_sequence} {
transform.named_sequence @apply_op_config(%arg0: !transform.any_op {transform.readonly}, %arg1: !transform.any_param {transform.readonly}) {
transform.annotate %arg0 "compilation_info" = %arg1 : !transform.any_op, !transform.any_param
transform.yield
}
transform.named_sequence @match_main$async_dispatch_0_matmul_8192x2048x4096_bf16xbf16xf32(%arg0: !transform.any_op {transform.readonly}) -> (!transform.any_op, !transform.any_param) {
%batch_dims, %m_dims, %n_dims, %k_dims = transform.iree.match.contraction %arg0, lhs_type = bf16, rhs_type = bf16, output_type = f32, indexing_maps = [affine_map<(d0, d1, d2) -> (d2, d0)>, affine_map<(d0, d1, d2) -> (d2, d1)>, affine_map<(d0, d1, d2) -> (d0, d1)>] : !transform.any_op -> !transform.param<i64>
transform.iree.match.dims_equal %batch_dims, [] : !transform.param<i64>
transform.iree.match.dims_equal %m_dims, [8192] : !transform.param<i64>
transform.iree.match.dims_equal %n_dims, [2048] : !transform.param<i64>
transform.iree.match.dims_equal %k_dims, [4096] : !transform.param<i64>
%0 = transform.param.constant #iree_codegen.compilation_info<lowering_config = #iree_gpu.lowering_config<{mma_kind = #iree_gpu.mma_layout<WMMAR4_F32_16x16x16_BF16>, promote_operands = [0, 1], reduction = [0, 0, 1], subgroup = [4, 4, 0], subgroup_basis = [[4, 2, 1], [0, 1, 2]], workgroup = [256, 128, 0]}>, translation_info = <pipeline = LLVMGPUTileAndFuse workgroup_size = [256, 1, 1] subgroup_size = 32, {gpu_pipeline_options = #iree_gpu.pipeline_options<prefetch_num_stages = 2>, llvm_func_attrs = {"amdgpu-waves-per-eu" = "2"}}>> -> !transform.any_param
transform.yield %arg0, %0 : !transform.any_op, !transform.any_param
}
transform.named_sequence @__kernel_config(%arg0: !transform.any_op {transform.consumed}) -> !transform.any_op attributes {iree_codegen.tuning_spec_entrypoint} {
%updated_root = transform.foreach_match in %arg0
@match_main$async_dispatch_0_matmul_8192x2048x4096_bf16xbf16xf32 -> @apply_op_config : (!transform.any_op) -> !transform.any_op
transform.yield %updated_root : !transform.any_op
}
}
```

Results (AMD gfx1201):
Force-pushed db0456f to a974e07
This is in draft, but I just wanted to leave high-level directional feedback before you go down this path further. At this stage, while the tuning integration is a POC, we don't want to bake the flags into fusilli/backend/*. You could instead have a catch-all variable for any additional flags to be specified (e.g. through environment variables, the CLI, or similar). This lets us evaluate forward-looking things without baking the design in too early, which would make it hard to refactor to a more optimal integration later.
Force-pushed d9fe6db to a3e39a2
Force-pushed 484f316 to 60069c4
Would it make sense to have a generic escape hatch for passing extra compiler flags? In other projects we use --Xiree-compile='....' or --Xiree_compile='...': https://github.com/search?q=repo%3Anod-ai%2Firee-kernel-benchmark%20xiree&type=code , similar to -Xclang and others: https://clang.llvm.org/docs/ClangCommandLineReference.html#compilation-options
I think it's a good suggestion. Looking at the current hardcoded flags, a generic escape hatch would cover them as well. @sjain-stanford, what do you think? We could also introduce such a flag here.
Yeah that's what I was initially thinking - a fully general escape hatch for compiler flags. This would be mainly used for tuning for now, but could be extended for debug purposes. Thanks @kuhar for the suggestion.
I think using a single environment variable to accept all extra flags seems reasonable - @kuhar do you see concerns with this?
```cpp
// Add extra flags to both CPU and AMDGPU backends.
addExtraFlags(cpuFlags);
addExtraFlags(amdGpuFlags);
```
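For reference, a minimal sketch of what a catch-all helper like this could look like, assuming the extra flags arrive whitespace-separated in the FUSILLI_EXTRA_COMPILER_FLAGS environment variable. The function name mirrors the addExtraFlags call in the excerpt above, but the parsing details and the flag container are assumptions, not fusilli's actual implementation.

```cpp
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

// Sketch only: read FUSILLI_EXTRA_COMPILER_FLAGS and append each
// whitespace-separated token to the given backend flag list. The name
// mirrors the addExtraFlags call above; the body is an assumption.
void addExtraFlags(std::vector<std::string> &flags) {
  const char *extra = std::getenv("FUSILLI_EXTRA_COMPILER_FLAGS");
  if (!extra)
    return;
  std::istringstream stream(extra);
  std::string flag;
  while (stream >> flag)
    flags.push_back(flag); // e.g. a token like "--iree-hip-target=gfx1201"
}
```

Because the same tokens would be appended to both cpuFlags and amdGpuFlags, the questions below about backend-specific flags apply regardless of how the splitting is done.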
I'd asked this elsewhere, but what happens if users specify a GPU-specific flag (say --iree-hip-target...) through FUSILLI_EXTRA_COMPILER_FLAGS? It gets added to both cpuFlags and amdGpuFlags - would it crash the compiler when the selected backend is CPU, or would it silently be ignored?
+1, I don't understand the distinction -- I don't think the compiler C API has separate entry points for these.
Then how about adding the extra flags to the AMDGPU backend only?
> I don't think the compiler C API has separate entry points for these

OK then it's probably fine to say this is the user's responsibility (to make sure the flags provided match the backend selected).

> I don't understand the distinction

The reason we have a map that stores the cpuFlags and amdGpuFlags separately is to allow multiple sessions and graphs to co-exist. Say there's a machine with an AMDGPU: users might create graph 1 on the CPU handle and graph 2 on the GPU handle. In such cases, they'd want to make sure the extra flags are not commingled across backends.
> It gets added to both cpuFlags and amdGpuFlags - would it crash the compiler when the selected backend is CPU, or would it silently be ignored?

I thought about this. Since the compiler is basically cross-compiling for whatever target is specified, it likely won't crash the compilation itself, and a vmfb should still get generated. The issue arises when the vmfb is executed on a backend it was not compiled for (through flags targeting a different backend). So again it becomes the user's responsibility to make sure the extra flags they specify pertain to the selected backend.

I think we can leave it as is for now, and if this becomes a prevalent pattern we can revisit with alternate approaches (e.g. backend-specific env variables, or reading extra flags from backend-specific files).
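If per-backend flags ever do become necessary, one hypothetical shape for the "backend-specific env variable" idea is sketched below. The suffixed variable name is purely illustrative and is not introduced by this PR.

```cpp
#include <cstdlib>
#include <string>

// Hypothetical sketch of the backend-specific variant mentioned above.
// The suffixed env-var name (e.g. FUSILLI_EXTRA_COMPILER_FLAGS_AMDGPU)
// does not exist in this PR; it only shows how per-backend flags could
// avoid commingling. Falls back to the catch-all variable otherwise.
std::string extraCompilerFlagsFor(const std::string &backend /* "CPU" or "AMDGPU" */) {
  const std::string specificName = "FUSILLI_EXTRA_COMPILER_FLAGS_" + backend;
  if (const char *specific = std::getenv(specificName.c_str()))
    return specific;
  if (const char *generic = std::getenv("FUSILLI_EXTRA_COMPILER_FLAGS"))
    return generic;
  return "";
}
```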
Signed-off-by: Bangtian Liu <liubangtian@gmail.com>
Force-pushed 7b6b4ac to f6700a2
Signed-off-by: Bangtian Liu <liubangtian@gmail.com>
Force-pushed a7995bb to 2f71414
I've updated the tests to follow the established pattern in the codebase, after examining other existing test files.
sjain-stanford left a comment:
LGTM, thanks for iterating on this! Please wait for @kuhar's approval before landing.
Signed-off-by: Bangtian Liu <liubangtian@gmail.com>
Force-pushed 8b2bc3f to 7a2f1bc
cc @kuhar for review
kuhar left a comment:
Can you update the PR title and description to better match the contents? It allows for arbitrary compiler flags, not only tuning spec paths.
Signed-off-by: Bangtian Liu <liubangtian@gmail.com>
This PR adds support for passing custom IREE compiler flags to fusilli via:

- `FUSILLI_EXTRA_COMPILER_FLAGS` environment variable (for the C++ driver)
- `--Xiree-compile` flag (for the Python benchmark wrapper)

This enables users to customize compilation with arbitrary compiler flags, such as optimization levels, tuning specs, or debug options. Tuning specs produced by the tuner can be passed through the flag to enable performance improvements from tuning.

Local testing shows ~8.2% performance improvement on bf16 matrix multiplication (`matmul -M 8192 -N 2048 -K 4096 --transA --a_type bf16 --b_type bf16 --out_type bf16`) when using tuned transform dialect specs.

Assisted-by: Claude