Add support for unicode string inputs to Workflow Transform in Triton #345
oliverholworthy wants to merge 11 commits into NVIDIA-Merlin:main
Conversation
```python
        out = out[:, 0]
    # cudf doesn't seem to handle dtypes like |S15 or object that well
    if is_string_dtype(out.dtype):
        out = out.astype("str")
```
I tried changing this to `out = np.char.decode(out.astype(bytes))`, which worked for the new test being added here. And then I wondered if this was needed at all. Looking to see if the tests pass without this now.
My impression is that this does (or did) cover a real edge case, which may not be adequately covered by tests. This piece of code was inherited from the old serving code in NVT, which was TBH not very well tested.
I've updated this keeping the string type coercion, using `np.char.decode(out.astype(bytes))` instead of `out.astype("str")`.
It appears we do need this because cudf doesn't accept an array of byte strings when constructing a DataFrame.
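A minimal sketch of that coercion, assuming the tensor arrives as a numpy object array of utf-8 byte strings (the cudf call and column name are only illustrative of the constraint described above):

```python
import numpy as np

# Byte strings as Triton delivers them for a non-ascii input like "椅子"
out = np.array([b"\xe6\xa4\x85\xe5\xad\x90"], dtype=object)

# Decode to unicode strings before handing the column to cudf; building a
# cudf DataFrame directly from the byte strings is the failure reported above.
decoded = np.char.decode(out.astype(bytes), encoding="utf-8")
print(decoded)  # ['椅子']

# import cudf
# df = cudf.DataFrame({"string_col": decoded})  # fine once the values are str
```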
```diff
@@ -150,9 +150,6 @@ def _convert_tensor(t):
     out = t.as_numpy()
     if len(out.shape) == 2:
         out = out[:, 0]
```
Unrelated to this change: it's unclear to me why we'd want to remove dimensions from the input here.
This code has existed for a long time, and I think is related to the perennial inconsistency around list formats that has plagued the Merlin code base. Way back in the before times, sometimes you'd get a proper 1d array/tensor and sometimes you'd get a 2d array/tensor that only contained one row. The legacy serving code from NVT that Systems is based on (and still trying to clean up and/or shed) had all kinds of issues like this and mostly solved them by hacking around the inconsistent formats instead of standardizing.
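For illustration (not code from this PR), a small numpy sketch of the case that dimension-dropping line handles, assuming a scalar column arrives as a 2d, single-column tensor:

```python
import numpy as np

# A scalar column sometimes arrives from Triton as shape (batch, 1)
# instead of (batch,); _convert_tensor flattens it to 1d.
out = np.array([[b"a"], [b"b"], [b"c"]], dtype=object)  # shape (3, 1)
if len(out.shape) == 2:
    out = out[:, 0]  # shape (3,): array([b'a', b'b', b'c'], dtype=object)
```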
Add support for unicode string inputs to Workflow Transform in Triton.
We currently get a UnicodeDecodeError from the `.astype("str")` call if we pass string inputs with non-ascii characters.

This is because when we pass a string like `"椅子"` to a Triton model, that tensor is received as `np.array([b'\xe6\xa4\x85\xe5\xad\x90'], dtype=object)`. If you try to do `.astype(str)` on this, it raises this UnicodeDecodeError.

We can coerce an array of byte strings to unicode strings with `np.char.decode(out.astype(bytes))`, where `out = np.array([b'\xe6\xa4\x85\xe5\xad\x90'], dtype=object)`.

However, it appears we can safely remove the line that is performing the coercion. (It doesn't appear to break any existing tests, at least.)
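A minimal sketch reproducing the failure and the proposed coercion; the byte values are the utf-8 encoding of "椅子" from the description above:

```python
import numpy as np

# What Triton hands us for the unicode input "椅子"
out = np.array([b"\xe6\xa4\x85\xe5\xad\x90"], dtype=object)

try:
    out.astype("str")  # raises UnicodeDecodeError for non-ascii bytes
except UnicodeDecodeError as exc:
    print(exc)

# Explicit utf-8 decoding handles non-ascii inputs
print(np.char.decode(out.astype(bytes), encoding="utf-8"))  # ['椅子']
```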