I'm testing Seldon Core and I have a sample model up and running with only a `predict` function in my serving class `MyModel(object)`. I then added a `transform_input` function to preprocess the input data before it reaches `predict`. I redeployed the model and it seemed to work perfectly, but I found that `transform_input` is never called. The docs and examples I found only show adding the `transform_input` function itself. Am I missing something?
Even if your model class already implements a `transform_input` function, you still need to expose it as a `TRANSFORMER` node in your inference graph. For your particular case, you would need to define a graph like the following:
```yaml
graph:
  name: my-input-transformer
  type: TRANSFORMER
  children:
  - name: my-model
    type: MODEL
```
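Embedded in a full `SeldonDeployment` manifest, that graph could look like the sketch below. The deployment name, container names, and image reference are placeholders; substitute your own:

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: my-deployment
spec:
  predictors:
  - name: default
    componentSpecs:
    - spec:
        containers:
        # Both containers run the same image; Seldon routes
        # requests through the graph defined below.
        - name: my-input-transformer
          image: my-registry/my-model:latest
        - name: my-model
          image: my-registry/my-model:latest
    graph:
      name: my-input-transformer
      type: TRANSFORMER
      children:
      - name: my-model
        type: MODEL
```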
Both nodes (`my-model` and `my-input-transformer`) would point to the same Docker image: Seldon calls `transform_input` on the `TRANSFORMER` node and `predict` on the `MODEL` node.
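As a minimal sketch of the serving class both nodes would share (the preprocessing and prediction logic here are purely illustrative, and the `features_names` parameter follows the usual Seldon Python wrapper convention):

```python
import numpy as np


class MyModel(object):
    """Seldon Core Python wrapper exposing both hooks.

    The TRANSFORMER node in the graph invokes transform_input;
    the MODEL node invokes predict. Both nodes can run this
    same class from the same Docker image.
    """

    def transform_input(self, X, features_names=None):
        # Illustrative preprocessing: min-max scale to [0, 1].
        X = np.asarray(X, dtype=float)
        span = X.max() - X.min()
        return X if span == 0 else (X - X.min()) / span

    def predict(self, X, features_names=None):
        # Illustrative prediction: sum of features per row.
        X = np.asarray(X, dtype=float)
        return X.sum(axis=1, keepdims=True)
```

Calling the hooks locally is an easy way to sanity-check them before deploying, e.g. `MyModel().transform_input([[0, 2], [4, 8]])` scales the batch into `[0, 1]`.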