I am trying to use an input transformer together with a model in Seldon Core. This is the deployment YAML:
```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: two-testing
spec:
  predictors:
  - componentSpecs:
    - spec:
        containers:
        - name: transformer
          image: transformer_image
          ports:
          - containerPort: 7100
            name: http
            protocol: TCP
        - name: model
          image: model_image
          ports:
          - containerPort: 7200
            name: http
            protocol: TCP
    graph:
      name: transformer
      type: TRANSFORMER
      children:
      - name: model
        type: MODEL
        children: []
        endpoint:
          service_port: 7300
    name: model
    replicas: 1
```
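For context, this is how I set up the port forward. I am forwarding to a pod, and the service name below is what I believe Seldon generates from the deployment and predictor names (`<deployment-name>-<predictor-name>`), so treat both as assumptions about my cluster:

```shell
# What I am doing now: forwarding straight to the model pod's container port,
# which bypasses the transformer.
kubectl port-forward <model-pod-name> 3000:7200

# What I suspect I should be doing instead: forwarding to the Seldon-created
# service so requests enter through the orchestrator and traverse the graph.
# (Service name and port are my guesses, not verified.)
kubectl port-forward svc/two-testing-model 3000:8000
```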
The endpoint I am sending requests to is `http://0.0.0.0:3000/api/v1.0/predictions`, where 3000 is the local port I have port-forwarded. However, the request hits the model container directly, which causes an error. Can someone tell me which endpoint I should send the request to so that it passes through the input transformer first and then reaches the model?
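For completeness, this is roughly how I am building and sending the request. The input values are placeholders, and I am assuming the standard Seldon v1 `data`/`ndarray` envelope; the commented-out POST requires the `requests` package and a live, correctly port-forwarded service:

```python
import json

# Hypothetical input; replace with whatever the transformer actually expects.
# Seldon's v1 protocol wraps tensors in a "data" envelope.
payload = {"data": {"ndarray": [[1.0, 2.0, 3.0]]}}

# Single predictions endpoint I am targeting through the port-forward:
url = "http://0.0.0.0:3000/api/v1.0/predictions"

body = json.dumps(payload)
print(body)

# To actually send it:
#   import requests
#   resp = requests.post(url, data=body,
#                        headers={"Content-Type": "application/json"})
#   print(resp.json())
```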