How to perform A/B Testing in Polymer Web Components?

2/11/2017

I'm encountering a situation at a customer: they want to do A/B Testing.

As far as I know, this usually happens at the load-balancer level (e.g. in Kubernetes), routing users to a particular version of the application (for example, the way a new version of Gmail is rolled out to a subset of users while a release is in progress).

Now, with web components, this customer wants a "dom-if" kind of setup where features are switched on inside the component itself when a certain condition is met, roughly like the sketch below. This will add overhead, of course.
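(A minimal Polymer 1.x sketch of what I mean; window.AB_FLAGS, new-feature and old-feature are placeholder names for however the flag is delivered and whatever is being tested.)

<dom-module id="ab-feature">
  <template>
    <!-- render the experimental variant only when the flag is set -->
    <template is="dom-if" if="[[abTestEnabled]]">
      <new-feature></new-feature>
    </template>
    <template is="dom-if" if="[[!abTestEnabled]]">
      <old-feature></old-feature>
    </template>
  </template>
  <script>
    Polymer({
      is: 'ab-feature',
      properties: {
        // placeholder: read the toggle from a global set by the host page
        abTestEnabled: {
          type: Boolean,
          value: function () {
            return Boolean(window.AB_FLAGS && window.AB_FLAGS.newFeature);
          }
        }
      }
    });
  </script>
</dom-module>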

I wonder if this is the way to go. The customer's reasoning is that a component can be used in hundreds of applications, so creating a separate build per variant might be too cumbersome, and toggling at the micro level (i.e. inside the component) would be the better approach. They are following LinkedIn/Airbnb.

As far as I know, these companies are not using Web Components.

The question is: what is advisable? Doing the A/B testing at the micro level (inside the component) or at the application level (using load balancers, as with Kubernetes)?

-- rjankie
ab-testing
javascript
kubernetes
polymer
web-component

2 Answers

2/12/2017

Not sure if this covers your full question, but within a microservice architecture you'd test within each service on its own.

Speaking of Kubernetes as your platform to host your services, you could have one LoadBalancer Service which selects Pods from different Deployments. Each Deployment could provide a different container/application version, or it could provide the same container but with different settings.

Here's a small example: the single Service has a selector (app: testme) which matches Pods from both Deployments. The Deployments define containers from the same image (yourcontainerimage:version) but with different environment variables. Also, the different numbers of replicas let you route different proportions of the traffic to one or the other option.

apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  ports:
    - name: http
      port: 8080
  selector:
    app: testme
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-deployment-a
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: testme
        ab: "on"        # quoted: bare "on" parses as a YAML boolean, not a string
    spec:
      containers:
      - name: app
        image: yourcontainerimage:version
        env:
        - name: FEATURE_TOGGLE
          value: "true"   # env var values must be strings
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-deployment-b
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: testme
        ab: "off"       # quoted for the same reason as above
    spec:
      containers:
      - name: app
        image: yourcontainerimage:version
        env:
        - name: FEATURE_TOGGLE
          value: "false"  # env var values must be strings
        ports:
        - containerPort: 8080

Depending on your application and the type of feature you test, you might want to adjust the Service, e.g. enable or disable session affinity so a given user keeps hitting the same variant. You'll find details in the official docs.
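For example, to pin each client to one variant for the duration of the experiment, the Service above could enable client-IP session affinity (a sketch; sessionAffinity is a standard field of the Kubernetes Service spec):

apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  sessionAffinity: ClientIP   # route requests from the same client IP to the same Pod
  ports:
    - name: http
      port: 8080
  selector:
    app: testme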

-- pagid
Source: StackOverflow

3/11/2017

The case of the distributed server side (e.g. microservices) is cleanly addressed by Variant. (Disclaimer: I work there.) Whichever component touches the experiment first creates a Variant session (independent of any notion of a session the host application may have, e.g. an HTTP session) and then passes the session handle to the next component, which can retrieve it, along with all the experiment-related data, from the Variant server. The only catch is that we only support Java components at this time.

Also, using deployment infrastructure like a load balancer for application concerns like A/B testing is a bad idea on many levels and should be abandoned.

-- Igor Urisman
Source: StackOverflow