I have defined a couple of case classes for JSON representation, but I am not sure whether I did it properly, as there are a lot of nested case classes. Entities like spec, metadata, and so on are JSON objects in their own right, as is the custom object itself.
Here are all the classes I have defined:
case class CustomObject(apiVersion: String, kind: String, metadata: Metadata, spec: Spec, labels: Object, version: String)
case class Metadata(creationTimestamp: String, generation: Int, uid: String, resourceVersion: String, name: String, namespace: String, selfLink: String)
case class Spec(mode: String, image: String, imagePullPolicy: String, mainApplicationFile: String, mainClass: String, deps: Deps, driver: Driver, executor: Executor, subresources: Subresources)
case class Driver(cores: Double, coreLimit: String, memory: String, serviceAccount: String, labels: Labels)
case class Executor(cores: Double, instances: Double, memory: String, labels: Labels)
case class Labels(version: String)
case class Subresources(status: Status)
case class Status()
case class Deps()
And this is a JSON structure for the custom K8s object I need to transform:
{
  "apiVersion": "sparkoperator.k8s.io/v1alpha1",
  "kind": "SparkApplication",
  "metadata": {
    "creationTimestamp": "2019-01-11T15:58:45Z",
    "generation": 1,
    "name": "spark-example",
    "namespace": "default",
    "resourceVersion": "268972",
    "selfLink": "/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/spark-example",
    "uid": "uid"
  },
  "spec": {
    "deps": {},
    "driver": {
      "coreLimit": "1000m",
      "cores": 0.1,
      "labels": {
        "version": "2.4.0"
      },
      "memory": "1024m",
      "serviceAccount": "default"
    },
    "executor": {
      "cores": 1,
      "instances": 1,
      "labels": {
        "version": "2.4.0"
      },
      "memory": "1024m"
    },
    "image": "gcr.io/ynli-k8s/spark:v2.4.0,
    "imagePullPolicy": "Always",
    "mainApplicationFile": "http://localhost:8089/spark_k8s_airflow.jar",
    "mainClass": "org.apache.spark.examples.SparkExample",
    "mode": "cluster",
    "subresources": {
      "status": {}
    },
    "type": "Scala"
  }
}
UPDATE: I want to decode this JSON into these case classes with Circe; however, with the classes above I get this error:
Error: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder[dataModel.CustomObject]
implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder[CustomObject]
I have defined implicit decoders for all case classes:
implicit val customObjectLabelsDecoder: Decoder[Labels] = deriveDecoder[Labels]
implicit val customObjectSubresourcesDecoder: Decoder[Subresources] = deriveDecoder[Subresources]
implicit val customObjectDepsDecoder: Decoder[Deps] = deriveDecoder[Deps]
implicit val customObjectStatusDecoder: Decoder[Status] = deriveDecoder[Status]
implicit val customObjectExecutorDecoder: Decoder[Executor] = deriveDecoder[Executor]
implicit val customObjectDriverDecoder: Decoder[Driver] = deriveDecoder[Driver]
implicit val customObjectSpecDecoder: Decoder[Spec] = deriveDecoder[Spec]
implicit val customObjectMetadataDecoder: Decoder[Metadata] = deriveDecoder[Metadata]
implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder[CustomObject]
The reason you can't derive a decoder for CustomObject is the labels: Object member. In circe all decoding is driven by static types, and circe does not provide encoders or decoders for types like Object or Any, which have no useful static information.
If you change that case class to something like the following:
case class CustomObject(apiVersion: String, kind: String, metadata: Metadata, spec: Spec)
…and leave the rest of your code as is, with the import:
import io.circe.Decoder, io.circe.generic.semiauto.deriveDecoder
If you then define your JSON document as doc (after adding the missing quotation mark to the "image": "gcr.io/ynli-k8s/spark:v2.4.0, line to make it valid JSON), the following should work just fine:
scala> io.circe.jawn.decode[CustomObject](doc)
res0: Either[io.circe.Error,CustomObject] = Right(CustomObject(sparkoperator.k8s.io/v1alpha1,SparkApplication,Metadata(2019-01-11T15:58:45Z,1,uid,268972,spark-example,default,/apis/sparkoperator.k8s.io/v1alpha1/namespaces/default/sparkapplications/spark-example),Spec(cluster,gcr.io/ynli-k8s/spark:v2.4.0,Always,http://localhost:8089/spark_k8s_airflow.jar,org.apache.spark.examples.SparkExample,Deps(),Driver(0.1,1000m,1024m,default,Labels(2.4.0)),Executor(1.0,1.0,1024m,Labels(2.4.0)),Subresources(Status()))))
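If you do want to keep something like the labels member instead of dropping it, one option (my sketch, not part of the original answer) is to give it a type circe already knows how to decode, such as io.circe.Json. Since the document above has no top-level "labels" or "version" fields, I also wrap both in Option so the derived decoder turns the missing fields into None:
import io.circe.{Decoder, Json}
import io.circe.generic.semiauto.deriveDecoder

// Json can represent any JSON value, so circe can always decode it,
// unlike Object or Any, which carry no static information.
case class CustomObject(
  apiVersion: String,
  kind: String,
  metadata: Metadata,
  spec: Spec,
  labels: Option[Json],   // absent in the document above, so decoded as None
  version: Option[String] // likewise absent
)

implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder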
Despite what one of the other answers says, circe can definitely derive encoders and decoders for case classes with no members, so that's not the problem here.
As a side note, I wish it were possible to have better error messages than this:
Error: could not find Lazy implicit value of type io.circe.generic.decoding.DerivedDecoder[dataModel.CustomObject]
But given the way circe-generic has to use Shapeless's Lazy right now, this is the best we can get. You can try circe-derivation, a mostly drop-in alternative to circe-generic's semi-automatic derivation that has better error messages (and some other advantages), or you can use a compiler plugin like splain that's specifically designed to give better error messages even in the presence of things like shapeless.Lazy.
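For example, with the circe-derivation module on the classpath, the definition looks essentially the same (a sketch assuming its io.circe.derivation.deriveDecoder macro, which mirrors the circe-generic API):
import io.circe.Decoder
import io.circe.derivation.deriveDecoder

// Macro-based derivation: no shapeless.Lazy involved, so a missing instance
// produces an error that points at the member type that lacks a Decoder.
implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder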
As one final note, you can clean up your semi-automatic definitions a bit by letting the type parameter on deriveDecoder be inferred:
implicit val customObjectLabelsDecoder: Decoder[Labels] = deriveDecoder
This is entirely a matter of taste, but I find it a little less noisy to read.
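This works because the explicit Decoder[Labels] annotation on the left-hand side already fixes the type, so the compiler can infer the macro's type parameter. The rest of the list shrinks the same way, for example:
implicit val customObjectSpecDecoder: Decoder[Spec] = deriveDecoder
implicit val customObjectMetadataDecoder: Decoder[Metadata] = deriveDecoder
implicit val customObjectDecoder: Decoder[CustomObject] = deriveDecoder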
Looks correct to me. Are you facing any issues?