I am playing with the spark-on-k8s-operator and was wondering if anyone has good examples/manifests for providing Spark conf via Kubernetes ConfigMaps? I'd appreciate some pointers.
For now, I am using Typesafe Config (com.typesafe.config) with an explicit application.conf file in src/main/resources:
import com.typesafe.config.{Config, ConfigFactory}
import org.slf4j.{Logger, LoggerFactory}

@transient lazy val logger: Logger = LoggerFactory.getLogger(getClass)

// Parse application.conf from the classpath and validate the topic's block against reference.conf
val config: Config = ConfigFactory.parseResources("application.conf")
config.checkValid(ConfigFactory.defaultReference(), topicName)

// Per-topic settings, looked up under the topic's key
private val source: String = config.getString(s"${topicName}.source")
private val topic: String = config.getString(s"${topicName}.topic")
private val brokers: String = config.getString(s"${topicName}.kafka_bootstrap_servers")
private val offsets: String = config.getString(s"${topicName}.auto_offset_reset")
private val failOnLoss: String = config.getString(s"${topicName}.fail_on_data_loss")
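The application.conf it reads is keyed per topic, roughly like this (the topic name and values below are just placeholders):

my_topic {
  source = "kafka"
  topic = "some.input.topic"
  kafka_bootstrap_servers = "broker-1:9092,broker-2:9092"
  auto_offset_reset = "latest"
  fail_on_data_loss = "false"
}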
I don't have any examples, but I can offer a suggestion. Instead of application.conf, you can use any YAML library (or another config file format) and take the path of the config file from an environment variable or system property (for example app.config.dir=/etc/app/config.yaml).
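A minimal sketch of the reading side, assuming SnakeYAML on the classpath (any YAML parser would do); the app.config.dir property and APP_CONFIG_PATH variable names are just examples, not fixed conventions:

import java.io.FileInputStream
import org.yaml.snakeyaml.Yaml

// Resolve the config file path from a system property or environment variable, with a default
val configPath: String =
  sys.props.get("app.config.dir")
    .orElse(sys.env.get("APP_CONFIG_PATH"))
    .getOrElse("/etc/app/config.yaml")

// A flat key/value YAML document loads as a java.util.Map; keys here mirror the application.conf ones
val settings: java.util.Map[String, Object] =
  new Yaml().load[java.util.Map[String, Object]](new FileInputStream(configPath))

val brokers: String = String.valueOf(settings.get("kafka_bootstrap_servers"))
val offsets: String = String.valueOf(settings.get("auto_offset_reset"))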
Then configure a Kubernetes ConfigMap containing that config.yaml and mount it at /etc/app; when your app starts, it will read its configuration from there (of course, you still have to set app.config.dir to point at the mounted file).
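On the Kubernetes side it would look roughly like this, assuming the operator's v1beta2 SparkApplication CRD; the names (app-config, /etc/app, APP_CONFIG_PATH) are placeholders, and if I remember correctly mounting volumes this way requires the operator's mutating admission webhook to be enabled:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.yaml: |
    kafka_bootstrap_servers: "broker-1:9092"
    auto_offset_reset: "latest"
---
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: my-streaming-app
spec:
  # type, mode, image, mainClass, mainApplicationFile, sparkVersion, etc. go here
  volumes:
    - name: app-config-vol
      configMap:
        name: app-config
  driver:
    volumeMounts:
      - name: app-config-vol
        mountPath: /etc/app
    envVars:
      APP_CONFIG_PATH: /etc/app/config.yaml
  executor:
    volumeMounts:
      - name: app-config-vol
        mountPath: /etc/app

If what you actually want to provide through the ConfigMap is Spark configuration itself (spark-defaults.conf and friends), the operator also has a spec.sparkConfigMap field which, as far as I recall, mounts the ConfigMap and points SPARK_CONF_DIR at it.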