Istio Proxy Crashing

2/27/2019

I installed Istio using the Kubernetes and Helm instructions and annotated a namespace to automatically inject the Istio sidecar proxy, but it does not appear to be working. The proxy tries to start but repeatedly crashes with a segmentation fault. I'm using Istio 1.0.6. This is the log output of the proxy:

[2019-02-27 21:48:50.892][78][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:223] gRPC config for type.googleapis.com/envoy.api.v2.Listener update rejected: Error adding/updating listener 10.16.11.206_8293: unable to read file: /etc/certs/root-cert.pem
[2019-02-27 21:48:50.892][78][warning][config] bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:70] gRPC config for type.googleapis.com/envoy.api.v2.Listener rejected: Error adding/updating listener 10.16.11.206_8293: unable to read file: /etc/certs/root-cert.pem
[2019-02-27 21:48:50.892][78][info][config] external/envoy/source/server/listener_manager_impl.cc:908] all dependencies initialized. starting workers
[2019-02-27 21:48:50.902][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:125] Caught Segmentation fault, suspect faulting address 0x0
[2019-02-27 21:48:50.902][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:94] Backtrace thr<83> obj</usr/local/bin/envoy> (If unsymbolized, use tools/stack_decode.py):
[2019-02-27 21:48:50.903][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr<83> #0 0x487d8d google::protobuf::internal::ArenaStringPtr::CreateInstanceNoArena()
[2019-02-27 21:48:50.904][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr<83> #1 0x4be9c4 Envoy::Utils::GrpcClientFactoryForCluster()
[2019-02-27 21:48:50.906][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr<83> #2 0x4b8389 Envoy::Tcp::Mixer::Control::Control()
[2019-02-27 21:48:50.907][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr<83> #3 0x4ba7c5 std::_Function_handler<>::_M_invoke()
[2019-02-27 21:48:50.908][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr<83> #4 0x792a15 std::_Function_handler<>::_M_invoke()
[2019-02-27 21:48:50.909][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr<83> #5 0x7c828b Envoy::Event::DispatcherImpl::runPostCallbacks()
[2019-02-27 21:48:50.910][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr<83> #6 0x7c836c Envoy::Event::DispatcherImpl::run()
[2019-02-27 21:48:50.912][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr<83> #7 0x7c4c15 Envoy::Server::WorkerImpl::threadRoutine()
[2019-02-27 21:48:50.913][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr<83> #8 0xb354ad Envoy::Thread::Thread::Thread()::{lambda()#1}::_FUN()
[2019-02-27 21:48:50.913][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:104] thr<83> obj</lib/x86_64-linux-gnu/libpthread.so.0>
[2019-02-27 21:48:50.913][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:114] thr<83> #9 0x7f2701a296b9 start_thread
[2019-02-27 21:48:50.913][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:104] thr<83> obj</lib/x86_64-linux-gnu/libc.so.6>
[2019-02-27 21:48:50.913][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:117] thr<83> #10 0x7f270145641c (unknown)
[2019-02-27 21:48:50.913][83][critical][backtrace] bazel-out/k8-opt/bin/external/envoy/source/server/_virtual_includes/backtrace_lib/server/backtrace.h:121] end backtrace thread 83
2019-02-27T21:48:50.923768Z warn    Epoch 0 terminated with an error: signal: segmentation fault
2019-02-27T21:48:50.923870Z warn    Aborted all epochs
2019-02-27T21:48:50.923924Z info    Epoch 0: set retry delay to 25.6s, budget to 2
-- Jeff Hutchins
istio
kubernetes
kubernetes-helm

1 Answer

2/28/2019

It appears that the issue was that the istio.default secret was missing from the namespace that my pods were running in. I would have assumed that the Istio infrastructure would create it there automatically, but it didn't appear to. Copying that secret from the istio-system namespace into my own namespace seems to have resolved the issue.
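For reference, copying the secret can be done with a pipeline along these lines — a sketch only, where `my-namespace` is a placeholder for your application namespace (verify the exact secret name first with `kubectl get secrets -n istio-system`):

```shell
# Export the istio.default secret from istio-system, rewrite its
# namespace field, and recreate it in the target namespace.
kubectl get secret istio.default -n istio-system -o yaml \
  | sed 's/namespace: istio-system/namespace: my-namespace/' \
  | kubectl apply -f -
```

Note that cluster-specific metadata (resourceVersion, uid) carried over in the exported YAML is generally ignored on create; if `kubectl apply` complains, deleting those fields from the YAML before applying should work.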

-- Jeff Hutchins
Source: StackOverflow