Received AliveMessage from a peer with the same PKI-ID as myself

5/22/2017

I am attempting to port the Hyperledger Fabric Getting Started guide to Kubernetes, but am struggling to get the peer1's to deploy. If I enable CORE_PEER_GOSSIP_BOOTSTRAP, I receive the error "Received AliveMessage from a peer with the same PKI-ID as myself".

How can I debug a peer reportedly having the same PKI-ID as another?

Using this as a starting point:

https://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html

I am able to create:

  • orderer and cli pods in the default namespace
  • peer0's, one in each of the org1 and org2 namespaces
  • peer1's, but only if I disable (comment out) CORE_PEER_GOSSIP_BOOTSTRAP

If I enable CORE_PEER_GOSSIP_BOOTSTRAP for the peer1's, I receive the following warning and error:

[gossip/gossip#10.0.0.10:7051] NewGossipService -> WARN 01c External endpoint is empty, peer will not be accessible outside of its organization
...
[gossip/discovery#10.0.0.10:7051] handleAliveMessage -> ERRO 02a Bad configuration detected: Received AliveMessage from a peer with the same PKI-ID as myself: tag:EMPTY alive_msg:<membership:<pki_id:"[[REDACTED]]" > timestamp:<inc_number:1495468533769417608 seq_num:416 > >

In order to better map the orderer and peers to DNS names, I'm using Kubernetes namespaces and this crypto-config:

OrdererOrgs:
  - Name: Orderer
    Domain: default.svc.cluster.local
    Specs:
      - Hostname: orderer
PeerOrgs:
  - Name: Org1
    Domain: org1.svc.cluster.local
    Template:
      Count: 2
    Users:
      Count: 2
  - Name: Org2
    Domain: org2.svc.cluster.local
    Template:
      Count: 2
    Users:
      Count: 2

In order to expose the peer0's to the other peers in their org and to expose the orderer, I have ClusterIP services for the peer0's (selecting only the peer0's) and for the orderer. It's inelegant, but I'm trying to get it working before I make it more elegant.
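A minimal sketch of such a ClusterIP Service for peer0 of org1 (the selector label app: peer0 and the port names are placeholders for whatever the peer0 Deployment actually uses; only the name and namespace are dictated by the DNS names above):

apiVersion: v1
kind: Service
metadata:
  name: peer0
  namespace: org1
spec:
  type: ClusterIP
  # Assumed label on the peer0 pod; adjust to match the Deployment.
  selector:
    app: peer0
  ports:
    - name: grpc      # peer gossip/endorsement port
      port: 7051
      targetPort: 7051
    - name: events    # event hub port
      port: 7053
      targetPort: 7053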

I am able to resolve orderer.default.svc.cluster.local, peer0.org1.svc.cluster.local, and peer0.org2.svc.cluster.local using nslookup from within a pod deployed to the default namespace on the cluster.

Absent a curl-like tool for gRPC, I am able to open sockets against these endpoints on ports 7051 and 7053.

-- DazWilkin
hyperledger-fabric
kubernetes

2 Answers

4/26/2019

First, make sure you are using the right certificates. Second, verify that your gossip environment/configuration is set correctly; for example, for peer1 of org1 (bootstrapping from peer0):

environment:
  - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:8051
  - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
  - CORE_PEER_GOSSIP_ENDPOINT=peer1.org1.example.com:8051

Or, in core.yaml:

peer:
  gossip:
    bootstrap: peer0.org1.example.com:7051
    externalEndpoint: peer1.org1.example.com:8051
    endpoint: peer1.org1.example.com:8051
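Mapped onto the Kubernetes DNS names from your question, peer1 of org1 would presumably look something like this (a sketch assuming each peer1 listens on 7051 inside the cluster; hostnames follow the crypto-config in the question):

environment:
  # peer1 bootstraps gossip from peer0 of its own org
  - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.svc.cluster.local:7051
  # both endpoints must be peer1's own, distinct address
  - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.svc.cluster.local:7051
  - CORE_PEER_GOSSIP_ENDPOINT=peer1.org1.svc.cluster.local:7051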

Edited: Also make sure that you have set up your CA properly.

Hope this helps; it worked for me, and I was able to connect the peers successfully.

-- Teh Sunn Liu
Source: StackOverflow

8/5/2017

If the peers are started from the same node, it's possible that you are mounting the same crypto material (the path to the mspconfig directory) for both peers. If that is the case, separate the directory structures for the two peers, keep their respective certificates in them, update the respective MSP paths in the docker-compose file, and try again.
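As an illustrative docker-compose sketch (the paths follow the layout cryptogen typically generates; the service names are just examples), each peer mounts its own MSP directory rather than a shared one:

peer0.org1.example.com:
  environment:
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/msp
  volumes:
    # peer0's own crypto material
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp

peer1.org1.example.com:
  environment:
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/msp
  volumes:
    # peer1's own crypto material, not the same directory as peer0's
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/fabric/msp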

-- MeenakshiSingh
Source: StackOverflow