This repository was archived by the owner on Feb 18, 2025. It is now read-only.
remote error: tls: bad certificate #873
Closed
Description
Question
How can I debug the "remote error: tls: bad certificate" shown in the output below? It is not clear to me which part of orchestrator has TLS problems.
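One way to see which side rejects the certificate is to replay the handshake with openssl s_client. The sketch below is self-contained (it serves a throwaway self-signed cert from a local openssl s_server on port 14433 as a stand-in); against the real setup you would point the same s_client line at the configured listener, e.g. openssl s_client -connect mysql-001.livesystem.at:3000 -CAfile /var/lib/orchestrator/pki/ca_cert.pem.

```shell
# Stand-in for the orchestrator HTTPS listener: a throwaway self-signed
# cert served by openssl s_server on a local port.
tmp=$(mktemp -d) && cd "$tmp"
openssl req -x509 -newkey rsa:2048 -nodes -keyout s.key -out s.pem \
    -subj "/CN=demo-server" -days 1 2>/dev/null
openssl s_server -accept 14433 -cert s.pem -key s.key -quiet &
srv=$!
sleep 1

# Replay the handshake and print the certificate the server presented.
# In the full s_client output, "Verify return code: 0 (ok)" means the
# chain verifies against -CAfile; any other code explains a rejection.
probe=$(openssl s_client -connect 127.0.0.1:14433 -CAfile s.pem \
        </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer)
echo "$probe"

kill $srv 2>/dev/null
```

Running the same probe against each node on :3000, with and without -CAfile, shows whether the server presents the expected certificate and whether it chains to the configured CA.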
config
cat /var/lib/orchestrator/orchestrator-sqlite.conf.json
{
  "Debug": true,
  "EnableSyslog": false,
  "ListenAddress": ":3000",
  "AutoPseudoGTID": true,
  "RaftEnabled": true,
  "RaftDataDir": "/var/lib/orchestrator",
  "RaftBind": "104.248.131.78",
  "RaftNodes": ["mysql-001.livesystem.at", "mysql-002.livesystem.at", "mysql-003.livesystem.at"],
  "BackendDB": "sqlite",
  "SQLite3DataFile": "/var/lib/orchestrator/data/orchestrator.sqlite3",
  "MySQLTopologyCredentialsConfigFile": "/var/lib/orchestrator/orchestrator-topology.cnf",
  "InstancePollSeconds": 5,
  "DiscoverByShowSlaveHosts": false,
  "FailureDetectionPeriodBlockMinutes": 60,
  "UseSSL": true,
  "SSLPrivateKeyFile": "/var/lib/orchestrator/pki/mysql-001.livesystem.at_privatekey.pem",
  "SSLCertFile": "/var/lib/orchestrator/pki/mysql-001.livesystem.at_cert.pem",
  "SSLCAFile": "/var/lib/orchestrator/pki/ca_cert.pem",
  "SSLSkipVerify": false
}
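With SSLSkipVerify false, two properties must hold for the files under /var/lib/orchestrator/pki/ on every node: the certificate must chain to the CA in SSLCAFile, and its Subject Alternative Name must cover the name peers use to reach the node. A hedged sketch of both checks, using throwaway files in place of the real ones:

```shell
tmp=$(mktemp -d) && cd "$tmp"

# Throwaway CA and node cert standing in for ca_cert.pem and
# mysql-001.livesystem.at_cert.pem; run the same two openssl checks
# below against the real files on each node.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca_cert.pem \
    -subj "/CN=demo-ca" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout node.key -out node.csr \
    -subj "/CN=mysql-001.livesystem.at" 2>/dev/null
printf 'subjectAltName=DNS:mysql-001.livesystem.at\n' > san.cnf
openssl x509 -req -in node.csr -CA ca_cert.pem -CAkey ca.key \
    -CAcreateserial -out node_cert.pem -days 1 -extfile san.cnf 2>/dev/null

# Check 1: does the cert chain to the configured CA? Prints "node_cert.pem: OK".
openssl verify -CAfile ca_cert.pem node_cert.pem

# Check 2: which names does the cert actually cover?
openssl x509 -in node_cert.pem -noout -text | grep -A1 'Subject Alternative Name'
```

If check 1 fails on any node, or the SAN does not include the hostname listed in RaftNodes, a peer verifying that certificate will abort the handshake, and a "bad certificate" alert like the one reported is a common symptom.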
debug output
root@mysql-001:~# cd /usr/local/orchestrator && orchestrator --debug --config=/var/lib/orchestrator/orchestrator-sqlite.conf.json --stack http
2019-05-07 10:14:49 INFO starting orchestrator, version: 3.0.14, git commit: f4c69ad05010518da784ce61865e65f0d9e0081c
2019-05-07 10:14:49 INFO Read config: /var/lib/orchestrator/orchestrator-sqlite.conf.json
2019-05-07 10:14:49 DEBUG Parsed topology credentials from /var/lib/orchestrator/orchestrator-topology.cnf
2019-05-07 10:14:49 DEBUG Connected to orchestrator backend: sqlite on /var/lib/orchestrator/data/orchestrator.sqlite3
2019-05-07 10:14:49 DEBUG Initializing orchestrator
2019-05-07 10:14:49 DEBUG Migrating database schema
2019-05-07 10:14:49 DEBUG Migrated database schema to version [3.0.14]
2019-05-07 10:14:49 INFO Connecting to backend :3306: maxConnections: 128, maxIdleConns: 32
2019-05-07 10:14:49 INFO Starting Discovery
2019-05-07 10:14:49 INFO Registering endpoints
2019-05-07 10:14:49 INFO continuous discovery: setting up
2019-05-07 10:14:49 DEBUG Setting up raft
2019-05-07 10:14:49 DEBUG Queue.startMonitoring(DEFAULT)
2019-05-07 10:14:49 INFO Starting HTTPS listener
2019-05-07 10:14:49 INFO Read in CA file: /var/lib/orchestrator/pki/ca_cert.pem
2019-05-07 10:14:49 DEBUG raft: advertise=104.248.131.78:10008
2019-05-07 10:14:49 DEBUG raft: transport=&{connPool:map[] connPoolLock:{state:0 sema:0} consumeCh:0xc42008b500 heartbeatFn:<nil> heartbeatFnLock:{state:0 sema:0} logger:0xc420911400 maxPool:3 shutdown:false shutdownCh:0xc42008b560 shutdownLock:{state:0 sema:0} stream:0xc42026b9a0 timeout:10000000000 TimeoutScale:262144}
2019-05-07 10:14:49 DEBUG raft: peers=[104.248.131.78:10008 142.93.100.13:10008 142.93.161.104:10008]
2019-05-07 10:14:49 DEBUG raft: logStore=&{dataDir:/var/lib/orchestrator backend:<nil>}
2019-05-07 10:14:50 INFO raft: store initialized at /var/lib/orchestrator/raft_store.db
2019-05-07 10:14:50 INFO new raft created
2019/05/07 10:14:50 [INFO] raft: Node at 104.248.131.78:10008 [Follower] entering Follower state (Leader: "")
2019-05-07 10:14:50 INFO continuous discovery: starting
2019-05-07 10:14:50 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019/05/07 10:14:51 [WARN] raft: Heartbeat timeout from "" reached, starting election
2019/05/07 10:14:51 [INFO] raft: Node at 104.248.131.78:10008 [Candidate] entering Candidate state
2019/05/07 10:14:51 [ERR] raft: Failed to make RequestVote RPC to 142.93.100.13:10008: dial tcp 142.93.100.13:10008: connect: connection refused
2019/05/07 10:14:51 [ERR] raft: Failed to make RequestVote RPC to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:51 [DEBUG] raft: Votes needed: 2
2019/05/07 10:14:51 [DEBUG] raft: Vote granted from 104.248.131.78:10008. Tally: 1
2019/05/07 10:14:53 [WARN] raft: Election timeout reached, restarting election
2019/05/07 10:14:53 [INFO] raft: Node at 104.248.131.78:10008 [Candidate] entering Candidate state
2019/05/07 10:14:53 [ERR] raft: Failed to make RequestVote RPC to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:53 [DEBUG] raft: Votes needed: 2
2019/05/07 10:14:53 [DEBUG] raft: Vote granted from 104.248.131.78:10008. Tally: 1
2019/05/07 10:14:53 [DEBUG] raft: Vote granted from 142.93.100.13:10008. Tally: 2
2019/05/07 10:14:53 [INFO] raft: Election won. Tally: 2
2019/05/07 10:14:53 [INFO] raft: Node at 104.248.131.78:10008 [Leader] entering Leader state
2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:53 [INFO] raft: pipelining replication to peer 142.93.100.13:10008
2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:53 [DEBUG] raft: Node 104.248.131.78:10008 updated peer set (2): [104.248.131.78:10008 142.93.100.13:10008 142.93.161.104:10008]
2019-05-07 10:14:53 DEBUG orchestrator/raft: applying command 2: leader-uri
2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:53 [WARN] raft: Failed to contact 142.93.161.104:10008 in 508.458369ms
2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:53 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019-05-07 10:14:53 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019/05/07 10:14:53 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:54 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:54 [WARN] raft: Failed to contact 142.93.161.104:10008 in 998.108974ms
2019/05/07 10:14:54 [ERR] raft: Failed to AppendEntries to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:54 [ERR] raft: Failed to heartbeat to 142.93.161.104:10008: dial tcp 142.93.161.104:10008: connect: connection refused
2019/05/07 10:14:54 [WARN] raft: Failed to contact 142.93.161.104:10008 in 1.450057377s
2019-05-07 10:14:54 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019/05/07 10:14:54 [INFO] raft: pipelining replication to peer 142.93.161.104:10008
2019-05-07 10:14:55 DEBUG raft leader is 104.248.131.78:10008 (this host); state: Leader
2019-05-07 10:14:55 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-05-07 10:14:56 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-05-07 10:14:57 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-05-07 10:14:58 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-05-07 10:14:59 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-05-07 10:15:00 DEBUG raft leader is 104.248.131.78:10008 (this host); state: Leader
2019-05-07 10:15:00 DEBUG orchestrator/raft: applying command 3: request-health-report
2019/05/07 10:15:00 http: TLS handshake error from 104.248.131.78:47866: remote error: tls: bad certificate
2019/05/07 10:15:00 http: TLS handshake error from 142.93.100.13:51332: remote error: tls: bad certificate
2019/05/07 10:15:00 http: TLS handshake error from 142.93.161.104:47940: remote error: tls: bad certificate
2019-05-07 10:15:00 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-05-07 10:15:01 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-05-07 10:15:02 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-05-07 10:15:03 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2019-05-07 10:15:05 DEBUG raft leader is 104.248.131.78:10008 (this host); state: Leader
2019-05-07 10:15:10 DEBUG raft leader is 104.248.131.78:10008 (this host); state: Leader
2019-05-07 10:15:10 DEBUG orchestrator/raft: applying command 4: request-health-report
2019/05/07 10:15:10 http: TLS handshake error from 104.248.131.78:47870: remote error: tls: bad certificate
2019/05/07 10:15:10 http: TLS handshake error from 142.93.100.13:51334: remote error: tls: bad certificate
2019/05/07 10:15:10 http: TLS handshake error from 142.93.161.104:47942: remote error: tls: bad certificate
2019-05-07 10:15:15 DEBUG raft leader is 104.248.131.78:10008 (this host); state: Leader
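Note that the handshake errors line up with the request-health-report raft commands: each report makes the nodes (including this host, hence the error from 104.248.131.78) call back over the HTTPS API, and those client connections are the ones failing. A hedged way to reproduce the failing call from any node is a curl loop over the RaftNodes hostnames, assuming the stock /api/health endpoint:

```shell
# Call each peer's HTTPS API with the shared CA, the way the raft health
# report does; a TLS failure here reproduces the handshake errors in the
# log. Hostnames and CA path are taken from the config above.
report=""
for host in mysql-001.livesystem.at mysql-002.livesystem.at mysql-003.livesystem.at; do
  code=$(curl -sS --connect-timeout 3 --max-time 5 \
             --cacert /var/lib/orchestrator/pki/ca_cert.pem \
             -o /dev/null -w '%{http_code}' \
             "https://$host:3000/api/health" 2>/dev/null) || true
  report="$report$host: HTTP ${code:-000}\n"
done
printf "$report"   # HTTP 200 everywhere means TLS between nodes is healthy
```

Adding -v to the curl call shows the exact verification step that fails (untrusted issuer vs. hostname mismatch), which narrows down whether the CA file or the certificate SANs are at fault.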