Internet Services · Oracle GoldenGate · Kafka

OGG replication to Kafka is failing with an error. Can anyone point me in the right direction?

As the title says.
My configuration is as follows:
bootstrap.servers=10.200.123.251:2181,10.200.123.223:2181,10.200.125.166:2181
acks=1
compression.type=gzip
reconnect.backoff.ms=1000
value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
batch.size=102400
linger.ms=10000

The error log is below. It looks like the send times out when publishing messages:

Dec 07, 2021 6:08:18 PM oracle.goldengate.format.json.JsonSchemaGenerator generateSchemaFile
INFO: Creating JSON schema for table WU_ADMIN.T_TESTT_TEST01 in file ./dirdef/WU_ADMIN.T_TESTT_TEST01.schema.json
Dec 07, 2021 6:09:18 PM oracle.goldengate.handler.kafka.impl.AbstractKafkaProducer$KafkaHandlerCallback onCompletion
SEVERE: A failure occurred sending a message to Kafka.
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

Dec 07, 2021 6:10:18 PM oracle.goldengate.handler.kafka.impl.AbstractKafkaProducer$KafkaHandlerCallback onCompletion
SEVERE: A failure occurred sending a message to Kafka.
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

                   2 records processed as of 2021-12-07 18:10:18 (rate 0,delta 0)
Dec 07, 2021 6:11:18 PM oracle.goldengate.handler.kafka.impl.AbstractKafkaProducer$KafkaHandlerCallback onCompletion
SEVERE: A failure occurred sending a message to Kafka.
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

                   3 records processed as of 2021-12-07 18:11:18 (rate 0,delta 0)
Dec 07, 2021 6:12:18 PM oracle.goldengate.handler.kafka.impl.AbstractKafkaProducer$KafkaHandlerCallback onCompletion
SEVERE: A failure occurred sending a message to Kafka.
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.


1 answer

沈天真 · Pre-sales Support, IPS

https://stackoverflow.com/questions/62198606/kafkatimeouterror-failed-to-update-metadata-after-60-0-secs

  1. If the topic does not exist, you are trying to produce to it, and automatic topic creation is disabled, this error can occur.

     Possible resolution: in the broker configuration (server.properties), set auto.create.topics.enable=true (note that this is the default in Confluent Kafka).

  2. Another cause can be network congestion or a slow link, if it takes more than 60 seconds to fetch metadata from the Kafka broker (see the producer sketch after this list).

     Possible resolution: in the producer configuration, raise max.block.ms, e.g. max.block.ms=120000 (120 seconds).

  3. Check whether your broker(s) are going down for some reason (for example, too much load) and why they cannot answer metadata requests. You can usually see this in the broker's server.log.
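
To illustrate the metadata-timeout path outside of OGG, here is a minimal standalone producer sketch (a hypothetical BrokerCheck class; the broker addresses on port 9092 and the topic name are assumptions, adjust them to your environment). The send() callback is where the same "Failed to update metadata" TimeoutException surfaces if bootstrap.servers cannot be reached:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BrokerCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker listeners (default port 9092), NOT the ZooKeeper ensemble on 2181 -- addresses assumed
        props.put("bootstrap.servers", "10.200.123.251:9092,10.200.123.223:9092,10.200.125.166:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("max.block.ms", "120000"); // wait up to 120 s for metadata before send() gives up

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // Topic name assumed to follow the OGG table mapping; replace with your actual topic
            producer.send(new ProducerRecord<>("WU_ADMIN.T_TESTT_TEST01", "ping".getBytes()),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // With unreachable bootstrap servers this is where
                            // "Failed to update metadata after ... ms" is reported
                            exception.printStackTrace();
                        } else {
                            System.out.println("Delivered to " + metadata.topic()
                                    + " partition " + metadata.partition());
                        }
                    });
            producer.flush();
        }
    }
}

If this standalone producer also times out, the problem is connectivity or bootstrap.servers rather than the OGG handler itself.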
Hardware Manufacturing · 2021-12-07
  • Thank you very much, the problem is solved. bootstrap.servers should point to the Kafka brokers' IPs and ports, but I had configured the ZooKeeper addresses.
    2021-12-08
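
For reference, the corrected producer properties only need bootstrap.servers pointed at the Kafka broker listeners instead of ZooKeeper. A sketch, assuming the brokers listen on the default port 9092:

bootstrap.servers=10.200.123.251:9092,10.200.123.223:9092,10.200.125.166:9092
acks=1
compression.type=gzip
reconnect.backoff.ms=1000
value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
batch.size=102400
linger.ms=10000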

Asker

wang123kui
Systems Analyst
