【kafka】Notes on using Kafka's native commands on Linux

The environment is Linux with 4 machines, running Kafka 3.6. Kafka is installed on node1, node2, and node3; ZooKeeper is installed on node2, node3, and node4.

After installing Kafka, go into the bin directory. It contains many .sh scripts, which are the basis for all the commands we will run.
Start Kafka; the command takes the relative path of the broker configuration file as its argument:

  kafka-server-start.sh ./server.properties

When you run into an unfamiliar .sh script, just type its name and press Enter: it prints the available command-line options. If you use an option incorrectly, Kafka also prints a corresponding error message.

  [root@localhost bin]# kafka-topics.sh
  Create, delete, describe, or change a topic.
  Option                                   Description
  ------                                   -----------
  --alter                                  Alter the number of partitions and
                                             replica assignment. Update the
                                             configuration of an existing topic
                                             via --alter is no longer supported
                                             here (the kafka-configs CLI supports
                                             altering topic configs with a --
                                             bootstrap-server option).
  --at-min-isr-partitions                  if set when describing topics, only
                                             show partitions whose isr count is
                                             equal to the configured minimum.
  --bootstrap-server <String: server to    REQUIRED: The Kafka server to connect
    connect to>                              to.
  --command-config <String: command        Property file containing configs to be
    config property file>                    passed to Admin Client. This is used
                                             only with --bootstrap-server option
                                             for describing and altering broker
                                             configs.
  --config <String: name=value>            A topic configuration override for the
                                             topic being created or altered. The
                                             following is a list of valid
                                             configurations:
                                             cleanup.policy
                                             compression.type
                                             delete.retention.ms
                                             file.delete.delay.ms
                                             flush.messages
                                             flush.ms
                                             follower.replication.throttled.
                                               replicas
                                             index.interval.bytes
                                             leader.replication.throttled.replicas
                                             local.retention.bytes
                                             local.retention.ms
                                             max.compaction.lag.ms
                                             max.message.bytes
                                             message.downconversion.enable
                                             message.format.version
                                             message.timestamp.after.max.ms
                                             message.timestamp.before.max.ms
                                             message.timestamp.difference.max.ms
                                             message.timestamp.type
                                             min.cleanable.dirty.ratio
                                             min.compaction.lag.ms
                                             min.insync.replicas
                                             preallocate
                                             remote.storage.enable
                                             retention.bytes
                                             retention.ms
                                             segment.bytes
                                             segment.index.bytes
                                             segment.jitter.ms
                                             segment.ms
                                             unclean.leader.election.enable
                                           See the Kafka documentation for full
                                             details on the topic configs. It is
                                             supported only in combination with --
                                             create if --bootstrap-server option
                                             is used (the kafka-configs CLI
                                             supports altering topic configs with
                                             a --bootstrap-server option).
  --create                                 Create a new topic.
  --delete                                 Delete a topic
  --delete-config <String: name>           A topic configuration override to be
                                             removed for an existing topic (see
                                             the list of configurations under the
                                             --config option). Not supported with
                                             the --bootstrap-server option.
  --describe                               List details for the given topics.
  --exclude-internal                       exclude internal topics when running
                                             list or describe command. The
                                             internal topics will be listed by
                                             default
  --help                                   Print usage information.
  --if-exists                              if set when altering or deleting or
                                             describing topics, the action will
                                             only execute if the topic exists.
  --if-not-exists                          if set when creating topics, the
                                             action will only execute if the
                                             topic does not already exist.
  --list                                   List all available topics.
  --partitions <Integer: # of partitions>  The number of partitions for the topic
                                             being created or altered (WARNING:
                                             If partitions are increased for a
                                             topic that has a key, the partition
                                             logic or ordering of the messages
                                             will be affected). If not supplied
                                             for create, defaults to the cluster
                                             default.
  --replica-assignment <String:            A list of manual partition-to-broker
    broker_id_for_part1_replica1 :           assignments for the topic being
    broker_id_for_part1_replica2 ,           created or altered.
    broker_id_for_part2_replica1 :
    broker_id_for_part2_replica2 , ...>
  --replication-factor <Integer:           The replication factor for each
    replication factor>                      partition in the topic being
                                             created. If not supplied, defaults
                                             to the cluster default.
  --topic <String: topic>                  The topic to create, alter, describe
                                             or delete. It also accepts a regular
                                             expression, except for --create
                                             option. Put topic name in double
                                             quotes and use the '\' prefix to
                                             escape regular expression symbols; e.
                                             g. "test\.topic".
  --topic-id <String: topic-id>            The topic-id to describe.This is used
                                             only with --bootstrap-server option
                                             for describing topics.
  --topics-with-overrides                  if set when describing topics, only
                                             show topics that have overridden
                                             configs
  --unavailable-partitions                 if set when describing topics, only
                                             show partitions whose leader is not
                                             available
  --under-min-isr-partitions               if set when describing topics, only
                                             show partitions whose isr count is
                                             less than the configured minimum.
  --under-replicated-partitions            if set when describing topics, only
                                             show under replicated partitions
  --version                                Display Kafka version.

For example, let's create a topic named test:

  kafka-topics.sh --create --topic test --bootstrap-server node1:9092 --partitions 2 --replication-factor 2
  Created topic test.

Connect to the Kafka broker on node1 and list the topics in its metadata:

  [root@localhost bin]# kafka-topics.sh --list --bootstrap-server node1:9092
  test

View the details of a specific topic:

  [root@localhost bin]# kafka-topics.sh --describe --topic test --bootstrap-server node1:9092
  Topic: test TopicId: WgjG4Ou_Q7iQvzgipRgzjg PartitionCount: 2 ReplicationFactor: 2 Configs:
  Topic: test Partition: 0 Leader: 2 Replicas: 2,1 Isr: 2,1
  Topic: test Partition: 1 Leader: 3 Replicas: 3,2 Isr: 3,2
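Scripts often need these per-partition details, and since this describe output has no machine-readable mode, a common trick is to parse the lines directly. A minimal Python sketch, written against the exact output shown above (an illustration, not a stable interface):

```python
def parse_partition_line(line):
    """Parse one `kafka-topics.sh --describe` partition line into a dict.

    Tokens alternate "Key:" / value, e.g. "Topic: test Partition: 0 ...".
    """
    toks = line.split()
    info = {toks[i].rstrip(":"): toks[i + 1] for i in range(0, len(toks), 2)}
    return {
        "partition": int(info["Partition"]),
        "leader": int(info["Leader"]),
        "replicas": [int(b) for b in info["Replicas"].split(",")],
        "isr": [int(b) for b in info["Isr"].split(",")],
    }

p = parse_partition_line("Topic: test Partition: 0 Leader: 2 Replicas: 2,1 Isr: 2,1")
# Sanity checks: the leader is always one of the replicas, and the ISR
# (in-sync replicas) is always a subset of the replica list.
assert p["leader"] in p["replicas"]
assert set(p["isr"]) <= set(p["replicas"])
print(p)
```

For example, a health check could flag any partition where `len(isr) < len(replicas)` as under-replicated.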

Start a producer on one of the machines and two consumers on the other two machines, all in the same consumer group.

  [root@localhost bin]# kafka-console-producer.sh --broker-list node1:9092 --topic test
  >hello 03
  >1
  >2
  >3
  >4
  >5
  >6
  >7
  >8

We can see that within the same group, as long as the set of registered consumers does not change, each partition's data is consumed by one and only one consumer. This satisfies scenarios where message ordering must be preserved and concurrent consumption is not allowed.

  [root@localhost bin]# kafka-console-consumer.sh --bootstrap-server node1:9092 --topic test --group msb
  hello 03
  1
  2
  3
  4
  5
  6
  7
  8
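The exclusivity observed above comes from partition assignment inside the group: the group coordinator gives every partition to exactly one member. A rough Python sketch of the arithmetic behind range-style assignment (simplified to a single topic; the real assignor runs in the Kafka client, this is only an illustration):

```python
def range_assign(partitions, consumers):
    """Simplified range assignment: sort consumers, then hand each one a
    contiguous chunk of partitions; when the counts don't divide evenly,
    the first few consumers get one extra partition."""
    consumers = sorted(consumers)
    per, extra = divmod(len(partitions), len(consumers))
    out, start = {}, 0
    for i, c in enumerate(consumers):
        count = per + (1 if i < extra else 0)
        out[c] = partitions[start:start + count]
        start += count
    return out

# Two partitions, two consumers: each consumer owns exactly one partition,
# so messages in a given partition are read by one and only one consumer.
print(range_assign([0, 1], ["consumer-a", "consumer-b"]))
```

This also explains why, with the console producer sending keyless messages to a single partition, only one of the two consumers printed anything.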

Inspect the state of a consumer group:

  [root@localhost bin]# kafka-consumer-groups.sh --bootstrap-server node2:9092 --group msb --describe
  GROUP  TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID                                            HOST              CLIENT-ID
  msb    test   1          24              24              0    console-consumer-4987804d-6e59-4f4d-9952-9afb9aff6cbe  /192.168.184.130  console-consumer
  msb    test   0          0               0               0    console-consumer-242992e4-7801-4a38-a8f3-8b44056ed4b6  /192.168.184.130  console-consumer
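The LAG column is simply LOG-END-OFFSET minus CURRENT-OFFSET per partition, so monitoring scripts can recompute and sum it themselves. A minimal sketch using the numbers from the output above:

```python
def partition_lag(current_offset, log_end_offset):
    """Messages produced to a partition but not yet consumed by the group."""
    return log_end_offset - current_offset

# (CURRENT-OFFSET, LOG-END-OFFSET) pairs taken from the describe output above.
rows = [(24, 24), (0, 0)]
total = sum(partition_lag(cur, end) for cur, end in rows)
print("total lag:", total)  # → total lag: 0
```

A total lag of 0 means the group is fully caught up; a lag that keeps growing means consumers cannot keep pace with producers.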

Finally, let's look at what ZooKeeper holds.
A new kafka node has appeared under the ZK root:

  [zk: localhost:2181(CONNECTED) 1] ls /
  [kafka, node1, node6, node7, testLock, zookeeper]

A lot of Kafka metadata is stored in the children of this node, for example:

  [zk: localhost:2181(CONNECTED) 2] ls /kafka
  [admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification]
  # cluster id
  [zk: localhost:2181(CONNECTED) 3] ls /kafka/cluster
  [id]
  [zk: localhost:2181(CONNECTED) 5] get /kafka/cluster/id
  {"version":"1","id":"8t14lxoAS1SdXapY6ysw_A"}
  # broker id of the current controller
  [zk: localhost:2181(CONNECTED) 6] get /kafka/controller
  {"version":2,"brokerid":3,"timestamp":"1698841142070","kraftControllerEpoch":-1}

Under topics we can see __consumer_offsets, the internal topic Kafka uses to store consumer offsets.

  [zk: localhost:2181(CONNECTED) 10] ls /kafka/brokers/topics
  [__consumer_offsets, test]
  [zk: localhost:2181(CONNECTED) 12] get /kafka/brokers/topics/__consumer_offsets
  {"partitions":{"44":[1],"45":[2],"46":[3],"47":[1],"48":[2],"49":[3],"10":[3],"11":[1],"12":[2],"13":[3],"14":[1],"15":[2],"16":[3],"17":[1],"18":[2],"19":[3],"0":[2],"1":[3],"2":[1],"3":[2],"4":[3],"5":[1],"6":[2],"7":[3],"8":[1],"9":[2],"20":[1],"21":[2],"22":[3],"23":[1],"24":[2],"25":[3],"26":[1],"27":[2],"28":[3],"29":[1],"30":[2],"31":[3],"32":[1],"33":[2],"34":[3],"35":[1],"36":[2],"37":[3],"38":[1],"39":[2],"40":[3],"41":[1],"42":[2],"43":[3]},"topic_id":"RGxJyefAQlKrmY3LTVbKGw","adding_replicas":{},"removing_replicas":{},"version":3}
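Which of these 50 partitions holds a given group's commits is determined by abs(groupId.hashCode()) % offsets.topic.num.partitions (50 by default) in the broker's group coordinator. A small Python sketch that mimics Java's String.hashCode to locate the partition for the msb group used above:

```python
def java_string_hashcode(s):
    """Java's String.hashCode(): h = 31*h + ch, with 32-bit signed overflow."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def offsets_partition(group_id, num_partitions=50):
    """__consumer_offsets partition that stores a group's offset commits."""
    return abs(java_string_hashcode(group_id)) % num_partitions

print(offsets_partition("msb"))  # → 12
```

So the committed offsets of group msb live in partition 12 of __consumer_offsets, which per the znode above is hosted on broker 2.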

Reposted from: https://blog.csdn.net/gengzhihao10/article/details/134191490
Copyright belongs to the original author, 不想睡觉的橘子君. In case of infringement, please contact us for removal.
