You have the option of either adding topics manually or having them be created automatically when data is first published to a non-existent topic. If topics are auto-created then you may want to tune the default topic configurations used for auto-created topics.
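The defaults applied to auto-created topics come from the broker configuration. A minimal sketch of the relevant settings in the broker's server.properties (the values shown are illustrative, not recommendations):

    # server.properties (broker configuration)
    auto.create.topics.enable=true    # create topics on first publish to an unknown name
    num.partitions=3                  # default partition count for auto-created topics
    default.replication.factor=2      # default replication factor for auto-created topics

Setting auto.create.topics.enable=false disables auto-creation entirely, forcing all topics to be added manually.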


Topics are added and modified using the topic tool:

 > bin/kafka-topics.sh --zookeeper zk_host:port/chroot --create --topic my_topic_name \
       --partitions 20 --replication-factor 3 --config x=y
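Once created, topics can be inspected with the same tool. For example, to list all topics or show the partition and replica assignment of the topic created above:

 > bin/kafka-topics.sh --zookeeper zk_host:port/chroot --list
 > bin/kafka-topics.sh --zookeeper zk_host:port/chroot --describe --topic my_topic_name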

The replication factor controls how many servers will replicate each message that is written. If you have a replication factor of 3 then up to 2 servers can fail before you will lose access to your data. We recommend you use a replication factor of 2 or 3 so that you can transparently bounce machines without interrupting data consumption.


The partition count controls how many logs the topic will be sharded into. There are several impacts of the partition count. First, each partition must fit entirely on a single server. So if you have 20 partitions, the full data set (and read and write load) will be handled by no more than 20 servers (not counting replicas). Finally, the partition count impacts the maximum parallelism of your consumers. This is discussed in greater detail in the concepts section.
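A topic's partition count can be increased later with the --alter flag of the same tool (though it can never be decreased, and increasing it changes how keyed messages map to partitions). A sketch of expanding the topic above to 40 partitions:

 > bin/kafka-topics.sh --zookeeper zk_host:port/chroot --alter --topic my_topic_name \
       --partitions 40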


The configurations added on the command line override the default settings the server has for things like the length of time data should be retained. The complete set of per-topic configurations is documented here.
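As one concrete instance of such an override, retention.ms is a per-topic setting controlling how long data is retained; the value below (one day, in milliseconds) is illustrative:

 > bin/kafka-topics.sh --zookeeper zk_host:port/chroot --create --topic my_topic_name \
       --partitions 20 --replication-factor 3 --config retention.ms=86400000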

