Monitoring Kafka clusters using Ganglia is a matter of a few steps. This blog post lists those steps, assuming that you already have your Kafka cluster ready.
Step-I:
Set up JMXTrans on all the machines of the Kafka cluster, just as was done on the Storm cluster in the previous post.
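The exact installation steps are the same as in the Storm post. Whichever way you install it, you should end up with the JMXTrans start script that is used later in this post. A quick sanity check on each node (the path assumes the standard JMXTrans package layout used below; adjust it if you unpacked JMXTrans elsewhere):

# confirm the start script is in place on this node
ls /usr/share/jmxtrans/jmxtrans.sh

# JMXTrans needs a JVM on the node; confirm one is available
java -version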
Step-II:
In the Kafka setup, edit the “kafka-run-class.sh” script by adding the following line to it:
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
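Depending on your Kafka version, “kafka-run-class.sh” may or may not already wire a JMX port into these options. A hedged sketch of how the edited section could look is shown below; the JMX_PORT block is an assumption added so that the port exported in Step-III actually reaches the broker JVM, so adapt it to your copy of the script:

# JMX options for remote monitoring (added as per Step-II)
KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"

# If the script does not already do this, append the remote port so that the
# JMX_PORT exported in Step-III is applied. Also make sure $KAFKA_JMX_OPTS is
# part of the java command that launches Kafka at the bottom of the script.
if [ -n "$JMX_PORT" ]; then
  KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
fi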
Step-III:
Also, edit the “kafka-server-start.sh” script present in the Kafka setup to set the JMX port to 9999 by adding the following line:

export JMX_PORT=${JMX_PORT:-9999}
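Both script edits take effect the next time the broker is started. A quick, hedged way to confirm that the broker JVM is actually exposing JMX on the chosen port after a restart:

# run on the broker host after restarting Kafka
netstat -tlnp | grep 9999

Alternatively, point jconsole at <broker-host>:9999 from a machine with a GUI and browse the kafka MBeans.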
Now, on all the nodes of the cluster on which you have performed the above steps, you can run JMXTrans with the following JSON file, after which they should start reporting their metrics to the Ganglia server.
Sample JSON file
Save the code below as a JSON file and run it using the following command:
/usr/share/jmxtrans/jmxtrans.sh start /path_to_sample_json/example.json
Note: Please change the paths of the output files in the code below to paths accessible on your cluster machines. Also set “port” to the JMX port defined above (9999) and “host” to the address of the Kafka broker being monitored.
{
  "servers" : [ {
    "port" : "9999",
    "host" : "127.0.0.1",
    "queries" : [ {
      "outputWriters" : [ {
        "@class" : "com.googlecode.jmxtrans.model.output.KeyOutWriter",
        "settings" : {
          "outputFile" : "/home/jayati/JMXTrans/kafkaStats/bufferPool_direct_stats.txt",
          "v31" : false
        }
      } ],
      "obj" : "java.nio:type=BufferPool,name=direct",
      "resultAlias" : "bufferPool.direct",
      "attr" : [ "Count", "MemoryUsed", "Name", "ObjectName", "TotalCapacity" ]
    }, {
      "outputWriters" : [ {
        "@class" : "com.googlecode.jmxtrans.model.output.KeyOutWriter",
        "settings" : {
          "outputFile" : "/home/jayati/JMXTrans/kafkaStats/bufferPool_mapped_stats.txt",
          "v31" : false
        }
      } ],
      "obj" : "java.nio:type=BufferPool,name=mapped",
      "resultAlias" : "bufferPool.mapped",
      "attr" : [ "Count", "MemoryUsed", "Name", "ObjectName", "TotalCapacity" ]
    }, {
      "outputWriters" : [ {
        "@class" : "com.googlecode.jmxtrans.model.output.KeyOutWriter",
        "settings" : {
          "outputFile" : "/home/jayati/JMXTrans/kafkaStats/kafka_log4j_stats.txt",
          "v31" : false
        }
      } ],
      "obj" : "kafka:type=kafka.Log4jController",
      "resultAlias" : "kafka.log4jController",
      "attr" : [ "Loggers" ]
    }, {
      "outputWriters" : [ {
        "@class" : "com.googlecode.jmxtrans.model.output.KeyOutWriter",
        "settings" : {
          "outputFile" : "/home/jayati/JMXTrans/kafkaStats/kafka_socketServer_stats.txt",
          "v31" : false
        }
      } ],
      "obj" : "kafka:type=kafka.SocketServerStats",
      "resultAlias" : "kafka.socketServerStats",
      "attr" : [ "AvgFetchRequestMs", "AvgProduceRequestMs", "BytesReadPerSecond",
                 "BytesWrittenPerSecond", "FetchRequestsPerSecond", "MaxFetchRequestMs",
                 "MaxProduceRequestMs", "NumFetchRequests", "NumProduceRequests",
                 "ProduceRequestsPerSecond", "TotalBytesRead", "TotalBytesWritten",
                 "TotalFetchRequestMs", "TotalProduceRequestMs" ]
    } ],
    "numQueryThreads" : 2
  } ]
}
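Note that the sample above uses KeyOutWriter, which writes the polled values to local text files; this is handy for verifying that JMXTrans can actually read the broker’s MBeans (for example, tail /home/jayati/JMXTrans/kafkaStats/kafka_socketServer_stats.txt after a minute or two). To push the metrics to Ganglia instead, replace the output writer of each query with JMXTrans’s Ganglia writer. A hedged sketch of one such “outputWriters” entry is shown below; the gmond host, port, and group name are assumptions for your Ganglia setup, and the exact setting names should be checked against your JMXTrans version:

"outputWriters" : [ {
  "@class" : "com.googlecode.jmxtrans.model.output.GangliaWriter",
  "settings" : {
    "groupName" : "kafka",
    "host" : "192.168.1.100",
    "port" : 8649,
    "v31" : true
  }
} ]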
Get high on the Ganglia graphs showing your Kafka Cluster metrics. :)
All the best !!!