- Node to run with Prometheus integration
- When the node runs in the default mode, Prometheus needs to be able to scrape its metrics endpoint
- This is already in place (a sketch of the Kamon Prometheus reporter wiring is at the end of these notes)
- Next step: expose extensive metrics on the system and the JVM
- The node itself is to expose these metrics
- Metrics to expose for Node-0.3 (a collection sketch is at the end of these notes):
    - CPU: CPU seconds consumed, published as a counter; the rate is CPU seconds consumed divided by the scrape interval
    - RAM
    - Disk
    - Network core metrics at the node level
    - JVM performance
        - Garbage collection
        - Size of memory pools
        - Consumption of memory pools
- The doc needs to describe how to integrate with an existing Prometheus instance and, if wanted, how to integrate via Docker Compose
- The doc also needs to cover how to export metrics from Scala (a counter example is sketched at the end of these notes)
- Pawel to create template
- Jeremy can then fill it out
- Testing metrics (a smoke-test sketch is at the end of these notes)
    - We need to create tests to validate the outcomes:
        - The node comes up
        - The node publishes metrics
        - Metrics are scrapable with Prometheus
        - Metrics are scrapable via the Docker Compose setup and match
- At the end
    - The Rholang team should be able to integrate with the metric monad to look at performance (a sketch of such an interface is at the end of these notes)
    - Everyone needs to be able to find the Kamon interface and interact with it
        - Need a doc for this
    - Need a doc for how to capture metrics
- Dos and don'ts for defining metrics
    - Publish counters and not gauges
    - Publish the numerator and denominator as separate metrics
    - IDEA: get Pawel to write one metric in the way he likes as an example (a counter sketch in that style is at the end of these notes)
- Set up a session and record
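
A minimal sketch of the scrape-endpoint wiring referenced above, assuming Kamon 1.x with the kamon-prometheus module on the classpath; the object name is made up, and the port/path are the module defaults rather than anything node-specific.

```scala
import kamon.Kamon
import kamon.prometheus.PrometheusReporter

// Hypothetical startup hook: registering the reporter makes Kamon serve a
// Prometheus scrape endpoint (by default /metrics on port 9095 in
// kamon-prometheus 1.x) that an external Prometheus can poll.
object MetricsBootstrap {
  def start(): Unit =
    Kamon.addReporter(new PrometheusReporter())
}
```

An existing Prometheus then only needs a scrape job pointing at that port; the Docker Compose variant is the same endpoint reached over the compose network.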
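For the system and JVM metrics listed above (CPU, RAM, disk, network, GC, memory pools), a sketch of switching on host and JVM collection, assuming the kamon-system-metrics module; again the object name is illustrative.

```scala
import kamon.system.SystemMetrics

// Hypothetical wiring: kamon-system-metrics collects host metrics (CPU,
// memory, disk, network) and JVM metrics (GC activity, memory pool sizes
// and usage) and publishes them through whatever reporters are registered.
object SystemMetricsBootstrap {
  def start(): Unit = SystemMetrics.startCollecting()
  def stop(): Unit  = SystemMetrics.stopCollecting()
}
```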
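A smoke-test sketch for the testing items: once the node is up, fetch the scrape endpoint and check that it is non-empty and contains an expected metric. The URL, port, and the `jvm_` prefix are assumptions for illustration.

```scala
import scala.io.Source

// Hypothetical smoke test: assumes a running node exposing the Kamon
// Prometheus endpoint on its default port.
object MetricsSmokeTest {
  def main(args: Array[String]): Unit = {
    val endpoint = "http://localhost:9095/metrics" // assumed default
    val source   = Source.fromURL(endpoint)
    val body     = try source.mkString finally source.close()

    assert(body.nonEmpty, "metrics endpoint returned an empty body")
    // Metric name prefix is illustrative; use one the node actually publishes.
    assert(body.contains("jvm_"), "expected a JVM metric in the scrape output")
  }
}
```

The Docker Compose check would run the same assertions against the endpoint as seen from inside the compose network and compare against the direct scrape.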
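On the metric monad: a sketch of what a small tagless-final metrics algebra backed by Kamon could look like, so Rholang code can report performance without depending on Kamon directly. The trait, its method names, and the cats-effect Sync constraint are assumptions for illustration, not the node's actual interface.

```scala
import cats.effect.Sync
import kamon.Kamon

// Hypothetical metric-monad interface: callers program against Metrics[F];
// the Kamon-backed instance suspends the side effects in F.
trait Metrics[F[_]] {
  def incrementCounter(name: String, delta: Long = 1L): F[Unit]
  def recordHistogram(name: String, value: Long): F[Unit]
}

object Metrics {
  def apply[F[_]](implicit ev: Metrics[F]): Metrics[F] = ev

  def kamon[F[_]: Sync]: Metrics[F] = new Metrics[F] {
    def incrementCounter(name: String, delta: Long): F[Unit] =
      Sync[F].delay(Kamon.counter(name).increment(delta))
    def recordHistogram(name: String, value: Long): F[Unit] =
      Sync[F].delay(Kamon.histogram(name).record(value))
  }
}
```

Interpreter code could then call `Metrics[F].incrementCounter("rholang.reductions")` (metric name made up here), with the Kamon-backed instance wired in at node startup.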
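For the dos-and-don'ts and the "one metric as an example" idea: a sketch in the preferred style, publishing the numerator and denominator as two plain counters rather than a precomputed ratio gauge, so Prometheus can derive rates and averages at query time. Metric and object names are made up.

```scala
import kamon.Kamon

// Hypothetical example of the preferred style: two counters instead of an
// "average block processing time" gauge. Prometheus can then compute
// rate(numerator) / rate(denominator) over any window it likes.
object BlockProcessingMetrics {
  private val processingMillis = Kamon.counter("block.processing.time.millis") // numerator
  private val blocksProcessed  = Kamon.counter("block.processing.count")       // denominator

  def recordBlockProcessed(durationMillis: Long): Unit = {
    processingMillis.increment(durationMillis)
    blocksProcessed.increment()
  }
}
```

This mirrors the CPU item above: export total CPU seconds consumed as a counter and let Prometheus divide by the scrape interval, instead of exporting a utilisation gauge.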