# Weave Works Authors - The RED Method: key metrics for microservices architecture (Highlights)

## Metadata
**Cover**:: https://readwise-assets.s3.amazonaws.com/static/images/article2.74d541386bbf.png
**Source**:: #from/readwise
**Zettel**:: #zettel/fleeting
**Status**:: #x
**Authors**:: [[Weave Works Authors]]
**Full Title**:: The RED Method: key metrics for microservices architecture
**Category**:: #articles #readwise/articles
**Category Icon**:: 📰
**URL**:: [www.weave.works](https://www.weave.works/blog/the-red-method-key-metrics-for-microservices-architecture/)
**Host**:: [[www.weave.works]]
**Highlighted**:: [[2021-02-21]]
**Created**:: [[2022-09-26]]
## Highlights
### The RED Method: key metrics for microservices architecture
### The RED Method defines the three key metrics you should measure for every microservice in your architecture
- (Request) Rate
- the number of requests per second your services are serving.
- (Request) Errors
- the number of failed requests per second.
- (Request) Duration
- distributions of the amount of time each request takes.
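A minimal sketch (not from the article) of what recording these three metrics could look like in Python with the prometheus_client library; the metric names, labels, and the `handle` function are made up for illustration:

```python
# Illustrative sketch of RED instrumentation for one hypothetical request handler.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# (Request) Rate & Errors: counters, turned into per-second rates later by the
# monitoring system (e.g. Prometheus' rate() function).
REQUESTS = Counter("http_requests_total", "Total requests", ["service", "route"])
ERRORS = Counter("http_request_errors_total", "Total failed requests", ["service", "route"])
# (Request) Duration: a histogram captures the distribution, not just an average.
DURATION = Histogram("http_request_duration_seconds", "Request latency", ["service", "route"])

def handle(service: str, route: str) -> None:
    """Hypothetical request handler wrapped with RED instrumentation."""
    REQUESTS.labels(service, route).inc()
    start = time.time()
    try:
        if random.random() < 0.05:  # stand-in for real work that can fail
            raise RuntimeError("boom")
    except RuntimeError:
        ERRORS.labels(service, route).inc()
    finally:
        DURATION.labels(service, route).observe(time.time() - start)

if __name__ == "__main__":
    start_http_server(8000)  # expose /metrics for scraping
    while True:
        handle("billing", "/api/charge")
        time.sleep(0.01)
```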
- I’ve already written a blog post on how we instrument our services in Weave Cloud
#further-reading #rl
https://www.weave.works/blog/of-metrics-and-middleware/
- error rate should be expressed as a proportion of request rate.
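A rough worked example of that point (the numbers below are invented): the interesting quantity is errors divided by requests over the same window, not the absolute error count.

```python
# Invented sample values; the point is the ratio, not the raw error count.
requests_per_second = 250.0  # e.g. rate(http_requests_total[1m])
errors_per_second = 5.0      # e.g. rate(http_request_errors_total[1m])

error_proportion = errors_per_second / requests_per_second if requests_per_second else 0.0
print(f"{error_proportion:.1%} of requests are failing")  # -> 2.0% of requests are failing
```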
- two columns, one row per service, request & error rate on the left, latency on the right.
- We’ve even built a Python library to help us generate these dashboards: GrafanaLib.
https://www.weave.works/grafana-dashboards-as-code/
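A sketch of what that dashboards-as-code layout could look like with grafanalib; the data source name, metric names, and PromQL expressions are assumptions for illustration, not Weave Cloud's actual dashboards:

```python
# Illustrative grafanalib sketch of the two-column layout described above:
# one row per service, request & error rate on the left, latency on the right.
from grafanalib.core import Dashboard, Graph, Row, Target

def red_row(service: str) -> Row:
    """Build one service's row; queries and data source name are assumptions."""
    return Row(panels=[
        Graph(
            title=f"{service}: request & error rate",
            dataSource="prometheus",
            targets=[
                Target(expr=f'sum(rate(http_requests_total{{service="{service}"}}[1m]))',
                       legendFormat="requests/s", refId="A"),
                Target(expr=f'sum(rate(http_request_errors_total{{service="{service}"}}[1m]))',
                       legendFormat="errors/s", refId="B"),
            ],
        ),
        Graph(
            title=f"{service}: latency",
            dataSource="prometheus",
            targets=[
                Target(expr=('histogram_quantile(0.99, sum(rate('
                             f'http_request_duration_seconds_bucket{{service="{service}"}}[1m])) by (le))'),
                       legendFormat="p99", refId="A"),
            ],
        ),
    ])

dashboard = Dashboard(title="RED metrics", rows=[red_row("billing")]).auto_panel_ids()
```

grafanalib's generate-dashboard tool can then turn a file like this into Grafana JSON.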
- if you treat all your services the same, many repetitive tasks become automatable.
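One example of that kind of automation (not from the article): because every service exposes the same RED metrics, per-service alerting rules can be generated in a loop rather than hand-written. Service names, metric names, and the 5% threshold below are assumptions.

```python
# Generate one identical error-ratio alert per service as a Prometheus rule group.
SERVICES = ["billing", "users", "frontend"]  # hypothetical service list

RULE_TEMPLATE = """\
  - alert: HighErrorRate_{service}
    expr: >
      sum(rate(http_request_errors_total{{service="{service}"}}[5m]))
        / sum(rate(http_requests_total{{service="{service}"}}[5m])) > 0.05
    for: 5m
    labels:
      severity: warning
"""

def render_rules(services):
    """Render the same error-ratio alert for every service; the loop does the repetitive work."""
    rules = "".join(RULE_TEMPLATE.format(service=s) for s in services)
    return "groups:\n- name: red-error-rate\n  rules:\n" + rules

if __name__ == "__main__":
    print(render_rules(SERVICES))
```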
- Google calls this their “Four Golden Signals”.
#further-reading #rl
[The Four Golden Signals](https://landing.google.com/sre/book/chapters/monitoring-distributed-systems.html)
- Google includes an extra metric, Saturation, over and above the RED method.
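A small sketch of what adding a Saturation signal on top of RED might look like; using in-flight requests as the saturation proxy is an assumption here, not a prescription from Google or the article.

```python
# Illustrative Saturation signal: a gauge of in-flight requests per service.
from prometheus_client import Gauge

IN_FLIGHT = Gauge("http_requests_in_flight", "Requests currently being handled", ["service"])

def handle_with_saturation(service: str) -> None:
    """Track how 'full' the service is while it does its work."""
    IN_FLIGHT.labels(service).inc()
    try:
        ...  # the actual request handling would go here
    finally:
        IN_FLIGHT.labels(service).dec()
```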
- this method only works for request-driven services