MDS API #210
Conversation
Signed-off-by: Rama <ramaraochavali@gmail.com>
PR for issue.
It seems to me that if we are going to return a proto of metric values, and the primary motivation is to pull Prometheus metrics, the proto schema should probably use the Prometheus data model (or similar). I would hesitate to introduce yet another model of metrics data.
+1. I'm not super familiar with Prometheus, but we should see what is available there and reuse it.
Signed-off-by: Shriram Rajagopalan <shriram@us.ibm.com>
api/mds.proto
Outdated
import "google/api/annotations.proto";
// While it is not a "true" discovery service, just named it as MetricsDiscoveryService to align with xDS paradigm.
Yeah, I had this initial thought for Rate Limit Service, but thought better of it eventually. Let's not shoehorn things that aren't actually discovery into the xDS space.
+1. I'm going to be adding a log streaming API soon and will not be calling it xDS. But I think we need to resolve the Prometheus question before we go further on this issue. I am going to ask the Prometheus folks to hop into this issue.
Great. Will change it.
+1 on direct Prometheus support as @douglas-reid linked to. It's a reasonable format and there's already software that supports consuming it.
Signed-off-by: Rama <ramaraochavali@gmail.com>
I agree the Prometheus data model is very comprehensive and should be a good fit here. I just changed the MetricsService to use that data model. Please check.
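For illustration, here is a minimal sketch of what a gRPC service reusing the Prometheus client data model could look like. The service, message, and field names, and the import path for the Prometheus client_model protos, are assumptions for this example rather than the exact contents of this PR:

```proto
syntax = "proto3";

package envoy.api.v2;

// MetricFamily and friends come from the Prometheus client data model
// (github.com/prometheus/client_model); the import path here is assumed.
import "metrics.proto";

// A push-style metrics service: the proxy streams batches of metrics
// to a collector.
service MetricsService {
  rpc StreamMetrics(stream StreamMetricsMessage) returns (StreamMetricsResponse) {
  }
}

message StreamMetricsMessage {
  // Metrics encoded using the Prometheus data model.
  repeated io.prometheus.client.MetricFamily envoy_metrics = 1;
}

message StreamMetricsResponse {
}
```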
Does Prometheus not have a gRPC API? I need to investigate more about what they have. Also, please merge master so CircleCI tests run.
I could not find a gRPC API from Prometheus. I will merge master.
Signed-off-by: Rama <ramaraochavali@gmail.com>
Signed-off-by: Rama <ramaraochavali@gmail.com>
Sorry, it looks like something went wrong with the "master" merge. Let us review, and if it is OK, I will create a separate PR.
Here's the documentation on how Prometheus natively presents its data: https://prometheus.io/docs/instrumenting/exposition_formats/ It's not a gRPC protocol, so I'm not sure that implementing it as gRPC in Envoy is the right approach.
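For context, the text exposition format described at that link looks roughly like this (the metric names and values below are illustrative samples, not Envoy output):

```
# HELP http_requests_total The total number of HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="post",code="200"} 1027
http_requests_total{method="post",code="400"} 3
```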
I'm talking to someone on the Prometheus team this morning and will report back. I think there are some high level things we need to sort out before we go further with this. I have no objection to adding a gRPC push API if we want that, but Prometheus's preferred model is pull, so for native Prometheus support that will likely be what we do. So we need to be careful about calling this API "Prometheus support" if we end up adding it.
When I proposed this, I was not thinking of it as "Prometheus support". I was thinking of it as a way to get stats out of the proxy and push them to whatever metrics backend one has.
@brancz @SuperQ @juliusv do you mind weighing in here? TL;DR: There is a desire for a way to pull metrics out of Envoy. Per our conversation, this ticket, and envoyproxy/envoy#1947, I think we should just drop this and move towards native Prometheus export in text/text+gzip/proto, which I think gives us what we want here?
The text format is our primary format these days. Prometheus 1.x supports proto, but Prometheus 2.x currently only supports the text format. We improved our text parser to the point that it's basically at performance parity with proto. It also has the advantage of being human readable. In the end, the output format is the "easy" part, IMO. Getting the polling interface correct is more important.
@SuperQ also sent me https://github.com/RichiH/OpenMetrics. If the proto format for Prometheus is deprecated, we definitely should not use that. Perhaps we should wait for OpenMetrics to stabilize and then do that?
@mattklein123 if it is deprecated, we should not use the Prometheus proto. I have not looked at OpenMetrics in detail, but I think a generic way of pulling metrics with proto would be useful for people who use other metrics backend databases like we do.
Looked at OpenMetrics. The intent looks good, but I am not sure when the standard will come out. Any idea when it might be ready, by any chance?
It is indeed unclear when the standard will actually be established. Given that the standard is going to be heavily influenced by the current Prometheus format, the change wouldn't be drastic. I think it makes sense to move forward with the currently supported Prometheus format.
@mattklein123 I would still vote for a gRPC push API. Understood that calling it out as Prometheus support may not be ideal, but having some "standard" way to receive metrics via gRPC seems useful.
@douglas-reid I agree, but let's not call it Prometheus support. (IMO push is a superior method anyway). We can use Prometheus protos if we want. @ramaraochavali a few things:
@mattklein123 Yes.
Creating another PR, so closing this one.
Signed-off-by: Rama <ramaraochavali@gmail.com>