A tool that lets you use gRPC as a single API for your microservices and your REST web frontend, by automatically translating your gRPC API into JSON RESTful services.
What is gRPC?
gRPC is a language-agnostic data transfer framework designed for cloud microservices. It runs over HTTP/2 and was created by Google, released around the same time as Kubernetes.
It is to some extent the equivalent of the REST style that developed over HTTP/1.1 and is used to transfer data to web browsers. But where REST uses plain-text JSON, gRPC encodes messages as binary protobuf data.
gRPC development resembles REST development, so similar tools exist. For scripted API interactions REST has Postman, which also has some gRPC support, but there are also dedicated gRPC tools such as Kreya, grpcui and others.
Why use gRPC rather than REST?
gRPC is commonly benchmarked at up to 10x faster than REST and is best suited to cross-microservice remote procedure calls.
gRPC uses protoc and its language plugins to generate any or all of Go, Python, C#, C++, Java, Node.js, PHP, Ruby, Dart, Kotlin and Rust code, allowing your microservice engineers to use native Go, your SREs Python, and other integrators their language of choice. Each language imports its auto-generated API libraries, which are produced from the *.proto source files that developers write to define the API.
You then have a single versioned API defined in one place for all services. This may be a dedicated myservices-proto package containing all your source/*.proto files and a script that runs protoc to generate the libraries for each language your company uses, along with the master API definition file, e.g. descriptor/proto.pb (or proto.pbtext for the bigger, human-readable version).
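For example, the package's generation script might invoke protoc along the following lines (a sketch; the paths and the choice of Go plugins are assumptions):

# Generate Go stubs plus the descriptor set that tooling
# (and the transcoder below) can consume
protoc -I source \
  --include_imports --include_source_info \
  --descriptor_set_out=descriptor/proto.pb \
  --go_out=gen/go --go-grpc_out=gen/go \
  source/your/service/v1/service.proto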
The gRPC protocol is strongly typed and allows full validation of data in and out, and its binary format is more compact on the wire. JSON over REST allows no such control of data typing and validation: it was designed as a simple serialisation format for JavaScript objects in a dynamically typed HTML page-scripting language, not as a performant backend RPC protocol for cloud microservices.
Because of the much looser standards around REST and JSON, many people adopt the Swagger framework to help standardize and document their REST APIs, whereas gRPC has formal standards built into its core protocols.
Why use JSON transcoding?
Why still use REST too?
Even the latest web browsers have incomplete support for the HTTP/2 protocol features that gRPC requires, which in turn leads to poor support in the JavaScript ecosystem. Also, for developer accessibility, gRPC's binary messages are not immediately human readable.
REST is a web protocol designed for loosely coupling services across the web, where the loose standards, typing, simplicity and transparency of REST/JSON complement HTML5. Given that attempts to impose stricter typing, such as XHTML, were a failure, using gRPC as a front-end replacement for REST is asking for trouble. Standardizing REST via Swagger, OpenAPI and the like is a more practical approach.
How does JSON transcoding work?
syntax = "proto3";package your.service.v1;option go_package = "github.com/yourorg/yourprotos/gen/go/your/service/v1";
import "google/api/annotations.proto";
message StringMessage { string value = 1; }
service YourService { rpc Echo(StringMessage) returns (StringMessage) { option (google.api.http) = { post: "/v1/example/echo" body: "*" }; }}
The google.api.http annotations define how the gRPC method can be called via REST.
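For example, once Envoy is running with the transcoder configured (the port here is an assumption), the gRPC method can be exercised as plain REST:

curl -X POST http://localhost:51051/v1/example/echo \
  -H "Content-Type: application/json" \
  -d '{"value": "hello"}'

which should return the echoed JSON: {"value":"hello"}.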
Why use it?
Use a transcoder and you only need to maintain your gRPC and your public REST API in one place (the gRPC proto files) with simple one-line annotations.
You no longer need to develop any REST server code. Web servers that provide a REST API, and that must be manually coded to keep it in sync with the backend gRPC API, can be dispensed with. If you are running REST interfaces directly from your Golang microservices, these too can be dropped as less type safe and more error prone, replaced with gRPC microservices. Data-validation code layers duplicated across different backend languages are replaced with proto-level data validation in one place. Validation can be your own custom validator code, or you can use a full plugin such as buf.build's validation support or protoc-gen-validate.
Now as you build your gRPC API you also build a JSON RESTful one, adding custom data validators at the gRPC level and defining your full API in one place.
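For illustration, a protoc-gen-validate style rule on the earlier message might look like the following sketch (the rule itself is an assumption):

import "validate/validate.proto";

message StringMessage {
  // Reject empty or oversized values at the proto level
  string value = 1 [(validate.rules).string = {min_len: 1, max_len: 100}];
}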
The gRPC API annotations give you a performant, auto-generated REST API run via Envoy Proxy's fast C++ edge server for use by the JSON front end, automatically transcoding messages back and forth between REST requests/responses and gRPC ones.
What about transcoding other HTTP content types that are not JSON?
It may be called a JSON transcoder, but it can also transcode other content types.
To transcode other HTTP content types you must use the google.api.HttpBody type: put the content in the body and set (and call the API with) the appropriate content-type header. For example, for a gRPC CSV file download, e.g. fetching log files ...
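A sketch of the proto definition for such a method (the method name EchoCsv is an assumption; the path matches the call shown below):

import "google/api/annotations.proto";
import "google/api/httpbody.proto";

service YourService {
  // Returns raw bytes with a content type instead of a JSON message.
  rpc EchoCsv(StringMessage) returns (google.api.HttpBody) {
    option (google.api.http) = {
      get: "/v1/example/echo.csv"
    };
  }
}

Since the request's value field is not bound in the path, it is mapped automatically to the ?value= query parameter.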
A Go implementation of the method to return the CSV file might be ...
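(A minimal sketch; the server receiver is an assumption and the pb import path follows the go_package option above.)

import (
	"context"
	"os"

	"google.golang.org/genproto/googleapis/api/httpbody"

	pb "github.com/yourorg/yourprotos/gen/go/your/service/v1"
)

// EchoCsv returns the requested file as raw CSV so the transcoder
// passes it through as-is rather than JSON-encoding it.
// NB: real code should sanitise the file name, not use it directly.
func (s *server) EchoCsv(ctx context.Context, req *pb.StringMessage) (*httpbody.HttpBody, error) {
	data, err := os.ReadFile(req.GetValue())
	if err != nil {
		return nil, err
	}
	return &httpbody.HttpBody{
		ContentType: "text/csv",
		Data:        data,
	}, nil
}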
A call to /v1/example/echo.csv?value=smalldata.csv with a content-type=text/csv header (or application/json) should return that file.
Other content types such as PDF can be similarly handled.
Streaming large content types
What if bigdata.csv is 2 GB in size? gRPC's default upload limit is 4 MB, and although downloads are unlimited, it is best to stream anything that may be over that size.
For large messages, response streaming needs to be used.
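A sketch, assuming the proto declares rpc Download(StringMessage) returns (stream google.api.HttpBody); (the method name and the 1 MB chunk size are assumptions):

import (
	"io"
	"os"

	"google.golang.org/genproto/googleapis/api/httpbody"

	pb "github.com/yourorg/yourprotos/gen/go/your/service/v1"
)

// Download streams the file back in chunks, each a small HttpBody
// message, so no single gRPC message approaches the 4 MB limit.
func (s *server) Download(req *pb.StringMessage, stream pb.YourService_DownloadServer) error {
	f, err := os.Open(req.GetValue())
	if err != nil {
		return err
	}
	defer f.Close()

	buf := make([]byte, 1<<20) // 1 MB per chunk
	for {
		n, err := f.Read(buf)
		if n > 0 {
			if sendErr := stream.Send(&httpbody.HttpBody{
				ContentType: "text/csv",
				Data:        buf[:n],
			}); sendErr != nil {
				return sendErr
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}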
If you do this via Envoy with debug logging on, you can see the file being served as a series of chunked responses via streaming HTTP, in reply to a single REST GET.
When tested locally on my Mac it downloaded files at over 100 Mb/sec, so transcoding does not appear to introduce any significant performance hit.
How to try out Envoy with the transcoder filter
Envoy is easily installed on Linux or Mac.
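For example (a sketch; package availability varies, so check the envoyproxy.io install docs):

# macOS
brew install envoy
# or run the official container image on any platform
docker run --rm -it envoyproxy/envoy:v1.30-latest --version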
Note that with the current version 3 transcoder the REST paths should not be added to the config: the transcoder reads the REST paths from proto.pb. The route config just needs the gRPC dot-notation based URL prefix /helloworld.Greeter that indicates the gRPC service:
- filters:
  - name: envoy.filters.network.http_connection_manager
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
      stat_prefix: grpc_json
      codec_type: AUTO
      route_config:
        name: local_route
        virtual_hosts:
        - name: local_service
          domains: ["*"]
          routes:
          - match:
              prefix: /helloworld.Greeter
            route:
              cluster: grpc_myproto
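Not shown in the snippet above is the grpc_json_transcoder HTTP filter itself, which sits alongside route_config in the connection manager's http_filters list. A sketch, pointing at the descriptor from earlier and including the convert_grpc_status option discussed below:

      http_filters:
      - name: envoy.filters.http.grpc_json_transcoder
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
          proto_descriptor: "descriptor/proto.pb"
          services: ["helloworld.Greeter"]
          convert_grpc_status: true
      - name: envoy.filters.http.router
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router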
Note that the sample config from the Envoy transcoder help page is missing some useful config that you may want in production, e.g. convert_grpc_status: true, so that if gRPC errors occur they are transcoded and returned in the body of the HTTP response; by default (false) the body is blank and the gRPC error appears only in the headers. There is a full list of config options available in the Envoy documentation. When port forwarding from a service running in k8s, something along the following lines could be used to point Envoy at it:
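A sketch, with the port-forward command shown as a comment (the service name and port are assumptions):

# kubectl port-forward service/yourservice 50051:50051
clusters:
- name: grpc_myproto
  type: LOGICAL_DNS
  connect_timeout: 5s
  # gRPC requires HTTP/2 to the upstream service
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      explicit_http_config:
        http2_protocol_options: {}
  load_assignment:
    cluster_name: grpc_myproto
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 127.0.0.1
              port_value: 50051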
Testing
If you want to write tests against the REST API in Go (rather than just testing the gRPC), you will need Go structs with JSON annotations to marshal the JSON back and forth. Since the REST API is transcoded on the fly by Envoy, these structs don't exist anywhere in your source code; to generate them you need to use command-line translation tools.
gRPC proto.pb → buf.build → OpenAPI v2: at this point you have a JSON OpenAPI spec which can be used to create your REST tests in JavaScript or any language with OpenAPI v2 support.
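For the buf.build step, a sketch of a buf.gen.yaml using the grpc-ecosystem remote plugin for OpenAPI v2 (the plugin reference and output path are assumptions based on buf's plugin registry):

version: v1
plugins:
  - plugin: buf.build/grpc-ecosystem/openapiv2
    out: gen/openapiv2

Running buf generate should then write the *.swagger.json spec under gen/openapiv2.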