
Sunday, 22 September 2024

What is JSON transcoding?

A tool that lets you use gRPC as the single API for your microservices while still serving a REST web front end, by automatically translating your gRPC API into JSON RESTful services.


What is gRPC?

gRPC is a language-agnostic data transfer framework designed for cloud microservices and implemented over HTTP/2. It was created by Google around the same time they released Kubernetes.
It is to some extent equivalent to the REST standard that was developed over HTTP/1.1 to transfer data for web browsers, but REST uses plain text JSON, whereas gRPC encodes messages as binary protobuf format data.

There are similarities between gRPC development and REST development, and hence similar tools.
For scripted API interactions REST has Postman, which also has some gRPC support, but there are also Kreya, grpcui, etc.

Why use gRPC rather than REST?

gRPC is commonly benchmarked at up to ten times faster than REST and is best suited to cross-microservice remote procedure calls.

gRPC uses protoc to generate any or all of Go, Python, C#, C++, Java, Node.js, PHP, Ruby, Dart, Kotlin and Rust code out of the box, allowing your microservice engineers to use native Go, your SREs Python, and other integrators their language of choice. Each language can import the auto-generated API libraries for it, which are generated from the *.proto source files that developers create to define the API.
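For example, a Go service or test can import the generated package and call the API directly over gRPC. A minimal hedged sketch, using the example YourService / Echo definitions shown further down and the client constructor name that protoc-gen-go-grpc would generate for them ...

package main

import (
    "context"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"

    // assumed generated package, matching the go_package option in the example proto
    api "github.com/yourorg/yourprotos/gen/go/your/service/v1"
)

func main() {
    conn, err := grpc.Dial("localhost:9090", grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    client := api.NewYourServiceClient(conn) // generated client constructor
    reply, err := client.Echo(context.Background(), &api.StringMessage{Value: "hello"})
    if err != nil {
        log.Fatal(err)
    }
    log.Println(reply.Value)
}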

You have a single versioned API defined in one place for all services. This may be a dedicated myservices-proto package containing all your source/*.proto files and a script to run protoc and generate the libraries for each language your company uses, along with the master API definition file, e.g. descriptor/proto.pb, or proto.pbtext for the larger human-readable version.

The gRPC protocol is strongly typed and allows full validation of data in and out, and its binary format is more compact. JSON via REST does not allow such control of data typing and validation. It was designed as a simple serialisation format for JavaScript objects, the dynamically typed scripting language of HTML pages, not as a performant backend RPC protocol for cloud microservices.
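As a rough illustration of that compactness, here is a minimal Go sketch that serialises the same small payload both ways and compares the encoded sizes, using the protobuf StringValue well-known type purely as an example ...

package main

import (
    "encoding/json"
    "fmt"

    "google.golang.org/protobuf/proto"
    "google.golang.org/protobuf/types/known/wrapperspb"
)

func main() {
    // a well-known protobuf message holding a single string value
    msg := wrapperspb.String("hello transcoding")

    bin, _ := proto.Marshal(msg)                                             // binary protobuf encoding
    txt, _ := json.Marshal(map[string]string{"value": "hello transcoding"}) // equivalent JSON

    fmt.Printf("protobuf: %d bytes, JSON: %d bytes\n", len(bin), len(txt))
}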

Because of the much looser standards around REST and JSON, many people adopt the Swagger framework to help standardize and document their REST API, whilst gRPC has formal standards in its core protocols.

Why use JSON transcoding?


Why still use REST too?

The first question that occurs is why run a REST API at all? The reason is that, whilst it uses large, slow and minimally typed messaging, it is the default standard approach for front end web development. Since web front ends are implemented in JavaScript it is natural to build them against a RESTful backend that provides data in the native JSON format.

Added to this, even the latest web browsers do not expose the low-level HTTP/2 features (such as trailers) that gRPC requires, which in turn leads to poor support in the JavaScript ecosystem. Also, for devx accessibility, gRPC messages are not immediately human readable.

Perhaps most importantly, gRPC was never designed to replace REST. It was designed for fast composition of internal, cloud deployed backend services.
REST is a web protocol designed for loose coupling of services across the web, where the loose standards, typing, simplicity and transparency of REST / JSON complement HTML5. Given that attempts to impose stricter typing, such as XHTML, were a failure, using gRPC as a front end replacement for REST is asking for trouble. Instead, standardizing REST via Swagger, OpenAPI and the like is a more practical approach.

The front-end web world loves REST, but gRPC is far superior for more tightly coupled backend microservices. Given that, JSON transcoding is likely to remain a very useful means of saving on API proliferation and complexity by providing a single API for both, via your edge servers (i.e. the servers between your cloud deployed services and the internet).


How does JSON transcoding work?

JSON transcoding suits a gRPC-centric, single master API design, which is the ideal approach when designing new cloud services built from microservices.

It is implemented by using the transcoder plugin, which can run in existing proxy servers such as Envoy Proxy. The plugin uses your gRPC proto definitions to auto-generate a JSON REST API without any code generation required.

Alternatively there is grpc-gateway, which generates the implementation from the proto files and requires a compilation step. This different implementation under the hood is not strictly transcoding, but in effect it does the same job.
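As a hedged sketch of that alternative, the compiled gateway is typically served from a small Go binary like the one below. The import path and the RegisterYourServiceHandlerFromEndpoint function name are assumptions based on what protoc-gen-grpc-gateway generates for the example service defined further down ...

package main

import (
    "context"
    "log"
    "net/http"

    "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"

    // assumed import path for the generated gateway code
    gw "github.com/yourorg/yourprotos/gen/go/your/service/v1"
)

func main() {
    ctx := context.Background()
    mux := runtime.NewServeMux()
    opts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}

    // RegisterYourServiceHandlerFromEndpoint is generated by protoc-gen-grpc-gateway
    // from the google.api.http annotations; it proxies REST calls to the gRPC server.
    if err := gw.RegisterYourServiceHandlerFromEndpoint(ctx, mux, "localhost:9090", opts); err != nil {
        log.Fatal(err)
    }
    log.Fatal(http.ListenAndServe(":8080", mux)) // serve the generated REST API
}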

Or, for the Microsoft language world, there is the JSON transcoder for ASP.NET.

In all cases you create the REST API based on the google.api.http annotations standard in your *.proto files, by adding the annotation lines shown below to a simple example service ...

syntax = "proto3";
package your.service.v1;
option go_package = "github.com/yourorg/yourprotos/gen/go/your/service/v1";

import "google/api/annotations.proto";

message StringMessage {
  string value = 1;
}

service YourService {
  rpc Echo(StringMessage) returns (StringMessage) {
    option (google.api.http) = {
      post: "/v1/example/echo"
      body: "*"
    };
  }
}

The google.api.http set of annotations is used to define how the gRPC method can be called via REST.
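With the annotation above, the Echo method becomes a plain JSON POST. A hedged sketch of calling it from Go, assuming the transcoding proxy is listening on localhost:8080 ...

package main

import (
    "bytes"
    "fmt"
    "io"
    "net/http"
)

func main() {
    // post: "/v1/example/echo" maps the Echo rpc to this URL,
    // and body: "*" means the whole StringMessage is taken from the JSON body.
    body := bytes.NewBufferString(`{"value": "hello"}`)
    resp, err := http.Post("http://localhost:8080/v1/example/echo", "application/json", body)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    reply, _ := io.ReadAll(resp.Body)
    fmt.Println(string(reply)) // the echoed StringMessage comes back as JSON
}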


Why use it?

Use a transcoder and you only need to maintain your gRPC and your public REST API in one place - the gRPC proto files - with simple one-line annotations.

You no longer need to develop any REST server code. Running up web servers that provide a REST API, which has to be manually coded to keep it in sync with the backend gRPC API, can be dispensed with. Or, if you are running direct REST interfaces from your Golang microservices, these can be dropped as less type safe and more error prone, replacing them with gRPC microservices and replacing data validation code layers in different backend languages with proto-level data validation in one place. Validation can be your own custom validator code, or you can use a full plugin such as buf.build or protoc-gen-validate.
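As a hedged sketch of the custom validator option, a gRPC unary interceptor in Go can call a Validate() method on every incoming request, assuming your generated messages expose one (protoc-gen-validate generates exactly that); the names here are illustrative ...

import (
    "context"

    "google.golang.org/grpc"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

// validator matches the method that protoc-gen-validate adds to generated messages.
type validator interface {
    Validate() error
}

// ValidationInterceptor rejects any request message that fails its own validation,
// so every rpc (and therefore every transcoded REST call) is validated in one place.
func ValidationInterceptor(ctx context.Context, req interface{},
    info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
    if v, ok := req.(validator); ok {
        if err := v.Validate(); err != nil {
            return nil, status.Error(codes.InvalidArgument, err.Error())
        }
    }
    return handler(ctx, req)
}

// registered once when creating the server:
// grpc.NewServer(grpc.UnaryInterceptor(ValidationInterceptor))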

Now, as you build your gRPC API, you also build a JSON RESTful one too, adding custom data validators at the gRPC level and defining your full API in one place.

The gRPC API annotations give you a performant REST API that is auto-generated and run via Envoy Proxy's fast C++ based edge server, for use by the JSON front end, automatically transcoding messages back and forth between REST requests / responses and gRPC ones.


What about transcoding other HTTP content types that are not JSON?


It may be called a JSON transcoder but the transcoder can also transcode other content types.

To transcode other HTTP content types, you must use the google.api.HttpBody type. Put the content in the body and set (and call the API with) the appropriate content-type header. For example, for a gRPC CSV file download, e.g. getting log files ...


syntax = "proto3";
package your.service.v1;
option go_package = "github.com/yourorg/yourprotos/gen/go/your/service/v1";

import "google/api/annotations.proto";
import "google/api/httpbody.proto";

message StringMessage {
  string value = 1;
}

service YourService {
  rpc GetCSVFile(StringMessage) returns (google.api.HttpBody) {
    option (google.api.http) = {
      get: "/v1/example/echo.csv"
    };
  }
}

A Go implementation of the method to return the CSV file might be ...

import (
    "context"
    "embed"

    "google.golang.org/genproto/googleapis/api/httpbody"

    api_v1 "github.com/my-proto-pkg/generated/go/public/v1"
)

//go:embed *.csv
var EmbedFS embed.FS

// GetCSVFile returns a CSV file via gRPC as an HttpBody
func (p *Provider) GetCSVFile(ctx context.Context, req *api_v1.StringMessage) (*httpbody.HttpBody, error) {
    csvData, err := EmbedFS.ReadFile(req.Value)
    if err != nil {
        return nil, err
    }
    return &httpbody.HttpBody{
        ContentType: "text/csv",
        Data:        csvData,
    }, nil
}


A call to /v1/example/echo.csv?value=smalldata.csv with the content-type header set to text/csv (or application/json) should return that file.

Other content types such as PDF can be similarly handled.


Data streaming large content types


When returning content other than compact gRPC message formats, another issue arises.
What if bigdata.csv is 2 GB in size? gRPC's default limit on received messages is 4 MB, so uploads are capped at 4 MB by default, and although downloads can be larger it is best to stream anything that may be over that size.

For large messages, response streaming needs to be used.

It is very simple to switch the protocol for gRPC and the transcoded HTTP REST requests and / or responses. If either the request or the response type is prefixed with the keyword stream, then streaming handling is implemented both in gRPC and for the transcoded REST API. So to stream 2 GB files from REST, change the proto definition as shown below ...

rpc StreamCSVFile(StringMessage) returns (stream google.api.HttpBody) {

Although this is not the only thing to be done; the main work is that the method implementation needs to stream the data too.

import (
    "bufio"
    "io"
    "os"

    "google.golang.org/genproto/googleapis/api/httpbody"

    api_v1 "github.com/my-proto-pkg/generated/go/public/v1"
)

// StreamCSVFile streams a large CSV file via gRPC as a sequence of HttpBody chunks
func (p *Provider) StreamCSVFile(req *api_v1.StringMessage, responseStream api_v1.YourService_StreamCSVFileServer) error {
    // the requested file name comes from the value query parameter on the REST side
    f, err := os.Open("/tmp/" + req.Value)
    if err != nil {
        return err
    }
    defer f.Close()

    r := bufio.NewReader(f)
    buf := make([]byte, 4*1024*1024) // use a 4 MB buffer

    for {
        n, err := r.Read(buf)
        if n > 0 {
            resp := &httpbody.HttpBody{
                ContentType: "text/csv",
                Data:        buf[:n],
            }
            if err := responseStream.Send(resp); err != nil {
                return err
            }
        }
        if err == io.EOF {
            break
        }
        if err != nil {
            return err
        }
    }
    return nil
}

When the file is fetched, it is returned to the browser as streamed HTTP, i.e. in chunks.
If you do this via Envoy with debug logging on, you can see it being served as a series of chunked responses via streaming HTTP, in response to a single REST GET.

1. curl --header 'Content-Type: application/json' http://localhost:8080/v1/example/echo.csv?value=bigdata.csv
2. The transcoder translates that request to the gRPC call http://grpc_host:80/your.service.v1.YourService/StreamCSVFile
3. The microservice returns a stream of gRPC responses to Envoy, with the file chunked up into them
4. Envoy starts serving those chunks as HTTP responses to the curl web client
5. When all the responses are done the task is complete and curl has downloaded the full file and stuck it back together (a Go equivalent of this flow is sketched below)
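A hedged Go equivalent of that curl download (the URL and output file name are illustrative): the chunked HTTP response body can simply be copied to a local file ...

package main

import (
    "io"
    "net/http"
    "os"
)

func main() {
    // one REST GET to the transcoded endpoint; Envoy streams the gRPC chunks back
    // as a chunked HTTP response, which io.Copy reassembles into a single file
    resp, err := http.Get("http://localhost:8080/v1/example/echo.csv?value=bigdata.csv")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    out, err := os.Create("bigdata.csv")
    if err != nil {
        panic(err)
    }
    defer out.Close()

    if _, err := io.Copy(out, resp.Body); err != nil {
        panic(err)
    }
}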

When tested out locally on my Mac it downloaded files at over 100Mb/sec so transcoding does not appear to introduce any performance hit.

How to try out Envoy with the transcoder filter


Envoy can be downloaded and installed locally to try out JSON transcoding. You just need to be able to run up the gRPC service for it to talk to, and provide the proto.pb API definition to it via the Envoy config.yaml.

Envoy is easily installed on Linux or Mac.

Once it is installed, run up your gRPC service, or port-forward it from k8s or kind.

Copy its proto.pb to a local protos/ directory, e.g. protos/helloworld.pb, for Envoy to access. Then run up Envoy ...

envoy --service-node ingress --service-cluster ingress -c envoy-config.yaml --log-level debug

A sample config for running a service is detailed in the Envoy transcoder help page.
Note that the REST path should not be added to the config with the current version 3 transcoder;
the transcoder reads the paths for REST from the proto.pb.
It just needs the gRPC dot-notation based URL prefix /helloworld.Greeter that indicates the gRPC service.
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: grpc_json
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match:
                  prefix: /helloworld.Greeter
                route:
                  cluster: grpc_myproto
When port forwarding from a service running in k8s, something along the following lines could be used to point Envoy at it.

  clusters:
  - name: grpc_myproto
    type: STATIC
    connect_timeout: 5s
    http2_protocol_options: {}
    load_assignment:
      cluster_name: grpc_myproto
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 9090

Note that the sample config from the Envoy transcoder help page is missing some useful config that you may want in production, e.g. convert_grpc_status: true, so that if gRPC errors occur they are transcoded and returned in the body of the HTTP response. By default (false), the body is blank and the gRPC error is only in the header. There is a full list of config options available.

Testing


If you want to write tests against REST in Go (rather than just testing the gRPC) you will need Go structs with JSON annotations to test with, which marshal the JSON back and forth.

Since the REST API is transcoded on the fly by Envoy, these don't exist in any of your source code. To generate them you need to use command line translation tools.

gRPC proto.pb > buf.build > OpenAPIv2
At this point you have a JSON OpenAPI spec which can be used to create your REST tests in JavaScript or any language with OpenAPI v2 support.
To do so in Go we can continue the pipeline with OpenAPIv2 > oapi-codegen > generated_v1.go
A test using httpexpect (to clean up the raw syntax of http) would be something like ...

resp := generated_v1.GetCsvResponse{}
e := httpexpect.Default(t, restSvcURL)
e.GET("/v1/example/echo.csv").Expect().Status(http.StatusOK).JSON().Decode(&resp)
assert.Equal(t, resp.Data, sampleCSV)

(This assumes our example GetCsvResponse embeds the google.api.HttpBody as Data and includes other fields.)

