
Connection latency significantly affects throughput  #1043

@adg

Description


I am working on a system that uses GRPC to send 1MB blobs between clients and servers and have observed some poor throughput when connection latency is high (180ms round trip is typical between Australia and the USA).
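
A back-of-the-envelope check (assuming HTTP/2's default 64 KB initial flow-control window and, in the worst case, only one window in flight per round trip) suggests why latency hurts so much:

```
throughput ≤ window / RTT = 65,535 bytes / 0.18 s ≈ 364 KB/s  →  roughly 3 s per 1MB blob
```

If the sender stalls for a WINDOW_UPDATE after every window's worth of data, throughput is bounded by window size divided by round-trip time, regardless of the link's actual bandwidth.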

The same throughput issues are not present when I take GRPC out of the equation. I have prepared a self-contained program that reproduces the problem on a local machine by simulating latency at the net.Listener level. It can run either using GRPC or plain HTTP/2 POST requests. In each case the payload and latency are the same, but, as the data below shows, GRPC becomes well over an order of magnitude slower than HTTP/2 as latency increases.

The program and related files: https://gist.github.com/adg/641d04ef335782648502cb32a03b2b07
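
The latency simulation wraps the net.Listener so that each accepted connection injects an artificial delay. A minimal sketch of that idea (the names and the choice to sleep before each write are illustrative, not the identifiers used in the gist):

```go
package main

import (
	"io"
	"log"
	"net"
	"net/http"
	"time"
)

// latencyListener wraps a net.Listener so every accepted connection
// injects an artificial one-way delay.
type latencyListener struct {
	net.Listener
	delay time.Duration
}

func (l *latencyListener) Accept() (net.Conn, error) {
	c, err := l.Listener.Accept()
	if err != nil {
		return nil, err
	}
	return &latencyConn{Conn: c, delay: l.delay}, nil
}

type latencyConn struct {
	net.Conn
	delay time.Duration
}

// Write sleeps before each write, approximating one-way network delay.
func (c *latencyConn) Write(b []byte) (int, error) {
	time.Sleep(c.delay)
	return c.Conn.Write(b)
}

func main() {
	ln, err := net.Listen("tcp", "localhost:0")
	if err != nil {
		log.Fatal(err)
	}
	slow := &latencyListener{Listener: ln, delay: 90 * time.Millisecond} // ~180ms RTT
	log.Println("listening on", slow.Addr())
	// The same wrapped listener can be handed to grpcServer.Serve(slow) instead.
	log.Fatal(http.Serve(slow, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.Copy(io.Discard, r.Body) // swallow the 1MB blob
	})))
}
```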

The output of a typical run:

$ ./run.sh 
Duration	Latency	Proto

6.977221ms	0s	GRPC
4.833989ms	0s	GRPC
4.714891ms	0s	GRPC
3.884165ms	0s	GRPC
5.254322ms	0s	GRPC

8.507352ms	0s	HTTP/2.0
936.436µs	0s	HTTP/2.0
453.471µs	0s	HTTP/2.0
252.786µs	0s	HTTP/2.0
265.955µs	0s	HTTP/2.0

107.32663ms	1ms	GRPC
102.51629ms	1ms	GRPC
100.235617ms	1ms	GRPC
100.444982ms	1ms	GRPC
100.881221ms	1ms	GRPC

12.423725ms	1ms	HTTP/2.0
3.02918ms	1ms	HTTP/2.0
2.775928ms	1ms	HTTP/2.0
4.161895ms	1ms	HTTP/2.0
2.951534ms	1ms	HTTP/2.0

195.731175ms	2ms	GRPC
190.571784ms	2ms	GRPC
188.810298ms	2ms	GRPC
190.593822ms	2ms	GRPC
190.015888ms	2ms	GRPC

19.18046ms	2ms	HTTP/2.0
4.663983ms	2ms	HTTP/2.0
5.45113ms	2ms	HTTP/2.0
5.56255ms	2ms	HTTP/2.0
5.582744ms	2ms	HTTP/2.0

378.653747ms	4ms	GRPC
362.14625ms	4ms	GRPC
357.95833ms	4ms	GRPC
364.525646ms	4ms	GRPC
364.954077ms	4ms	GRPC

33.666184ms	4ms	HTTP/2.0
8.68926ms	4ms	HTTP/2.0
10.658349ms	4ms	HTTP/2.0
10.741361ms	4ms	HTTP/2.0
10.188444ms	4ms	HTTP/2.0

719.696194ms	8ms	GRPC
699.807568ms	8ms	GRPC
703.794127ms	8ms	GRPC
702.610461ms	8ms	GRPC
710.592955ms	8ms	GRPC

55.66933ms	8ms	HTTP/2.0
18.449093ms	8ms	HTTP/2.0
17.080567ms	8ms	HTTP/2.0
20.597944ms	8ms	HTTP/2.0
17.318133ms	8ms	HTTP/2.0

1.415272339s	16ms	GRPC
1.350923577s	16ms	GRPC
1.355653965s	16ms	GRPC
1.338834603s	16ms	GRPC
1.358419144s	16ms	GRPC

102.133898ms	16ms	HTTP/2.0
39.144638ms	16ms	HTTP/2.0
40.82348ms	16ms	HTTP/2.0
35.133498ms	16ms	HTTP/2.0
39.516466ms	16ms	HTTP/2.0

2.630821843s	32ms	GRPC
2.46741086s	32ms	GRPC
2.507019279s	32ms	GRPC
2.476177935s	32ms	GRPC
2.49266693s	32ms	GRPC

179.271675ms	32ms	HTTP/2.0
72.575954ms	32ms	HTTP/2.0
67.23265ms	32ms	HTTP/2.0
70.651455ms	32ms	HTTP/2.0
67.579124ms	32ms	HTTP/2.0

I theorize that there is something wrong with GRPC's flow control mechanism, but that's just a guess.
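
If that theory holds, one way to test it would be to enlarge the HTTP/2 flow-control windows so a whole 1MB message can be in flight without waiting on WINDOW_UPDATE round trips. A sketch, assuming a grpc-go version that provides the grpc.WithInitialWindowSize and grpc.WithInitialConnWindowSize dial options (the address is a placeholder, not from the repro script):

```go
package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	// 2 MiB windows, comfortably larger than the 1MB blobs, so a whole
	// message can be in flight without waiting on WINDOW_UPDATE frames.
	const window = 1 << 21

	// "localhost:50051" is a placeholder address.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithInsecure(),                    // plaintext, matching the local repro
		grpc.WithInitialWindowSize(window),     // per-stream flow-control window
		grpc.WithInitialConnWindowSize(window), // per-connection flow-control window
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println("dialed with enlarged flow-control windows")
}
```

The server side has matching grpc.InitialWindowSize and grpc.InitialConnWindowSize options in versions that expose them.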

Labels: P1, Type: Performance (performance improvements: CPU, network, memory, etc.)
