Evaluating

Scalability

Scalability is sometimes the driving factor in breaking a microservice out of an existing application. In the following quote, an Uber engineer explains the decision to break their geofencing functionality out of the Node.js core application and into a microservice written in Go:

Node.js was the real-time marketplace team’s primary programming language at the time we evaluated languages, and thus we had more in-house knowledge and experience with it. However, Go met our needs for the following reasons:

  • High-throughput and low-latency requirements. Geofence lookups are required on every request from Uber’s mobile apps and must quickly (99th percentile < 100 milliseconds) answer a high rate (hundreds of thousands per second) of queries.

  • CPU-intensive workload. Geofence lookups require CPU-intensive point-in-polygon algorithms. While Node.js works great for our other services that are I/O intensive, it’s not optimal in this use case due to Node’s interpreted and dynamically typed nature.

  • Non-disruptive background loading. To ensure we have the freshest geofences data to perform the lookups, this service must keep refreshing the in-memory geofences data from multiple data sources in the background. Because Node.js is single threaded, background refreshing can tie up the CPU for an extended period of time (e.g., for CPU-intensive JSON parsing work), causing a spike in query response times. This isn’t a problem for Go, since goroutines can execute on multiple CPU cores and run background jobs in parallel with foreground queries.
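
To make the second point concrete, a point-in-polygon lookup is pure arithmetic over a polygon's vertices, which is why the workload is CPU-bound rather than I/O-bound. The following Go sketch uses the standard ray-casting (even-odd) rule; the Point type, its field names, and the example polygon are illustrative assumptions, not Uber's actual implementation.

```go
package main

import "fmt"

// Point is a longitude/latitude pair. The representation is assumed for this
// sketch; Uber's actual geofence data structures are not shown in the quote.
type Point struct {
	Lng, Lat float64
}

// pointInPolygon reports whether p lies inside the polygon defined by verts,
// using the ray-casting (even-odd) rule: count how many polygon edges a
// horizontal ray from p crosses, and flip the result on each crossing.
func pointInPolygon(p Point, verts []Point) bool {
	inside := false
	j := len(verts) - 1
	for i := 0; i < len(verts); i++ {
		vi, vj := verts[i], verts[j]
		// Edge (vi, vj) crosses the ray if it straddles p's latitude and the
		// crossing point lies to the right of p.
		if (vi.Lat > p.Lat) != (vj.Lat > p.Lat) &&
			p.Lng < (vj.Lng-vi.Lng)*(p.Lat-vi.Lat)/(vj.Lat-vi.Lat)+vi.Lng {
			inside = !inside
		}
		j = i
	}
	return inside
}

func main() {
	square := []Point{{0, 0}, {4, 0}, {4, 4}, {0, 4}}
	fmt.Println(pointInPolygon(Point{2, 2}, square)) // true
	fmt.Println(pointInPolygon(Point{5, 5}, square)) // false
}
```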
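
The third point, non-disruptive background loading, maps naturally onto goroutines. One common pattern, sketched below under assumed names (Geofences, refreshLoop, and the refresh interval are all hypothetical), is to rebuild the dataset in a dedicated goroutine and publish it with an atomic pointer swap (sync/atomic's Pointer type, available since Go 1.19), so foreground queries read the current snapshot without ever waiting on the refresh. This is a generic illustration of the technique the quote describes, not Uber's code.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// Geofences stands in for the in-memory geofence index; the real service
// would build it from multiple data sources.
type Geofences struct {
	Version int
	// ... polygon index would live here ...
}

// store holds the current snapshot. Readers load the pointer atomically, so
// queries never block on the background refresh.
var store atomic.Pointer[Geofences]

// refreshLoop rebuilds the geofence data in a background goroutine and swaps
// the fresh snapshot in with a single pointer store.
func refreshLoop(interval time.Duration) {
	version := 0
	for {
		version++
		// Expensive parsing and index building happens off the query path.
		fresh := &Geofences{Version: version}
		store.Store(fresh)
		time.Sleep(interval)
	}
}

func main() {
	store.Store(&Geofences{Version: 0})
	go refreshLoop(100 * time.Millisecond)

	// Foreground "queries" simply read whatever snapshot is current.
	for i := 0; i < 5; i++ {
		fmt.Println("serving with geofence snapshot version", store.Load().Version)
		time.Sleep(120 * time.Millisecond)
	}
}
```

Because readers only dereference an immutable snapshot, a long-running rebuild in the background goroutine cannot stall query latency, which is the property the Uber quote is after.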