Standard Tech Stack
This doc outlines the ‘default’ choices we reach for when building a new application. By ‘application’, we mean anything we might build for a client, which will usually involve some backend service or web frontend.
Our main motivation for standardizing on a set of choices is that we have limited resources, and we'd like to build up expertise and tooling around a small, pragmatic set of software tools and services.
We use the following priorities to guide our decision-making:
Security - This is pretty much always going to be the #1 priority. If our systems are easily breached, people and organizations will (rightfully) not trust us to build things for them. We need to be good stewards of the data people are entrusting to us.
Developer Velocity - Our time is limited, so we want to use tools that maximize what we can accomplish with a unit of time.
Cost + Efficiency - Our money is also limited, as is the money of our clients. We want to pick technologies that allow us to maintain systems for the long term, consuming as few resources as possible and maximizing use of cheaper/free cloud services.
Priorities like ‘software quality/robustness’ don’t appear on the list above because that’s more a feature of the software we write than of the tools we use. We should always strive to write high-quality, robust, well-tested systems, and generally that means not using flaky, buggy, or otherwise low-quality tools (or dependencies) that could cause issues down the road.
When in doubt, try to be obnoxiously pragmatic in making decisions. Perfect is the enemy of the good; our job isn’t to design perfect software, it’s to build (and deliver!) systems that help nonprofits work more effectively.
Cloud Providers
We don’t plan to maintain our own data centers, because that’s a truly terrible idea at our scale (or even 100x our current scale), so we need to pick a cloud provider. Prior discussion exists in our Cloud Provider Analysis doc. We’re standardizing on GCP, mainly because of its sustainability, generous free tier, and good billing support.
Databases
We’re standardizing on PostgreSQL as our database of choice, see the Main Database design doc for more details.
Version Control
Because it’s the 21st century, all code and configuration (when possible) should be checked into version control. We use git and GitHub, because those are by far the most widely adopted tools and most developers are familiar with them. We use GitHub’s code review and pull request management tools.
We also use Git LFS to store large files (images, videos, binaries, test data, etc.) outside the core repo, which makes working with the repository more ergonomic.
Web Frontends
All web frontend code that we write should be in TypeScript, because it catches whole classes of issues at compile time and serves to document the code.
Our web frontends should be built using Vue (specifically Vue 3) and Nuxt (specifically Nuxt 3). Vue is a popular and ergonomic framework for building web applications, and Nuxt is a layer on top of Vue that provides server-side rendering (SSR), data management, and other useful functionality. We’re standardizing on PrimeVue as our component library, and we own a license for the PrimeVue Theme Designer.
As discussed below, the web frontend should use GraphQL for talking to the server, specifically GraphQL Code Generator for generating the client bindings in TypeScript and graphql-request for issuing requests to the server.
Non-Web Frontends
Currently, we don’t have to build any of these. If we did, we’d likely want to use a tool like Flutter, which can build for multiple platforms (e.g. iOS + Android) from a single code base. At our scale, we don’t have the time, resources, or expertise to write native applications for multiple mobile platforms.
Before actually beginning mobile app development, we should thoroughly survey the available options for cross-platform development, like Xamarin, React Native, Ionic, etc.
Backends
All backend infrastructure should be written in Go unless we have a really compelling reason not to (for example, needing access to a highly specialized library). Go is a great default choice for backends for a bunch of reasons:
It’s a simple, typed language - People who are familiar with Java/Python/C/C++/etc can typically pick up Go and be writing fairly idiomatic code in a week. Static typing means the compiler can catch whole classes of issues early, instead of at runtime or after deployment.
It has extensive library support - In Go, there are typically already high-quality libraries for doing a given task (e.g. a Redis client, K/V store, JWT parsing, etc), and Go is well-supported by popular software libraries and systems (Protobuf/gRPC, GraphQL, K8s, Bazel, etc)
It compiles quickly, to self-contained binaries - Fast compilation makes for a good developer experience, and small, self-contained binaries make for easy deployment (e.g. in Docker, on bare metal, etc).
It’s performant, and concurrency is built-in - While performance isn’t our highest priority, as most applications we build will be fairly small scale, Go’s performance frequently means we can serve thousands of requests a second from a single instance, staying within the free tier for our compute services. Similarly, built-in support for concurrency is useful in a server environment for efficient resource use (e.g. suspending goroutines that are making network calls to DBs or other services); see the sketch just after this list.
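To make that last point concrete, here’s a minimal sketch of the kind of concurrent fan-out a request handler might do, using goroutines via the errgroup package. The fetchProfile/fetchSettings helpers and the user ID are hypothetical stand-ins for calls to a database or another backend service:
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/errgroup"
)

// fetchProfile and fetchSettings are hypothetical stand-ins for calls to a
// database or another backend service.
func fetchProfile(ctx context.Context, userID string) (string, error) {
	time.Sleep(50 * time.Millisecond) // simulate network latency
	return "profile:" + userID, nil
}

func fetchSettings(ctx context.Context, userID string) (string, error) {
	time.Sleep(50 * time.Millisecond)
	return "settings:" + userID, nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	var profile, settings string
	g, ctx := errgroup.WithContext(ctx)

	// Each lookup runs in its own goroutine; the group waits for both and
	// returns the first error (if any), cancelling the shared context.
	g.Go(func() error {
		var err error
		profile, err = fetchProfile(ctx, "user-123")
		return err
	})
	g.Go(func() error {
		var err error
		settings, err = fetchSettings(ctx, "user-123")
		return err
	})

	if err := g.Wait(); err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(profile, settings)
}
While the goroutines are blocked on (simulated) network calls, the Go runtime suspends them and keeps the underlying threads free to do other work, which is what lets a single small instance serve many concurrent requests.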
Build System
Bazel should be used as the main build system for all backends. A build system is a tool for producing artifacts from your source code. Build systems are a layer on top of the native compilers and interpreters of language-specific ecosystems, providing a generalized interface for interacting with them. There are a few benefits to using a build system:
Reproducible builds - Build systems manage all of your source code and dependencies, which makes it easy to build bit-identical binaries for a given Git commit of a repository.
Caching - Build systems can cache artifacts, meaning that only things that have changed since the last build need to be recompiled. This keeps builds fast, especially in CI environments.
Standardized interface - Instead of running
go test ./...
or
python -m test ...
to execute tests, a build system standardizes everything. For example, in Bazel,
bazel build //...
will build all of your code, regardless of what language different parts of the codebase are in, and
bazel test //...
will do the same thing for tests.
Code generation - Build systems can manage code generation for things like Protocol Buffers and GraphQL, which means generated code gets updated automatically and doesn’t need to be manually regenerated and checked into source control.
Bazel specifically has the benefit of being a mature build system with great support for Go, Docker, Protocol Buffers, and more. It also helps that the two founding members have extensive experience using Blaze, Google’s internal version of Bazel.
While Bazel does have support for JavaScript, we aren’t recommending its use at this time, as it seems to be more effort to set up and maintain than it’s worth.
User-facing API Servers
API servers are backend systems that are exposed to the internet and take user requests. We’re standardizing on GraphQL for serving these requests, specifically using gqlgen on the server to generate the Go bindings for our schema (a rough sketch follows the list below). The motivation for using GraphQL is:
Cross Server Type Safety - By generating typings/bindings for both the frontend and backend, GraphQL carries the type safety we design into our schema across the HTTP boundary between client and server.
Great support for clients - GraphQL can easily be served over HTTP and JSON, which are ubiquitous regardless of what client platform we’re targeting. Protocol Buffers + gRPC, for comparison, can be hard to work with because it’s a lower-level binary protocol, which in particular makes web support dodgy.
Minimal network traffic - GraphQL clients can request data on a bunch of disparate resources in a single request, which is a boon when working with devices on spotty network connections, as our first nonprofit client requires. Having to make fewer round trips also decreases the latency before things show up on the screen, which generally makes for a better client experience.
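For illustration, here’s a rough sketch of the server side with gqlgen, assuming a hypothetical schema with a single projects query. The generated and model packages, the example.com import paths, and the Project type are placeholders for what gqlgen generate would produce from our actual schema and gqlgen.yml:
package graph

import (
	"context"
	"log"
	"net/http"

	"github.com/99designs/gqlgen/graphql/handler"

	"example.com/app/graph/generated" // hypothetical gqlgen-generated package
	"example.com/app/graph/model"     // hypothetical gqlgen model package
)

type Resolver struct{}

type queryResolver struct{ *Resolver }

// Query returns the generated QueryResolver implementation.
func (r *Resolver) Query() generated.QueryResolver { return &queryResolver{r} }

// Projects resolves a hypothetical `projects: [Project!]!` query; a real
// implementation would read from PostgreSQL.
func (r *queryResolver) Projects(ctx context.Context) ([]*model.Project, error) {
	return []*model.Project{{ID: "1", Name: "Example Project"}}, nil
}

// Serve wires the generated executable schema into a plain HTTP server.
func Serve() {
	srv := handler.NewDefaultServer(
		generated.NewExecutableSchema(generated.Config{Resolvers: &Resolver{}}),
	)
	http.Handle("/graphql", srv)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
On the frontend, GraphQL Code Generator would produce TypeScript types for the same schema, which is where the cross-server type safety described above comes from.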
Core Backend Servers
‘Core’ backend servers are backends that aren’t exposed to the internet directly, but may communicate with each other and with user-facing API servers. These servers may do things like provide blob storage, handle auth, etc., but won’t directly provide customer-specific business logic. These services should speak gRPC when communicating with each other (a minimal sketch follows the list below). The motivation for using gRPC on these backends is:
gRPC is efficient - Using Protocol Buffers as the wire format means that gRPC messages are particularly compact and fast to serialize/deserialize.
gRPC is well-supported in Go - The ecosystem for working with gRPC in Go is mature and full-featured, with support for interceptors, auth, monitoring, logging, streaming RPCs, and more.
gRPC is well-documented - The gRPC site has both language-specific docs, and detailed breakdowns of all the useful features available in the ecosystem.
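As a concrete example, here’s a minimal sketch of one of these services on the Go side, assuming a hypothetical BlobStore service whose .proto definitions have already been compiled into a pb package by protoc-gen-go-grpc (or the equivalent Bazel rules); the service name, message types, and import path are placeholders:
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"

	pb "example.com/core/blobstore/pb" // hypothetical generated package
)

type blobServer struct {
	pb.UnimplementedBlobStoreServer
}

// GetBlob returns the bytes for a stored blob; a real implementation would
// read from GCS or another storage backend.
func (s *blobServer) GetBlob(ctx context.Context, req *pb.GetBlobRequest) (*pb.GetBlobResponse, error) {
	return &pb.GetBlobResponse{Data: []byte("hello")}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	s := grpc.NewServer()
	pb.RegisterBlobStoreServer(s, &blobServer{})
	if err := s.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
User-facing API servers would hold a client connection to services like this and call them as part of resolving GraphQL requests, keeping gRPC entirely behind the internet-facing boundary.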