Developer Handbook

If you have questions or suggestions for how to improve this handbook, let us know!

Hello there!

This doc is a guide for getting started with developing in the Silicon Ally ecosystem. For a high-level overview of the technologies we use, check out our Standard Tech Stack. For the purposes of this doc, we’ll mostly assume you know what each technology is and why we use it.

Note that this document only covers common infrastructure; be sure to check out the README.md files in a given GitHub repository for service-specific details.

Bazel + Bazelisk

Bazel is the build tool we use for all of our backend infrastructure. We use Bazelisk to manage our active Bazel version, which can be found in the .bazelversion file in the root of each repo.

You can install Bazelisk by downloading the latest binary release for your platform (usually linux-amd64) and putting it in your PATH as bazel. Bazelisk will forward your commands to the correct Bazel version behind the scenes, downloading it if needed.
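As a sketch, installing Bazelisk on Linux might look like the following (the version number and install directory are assumptions; check the Bazelisk releases page for the latest version):

```
# Download a Bazelisk release (v1.19.0 is an example; use the latest).
curl -fsSLo bazel \
  https://github.com/bazelbuild/bazelisk/releases/download/v1.19.0/bazelisk-linux-amd64

# Make it executable and put it on your PATH under the name "bazel".
chmod +x bazel
mv bazel ~/.local/bin/bazel

# Verify that Bazelisk resolves and runs a Bazel version.
bazel version
```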

Once Bazelisk is installed, you can add Bash autocompletion by following the official Bazel instructions, or by copying this already-bootstrapped completion script to your local machine, and sourcing it in your .bashrc with:

source /path/to/bazel-complete.bash

A standard location to store the file is $HOME/.local/share/bazel/bazel-complete.bash.

Once everything is installed, you can test your installation by going into the root of a repo and running:

bazel build //...

which will build all targets in the repository. You can also run bazel test //... to test all targets.
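You can also build or test narrower target patterns. For example (the package paths below are hypothetical; actual paths vary by repo):

```
# Build a single target.
bazel build //cmd/server

# Test everything under one directory.
bazel test //pkg/...
```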

Gazelle

Gazelle is a tool that can auto-generate BUILD.bazel files for Go libraries and binaries, and it takes most of the manual toil out of maintaining BUILD files for Go code. It’s managed by Bazel and doesn’t need to be installed independently. To run it:

bazel run //:gazelle

This will update all go_library and go_binary rules to have dependencies matching the imports in the source code. If you find that you need to manually edit a go_library, go_binary, go_test, or go_proto_library rule, something is likely wrong.
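For reference, a Gazelle-generated rule looks roughly like this sketch (the package path and deps are illustrative, not from a real repo):

```
load("@io_bazel_rules_go//go:def.bzl", "go_library")

go_library(
    name = "server",
    srcs = ["server.go"],
    importpath = "github.com/Silicon-Ally/example/cmd/server",
    visibility = ["//visibility:private"],
    deps = [
        "//pkg/db",
        "@com_github_your_new_dependency//:go_default_library",
    ],
)
```

Gazelle rewrites the deps list whenever the imports in server.go change, which is why hand-editing these rules is rarely necessary.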

Adding a new dependency

If you need to add a new external library to the application, run the following commands:

# This will add the dependency (and any transitive dependencies)
# to your go.mod and go.sum files.
go get -u github.com/your/new-dependency

# This will copy the dependencies from the go.mod file into
# deps.bzl so that Bazel is aware of them.
bazel run //:gazelle-update-repos

# Finally, this will recreate your build files:
bazel run //:gazelle

# And validate your changes build
bazel build //...

The reason for having this information duplicated in two files is that some Go tools and IDE extensions will understand the go.mod file automatically, but not Bazel. Additionally, running go get … will pull in transitive dependencies, which would otherwise be tedious to manually import with Gazelle. We should treat the go.mod file as the source of truth for our dependencies and their versions.
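After running bazel run //:gazelle-update-repos, deps.bzl will contain one go_repository entry per module, along these lines (the module name, sum, and version are illustrative):

```
go_repository(
    name = "com_github_your_new_dependency",
    importpath = "github.com/your/new-dependency",
    sum = "h1:...",
    version = "v1.2.3",
)
```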

Git LFS

We use Git LFS for managing large files (like media, or binary test data) in this repository. To install Git LFS, download the latest release, put it in your PATH, and then run:

git lfs install

Follow the official documentation to 'track' files as being LFS files.
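As a quick sketch, tracking a file type with LFS looks like this (the *.psd pattern is just an example):

```
# Track all Photoshop files with LFS; this records the pattern
# in .gitattributes.
git lfs track "*.psd"

# Commit the updated .gitattributes so the rule applies for everyone.
git add .gitattributes
git commit -m "Track *.psd files with Git LFS"
```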

NVM + NPM

Generally, nvm and npm are only required if the repository has a web frontend, which would be in a frontend/ directory.

We manage our Node.js versions with Node Version Manager (nvm); follow the installation instructions to install the nvm tool.

Once installed, run:

cd frontend

# Have nvm install the correct version of Node + NPM.
nvm install

# Install all of the packages in our `package.json` into
# the local `node_modules` cache directory.
npm install
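Running nvm install with no arguments picks the Node version from the .nvmrc file in the current directory. A frontend/.nvmrc is just a version specifier, e.g.:

```
18
```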

From there, follow the instructions in the relevant frontend/README.md file to run the web frontend. When in doubt, look at the "scripts" section in the package.json to see what commands are available, then run npm run <command> to execute one.
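For example, a frontend's package.json might include a scripts section like this sketch (the script names and commands are assumptions; check the actual file):

```
{
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "lint": "eslint src/"
  }
}
```

With the above, npm run dev would start the local dev server.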

Docker

The vast majority of our services are deployed as Docker containers, and we frequently use Docker containers for running services locally. Follow the installation instructions for your platform to get up and running. Many scripts assume that you can run Docker without sudo, so you should either run Docker in rootless mode or add yourself to the docker group with sudo usermod -aG docker $USER. Note that the latter effectively allows any command running as your user to escalate to root permissions; see the warning here for more details.
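The group-based setup can be sketched as follows (note that the group change only takes effect in new sessions):

```
# Add yourself to the docker group (requires root).
sudo usermod -aG docker $USER

# Start a new shell with the updated group, or log out and back in.
newgrp docker

# Verify that Docker works without sudo.
docker run --rm hello-world
```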

Credential Stores

You’ll also likely want to install a credential store. Credential stores are plugins for Docker that manage sensitive credentials. The alternative is that secret tokens are stored in plaintext within your Docker configuration, which is rarely desirable. The osxkeychain store is the most commonly used one on Macs, and Linux users can use the pass store. See the docker/docker-credential-helpers repo for more information.
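Once a helper binary (e.g. docker-credential-pass) is on your PATH, you point Docker at it in your ~/.docker/config.json:

```
{
  "credsStore": "pass"
}
```

After that, docker login stores tokens via the helper instead of writing them to the config file in plaintext.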

gcloud

The gcloud CLI is our main local entrypoint for interacting with services on Google Cloud Platform (GCP). Follow the installation instructions to get everything installed, then follow the initialization instructions, which will associate your @siliconally.org credentials with the command line tool. This is required for being able to run Terraform and run some services locally.
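The typical first-time setup looks like the following. The application-default login step is what lets Terraform and locally-run services pick up your credentials (via Application Default Credentials):

```
# Associate your @siliconally.org account with the CLI.
gcloud init

# Create Application Default Credentials for tools like Terraform.
gcloud auth application-default login

# Confirm the active account and project.
gcloud config list
```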

sops

sops is a tool developed by Mozilla for managing encrypted data files. The tool is written in Go and can be used as a library and embedded in applications. It has support for secret keys stored in most cloud platforms, including GCP. We use it for encrypting all sensitive material (API credentials, DB username/password, etc.) that should be shipped with an application; see the original design for details. These encrypted files are stored in the repo alongside the code, and updated by GCP-authenticated developers with the sops CLI. We’ve standardized on using JSON (instead of YAML, INI, ENV, etc.) for our secrets.

Editing credentials

Credentials are usually stored in a cmd/<service>/configs/secrets/<env>.enc.json file, which can be edited with:

sops path/to/secret.enc.json

This will open the file in your default $EDITOR, where it can be edited. On saving and closing the editor, the file will be re-encrypted before being written back.
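Inside the editor you see plain JSON; on disk, every value is encrypted and sops appends a metadata block it manages itself. A trimmed sketch of an on-disk file (field contents are illustrative):

```
{
  "db_password": "ENC[AES256_GCM,data:...,tag:...,type:str]",
  "sops": {
    "gcp_kms": [...],
    "lastmodified": "...",
    "mac": "ENC[AES256_GCM,...]",
    "version": "..."
  }
}
```

You should never need to edit the "sops" block by hand except when resolving merge conflicts, as described below.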

Handling merge conflicts

When two people edit the same encrypted sops file, and one person submits their changes, the second person will have a merge conflict in that file. To resolve the conflict, do the following:

  1. Open the file with your editor (not with sops)

  2. Remove the merge markers and all but one of the "mac" fields

  3. Save and close the file

  4. Open the file with sops --ignore-mac path/to/file

  5. Save the file again

This will save the file with the correct MAC signature.

Terraform

Terraform is a tool for managing your infrastructure (e.g. deployed resources on GCP) as code, using its own declarative configuration language, HCL. We use it for managing all of our deployed infrastructure because it takes the guesswork out of configuring new software and makes things reproducible.

To install Terraform, download the CLI and add it to your PATH.

Overview

We store all of our Terraform configurations in a GCS bucket, silicon-ally-terraform-admin, which you can browse here (if you are a member of siliconally.org). Different subdirectories within the bucket correspond to configuration for different groups of projects, like the admin project itself, client-specific projects, and our centrally-managed infrastructure projects.
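Each set of configurations reads and writes its remote state in that bucket via a GCS backend block; a sketch (the prefix here is hypothetical):

```
terraform {
  backend "gcs" {
    bucket = "silicon-ally-terraform-admin"
    prefix = "projects/example-client"
  }
}
```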

Aside from our dedicated Silicon-Ally/terraform repo, which hosts core Terraform configs, other repos will have a terraform/ directory containing one or more sets of Terraform configurations. Check out a repo’s terraform/README.md to see what configurations exist. Before you can manage any infrastructure, you’ll have to run terraform init to fetch the remote state and pull relevant dependencies, though if you forget to do this, Terraform will usually prompt you as well.

Use of Workspaces

Terraform has the concept of Workspaces, which are essentially named instances of the same set of configuration. We mainly use them to represent different environments of the same deployment, so we’d have two workspaces named dev and prod. The default workspace always exists, but in cases where we have per-environment workspaces, it goes unused.

To see a list of available workspaces, run terraform workspace list. The workspace with an asterisk (*) before it is the currently active one, which means that any other terraform <command> that you run will apply to this environment. To change workspaces, run terraform workspace select <workspace>.
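Workspaces can also be referenced inside the configuration itself via terraform.workspace, which is how one set of config serves multiple environments. A sketch (the resource and its name are illustrative):

```
# Name resources per environment, e.g. "api-dev" vs. "api-prod".
resource "google_service_account" "api" {
  account_id = "api-${terraform.workspace}"
}
```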