# WarpBuild Documentation Base URL: https://www.warpbuild.com/docs # February 2026 URL: https://www.warpbuild.com/docs/ci/changelog/2026-february Description: List of updates in 2026-February --- title: "February 2026" slug: "2026-February" description: "List of updates in 2026-February" sidebar_position: -26 createdAt: "2026-02-05" updatedAt: "2026-02-11" --- ### February 11, 2026 - `Enhancement`: macOS 12x runners (`warp-macos-15-arm64-12x`, `warp-macos-26-arm64-12x`) now come with 270GB SSD storage, increased from 120GB. [Read more](/docs/ci/cloud-runners#macos-m4-pro-on-arm64). ### February 6, 2026 - `Bug Fix`: AWS BYOC: Fixed ECR authentication issues for runners in public subnets. CloudFormation stack template v1.4 now includes VPC endpoint security group rules allowing traffic from both public and private subnet CIDRs. Existing BYOC AWS users with public subnet runners should [upgrade to v1.4 template](/docs/ci/byoc/aws/config#-ecr-elastic-container-registry). ### February 5, 2026 - `Enhancement`: [macOS 26 image](https://github.com/actions/runner-images/releases/tag/macos-26-arm64%2F20260127.0184) has been updated. --- # Cloud Runners URL: https://www.warpbuild.com/docs/ci/cloud-runners Description: Blazing fast GitHub Action Runners, hosted on WarpBuild's cloud --- title: "Cloud Runners" excerpt: "Blazing fast GitHub Action Runners, hosted on WarpBuild's cloud" description: "Blazing fast GitHub Action Runners, hosted on WarpBuild's cloud" icon: ServerCog createdAt: "2023-12-11" updatedAt: "2025-12-10" --- WarpBuild runners are built to be the fastest CI/CD platform in the world. We pair the fastest processors with blazing fast SSDs and high bandwidth networking to give you the best performance possible. WarpBuild runners are designed to be drop-in replacements for GitHub-hosted runners. They are fully compatible with GitHub Actions. Refer to the customizations section for more information. 
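Because the runners are drop-in compatible, migrating an existing workflow is typically a one-line change to `runs-on`. A minimal sketch, assuming a generic build job (the job name and build step are illustrative):

```yaml
name: CI
on: push

jobs:
  build:
    # Before: runs-on: ubuntu-latest
    runs-on: warp-ubuntu-latest-x64-4x  # WarpBuild runner tag
    steps:
      - uses: actions/checkout@v4
      # Hypothetical build/test step; replace with your own
      - run: make test
```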
All WarpBuild runners are run on ephemeral VMs for maximum isolation and security. This means that they are freshly allocated when you need them and destroyed when the workflow is complete. We currently support Linux on `x86-64` and `ARM64` architectures, macOS on `ARM64`, and Windows on `x86-64`. ## Linux x86-64 | Runner Tag | OS | CPU | Memory | Storage | Price | Aliases | | -------------------------- | ------------ | ------- | ------ | --------- | ------------- | ------------------------ | | warp-ubuntu-latest-x64-2x | Ubuntu 24.04 | 2 vCPU | 7GB | 150GB SSD | $0.004/minute | warp-ubuntu-2404-x64-2x | | warp-ubuntu-latest-x64-4x | Ubuntu 24.04 | 4 vCPU | 16GB | 150GB SSD | $0.008/minute | warp-ubuntu-2404-x64-4x | | warp-ubuntu-latest-x64-8x | Ubuntu 24.04 | 8 vCPU | 32GB | 150GB SSD | $0.016/minute | warp-ubuntu-2404-x64-8x | | warp-ubuntu-latest-x64-16x | Ubuntu 24.04 | 16 vCPU | 64GB | 150GB SSD | $0.032/minute | warp-ubuntu-2404-x64-16x | | warp-ubuntu-latest-x64-32x | Ubuntu 24.04 | 32 vCPU | 128GB | 150GB SSD | $0.064/minute | warp-ubuntu-2404-x64-32x | | warp-ubuntu-2204-x64-2x | Ubuntu 22.04 | 2 vCPU | 7GB | 150GB SSD | $0.004/minute | | | warp-ubuntu-2204-x64-4x | Ubuntu 22.04 | 4 vCPU | 16GB | 150GB SSD | $0.008/minute | | | warp-ubuntu-2204-x64-8x | Ubuntu 22.04 | 8 vCPU | 32GB | 150GB SSD | $0.016/minute | | | warp-ubuntu-2204-x64-16x | Ubuntu 22.04 | 16 vCPU | 64GB | 150GB SSD | $0.032/minute | | | warp-ubuntu-2204-x64-32x | Ubuntu 22.04 | 32 vCPU | 128GB | 150GB SSD | $0.064/minute | | The Linux x86-64 runner images have the same tooling installed as GitHub-hosted runners. Runner storage is ephemeral and will be deleted when the runner is terminated. ## Linux ARM64 `Breaking Change`: The arm64 images for ubuntu-22.04 were deprecated on March 31, 2025.
`Disparity`: The arm64 images for ubuntu-24.04 have their work dir set as `/runner/_work`, which is different from GitHub's work dir `/home/runner/work/` for the same instance. | Runner Tag | OS | CPU | Memory | Storage | Price | Aliases | | ---------------------------- | ------------------------ | ------- | ------ | --------- | ------------- | -------------------------- | | warp-ubuntu-latest-arm64-2x | Ubuntu 24.04² | 2 vCPU | 7GB | 150GB SSD | $0.003/minute | warp-ubuntu-2404-arm64-2x | | warp-ubuntu-latest-arm64-4x | Ubuntu 24.04² | 4 vCPU | 16GB | 150GB SSD | $0.006/minute | warp-ubuntu-2404-arm64-4x | | warp-ubuntu-latest-arm64-8x | Ubuntu 24.04² | 8 vCPU | 32GB | 150GB SSD | $0.012/minute | warp-ubuntu-2404-arm64-8x | | warp-ubuntu-latest-arm64-16x | Ubuntu 24.04² | 16 vCPU | 64GB | 150GB SSD | $0.024/minute | warp-ubuntu-2404-arm64-16x | | warp-ubuntu-latest-arm64-32x | Ubuntu 24.04² | 32 vCPU | 128GB | 150GB SSD | $0.048/minute | warp-ubuntu-2404-arm64-32x | ² The Linux ARM64 runners based on Ubuntu 24.04 LTS are compatible with GitHub's Ubuntu 24.04 ARM64 runners.
For more details on the available tooling, refer to [this link](https://github.com/actions/partner-runner-images/blob/main/images/arm-ubuntu-24-image.md). ## MacOS M4 Pro on ARM64 | Runner Tag | CPU | Memory | Storage | Price | Aliases | |---------------------------|----------|--------|------------|---------------|-------------------------------| | warp-macos-26-arm64-6x | 6 vCPU | 22GB | 120GB SSD | $0.08/minute | | | warp-macos-26-arm64-12x | 12 vCPU | 44GB | 270GB SSD | $0.16/minute | | | warp-macos-15-arm64-6x | 6 vCPU | 22GB | 120GB SSD | $0.08/minute | warp-macos-latest-arm64-6x | | warp-macos-15-arm64-12x | 12 vCPU | 44GB | 270GB SSD | $0.16/minute | warp-macos-latest-arm64-12x | | warp-macos-14-arm64-6x | 6 vCPU | 22GB | 120GB SSD | $0.08/minute | | | warp-macos-13-arm64-6x | 6 vCPU | 22GB | 120GB SSD | $0.08/minute | | The comparable GitHub-hosted runner is `macos-latest-xlarge` with 6 vCPUs (M1) and 14GB of memory. The WarpBuild runner is 60% faster than the GitHub-hosted runner at half the cost. WarpBuild provides M4 Pro based MacOS runners built on Apple Silicon with ARM64 architecture. These runners have the same tooling pre-installed as GitHub-hosted runners, functioning as drop-in replacements. Compared to the Intel-based runners, the M4 Pro based runners can be up to 8x faster. 1. `macos-latest` runners from GitHub are based on M1 processors and are significantly slower than the M4 Pro based runners. 2. MacOS runners do not support nested virtualization and cannot run Docker. 3. The MacOS `latest` tag has been switched to `macos-15`, in sync with GitHub's `macos-latest` tag.
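To target an M4 Pro runner, reference one of the macOS tags above in `runs-on`. A minimal sketch, assuming an Xcode project (the scheme and destination are placeholders); note that Docker-based steps won't work, since macOS runners cannot run Docker:

```yaml
jobs:
  ios-tests:
    runs-on: warp-macos-15-arm64-6x  # alias: warp-macos-latest-arm64-6x
    steps:
      - uses: actions/checkout@v4
      # Placeholder scheme/destination; adjust for your project
      - run: xcodebuild test -scheme MyApp -destination 'platform=iOS Simulator,name=iPhone 16'
```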
## Windows x86-64 | Runner Tag | OS | CPU | Memory | Storage | Price | Aliases | | --------------------------- | ------------------- | ------- | ------ | --------- | ------------- | ------------------------- | | warp-windows-latest-x64-2x | Windows Server 2022 | 2 vCPU | 7GB | 256GB SSD | $0.008/minute | warp-windows-2022-x64-2x | | warp-windows-latest-x64-4x | Windows Server 2022 | 4 vCPU | 16GB | 256GB SSD | $0.016/minute | warp-windows-2022-x64-4x | | warp-windows-latest-x64-8x | Windows Server 2022 | 8 vCPU | 32GB | 256GB SSD | $0.032/minute | warp-windows-2022-x64-8x | | warp-windows-latest-x64-16x | Windows Server 2022 | 16 vCPU | 64GB | 256GB SSD | $0.064/minute | warp-windows-2022-x64-16x | | warp-windows-latest-x64-32x | Windows Server 2022 | 32 vCPU | 128GB | 256GB SSD | $0.128/minute | warp-windows-2022-x64-32x | | warp-windows-2025-x64-4x | Windows Server 2025 | 4 vCPU | 16GB | 256GB SSD | $0.016/minute | | | warp-windows-2025-x64-8x | Windows Server 2025 | 8 vCPU | 32GB | 256GB SSD | $0.032/minute | | | warp-windows-2025-x64-16x | Windows Server 2025 | 16 vCPU | 64GB | 256GB SSD | $0.064/minute | | | warp-windows-2025-x64-32x | Windows Server 2025 | 32 vCPU | 128GB | 256GB SSD | $0.128/minute | | The Windows x86-64 runner images have the same tooling installed as GitHub-hosted runners. Runner storage is ephemeral and will be deleted when the runner is terminated. **Windows Server 2025 runners** are now available, providing the latest Windows Server platform with enhanced performance, security features, and improved compatibility. These runners include all the latest Windows updates and modern development tools. Windows 2vCPU runners are deprecated and will fall back to 4vCPU runners starting September 30, 2025, because they are insufficient for most Windows workloads. ## Spot Instances Spot instances are not available for organizations created after 2025-08-01. WarpBuild supports `spot` instances for runners.
Spot instances are 62.5% cheaper than GitHub Actions runner instances and 25% cheaper than standard WarpBuild runners. - Naming convention: `warp-<os>-<version>-<arch>-<size>x-spot`. Note the `-spot` suffix. - The configuration for spot instances is the same as the WarpBuild standard instances. - The only difference is the price. - Spot instances are ideal for short workloads that can be interrupted and restarted. - They are _not recommended_ for critical deploy tasks that may leave the workflow in a dirty state (for example, `tofu apply` steps). Here is the configuration and pricing table for spot instances: | Runner Tag | OS | CPU | Memory | Storage | Price | Aliases | | --------------------------------- | ------------------- | ------- | ------ | --------- | --------------- | ------------------------------- | | warp-ubuntu-latest-x64-2x-spot | Ubuntu 24.04 | 2 vCPU | 7GB | 150GB SSD | $0.003/minute | warp-ubuntu-2404-x64-2x-spot | | warp-ubuntu-latest-x64-4x-spot | Ubuntu 24.04 | 4 vCPU | 16GB | 150GB SSD | $0.006/minute | warp-ubuntu-2404-x64-4x-spot | | warp-ubuntu-latest-x64-8x-spot | Ubuntu 24.04 | 8 vCPU | 32GB | 150GB SSD | $0.012/minute | warp-ubuntu-2404-x64-8x-spot | | warp-ubuntu-latest-x64-16x-spot | Ubuntu 24.04 | 16 vCPU | 64GB | 150GB SSD | $0.024/minute | warp-ubuntu-2404-x64-16x-spot | | warp-ubuntu-latest-x64-32x-spot | Ubuntu 24.04 | 32 vCPU | 128GB | 150GB SSD | $0.048/minute | warp-ubuntu-2404-x64-32x-spot | | warp-ubuntu-2204-x64-2x-spot | Ubuntu 22.04 | 2 vCPU | 7GB | 150GB SSD | $0.003/minute | | | warp-ubuntu-2204-x64-4x-spot | Ubuntu 22.04 | 4 vCPU | 16GB | 150GB SSD | $0.006/minute | | | warp-ubuntu-2204-x64-8x-spot | Ubuntu 22.04 | 8 vCPU | 32GB | 150GB SSD | $0.012/minute | | | warp-ubuntu-2204-x64-16x-spot | Ubuntu 22.04 | 16 vCPU | 64GB | 150GB SSD | $0.024/minute | | | warp-ubuntu-2204-x64-32x-spot | Ubuntu 22.04 | 32 vCPU | 128GB | 150GB SSD | $0.048/minute | | | warp-ubuntu-latest-arm64-2x-spot | Ubuntu 24.04 | 2 vCPU | 7GB | 150GB SSD | $0.00225/minute |
warp-ubuntu-2404-arm64-2x-spot | | warp-ubuntu-latest-arm64-4x-spot | Ubuntu 24.04 | 4 vCPU | 16GB | 150GB SSD | $0.0045/minute | warp-ubuntu-2404-arm64-4x-spot | | warp-ubuntu-latest-arm64-8x-spot | Ubuntu 24.04 | 8 vCPU | 32GB | 150GB SSD | $0.009/minute | warp-ubuntu-2404-arm64-8x-spot | | warp-ubuntu-latest-arm64-16x-spot | Ubuntu 24.04 | 16 vCPU | 64GB | 150GB SSD | $0.018/minute | warp-ubuntu-2404-arm64-16x-spot | | warp-ubuntu-latest-arm64-32x-spot | Ubuntu 24.04 | 32 vCPU | 128GB | 150GB SSD | $0.036/minute | warp-ubuntu-2404-arm64-32x-spot | | warp-ubuntu-2204-arm64-2x-spot | Ubuntu 22.04 | 2 vCPU | 7GB | 150GB SSD | $0.00225/minute | | | warp-ubuntu-2204-arm64-4x-spot | Ubuntu 22.04 | 4 vCPU | 16GB | 150GB SSD | $0.0045/minute | | | warp-ubuntu-2204-arm64-8x-spot | Ubuntu 22.04 | 8 vCPU | 32GB | 150GB SSD | $0.009/minute | | | warp-ubuntu-2204-arm64-16x-spot | Ubuntu 22.04 | 16 vCPU | 64GB | 150GB SSD | $0.018/minute | | | warp-ubuntu-2204-arm64-32x-spot | Ubuntu 22.04 | 32 vCPU | 128GB | 150GB SSD | $0.036/minute | | | warp-windows-latest-x64-2x-spot | Windows Server 2022 | 2 vCPU | 7GB | 256GB SSD | $0.004/minute | warp-windows-2022-x64-2x-spot | | warp-windows-latest-x64-4x-spot | Windows Server 2022 | 4 vCPU | 16GB | 256GB SSD | $0.008/minute | warp-windows-2022-x64-4x-spot | | warp-windows-latest-x64-8x-spot | Windows Server 2022 | 8 vCPU | 32GB | 256GB SSD | $0.016/minute | warp-windows-2022-x64-8x-spot | | warp-windows-latest-x64-16x-spot | Windows Server 2022 | 16 vCPU | 64GB | 256GB SSD | $0.032/minute | warp-windows-2022-x64-16x-spot | | warp-windows-latest-x64-32x-spot | Windows Server 2022 | 32 vCPU | 128GB | 256GB SSD | $0.064/minute | warp-windows-2022-x64-32x-spot | When a spot instance is reclaimed, the job is terminated and does not restart, for two reasons. First, GitHub doesn't provide an easy way to re-trigger the job without user-side changes (such as adding dispatch events).
Second, not all CI jobs are idempotent or "safe" to run multiple times (for example, deployments or IaC apply steps). We leave handling reruns up to the user. ## Concurrency Features that are Generally Available (GA) support unlimited concurrency. This means that workflows can spin up any number of jobs in parallel, and any number of workflows can run in parallel. Features that are in beta may not support unlimited concurrency. ## Caching WarpBuild provides a blazing fast, unlimited cache for GitHub Action runners. This cache can be used to store build artifacts, dependencies, and other files that are needed across builds. The cache is designed to be fast, reliable, and secure. The cache is available on all Linux based runners and is enabled by default. More details can be found in the [cache documentation](/docs/ci/caching). WarpBuild caches aren't supported on Windows-based runners. ## WarpBuild Agent The WarpBuild agent is present on the runner and is used to communicate with the WarpBuild platform for runner configuration and cleanup. The agent is open source and can be found [here](https://github.com/WarpBuilds/warpbuild-agent). The agent collects telemetry data using port 33931 for monitoring and diagnostics. For more information about telemetry collection and network requirements, see our [observability documentation](/docs/ci/observability). --- # SSO URL: https://www.warpbuild.com/docs/sso Description: SAML integration support for enterprise users --- title: "SSO" excerpt: "SAML integration support" description: "SAML integration support for enterprise users" icon: ShieldUser createdAt: "2025-06-25" updatedAt: "2025-06-25" --- SAML-based logins are supported for our enterprise users.
Below is the list of providers we currently support: - Generic SAML 2.0 Provider - Microsoft Entra ID (formerly Azure AD) - Microsoft AD FS - Okta - Auth0 - Google - OneLogin - PingOne - JumpCloud - Rippling - OpenID Connect Provider Please reach out to [support](mailto:support@warpbuild.com) if you are interested in using SAML for your logins. ## Directory Sync For SSO-enabled organizations, directory sync (SCIM) support can be enabled to manage users from the identity provider itself. When directory sync is configured, invite flows are disabled for the organization. Users are added via your identity provider (IdP) only. ### Directory Sync Configuration Mapping To import users from the identity provider, WarpBuild needs a role for each incoming user. Add the users to one or more SSO groups in your identity provider, then add a configuration in the WarpBuild dashboard mapping each SSO group to a WarpBuild role. Refer to the screenshot below for an example. This can be done via the 'Directory Sync Configuration' section in Settings > Account. **Note**: The configuration can only be modified by an admin of the organization. ![Directory Sync Configuration](./img/directory-sync-configuration.png) Please reach out to [support](mailto:support@warpbuild.com) if you are interested in using SCIM for user management. --- # AWS Marketplace Billing URL: https://www.warpbuild.com/docs/ci/aws-marketplace-billing Description: Enable billing through AWS Marketplace for WarpBuild CI products --- title: "AWS Marketplace Billing" excerpt: "Enable billing through AWS Marketplace for WarpBuild CI products" description: "Enable billing through AWS Marketplace for WarpBuild CI products" createdAt: "2025-10-08" updatedAt: "2025-10-08" --- ## Setup We support billing WarpBuild CI products through AWS Marketplace upon request. Please contact us at [support@warpbuild.com](mailto:support@warpbuild.com) to enable this for your account.
Once we enable it, please follow the steps below to set it up with your AWS account: 1. Our [billing dashboard](https://app.warpbuild.com/settings/billing) will show this message: ![AWS Marketplace Billing](./img/aws-marketplace-billing/pre_onboard.png) 2. Clicking on the `Purchase on AWS Marketplace` button will take you to our AWS Marketplace product page. ![AWS Marketplace Product Page](./img/aws-marketplace-billing/product_page.png) 3. Click the `View purchase options` button to open the subscription/offer page. 4. Review the offer and click the `Subscribe` button at the bottom of the page. 5. While the subscription is being created, click the `Set up your account` button at the top of the page. ![Setup your Account](./img/aws-marketplace-billing/setup.png) You can also do this after the subscription is created, in which case the callout will look like this: ![Setup your Account](./img/aws-marketplace-billing/setup_2.png) 6. This will associate your subscription with your WarpBuild account and redirect you to the billing dashboard. If the subscription is not active, the callout will look like this: ![Subscription not active](./img/aws-marketplace-billing/post_onboard_inactive.png) When the subscription is activated, the billing dashboard will show the following message: ![Subscription active](./img/aws-marketplace-billing/post_onboard_active.png) 7. You can now start using WarpBuild CI products. ## Limitations - AWS Marketplace billing does not support free credits. - You cannot use Helios with AWS Marketplace billing. - We bill organization usage on a daily basis. Billing occurs every day at 12:15 AM UTC for the previous day. Therefore, the costs on AWS Marketplace might not match the actual costs you see on our billing dashboard. 
--- # Caching URL: https://www.warpbuild.com/docs/ci/caching Description: Blazing fast, unlimited Cache for GitHub Action Runners by WarpBuild --- title: "Caching" excerpt: "Blazing fast, unlimited Cache for GitHub Action Runners by WarpBuild" description: "Blazing fast, unlimited Cache for GitHub Action Runners by WarpBuild" icon: DatabaseZap --- WarpBuild provides a fast, unlimited cache for GitHub Action runners. This cache can be used to store build artifacts, dependencies, and other files that are needed across builds. The cache is designed to be fast, reliable, and secure. ## ⚠️ Read First GitHub has made updates to caching for improved performance and unlimited size. You can change the size limits and the expiration policy here: [https://github.com/organizations/$YOURORG/settings/actions](https://github.com/organizations/$YOURORG/settings/actions). This is already available for most users and is expected to GA by the end of 2025: [Increased Cache Size in Actions GA](https://github.com/github/roadmap/issues/1029) and [recent commitment from the team](https://github.com/orgs/community/discussions/42506#discussioncomment-14936753). This provides the most seamless experience for users and is the recommended approach. However, for some advanced use cases and BYOC, you may still want to use WarpBuild cache. Read on. ## Usage WarpBuild cache can be used by replacing the `actions/cache@v4` action with the `warpbuilds/cache` action. The `warpbuilds/cache` action is fully compatible with the `actions/cache@v4` action and can be used as a drop-in replacement. Refer to the [WarpBuild Actions cache documentation](https://github.com/WarpBuilds/cache) for more information on how to use the cache. ```yaml uses: WarpBuilds/cache@v1 ``` The cache is designed to be fast, reliable, and secure. It is available on all WarpBuild runners and is enabled by default.
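For example, an existing `actions/cache` step can be swapped for WarpBuild cache by changing only the `uses:` line; the npm paths and keys below are illustrative:

```yaml
steps:
  - uses: actions/checkout@v4
  # Previously: uses: actions/cache@v4; the same inputs work unchanged
  - uses: WarpBuilds/cache@v1
    with:
      path: ~/.npm
      key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
      restore-keys: |
        ${{ runner.os }}-npm-
```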
### Examples #### Restoring and saving cache using a single action ```yaml name: Caching Primes on: push jobs: build: runs-on: warp-ubuntu-latest-x64-4x steps: - uses: actions/checkout@v3 - name: Cache Primes id: cache-primes uses: WarpBuilds/cache@v1 with: path: prime-numbers key: ${{ runner.os }}-primes - name: Generate Prime Numbers if: steps.cache-primes.outputs.cache-hit != 'true' run: /generate-primes.sh -d prime-numbers - name: Use Prime Numbers run: /primes.sh -d prime-numbers ``` The `cache` action provides a `cache-hit` output which is set to `true` when the cache is restored using the primary `key` and `false` when the cache is restored using `restore-keys` or no cache is restored. #### Using a combination of restore and save actions ```yaml name: Caching Primes on: push jobs: build: runs-on: warp-ubuntu-latest-x64-4x steps: - uses: actions/checkout@v3 - name: Restore Cache Primes id: cache-primes-restore uses: WarpBuilds/cache/restore@v1 with: path: | path/to/dependencies some/other/dependencies key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }} - name: Install Dependencies if: steps.cache-primes-restore.outputs.cache-hit != 'true' run: /install.sh - name: Save Cache Primes id: cache-primes-save uses: WarpBuilds/cache/save@v1 with: path: | path/to/dependencies some/other/dependencies key: ${{ steps.cache-primes-restore.outputs.cache-primary-key }} ``` ### Inputs - `key` - An explicit key for restoring and saving the cache. - `path` - A list of files, directories, and wildcard patterns to cache and restore. - `restore-keys` - An ordered list of keys to use for restoring stale cache if no cache hit occurred for key. - `enableCrossOsArchive` - An optional boolean when enabled, allows windows runners to save or restore caches that can be restored or saved respectively on other platforms. Default: `false` - `fail-on-cache-miss` - Fail the workflow if cache entry is not found. 
Default: `false` - `lookup-only` - If true, only checks if cache entry exists and skips download. Does not change save cache behavior. Default: `false` - `delete-cache` - If true, deletes the cache entry. Skips restore and save. Default: `false` ### Outputs - `cache-hit` - A boolean value to indicate an exact match was found for the key. > **Note** `cache-hit` will only be set to `true` when a cache hit occurs for the exact `key` match. For a partial key match via `restore-keys` or a cache miss, it will be set to `false`. See [Skipping steps based on cache-hit](#skipping-steps-based-on-cache-hit) for info on using this output ### Creating a cache key A cache key can include any of the contexts, functions, literals, and operators supported by GitHub Actions. For example, using the [`hashFiles`](https://docs.github.com/en/actions/learn-github-actions/expressions#hashfiles) function allows you to create a new cache when dependencies change. ```yaml - uses: WarpBuilds/cache@v1 with: path: | path/to/dependencies some/other/dependencies key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }} ``` Additionally, you can use arbitrary command output in a cache key, such as a date or software version: ```yaml # http://man7.org/linux/man-pages/man1/date.1.html - name: Get Date id: get-date run: | echo "date=$(/bin/date -u "+%Y%m%d")" >> $GITHUB_OUTPUT shell: bash - uses: WarpBuilds/cache@v1 with: path: path/to/dependencies key: ${{ runner.os }}-${{ steps.get-date.outputs.date }}-${{ hashFiles('**/lockfiles') }} ``` See [Using contexts to create cache keys](https://help.github.com/en/actions/configuring-and-managing-workflows/caching-dependencies-to-speed-up-workflows#using-contexts-to-create-cache-keys) ### Cache scopes The cache is scoped to the key, [version](#cache-version), and branch. See [Matching a cache key](https://help.github.com/en/actions/configuring-and-managing-workflows/caching-dependencies-to-speed-up-workflows#matching-a-cache-key) for more info. 
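The inputs above can be combined as in this sketch, which falls back to a stale cache via `restore-keys` when the primary key misses (the paths and key prefixes are placeholders):

```yaml
- uses: WarpBuilds/cache@v1
  with:
    path: |
      path/to/dependencies
    key: ${{ runner.os }}-deps-${{ hashFiles('**/lockfiles') }}
    # Ordered fallback prefixes for partial matches
    restore-keys: |
      ${{ runner.os }}-deps-
    # Set to true to fail the workflow instead of building from scratch on a miss
    fail-on-cache-miss: false
```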
## Docker Layer Caching > It is recommended to use the new [Docker Builders](/docs/ci/docker-builders) for a faster and better build experience. Since WarpBuild Cache seamlessly integrates as a drop-in replacement for GitHub Actions Cache, you can easily use it for Docker Layer Caching. ### Step 1: Set Up Docker Buildx Action Ensure that the Docker Buildx Action is included in your workflow file if it isn't already. This step is essential to enable the GHA backend for Docker Layer Caching. > Note: Don't forget to include the driver-opts key as shown below. ```yaml - name: Set up Docker Buildx uses: docker/setup-buildx-action@v3 with: driver-opts: | network=host ``` ### Step 2: Configure Docker Build Push Action Update the `cache-from` and `cache-to` fields by setting the `url` to `http://127.0.0.1:49160/`, as shown below. Ensure that the `type` is set to `gha` and `version` is set to `1`. WarpBuild Cache will automatically proxy the storage backend for docker layer caching. > Note: It is recommended to set `mode=max`. This setting ensures that all layers, including intermediate steps, are cached. For more details, read about the mode [here](https://docs.docker.com/build/cache/backends/#cache-mode). > Note: It is mandatory to set `version=1` for the cache to work. ```yaml - name: Docker WarpCache Backend uses: docker/build-push-action@v6 with: context: . push: false tags: "alpine/warpcache:latest" cache-from: type=gha,url=http://127.0.0.1:49160/,version=1 cache-to: type=gha,url=http://127.0.0.1:49160/,mode=max,version=1 ``` That's all there is to it! With these adjustments, WarpBuild Cache is now set up as your Docker Layer Caching backend. ## Advanced Configuration ### Running inside a container A few conditions must be met to use the cache action inside a custom container. - `wget`: the cache action uses `wget` to download the cache. See [our workflow file](https://github.com/WarpBuilds/cache/blob/main/.github/workflows/workflow.yml#L109) for an example. 
- `zstd`: the downloaded cache is uncompressed using `zstd`. - `WARPBUILD_RUNNER_VERIFICATION_TOKEN`: This environment variable is always present in WarpBuild runners and is used to authenticate the action with the WarpBuild service. To use WarpCache inside a container, pass the `WARPBUILD_RUNNER_VERIFICATION_TOKEN` environment variable to the container as shown below. ```yaml test-proxy-save: runs-on: warp-ubuntu-latest-x64-16x container: image: ubuntu:latest env: WARPBUILD_RUNNER_VERIFICATION_TOKEN: ${{ env.WARPBUILD_RUNNER_VERIFICATION_TOKEN }} ``` ### Known practices and workarounds There are a number of community practices/workarounds to fulfill specific requirements. You may choose to use them if they suit your use case. Note these are not necessarily the only solution or even a recommended solution. - [Cache segment restore timeout](https://github.com/WarpBuilds/cache/blob/main/tips-and-workarounds.md#cache-segment-restore-timeout) - [Update a cache](https://github.com/WarpBuilds/cache/blob/main/tips-and-workarounds.md#update-a-cache) - [Use cache across feature branches](https://github.com/WarpBuilds/cache/blob/main/tips-and-workarounds.md#use-cache-across-feature-branches) - [Cross OS cache](https://github.com/WarpBuilds/cache/blob/main/tips-and-workarounds.md#cross-os-cache) - [Cross Arch cache](https://github.com/WarpBuilds/cache/blob/main/tips-and-workarounds.md#cross-arch-cache) - [Deletion of Caches](https://github.com/WarpBuilds/cache/blob/main/tips-and-workarounds.md#deletion-of-caches) ### Cache Version Cache version is a hash [generated](https://github.com/actions/toolkit/blob/500d0b42fee2552ae9eeb5933091fe2fbf14e72d/packages/cache/src/internal/cacheHttpClient.ts#L73-L90) for a combination of compression tool used (Gzip, Zstd, etc. based on the runner OS) and the `path` of directories being cached. If two caches have different versions, they are identified as unique caches while matching. 
This, for example, means that a cache created on a `warp-macos-14-arm64-6x` runner can't be restored on `warp-ubuntu-latest-x64-4x`, as the cache `versions` are different. ### Caching Strategies With the introduction of the `restore` and `save` actions, many caching use cases can now be achieved. Please see the [caching strategies](https://github.com/WarpBuilds/cache/blob/main/caching-strategies.md) document to understand how you can use the actions strategically to achieve the desired goal. ### Skipping steps based on cache-hit Using the `cache-hit` output, subsequent steps (such as install or build) can be skipped when a cache hit occurs on the key. It is recommended to install missing/updated dependencies in case of a partial key match when the key is dependent on the `hash` of the package file. Example: ```yaml steps: - uses: actions/checkout@v3 - uses: WarpBuilds/cache@v1 id: cache with: path: path/to/dependencies key: ${{ runner.os }}-${{ hashFiles('**/lockfiles') }} - name: Install Dependencies if: steps.cache.outputs.cache-hit != 'true' run: /install.sh ``` > **Note** The `id` defined in `WarpBuilds/cache` must match the `id` in the `if` statement (i.e. `steps.[ID].outputs.cache-hit`) ## Troubleshooting ### Errors in Cache restore To troubleshoot cache restore issues, rerun your workflow with [debug logging enabled](https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/enabling-debug-logging). Check for any warnings or errors in the restore step that match those described below. > Note: Using versions older than v1.4.5 might cause issues with cache saves and restores for some customers. ### `zstd version: null` followed by a 404 warning Each cache entry has a unique version associated with it ([See: Cache Version](#cache-version)), which is matched during the restore process. The `zstd version: null` warning indicates that the default compression tool, `zstd`, is not available in the current environment.
Consequently, the action falls back to using gzip compression. (This warning can be ignored if the cache was originally saved using gzip.) Since all WarpBuild runners have `zstd` available, this warning typically occurs when running the action inside a container that lacks `zstd`. To resolve this issue, ensure that the compression tools available in the current environment match those used when the cache was saved. If the cache was saved in a container with `zstd` and you attempt to restore it in a container without `zstd`, the restore will fail. ### Failed to commit cache (Docker) This error usually occurs when the Docker layers are large and the runner size is small. For example, if you are using the `2x` runner and some of the Docker layers are larger than 5GB, you will likely encounter this error. To resolve this issue, please upgrade to a larger runner size. ## Limitations - WarpBuild caching is not supported on Windows-based runners. 1. WarpBuild cache is compatible with the `actions/cache@v4` action. 2. Using versions older than v1.4.5 might cause issues with cache saves and restores for some customers. ## Expiry Caches expire 7 days after last use. A cache can be manually deleted at any time from the action or using the console.
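Manual deletion from within a workflow can use the `delete-cache` input documented earlier (the path and key are placeholders):

```yaml
- name: Delete stale cache entry
  uses: WarpBuilds/cache@v1
  with:
    path: path/to/dependencies
    key: ${{ runner.os }}-deps-old
    delete-cache: true  # skips restore and save, deletes the matching entry
```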
## Pricing | Metric | Hosted | BYOC | | ------------------------ | --------------------- | ---- | | Storage | $0.20 per GB-month | Free | | Cache write/restore/list | $0.0001 per operation | Free | --- # Common Issues URL: https://www.warpbuild.com/docs/ci/common-issues Description: Common issues and troubleshooting for WarpBuild runners --- title: "Common Issues" excerpt: "Common issues and troubleshooting for WarpBuild runners" description: "Common issues and troubleshooting for WarpBuild runners" icon: Bug createdAt: "2026-01-06" updatedAt: "2026-01-06" --- ## Overview Refer to this page if your runners aren't picking up jobs after onboarding. ## Bot permissions Check if the WarpBuild bot has access to the repo. Navigate to [WarpBuild Account](https://app.warpbuild.com/settings/account) > click 'Configure Runners' for the GitHub connection, and check the list of repositories. ## Runner group checks ### Default runner group 1. If you are using a public repo, please validate that you have enabled WarpBuild on your [public repos](/docs/ci/public-repos#enable-warpbuild-runners-in-public-repositories). ### Non-default runner group You can verify which runner group you are using by going to [WarpBuild](https://app.warpbuild.com/ci) > Runners > Default GitHub Runner Group. ![Github Runner Group](./img/common-issues/runner-group.png) 1. Check if this runner group has access to the repo. Navigate to [Github](https://github.com) > Organization Settings > Actions > Runner Groups > click on the runner group selected on WarpBuild. ![Runner Group Setting](./img/common-issues/runner-group-gh.png) 2. Check if there are workflow restrictions. GitHub supports restricting workflows based on workflow path, SHA, branch, and tags. For example, `monalisa/octocat/.github/workflows/cd.yaml@refs/heads/main` only picks up jobs from cd.yaml on the main branch.
--- # Feature Matrix URL: https://www.warpbuild.com/docs/ci/features Description: Matrix of all features supported by WarpBuild --- title: "Feature Matrix" excerpt: "Features - WarpBuild" description: "Matrix of all features supported by WarpBuild" createdAt: "2024-10-04" updatedAt: "2024-11-06" icon: Grid3x3 --- The complete list of features supported by WarpBuild. ## Feature Matrix | Feature | WarpBuild Cloud | BYOC: AWS | BYOC: GCP | BYOC: Azure | | ---------------------------------- | ------------------------- | ------------ | ------------ | ------------ | | Linux runners: x86-64 | 22.04, 24.04 | 22.04, 24.04 | 22.04, 24.04 | 22.04, 24.04 | | Linux runners: arm64 | 24.04 | 24.04 | 24.04 | 24.04 | | MacOS runners: arm64 (M4 Pro) | macos13, macos14, macos15 | - | - | - | | Windows runners: x86-64 | ✅ | ✅ | ⏳ | ✅ | | Static IPs | ❌ | ✅ | ✅ | ✅ | | Standby disks (fast boot) | ✅ | ✅ | ✅ | ✅ | | Custom VM images | - | ✅ | ✅ | ✅ | | GPU support | ⏳ | ⏳ | ⏳ | ⏳ | | Unlimited cache | ✅ | ✅ | ✅ | ✅ | | Container layer caching | ✅ | ✅ | ✅ | ✅ | | Spot instances | ❌ | ✅ | ✅ | ✅ | | Snapshot runners | ✅ | ❌ | ❌ | ❌ | | Create BYOC stack resources | - | ✅ | ✅ | ✅ | | Import BYOC stack resources | - | ✅ | ❌ | ❌ | | Custom resource tags | - | ✅ | ⏳ | ⏳ | | Custom service account (IAM) roles | - | ✅ | ✅ | ⏳ | | Local SSD support | ✅ | ⏳ | ⏳ | ⏳ | | Resource utilization metrics and logs | ✅ | ✅ | ✅ | ✅ | ## Feature requests Contact us at [support@warpbuild.com](mailto:support@warpbuild.com) with any feature requests or questions. We are always looking to improve WarpBuild and make it more useful for our users. 
---

# MCP Support

URL: https://www.warpbuild.com/docs/ci/mcp

Description: Model Context Protocol (MCP) integration with WarpBuild CI

---
title: "MCP Support"
excerpt: "MCP support for WarpBuild CI"
description: "Model Context Protocol (MCP) integration with WarpBuild CI"
icon: Bot
createdAt: "2026-01-20"
updatedAt: "2026-01-20"
---

## Overview

MCP can be used to interact with the WarpBuild API to create runners, images, etc. Follow this guide to connect your MCP host (Cursor, Antigravity, etc.) to the WarpBuild MCP server.

## Step 1: Generate API Key

1. Navigate to the [WarpBuild Dashboard](https://app.warpbuild.com/settings/api-keys) to create an API key.
2. Set a name for your API key and check the CI scope.
3. Click Generate API Key. A dialog with the generated key opens, as shown below. Copy the API key.

![API Key Dialog](./img/mcp/api-key.png)

## Step 2: Configure MCP Server

Use the following MCP server URL and plug in your API key:

**MCP Server URL:** `https://mcp.warpbuild.com/mcp`

Configure your MCP client with:

```json
{
  "mcpServers": {
    "warpbuild": {
      "url": "https://mcp.warpbuild.com/mcp",
      "headers": {
        "Authorization": "Bearer "
      }
    }
  }
}
```

## Example - Using in Cursor

1. Navigate to Cursor.
2. Open the command palette using `CMD + Shift + P` (or `Ctrl+Shift+P` on Windows/Linux). Search for 'mcp settings' and select 'View: Open MCP Settings'.

![Command Palette](./img/mcp/cmd-palette.png)

3. Click 'New MCP Server' at the bottom of the MCP settings page.
4. This opens a JSON file for the MCP configuration. Add the following content to this file. If the file already contains MCP configuration, add only the `warpbuild` section from the JSON below.

```json
{
  "mcpServers": {
    "warpbuild": {
      "url": "https://mcp.warpbuild.com/mcp",
      "headers": {
        "Authorization": "Bearer "
      }
    }
  }
}
```

5. Verify that MCP is working. An example interaction is shown in the screenshot below.
![MCP Interaction Example](./img/mcp/mcp-example-full-editor.png) --- # Observability URL: https://www.warpbuild.com/docs/ci/observability Description: GitHub Actions runner observability and monitoring information --- title: "Observability" excerpt: "GitHub Actions runner observability and monitoring" description: "GitHub Actions runner observability and monitoring information" icon: Activity createdAt: "2025-09-02" updatedAt: "2026-01-31" --- WarpBuild collects telemetry data to display metrics and logs for your runners. This page provides information about the observability collection process and port usage. The [Observability page](https://app.warpbuild.com/ci/observability) in WarpBuild is organized into two sections: - **Recommendations**: View runner instance right-sizing recommendations based on resource utilization patterns. - **Usage**: View detailed metrics and logs for individual runner instances. ## Recommendations The Recommendations view helps you optimize your runner configurations by displaying hierarchical performance metrics and identifying resource bottlenecks. ### Hierarchical Metrics Performance metrics are aggregated and displayed in a hierarchical structure: 1. **Repository**: Top-level grouping of all workflows in a repository 2. **Workflow**: Individual workflow files within a repository 3. **Job**: Specific jobs defined in each workflow 4. **Instance Type**: Instance type used for each job This hierarchy allows you to drill down from high-level repository metrics to specific job and instance type performance data. 
### Performance Summaries

For each runner instance, the following performance metrics are displayed:

- **CPU Utilization**: Maximum 30-second rolling-average CPU usage percentage
- **Memory Utilization**: Maximum memory usage percentage
- **Filesystem Utilization**: Maximum storage usage percentage
- **Disk I/O**: Maximum 30-second rolling average of read+write disk throughput
- **Network Utilization**: Maximum 30-second rolling average of read+write network throughput

### Resource Threshold Alerts

The Recommendations view can filter and highlight instances that violate predefined resource thresholds. This helps you quickly identify runners that are under-provisioned (those with consistently high resource utilization). Recommendations for runners that are over-provisioned (those with low resource utilization) are coming soon!

Thresholds for the under-provisioned recommendation:

| Metric | Threshold | Alert Label |
|--------|-----------|-------------|
| Max Sustained CPU | >= 80% | High CPU Usage |
| Max Memory Utilization | >= 80% | High Memory Usage |
| Max Filesystem Utilization | >= 80% | High Filesystem Usage |
| Max Disk I/O | >= 80% of supported throughput | High Disk IO |

Use these insights to right-size your runner instance configurations for optimal cost and performance.

## Usage

The Usage view provides detailed observability data for individual runner instances, including metrics and logs.

## Observability Collection

WarpBuild agents running on your runners collect the following observability data:

1. CPU, memory, filesystem, and network utilization metrics. These help with understanding resource bottlenecks on the runner.
2. System logs capturing WarpBuild and other service behaviors, useful for debugging runner issues.
3. GitHub Actions logs to help correlate workflow execution with system metrics and logs in the UI.
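The "maximum rolling average" figures above can be pictured with a short sketch. This is illustrative only — it assumes one sample per second and a 30-sample window, and is not WarpBuild's actual telemetry code:

```python
from collections import deque

def max_rolling_average(samples, window=30):
    """Return the maximum mean over any `window` consecutive samples."""
    if len(samples) < window:
        window = len(samples)  # fall back to whatever data exists
    buf = deque(maxlen=window)
    best = 0.0
    for s in samples:
        buf.append(s)
        if len(buf) == window:
            best = max(best, sum(buf) / window)
    return best

# A 10-second CPU burst to 100% inside an otherwise idle trace:
# the best 30s window holds ten 100s and twenty 5s -> 1100/30.
cpu = [5.0] * 60 + [100.0] * 10 + [5.0] * 60
print(round(max_rolling_average(cpu, window=30), 1))  # prints 36.7
```

A short burst therefore registers as a much lower sustained figure than its instantaneous peak, which is why the thresholds are phrased as "max sustained" values.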
The collected logs and metrics can be viewed on the WarpBuild UI's [Observability page](https://app.warpbuild.com/ci/observability).

![Observability Page](./img/observability.page.png)

### Pausing Observability

Observability data collection can be paused. When paused, no telemetry data is collected from your runners, including system logs and GitHub Actions logs.

For pooled instances, you may see observability entries appear in the UI before a job is allocated to the instance. This is expected even when data collection is paused: the telemetry agent initializes when the instance boots, but no data is actually collected until a job is allocated.

![Observability Metrics](./img/observability.metrics.png)

The chart displays information about resource utilization on your runner instance.

![Observability Logs](./img/observability.logs.png)

The logs view shows both system logs (syslogs) from your runner and GitHub Actions logs, making it easier to correlate workflow execution with system behavior.

Observability only collects metrics and logs for jobs that run longer than ~1 minute.

## Port Usage

WarpBuild observability uses port `33931` and OpenTelemetry for data collection and communication with the WarpBuild platform.

## Data Privacy

WarpBuild observability collection follows our privacy and security policies:

- **No sensitive data** is collected through telemetry. We only collect syslogs and runner utilization metrics such as CPU, memory, filesystem, and network.
- All observability data is **encrypted in transit**.
- **No data is used for training** or any other purpose beyond providing observability insights for your runners.

If you have any queries regarding observability, please reach out at support@warpbuild.com.

---

# Preinstalled Software

URL: https://www.warpbuild.com/docs/ci/preinstalled-software

Description: WarpBuild runners are 100% compatible with GitHub-hosted runners and have the same tooling installed.
--- title: "Preinstalled Software" excerpt: "WarpBuild runners are 100% compatible with GitHub-hosted runners and have the same tooling installed." description: "WarpBuild runners are 100% compatible with GitHub-hosted runners and have the same tooling installed." createdAt: "2024-01-19" updatedAt: "2025-12-10" --- ## Ubuntu 22.04 with x86-64 The tooling installed on the Ubuntu 22.04 runner is the same as the GitHub-hosted runner. You can find the list of preinstalled software [here](https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2204-Readme.md). ## Ubuntu 24.04 with x86-64 The tooling installed on the Ubuntu 24.04 runner is the same as the GitHub-hosted runner. You can find the list of preinstalled software [here](https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2404-Readme.md). ## Ubuntu 22.04 with ARM64 WarpBuild provisioned ARM64 runners are based on the upstream `ubuntu:22.04` image. GitHub introduced ARM64 runners in mid-2024. There are two significant differences between GitHub's ARM64 runners and WarpBuild's ARM64 runners: 1. The default user is `root` instead of `runner`. 1. The tooling installed is minimal. You will need to install the required tooling using the `apt` package manager before using it in your workflows. ## Ubuntu 24.04 with ARM64 WarpBuild provisioned ARM64 runners are fully compatible with GitHub's ARM64 runners. You can find the list of preinstalled software [here](https://github.com/actions/partner-runner-images/blob/main/images/arm-ubuntu-24-image.md). ## MacOS 13 with ARM64 The tooling installed on the MacOS runner is the same as the GitHub-hosted M1 runner. You can find the list of preinstalled software [here](https://github.com/actions/runner-images/blob/main/images/macos/macos-13-arm64-Readme.md). ## MacOS 14 with ARM64 The tooling installed on the MacOS runner is the same as the GitHub-hosted M1 runner. 
You can find the list of preinstalled software [here](https://github.com/actions/runner-images/blob/main/images/macos/macos-14-arm64-Readme.md). ## MacOS 15 with ARM64 The tooling installed on the MacOS runner is the same as the GitHub-hosted M1 runner. You can find the list of preinstalled software [here](https://github.com/actions/runner-images/blob/main/images/macos/macos-15-arm64-Readme.md). ## MacOS 26 with ARM64 The tooling installed on the MacOS runner is the same as the GitHub-hosted M1 runner. You can find the list of preinstalled software [here](https://github.com/actions/runner-images/blob/main/images/macos/macos-26-arm64-Readme.md). ## Windows Server 2022 x86-64 The tooling installed on the Windows Server 2022 runners is the same as the GitHub-hosted equivalents. You can find the list of preinstalled software [here](https://github.com/actions/runner-images/blob/main/images/windows/Windows2022-Readme.md). ## Customizations While the tooling installed on the runners is the same as the GitHub-hosted runners, we have made some customizations to the runner configurations to improve the performance and reliability of the runners. Notably, these customizations include caching of container images for faster access. --- # Public GitHub Repos URL: https://www.warpbuild.com/docs/ci/public-repos Description: Public GitHub Repos Configuration for WarpBuild --- title: "Public GitHub Repos" excerpt: "Public GitHub Repos Configuration for WarpBuild" description: "Public GitHub Repos Configuration for WarpBuild" createdAt: "2023-12-11" updatedAt: "2023-12-11" --- WarpBuild works by registering itself as a self-hosted runner in the `Default` runner group (id `1`) for your GitHub Organization. However, GitHub disables the ability to use self-hosted runners, including managed ones such as WarpBuild, in public repositories by default. 
## Enable WarpBuild runners in public repositories

Here are the steps to enable access to WarpBuild runners in public repositories in your organization:

1. Go to your GitHub Organization default runner settings page here: https://github.com/organizations/[YOUR_ORG]/settings/actions/runner-groups/1
1. Check the box for `Allow public repositories`

### GitHub Enterprise

GitHub Enterprise supports the creation of multiple runner groups. The WarpBuild runners are added to the `Default` runner group (id `1`).

![Enable WarpBuild on public repos](/static/img/public-repos/default-runner-group.png)

## Security

WarpBuild runners run the same tools and versions as GitHub-hosted runners and provide the same safety as GitHub-hosted runners.

The [GitHub docs](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners#self-hosted-runner-security) recommend disabling self-hosted runners on public repositories: PRs from public contributors could include malicious content that compromises the integrity of the infrastructure (e.g. AWS/GCP/Azure accounts) when the right security policies are not set. This can happen easily when using self-hosted runners on Kubernetes via `actions-runner-controller` ([ARC](https://github.com/actions/actions-runner-controller)), for instance, which runs workflows in containers that cannot provide secure isolation guarantees.

WarpBuild runners are secure by design. Workflows using WarpBuild runners run inside isolated VMs with strong isolation guarantees. This makes it completely safe to use WarpBuild runners for public repos.
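If you manage organization settings through automation rather than the UI, GitHub's runner-groups REST API exposes the same toggle as the `Allow public repositories` checkbox. A `PATCH /orgs/{org}/actions/runner-groups/1` request body would look roughly like this — field name per GitHub's runner-groups API:

```json
{
  "allows_public_repositories": true
}
```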
---

# Quick Start

URL: https://www.warpbuild.com/docs/ci/quick-start

Description: Get started with your first WarpBuild workflow

---
title: "Quick Start"
excerpt: "Quickstart - WarpBuild"
description: "Get started with your first WarpBuild workflow"
icon: Album
createdAt: "2023-11-07"
updatedAt: "2025-05-02"
---

WarpBuild provides blazing fast GitHub runners for your workflows. Here's how you can get started with WarpBuild in 60 seconds.

### Sign up for a WarpBuild account

Sign up for a WarpBuild account at https://app.warpbuild.com/.

![WarpBuild Account Creation](/static/img/quickstart/warpbuild-signup.png)

### Install WarpBuild Bot

> **Note:** The WarpBuild GitHub bot cannot be installed directly from the GitHub marketplace. You must sign up for a WarpBuild account first and then install the bot through the WarpBuild dashboard.

After signup, you will be redirected to the GitHub bot installation. Give WarpBuild access to the repositories in which you want to use our runners.

![WarpBuild Bot Installation](/static/img/quickstart/github-bot.png)

### Modify the workflow to use WarpBuild runners

To use WarpBuild runners in your workflows, change the `runs-on` property in the GitHub workflow file to a `Runner ID`. You can get the `Runner ID` from the Warp UI. Alternatively, just select the repositories in which you want to use Warp runners and click the `Select workflows to Warp` button.

![Workflow File Syntax](/static/img/quickstart/github-workflow.png)

Multiple runner configurations are available for different use cases. You can find more details about the runner configurations [here](/docs/ci/cloud-runners).

![WarpBuild Runner Page](/static/img/quickstart/warpbuild-runners.png)

> To set up and use WarpBuild managed runners in your own cloud infrastructure, refer to the [BYOC Setup Guide](/docs/ci/byoc/).

### Go Warp ⚡️

Elevate your engineering efficiency to the next level - start using faster GitHub actions runners for your workflows and gain insights into your CI/CD.
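Putting the steps above together, switching a workflow to WarpBuild is a one-line change. The runner tag below is illustrative — use any `Runner ID` from your dashboard:

```yaml
name: build
on: push

jobs:
  test:
    # was: runs-on: ubuntu-latest
    runs-on: warp-ubuntu-latest-x64-4x
    steps:
      - uses: actions/checkout@v4
      - run: make test
```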
If you have any questions, reach out to us at [support@warpbuild.com](mailto:support@warpbuild.com). --- # Security URL: https://www.warpbuild.com/docs/ci/security Description: Ensuring secure runners - WarpBuild --- title: "Security" excerpt: "Ensuring secure runners - WarpBuild" description: "Ensuring secure runners - WarpBuild" icon: Lock createdAt: "2023-11-07" updatedAt: "2023-11-07" --- We take security very seriously at WarpBuild. Here are some of the measures we take to ensure that your builds, runners, and build environments are secure. ## Compliance ### SOC2 Type 2 WarpBuild is SOC2 Type 2 certified with 3 Trust Services Criteria: Security, Availability, and Confidentiality. The controls required for SOC2 compliance are implemented. We are happy to share our security documentation and work with you to ensure that we meet your compliance needs. Please request the documentation in our [trust center](https://compliance.warpbuild.com) or email us at support@warpbuild.com to discuss your requirements. ## Security ### Compute isolation Each runner runs in its own virtual machine. The VMs are created on demand and destroyed after each build. This ensures that your builds are isolated from other builds and that no data is left behind. The VMs are ephemeral, not reused, and isolated for maximum performance and security. ### Storage protection Each runner has its own encrypted storage volume that is created on demand and destroyed after each build. When caching is enabled to speed up your builds, the cache is encrypted and stored securely in a location that is only accessible to your runner. ### Secrets WarpBuild does not access or store any build secrets. Secrets are stored in your source code repository and are only accessible to your runner environment. --- # What is WarpBuild? URL: https://www.warpbuild.com/docs/ci/what-is-warpbuild Description: WarpBuild - Fast, secure runners for GitHub Actions --- title: "What is WarpBuild?" 
excerpt: "Introducing WarpBuild"
description: "WarpBuild - Fast, secure runners for GitHub Actions"
icon: Flame
createdAt: "2023-11-07"
updatedAt: "2023-12-04"
---

WarpBuild provides blazing fast, secure runners for GitHub Actions. WarpBuild uses machines with super fast single-core performance and attached NVMe disks to enable fast builds. This is coupled with ephemeral VMs for security and isolation.

The runners are designed to be fully compatible with GitHub Actions and can be used as a drop-in replacement for GitHub-hosted runners. The same packages are pre-installed on the runners for a seamless experience. Your existing GitHub Actions workflows will run without any changes.

Provisioning fast runners is the first step on our mission to make build engineering better through a rich ecosystem of tools, runners, and dashboards for visibility.

### Supported runners

1. Ubuntu x86-64 runners - 2x, 4x, 8x, 16x, 32x variants
1. Ubuntu ARM64 runners - 2x, 4x, 8x, 16x, 32x variants
1. MacOS ARM64 runners - powered by M4 Pro processors with 6vCPU and 14GB RAM
1. Windows x86-64 runners - 4x, 8x, 16x, 32x variants

## Features

1. 30% faster than GitHub Actions, 10x cheaper
1. WarpBuild BYOC - Spin up runners in your own VPC, on your own AWS or GCP account.
1. Customize runners with your own machine images and VM types
1. Unlimited concurrency to eliminate job queueing delays
1. Unlimited, blazing fast caching
1. Secure VM-level isolation for your workloads
1. Easy debugging - SSH into running GitHub Actions workflows

## Use cases

1. Plain ol' GitHub Actions, but faster, cheaper, and awesomer
1. Easy debugging: SSH into a running workflow using Action-Debugger
1. Get unlimited cache size without having to self-host runners because of the 10GB cache limitation from GitHub
1. Running Android emulators in CI
1. Spinning up Kubernetes clusters in CI
1. Run as a VM, not as a container, for workloads that don't work in `dind` or `kind` environments

## Tools

1. Action-debugger: SSH into running GitHub Actions workflows for easy debugging

## Compliance

WarpBuild is SOC2 Type 2 compliant. Please request the documentation in our [trust center](https://compliance.warpbuild.com) or email us at support@warpbuild.com to discuss your requirements.

---

# Automation

URL: https://www.warpbuild.com/docs/ci/api-keys/automation

Description: Automate runner and runner images creation

---
title: "Automation"
excerpt: "Automation"
description: "Automate runner and runner images creation"
hidden: false
sidebar_position: 3
slug: "/api-keys/automation"
createdAt: "2024-07-23"
updatedAt: "2024-07-23"
---

# AWS BYOC API Documentation

This page contains API call documentation to automate runner image and runner operations. This can be used to automate custom image builds, run a test suite on the new images, etc.

## Notes

- It is recommended to remove unused custom runners. Job pick-up times can increase if many runners are present.

## Setup

#### Create a new API key

Go to the [API keys page](https://app.warpbuild.com/settings/api-keys) and create a new API key. Grant the `ci` and `cache` scopes to the API key.

## Usage

### Stacks

#### Get all stacks

```bash
curl -X GET 'https://api.warpbuild.com/api/v1/stacks?kind=ec2' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer wkey-xxxx'
```

### Runner Images

#### Get all runner images

The `alias` parameter is optional. It is recommended to use the other search parameters (type, page, per_page) to filter the results.

```bash
curl -X GET 'https://api.warpbuild.com/api/v1/runner-images?alias=&type=byoc_ami&page=1&per_page=10' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer wkey-xxxx'
```

#### Create a new BYOC runner image

You can fetch the `stack_id` from the `GET /api/v1/stacks` endpoint.
```bash
curl -X POST 'https://api.warpbuild.com/api/v1/runner-images' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer wkey-xxxx' \
  -d @create-runner-image.json
```

```json
{
  "alias": "test1",
  "os": "linux",
  "arch": "x64",
  "stack_id": "wxxxxxx",
  "type": "byoc_ami",
  "warpbuild_image": {
    "image_uri": "ami-xxxxxx",
    "cloud_init_template": ""
  },
  "settings": {
    "purge_image_versions_offset": 1
  },
  "byoc_ami": {
    "ami_id": "ami-xxxxxx",
    "root_device_name": "/dev/sda1"
  }
}
```

#### Update a BYOC runner image

You can fetch the `id` from:

- the `GET /api/v1/runner-images` endpoint with the `alias` parameter, or
- the response of the `POST /api/v1/runner-images` endpoint when creating a new runner image.

```bash
curl -X PUT 'https://api.warpbuild.com/api/v1/runner-images/wxxxxxx' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer wkey-xxxx' \
  -d @update-runner-image.json
```

```json
{
  "byoc_ami": {
    "ami_id": "ami-new-xxxxxx",
    "root_device_name": "/dev/sda1"
  },
  "id": "wxxxxxx",
  "settings": {
    "purge_image_versions_offset": 1
  }
}
```

### Runners

#### List runners

Returns the list of all available runners in the organization. This API call is not paginated.

**Params**

- `only_custom_runners`: Boolean flag. Enable to list only custom runners. Accepted values - `true`, `false`.
- `image`: Image ID as a filter. You can use the image id that is generated by the runner images API above. Example: `wjwnpdjox8xeqsza`.

**Call**

```bash
curl -X GET 'https://api.warpbuild.com/api/v1/runners?only_custom_runners=&image=' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer wkey-xxxx'
```

The output is an array of objects. The object structure follows the structure defined in create runner.

#### Get a runner

Returns a single runner based on its id.

**Params**

- `runner-id`: The id for the runner.
**Call**

```bash
curl -X GET 'https://api.warpbuild.com/api/v1/runners/' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer wkey-xxxx'
```

The output structure is the same as that of the create endpoint.

#### Create a runner

**Params**

- `name`: Name of the runner; it must contain only lowercase letters and hyphens.
- `provider_id`: Stack id of the runner.
- `configuration.image`: The image id from the runner images API.
- `configuration.byoc_sku.role_arn`: The instance role of the runner.
- `configuration.byoc_sku.instance_types`: List of instance/machine types to be used when launching the runner. Make sure you provide a valid list here. For a list of valid instance types, it is recommended to use the UI.

Additional params are captured below.

```json filename="create-runner.json"
{
  "name": "warpdev-custom-example-2",
  "provider_id": "wrslzvbl322yttsc",
  "pool_size": 2,
  "configuration": {
    "capacity_type": "ondemand",
    "image": "wjwnpdjox8xeqsza",
    "sku": "",
    "storage": {
      "disk_type": "",
      "tier": "custom",
      "performance_tier": "",
      "size": 150,
      "throughput": 400,
      "iops": 6000
    },
    "byoc_sku": {
      "instance_types": ["c3.large", "c1.xlarge"],
      "is_public": true,
      "arch": "x64",
      "role_arn": "",
      "network_tier": "STANDARD"
    }
  }
}
```

**Call**

```bash
curl -X POST 'https://api.warpbuild.com/api/v1/runners' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer wkey-xxxx' \
  -H 'Content-Type: application/json' \
  -d @create-runner.json
```

```json title="Output"
{
  "id": "",
  "created_at": "2025-09-25T08:38:01.495654Z",
  "updated_at": "2025-09-25T08:38:01.495654Z",
  "name": "warpdev-custom-example-2",
  "vcs_integration_id": "",
  "configuration": {
    "sku": "",
    "byoc_sku": {
      "role_arn": "",
      "arch": "x64",
      "is_public": true,
      "instance_types": [
        "c3.large",
        "c1.xlarge"
      ],
      "network_tier": ""
    },
    "storage": {
      "tier": "custom",
      "size": 150,
      "iops": 5000,
      "throughput": 400,
      "disk_type": "",
      "performance_tier": ""
    },
    "image": "wjwnpdjox8xeqsza",
    "capacity_type": "ondemand"
  },
  "stock_runner_id": null,
  "organization_id": "",
  "labels": [
    "warpdev-custom-example-2"
  ],
  "active": true,
  "provider_id": "wrslzvbl322yttsc",
  "meta": {
    "supports_snapshot": false
  }
}
```

#### Update a runner

Update an existing runner.

**Params**

```json
{
  "pool_size": 2,
  "labels": [],
  "configuration": {
    "image": "wjwnpdjox8xeqsza",
    "capacity_type": "ondemand",
    "sku": "",
    "storage": {
      "disk_type": "",
      "tier": "custom",
      "performance_tier": "",
      "size": 150,
      "throughput": 400,
      "iops": 6000
    },
    "byoc_sku": {
      "instance_types": ["c3.large", "c1.xlarge"],
      "is_public": true,
      "arch": "x64",
      "network_tier": ""
    }
  }
}
```

**Call**

```bash
curl -X PATCH 'https://api.warpbuild.com/api/v1/runners/' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer wkey-xxxx' \
  -H 'Content-Type: application/json' \
  -d @update-runner.json
```

The output structure is the same as that of the create endpoint.

#### Delete a runner

Delete an existing runner. This action is irreversible.

**Params**

- `runner-id`: The id for the runner to delete.

**Call**

```bash
curl -X DELETE 'https://api.warpbuild.com/api/v1/runners/' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer wkey-xxxx'
```

The output structure is the same as that of the create endpoint.

---

# API Keys

URL: https://www.warpbuild.com/docs/ci/api-keys

Description: Manage your API keys in WarpBuild

---
title: "API Keys"
excerpt: "Manage your API keys in WarpBuild"
description: "Manage your API keys in WarpBuild"
createdAt: "2025-03-02"
updatedAt: "2025-03-02"
icon: Key
---

API keys can be used to interact with the WarpBuild API programmatically.

### Creating an API Key

Navigate to the [API Keys page](https://app.warpbuild.com/dashboard/settings/api-keys) and click the `Generate API Key` button.

#### Scopes

API Keys can be scoped to combinations of WarpBuild products.
Currently, the following scopes are available: - CI - Cache - Helios ![Create an API key](./img/api-keys-create.png) The API key will be displayed only once, so make sure to save it in a secure location. ![API key created](./img/api-keys-generated.png) #### Editing an API Key You can edit the name and scope of an API key from the [API Keys page](https://app.warpbuild.com/dashboard/settings/api-keys). ![Edit an API key](./img/api-keys-edit.png) ### Using an API Key You can use the generated API key to connect to WarpBuild endpoints. Just include the API key in the `Authorization` header of your request. ```bash curl -X GET "https://api.warpbuild.com/api/v1/builder-profiles?page=1&per_page=20&name=test" \ -H "Authorization: Bearer " ``` --- # Custom VM Images URL: https://www.warpbuild.com/docs/ci/byoc/custom-vm-images Description: Add your own custom VM images from your cloud providers to WarpBuild --- title: "Custom VM Images" excerpt: "Add your own custom VM images from your cloud providers to WarpBuild" description: "Add your own custom VM images from your cloud providers to WarpBuild" hidden: false sidebar_position: 4 slug: "/byoc/custom-vm-images" createdAt: "2024-07-23" updatedAt: "2024-07-23" --- WarpBuild allows you to use your own custom VM images in your custom runner configurations while running them on your own cloud. This is useful if you are using a custom VM image that you have built due to a specific need. > 💡 **Note:** The _"images"_ referred to in this section are VM images. This is distinct from the custom container images support. ## VM Image requirements ### AMIs 1. Linux and Windows based AMIs are currently supported. #### Linux 1. The Linux distro the image is based on should be using [`systemd`](https://en.wikipedia.org/wiki/Systemd). WarpBuild relies on it to run its [agent](https://github.com/WarpBuilds/warpbuild-agent/). For example, Ubuntu based AMIs are supported while CentOS/RHEL based AMIs are not. 2. 
`curl` and `wget` should be present in the image.

Here's an example Packer file for a Linux AMI that is supported:

```hcl
variable "aws_region" {
  type    = string
  default = "us-east-1"
}

locals {
  version = "1.0.0"
}

source "amazon-ebs" "my-custom-ci" {
  region        = var.aws_region
  instance_type = "t3.micro"
  ami_name      = "my-custom-ci-v${local.version}"

  tags = {
    Name       = "my-custom-ci-v${local.version}"
    team       = "platform"
    repo       = "platform"
    build_date = "{{timestamp}}"
    version    = local.version
    provider   = "packer"
    pii        = "none"
    product    = "github-actions"
  }

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.*-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    owners      = ["099720109477"] # Amazon Canonical
    most_recent = true
  }

  ssh_username = "ubuntu"

  launch_block_device_mappings {
    device_name           = "/dev/sda1"
    volume_type           = "gp3"
    volume_size           = 8
    delete_on_termination = true
  }
}

build {
  sources = ["source.amazon-ebs.my-custom-ci"]

  provisioner "shell" {
    inline = [
      # Create runner user and group
      "sudo groupadd runner || echo 'Group runner already exists'",
      "sudo useradd -m -g runner -s /bin/bash runner || echo 'User runner already exists'",
      # Configure passwordless sudo for runner user
      "echo 'runner ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/runner",
      "sudo chmod 440 /etc/sudoers.d/runner",
      # Update system packages
      "sudo apt-get update",
      "sudo apt-get upgrade -y",
      # Install essential tools
      "sudo apt-get install -y curl wget unzip git jq",
      # Verify installations
      "curl --version",
      "wget --version",
      "git --version",
      "jq --version",
      # Setup GitHub Actions tool cache directories with proper permissions
      "sudo mkdir -p /opt/hostedtoolcache",
      "sudo mkdir -p /opt/hostedtoolcache/Python",
      "sudo mkdir -p /opt/hostedtoolcache/Node",
      "sudo mkdir -p /opt/hostedtoolcache/go",
      "sudo chown -R runner:runner /opt/hostedtoolcache",
      "sudo chmod -R 755 /opt/hostedtoolcache",
      # Setup additional GitHub Actions directories with runner permissions
      "sudo mkdir -p /home/runner/_work",
      "sudo mkdir -p /home/runner/_tool",
      "sudo mkdir -p /home/runner/_temp",
      "sudo chown -R runner:runner /home/runner/_work /home/runner/_tool /home/runner/_temp",
      "sudo chmod -R 755 /home/runner/_work /home/runner/_tool /home/runner/_temp",
      # Create a startup script to verify tools on instance launch
      "echo '#!/bin/bash\necho \"=== Checking installed tools at startup ===\"\nwhich git\ngit --version\nwhich curl\ncurl --version\nwhich wget\nwget --version\nwhich unzip\nunzip -h | head -n 1\nwhich jq\njq --version' | sudo tee /var/lib/cloud/scripts/per-boot/verify-tools.sh",
      "sudo chmod +x /var/lib/cloud/scripts/per-boot/verify-tools.sh",
    ]
  }
}

## Thanks to Joe Hutchinson for the sample packer file.
```

Here's a sample workflow to build the AMI:

```yaml
name: Build My Custom CI AMI
## Thanks to Joe Hutchinson for the sample workflow.

on:
  workflow_dispatch:
  push:
    branches:
      - main
    paths:
      - "packer/github-actions-ami/**"

permissions:
  contents: read

jobs:
  build-ami:
    runs-on: warp-ubuntu-2404-x64-4x
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Check execution user
        run: |
          echo "Workflow is running as user: $(whoami)"
          echo "User groups: $(groups)"
          echo "Home directory: $HOME"

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::ACCOUNT_ID:role/YOUR_ROLE_NAME
          role-session-name: GitHubAction-BuildCIAMI
          aws-region: us-east-1

      - name: Set up Packer
        uses: hashicorp/setup-packer@v3
        with:
          version: 1

      - name: Install Packer plugins
        run: packer plugins install github.com/hashicorp/amazon

      - name: Validate Packer template
        run: packer validate packer/github-actions-ami/my-custom-ci.pkr.hcl

      - name: Build AMI with Packer
        run: packer build packer/github-actions-ami/my-custom-ci.pkr.hcl
        env:
          AWS_REGION: us-east-1
```

#### Windows

1. `aria2` is used for downloading the necessary artifacts since the default Windows method is slow.
   The image needs to have `aria2` present and accessible through the system `PATH`. Below is a sample script that can be used to install it.

   ```powershell
   New-Item -ItemType Directory -Path 'C:\Tools\aria2' -Force
   # Use https://github.com/aria2/aria2/releases/latest to fetch the latest release and replace the version below
   Invoke-WebRequest -Uri 'https://github.com/aria2/aria2/releases/download/release-1.37.0/aria2-1.37.0-win-64bit-build1.zip' -OutFile 'C:\Tools\aria2\aria2.zip'
   Expand-Archive -Path 'C:\Tools\aria2\aria2.zip' -DestinationPath 'C:\Tools\aria2' -Force
   Remove-Item 'C:\Tools\aria2\aria2.zip' -Force
   Move-Item -Path 'C:\Tools\aria2\aria2-1.37.0-win-64bit-build1\*' -Destination 'C:\Tools\aria2' -Force
   Remove-Item -Path 'C:\Tools\aria2\aria2-1.37.0-win-64bit-build1' -Recurse -Force
   [Environment]::SetEnvironmentVariable('Path', $Env:Path + ';C:\Tools\aria2', 'Machine')
   ```

2. If working with AWS instances, the EC2 instance must be [sysprepped](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ami-create-win-sysprep.html#sysprep-gui-procedure-ec2launchv2) before making a Windows image out of it. See the 'Additional Notes' > 'Windows' > 'Sysprep' section for details on sysprepping.

## Additional Notes

### Windows on AWS

The runners run under the `runneradmin` user, the same user as GitHub's Windows runners. If this user is not present, it will be added. If you have user-scoped environment variables, you might need to change them to the system level or add them to the `runneradmin` user's environment instead.

#### Sysprep (GUI)

To generalize using the GUI:

1. RDP into the instance and open 'Amazon EC2Launch settings'. Press 'Shutdown with sysprep', then press 'Yes' on the dialog box.
2. The above step will disconnect you after some time. This is expected. Go to the instance in the EC2 dashboard and wait for it to reach the 'Stopped' state (this might take a minute).
3. The rest of the steps are mostly the same. Use the console to create the image.
Fill in the details. Unselect the 'reboot' option since the machine is already in a shutdown state. Wait for the AMI to be ready. The image should now be a generalized one.

#### Sysprep (Automation)

Refer to the following [Packer docs](https://developer.hashicorp.com/packer/integrations/hashicorp/amazon/latest/components/builder/ebs#windows-2022-sysprep-commands-for-amazon-windows-amis-only) for sysprep automation. These commands can also be invoked without Packer from a PowerShell session.

### Common issues

1. **Unable to RDP into the instance**

   We don't expose the RDP port (3389) by default when you create a stack. You must add a security policy enabling inbound TCP connections to port 3389 from a.b.c.d/e (replace this with your CIDR block).

2. **Password incorrect when trying to log in as Administrator**

   The Administrator account on Windows has a rotating password for each EC2 instance, which AWS automatically creates and assigns. It is recommended that you create your own user and give it admin permissions. You can use the commands below to create the user and grant it admin rights.

   ```powershell
   Write-Host 'Creating custom user'
   $MACHINE_USER = "customuser"
   # Single quotes prevent PowerShell from interpolating `$er` inside the password
   $MACHINE_PASSWORD = 'CustomU$er!2025'
   $Password = ConvertTo-SecureString $MACHINE_PASSWORD -AsPlainText -Force
   New-LocalUser -Name $MACHINE_USER -Password $Password -FullName "Custom User" -Description "Custom User for CI/CD"
   Add-LocalGroupMember -Group "Administrators" -Member $MACHINE_USER
   Add-LocalGroupMember -Group "Users" -Member $MACHINE_USER
   # Ensure the customuser password never expires
   Set-LocalUser -Name $MACHINE_USER -PasswordNeverExpires $true
   ```

## Add the image

1. Set up a [WarpBuild Stack](/docs/ci/byoc#2-setup-a-warpbuild-stack)
2. Add a VM image using the `Add Image` button on the [custom images](https://app.warpbuild.com/dashboard/custom-images) page. All the images in the region the stack is in will be listed. Select the image you want to use.
![Add VM Image](./img/custom-images/add.png)
![Add VM Image](./img/custom-images/add-1.png)

3. Create a [custom runner](https://docs.warpbuild.com/cloud-runners/custom-runners) using the image.
4. Use the custom runner label in your workflows to run jobs on this VM image.

## Pricing

There is no additional cost for using custom VM images.

---

# BYOC

URL: https://www.warpbuild.com/docs/ci/byoc
Description: Run GitHub Actions runners in your own cloud infrastructure.

---
title: "BYOC"
excerpt: "Run GitHub Actions runners in your own cloud infrastructure."
description: "Run GitHub Actions runners in your own cloud infrastructure."
icon: Cloud
createdAt: "2024-07-23"
updatedAt: "2024-07-23"
---

## Overview

There are three steps to run GitHub Actions workflows in your own cloud infrastructure using WarpBuild:

1. **Connect your cloud account**: Sets up the IAM role with the required permissions.
2. **Setup a WarpBuild Stack**: Configure a region, VPC, and network settings as context for the runners.
3. **Create a custom runner**: Configure the instance types, disk, and IP configuration for the runners.

Cloud provider specific setup guides for each of the steps are:

- [AWS](/docs/ci/byoc/aws/)
- [GCP](/docs/ci/byoc/gcp/)
- [Azure](/docs/ci/byoc/azure/)

[Install the WarpBuild GitHub bot](/docs/ci/quick-start#install-warpbuild-bot) and provide access to the repositories you want to run workflows on, before proceeding with these steps.

## Features

1. All the features available on the Cloud version, and cheaper
1. One click setup
1. Use any region
1. Static IPs
1. [Standby disks](/docs/ci/byoc/standby-disks)
1. Custom VM base images, service roles (IAM, service accounts), and runner configurations

Refer to the [pricing page](https://www.warpbuild.com/pricing) for pricing info.

## Setup

### 1.
Connect your cloud account

A connection to your cloud account creates an IAM role or service account with the permissions required to set up and manage GitHub Actions runners in your cloud account. This IAM role is used to import configurations and create a WarpBuild Stack in a region and VPC.

Go to [BYOC](https://app.warpbuild.com/dashboard/byoc/) in the WarpBuild dashboard and click on the `Connect Cloud Account` button.

![Overview](./img/setup/start.png)

### 2. Setup a WarpBuild Stack

A WarpBuild Stack is a group of infrastructure components in a specific region of your cloud account, required by WarpBuild to run your CI workflows. These components include the VPC, subnets, and object storage buckets.

> 💡 **Tip:** Naming the stack with the product and region info may be helpful for keeping track of resources. Examples: `twitch-use1` and `alexa-use2`.

![Setup Stack](./img/setup/create-stack.png)

The object storage bucket in the selected region is used for cache storage, container image layer cache, logs, and storing other workflow artifacts. The stack name, object storage location, and region cannot be changed after creation.

#### Easy Create Flow: Create all resources in a new VPC

The `easy create` flow creates the required resources and configures them according to best practices. Refer to the cloud provider specific guide for more details on the resources created here: [AWS](/docs/ci/byoc/aws/config), [GCP](/docs/ci/byoc/gcp/config).

#### Import Flow: Create runners in an existing VPC

Importing resources is supported only for AWS. In the `import` flow, all the resources must already exist and be supplied as inputs. Select your cloud account, region, VPC, and network configuration in which you want to create a Stack. The name of the stack is used for reference throughout the dashboard and cannot be changed after stack creation. Refer to the cloud provider specific setup guide for more details on the inputs and best practices [here](/docs/ci/byoc/aws/config).

### 3.
Create a custom runner

You can now create [custom runners](https://app.warpbuild.com/dashboard/runners/custom-runners) that are used in the GitHub Actions workflows. Click on the `Add Custom Runner` button and choose the Stack you just created from the stacks dropdown.

![Create Custom Runner](./img/setup/create-custom-runner.png)

The name of the runner is immutable and is used to reference the runner in GitHub Actions workflows.

```yaml
name: CI
on: [push]
jobs:
  build:
    runs-on: warp-custom-ci-stack-runner
    steps:
      - uses: actions/checkout@v3
      - run: npm run build
```

#### Fallback instances

You can provide multiple instance types within the same runner configuration. This is useful when the capacity for a specific instance type is unavailable in a region, in which case the runner can fall back to a different instance type. For this reason, it is recommended to choose instances with roughly similar performance to ensure consistency (for example, `m7a.xlarge` and `m7i.xlarge` on AWS). This is especially useful for spot instances. The instance type is chosen based on availability and price.

#### Static IPs

Enabling static IPs creates the runners in private subnets behind a NAT gateway. This can incur data transfer costs in multiple ways:

1. Outbound data transfer from the runner instance to the internet at data egress rates of ~$0.45/GB.
2. NAT gateway data processing fees of ~$0.45/GB for inbound and outbound data transfer.
3. Inter-region data transfer costs between the private subnets and the NAT gateway.

This can become very expensive, so runners with static IPs should be used minimally. When static IPs are disabled, the runner instances are created in public subnets and are not behind a NAT gateway. This ensures that no data transfer costs are incurred for ingress, and there is usually minimal egress for CI workloads. Refer to the best practices guide for more information on network setup.
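Fallback selection between the configured instance types happens entirely on WarpBuild's side, so workflows need no change: the job keeps referencing the same runner label regardless of which instance type actually serves it. A minimal sketch (the runner name and instance types reuse the examples above):

```yaml
name: CI
on: [push]
jobs:
  build:
    # The workflow only references the runner label. If the primary
    # instance type (e.g. m7a.xlarge) has no capacity, WarpBuild falls
    # back to another type configured for this runner (e.g. m7i.xlarge)
    # without any workflow change.
    runs-on: warp-custom-ci-stack-runner
    steps:
      - uses: actions/checkout@v3
      - run: npm test
```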
## Updates and deletes

WarpBuild may require changes to cloud connections or Stacks to support new features. These will show up as updates and need to be applied.

**Deleting a Cloud Connection**: A confirmation will be shown along with the list of Stacks that depend on this cloud connection. The cloud connection cannot be deleted until the stacks depending on it are deleted.

**Deleting a Stack**: A confirmation will be shown along with the list of custom runner configurations that depend on this stack. The stack cannot be deleted until the custom runner configurations depending on it are deleted.

---

# Standby Disks

URL: https://www.warpbuild.com/docs/ci/byoc/standby-disks
Description: Create a pool of standby disks to boot up runners in less than 20 seconds

---
title: "Standby Disks"
excerpt: "Create a pool of standby disks to boot up runners in less than 20 seconds"
description: "Create a pool of standby disks to boot up runners in less than 20 seconds"
sidebar_position: 5
slug: "/byoc/standby-disks"
createdAt: "2024-07-23"
updatedAt: "2024-07-23"
---

WarpBuild allows you to create a pool of standby disks for each runner type on BYOC. This enables runners to be booted up in ~15 seconds.

## What are standby disks?

Standby disks are created by booting a VM of the same type as the runner and shutting it down immediately after boot. This initializes the VM, sets up networking, and applies other configuration. The pre-initialization allows the runner to be booted up and the job to start in ~15 seconds.

The instance type of a standby disk is the same as the runner type. If fallback instance types are selected, the standby disk will use one of the configured instance types.

## Configuration

The number of standby disks is configurable per custom runner for BYOC runners.

💡 **Note:** Choose the number of standby disks based on the number of jobs you expect to run concurrently for that runner type.
![Standby Disks](./img/standby-disks/standby-disks.png)

## How it works

When GitHub requests a runner, WarpBuild checks if a standby disk is available. If one is available, it is booted up and used for the job. If one is not available, a new VM is created for allocation to the job. The WarpBuild control plane ensures that the pool of standby disks is maintained, within reconciliation time limits (~1 min). Replenishing the pool puts each new instance into a `shutdown` state rather than terminating it.

💡 **Note:** Spot instances are not supported for standby disks. They may be terminated at any time, so it cannot be guaranteed that a standby instance will be available.

### Cost implication

The VM used to initialize a standby disk is billable for ~90 seconds, after which it is shut down. Once the VM is shut down, the network disk is still billable but the VM is not. Overall, the cost implication of using this feature is minimal.

## Pricing

WarpBuild does not charge for standby disks.

---

# April 2024

URL: https://www.warpbuild.com/docs/ci/changelog/2024-april
Description: List of updates in 2024-April

---
title: "April 2024"
slug: "2024-april"
description: "List of updates in 2024-April"
sidebar_position: -3
createdAt: "2024-04-12"
updatedAt: "2024-04-12"
---

### April 15, 2024

- **Enhancement**: Introducing WarpCache - fast, unlimited cache that is 35% faster for cache retrieval and 75% faster for cache writes. It is a drop-in replacement for `actions/cache@v4`. Just replace that with `WarpBuilds/cache@v1` to get started.

### April 13, 2024

- **Enhancement**: WarpBuild now supports `spot` instances for runners. Spot instances are 25% cheaper than on-demand instances. The spot instances are available in all Linux based runners. Spot instances may be terminated at any time during the execution of the workflow.
- **Enhancement**: Updated the Ubuntu x86-64 runners to be in sync with GitHub Actions runners release [20240407](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20240407.1). The key changes are:
  - Docker Compose v1 is removed from the images

### April 12, 2024

- **Enhancement**: Increased the `arm` runners to have 150GB of SSD storage (up from 64GB). The price remains the same.
- **Enhancement**: The p50 for runner start time has been reduced to under 10s while keeping p99 consistent.

---

# August 2024

URL: https://www.warpbuild.com/docs/ci/changelog/2024-august
Description: List of updates in 2024-August

---
title: "August 2024"
slug: "2024-August"
description: "List of updates in 2024-August"
sidebar_position: -7
createdAt: "2024-08-02"
updatedAt: "2024-08-02"
---

### August 28, 2024

- `Enhancement`: One click setup for BYOC runners.

### August 25, 2024

- `Fix`: Fix for the Docker layer caching restore failures - incorrect digest computation.

### August 24, 2024

- `Fix`: Bug fix where runners did not have the full disk size available to use.

### August 23, 2024

- `Enhancement`: GitHub Actions runner boot times are now 35% faster, with most runners starting in under 36s.

### August 22, 2024

- `Enhancement`: Docker Layer Caching is now supported by WarpBuild Cache. Check out how to enable it in your workflows here: [Docker Layer Caching](/docs/ci/caching).
- `Change`: The runner image has been updated to [Ubuntu 24.04 (20240818)](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20240818.1) and [Ubuntu 22.04 (20240818)](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20240818.1). This includes the deprecation of Docker Compose v1. Workflows using `docker-compose` will now have to use `docker compose` instead. Here's the full [migration guide](https://docs.docker.com/compose/migrate/).

### August 02, 2024

- `Change`: Bring Your Own Cloud (BYOC) pricing is simplified.
Per stack pricing is updated to $500/mo including unlimited caching. More details on the [pricing page](https://www.warpbuild.com/pricing).

---

# December 2024

URL: https://www.warpbuild.com/docs/ci/changelog/2024-december
Description: List of updates in 2024-December

---
title: "December 2024"
slug: "2024-December"
description: "List of updates in 2024-December"
sidebar_position: -11
createdAt: "2024-12-04"
updatedAt: "2024-12-04"
---

### December 06, 2024

- `Enhancement`: macOS [13](https://github.com/actions/runner-images/releases/tag/macos-13-arm64/20241202.469), [14](https://github.com/actions/runner-images/releases/tag/macos-14-arm64/20241202.580) and [15](https://github.com/actions/runner-images/releases/tag/macos-15-arm64/20241202.430) images have been updated.

### December 04, 2024

- `Enhancement`: The images for `ubuntu-2204` for [x86-64](https://github.com/actions/runner-images/releases/tag/ubuntu22/20241201.1) have been updated across all the cloud providers, including AWS, GCP, and Azure.

---

# February 2024

URL: https://www.warpbuild.com/docs/ci/changelog/2024-february
Description: List of updates in 2024-February

---
title: "February 2024"
slug: "2024-february"
description: "List of updates in 2024-February"
sidebar_position: -1
createdAt: "2024-03-04"
updatedAt: "2024-03-04"
---

### Added

- Warp Insights is now available for all users. Insights provides a detailed view of your build and deployment activity, including build times, build success rates, and deployment success rates. The first report showing [CI Health](https://app.warpbuild.com/dashboard/insights/ci/all) is live. A lot of exciting features are coming soon, so stay tuned! ![CI Health Dashboard](./img/changelog/ci-health.png)
- Added a portal to view previous billing statements and update the billing email. This can be accessed from the [billing page](https://app.warpbuild.com/dashboard/settings/billing).
![Manage Billing](./img/changelog/manage-billing.png)

- Added a [new status page](https://warpbuild.instatus.com/) to provide real-time updates on the status of the Warp platform. ![Status Page](./img/changelog/status-page.png)
- Infrastructure updates to improve the stability and performance of the platform. This also improves the utilization of the underlying compute.
- We shipped support for `macos-14` runners, powered by Apple M2 Pro chips.
- Improved disk and network performance for all runners.

### Changed

- Improvements to the table layouts so they are easier to navigate.
- Updated GitHub Actions runner version to `v2.313.0`.

### Fixed

- Fixed an issue with the billing page where the billing and usage info was missing.
- Mitigated some scenarios where builds in progress were getting terminated.
- Enabled retries for endpoints for seamless recovery.

---

# July 2024

URL: https://www.warpbuild.com/docs/ci/changelog/2024-july
Description: List of updates in 2024-July

---
title: "July 2024"
slug: "2024-july"
description: "List of updates in 2024-July"
sidebar_position: -6
createdAt: "2024-07-09"
updatedAt: "2024-07-28"
---

### July 28, 2024

- `Enhancement`: Bring Your Own Cloud (BYOC) is now generally available on AWS. WarpBuild now supports running GitHub Actions runners on your own AWS account. Read more here: [BYOC on AWS](/docs/ci/byoc/aws).
- `Enhancement`: Static IPs are now available for GitHub Actions runners for BYOC. Read more here: [Static IPs](/docs/ci/byoc#static-ips).
- `Enhancement`: You can now import container images for GitHub Actions runners into WarpBuild. This is especially useful for folks using `actions-runner-controller` on Kubernetes who want a more efficient way of running workloads.
- `Enhancement`: The `runner` binary is updated to v2.318.0.
- `Enhancement`: The [Ubuntu 22.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20240721.1) has been updated.
- `Enhancement`: The [Ubuntu 24.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20240721.1) has been updated.

### July 16, 2024

- `Enhancement`: The [Ubuntu 22.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20240714.1) has been updated.
- `Enhancement`: The [Ubuntu 24.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20240714.1) has been updated.

### July 11, 2024

- `Enhancement`: macOS [13](https://github.com/actions/runner-images/releases/tag/macos-13%2F20240707.2) and [14](https://github.com/actions/runner-images/releases/tag/macos-14%2F20240708.1) images have been updated.

### July 09, 2024

- `Enhancement`: The [Ubuntu 22.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20240708.1) has been updated.
- `Enhancement`: The [Ubuntu 24.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20240707.1) has been updated.

---

# June 2024

URL: https://www.warpbuild.com/docs/ci/changelog/2024-june
Description: List of updates in 2024-June

---
title: "June 2024"
slug: "2024-june"
description: "List of updates in 2024-June"
sidebar_position: -5
createdAt: "2024-06-03"
updatedAt: "2024-06-30"
---

### June 20, 2024

- `Feature`: WarpBuild now supports custom runner configurations. They can be customized for cost and performance for both `arm64` and `x86_64` architectures. Read more in the [custom runner docs](../cloud-runners).
- `Enhancement`: The [WarpCache](https://github.com/marketplace/actions/warpcache) action now supports cache restores from default branches and base branches (for pull requests). [More info](https://github.com/WarpBuilds/cache/blob/main/tips-and-workarounds.md#use-cache-across-feature-branches).

### June 18, 2024

- `Enhancement`: Security enhancements for cache isolation.
- `Enhancement`: The [macOS 14 image](https://github.com/actions/runner-images/releases/tag/macos-14/20240612.5) has been updated.
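The `WarpBuilds/cache` action mentioned above is a drop-in replacement for `actions/cache`, so its inputs are assumed to mirror that action's `path`/`key`/`restore-keys`. A minimal sketch of swapping it in (the npm cache path and key are illustrative):

```yaml
- uses: WarpBuilds/cache@v1 # previously: actions/cache@v4
  with:
    path: ~/.npm
    key: npm-${{ runner.os }}-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      npm-${{ runner.os }}-
```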
### June 12, 2024

- `Feature`: [Ubuntu 24.04](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20240604.1) x64 runners are now available. You can start using them by using the relevant labels like `warp-ubuntu-2404-x64-`. You can see all the runner labels on the [Runners](/docs/ci/cloud-runners) page.

  > **Note**: `warp-ubuntu-latest-x64-` will continue to point to Ubuntu 22.04 until GitHub changes the default runner version for `ubuntu-latest`.

### June 11, 2024

- `Enhancement`: The [Ubuntu 22.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20240603.1) has been updated.
- `Enhancement`: Spot runners have been made more reliable. Fallback instance types have been added in case of insufficient capacity from our infrastructure provider.

### June 07, 2024

- `Enhancement`: Added support for various `setup-*` actions to utilize WarpBuild cache as their caching service. See the [Caching](/docs/ci/caching#usage-with-setup--actions) page for more information.
- `Enhancement`: macOS [13](https://github.com/actions/runner-images/releases/tag/macos-13-arm64%2F20240603.1) and [14](https://github.com/actions/runner-images/releases/tag/macos-14-arm64%2F20240603.1) images have been updated.

### June 05, 2024

- `Enhancement`: The Runners page is updated with day-wise usage statistics. The workflows section has been removed, and the runners table from the billing page has been moved here and updated. ![Runners Page UI Update](./img/changelog/2024-june-runners-ui.png)
- `Change`: The functionality to automatically raise PRs to switch workflows to use WarpBuild runners has been removed, until it can be made to work robustly for all scenarios including reusable workflows and runner groups.

### June 04, 2024

`Enhancements`:

- x86-64 runners use new processors that are ~20% faster. They are AMD processors using the Genoa (Zen4) architecture.
- Disk performance tiering: It is recommended that workloads sensitive to high disk performance use larger runner sizes.
- 32x runners are now on 32 vCPU instances, up from 30 vCPUs earlier. The billing for these runners has been updated to reflect the 2 extra vCPUs.
- Spot instances do not get interrupted in less than 1 minute of runtime.
- Caching UI: A page has been added to view and manage cache entries from all jobs. The billing page will be added soon, when the billing actually starts. ![Caching UI](./img/changelog/2024-june-cache-ui.png)
- Billing UI: This has been refreshed to make updating the billing email address more obvious. The page has been cleared of some unnecessary clutter. ![Billing UI Update](./img/changelog/2024-june-billing-ui.png)
- Caching updates: Based on user feedback, cache TTL is set to 7 days. This change will go live along with the billing enablement for caching in the next 2 days. There will be a one-time purge of all previous caches when billing is enabled to ensure there are no surprises.

`Fixes`:

- A small fraction of runners was facing intermittent network connectivity issues. That is now resolved.

### June 03, 2024

- `Fix`: Fixed a corner case where the user list was not synced correctly with organizations.

### June 01, 2024

- `Enhancement`: The `arm64` runners have been upgraded to use Graviton3 instances that are 25% faster.

---

# March 2024

URL: https://www.warpbuild.com/docs/ci/changelog/2024-march
Description: List of updates in 2024-March

---
title: "March 2024"
slug: "2024-march"
description: "List of updates in 2024-March"
sidebar_position: -2
createdAt: "2024-03-05"
updatedAt: "2024-03-05"
---

### March 5, 2024

- **Enhancement**: All runners now have 150GB of disk space. This is a 50% increase from the previous 100GB.
- **Enhancement**: Updated the Ubuntu x86-64 runners to be in sync with GitHub Actions runners release [20240225.1.1](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20240225.1). The key changes are:
  - [All OSes] Ruby versions \<= 2.7.x will be removed on February 26
  - [All OSes] Go 1.19.x will be removed and 1.21.x set as default on February 26
- **Fix**: Identified and fixed a bug where the local Docker Hub mirror was erroring out in a few cases.

### March 9, 2024

- **Fix**: A race condition caused a small fraction of runners to terminate before the job was complete. This is now fixed.
- **Change**: Billing for jobs is now done on a per-minute basis.

### March 16, 2024

- **Enhancement**: The product architecture has been updated to flatten and simplify the serving stack. This results in a huge improvement to the runner start time.

### March 19, 2024

- **Enhancement**: New runner types are available.
  - `warp-ubuntu-latest-x64-8x` - 8 vCPUs, 32GB RAM, 150GB disk space
  - `warp-ubuntu-latest-x64-32x` - 30 vCPUs, 120GB RAM, 150GB disk space
  - `warp-ubuntu-latest-arm64-8x` - 8 vCPUs, 32GB RAM, 150GB disk space
  - `warp-ubuntu-latest-arm64-32x` - 32 vCPUs, 128GB RAM, 150GB disk space
- **Fix**: The p99 for runner start time has been reduced by 50%.

---

# May 2024

URL: https://www.warpbuild.com/docs/ci/changelog/2024-may
Description: List of updates in 2024-May

---
title: "May 2024"
slug: "2024-may"
description: "List of updates in 2024-May"
sidebar_position: -4
createdAt: "2024-05-02"
updatedAt: "2024-05-31"
---

### May 31, 2024

- `Enhancement`: Updated the fraud detection logic for organizations.

### May 20, 2024

- `Enhancement`: The ubuntu-2204 images for both arm64 and x86-64 architectures have been updated.

### May 07, 2024

- `Enhancement`: Enabled fraud detection logic for preventing malicious accounts.

### May 02, 2024

- `Enhancement`: Introduced runner group configurations.
Now you can choose the runner group to which WarpBuild runners are attached [here](https://app.warpbuild.com/dashboard/runners/groups).

---

# November 2024

URL: https://www.warpbuild.com/docs/ci/changelog/2024-november
Description: List of updates in 2024-November

---
title: "November 2024"
slug: "2024-November"
description: "List of updates in 2024-November"
sidebar_position: -10
createdAt: "2024-11-04"
updatedAt: "2024-11-14"
---

import BreakingChangeTag from "./BreakingChangeTag";

### November 14, 2024

- `Feature`: Introducing standby disks for BYOC runners across cloud providers. Standby disks maintain a warm pool of boot disks for BYOC runners. This allows for ~10-15s runner startup times. Read more about standby disks [here](/docs/ci/byoc/standby-disks).

  > Note: AWS connection version `>=1.2` is required for standby disks to work on AWS BYOC. Please upgrade using the WarpBuild UI - https://app.warpbuild.com/dashboard/byoc .

- `Feature`: Added support for Windows Server 2022 (x86-64) runners. The technical details of the runners are [here](/docs/ci/cloud-runners#windows-x86-64). For the complete list of tools available on the runner, see [here](https://github.com/actions/runner-images/releases/tag/win22%2F20241021.1).

### November 8, 2024

**Removal of Xcode 14 and 16 from macOS 14**

Xcode 14 and 16 were removed from GitHub's macOS 14 runners on November 4. Only Xcode 15 will be made available. More details on the [GitHub announcement here](https://github.com/actions/runner-images/issues/10703). WarpBuild macOS 14 runners have been updated to reflect this change. The newly launched [macOS 15 runners](#november-6-2024) can be used when Xcode 16 is a strict requirement.

### November 7, 2024

- `Feature`: Azure BYOC is now generally available. Read more here: [BYOC on Azure](/docs/ci/byoc/azure).

### November 6, 2024

- `Feature`: macOS 15 (arm64) is now available on WarpBuild with the label `warp-macos-15-arm64-6x`. More details [here](/docs/ci/cloud-runners#macos-m2-pro-on-arm64).

  > **Note**: The image is based on GitHub's [macOS 15 arm64 image](https://github.com/actions/runner-images/blob/main/images/macos/macos-15-arm64-Readme.md), which is currently in [beta](https://github.com/actions/runner-images/issues/10686). Frequent changes are to be expected.

### November 5, 2024

- `Enhancement`: BYOC runners across cloud providers will now block external incoming connections by default.

### November 4, 2024

- `Feature`: Custom VM images are now supported for Azure BYOC runners.

---

# October 2024

URL: https://www.warpbuild.com/docs/ci/changelog/2024-october
Description: List of updates in 2024-October

---
title: "October 2024"
slug: "2024-October"
description: "List of updates in 2024-October"
sidebar_position: -9
createdAt: "2024-10-04"
updatedAt: "2024-10-29"
---

import BreakingChangeTag from './BreakingChangeTag';

### ⚠️ Future Breaking Changes ⚠️

#### macOS 14 Xcode versions

Xcode 14 and 16 will be removed from macOS 14 on October 28, 2024. This change will go live on November 8, 2024 on WarpBuild. More details on the [GitHub announcement here](https://github.com/actions/runner-images/issues/10703).

---

### October 30, 2024

**GitHub Actions Runner Labels Update**

GitHub is rolling out updates to the runner labels. The labels are now updated on WarpBuild to reflect the new ones. The following table shows the changes.

| Runner Label                 | Old Alias                  | New Alias                  |
| ---------------------------- | -------------------------- | -------------------------- |
| `warp-ubuntu-latest-x64-*`   | `warp-ubuntu-2204-x64-*`   | `warp-ubuntu-2404-x64-*`   |
| `warp-ubuntu-latest-arm64-*` | `warp-ubuntu-2204-arm64-*` | `warp-ubuntu-2404-arm64-*` |
| `warp-macos-latest-arm64-6x` | `warp-macos-13-arm64-6x`   | `warp-macos-14-arm64-6x`   |

These changes may break workflows for some users. Please pin the label version or migrate to using the new OS versions. The [GitHub announcement is here](https://github.blog/changelog/2024-09-25-actions-new-images-and-ubuntu-latest-changes/).

### October 29, 2024

- `Feature`: Custom VM images are now supported for GCP BYOC runners.

### October 21, 2024

- `Feature`: Ubuntu 24.04 arm64 runners are now supported natively as cloud runners as well as with AWS and GCP custom runners. These runners are compatible with GitHub's Ubuntu 24.04 arm64. Refer to [cloud runner labels](/docs/ci/cloud-runners#linux-arm64) for the full list of available labels. Refer to [this link](https://github.com/actions/partner-runner-images/blob/main/images/arm-ubuntu-24-image.md) for the details on the packaged tools.

### October 17, 2024

- `Enhancement`: The [macos-14 image](https://github.com/actions/runner-images/releases/tag/macos-14-arm64%2F20241007.259) has been updated. This fixes the issue with the iOS 18 SDK and simulator not being available.

### October 15, 2024

- `Feature`: Docker Layer Caching is now available for GCP BYOC runners.
- `Enhancement`: The images for `ubuntu-2204` for the [x86-64](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20241006.1) and `arm64` architectures have been updated.
- `Enhancement`: The [ubuntu-2404 for x86-64](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20241006.1) image has been updated.

### October 14, 2024

- `Enhancement`: By default, BYOC features do not require a payment method to be added. Credits can be used for BYOC runners.

### October 11, 2024

- `Pricing`: Cost for cache operations has been **reduced** from $0.001 to $0.0001 per operation.

### October 09, 2024

- `Feature`: GCP BYOC is now generally available. Read more here: [BYOC on GCP](/docs/ci/byoc/gcp).

### October 08, 2024

- `Enhancement`: The runner start times are now much faster, with the 90%ile of start times being under 20 seconds. This is a significant improvement over the previous 90%ile of 45 seconds.

---

# September 2024

URL: https://www.warpbuild.com/docs/ci/changelog/2024-september
Description: List of updates in 2024-September

---
title: "September 2024"
slug: "2024-September"
description: "List of updates in 2024-September"
sidebar_position: -8
createdAt: "2024-09-19"
updatedAt: "2024-09-30"
---

### September 30, 2024

- `Enhancement`: macOS [13](https://github.com/actions/runner-images/releases/tag/macos-13%2F20240923.120) and [14](https://github.com/actions/runner-images/releases/tag/macos-14%2F20240923.101) images have been updated.

### September 19, 2024

- `Enhancement`: Custom VM Images for BYOC runners.

---

# April 2025

URL: https://www.warpbuild.com/docs/ci/changelog/2025-april
Description: List of updates in 2025-April

---
title: "April 2025"
slug: "2025-April"
id: "2025-April"
description: "List of updates in 2025-April"
sidebar_position: -16
createdAt: "2025-04-01"
updatedAt: "2025-04-02"
---

### April 17, 2025

- `Feature`: Windows Server 2022 Custom AMIs are now supported on AWS. Read more about [custom images](/docs/ci/byoc/custom-vm-images#windows).

### April 14, 2025

- `Enhancement`: [Ubuntu 24.04](https://github.com/actions/runner-images/releases/tag/ubuntu24/20250406.1) was updated for WarpBuild Cloud.
- `Enhancement`: [Ubuntu 22.04](https://github.com/actions/runner-images/releases/tag/ubuntu22/20250406.1) was updated for WarpBuild Cloud.

### April 10, 2025

- `Feature`: Spending limits and alerts are available for all users.

### April 8, 2025

- `Enhancement`: [Ubuntu 22.04](https://github.com/actions/runner-images/releases/tag/ubuntu22/20250316.1) was updated for Azure, AWS and GCP.
- `Enhancement`: [Ubuntu 24.04](https://github.com/actions/runner-images/releases/tag/ubuntu24/20250316.1) was updated for AWS, Azure and GCP.
- `Enhancement`: [Windows Server 2022](https://github.com/actions/runner-images/releases/tag/win22/20250330.1) was updated for Azure.
- `Enhancement`: Ubuntu 24.04 ARM was updated for Azure, AWS and GCP.

### April 7, 2025

- `Feature`: Windows Server 2022 x86-64 runners are now supported on AWS. Read more [here](/docs/ci/byoc/aws#windows-support).

### April 2, 2025

- `Fix`: An issue was identified where various `WarpBuilds/setup-*` actions were hanging after completion, causing significantly longer build times. This is now fixed. Please upgrade to the latest version of the action in your workflows if you have been affected.

---

# August 2025

URL: https://www.warpbuild.com/docs/ci/changelog/2025-august
Description: List of updates in 2025-August

---
title: "August 2025"
slug: "2025-August"
description: "List of updates in 2025-August"
sidebar_position: -20
createdAt: "2025-08-06"
updatedAt: "2025-08-06"
---

### August 08, 2025

- `Enhancement`: [macOS 14 image and packages](https://github.com/actions/runner-images/releases/tag/macos-14-arm64%2F20250805.1714) have been updated.

### August 6, 2025

- `Enhancement`: [Windows 2022 image](https://github.com/actions/runner-images/releases/tag/win22%2F20250727.1) has been updated.
- `Enhancement`: [macOS 15 image and packages](https://github.com/actions/runner-images/releases/tag/macos-15-arm64%2F20250722.2025) have been updated.
---

# December 2025

URL: https://www.warpbuild.com/docs/ci/changelog/2025-december
Description: List of updates in 2025-December

---
title: "December 2025"
slug: "2025-December"
description: "List of updates in 2025-December"
sidebar_position: -24
createdAt: "2025-12-07"
updatedAt: "2025-12-22"
---

### December 22, 2025

- `Enhancement`: The remote Docker builder cache/data now has a TTL of 10 days. If a builder profile is not used for more than 10 days, its cache is reset automatically. The profile itself can continue to be used.

### December 15, 2025

- `Enhancement`: [macOS 15 image](https://github.com/actions/runner-images/releases/tag/macos-15-arm64%2F20251203.0057) has been updated.

### December 10, 2025

- `Feature`: macOS 26 (arm64) runners are now available on WarpBuild in 6x and 12x configurations, using the labels `warp-macos-26-arm64-6x` and `warp-macos-26-arm64-12x`. The latest aliases (`warp-macos-latest-arm64-6x` and `warp-macos-latest-arm64-12x`) continue to point to macOS 15 runners for stability.

### December 7, 2025

- `Pricing`: Snapshot pricing has been updated. Snapshot restores now cost **$0.04 per restore**, and snapshot **storage now costs $0.025/hour per snapshot**.

---

# February 2025

URL: https://www.warpbuild.com/docs/ci/changelog/2025-february
Description: List of updates in 2025-February

---
title: "February 2025"
slug: "2025-February"
id: "2025-February"
description: "List of updates in 2025-February"
sidebar_position: -14
createdAt: "2025-02-17"
updatedAt: "2025-02-17"
---

### February 27, 2025

- `Enhancement`: The macOS VMs have been upgraded from M2 Pro to M4 Pro processors for all users. The runners are now 60% faster and 2x cheaper than GitHub-hosted runners. Memory has been increased from 14GB to 22GB.
### February 22, 2025

- `Enhancement`: The WarpBuild Cloud and Azure BYOC images have been updated for `windows-2022` [x86-64](https://github.com/actions/runner-images/releases/tag/win22/20250209.1).

### February 17, 2025

- `Enhancement`: Added a new option that allows you to assign and run Dependabot workflows on WarpBuild custom runners. Learn more [here](/docs/ci/cloud-runners/custom-runners#running-dependabot-workflows).

---

# January 2025

URL: https://www.warpbuild.com/docs/ci/changelog/2025-january
Description: List of updates in 2025-January

---
title: "January 2025"
slug: "2025-January"
description: "List of updates in 2025-January"
sidebar_position: -12
createdAt: "2025-01-10"
updatedAt: "2025-01-16"
---

import BreakingChangeTag from "./BreakingChangeTag";

### ⚠️ Future Breaking Changes ⚠️

- `Breaking Change`: The arm64 images for ubuntu-22.04 will be deprecated on March 31, 2025.

### January 28, 2025

- `Enhancement`: The following images have been updated across all the cloud providers - AWS, GCP and Azure
  - `ubuntu-2204` for [x86-64](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20250120.2)
  - `ubuntu-2404` for [x86-64](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20250120.5) and for [arm64](https://github.com/actions/partner-runner-images/blob/main/images/arm-ubuntu-24-image.md)

### January 20, 2025

- `Enhancement`: The `BYOC` page now shows error messages from the cloud provider. This is very useful for tracking quota issues and other issues from the cloud provider.
- `Enhancement`: The UI pages now have better search and filter capabilities.

### January 15, 2025

- `Fix`: Overflowing content in the BYOC UI page container.

### January 12, 2025

- `Fix`: Reloading a page no longer redirects to the parent product page.
- `Fix`: Added the ability to change the organization name from the [organization settings page](https://app.warpbuild.com/dashboard/settings/general).
### January 11, 2025

- `Enhancement`: The top bar breadcrumbs are more informative.
- `Deprecation`: The `custom container images` feature is deprecated.

### January 10, 2025

- `Enhancement`: The following images have been updated across all the cloud providers - AWS, GCP and Azure
  - `ubuntu-2204` for [x86-64](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20250105.1)
  - `ubuntu-2404` for [x86-64](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20250105.1) and for [arm64](https://github.com/actions/partner-runner-images/blob/main/images/arm-ubuntu-24-image.md)
- `Enhancement`: The WarpBuild Cloud and Azure BYOC images for `windows-2022` [x86-64](https://github.com/actions/runner-images/releases/tag/win22%2F20250105.1) have been updated.

---

# July 2025

URL: https://www.warpbuild.com/docs/ci/changelog/2025-july
Description: List of updates in 2025-July

---
title: "July 2025"
slug: "2025-July"
description: "List of updates in 2025-July"
sidebar_position: -19
createdAt: "2025-07-07"
updatedAt: "2025-07-28"
---

### July 28, 2025

- `Enhancement`: [Ubuntu 22.04](https://github.com/actions/runner-images/releases/tag/ubuntu22/20250720.1) has been updated.
- `Enhancement`: [Ubuntu 24.04](https://github.com/actions/runner-images/releases/tag/ubuntu24/20250720.1) has been updated.
- `Enhancement`: Ubuntu 24.04 ARM has been updated.

### July 5, 2025

- `Feature`: Added support for [directory sync](/docs/sso#directory-sync).

---

# June 2025

URL: https://www.warpbuild.com/docs/ci/changelog/2025-june
Description: List of updates in 2025-June

---
title: "June 2025"
slug: "2025-June"
description: "List of updates in 2025-June"
sidebar_position: -18
createdAt: "2025-06-25"
updatedAt: "2025-06-25"
---

### June 25, 2025

- `Enhancement`: [macOS 15 image and packages](https://github.com/actions/runner-images/releases/tag/macos-15-arm64/20250623.1849) have been updated.
### June 19, 2025

- `Feature`: Added a metadata field to the stack errors page. This field contains useful context about stack operations for better debuggability.

### June 10, 2025

- `Feature`: Added support for a read-only role. This role is called `viewer` and can be assigned to other users from the workspace settings page.

### June 6, 2025

- `Enhancement`: [Ubuntu 22.04 image and packages](https://github.com/actions/runner-images/releases/tag/ubuntu22/20250602.1) have been updated.
- `Enhancement`: [Ubuntu 24.04 image and packages](https://github.com/actions/runner-images/releases/tag/ubuntu24/20250602.3) have been updated.
- `Enhancement`: [Ubuntu 24.04 ARM image and packages](https://github.com/actions/runner-images/releases/tag/ubuntu24-arm64/20250602.3) have been updated.

### June 2, 2025

- `Feature`: Added support for IdP-initiated logins for SSO users.

---

# March 2025

URL: https://www.warpbuild.com/docs/ci/changelog/2025-march
Description: List of updates in 2025-March

---
title: "March 2025"
slug: "2025-March"
id: "2025-March"
description: "List of updates in 2025-March"
sidebar_position: -15
createdAt: "2025-03-06"
updatedAt: "2025-03-21"
---

### March 31, 2025

- `Fix`: The default Xcode version for macOS 14 was incorrect. It now points to 15.4, which is consistent with GitHub runners.
- `Enhancement`: macOS [14](https://github.com/actions/runner-images/releases/tag/macos-14-arm64/20250324.1158) has been updated.

### March 25, 2025

- `Enhancement`: Ubuntu [22.04](https://github.com/actions/runner-images/releases/tag/ubuntu22/20250323.1) has been updated.
- `Enhancement`: Ubuntu [24.04](https://github.com/actions/runner-images/releases/tag/ubuntu24/20250323.1) has been updated.
- `Enhancement`: Ubuntu 22.04 ARM has been updated.
- `Enhancement`: Ubuntu 24.04 ARM has been updated.

### March 21, 2025

- `NEW`: A drop-in replacement for the `docker/build-push-action` action is now available.
  This new action uses the faster and more efficient WarpBuild Remote Docker Builders out of the box. Learn more [here](/docs/ci/docker-builders).
- `Enhancement`: macOS [13](https://github.com/actions/runner-images/releases/tag/macos-13-arm64%2F20250317.910), [14](https://github.com/actions/runner-images/releases/tag/macos-14-arm64%2F20250304.1018) and [15](https://github.com/actions/runner-images/releases/tag/macos-15-arm64%2F20250312.1001) images have been updated.

### March 20, 2025

- `NEW`: `arm64` remote Docker builders are now available.

### March 6, 2025

- `NEW`: Remote Docker builders are now available. These are the fastest Docker builders possible, paired with local NVMe storage for caching. This results in extremely fast builds, with 60x speedups on real-world projects. Learn more [here](/docs/ci/docker-builders).

---

# May 2025

URL: https://www.warpbuild.com/docs/ci/changelog/2025-may
Description: List of updates in 2025-May

---
title: "May 2025"
slug: "2025-May"
id: "2025-May"
description: "List of updates in 2025-May"
sidebar_position: -17
createdAt: "2025-05-01"
updatedAt: "2025-06-25"
---

### May 27, 2025

- `Feature`: Auto onboard SSO users to linked SSO organizations.

### May 16, 2025

- `Security Enhancement`: Email verification for SSO-based users.

### May 13, 2025

- `Enhancement`: [Windows 2022 image](https://github.com/actions/runner-images/releases/tag/win22/20250504.1) has been updated.

### May 8, 2025

- `Feature`: Added SSO support for enterprise users. SSO refers to SAML only; OAuth logins are not part of SSO.

### May 1, 2025

- `Feature`: GCE instances now support a custom service account for API and identity management.
  Read more [here](/docs/ci/byoc/gcp/service-account).

---

# November 2025

URL: https://www.warpbuild.com/docs/ci/changelog/2025-november
Description: List of updates in 2025-November

---
title: "November 2025"
slug: "2025-November"
description: "List of updates in 2025-November"
sidebar_position: -23
createdAt: "2025-11-03"
updatedAt: "2025-11-26"
---

### November 26, 2025

- `Enhancement`: [Ubuntu 24.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20251112.124) has been updated.
- `Enhancement`: [Ubuntu 22.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20251112.150) has been updated.
- `Enhancement`: [Windows Server 2025 image](https://github.com/actions/runner-images/releases/tag/win25%2F20251102.77) has been updated.
- `Enhancement`: [Windows Server 2022 image](https://github.com/actions/runner-images/releases/tag/win22%2F20251102.87) has been updated.

### November 24, 2025

- `New`: macOS 15 ARM64 12x runners are now available! These runners offer 12 vCPUs and 44GB of memory. Use the `warp-macos-15-arm64-12x` label (alias: `warp-macos-latest-arm64-12x`) to access these runners. Pricing is $0.16/minute.

### November 18, 2025

- `Enhancement`: Improvements to tables and other UI elements to adapt to different screen sizes.

### November 15, 2025

- `Enhancement`: The billing page has been updated to make the billing details easier to understand.

### November 3, 2025

- `Fix`: Ubuntu 24.04 ARM64 snapshot instances weren't starting up because of an issue with the init script. This has been fixed.

### November 2, 2025

- `Enhancement`: [macOS 14 image](https://github.com/actions/runner-images/releases/tag/macos-14-arm64/20251020.0056) has been updated.
--- # October 2025 URL: https://www.warpbuild.com/docs/ci/changelog/2025-october Description: List of updates in 2025-October --- title: "October 2025" slug: "2025-October" description: "List of updates in 2025-October" sidebar_position: -22 createdAt: "2025-10-07" updatedAt: "2025-10-17" --- There are multiple deprecations coming soon in the upstream GitHub Actions runner images. Please refer to the following issue for more details: [GitHub Actions Runner Images Deprecations on 2025-Nov-03](https://github.com/actions/runner-images/issues/12898) ### October 23, 2025 - `Enhancement`: The WarpBuild agent has been updated to capture more detailed resource utilization metrics. - `Enhancement`: Updated the WarpBuild UI to show GitHub Actions logs in the Observability page. - `Enhancement`: Improved the WarpBuild UI to show billing related banners for important events like hitting spend limits. - `Enhancement`: [Ubuntu 24.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20250929.60) has been updated. - `Enhancement`: [Ubuntu 22.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20251014.106) has been updated. - `Enhancement`: [Windows Server 2022 image](https://github.com/actions/runner-images/releases/tag/win22%2F20251014.68) has been updated. - `Enhancement`: Ubuntu 24.04 ARM image has been updated. ### October 17, 2025 - `Feature`: GitHub Actions logs are now collected and displayed in the Observability page, allowing you to correlate workflow execution with system metrics and logs. This collection can be paused along with other observability data and is enabled by default. More details [here](/docs/ci/observability). - `Enhancement`: [Ubuntu 24.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20250929.60) has been updated. - `Enhancement`: [Ubuntu 22.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20250929.88) has been updated. 
### October 7, 2025 - `Enhancement`: [MacOS 15 image](https://github.com/actions/runner-images/releases/tag/macos-15%2F20250928.1958) has been updated. --- # September 2025 URL: https://www.warpbuild.com/docs/ci/changelog/2025-september Description: List of updates in 2025-September --- title: "September 2025" slug: "2025-September" description: "List of updates in 2025-September" sidebar_position: -21 createdAt: "2025-09-02" updatedAt: "2025-09-05" --- ### September 16, 2025 - `Enhancement`: [MacOS 15 image](https://github.com/actions/runner-images/releases/tag/macos-15-arm64%2F20250911.2324) has been updated. ### September 12, 2025 - `Enhancement`: [MacOS 14 image](https://github.com/actions/runner-images/releases/tag/macos-14-arm64%2F20250901.1774) has been updated. ### September 11, 2025 - `Enhancement`: [MacOS 15 image](https://github.com/actions/runner-images/releases/tag/macos-15-arm64%2F20250830.2281) has been updated. ### September 8, 2025 - `Fix`: Telemetry info is now available for WarpBuild Cloud and Azure runners. The data for these should show up for the new jobs in the instance metrics pages. ### September 5, 2025 - `Feature`: Windows Server 2025 runners are now available! These new runners provide the latest Windows Server platform with enhanced performance and security features. Available in 4x, 8x, 16x, and 32x vCPU configurations. ### September 3, 2025 - `Feature`: Runner resource utilization metrics and logs can now be seen in the WarpBuild UI. - `Enhancement`: The WarpBuild dashboard now shows telemetry data for each job, including CPU, memory, network, and disk usage. Syslog data is also available for each job. More details [here](/docs/ci/observability). ### September 2, 2025 - `Enhancement`: `macos-latest` runner label now points to macOS 15 instead of macOS 14. - `Enhancement`: [Ubuntu 22.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20250825.1) has been updated. 
- `Enhancement`: [Ubuntu 24.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20250824.1) has been updated. - `Enhancement`: [Windows Server 2022 image](https://github.com/actions/runner-images/releases/tag/win22%2F20250825.1) has been updated. - `Enhancement`: Ubuntu 24.04 arm64 image has been updated. - `Enhancement`: WarpBuild telemetry now uses port 33931 for data collection. See our [observability documentation](/docs/ci/observability) for more details. --- # January 2026 URL: https://www.warpbuild.com/docs/ci/changelog/2026-january Description: List of updates in 2026-January --- title: "January 2026" slug: "2026-January" description: "List of updates in 2026-January" sidebar_position: -25 createdAt: "2026-01-06" updatedAt: "2026-01-31" --- ### January 31, 2026 - `Feature`: Enhanced Observability with Runner Instance Right Sizing Recommendations. The Observability section now includes a dedicated "Recommendations" view that displays hierarchical performance metrics (Repository > Workflow > Job > Instance Type) with detailed summaries including CPU, memory, filesystem, disk I/O, and network utilization. Easily identify instances that violate resource thresholds to optimize runner configurations. Read more about it [here](/docs/ci/observability). ### January 20, 2026 - `Feature`: MCP support for WarpBuild CI. Read more about it [here](/docs/ci/mcp). ### January 16, 2026 - `Enhancement`: [Ubuntu 22.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20260112.2) has been updated. - `Enhancement`: [Ubuntu 24.04 image](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20260111.209) has been updated. - `Enhancement`: [Windows Server 2022 image](https://github.com/actions/runner-images/releases/tag/win22%2F20260112.2) has been updated. - `Enhancement`: [Windows Server 2025 image](https://github.com/actions/runner-images/releases/tag/win25%2F20260111.179) has been updated. 
### January 6, 2026 - `Feature`: AWS BYOC runners now support disabling IMDS v1, allowing instances to require IMDS v2 only. Read more about it [here](/docs/ci/byoc/aws/config#set-instance-metadata-service-version-2-imds-v2-to-required). --- # Changelog URL: https://www.warpbuild.com/docs/ci/changelog Description: WarpBuild Changelog --- title: "Changelog" excerpt: "WarpBuild Changelog" description: "WarpBuild Changelog" icon: FileClock createdAt: "2024-03-04" updatedAt: "2024-03-04" --- The WarpBuild changelog is a list of updates, improvements, and bug fixes for the WarpBuild platform. This page is updated regularly to keep you informed about the latest changes. Subscribe to our [RSS feed](https://docs.warpbuild.com/ci/rss.xml) to stay updated on the latest changes. --- # Custom Runners URL: https://www.warpbuild.com/docs/ci/cloud-runners/custom-runners Description: Customize runners for cost and performance --- title: "Custom Runners" excerpt: "Customize runners for cost and performance" description: "Customize runners for cost and performance" hidden: false sidebar_position: 2 slug: "/cloud-runners/custom-runners" createdAt: "2024-07-01" updatedAt: "2024-07-01" --- Custom runners are not available for organizations created after 2025-07-01. Choose from available CPU and disk presets to fit your price-performance preferences from the [Custom Runners page](https://app.warpbuild.com/dashboard/runners/custom-runners). The overall price for the selected configuration is displayed. 
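Once created, a custom runner is used from workflows like any other WarpBuild runner label. A minimal sketch, assuming a hypothetical label (use the actual label shown for your custom runner on the Custom Runners page):

```yaml
jobs:
  build:
    # `warp-custom-my-runner` is a placeholder label; substitute the label
    # assigned to your custom runner on the Custom Runners page
    runs-on: warp-custom-my-runner
    steps:
      - uses: actions/checkout@v4
      - run: echo "Running on a WarpBuild custom runner"
```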
## CPU configurations

| Config | Description                                      | Arch  | Capacity Type | Price per 1vCPU |
| ------ | ------------------------------------------------ | ----- | ------------- | --------------- |
| large  | High performance, current generation processors  | x64   | OnDemand      | $0.00126        |
| medium | High performance, previous generation processors | x64   | OnDemand      | $0.00108        |
| small  | Burst performance, older generation processors   | x64   | OnDemand      | $0.0009         |
| large  | High performance, current generation processors  | arm64 | OnDemand      | $0.000945       |
| medium | High performance, previous generation processors | arm64 | OnDemand      | $0.00081        |
| small  | Burst performance, older generation processors   | arm64 | OnDemand      | $0.000675       |
| large  | High performance, current generation processors  | x64   | Spot          | $0.000945       |
| medium | High performance, previous generation processors | x64   | Spot          | $0.00081        |
| small  | Burst performance, older generation processors   | x64   | Spot          | $0.000675       |
| large  | High performance, current generation processors  | arm64 | Spot          | $0.000506       |
| medium | High performance, previous generation processors | arm64 | Spot          | $0.000608       |
| small  | Burst performance, older generation processors   | arm64 | Spot          | $0.000709       |

## Disk configurations

| Config | Storage | IOPS  | Throughput (MB/s) | Price     |
| ------ | ------- | ----- | ----------------- | --------- |
| xlarge | 300GB   | 12000 | 1000              | $0.009045 |
| large  | 300GB   | 8000  | 800               | $0.006164 |
| medium | 150GB   | 5000  | 500               | $0.001695 |
| small  | 150GB   | 3200  | 250               | $0.000719 |

![Disk configurations](./img/custom-runners/disk.png)

## Optimization

Workflow runtime depends on various factors beyond just disk and CPU performance. It's a good idea to understand whether the workflow is CPU-bound or IO-bound when choosing the optimal configuration for your workflow.
A general rule to note is that network infrastructure speeds (> 900 MBps) are higher than disk throughput (~250-1000 MBps), so the network is unlikely to be a bottleneck. However, it could still be the limiting factor if the source (for example, Docker Hub) does not support high data transfer speeds.

## Boot times

Boot times for custom runners can be slower than those of the default runners, taking 45-60s.

## Running Dependabot Workflows

You can run Dependabot update workflows on WarpBuild custom runners. [Enable Dependabot workflows on self-hosted runners](https://docs.github.com/en/enterprise-cloud@latest/code-security/dependabot/maintain-dependencies/managing-dependabot-on-self-hosted-runners#enabling-self-hosted-runners-for-dependabot-updates) on GitHub and check the `Allow Dependabot` checkbox while creating the custom runner on WarpBuild:

![Allow Dependabot](./img/custom-runners/dependabot.png)

This setup will run all your Dependabot workflows on the WarpBuild custom runner. For security reasons, when running Dependabot on GitHub Actions self-hosted runners, Dependabot updates will not be run on public repositories. [Learn More](https://docs.github.com/en/enterprise-cloud@latest/code-security/dependabot/maintain-dependencies/managing-dependabot-on-self-hosted-runners).

---

# Runner Images

URL: https://www.warpbuild.com/docs/ci/cloud-runners/runner-images
Description: Available runner images and their upstream sources

---
title: "Runner Images"
excerpt: "Available runner images and their upstream sources"
description: "Available runner images and their upstream sources"
icon: Image
sidebar_position: 1
createdAt: "2026-02-05"
updatedAt: "2026-02-05"
---

WarpBuild runner images are built to be compatible with GitHub-hosted runners. We maintain parity with the official GitHub Actions runner images, ensuring your workflows run seamlessly on WarpBuild infrastructure.
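Because the images track the GitHub-hosted images, moving an existing workflow over is typically a one-line `runs-on` change. A minimal sketch (the job name and build command are illustrative):

```yaml
jobs:
  test:
    # Before: runs-on: ubuntu-latest
    runs-on: warp-ubuntu-latest-x64-4x # drop-in WarpBuild equivalent
    steps:
      - uses: actions/checkout@v4
      - run: make test
```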
## macOS Images

| Image | Architecture | Upstream Release | Last Updated |
|-------|--------------|------------------|--------------|
| macOS 26 | ARM64 | [macos-26-arm64/20260127.0184](https://github.com/actions/runner-images/releases/tag/macos-26-arm64%2F20260127.0184) | 2026-02-05 |
| macOS 15 | ARM64 | [macos-15-arm64/20251203.0057](https://github.com/actions/runner-images/releases/tag/macos-15-arm64%2F20251203.0057) | 2025-12-15 |
| macOS 14 | ARM64 | [macos-14-arm64](https://github.com/actions/runner-images/releases?q=macos-14-arm64) | - |
| macOS 13 | ARM64 | [macos-13-arm64](https://github.com/actions/runner-images/releases?q=macos-13-arm64) | - |

## Ubuntu Images

| Image | Architecture | Upstream Release | Last Updated |
|-------|--------------|------------------|--------------|
| Ubuntu 24.04 | x86-64 | [ubuntu24/20260111.209](https://github.com/actions/runner-images/releases/tag/ubuntu24%2F20260111.209) | 2026-01-16 |
| Ubuntu 24.04 | ARM64 | [Partner Runner Images](https://github.com/actions/partner-runner-images/blob/main/images/arm-ubuntu-24-image.md) | 2025-11-20 |
| Ubuntu 22.04 | x86-64 | [ubuntu22/20260112.2](https://github.com/actions/runner-images/releases/tag/ubuntu22%2F20260112.2) | 2026-01-16 |

## Windows Images

| Image | Architecture | Upstream Release | Last Updated |
|-------|--------------|------------------|--------------|
| Windows Server 2025 | x86-64 | [win25/20260111.179](https://github.com/actions/runner-images/releases/tag/win25%2F20260111.179) | 2026-01-16 |
| Windows Server 2022 | x86-64 | [win22/20260112.2](https://github.com/actions/runner-images/releases/tag/win22%2F20260112.2) | 2026-01-16 |

## Image Update Policy

WarpBuild regularly updates runner images to include the latest security patches, tooling updates, and compatibility improvements. Image updates are documented in our [changelog](/docs/ci/changelog).
For detailed information about pre-installed software on each image, see the [preinstalled software documentation](/docs/ci/preinstalled-software).

---

# Docker Builders

URL: https://www.warpbuild.com/docs/ci/docker-builders
Description: Build Docker images with WarpBuild

---
title: "Docker Builders"
excerpt: "Build Docker images with WarpBuild"
description: "Build Docker images with WarpBuild"
icon: Container
createdAt: "2025-03-03"
updatedAt: "2025-03-07"
---

WarpBuild provides powerful Docker builders that significantly accelerate your Docker build times, delivering superior performance for your containerization workflow.

## Features

- 🚀 Fast Docker builds with WarpBuild's remote builder nodes.
- 🔄 Automatic Docker BuildX integration.
- 🔐 Secure TLS authentication.
- 🌐 Works with both WarpBuild runners and non-WarpBuild runners.
- 🔌 Integrate anywhere via API, supporting local development and various CI platforms (GitHub, GitLab, Bitbucket, Buildkite, etc.).
- 🏗️ Multi-architecture builds (amd64, arm64) out of the box.

## See it in Action

![Docker builder in action](./img/benchmarks.png)

Experience up to 50% faster Docker builds compared to traditional solutions. Our optimized builder infrastructure handles the heavy lifting so your CI/CD pipeline runs more efficiently.

To get started with Docker builders, go to the [Docker Builders page](https://app.warpbuild.com/dashboard/runners/builder-profiles) and create a builder profile.

![View docker builders](./img/builder-profiles.png)

## Concurrency

Each builder profile corresponds to one optimized, dedicated Docker builder virtual machine with caching. Multiple jobs can run on the same builder profile, and they will effectively run on the same VM in parallel. While there is no limit on the number of builds that can run concurrently on a builder profile, the recommended minimum resource requirements for builders are approximately 8 vCPU and 16GB memory per build job.
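As a rough sizing sketch of that recommendation, the number of concurrent build jobs a builder comfortably supports is bounded by both CPU and memory. The 32 vCPU / 64GB builder VM below is a hypothetical example, not a WarpBuild spec:

```shell
# Recommended minimums per concurrent build job (from the section above)
VCPU_PER_JOB=8
MEM_GB_PER_JOB=16

# Hypothetical builder VM size (illustrative only)
VM_VCPUS=32
VM_MEM_GB=64

# The comfortable job count is the smaller of the CPU-based and memory-based limits
BY_CPU=$(( VM_VCPUS / VCPU_PER_JOB ))
BY_MEM=$(( VM_MEM_GB / MEM_GB_PER_JOB ))
MAX_JOBS=$(( BY_CPU < BY_MEM ? BY_CPU : BY_MEM ))
echo "$MAX_JOBS" # prints 4
```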
## Usage

### Using WarpBuild's Build Push Action (Recommended)

WarpBuild provides a drop-in replacement for the widely used `docker/build-push-action`. The action automatically sets up WarpBuild's remote Docker builders for you.

> Note: We recommend that you remove the `docker/setup-buildx-action` step from your workflow if you are only using it to set up builders.

```diff
- name: Setup Buildx
- uses: docker/setup-buildx-action@v3

  name: Docker Build Push Action
- uses: docker/build-push-action@v6
+ uses: Warpbuilds/build-push-action@v6 # Uses WarpBuild Docker Builders
  with:
    context: .
    push: true
    tags: user/app:latest
+   profile-name: "super-fast-builder" # Specify the builder profile to use
```

Here's how you can use `Warpbuilds/build-push-action` in your workflows, whether you are using WarpBuild runners or non-WarpBuild runners.

```yaml
jobs:
  build:
    runs-on: warp-ubuntu-latest-x64-4x
    steps:
      - uses: actions/checkout@v3
      - name: Build and push
        uses: Warpbuilds/build-push-action@v6
        with:
          context: .
          push: true
          tags: user/app:latest
          profile-name: "super-fast-builder"
          api-key: ${{ secrets.WARPBUILD_API_KEY }} # Not required for WarpBuild Runners
          timeout: 600000 # The timeout (in ms) to wait for the Docker Builders to be ready. Defaults to 10 minutes.
```

### Using WarpBuild's Bake Action

WarpBuild provides a drop-in replacement for the widely used `docker/bake-action`. The action automatically sets up WarpBuild's remote Docker builders for you.

> Note: We recommend that you remove the `docker/setup-buildx-action` step from your workflow if you are only using it to set up builders.

```diff
- name: Setup Buildx
- uses: docker/setup-buildx-action@v3

  name: Docker Bake Action
- uses: docker/bake-action@v6
+ uses: Warpbuilds/bake-action@v6 # Uses WarpBuild Docker Builders
  with:
    context: .
    push: true
    tags: user/app:latest
+   profile-name: "super-fast-builder" # Specify the builder profile to use
```

Here's how you can use `Warpbuilds/bake-action` in your workflows, whether you are using WarpBuild runners or non-WarpBuild runners.

```yaml
jobs:
  build:
    runs-on: warp-ubuntu-latest-x64-4x
    steps:
      - uses: actions/checkout@v3
      - name: Bake
        uses: Warpbuilds/bake-action@v6
        with:
          context: .
          push: true
          set: |
            *.tags=user/app:latest
          profile-name: "super-fast-builder"
          api-key: ${{ secrets.WARPBUILD_API_KEY }} # Not required for WarpBuild Runners
          timeout: 600000 # The timeout (in ms) to wait for the Docker Builders to be ready. Defaults to 10 minutes.
```

### Using WarpBuild's Docker Configure Action

For users wanting more control over their workflows, WarpBuild provides a `docker-configure` action. This action sets up the builder in the VM and outputs the builder details, which you can then use in your workflow. Although we recommend `Warpbuilds/build-push-action` as it is easier to use, this action lets you use builders with your own custom steps.

#### With WarpBuild Runners

```diff
jobs:
  build:
    runs-on: warp-ubuntu-latest-x64-4x
    steps:
      - uses: actions/checkout@v4
-     - name: Setup Buildx
-       uses: docker/setup-buildx-action@v3
+     - name: Configure WarpBuild Docker Builders
+       uses: Warpbuilds/docker-configure@v1
+       with:
+         api-key: ${{ secrets.WARPBUILD_API_KEY }} # Not required on WarpBuild Runners
+         profile-name: "super-fast-builder"
+         timeout: 300000 # The timeout (in ms) to wait for the Docker Builders to be ready. Defaults to 5 minutes.
      - name: Custom Build docker image
        run: |
          ...
```

Learn more about the outputs of the `docker-configure` action in the [docker-configure action docs](https://github.com/WarpBuilds/docker-configure/tree/main?tab=readme-ov-file#outputs).

### From CLI

You can use WarpBuild's Docker builders directly from your CLI.

1.
**Set your API key as an environment variable**:

```bash
export WARPBUILD_API_KEY="your-api-key"
```

2. **Assign builders from your profile**:

```bash
# Generate a unique idempotency key (16 characters) - Optional
IDEMPOTENCY_KEY=$(uuidgen | tr -d '-' | cut -c1-16)
BUILDER_NAME="builder-$IDEMPOTENCY_KEY"

# Request a builder assignment
# Note: external_unique_id is optional but recommended for idempotency
RESPONSE=$(curl -s -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $WARPBUILD_API_KEY" \
  -d '{"profile_name": "your-profile-name", "external_unique_id": "'"$IDEMPOTENCY_KEY"'"}' \
  https://api.warpbuild.com/api/v1/builders/assign)

# Get all builder IDs and request IDs (we'll use the first one for this example)
BUILDER_ID=$(echo $RESPONSE | jq -r '.builder_instances[0].id')
REQUEST_ID=$(echo $RESPONSE | jq -r '.builder_instances[0].request_id')
```

3. **Wait for the builder to be ready and get its details**:

```bash
# Poll for builder details until status is ready
echo "Waiting for builder to be ready..."
while true; do
  DETAILS=$(curl -s -H "Authorization: Bearer $WARPBUILD_API_KEY" \
    https://api.warpbuild.com/api/v1/builders/$BUILDER_ID/details)
  STATUS=$(echo $DETAILS | jq -r '.status')

  if [ "$STATUS" = "ready" ]; then
    echo "Builder is ready!"
    break
  elif [ "$STATUS" = "failed" ]; then
    echo "Builder failed to initialize"
    exit 1
  fi

  echo "Builder status: $STATUS. Waiting..."
  sleep 2
done

# Extract connection information
HOST=$(echo $DETAILS | jq -r '.metadata.host')

# Create certificate directory
CERT_DIR="$HOME/.docker/warpbuild/$BUILDER_NAME/$BUILDER_ID"
mkdir -p $CERT_DIR

# Save certificates
echo "$DETAILS" | jq -r '.metadata.ca' > $CERT_DIR/ca.pem
echo "$DETAILS" | jq -r '.metadata.client_cert' > $CERT_DIR/cert.pem
echo "$DETAILS" | jq -r '.metadata.client_key' > $CERT_DIR/key.pem
```

4. **Create a buildx instance with your builder**:

```bash
docker buildx create --name "$BUILDER_NAME" \
  --node "$BUILDER_ID" \
  --driver remote \
  --driver-opt "cacert=$CERT_DIR/ca.pem" \
  --driver-opt "cert=$CERT_DIR/cert.pem" \
  --driver-opt "key=$CERT_DIR/key.pem" \
  --use \
  tcp://$HOST
```

5. **Use the builder for your Docker builds**:

```bash
docker buildx build --builder $BUILDER_NAME -t myimage:latest .
```

You can now use this builder for faster Docker builds directly from your terminal!

6. **Terminate the assigned builders after usage**:

```bash
# Complete the builder session
# To be done for all builder instances (we'll use the first one for this example)
RESPONSE=$(curl -s -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $WARPBUILD_API_KEY" \
  -d '{"request_id": "'"$REQUEST_ID"'", "external_unique_id": "'"$IDEMPOTENCY_KEY"'"}' \
  https://api.warpbuild.com/api/v1/builder-session-requests/complete)

# Remove the buildx builder configuration
docker buildx rm $BUILDER_NAME --force
```

Note: You will be billed for the entire duration until the assigned builders are terminated.

#### Resetting the cache

If you need to reset the cache for a builder profile, use the following steps.

1. Get the builder profile id using the following command. Note that the URL must be quoted so the shell does not interpret the `&` in the query string.

```bash
curl -s -H "Authorization: Bearer $WARPBUILD_API_KEY" \
  "https://api.warpbuild.com/api/v1/builder-profiles?per_page=30&page=1"
```

2. Reset the cache for the builder profile id.
```bash curl -s -X POST \ -H "Authorization: Bearer $WARPBUILD_API_KEY" \ https://api.warpbuild.com/api/v1/builder-profiles/$BUILDER_PROFILE_ID/cache/reset ``` #### Working with multiple builders If your assignment returns multiple builders, you can set up additional nodes: ```bash # For a second builder BUILDER_ID_2=$(echo $RESPONSE | jq -r '.builder_instances[1].id') REQUEST_ID_2=$(echo $RESPONSE | jq -r '.builder_instances[1].request_id') # Poll for builder details until status is ready echo "Waiting for second builder to be ready..." while true; do DETAILS_2=$(curl -s -H "Authorization: Bearer $WARPBUILD_API_KEY" \ https://api.warpbuild.com/api/v1/builders/$BUILDER_ID_2/details) STATUS_2=$(echo $DETAILS_2 | jq -r '.status') if [ "$STATUS_2" = "ready" ]; then echo "Second builder is ready!" break elif [ "$STATUS_2" = "failed" ]; then echo "Second builder failed to initialize" exit 1 fi echo "Builder status: $STATUS_2. Waiting..." sleep 2 done # Extract connection information HOST_2=$(echo $DETAILS_2 | jq -r '.metadata.host') # Create certificate directory CERT_DIR_2="$HOME/.docker/warpbuild/$BUILDER_NAME/$BUILDER_ID_2" mkdir -p $CERT_DIR_2 # Save certificates echo "$DETAILS_2" | jq -r '.metadata.ca' > $CERT_DIR_2/ca.pem echo "$DETAILS_2" | jq -r '.metadata.client_cert' > $CERT_DIR_2/cert.pem echo "$DETAILS_2" | jq -r '.metadata.client_key' > $CERT_DIR_2/key.pem # Append second builder to existing buildx instance docker buildx create --name "$BUILDER_NAME" \ --append \ --node "$BUILDER_ID_2" \ --driver remote \ --driver-opt "cacert=$CERT_DIR_2/ca.pem" \ --driver-opt "cert=$CERT_DIR_2/cert.pem" \ --driver-opt "key=$CERT_DIR_2/key.pem" \ --use \ tcp://$HOST_2 ``` With multiple builders configured, your buildx instance can distribute build workloads more efficiently. ### Multi platform builds WarpBuild supports multi-platform builds for Docker images. You can specify the platforms you want to build for using the `platforms` option in the `docker/build-push-action`. 
> Note: Make sure that the builder profile being used has both architectures enabled in the WarpBuild UI.

```yaml
platforms: linux/amd64,linux/arm64
```

### Deleting a Builder Profile

You can delete a builder profile from the [Docker Builders page](https://app.warpbuild.com/dashboard/runners/builder-profiles). Wait for all builds to finish before deleting a builder profile.

![Delete a builder profile](./img/builder-profiles-delete.png)

## Pricing

Docker builders are billed per **session**. A session is measured from when the builder action starts until the job completes. Multiple concurrent jobs using the same builder profile share one session, billed from the first job's start to the last job's completion.

| Size | Price / min |
| ------------------------------ | ----------- |
| 16vCPU, 32GB RAM, 100GB Disk | $0.06 |
| 32vCPU, 64GB RAM, 200GB Disk | $0.12 |
| 64vCPU, 128GB RAM, 200GB Disk | $0.24 |
| 96vCPU, 192GB RAM, 600GB Disk | $0.36 |
| 192vCPU, 384GB RAM, 600GB Disk | $0.72 |

For multi-arch builds, each architecture runs on a separate builder instance, so a multi-arch build creates two sessions for the same builder profile, one per architecture. These sessions remain active until the post-action steps are complete.

**Example:** Two jobs (J1 and J2) use the same x64 builder profile with overlapping execution:

- J1: builder starts at t1, job completes at t2
- J2: builder starts at t3, job completes at t4
- Timeline: t1 < t3 < t2 < t4 → **Billed as one session** from t1 to t4.

If both jobs use a multi-arch profile, you would have 2 parallel sessions for the builders, one each for x64 and arm64, billed independently.

## F.A.Q.

### How does caching work for concurrent builds?

The cache is shared between concurrent builds, but it is `eventually consistent`. This means that layer caching from one build may not be immediately available to another concurrent build, but will be available for future builds after synchronization occurs.

### Does the builder cache/data have any TTL?

Yes, the builder cache/data has a TTL of 10 days. If the builder profile is not used for more than 10 days, it will be reset automatically.

### Docker Builder is timing out

Docker Builders have a built-in timeout so that users don't get charged for idle builders. We recommend invoking the `WarpBuilds/docker-configure` action just before the `build-and-push` action or any other step that performs a docker build.

### How to use `cache-to` and `cache-from` with Docker Builders

When using Docker Builders, the `cache-to` and `cache-from` options are not required. A cached Docker Builder will automatically cache the layers and reuse them for subsequent builds.

```diff
- name: Older Docker WarpCache Backend
- uses: docker/build-push-action@v6
- with:
-   context: .
-   push: false
-   tags: "alpine/warpcache:latest"
-   cache-from: type=gha,url=http://127.0.0.1:49160/
-   cache-to: type=gha,url=http://127.0.0.1:49160/,mode=max
+ name: New WarpBuild Docker Builders
+ uses: Warpbuilds/docker-configure@v1
+ with:
+   profile-name: "super-fast-builder"
+ name: Build and push
+ uses: docker/build-push-action@v6
+ with:
+   context: .
+   push: false
+   tags: "alpine/warpcache:latest"
```

### Is the size of the Docker machine related to the GitHub runner?

No, the size of the Docker machine is not limited by or related to the size of the GitHub runner used. It is determined by the size of the builder profile that you have selected.

### Do I get charged for both the GitHub runner and the WarpBuild Docker builder runtime?

Yes. These are two separate, independent resources and you will be charged for both.
### `exec format error` while building for arm64 / multi-platform builds

This usually means the required architectures are not selected in your builder profile. Ensure the profile has both architectures enabled.

---

# BYOC

URL: https://www.warpbuild.com/docs/ci/snapshot-runners/byoc

Description: Blazing fast GitHub Action Runners, hosted on WarpBuild's cloud

---
title: "BYOC"
excerpt: "Blazing fast GitHub Action Runners, hosted on WarpBuild's cloud"
description: "Blazing fast GitHub Action Runners, hosted on WarpBuild's cloud"
hidden: false
sidebar_position: 3
slug: "/snapshot-runners/byoc"
createdAt: "2024-09-30"
updatedAt: "2024-09-30"
---

To use snapshots with your BYOC custom runners, ensure that the following quotas are available in your cloud account.

## AWS

| Quota | Recommended Limit | Notes |
| --- | --- | --- |
| **Concurrent snapshot copies per destination Region** | 50 | Increase this if multiple snapshots are created in the same region in parallel. |
| **CompleteSnapshot requests per account** | 100 | Increase this if multiple snapshots are created in an account across regions in parallel. |
| **Concurrent snapshots per General Purpose SSD (gp3) volume** | 100 | We use gp3 for the EBS SSDs, so we recommend increasing this to a larger value. |
| **Pending snapshots per account** | 100 | Required for cases where AWS receives a snapshot request and marks it as pending. |
| **GetSnapshotBlock requests per account** | 5,000 per second | Used for EBS creation and snapshot creation. |
| **PutSnapshotBlock requests per account** | 5,000 per second | Used during snapshot creation to write the data blocks from EBS to the snapshot. |
| **Snapshots per Region** | 100,000 | |
| **StartSnapshot** | 25 per second | The number of simultaneous requests to start creating snapshots. |
| **Storage for General Purpose SSD (gp3) volumes, in TiB** | 100 | If EC2 instances outside of WarpBuild also use gp3 storage, we recommend increasing the storage limit based on your requirements. |
| **AMIs** | 50,000 | The maximum number of public and private AMIs allowed per Region. These include available, pending, and disabled AMIs, and AMIs in the Recycle Bin. |

## TTL

Snapshots are retained for a maximum of 15 days. After this period, the snapshot is automatically deleted.

---

# Snapshot Runners

URL: https://www.warpbuild.com/docs/ci/snapshot-runners

Description: Blazing fast GitHub Action Runners, hosted on WarpBuild's cloud

---
title: "Snapshot Runners"
excerpt: "Blazing fast GitHub Action Runners, hosted on WarpBuild's cloud"
description: "Blazing fast GitHub Action Runners, hosted on WarpBuild's cloud"
icon: HardDrive
createdAt: "2024-09-30"
updatedAt: "2024-09-30"
---

WarpBuild allows you to take snapshots of your runner VMs at any point during your workflow, enabling faster consecutive runs by reusing these snapshots. Snapshots are temporary and are deleted after 15 days.

## Prerequisites

- **Supported Platforms:** The Snapshot Runners feature is currently supported only on WarpBuild Linux runners.
- **Unsupported Platforms:** BYOC-based runners, container-based runner images, and Mac runners are not supported.

## Limitations

- The **/tmp** directory will not persist state, since it is cleaned on reboots.
- **Windows runners** do not support snapshots.

## Usage

First, enable snapshots for specific runners from the [dashboard](https://app.warpbuild.com/ci).
To use WarpSnapshot in your workflow, add the following step to your `.github/workflows/{workflow_name}.yml` file, preferably at the end of the job:

```yaml
jobs:
  build:
    runs-on: warp-ubuntu-latest-x64-2x;snapshot.key=unique-snapshot-alias
    steps:
      - name: Checkout code
        uses: actions/checkout@v5

      # Add your build and test steps here

      - name: Create snapshot
        uses: WarpBuilds/snapshot-save@v1
        with:
          alias: "unique-snapshot-alias"
          fail-on-error: true
          wait-timeout-minutes: 60
```

Invoking the action creates a snapshot of the runner. To use the snapshot in subsequent runs, specify the snapshot alias in the `runs-on` field of the job as shown above. If the runner machine is created from a snapshot, it will have an environment variable `WARPBUILD_SNAPSHOT_KEY` set to the alias of the snapshot.

### Inputs

- **alias** (Required): A unique alias for the snapshot, helping you easily identify and manage your snapshots.
- **fail-on-error** (Optional): If set to `true`, the action will fail if an error occurs during snapshot creation. Default is `true`.
- **wait-timeout-minutes** (Optional): The maximum time (in minutes) to wait for the snapshot to be created. Default is `30` minutes.

### Conditional Snapshot Usage

You can conditionally use snapshot runners by configuring the `runs-on` field in your workflow. GitHub Actions expressions do not support a ternary operator, so the `&&`/`||` idiom below is the standard way to express the condition:

```yaml
jobs:
  build:
    runs-on: ${{ contains(github.event.head_commit.message, '[warp-no-snapshot]') && 'warp-ubuntu-latest-x64-2x' || 'warp-ubuntu-latest-x64-2x;snapshot.key=unique-snapshot-alias' }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v5

      # Add your build and test steps here

      - name: Create snapshot
        uses: WarpBuilds/snapshot-save@v1
        with:
          alias: "unique-snapshot-alias"
```

### Advanced Conditional Logic

For more complex scenarios, such as determining whether to use a standard or snapshot runner based on branch protection or other conditions, use the following setup:

```yaml
jobs:
  determine-runner:
    runs-on: ubuntu-latest
    outputs:
      runner: ${{ steps.set-runner.outputs.runner }}
    steps:
      - name: Determine Branch Protection
        id: set-runner
        run: |
          branch=$(echo "${{ github.ref }}" | sed 's|refs/heads/||')
          echo "Branch: $branch"
          response=$(curl -s -o /dev/null -w "%{http_code}" \
            -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
            -H "Accept: application/vnd.github.v3+json" \
            "https://api.github.com/repos/${{ github.repository }}/branches/$branch/protection")
          if [ "$response" -eq 200 ]; then
            echo "Branch is protected"
            echo "runner=warp-ubuntu-latest-x64-8x;snapshot.key=unique-snapshot-alias" >> $GITHUB_OUTPUT
          else
            echo "Branch is not protected"
            echo "runner=warp-ubuntu-latest-x64-8x" >> $GITHUB_OUTPUT
          fi

  build:
    needs: determine-runner
    runs-on: ${{ needs.determine-runner.outputs.runner }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v5

      # Add your build and test steps here

      - name: Create snapshot
        uses: WarpBuilds/snapshot-save@v1
        with:
          alias: "unique-snapshot-alias"
```

### Cleanup Script

It is highly recommended to include a cleanup step that removes credentials and other sensitive information before creating a snapshot. This can be done by adding a cleanup script before the snapshot step:

```yaml
jobs:
  build:
    runs-on: warp-ubuntu-latest-x64-2x;snapshot.key=unique-snapshot-alias
    steps:
      - name: Checkout code
        uses: actions/checkout@v5

      # Add your build and test steps here

      - name: Cleanup VM
        run: |
          rm -rf $HOME/.ssh
          rm -rf $HOME/.aws

      - name: Create snapshot
        uses: WarpBuilds/snapshot-save@v1
        with:
          alias: "unique-snapshot-alias"
          fail-on-error: true
          wait-timeout-minutes: 60
```

#### Common cleanup commands

**Remove untracked files and directories:** It can be useful to remove secret files that were added during the job before taking a snapshot.

```bash
git clean -ffdx
```

- _git clean_: removes untracked files from the local git repo.
- _-f (force)_: forces the removal of files and directories, even when `git config clean.requireForce true` is set.
- _-f (second force)_: also removes untracked nested git repositories (directories containing a `.git` subdirectory), which a single `-f` leaves alone.
- _-d (directories)_: removes directories, not just files.
- _-x (ignore .gitignore)_: also removes files and directories that are ignored by git.

## Security

### Public Repositories

When using public repositories, ensure that no sensitive information (such as cloud credentials) is stored in the snapshot. This is crucial as others may access the snapshot using the alias in a PR workflow run.

### Private Repositories

WarpBuild provisions runners at the organization level, and GitHub may allocate a runner intended for snapshot jobs to different jobs within the organization. This could expose sensitive information to other users in the organization. It is recommended to use the cleanup script to remove sensitive data before creating a snapshot.

## Additional Notes

- Snapshot runners are not supported on BYOC runners.
- Boot times for snapshot-based runners can be slower than the default runners, taking 45-60s.
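If a job can run on both fresh and snapshot-restored runners, the `WARPBUILD_SNAPSHOT_KEY` variable mentioned in the Usage section can gate expensive setup work. A minimal sketch, where `setup_mode` and the install command are placeholder names, not part of WarpBuild's tooling:

```shell
# Sketch: gate expensive setup on whether the runner was restored from a
# snapshot. WarpBuild sets WARPBUILD_SNAPSHOT_KEY on snapshot-restored
# runners; setup_mode and the install step below are placeholders.
setup_mode() {
  if [ -n "${WARPBUILD_SNAPSHOT_KEY:-}" ]; then
    # Snapshot hit: cached toolchains/dependencies are already on disk
    echo "restore:${WARPBUILD_SNAPSHOT_KEY}"
  else
    # Cold runner: run the full setup
    echo "fresh"
  fi
}

if [ "$(setup_mode)" = "fresh" ]; then
  echo "Installing dependencies..."  # e.g. npm ci, pip install -r requirements.txt
fi
```

Run this as an early step so later steps can assume dependencies are present either way.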
---

# Action-Debugger

URL: https://www.warpbuild.com/docs/ci/tools/action-debugger

Description: Use Action-Debugger by WarpBuild to SSH into your GitHub Actions for rapid debugging

---
title: "Action-Debugger"
slug: "action-debugger"
description: "Use Action-Debugger by WarpBuild to SSH into your GitHub Actions for rapid debugging"
sidebar_position: 1
updatedAt: "2024-07-23"
---

A common pain point we have encountered while using GitHub Actions extensively is the inability to debug a running action. If an action fails, we have to rely on the logs generated by its steps and keep re-running the action to see if something changes. This leads to frustrating trial-and-error debugging.

To handle this, we built Action-Debugger. Action-Debugger lets you SSH into a running GitHub Action to debug it. It is as simple as adding a single line to your workflow. Action-Debugger is a free-to-use, open-source GitHub action that can be plugged into any GitHub workflow to make it easy to debug.

![Example WarpBuild Syntax](./img/action-debugger/code.gif)

A workflow's execution pauses as soon as the Action-Debugger step is invoked. While it is paused, an SSH session is started on the runner machine, and the SSH URL is printed in the action logs and as a check in the corresponding GitHub run. Action-Debugger keeps the action paused on that step until a user connects to and exits the session.

![Getting URL from GitHub Checks](./img/action-debugger/url.gif)

Below are some additional features of the action that might come in handy while debugging.

### Security

By default, if the GitHub user that triggers the action has added SSH keys to their account, then only they are allowed to connect to the session. **_Otherwise, anyone on the internet is able to connect to that session if they have/guess the generated SSH URL._** This restriction can also be enforced explicitly by setting the option `limit-access-to-actor` to `true`.

```yaml
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup interactive ssh session
        uses: Warpbuilds/action-debugger@v1.3
        with:
          limit-access-to-actor: true
```

### Only pause on failure

This makes Action-Debugger pause the workflow execution only when one of the previous steps fails.

```yaml
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup interactive ssh session
        if: ${{ failure() }}
        uses: Warpbuilds/action-debugger@v1.3
```

### Run in detached mode

The default behavior of Action-Debugger is to pause the workflow as soon as it is invoked. The workflow resumes after the SSH session is ended by the user. When `detached` is set to `true`, all the steps of the workflow execute as normal and the action pauses at the end of the job instead.

```yaml
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup interactive ssh session
        uses: Warpbuilds/action-debugger@v1.3
        with:
          detached: true
```

### Timeouts

A custom timeout can be specified to close the SSH session automatically after the specified time. By default, GitHub Actions kills workflows after 6 hours. We recommend setting a default timeout, as runner minutes are billed while the debug session is running; accidentally leaving an action running might incur unexpected costs.

```yaml
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup interactive ssh session
        uses: Warpbuilds/action-debugger@v1.3
        timeout-minutes: 15
```

### Deterministic SSH URLs (Named sessions)

(This feature requires an API key from WarpBuild. Please contact WarpBuild Support.)

SSH URLs produced by Action-Debugger are long random strings, uniquely generated every time an action is run. This keeps anyone from guessing the string and connecting to the runner machine. However, there can be cases where you want to keep the URL the same across action runs. This can be achieved using named sessions. Just input `named-session-name` and `named-session-api-key`, and the action will always generate the SSH URL in the form of `/@gha.warp.build`. Make sure that `limit-access-to-actor` is set to `true` for named sessions.

```yaml
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup interactive ssh session
        uses: Warpbuilds/action-debugger@v1.3
        with:
          limit-access-to-actor: true
          named-session-name: random-string-2345ab
          named-session-api-key:
```

### Troubleshooting

#### SSH URL is not appearing in checks of my action run

To create a check in the repo, Action-Debugger requires write permission on the checks scope. Please make sure that `read and write permissions` are enabled under `Workflow permissions`. This may need to be set at the organization level instead of the repository level.

![GitHub workflow permissions](./img/action-debugger/github-workflow-settings.png)

More info about these permissions can be found here: [GitHub token permissions](https://docs.github.com/en/organizations/managing-organization-settings/disabling-or-limiting-github-actions-for-your-organization#setting-the-permissions-of-the-github_token-for-your-organization)

#### I'm seeing a blank screen when I connect to the SSH session

This is likely because of `tmux`. `ctrl+c` will drop you into a shell. However, this may disable the multiplexing feature of `tmux`.

### Acknowledgement

This project owes much to the great work done by [tmate.io](https://tmate.io).
---

# Tools

URL: https://www.warpbuild.com/docs/ci/tools

Description: Tools to improve build engineering efficiency by WarpBuild

---
title: "Tools"
excerpt: "Tools by WarpBuild"
description: "Tools to improve build engineering efficiency by WarpBuild"
icon: Wrench
createdAt: "2023-11-07"
updatedAt: "2024-07-23"
---

In our mission to make build engineering more efficient, we have developed a number of tools to help us and our customers, and we are making these tools available to the community.

## Action-Debugger

[Action-Debugger](/docs/ci/tools/action-debugger) is a tool that allows you to debug your GitHub Actions.

---

# Architecture

URL: https://www.warpbuild.com/docs/ci/byoc/aws/architecture

Description: Architecture and security

---
title: "Architecture"
excerpt: "Architecture and security"
description: "Architecture and security"
hidden: false
sidebar_position: 3
slug: "/byoc/aws/architecture"
createdAt: "2024-07-23"
updatedAt: "2024-07-23"
---

## Architecture

Here are the high-level details of the architecture:

![Architecture Diagram](./img/architecture/architecture.png)
Mermaid diagram

```mermaid
graph TD
  Internet((Internet))
  IGW[Internet Gateway]
  VPC[VPC]
  PublicSubnet[Public Subnet]
  PrivateSubnet[Private Subnet]
  EC2[EC2 Instances]
  S3[S3 Bucket]
  S3GW[S3 Gateway Endpoint]
  SG[Security Group]
  NATGW[NAT Gateway<br/>Static IP]
  PublicRT[Public Route Table]
  PrivateRT[Private Route Table]

  Internet --> IGW
  IGW --> VPC
  VPC --> PublicSubnet
  VPC --> PrivateSubnet
  PublicSubnet --> EC2
  PrivateSubnet --> EC2
  VPC --> S3GW
  S3GW --> S3
  SG -.-> EC2
  NATGW --> Internet
  PrivateSubnet --> NATGW
  PublicRT -.-> PublicSubnet
  PrivateRT -.-> PrivateSubnet
```
Ensure the recommendations from the [Configuration and best practices](/docs/ci/byoc/aws/config) page are followed for secure and robust infrastructure.

---

# Config

URL: https://www.warpbuild.com/docs/ci/byoc/aws/config

Description: Configuration and best practices for setting up AWS with WarpBuild

---
title: "Config"
excerpt: "Configuration and best practices for setting up AWS with WarpBuild"
description: "Configuration and best practices for setting up AWS with WarpBuild"
hidden: false
sidebar_position: 1
slug: "/byoc/aws/config"
createdAt: "2024-07-23"
updatedAt: "2024-07-23"
---

## Prerequisites

Here's a checklist of things to have set up on AWS when getting started:

### ✅ Cloudformation permissions

The user must be able to run a CloudFormation stack. WarpBuild provisions an IAM role and requires these [permissions](#permissions).

### ✅ VPC and subnets

The VPC must have at least one public and one private subnet. The subnets must have internet connectivity, and the private subnets must have a NAT gateway. Runners with static IPs use the IPs of the NAT gateway as their external IP addresses.

Recommendations:

1. Have three public and three private subnets in different availability zones. This maximizes the availability of instance types and robustness.
1. Each subnet must have enough IPs to accommodate the maximum number of runners you want to run concurrently. The minimum recommended number of IPs per subnet is 250.

### ✅ Security groups

Security groups act as a virtual firewall for your EC2 instances. For WarpBuild:

1. Create a security group that blocks all inbound traffic. This ensures that no unauthorized access to the runners is possible.
1. Ensure outbound traffic is allowed to all destinations. This can be fine-tuned per your organization's security policy, but needs to allow runners to connect to the WarpBuild servers, the GitHub API, and other locations (such as package managers).

### ✅ Network Routes

For optimal network routing in your WarpBuild setup:

1. Configure your VPC route tables to direct internet-bound traffic through the Internet Gateway.
1. Ensure proper routes are in place for outbound internet access via NAT Gateways in private subnets.
1. Optionally, block all access between instances in the same subnet. Runner instances can be isolated and do not need to communicate with each other.
1. Optionally, consider implementing VPC peering or AWS Transit Gateway. This is a simple and secure way to access services in other VPCs, accounts, or private subnets.

### ✅ `s3` bucket

1. Set up an `s3` gateway endpoint for the VPC to allow the runners to connect to the `s3` bucket without incurring data transfer charges.
1. Ensure the `s3` bucket is in the same region as the CI stack created.
1. The `s3` bucket is used for:
   - Cache: `//artifact_cache/////`
   - Telemetry: `/runner/logs/all/.`

Set up the `s3` lifecycle policy for managing the cache and telemetry data according to your organization's policy. 7 days retention is recommended.

### ✅ ECR (Elastic Container Registry)

WarpBuild automatically configures VPC endpoints for ECR operations from both public and private subnet runners.

**Important Notes:**

- Stacks created with CloudFormation template versions prior to v1.4 may experience ECR authentication failures (ECR login timeouts) for runners in public subnets.
- If you encounter ECR login issues with public subnet runners, upgrade your stack to template v1.4 via the WarpBuild dashboard.

For more details on ECR VPC endpoints, refer to the [AWS ECR VPC Endpoints documentation](https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html).

### ✅ Quotas

Ensure that there are enough IPs, EBS volume capacity, and vCPUs available in the region for the selected instance types.

## Cloud Connection

Creating a cloud connection sets up an IAM role with the permissions required by WarpBuild Stacks and runners.

### Permissions
Expand to see the IAM role permissions

```yaml
Policies:
  - PolicyName: "FineGrainedEC2Permissions"
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - "ec2:DescribeInstanceTypes"
            - "ec2:DescribeInstanceTypeOfferings"
            - "ec2:DescribeInstances"
            - "ec2:RunInstances"
            - "ec2:CreateFleet"
            - "ec2:RequestSpotInstances"
            - "ec2:CancelSpotInstanceRequests"
            - "ec2:DescribeSpotInstanceRequests"
            - "ec2:DescribeSpotPriceHistory"
            - "ec2:CreateLaunchTemplate"
            - "ec2:DeleteLaunchTemplate"
            - "ec2:ModifyLaunchTemplate"
            - "ec2:TerminateInstances"
            - "ec2:CreateImage"
            - "ec2:DeregisterImage"
            - "ec2:DescribeImages"
            - "ec2:CreateTags"
            - "ec2:DeleteTags"
          Resource: "*"
  - PolicyName: "SpotServiceLinkedRolePermissions"
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - "iam:CreateServiceLinkedRole"
            - "iam:DeleteServiceLinkedRole"
            - "iam:GetServiceLinkedRoleDeletionStatus"
            - "iam:AttachRolePolicy"
            - "iam:PutRolePolicy"
            - "iam:PassRole"
          Resource: "arn:aws:iam::*:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot"
  - PolicyName: "NetworkPermissions"
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - "ec2:DescribeRegions"
            - "ec2:DescribeVpcs"
            - "ec2:DescribeSubnets"
            - "ec2:DescribeRouteTables"
            - "ec2:DescribeSecurityGroups"
            - "ec2:DescribeInternetGateways"
            - "ec2:CreateVpcEndpoint"
            - "ec2:DeleteVpcEndpoints"
          Resource: "*"
  - PolicyName: "StoragePermissions"
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - "ec2:CreateSnapshot"
            - "ec2:DeleteSnapshot"
            - "ec2:DescribeSnapshots"
            - "s3:ListBucket"
            - "s3:GetBucketLocation"
            - "s3:PutLifecycleConfiguration"
            - "s3:GetLifecycleConfiguration"
            - "s3:DeleteObject"
            - "s3:PutObject"
            - "s3:GetObject"
            - "s3:CreateBucket"
            - "s3:DeleteBucket"
            - "s3:ListBucketVersions"
            - "s3:ListBucketMultipartUploads"
            - "s3:AbortMultipartUpload"
            - "s3:PutObjectAcl"
            - "s3:GetObjectAcl"
            - "s3:PutBucketAcl"
            - "s3:GetBucketAcl"
            - "s3:PutBucketPolicy"
            - "s3:GetBucketPolicy"
            - "s3:DeleteBucketPolicy"
          Resource: "*"
```
The IAM role may require updates when new WarpBuild features need additional permissions for Stacks and runners.

### Permission Customization

If you need to customize the permissions for your specific requirements, there are two approaches available:

1. **Modify CloudFormation Template**: Modify the CloudFormation template on the AWS redirect page before applying the connection role creation stack.
2. **Modify Role After Creation**: Modify the permissions for the created role after it's provisioned. The role follows the format: `warpbuild-`.

You can use the `managed-by: warpbuild` tag to control access to WarpBuild-managed resources using [AWS IAM tag-based access control](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html).

### Default Policy Contexts

The IAM permissions are organized into the following policy contexts:

- **FineGrainedEC2Permissions, SpotServiceLinkedRolePermissions**: Used to launch JIT (Just-In-Time) runners, manage spot permissions, query the instance types and offerings available to the account/region/AZ for launching runners, create custom runner configurations and launch templates, describe instances using the `Name` tag, and attach roles to runners.
- **NetworkPermissions**: Permissions used to support stack creation (import mode). Region listing is also used in create mode.
- **StoragePermissions**: Used for the runner flow with the WarpBuild cache action and for pushing runner system logs. Also used to support stack creation (import mode).
- **CloudFormationPermissions**: Used to initiate CloudFormation changeset requests for connection and stack (create mode) upgrades.
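As an illustration of the tag-based access control mentioned above, a statement like the following could restrict a destructive action to WarpBuild-managed resources only. This is a hypothetical sketch, not part of WarpBuild's provisioned role; verify the tag key against your own resources before relying on it:

```yaml
# Hypothetical customization: allow terminating only instances carrying
# the managed-by tag that WarpBuild adds to provisioned resources.
- PolicyName: "TerminateWarpBuildInstancesOnly"
  PolicyDocument:
    Version: "2012-10-17"
    Statement:
      - Effect: Allow
        Action:
          - "ec2:TerminateInstances"
        Resource: "*"
        Condition:
          StringEquals:
            "aws:ResourceTag/managed-by": "warpbuild"
```

The `aws:ResourceTag` condition key evaluates tags on the resource the request targets, so untagged instances are left outside the grant.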
## Resource Naming Conventions

WarpBuild follows consistent naming patterns for AWS resources:

| Resource Type | Naming Pattern |
| ---------------- | ------------------ |
| S3 Buckets | `warpbuild-*` |
| EC2 Instances | `warp-*` |
| EBS Volumes | `warp-*` |
| Launch Templates | `tmpl-warp-*` |

## Stack

![Create Stack](./img/config/create-stack.png)

Creating a stack imports the infra configuration provided and uses the `s3` bucket for cache and telemetry. The stack name, `s3` bucket, and region cannot be changed after creation.

### Tags

Users can specify custom tags. Tags provided here are added to all resources created by the stack and can be used for cost attribution and resource management. WarpBuild automatically adds the `managed-by: warpbuild` tag to all provisioned resources. For WarpBuild stack resources, this tagging is available starting from version 1.3 in create mode, while runners are tagged in all cases.

By default, the following tags are added to all resources created by the stack:

| key | value |
| ----------------------- | --------------------------------------- |
| warpbuild-managed-by | warpbuild |
| Name | `{runner-id}` |
| warpbuild-github-org | `{github-org}` |
| warpbuild-runner-labels | `{runner-label1}, {runner-label2}, ...` |
| warpbuild-runner-id | `{runner-id}` |
| warpbuild-stack-id | `{stack-id}` |
| warpbuild-stack-name | `{stack-name}` |

## Attach IAM roles

You can specify IAM roles to attach to the runner instances. This is useful if you want to use a custom IAM role with specific permissions. This can be set at two levels:

1. Stack level: all runners created using this Stack use the IAM role provided, but it can be overridden at the runner level.
1. Runner level: inherits the stack-level IAM role by default, but can be overridden.

This is very useful when runners need fine-grained permissions to access specific AWS resources.
More details on how to attach IAM roles to the runners can be found [here](./instance-profile.mdx).

## Custom Runners

![Custom Runners](./img/config/create-custom-runner.png)

1. Spot instances are useful for short jobs that can be interrupted, and can lead to significant (~70%) cost savings.
2. One or more instance types can be chosen in priority order. The Github workflow uses a single runner label, but the instance type is picked based on availability.
3. The minimum disk configurations are:
   - Size: `100GB`
   - Throughput: `125MBps`
   - IOPS: `3000`

### Set Instance Metadata Service Version 2 (IMDS v2) to required

By default, both IMDS v1 and v2 are supported for interacting with the metadata service. You can switch your runners to only use IMDS v2. To do so, go to the Update Runner page > 'Runner Specs' section > set 'Require IMDSv2' to selected. This configuration is also available when creating the runner. More details on IMDS can be found [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html).

![IMDSv2 Required](./img/config/imdsv2.png)

### Best practices:

1. Use multiple instance types, especially when using spot instances, to ensure availability and keep jobs from getting stuck in the queue.
2. When using multiple instance types, choose instance types that are similar in price and performance.
3. Choose a minimum disk configuration of:
   - Size: `150GB`
   - Throughput: `400MBps`
   - IOPS: `4000`

### Windows Runners Minimum Infrastructure Requirements

For Windows Server 2022 x86-64 runners on AWS, the following minimum infrastructure is recommended:

- **Instance Type**: at least 8x vCPU (m7a series recommended)
- **Disk Configuration**:
  - IOPS: `6000`
  - Throughput: `500 MBps`

These requirements ensure optimal performance for Windows-based CI/CD workloads and provide sufficient resources for typical Windows build and test scenarios.
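As a quick sanity check for the IMDSv2-only setting described in the section above, you can query the metadata service from a workflow step. This is a sketch using the standard EC2 metadata endpoint; with IMDSv2 required, tokenless v1 requests are rejected:

```shell
# Fetch an IMDSv2 session token, then use it to read instance metadata.
# Without the token header, requests fail (HTTP 401) when IMDSv2 is required.
TOKEN=$(curl -sS -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -sS -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/instance-id"
```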
### Coming Soon

- Instances with LocalSSD

---

# AWS

URL: https://www.warpbuild.com/docs/ci/byoc/aws

Description: Github actions runners on your AWS account, managed by WarpBuild

---
title: "AWS"
excerpt: "Github actions runners on your AWS account, managed by WarpBuild"
description: "Github actions runners on your AWS account, managed by WarpBuild"
hidden: false
sidebar_position: 1
slug: "/byoc/aws"
createdAt: "2024-07-23"
updatedAt: "2024-07-23"
---

Connect your AWS account to WarpBuild to run Github Actions runners. Enable Github Actions workflows to run on your own infrastructure and save 90% on your build costs.

## Quotas

The runner resources are created in the BYOC AWS account. To function correctly, here are some guidelines on the `additional` quotas required, per stack. We assume that the number of concurrently running jobs is `$CON`, say 1000.

| Resource | Quota | Notes | URL |
| ------------- | ------------------------ | ----- | --- |
| EC2 Instances | `$CON` \* `vCPU per Job` | Adjust for instance type, spot, and on-demand instances | [Adjust Quota](https://us-east-1.console.aws.amazon.com/servicequotas/home/services/ec2/quotas) |
| EBS Volumes | `$CON` \* `DISK_TB` | Adjust provisioned IOPS if needed. | [Adjust Quota](https://us-east-1.console.aws.amazon.com/servicequotas/home/services/ebs/quotas/L-7A658B76) |
| Elastic IPs | 3 + `$CON` | 3 static Elastic IPs are required for NAT gateways in 3 AZs. One EIP is attached to each concurrently running job. | [Adjust Quota](https://us-east-1.console.aws.amazon.com/servicequotas/home/services/ec2/quotas/L-0263D0A3) |
| S3 Buckets | 1 | The same bucket is used for artifact cache, container layer caches, and telemetry data. | [Adjust Quota](https://us-east-1.console.aws.amazon.com/servicequotas/home/services/s3/quotas) |
| NAT Gateways | 3 | One per availability zone | [Adjust Quota](https://us-east-1.console.aws.amazon.com/servicequotas/home/services/vpc/quotas/L-FE5A380F) |
| VPCs | 1 | One VPC is needed | [Adjust Quota](https://us-east-1.console.aws.amazon.com/servicequotas/home/services/vpc/quotas/L-F678F1CE) |

The quotas need to be applied in the region where the stack is created. Change the region in the AWS console while editing the quotas.

This is not an exhaustive list. Please reach out to [support@warpbuild.com](mailto:support@warpbuild.com) with any questions, or reach out on chat.

## Windows Support

Windows Server 2022 x86-64 runners are supported on AWS. These don't support Hyper-V features like the equivalent Azure instances do, since AWS uses a different hypervisor.
For a full list of tools, refer to the [preinstalled software page](/docs/ci/preinstalled-software#windows-server-2022-x86-64).

## Resources:

- [Configuration and best practices](/docs/ci/byoc/aws/config)
- [Architecture and security](/docs/ci/byoc/aws/architecture)

---

# Instance Profile

URL: https://www.warpbuild.com/docs/ci/byoc/aws/instance-profile

Description: Attach IAM Instance Profile to EC2 runners

---
title: "Instance Profile"
excerpt: "Attach IAM Instance Profile to EC2 runners"
description: "Attach IAM Instance Profile to EC2 runners"
hidden: false
sidebar_position: 2
slug: "/byoc/aws/instance-profile"
createdAt: "2025-01-19"
updatedAt: "2025-01-19"
---

## Prerequisites

Here's a checklist of things to have set up on AWS when getting started:

### ✅ AWS IAM Instance Profile

Create an IAM instance profile and a role attached to the instance profile. Here's how:

- [AWS EC2 IAM roles](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
- [AWS IAM Instance Profiles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html)

### ✅ WarpBuild Integration IAM role name

Fetch the IAM role name for the runner from the WarpBuild connection page: [WarpBuild Connections](https://app.warpbuild.com/dashboard/byoc)

WarpBuild Role Name Format: `warpbuild-`

## Setup Permissions

Execute the command below to grant the `iam:PassRole` permission to the `warpbuild-` role.
```bash
aws iam put-role-policy \
  --role-name <warpbuild-role-name> \
  --policy-name PassRolePolicy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": "iam:PassRole",
        "Resource": "<instance-profile-role-arn>",
        "Condition": {
          "StringEquals": {
            "iam:PassedToService": "ec2.amazonaws.com"
          }
        }
      }
    ]
  }'
```

To verify the policy is attached, run the command below:

```bash
aws iam simulate-principal-policy \
  --policy-source-arn <warpbuild-role-arn> \
  --action-names iam:PassRole \
  --resource-arns <instance-profile-role-arn> \
  --context-entries ContextKeyName=iam:PassedToService,ContextKeyType=string,ContextKeyValues=ec2.amazonaws.com
```

## Attach IAM roles to the runners

Use the Instance Profile ARN while configuring the stack to apply it to all runners in the stack. You can also override this in the Custom Runner configuration.

---

# Config

URL: https://www.warpbuild.com/docs/ci/byoc/azure/config

Description: Configuration and best practices for setting up Azure with WarpBuild

---
title: "Config"
excerpt: "Configuration and best practices for setting up Azure with WarpBuild"
description: "Configuration and best practices for setting up Azure with WarpBuild"
hidden: false
sidebar_position: 1
slug: "/byoc/azure/config"
createdAt: "2024-11-07"
updatedAt: "2024-11-07"
---

## Prerequisites: Permissions

The user must have the [Privileged Role Administrator](https://learn.microsoft.com/en-us/entra/identity/enterprise-apps/grant-admin-consent?pivots=portal#prerequisites) role for setup.

## Cloud Connection

![Create Connection](./img/config/create-connection.png)

Creating a cloud connection sets up consent for the WarpBuild CI Enterprise application with the permissions required by WarpBuild to manage runners. Provide the tenant ID and subscription ID to verify the connection after the consent and permission configuration (ARM deployment) is complete.

## Stack

![Create Stack](./img/config/create-stack.png)

Creating a stack creates the infra configuration provided and uses the `storage account and container` for cache and telemetry.
![Create Stack](./img/config/create-stack-pending.png)

The stack name, `storage account and container`, and region cannot be changed after creation.

## Custom Runners

![Custom Runners](./img/config/create-custom-runner.png)

1. Spot instances are useful for short jobs that can be interrupted, and can lead to significant (~70%) cost savings.
2. One or more instance types can be chosen in priority order. The Github workflow uses a single runner label, but the instance type is picked based on availability.
3. The minimum disk configurations are:
   - Size: `256GB`

### Best practices:

1. Choose a minimum disk configuration of:
   - Size: `P20`

Throughput and IOPS are automatically managed by Azure. Refer: https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssd-size

## Limitations

1. BYOC Azure does not support the import flow for stack creation.
2. Snapshot-based runners are not available for BYOC Azure.
3. BYOC Azure is currently only enabled for East US. To add more regions, please reach out to support@warpbuild.com.

## Coming Soon

- Resource tagging

---

# Azure

URL: https://www.warpbuild.com/docs/ci/byoc/azure

Description: Github actions runners on your Azure Subscription, managed by WarpBuild

---
title: "Azure"
excerpt: "Github actions runners on your Azure Subscription, managed by WarpBuild"
description: "Github actions runners on your Azure Subscription, managed by WarpBuild"
hidden: false
sidebar_position: 3
slug: "/byoc/azure"
createdAt: "2024-11-07"
updatedAt: "2024-11-07"
---

Connect your Azure Subscription to WarpBuild to run Github Actions runners. Enable Github Actions workflows to run on your own infrastructure and save 90% on your build costs.

## Quotas

The runner resources are created in the BYOC Azure Subscription. To function correctly, here are some guidelines on the `additional` quotas required, per stack. We assume that the number of concurrently running jobs is `$CON`, say 1000.
| Resource | Quota | Notes | URL |
| --------------------------------------- | ------------------------ | ----- | --- |
| CPU | `$CON` \* `vCPU per Job` | Adjust for machine type, preemptible, and on-demand instances | [View and Adjust Quota](https://portal.azure.com/#view/Microsoft_Azure_Capacity/QuotaMenuBlade/~/overview) |
| Persistent Disks | `$CON` \* `DISK_TB` | Adjust provisioned IOPS if needed. | [View and Adjust Quota](https://portal.azure.com/#view/Microsoft_Azure_Capacity/QuotaMenuBlade/~/overview) |
| In-use regional external IPv4 addresses | 3 + `$CON` | 1 static IP is required for NAT in a stack. One static IP is attached to each concurrently running job. | [View and Adjust Quota](https://portal.azure.com/#view/Microsoft_Azure_Capacity/QuotaMenuBlade/~/overview) |
| Object Storage | 1 | The same storage container is used for artifact cache, container layer caches, and telemetry data. | [View and Adjust Quota](https://portal.azure.com/#view/Microsoft_Azure_Capacity/QuotaMenuBlade/~/overview) |
| NAT | 3 | One per stack | [View and Adjust Quota](https://portal.azure.com/#view/Microsoft_Azure_Capacity/QuotaMenuBlade/~/overview) |
| Networks | 1 | One Network (Vnet) is needed | [View and Adjust Quota](https://portal.azure.com/#view/Microsoft_Azure_Capacity/QuotaMenuBlade/~/overview) |
| Subnetworks | 2 | One public and one private subnetwork are needed per stack. | [View and Adjust Quota](https://portal.azure.com/#view/Microsoft_Azure_Capacity/QuotaMenuBlade/~/overview) |

The quotas need to be applied in the region where the stack is created. Change the region in the Azure console while editing the quotas.

This is not an exhaustive list. Please reach out to [support@warpbuild.com](mailto:support@warpbuild.com) with any questions, or reach out on chat.

## Resources:

- [Configuration and best practices](/docs/ci/byoc/azure/config)

---

# Config

URL: https://www.warpbuild.com/docs/ci/byoc/gcp/config

Description: Configuration and best practices for setting up GCP with WarpBuild

---
title: "Config"
excerpt: "Configuration and best practices for setting up GCP with WarpBuild"
description: "Configuration and best practices for setting up GCP with WarpBuild"
hidden: false
sidebar_position: 1
slug: "/byoc/gcp/config"
createdAt: "2024-10-08"
updatedAt: "2024-10-08"
---

## Prerequisites

Here's a checklist of things to have set up on your GCP Project when getting started:

### ✅ Associate billing account

The GCP project must be associated with a billing account for it to be used.
Use the link https://console.cloud.google.com/billing/linkedaccount to check whether your project is linked to a billing account. Make sure to choose your project from the project dropdown in the GCP console.

### ✅ Enable services

WarpBuild requires the following services to be enabled before initiating a cloud connect. Make sure to choose your project from the project dropdown in the GCP console.

| Service | Purpose | Link |
| ---------------------------------------- | ------- | ---- |
| Cloud Storage API | Used for caches and storing telemetry | [Enable](https://console.cloud.google.com/apis/api/storage.googleapis.com) |
| IAM Service Account Credentials API | Generates short-lived tokens through our service account to your project-specific service account | [Enable](https://console.cloud.google.com/apis/api/iamcredentials.googleapis.com) |
| Identity and Access Management (IAM) API | Creates the service account for access management in your project | [Enable](https://console.cloud.google.com/apis/api/iam.googleapis.com) |
| Cloud Deployment Manager V2 API | Creates cloud integrations and stacks using Deployment Manager for easier versioning | [Enable](https://console.cloud.google.com/apis/api/deploymentmanager.googleapis.com) |
| Compute Engine API | Used for runner lifecycle management | [Enable](https://console.cloud.google.com/apis/api/compute.googleapis.com) |
| Cloud Resource Manager API | Used for resource tagging and management | [Enable](https://console.cloud.google.com/apis/api/cloudresourcemanager.googleapis.com) |

### Permissions

Users should have permissions to create the resources for the cloud integration and stack.
The following roles should be associated with the user:

- [Security Admin](https://cloud.google.com/iam/docs/understanding-roles#iam.securityAdmin)
- [Storage Admin](https://cloud.google.com/iam/docs/understanding-roles#storage.admin)
- [Deployment Manager Editor](https://cloud.google.com/iam/docs/understanding-roles#deploymentmanager.editor)
- [Compute Admin](https://cloud.google.com/iam/docs/understanding-roles#compute.admin)

## Cloud Connection

Creating a cloud connection sets up a service account with the permissions required by WarpBuild Stacks and runners. This SA can be impersonated by WarpBuild's service account to generate short-lived tokens, which we use for access.

## Stack

![Create Stack](./img/config/create-stack.png)

Creating a stack creates the infra configuration provided and uses the `cloud storage bucket` for cache and telemetry.

![Create Stack](./img/config/create-stack-pending.png)

The stack name, `cloud storage bucket`, and region cannot be changed after creation.

## Custom Runners

![Custom Runners](./img/config/create-custom-runner.png)

1. Spot instances are useful for short jobs that can be interrupted, and can lead to significant (~70%) cost savings.
2. One or more instance types can be chosen in priority order. The Github workflow uses a single runner label, but the instance type is picked based on availability.
3. The minimum disk configurations are:
   - Size: `100GB`

### Best practices:

1. Use multiple instance types, especially when using spot instances, to ensure availability and keep jobs from getting stuck in the queue.
2. When using multiple instance types, choose instance types that are similar in price and performance.
3. Choose a minimum disk configuration of:
   - Size: `150GB`

Throughput and IOPS are automatically managed by GCP. Refer: https://cloud.google.com/compute/docs/disks/performance

## Limitations

1. BYOC GCP does not support the import flow for stack creation.
2. Snapshot-based runners are not available for BYOC GCP.
## Coming Soon

- Instances with LocalSSD
- Resource tagging

---

# GCP

URL: https://www.warpbuild.com/docs/ci/byoc/gcp

Description: Github actions runners on your GCP project, managed by WarpBuild

---
title: "GCP"
excerpt: "Github actions runners on your GCP project, managed by WarpBuild"
description: "Github actions runners on your GCP project, managed by WarpBuild"
hidden: false
sidebar_position: 2
slug: "/byoc/gcp"
createdAt: "2024-10-08"
updatedAt: "2024-10-08"
---

Connect your GCP project to WarpBuild to run Github Actions runners. Enable Github Actions workflows to run on your own infrastructure and save 90% on your build costs.

## Quotas

The runner resources are created in the BYOC GCP project. To function correctly, here are some guidelines on the `additional` quotas required, per stack. We assume that the number of concurrently running jobs is `$CON`, say 1000.

| Resource | Quota | Notes | URL |
| --------------------------------------- | ------------------------ | ----- | --- |
| CPU | `$CON` \* `vCPU per Job` | Adjust for machine type, preemptible, and on-demand instances | [Adjust Quota](https://console.cloud.google.com/iam-admin/quotas) |
| Persistent Disks | `$CON` \* `DISK_TB` | Adjust provisioned IOPS if needed. | [Adjust Quota](https://console.cloud.google.com/iam-admin/quotas) |
| In-use regional external IPv4 addresses | 3 + `$CON` | 1 static IP is required for Cloud NAT in a stack. One static IP is attached to each concurrently running job. | [Adjust Quota](https://console.cloud.google.com/iam-admin/quotas) |
| Cloud Storage | 1 | The same bucket is used for artifact cache, container layer caches, and telemetry data. | [Adjust Quota](https://console.cloud.google.com/iam-admin/quotas) |
| Cloud NAT | 3 | One per stack | [Adjust Quota](https://console.cloud.google.com/iam-admin/quotas) |
| Networks | 1 | One Network (VPC) is needed | [Adjust Quota](https://console.cloud.google.com/iam-admin/quotas) |
| Subnetworks | 2 | One public and one private subnetwork are needed per stack. | [Adjust Quota](https://console.cloud.google.com/iam-admin/quotas) |

The quotas need to be applied in the region where the stack is created. Change the region in the GCP console while editing the quotas.

This is not an exhaustive list. Please reach out to [support@warpbuild.com](mailto:support@warpbuild.com) with any questions, or reach out on chat.

## Troubleshooting

Ensure that the bucket name is globally unique. Otherwise, you may see an error like this:

```bash
(gcloud.deployment-manager.deployments.create) Error in Operation [operation-1732573114216-627c41d05312d-c4e57059-xxxx]: errors:
- code: RESOURCE_ERROR
  location: /deployments/xxxx/resources/
  message: "{\"ResourceType\":\"storage.v1.bucket\",\"ResourceErrorCode\":\"403\"\
    ,\"ResourceErrorMessage\":{\"code\":403,\"errors\":[{\"domain\":\"global\",\"\
    message\":\"xxxxx@cloudservices.gserviceaccount.com does not have storage.buckets.get\
    \ access to the Google Cloud Storage bucket. Permission 'storage.buckets.get'\
    \ denied on resource (or it may not exist).\",\"reason\":\"forbidden\"}],\"message\"\
    :\"xxxxx@cloudservices.gserviceaccount.com does not have storage.buckets.get\
    \ access to the Google Cloud Storage bucket. Permission 'storage.buckets.get'\
    \ denied on resource (or it may not exist).\",\"statusMessage\":\"Forbidden\"\
    ,\"requestPath\":\"https://storage.googleapis.com/storage/v1/b/\"\
    ,\"httpMethod\":\"GET\",\"suggestion\":\"Consider granting permissions to 300580948756@cloudservices.gserviceaccount.com\"\
    }}"
```

## Resources:

- [Configuration and best practices](/docs/ci/byoc/gcp/config)

---

# Attach Service Account

URL: https://www.warpbuild.com/docs/ci/byoc/gcp/service-account

Description: Attach custom service account to GCE runners to give them default access

---
title: "Attach Service Account"
excerpt: "Attach custom service account to GCE runners"
description: "Attach custom service account to GCE runners to give them default access"
hidden: false
sidebar_position: 2
slug: "/byoc/gcp/service-account"
createdAt: "2025-05-01"
updatedAt: "2025-05-01"
---

## Prerequisites

### Configure gcloud

This doc contains gcloud commands to help you set up the resources. Log in to Google Cloud and follow the gcloud steps:

```bash
gcloud auth login
```

Configure gcloud with the GCP project ID:

```bash
gcloud config set project <PROJECT_ID>
```

### Service Account

Create a [service account](https://cloud.google.com/iam/docs/service-account-overview) to attach directly to GCE if you haven't already:

```bash
gcloud iam service-accounts create "instance-sa" \
  --display-name="Instance Service Account"
```

Set the service account email as `SA_EMAIL` in your current terminal. We'll refer to the service account created above as `SA_EMAIL` at all further points:

```bash
export SA_EMAIL=<service-account-email>
```

WarpBuild must have permission to pass this service account to the runners that we spin up. For this, you must establish a policy binding:

```bash
gcloud iam service-accounts add-iam-policy-binding "${SA_EMAIL}" \
  --member="serviceAccount:${CREATOR_SA}" \
  --role="roles/iam.serviceAccountUser"
```

The `CREATOR_SA` here is the service account we use to spin up the runners.
You can find this in your [BYOC](https://app.warpbuild.com/ci/byoc) page.

### Attach additional service account policies

Right now, the service account doesn't have any permissions that would let workloads on the GCE instance go keyless. To change that, you must add some policies. For example, to access Cloud Storage buckets and Artifact Registry:

```bash
echo "🔐 Granting Storage Admin to ${SA_EMAIL} at project level..."
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/storage.admin"

echo "📦 Granting Artifact Registry Admin to ${SA_EMAIL} at project level..."
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/artifactregistry.admin"
```

## Attach Service Account to the runners

Use the `Service Account` field in the runner edit page to configure your runners to run with this service account.

To validate, check the console page of your GCP project > Compute Engine > 'runner-instance' > under 'API and identity management' > check 'Service account'. This should have the same value as the service account that you created.

---

# Golden Dockerfiles

URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles

Description: Best practices for Dockerfiles across languages and frameworks

---
title: "Golden Dockerfiles"
excerpt: "Best practices for Dockerfiles across languages and frameworks"
description: "Best practices for Dockerfiles across languages and frameworks"
createdAt: "2025-04-15"
updatedAt: "2025-04-21"
---

> Last updated: 2025-04-21

A collection of WarpBuild-curated Dockerfile best practices, optimized to help you build production-ready container images effortlessly.
### Node.js

- [Node.js with npm](./nodejs/nodejs-npm.mdx)
- [Node.js with bun](./nodejs/nodejs-bun.mdx)
- [Node.js with pnpm](./nodejs/nodejs-pnpm.mdx)
- [Node.js with yarn](./nodejs/nodejs-yarn.mdx)

#### Frameworks

- [Next.js](./nodejs/nodejs-next.mdx)

#### Build Tools

- [Vite](./nodejs/nodejs-vite.mdx)
- [Webpack](./nodejs/nodejs-webpack.mdx)

### Python

- [Python with pip](./python/python-pip.mdx)
- [Python with uv](./python/python-uv.mdx)
- [Python with Poetry](./python/python-poetry.mdx)

### Ruby

- [Ruby with Bundler](./ruby/ruby-bundler.mdx)

### Rust

- [Rust with Cargo](./rust/rust-cargo.mdx)

### Go

- [Go with Go Modules](./go/go-modules.mdx)

### Java

- [Java with Maven](./java/java-maven.mdx)
- [Java with Gradle](./java/java-gradle.mdx)

### PHP

- [PHP with Composer](./php/php-composer.mdx)

### C#

- [C# with dotnet](./csharp/csharp-dotnet.mdx)

### C++

- [C++ with CMake](./cpp/cpp-cmake.mdx)

### Scala

- [Scala with sbt](./scala/scala-sbt.mdx)

---

# C++ with CMake

URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/cpp/cpp-cmake

Description: Best practices for Dockerfile for C++ with CMake

---
title: "C++ with CMake"
excerpt: "Best practices for Dockerfile for C++ with CMake"
description: "Best practices for Dockerfile for C++ with CMake"
hidden: false
sidebar_position: 1
slug: "/docker-builders/golden-dockerfiles/cpp/cpp-cmake"
createdAt: "2025-04-15"
updatedAt: "2025-04-15"
---

# C++ with CMake

This Dockerfile is designed for C++ projects using CMake as the build system. It uses a multi-stage build to create a lightweight runtime image.
```docker
# Stage 1: Build environment
FROM ubuntu:24.04 AS builder

# Prevent interactive prompts during package installation
ARG DEBIAN_FRONTEND=noninteractive

# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    cmake \
    ninja-build \
    git \
    ca-certificates \
    ccache \
    && rm -rf /var/lib/apt/lists/*

# Set working directory
WORKDIR /src

# Copy all source files first (both CMakeLists.txt and source code)
COPY . .

# Create build directory
RUN mkdir -p build

# Configure CMake with Ninja generator and enable cache
WORKDIR /src/build
RUN --mount=type=cache,target=/root/.ccache \
    cmake .. -G Ninja \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
    -DCMAKE_INSTALL_PREFIX=/install

# Build and install the application
RUN --mount=type=cache,target=/root/.ccache \
    ninja install && \
    # Create lib directory if it doesn't exist
    mkdir -p /install/lib

# Stage 2: Runtime image
# Use the same Ubuntu base as the builder so the glibc-linked binary runs as-is.
# (A musl-based image such as Alpine would require rebuilding against musl.)
FROM ubuntu:24.04 AS runtime

# Create a non-root user
RUN groupadd --system appuser && useradd --system --gid appuser appuser

# Create application directory
WORKDIR /app

# Copy only the built artifacts from the builder stage
COPY --from=builder /install/bin/ /app/bin/
COPY --from=builder /install/lib/ /app/lib/

# Set ownership and permissions
RUN chown -R appuser:appuser /app && \
    chmod +x /app/bin/*

# Switch to non-root user
USER appuser

# Expose port 8080
EXPOSE 8080

# Set the entrypoint to your application
ENTRYPOINT ["/app/bin/myapp"]
```

## Key Features

1. **Multi-stage build**: Separates the build environment from the runtime image
2. **Cache optimization**: Uses ccache with BuildKit cache mounts to speed up rebuilds
3. **Minimal runtime image**: Ships only the built artifacts, without build tools or sources
4. **Security**: Runs as a non-root user
5.
**Build tool integration**: Uses Ninja for faster builds

## Customization

- Adjust the `cmake` options based on your project requirements
- Update the `ENTRYPOINT` to match your application's executable name

### 🔍 Why these are best practices:

✅ Multi-stage builds

- Dramatically reduces final image size by separating build and runtime environments.
- Eliminates build tools and source code from the runtime image.
- Improves security by minimizing the attack surface.

✅ Compiler caching with ccache

- Speeds up incremental builds by caching compiled objects.
- Uses Docker's mount feature to preserve the cache between builds.
- Significantly improves build times in CI/CD pipelines.

✅ Ninja build system

- Faster build speed compared to traditional Make.
- Better parallelism and dependency handling.
- Improved build performance for large C++ projects.

✅ Minimal runtime dependencies

- Installs only libraries required to run the application.
- Reduces image size and potential vulnerabilities.
- Improves container startup time and resource efficiency.

✅ Security best practices

- Runs the application as a non-root user.
- Sets appropriate file permissions.
- Minimizes installed packages to reduce attack surface.

### 🚀 Additional Dockerfile best practices you can adopt:

#### Enable compiler optimizations

Optimize builds for production use:

```docker
RUN cmake .. -G Ninja \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_CXX_FLAGS="-O3 -march=x86-64-v3 -flto" \
    -DCMAKE_INSTALL_PREFIX=/install
```

#### Add health checks

Monitor the application health:

```docker
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8080/health || exit 1
```

#### Use .dockerignore

Exclude unnecessary files from the Docker build context:

```
build/
.git/
.github/
.vscode/
.idea/
*.md
docs/
tests/
```

#### Static linking for portable binaries

Create fully static binaries to eliminate runtime dependencies:

```docker
# In CMake configuration
RUN cmake .. -G Ninja \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_CXX_FLAGS="-static" \
    -DBUILD_SHARED_LIBS=OFF \
    -DCMAKE_INSTALL_PREFIX=/install

# Then use a minimal runtime image
FROM scratch
COPY --from=builder /install/bin/myapp /myapp
ENTRYPOINT ["/myapp"]
```

#### Build for multiple architectures

Support various hardware platforms:

```docker
# Use buildx with platform-specific arguments
FROM --platform=$BUILDPLATFORM ubuntu:22.04 AS builder
ARG TARGETPLATFORM
ARG BUILDPLATFORM

# Set appropriate compiler flags based on target
RUN case "$TARGETPLATFORM" in \
    "linux/amd64") CMAKE_ARCH_FLAGS="-march=x86-64" ;; \
    "linux/arm64") CMAKE_ARCH_FLAGS="-march=armv8-a" ;; \
    *) CMAKE_ARCH_FLAGS="" ;; \
    esac && \
    cmake ... -DCMAKE_CXX_FLAGS="$CMAKE_ARCH_FLAGS"
```

#### Configure for different build types

Use build arguments to control build configuration:

```docker
ARG BUILD_TYPE=Release
RUN cmake .. -G Ninja \
    -DCMAKE_BUILD_TYPE=${BUILD_TYPE} \
    -DCMAKE_INSTALL_PREFIX=/install
```

#### Add runtime configuration

Include configuration files in your image:

```docker
# In the runtime stage
COPY --from=builder /src/config/ /app/config/
ENV CONFIG_PATH=/app/config/config.json
```

By following these practices, you'll create Docker images for your C++ applications that are secure, efficient, and optimized for both development and production environments. These techniques help minimize build times, reduce image sizes, and ensure consistent behavior across different deployment environments, which is particularly important for C++ applications.
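As a usage sketch of the multi-architecture pattern above, the build is driven with `docker buildx`. The builder name, image tag, and registry below are illustrative placeholders:

```shell
# Create (or reuse) a buildx builder, then build the Dockerfile above
# for both amd64 and arm64. Tag and registry are placeholders.
docker buildx create --name multiarch --use 2>/dev/null || docker buildx use multiarch
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --build-arg BUILD_TYPE=Release \
  -t registry.example.com/myapp:latest \
  --push .
```

Note that `--push` is required for multi-platform builds, since a multi-arch manifest cannot be loaded into the local Docker image store directly.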
---

# C# with .NET

URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/csharp/csharp-dotnet

Description: Best practices for Dockerfile for C# with .NET

---
title: "C# with .NET"
excerpt: "Best practices for Dockerfile for C# with .NET"
description: "Best practices for Dockerfile for C# with .NET"
hidden: false
sidebar_position: 1
slug: "/docker-builders/golden-dockerfiles/csharp/csharp-dotnet"
createdAt: "2025-04-15"
updatedAt: "2025-04-15"
---

### 🐳 Annotated Dockerfile for C# with .NET:

```docker
# Stage 1: Build the application
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build

# Set working directory
WORKDIR /src

# Copy csproj file(s) and restore dependencies
COPY *.csproj ./
RUN --mount=type=cache,target=/root/.nuget/packages \
    dotnet restore

# Copy everything else and build the project
COPY . ./
RUN --mount=type=cache,target=/root/.nuget/packages \
    dotnet build -c Release --no-restore

# Stage 2: Publish the application
FROM build AS publish

# Publish the application
RUN --mount=type=cache,target=/root/.nuget/packages \
    dotnet publish -c Release --no-build -o /app/publish

# Stage 3: Create the runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS final

# Set working directory
WORKDIR /app

# Create a non-root user
RUN useradd -u 1000 -r -s /bin/false dotnetuser && \
    mkdir -p /home/dotnetuser && \
    chown -R dotnetuser:dotnetuser /home/dotnetuser

# Copy published files from the publish stage
COPY --from=publish --chown=dotnetuser:dotnetuser /app/publish .

# Use the non-root user to run the application
USER dotnetuser

# Set the entrypoint
ENTRYPOINT ["dotnet", "YourAppName.dll"]
```

### 🔍 Why these are best practices:

✅ Multi-stage builds

- Separates build environment from runtime environment.
- Dramatically reduces final image size.
- Eliminates build tools and intermediate artifacts from the runtime image.

✅ NuGet package caching

- Uses Docker's cache mount feature to avoid downloading packages repeatedly.
- Significantly speeds up build times, especially for large projects. - Prevents redundant network requests during iterative builds. ✅ Build flow optimization - Restores dependencies separately from building for better layer caching. - Copies only the project file first to take advantage of Docker's cache. - Optimizes the build process by using the `--no-restore` flag. ✅ Minimal runtime image - Uses the ASP.NET runtime image instead of the full SDK. - Includes only what's needed to run the application, not build it. - Reduces attack surface and resource usage in production. ✅ Security best practices - Runs the application as a non-root user. - Follows the principle of least privilege. - Sets proper file ownership to enhance security. ### 🚀 Additional Dockerfile best practices you can adopt: #### Use Alpine-based images for even smaller footprint For applications that don't require the full ASP.NET runtime: ```docker FROM mcr.microsoft.com/dotnet/runtime-deps:8.0-alpine AS final # ... other steps ... # For self-contained applications with trimming COPY --from=publish /app/publish . ``` #### Enable assembly trimming and ahead-of-time compilation Reduce application size and improve startup time: ```docker # In the publish stage RUN dotnet publish -c Release -r linux-x64 --self-contained true \ /p:PublishTrimmed=true /p:PublishAot=true -o /app/publish ``` #### Add health checks Monitor application health for container orchestration: ```docker # Note: the aspnet base image does not include curl; install it first or probe with a tool already in the image HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \ CMD curl -f http://localhost:80/health || exit 1 ``` #### Use .dockerignore Exclude unnecessary files from your Docker build context: ``` bin/ obj/ TestResults/ .vscode/ .git/ .github/ .gitignore Dockerfile README.md *.sln ``` #### Configuration for development vs.
production Use environment variables and build arguments to switch between environments: ```docker # Set environment variables for different configurations ENV ASPNETCORE_ENVIRONMENT=Production \ DOTNET_EnableDiagnostics=0 # For development builds FROM mcr.microsoft.com/dotnet/sdk:8.0 AS dev WORKDIR /src COPY . . ENV ASPNETCORE_ENVIRONMENT=Development ENTRYPOINT ["dotnet", "watch", "run", "--urls", "http://0.0.0.0:5000"] ``` #### Optimize for container resource limits Configure .NET to respect container resources: ```docker # In the final stage ENV DOTNET_EnableDiagnostics=0 \ DOTNET_gcServer=1 \ DOTNET_gcConcurrent=1 \ DOTNET_ThreadPool_UnfairSemaphoreSpinLimit=0 ``` #### Support for multiple architectures Build for multiple platforms: ```docker FROM --platform=$BUILDPLATFORM mcr.microsoft.com/dotnet/sdk:8.0 AS build ARG TARGETARCH # ... other steps ... # Use -a: it maps Docker arch names (amd64, arm64) to the matching .NET RIDs; # -r linux-$TARGETARCH would yield an invalid RID such as linux-amd64 RUN dotnet publish -c Release -o /app/publish \ -a $TARGETARCH --self-contained true ``` By following these practices, you'll create Docker images for your .NET applications that are secure, efficient, and optimized for both development and production environments. These approaches help minimize build times, reduce image sizes, improve startup performance, and ensure consistent behavior across different deployment environments.
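#### Watch for missing ICU on Alpine images

A caveat if you adopt the Alpine `runtime-deps` image mentioned earlier: it ships without ICU libraries, so .NET apps fail at startup unless globalization-invariant mode is enabled (or `icu-libs` is installed). A sketch of the runtime stage under that assumption (`YourAppName` follows the placeholder naming used above):

```docker
FROM mcr.microsoft.com/dotnet/runtime-deps:8.0-alpine AS final
WORKDIR /app
# Alpine images carry no ICU; either run in invariant-globalization mode...
ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1
# ...or install ICU instead: RUN apk add --no-cache icu-libs
COPY --from=publish /app/publish .
# runtime-deps images have no dotnet CLI; run the self-contained executable directly
ENTRYPOINT ["./YourAppName"]
```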
--- # Go with Go Modules URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/go/go-modules Description: Best practices for Dockerfile for Go with Go Modules --- title: "Go with Go Modules" excerpt: "Best practices for Dockerfile for Go with Go Modules" description: "Best practices for Dockerfile for Go with Go Modules" hidden: false sidebar_position: 1 slug: "/docker-builders/golden-dockerfiles/go/go-modules" createdAt: "2025-04-15" updatedAt: "2025-04-15" --- ### 🐳 Annotated Dockerfile for Go with Go Modules: ```docker # Start with the official Go image as our builder FROM golang:1.24-bookworm AS builder # Set the working directory inside the container WORKDIR /app # Copy go.mod and go.sum files first for better caching COPY go.mod go.sum ./ # Download dependencies using go modules # This layer will be cached unless go.mod/go.sum changes RUN --mount=type=cache,target=/go/pkg/mod \ --mount=type=cache,target=/root/.cache/go-build \ go mod download # Copy the source code into the container COPY . . # Build the application with optimizations # CGO_ENABLED=0 creates a static binary # -ldflags="-s -w" strips debug information to reduce binary size RUN --mount=type=cache,target=/go/pkg/mod \ --mount=type=cache,target=/root/.cache/go-build \ CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o /go/bin/app # -------------------------------------- # Stage 2: Create an extremely small runtime image # -------------------------------------- FROM gcr.io/distroless/static-debian12 # Copy the binary from the builder stage COPY --from=builder /go/bin/app /app # Use a non-root user for better security (distroless provides 'nonroot') USER nonroot:nonroot # Expose the port the app runs on EXPOSE 8080 # Command to run the executable ENTRYPOINT ["/app"] ``` ### 🔍 Why these are best practices: ✅ Multi-stage builds - Dramatically reduces final image size. - Eliminates all build dependencies and the Go compiler from the runtime image. 
- Final image contains only your statically compiled Go binary. ✅ Go Modules for dependency management - Ensures reproducible builds with explicit dependency versions. - go.mod and go.sum provide deterministic dependency resolution. - Downloads dependencies first to leverage Docker's caching. ✅ Caching Go modules and build cache - Uses Docker's build cache efficiently to avoid redundant downloads. - Significantly speeds up builds on iterative development. - Saves bandwidth and time, especially important in CI/CD environments. ✅ Static binary compilation - CGO_ENABLED=0 creates binaries with no external dependencies. - Allows use of scratch or distroless containers for maximum security. - Simplifies deployment across different environments. ✅ Binary optimization - Strips debug information to reduce binary size. - Smaller binaries mean faster container startup and smaller images. - Reduces attack surface by eliminating unnecessary information. ### 🚀 Additional Dockerfile best practices you can adopt: #### Use scratch instead of distroless for even smaller images If your application doesn't need certificates or other basics: ```docker FROM scratch # Copy necessary SSL certificates if your app makes HTTPS calls COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ COPY --from=builder /go/bin/app /app ENTRYPOINT ["/app"] ``` #### Build for multiple architectures For cross-platform compatibility: ```docker # Build with platform-specific arguments RUN --mount=type=cache,target=/go/pkg/mod \ --mount=type=cache,target=/root/.cache/go-build \ GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o /go/bin/app-amd64 RUN --mount=type=cache,target=/go/pkg/mod \ --mount=type=cache,target=/root/.cache/go-build \ GOOS=linux GOARCH=arm64 go build -ldflags="-s -w" -o /go/bin/app-arm64 ``` #### Add build-time metadata with ldflags Include version info and build timestamps: ```docker ARG VERSION=dev ARG COMMIT=unknown ARG BUILD_DATE=unknown RUN 
--mount=type=cache,target=/go/pkg/mod \ --mount=type=cache,target=/root/.cache/go-build \ CGO_ENABLED=0 GOOS=linux go build \ -ldflags="-s -w -X main.version=${VERSION} -X main.commit=${COMMIT} -X main.buildDate=${BUILD_DATE}" \ -o /go/bin/app ``` #### Vendor dependencies for air-gapped builds For environments without internet access: ```docker # First locally run: go mod vendor COPY vendor/ ./vendor/ RUN --mount=type=cache,target=/root/.cache/go-build \ CGO_ENABLED=0 GOOS=linux go build -mod=vendor -ldflags="-s -w" -o /go/bin/app ``` #### Use .dockerignore Exclude unnecessary files from your Docker build context: ``` .git .github .gitignore README.md Dockerfile docker-compose.yml *.md .idea .vscode ``` #### Configure health checks Ensure your container reports health correctly. Note that scratch and distroless images ship neither a shell nor curl, so a Dockerfile `HEALTHCHECK` like the one below only works with a fuller base image; with minimal images, rely on orchestrator-level probes or build a health subcommand into your binary: ```docker HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \ CMD curl -f http://localhost:8080/health || exit 1 ``` #### Enable Go's build-time security checks For enhanced security scanning during builds: ```docker # Enable Go's security-focused static analysis RUN go install golang.org/x/vuln/cmd/govulncheck@latest && \ govulncheck ./... ``` By following these practices, you'll create Docker images for your Go applications that are secure, efficient, and optimized for production environments. Go's strengths in producing small, statically-linked binaries make it an excellent language for containerized deployments.
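#### Let buildx pick the target architecture

As an alternative to the two explicit `GOARCH` builds shown earlier, BuildKit can inject the target platform automatically. A sketch, assuming images are built with `docker buildx build --platform linux/amd64,linux/arm64`:

```docker
# TARGETOS/TARGETARCH are predefined build args populated from --platform
FROM --platform=$BUILDPLATFORM golang:1.24-bookworm AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod go mod download
COPY . .
ARG TARGETOS TARGETARCH
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH \
    go build -ldflags="-s -w" -o /go/bin/app
```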
--- # Java with Gradle URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/java/java-gradle Description: Best practices for Dockerfile for Java with Gradle --- title: "Java with Gradle" excerpt: "Best practices for Dockerfile for Java with Gradle" description: "Best practices for Dockerfile for Java with Gradle" hidden: false sidebar_position: 2 slug: "/docker-builders/golden-dockerfiles/java/java-gradle" createdAt: "2025-04-15" updatedAt: "2025-04-15" --- ### 🐳 Annotated Dockerfile for Java with Gradle: ```docker # Stage 1: Build the application FROM eclipse-temurin:21 AS builder # Set working directory WORKDIR /app # Install Gradle RUN apt-get update && apt-get install -y --no-install-recommends \ gradle \ && rm -rf /var/lib/apt/lists/* # Copy Gradle files for dependency resolution COPY settings.gradle build.gradle ./ # Download dependencies and cache them # Using Gradle cache mount to speed up builds RUN --mount=type=cache,target=/root/.gradle \ gradle dependencies --no-daemon # Copy source code after dependencies for better caching COPY src/ ./src/ # Build the application (skipping tests for faster builds) RUN --mount=type=cache,target=/root/.gradle \ gradle build --no-daemon -x test # Stage 2: Create a minimal runtime image (JRE-only, matching the rationale below) FROM eclipse-temurin:21-jre-alpine # Create a non-root user to run the application # Alpine has different syntax for user/group creation RUN addgroup -S -g 1001 javauser && \ adduser -S -u 1001 -G javauser javauser # Set working directory WORKDIR /app # Copy the built JAR from the builder stage COPY --from=builder /app/build/libs/*.jar app.jar # Set ownership to the non-root user RUN chown -R javauser:javauser /app # Switch to non-root user USER javauser # Expose the application port EXPOSE 8080 # Configure container-optimized JVM settings ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -Djava.security.egd=file:/dev/./urandom" # Run the application ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -jar app.jar" ]
``` ### 🔍 Why these are best practices: ✅ Multi-stage builds - Reduces final image size dramatically. - Eliminates build tools and source code from the runtime image. - Improves security by minimizing the attack surface. ✅ Gradle dependency caching - Uses Docker's cache mount feature to cache Gradle dependencies. - Significantly speeds up build times on iterative builds. - Avoids redundant downloads by preserving the Gradle cache between builds. ✅ Separation of dependency resolution from builds - Resolves dependencies separately from the build step. - Takes advantage of Docker layer caching for unchanged dependencies. - Speeds up rebuilds when only application code changes. ✅ JRE-only runtime image - Uses a minimal JRE instead of full JDK for the runtime image. - Reduces container size by eliminating compilation tools. - Improves security by removing unnecessary components. ✅ Container-optimized Java options - XX:+UseContainerSupport ensures JVM respects container memory limits. - XX:MaxRAMPercentage=75.0 prevents excessive memory usage. - Improves application stability in containerized environments. 
### 🚀 Additional Dockerfile best practices you can adopt: #### Enable Spring Boot layered JARs For improved caching and smaller image updates: ```docker # Extract the layered JAR in a dedicated stage (for Spring Boot applications) FROM eclipse-temurin:21 AS extracted WORKDIR /app COPY --from=builder /app/build/libs/*.jar app.jar RUN java -Djarmode=layertools -jar app.jar extract # Create optimal layer order FROM eclipse-temurin:21-jre-alpine WORKDIR /app COPY --from=extracted /app/dependencies/ ./ COPY --from=extracted /app/spring-boot-loader/ ./ COPY --from=extracted /app/snapshot-dependencies/ ./ COPY --from=extracted /app/application/ ./ ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"] ``` #### Use Gradle's --no-daemon option for containers Optimize Gradle for container builds: ```docker # Ensure no daemons are running for containerized builds RUN --mount=type=cache,target=/root/.gradle \ ./gradlew build --no-daemon -x test ``` #### Add health checks Monitor container health for better orchestration: ```docker # curl is not included in Alpine-based Temurin images; apk add --no-cache curl first HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \ CMD curl -f http://localhost:8080/actuator/health || exit 1 ``` #### Use .dockerignore Exclude unnecessary files from your Docker build context: ``` .gradle/ build/ !build/libs/*.jar .git/ .github/ .gitignore README.md Dockerfile docker-compose.yml ``` #### Fine-tune garbage collection for containers Optimize memory usage in containerized environments: ```docker ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -XX:+UseG1GC -XX:+ExplicitGCInvokesConcurrent -XX:+ParallelRefProcEnabled -XX:+UseStringDeduplication" ``` #### Set up for GraalVM native images Create ultra-fast startup and smaller footprint: ```docker FROM ghcr.io/graalvm/native-image-community:21 AS builder WORKDIR /app COPY . .
RUN --mount=type=cache,target=/root/.gradle \ ./gradlew nativeCompile -x test FROM oraclelinux:8-slim COPY --from=builder /app/build/native/nativeCompile/application /app/application ENTRYPOINT ["/app/application"] ``` #### Configure JVM for reliable container operation Advanced JVM settings for containerized applications: ```docker ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -XX:InitialRAMPercentage=50.0 -XX:+UseG1GC -XX:G1HeapRegionSize=4M -XX:+UseStringDeduplication -XX:+ExitOnOutOfMemoryError" ``` #### Enable JVM flight recorder for diagnostics For production troubleshooting capabilities: ```docker # -XX:+FlightRecorder is obsolete on modern JDKs; -XX:StartFlightRecording alone is enough ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -XX:StartFlightRecording=disk=true,dumponexit=true,filename=/tmp/recording.jfr,maxsize=1024m,settings=profile" ``` By following these practices, you'll create Docker images for your Java Gradle applications that are secure, efficient, and optimized for both development and production environments. These approaches minimize build times, reduce image sizes, and ensure consistent behavior across different deployment environments.
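#### Prefer the Gradle wrapper over the distro package

The builder stage above installs Gradle from apt, which pins whatever version the distribution ships. If your repository includes the Gradle wrapper, using it instead guarantees the exact Gradle version declared in `gradle/wrapper/gradle-wrapper.properties` — a sketch assuming standard wrapper files are present:

```docker
FROM eclipse-temurin:21 AS builder
WORKDIR /app
# The wrapper downloads and caches the pinned Gradle version on first use
COPY gradlew ./
COPY gradle/ gradle/
COPY settings.gradle build.gradle ./
RUN --mount=type=cache,target=/root/.gradle \
    ./gradlew dependencies --no-daemon
COPY src/ ./src/
RUN --mount=type=cache,target=/root/.gradle \
    ./gradlew build --no-daemon -x test
```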
--- # Java with Maven URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/java/java-maven Description: Best practices for Dockerfile for Java with Maven --- title: "Java with Maven" excerpt: "Best practices for Dockerfile for Java with Maven" description: "Best practices for Dockerfile for Java with Maven" hidden: false sidebar_position: 1 slug: "/docker-builders/golden-dockerfiles/java/java-maven" createdAt: "2025-04-15" updatedAt: "2025-04-15" --- ### 🐳 Annotated Dockerfile for Java with Maven: ```docker # Stage 1: Build the application FROM eclipse-temurin:21 AS builder # Set working directory WORKDIR /app # Copy the Maven wrapper files COPY .mvn/ .mvn/ COPY mvnw mvnw.cmd pom.xml ./ # Download dependencies and cache them (will be cached by Docker if pom.xml doesn't change) RUN --mount=type=cache,target=/root/.m2 \ ./mvnw dependency:go-offline -B # Copy source code after dependencies to leverage build caching COPY src/ ./src/ # Build the application RUN --mount=type=cache,target=/root/.m2 \ ./mvnw package -DskipTests -B # Stage 2: Create runtime image (JRE-only, matching the rationale below) FROM eclipse-temurin:21-jre-alpine # Add a non-root user to run the app RUN addgroup -S javauser && \ adduser -S -G javauser -u 1001 javauser # Set working directory WORKDIR /app # Copy the built JAR from the builder stage COPY --from=builder /app/target/*.jar app.jar # Set ownership of the application files to non-root user RUN chown -R javauser:javauser /app # Switch to non-root user USER javauser # Expose application port EXPOSE 8080 # Configure JVM options (optimized for containers) ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -Djava.security.egd=file:/dev/./urandom" # Run the application ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -jar app.jar" ] ``` ### 🔍 Why these are best practices: ✅ Multi-stage builds - Reduces final image size by separating build environment from runtime. - Eliminates build tools and dependencies from the final image.
- Creates a cleaner, smaller, and more secure production image. ✅ Maven dependency caching - Uses Docker's build cache to avoid downloading dependencies repeatedly. - Dramatically speeds up builds by caching Maven artifacts. - Improves CI/CD pipeline efficiency and reduces network usage. ✅ Optimized JRE base image - Uses JRE for runtime instead of full JDK to reduce image size. - Eclipse Temurin provides a reliable, secure, and enterprise-ready OpenJDK distribution. ✅ Container-optimized Java options - XX:+UseContainerSupport ensures JVM recognizes container memory limits. - XX:MaxRAMPercentage=75.0 prevents JVM from using all available memory. ✅ Security best practices - Runs as a non-root user to enhance container security. - Follows the principle of least privilege to limit potential damage from vulnerabilities. - Prevents privilege escalation attacks. ### 🚀 Additional Dockerfile best practices you can adopt: #### Use the Maven Wrapper for version consistency Enforce consistent Maven versions across environments: ```docker # Copy Maven wrapper files COPY .mvn/ .mvn/ COPY mvnw mvnw.cmd ./ # Use Maven wrapper instead of system Maven RUN ./mvnw clean package ``` #### Enable layered JARs (for Spring Boot applications) Create more granular layers for better cache utilization: ```docker # Extract the layered JAR in a dedicated stage FROM eclipse-temurin:21 AS extracted WORKDIR /app COPY --from=builder /app/target/*.jar app.jar RUN java -Djarmode=layertools -jar app.jar extract # Create layers in order of change frequency FROM eclipse-temurin:21-jre-jammy WORKDIR /app COPY --from=extracted /app/dependencies/ ./ COPY --from=extracted /app/spring-boot-loader/ ./ COPY --from=extracted /app/snapshot-dependencies/ ./ COPY --from=extracted /app/application/ ./ ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"] ``` #### Add health checks Monitor application health for better container orchestration: ```docker # curl must be present in the runtime image (apk add --no-cache curl on Alpine) HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \ CMD curl -f http://localhost:8080/actuator/health || exit 1
``` #### Use .dockerignore Exclude unnecessary files from your Docker build context: ``` target/ !target/*.jar .git/ .github/ .gitignore README.md Dockerfile docker-compose.yml ``` #### Consider GraalVM Native Image for faster startup and lower memory For optimal performance in containerized environments: ```docker FROM ghcr.io/graalvm/native-image-community:21 AS builder WORKDIR /app COPY . . RUN ./mvnw -Pnative native:compile -DskipTests FROM oraclelinux:8-slim COPY --from=builder /app/target/application /app/application ENTRYPOINT ["/app/application"] ``` #### Set appropriate Spring Boot/Java memory settings Optimize memory usage for containers: ```docker ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -XX:InitialRAMPercentage=50.0 -Xss512k -XX:+UseG1GC -XX:+UseStringDeduplication" ``` By following these practices, you'll create Docker images for your Java Maven applications that are secure, efficient, and optimized for both development and production environments. These techniques help minimize build times, reduce image sizes, improve security, and ensure consistent behavior across different deployment environments. 
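#### Cache dependencies in multi-module projects

`dependency:go-offline` only caches effectively if every module's `pom.xml` is copied before the source code. A sketch with hypothetical module names (`module-api` and `module-core` are placeholders for your own modules):

```docker
# Copy the parent POM plus each module's POM first
COPY pom.xml ./
COPY module-api/pom.xml module-api/
COPY module-core/pom.xml module-core/
RUN --mount=type=cache,target=/root/.m2 \
    ./mvnw dependency:go-offline -B
# Source changes now invalidate only the layers below
COPY module-api/src/ module-api/src/
COPY module-core/src/ module-core/src/
RUN --mount=type=cache,target=/root/.m2 \
    ./mvnw package -DskipTests -B
```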
--- # Kotlin with Gradle URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/kotlin/kotlin-gradle Description: Best practices for Dockerfile for Kotlin with Gradle --- title: "Kotlin with Gradle" excerpt: "Best practices for Dockerfile for Kotlin with Gradle" description: "Best practices for Dockerfile for Kotlin with Gradle" hidden: false sidebar_position: 1 slug: "/docker-builders/golden-dockerfiles/kotlin/kotlin-gradle" createdAt: "2025-04-15" updatedAt: "2025-04-15" --- ### 🐳 Annotated Dockerfile for Kotlin with Gradle: ```docker # Stage 1: Build the application FROM eclipse-temurin:21 AS builder # Set working directory WORKDIR /app # Copy Gradle configuration files COPY gradle/ gradle/ COPY gradlew gradlew.bat settings.gradle.kts build.gradle.kts ./ # Use Gradle cache mount for faster builds RUN --mount=type=cache,target=/root/.gradle \ ./gradlew dependencies --no-daemon # Copy source code COPY src/ ./src/ # Build the application RUN --mount=type=cache,target=/root/.gradle \ ./gradlew build --no-daemon -x test # Stage 2: Create minimal runtime image (JRE-only, matching the rationale below) FROM eclipse-temurin:21-jre-alpine # Create a non-root user (Alpine uses addgroup/adduser rather than groupadd/useradd) RUN addgroup -S kotlinuser && adduser -S -G kotlinuser kotlinuser # Set working directory WORKDIR /app # Copy the built artifacts from the builder stage COPY --from=builder /app/build/libs/*.jar app.jar # Set ownership and permissions RUN chown -R kotlinuser:kotlinuser /app # Switch to non-root user USER kotlinuser # Expose application port EXPOSE 8080 # Configure JVM options for containerized environments ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -Djava.security.egd=file:/dev/./urandom" # Run the application ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -jar app.jar" ] ``` ### 🔍 Why these are best practices: ✅ Multi-stage builds - Reduces the final image size by separating build environment from runtime. - Eliminates build tools and intermediate files from the production image.
- Creates a smaller, more secure runtime environment. ✅ Gradle dependency caching - Uses Docker's build cache to avoid downloading dependencies repeatedly. - Dramatically speeds up iterative builds in development and CI/CD. - Leverages `--mount=type=cache` for efficient dependency management. ✅ JRE-only runtime image - Uses a smaller JRE image instead of full JDK for the final runtime. - Reduces attack surface by removing development tools from production. - Decreases image size, improving deployment speed and resource usage. ✅ Container-aware JVM settings - Configures the JVM to respect container memory limits. - Optimizes garbage collection and memory usage for containerized environments. - Improves application stability and resource utilization. ✅ Security best practices - Runs the application as a non-root user to enhance container security. - Follows the principle of least privilege to limit potential attack vectors. - Sets proper file ownership and permissions. ### 🚀 Additional Dockerfile best practices you can adopt: #### Optimize Kotlin applications for native compilation For Kotlin/Native applications or using GraalVM: ```docker # For GraalVM native image compilation FROM ghcr.io/graalvm/native-image-community:21 AS builder WORKDIR /app COPY . 
/app RUN ./gradlew nativeCompile # Minimal runtime image FROM gcr.io/distroless/base COPY --from=builder /app/build/native/nativeCompile/app /app ENTRYPOINT ["/app"] ``` #### Enable Spring Boot layered JARs For Spring Boot applications to improve caching: ```docker # Extract the layered JAR in a dedicated stage FROM eclipse-temurin:21 AS extracted WORKDIR /app COPY --from=builder /app/build/libs/*.jar app.jar RUN java -Djarmode=layertools -jar app.jar extract # Create optimized layers FROM eclipse-temurin:21-jre-alpine WORKDIR /app COPY --from=extracted /app/dependencies/ ./ COPY --from=extracted /app/spring-boot-loader/ ./ COPY --from=extracted /app/snapshot-dependencies/ ./ COPY --from=extracted /app/application/ ./ ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"] ``` #### Add health checks Monitor container health for better orchestration: ```docker # curl must be present in the runtime image (apk add --no-cache curl on Alpine) HEALTHCHECK --interval=30s --timeout=3s --start-period=60s --retries=3 \ CMD curl -f http://localhost:8080/actuator/health || exit 1 ``` #### Use .dockerignore Exclude unnecessary files from your Docker build context: ``` .gradle/ build/ !build/libs/*.jar .git/ .github/ .gitignore *.md .idea/ *.iml ``` #### Environment-specific configurations Configure for different environments: ```docker # Use build arguments to customize the build ARG PROFILE=production RUN ./gradlew build -Pprofile=${PROFILE} --no-daemon -x test ``` #### JVM performance tuning for containers Fine-tune JVM settings: ```docker ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -XX:InitialRAMPercentage=50.0 -XX:+UseG1GC -XX:+AlwaysPreTouch -XX:+ExitOnOutOfMemoryError" ``` #### Implement proper signaling Ensure your application responds to container orchestration signals: ```docker # For Spring Boot apps ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -Dserver.shutdown=graceful" # Or add a lightweight init system (on Alpine use: apk add --no-cache dumb-init) RUN apt-get update && apt-get install -y --no-install-recommends dumb-init ENTRYPOINT ["/usr/bin/dumb-init", "--"] CMD ["java", "-jar", "app.jar"] ``` By following these
practices, you'll create Docker images for your Kotlin applications that are secure, efficient, and optimized for both development and production environments. Kotlin's JVM foundation combined with these Docker techniques provides excellent performance while maintaining developer productivity. --- # Node.js with Bun URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/nodejs/nodejs-bun Description: Best practices for Dockerfile for Node.js with Bun --- title: "Node.js with Bun" excerpt: "Best practices for Dockerfile for Node.js with Bun" description: "Best practices for Dockerfile for Node.js with Bun" hidden: false sidebar_position: 4 slug: "/docker-builders/golden-dockerfiles/nodejs/nodejs-bun" createdAt: "2025-04-15" updatedAt: "2025-04-15" --- ### 🐳 Annotated Dockerfile for Node.js with Bun: ```docker # Use Bun's official image as the base FROM oven/bun:1 AS base # Stage 1: Install production dependencies FROM base AS deps # Set working directory WORKDIR /app # Copy only package definition files first COPY package.json bun.lock ./ # Install only production dependencies # Using --frozen-lockfile ensures exact versions from the lockfile are used RUN --mount=type=cache,id=bun,target=/root/.bun/install/cache \ bun install --frozen-lockfile --production # Stage 2: Build the application FROM base AS build WORKDIR /app # Copy package definitions to maintain consistent build context COPY package.json bun.lock ./ # Install all dependencies (dev + prod) for building the app RUN --mount=type=cache,id=bun,target=/root/.bun/install/cache \ bun install --frozen-lockfile # Copy entire source code COPY . .
# Run build script defined in your package.json RUN bun run build # Stage 3: Create the final lightweight production image FROM base # Set working directory WORKDIR /app # Copy only production dependencies (no dev dependencies) COPY --from=deps /app/node_modules /app/node_modules # Copy compiled application output (dist directory) COPY --from=build /app/dist /app/dist # Explicitly set environment to production ENV NODE_ENV production # Default command to run your application with Bun (adjust path as needed) CMD ["bun", "run", "./dist/index.js"] ``` ### 🔍 Why these are best practices: ✅ Multi-stage builds - Smaller final images: Dependencies and build tools are discarded after use, reducing container size. - Security: Fewer files and tools mean a smaller attack surface. ✅ Caching Bun modules - Faster builds: Bun already installs dependencies up to 30x faster than npm, and caching makes it even faster. - Lower CI/CD overhead: Speeds up continuous integration and deployment workflows. ✅ Separating dependencies and build stages - Clear separation of concerns: Each stage serves a single purpose, making it easier to debug and optimize. - Improved cache efficiency: Changes in code don't trigger unnecessary reinstallation of unchanged dependencies. ✅ Minimal runtime image - Performance and security: Only the essential runtime code is present, limiting potential vulnerabilities. - Lower resource consumption: Optimized resource usage in production deployments. 
### 🚀 Additional Dockerfile best practices you can adopt: #### Use a non-root user For enhanced security, run your app as a non-root user: ```docker FROM base # Create a non-root user RUN adduser --disabled-password --gecos "" appuser WORKDIR /app COPY --from=deps /app/node_modules /app/node_modules COPY --from=build /app/dist /app/dist ENV NODE_ENV production # Switch to non-root user USER appuser CMD ["bun", "run", "./dist/index.js"] ``` #### Use HEALTHCHECK directive Allows Docker to monitor container health automatically. ```docker HEALTHCHECK --interval=30s --timeout=3s \ CMD curl -f http://localhost:3000/health || exit 1 ``` #### Use explicit .dockerignore Prevent copying unnecessary files into your image. Example .dockerignore ``` node_modules dist coverage .git Dockerfile docker-compose.yml README.md *.log ``` #### Set resource limits explicitly When deploying containers, always set CPU and memory limits to avoid resource starvation or instability. Example in Kubernetes or Docker Compose (outside Dockerfile) ```yaml resources: limits: cpu: 1000m memory: 1Gi ``` By following these annotations and best practices, your Docker images become faster to build, more secure, smaller, and easier to maintain—ideal for modern production workflows. 
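#### Health checks without curl

A caveat on the `HEALTHCHECK` example above: the `oven/bun` image is Debian-slim based and does not ship curl. Bun itself can serve as the probe — a sketch, assuming a Bun version with the `-e` eval flag and that the app listens on port 3000 with a `/health` route:

```docker
# Probe the app with Bun's built-in fetch instead of curl
HEALTHCHECK --interval=30s --timeout=3s \
  CMD bun -e "fetch('http://localhost:3000/health').then(r => process.exit(r.ok ? 0 : 1)).catch(() => process.exit(1))"
```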
--- # Node.js with Next.js URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/nodejs/nodejs-next Description: Best practices for Dockerfile for Node.js with Next.js --- title: "Node.js with Next.js" excerpt: "Best practices for Dockerfile for Node.js with Next.js" description: "Best practices for Dockerfile for Node.js with Next.js" hidden: false sidebar_position: 2 slug: "/docker-builders/golden-dockerfiles/nodejs/nodejs-next" createdAt: "2025-04-21" updatedAt: "2025-04-21" --- ### 🐳 Annotated Dockerfile for Node.js with Next.js: ```docker # Use Node.js LTS as the base image for consistency and long-term support FROM node:lts-slim AS base # Stage 1: Install dependencies only when needed FROM base AS deps # Set working directory WORKDIR /app # Copy package.json and related files COPY package.json package-lock.json* ./ # Install dependencies RUN --mount=type=cache,target=/root/.npm \ npm ci # Stage 2: Build the application FROM base AS builder WORKDIR /app # Copy dependencies COPY --from=deps /app/node_modules ./node_modules # Copy project files COPY . . 
# Next.js collects anonymous telemetry data - disable it ENV NEXT_TELEMETRY_DISABLED=1 # Build the Next.js application RUN --mount=type=cache,target=/root/.npm \ npm run build # Stage 3: Create production image FROM base AS runner WORKDIR /app # Set to production environment ENV NODE_ENV=production ENV NEXT_TELEMETRY_DISABLED=1 # Create a non-root user RUN useradd -m nextuser # Copy necessary files from builder COPY --from=builder /app/public ./public COPY --from=builder /app/.next ./.next COPY --from=builder /app/node_modules ./node_modules COPY --from=builder /app/package.json ./package.json # Set correct ownership RUN chown -R nextuser:nextuser /app # Switch to non-root user USER nextuser # Expose port EXPOSE 3000 # Run the Next.js application CMD ["npm", "start"] ``` ### 🔍 Why these are best practices: ✅ Multi-stage builds - Smaller final images: Dependencies and build tools are discarded after use, reducing container size. - Security: Fewer files and tools mean a smaller attack surface. ✅ Using npm ci instead of npm install - Deterministic builds: Ensures exact versions from package-lock.json are used. - Faster than npm install: Bypasses dependency resolution for clean installations. - CI-friendly: Designed specifically for automated environments. 
✅ Next.js specific optimizations - Standalone output mode (opt-in via `output: "standalone"` in next.config.js): produces a self-contained server bundle so the runtime image can skip most of node_modules - Static assets are properly handled and copied to the right locations - Telemetry is disabled for privacy and performance ✅ Non-root user implementation - Security best practice: Running as a non-privileged user minimizes potential security risks - Follows principle of least privilege ### 🚀 Additional Next.js-specific configurations: #### Enable standalone output In your next.config.js file, ensure you have: ```javascript module.exports = { output: "standalone", }; ``` #### Set up environment variables properly For environment variables that need to be available at build time: ```docker # In the builder stage ARG DATABASE_URL ENV DATABASE_URL=${DATABASE_URL} ``` #### Optimizing for different Next.js deployment modes For static export: ```docker # For static export (if not using API routes or server components), set # output: "export" in next.config.js; the standalone `next export` command # was removed in Next.js 14, so a plain build now writes the out/ directory FROM base AS builder # ...other steps... RUN npm run build FROM nginx:alpine COPY --from=builder /app/out /usr/share/nginx/html ``` #### Using Next.js with Docker Compose for development ```yaml version: "3" services: nextjs: build: context: . target: deps # Only build up to the deps stage for development command: npm run dev volumes: - .:/app - /app/node_modules ports: - "3000:3000" ``` By following these best practices, your Next.js applications will be containerized efficiently, securely, and with optimal performance for production deployments.
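#### Standalone runtime stage

With `output: "standalone"` enabled, the runner stage can drop the full `node_modules` copy and ship only the self-contained bundle Next.js emits — a sketch reusing the stage and user names from the annotated Dockerfile above:

```docker
FROM base AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN useradd -m nextuser
COPY --from=builder --chown=nextuser:nextuser /app/public ./public
# .next/standalone contains a pruned node_modules and a minimal server.js
COPY --from=builder --chown=nextuser:nextuser /app/.next/standalone ./
COPY --from=builder --chown=nextuser:nextuser /app/.next/static ./.next/static
USER nextuser
EXPOSE 3000
CMD ["node", "server.js"]
```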
--- # Node.js with npm URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/nodejs/nodejs-npm Description: Best practices for Dockerfile for Node.js with npm --- title: "Node.js with npm" excerpt: "Best practices for Dockerfile for Node.js with npm" description: "Best practices for Dockerfile for Node.js with npm" hidden: false sidebar_position: 2 slug: "/docker-builders/golden-dockerfiles/nodejs/nodejs-npm" createdAt: "2025-04-15" updatedAt: "2025-04-15" --- ### 🐳 Annotated Dockerfile for Node.js with npm: ```docker # Use Node.js LTS as the base image for consistency and long-term support FROM node:lts-jod AS base # Stage 1: Install production dependencies using npm FROM base AS deps # Set working directory WORKDIR /app # Copy only package definition files first COPY package.json package-lock.json* ./ # Install only production dependencies # Use npm ci for faster, reliable installations from lockfile RUN --mount=type=cache,id=npm,target=/root/.npm \ npm ci --production # Stage 2: Build the application FROM base AS build WORKDIR /app # Copy package definitions to maintain consistent build context COPY package.json package-lock.json* ./ # Install all dependencies (dev + prod) for building the app RUN --mount=type=cache,id=npm,target=/root/.npm \ npm ci # Copy entire source code COPY . . 
# Run build script defined in your package.json (generally builds into a "dist" directory) RUN npm run build # Stage 3: Create the final lightweight production image FROM base # Set working directory WORKDIR /app # Copy only production dependencies (no dev dependencies) COPY --from=deps /app/node_modules /app/node_modules # Copy compiled application output (dist directory) COPY --from=build /app/dist /app/dist # Explicitly set environment to production ENV NODE_ENV production # Default command to run your Node.js application CMD ["node", "./dist/index.js"] ``` ### 🔍 Why these are best practices: ✅ Multi-stage builds - Smaller final images: Dependencies and build tools are discarded after use, reducing container size. - Security: Fewer files and tools mean a smaller attack surface. ✅ Using npm ci instead of npm install - Deterministic builds: Ensures exact versions from package-lock.json are used. - Faster than npm install: Bypasses dependency resolution for clean installations. - CI-friendly: Designed specifically for automated environments. ✅ Caching npm modules - Faster builds: Reusing the npm cache reduces install times significantly. - Lower CI/CD overhead: Speeds up continuous integration and deployment workflows. ✅ Separating dependencies and build stages - Clear separation of concerns: Each stage serves a single purpose, making it easier to debug and optimize. - Improved cache efficiency: Changes in code don't trigger unnecessary reinstallation of unchanged dependencies. ✅ Minimal runtime image - Performance and security: Only the essential runtime code is present, limiting potential vulnerabilities. - Lower resource consumption: Optimized resource usage in production deployments. 
### 🚀 Additional Dockerfile best practices you can adopt: #### Use a non-root user For enhanced security, run your app as a non-root user: ```docker FROM base # Create a non-root user RUN useradd -m appuser WORKDIR /app COPY --from=deps /app/node_modules /app/node_modules COPY --from=build /app/dist /app/dist ENV NODE_ENV production # Switch to non-root user USER appuser CMD ["node", "./dist/index.js"] ``` #### Use HEALTHCHECK directive Allows Docker to monitor container health automatically. ```docker HEALTHCHECK --interval=30s --timeout=3s \ CMD curl -f http://localhost:3000/health || exit 1 ``` #### Use explicit .dockerignore Prevent copying unnecessary files into your image. Example .dockerignore ``` node_modules dist coverage .git Dockerfile docker-compose.yml README.md *.log ``` #### Set resource limits explicitly When deploying containers, always set CPU and memory limits to avoid resource starvation or instability. Example in Kubernetes or Docker Compose (outside Dockerfile) ```yaml resources: limits: cpu: 1000m memory: 1Gi ``` #### Consider using Distroless or Alpine images Switch to even lighter-weight base images if you're comfortable handling potential compatibility issues: ```docker FROM node:22-alpine AS base ``` Or distroless: ```docker FROM gcr.io/distroless/nodejs22-debian12 AS final ``` By following these annotations and best practices, your Docker images become faster to build, more secure, smaller, and easier to maintain—ideal for modern production workflows. 
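One caveat for the HEALTHCHECK example above: slim and distroless base images often do not ship `curl`. Node itself can probe the endpoint instead; a sketch, assuming your app serves a `/health` route on port 3000:

```docker
# Probe the /health endpoint with node instead of curl; the exec form also
# works on distroless images, which have no shell
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD ["node", "-e", "require('http').get('http://localhost:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"]
```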
--- # Node.js with pnpm URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/nodejs/nodejs-pnpm Description: Best practices for Dockerfile for Node.js with pnpm --- title: "Node.js with pnpm" excerpt: "Best practices for Dockerfile for Node.js with pnpm" description: "Best practices for Dockerfile for Node.js with pnpm" hidden: false sidebar_position: 3 slug: "/docker-builders/golden-dockerfiles/nodejs/nodejs-pnpm" createdAt: "2025-04-15" updatedAt: "2025-04-15" --- ### 🐳 Annotated Dockerfile for Node.js with pnpm: ```docker # Use Node.js LTS as the base image for consistency and long-term support FROM node:lts-jod AS base # Stage 1: Install production dependencies using pnpm FROM base AS deps # Enable corepack (manages package managers like pnpm) RUN corepack enable # Set working directory WORKDIR /app # Copy only package definition files first COPY package.json pnpm-lock.yaml ./ # Cache the pnpm store for faster dependency fetches across builds RUN --mount=type=cache,id=pnpm,target=/root/.local/share/pnpm/store \ pnpm fetch --frozen-lockfile # Install only production dependencies RUN --mount=type=cache,id=pnpm,target=/root/.local/share/pnpm/store \ pnpm install --frozen-lockfile --prod # Stage 2: Build the application FROM base AS build # Enable corepack (manages package managers like pnpm) RUN corepack enable WORKDIR /app # Copy package definitions to maintain consistent build context COPY package.json pnpm-lock.yaml ./ # Reuse cached pnpm store for quick dependency installation RUN --mount=type=cache,id=pnpm,target=/root/.local/share/pnpm/store \ pnpm fetch --frozen-lockfile # Install all dependencies (dev + prod) for building the app RUN --mount=type=cache,id=pnpm,target=/root/.local/share/pnpm/store \ pnpm install --frozen-lockfile # Copy entire source code COPY . . 
# Run build script defined in your package.json (generally builds into a "dist" directory) RUN pnpm build # Stage 3: Create the final lightweight production image FROM base # Set working directory WORKDIR /app # Copy only production dependencies (no dev dependencies) COPY --from=deps /app/node_modules /app/node_modules # Copy compiled application output (dist directory) COPY --from=build /app/dist /app/dist # Explicitly set environment to production ENV NODE_ENV production # Default command to run your Node.js application CMD ["node", "./dist/index.js"] ``` ### 🔍 Why these are best practices: ✅ Multi-stage builds - Smaller final images: Dependencies and build tools are discarded after use, reducing container size. - Security: Fewer files and tools mean a smaller attack surface. ✅ Caching pnpm store - Faster builds: Reusing a cached store reduces install times drastically, especially beneficial for large dependency trees. - Lower CI/CD overhead: Speeds up continuous integration and deployment workflows. ✅ Separating dependencies and build stages - Clear separation of concerns: Each stage serves a single purpose, making it easier to debug and optimize. - Improved cache efficiency: Changes in code don’t trigger unnecessary reinstallation of unchanged dependencies. ✅ Minimal runtime image - Performance and security: Only the essential runtime code is present, limiting potential vulnerabilities. - Lower resource consumption: Optimized resource usage in production deployments. ### 🚀 Additional Dockerfile best practices you can adopt: #### Use a non-root user For enhanced security, run your app as a non-root user: ```docker FROM base # Create a non-root user RUN useradd -m appuser WORKDIR /app COPY --from=deps /app/node_modules /app/node_modules COPY --from=build /app/dist /app/dist ENV NODE_ENV production # Switch to non-root user USER appuser CMD ["node", "./dist/index.js"] ``` #### Use HEALTHCHECK directive Allows Docker to monitor container health automatically. 
```docker
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost:3000/health || exit 1
```

#### Use explicit .dockerignore

Prevent copying unnecessary files into your image. Example .dockerignore

```
node_modules
dist
coverage
.git
Dockerfile
docker-compose.yml
README.md
*.log
```

#### Set resource limits explicitly

When deploying containers, always set CPU and memory limits to avoid resource starvation or instability. Example in Kubernetes or Docker Compose (outside Dockerfile)

```yaml
resources:
  limits:
    cpu: 1000m
    memory: 1Gi
```

#### Consider using Distroless or Alpine images

Switch to even lighter-weight base images if you're comfortable handling potential compatibility issues:

```docker
FROM node:22-alpine AS base
```

Or distroless:

```docker
FROM gcr.io/distroless/nodejs22-debian12 AS final
```

By following these annotations and best practices, your Docker images become faster to build, more secure, smaller, and easier to maintain—ideal for modern production workflows.
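`corepack enable` activates whichever pnpm version the project declares in the `packageManager` field of package.json, so pinning that field keeps builds reproducible across machines and CI. You can also pin explicitly in the Dockerfile; a sketch (the version number is only an example):

```docker
# Pin the exact pnpm version corepack activates (example version; keep it in
# sync with the "packageManager" field, e.g. "packageManager": "pnpm@9.12.0")
RUN corepack enable && corepack prepare pnpm@9.12.0 --activate
```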
--- # Node.js with Vite URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/nodejs/nodejs-vite Description: Best practices for Dockerfile for Node.js with Vite --- title: "Node.js with Vite" excerpt: "Best practices for Dockerfile for Node.js with Vite" description: "Best practices for Dockerfile for Node.js with Vite" hidden: false sidebar_position: 3 slug: "/docker-builders/golden-dockerfiles/nodejs/nodejs-vite" createdAt: "2025-04-21" updatedAt: "2025-04-21" --- ### 🐳 Annotated Dockerfile for Node.js with Vite: ```docker # Use Node.js LTS as the base image for consistency and long-term support FROM node:lts-slim AS base # Stage 1: Install dependencies FROM base AS deps # Set working directory WORKDIR /app # Copy only package definition files first COPY package.json package-lock.json* ./ # Install dependencies RUN --mount=type=cache,target=/root/.npm \ npm ci # Stage 2: Build the application FROM base AS build WORKDIR /app # Copy dependencies COPY --from=deps /app/node_modules ./node_modules # Copy source code COPY . . 
# Build the Vite application (outputs to 'dist' folder)
RUN --mount=type=cache,target=/root/.npm \
    npm run build

# Stage 3: Production image (using NGINX to serve static files)
FROM nginx:alpine

# Copy Vite build output to NGINX serve directory
COPY --from=build /app/dist /usr/share/nginx/html

# Copy custom NGINX config if needed
# COPY nginx.conf /etc/nginx/conf.d/default.conf

# Add non-root user
RUN addgroup -g 1001 -S appuser && \
    adduser -u 1001 -S appuser -G appuser

# Set permissions, including the pid file: nginx writes it at startup and
# cannot create it as a non-root user otherwise
RUN chown -R appuser:appuser /usr/share/nginx/html && \
    chmod -R 755 /usr/share/nginx/html && \
    chown -R appuser:appuser /var/cache/nginx && \
    chown -R appuser:appuser /var/log/nginx && \
    chown -R appuser:appuser /etc/nginx/conf.d && \
    touch /var/run/nginx.pid && \
    chown appuser:appuser /var/run/nginx.pid

# Set user
USER appuser

# Expose port
EXPOSE 80

# NGINX will start automatically
CMD ["nginx", "-g", "daemon off;"]
```

### 🔍 Why these are best practices for Vite:

✅ Static build output optimization
- Vite produces static files in the `dist` directory that are ideal for serving with NGINX
- No need for a Node.js runtime in production for client-side applications
- Much smaller final image size and improved security

✅ Multi-stage build approach
- The final image only contains the built assets, not any source code or dependencies
- Build tools, source code, and node_modules are not included in the production image
- Significantly reduces the attack surface and image size

✅ Dependency caching
- Speeds up repeated builds by caching the npm modules
- Improves CI/CD pipeline efficiency

✅ Non-root NGINX configuration
- Runs NGINX as a non-root user for enhanced security
- Properly sets file permissions for the nginx user

### 🚀 Additional Vite-specific configurations:

#### Environment Variables in Vite

Vite handles environment variables differently.
Only variables prefixed with `VITE_` are exposed to client-side code: ```docker # In the build stage ARG VITE_API_URL ENV VITE_API_URL=${VITE_API_URL} ``` #### Custom NGINX Configuration For SPAs (Single Page Applications), you might need this NGINX configuration: ```nginx server { listen 80; server_name _; root /usr/share/nginx/html; index index.html; # SPA routing - redirect all requests to index.html location / { try_files $uri $uri/ /index.html; } # Cache static assets location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ { expires 30d; add_header Cache-Control "public, no-transform"; } } ``` Save this as `nginx.conf` in your project and uncomment the COPY line in the Dockerfile. #### Development Setup with Docker Compose ```yaml version: "3" services: vite: build: context: . target: deps command: npm run dev -- --host 0.0.0.0 volumes: - .:/app - /app/node_modules ports: - "5173:5173" environment: - VITE_API_URL=http://localhost:8080/api ``` This approach mounts your local directory for hot-reloading during development. By following these best practices, your Vite applications will have optimized production builds with minimal container size and maximum security. 
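The `VITE_` prefix rule above can be illustrated in a few lines. This sketch mimics, in simplified form (not Vite's actual implementation), how only prefixed variables reach client-side code while everything else stays server-only:

```javascript
// Simplified model of Vite's env exposure rule: only keys with the
// configured prefix (default "VITE_") are made available to client code.
function clientExposedEnv(env, prefix = "VITE_") {
  return Object.fromEntries(
    Object.entries(env).filter(([key]) => key.startsWith(prefix))
  );
}

const exposed = clientExposedEnv({
  VITE_API_URL: "http://localhost:8080/api", // reaches the client bundle
  DATABASE_URL: "postgres://user:secret@db", // server-only, never exposed
});
console.log(exposed); // { VITE_API_URL: 'http://localhost:8080/api' }
```

This is why build-time `ARG`/`ENV` values in the Dockerfile must carry the `VITE_` prefix to be visible in the built assets, and why secrets must never be given that prefix.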
--- # Node.js with Webpack URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/nodejs/nodejs-webpack Description: Best practices for Dockerfile for Node.js with Webpack --- title: "Node.js with Webpack" excerpt: "Best practices for Dockerfile for Node.js with Webpack" description: "Best practices for Dockerfile for Node.js with Webpack" hidden: false sidebar_position: 4 slug: "/docker-builders/golden-dockerfiles/nodejs/nodejs-webpack" createdAt: "2025-04-21" updatedAt: "2025-04-21" --- ### 🐳 Annotated Dockerfile for Node.js with Webpack: ```docker # Use Node.js LTS as the base image for consistency and long-term support FROM node:lts-slim AS base # Stage 1: Install dependencies FROM base AS deps # Set working directory WORKDIR /app # Copy only package definition files first COPY package.json package-lock.json* ./ # Install dependencies RUN --mount=type=cache,target=/root/.npm \ npm ci # Stage 2: Build the application FROM base AS build WORKDIR /app # Copy dependencies COPY --from=deps /app/node_modules ./node_modules # Copy source code COPY . . 
# Build the application with Webpack
RUN --mount=type=cache,target=/root/.npm \
    npm run build

# Stage 3: Production image (using NGINX to serve static files)
FROM nginx:alpine

# Copy build output to NGINX serve directory
COPY --from=build /app/dist /usr/share/nginx/html

# Copy custom NGINX config if needed
# COPY nginx.conf /etc/nginx/conf.d/default.conf

# Add non-root user
RUN addgroup -g 1001 -S appuser && \
    adduser -u 1001 -S appuser -G appuser

# Set permissions, including the pid file: nginx writes it at startup and
# cannot create it as a non-root user otherwise
RUN chown -R appuser:appuser /usr/share/nginx/html && \
    chmod -R 755 /usr/share/nginx/html && \
    chown -R appuser:appuser /var/cache/nginx && \
    chown -R appuser:appuser /var/log/nginx && \
    chown -R appuser:appuser /etc/nginx/conf.d && \
    touch /var/run/nginx.pid && \
    chown appuser:appuser /var/run/nginx.pid

# Set user
USER appuser

# Expose port
EXPOSE 80

# NGINX will start automatically
CMD ["nginx", "-g", "daemon off;"]
```

### 🔍 Why these are best practices for Webpack:

✅ Optimized build process
- The multi-stage build approach keeps the final image size small
- Webpack bundling creates optimized assets for production

✅ Dependency separation
- Dependencies are installed in a separate stage, improving build caching
- The final image contains no Node.js dependencies at all, only the built static assets

✅ Security enhancements
- Running NGINX as a non-root user reduces security risks
- Minimal attack surface with only the build artifacts in the final image

✅ Performance tuning
- NGINX serves static files efficiently compared to Node.js servers
- Proper file ownership and permissions for the web server

### 🚀 Additional Webpack-specific configurations:

#### Webpack Configuration Optimization

For production builds, ensure your webpack.config.js includes:

```javascript
const path = require("path");
const TerserPlugin = require("terser-webpack-plugin");

module.exports = {
  mode: "production",
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "[name].[contenthash].js",
    clean: true,
  },
  optimization: {
    minimizer: [new TerserPlugin()],
    splitChunks: {
      chunks: "all",
    },
  },
};
```

#### Environment Variables in Webpack

Use the DefinePlugin to inject environment variables:

```javascript
const webpack = require("webpack");

module.exports = {
  // ...other config
  plugins: [
    new webpack.DefinePlugin({
      "process.env.API_URL": JSON.stringify(process.env.API_URL),
    }),
  ],
};
```

And in the Dockerfile:

```docker
# In the build stage
ARG API_URL
ENV API_URL=${API_URL}
```

#### Custom NGINX Configuration for SPAs

```nginx
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    # SPA routing - redirect all requests to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
        expires 30d;
        add_header Cache-Control "public, no-transform";
    }
}
```

#### Development Setup with Docker Compose

```yaml
version: "3"
services:
  webpack:
    build:
      context: .
      target: deps
    command: npm run start
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "8080:8080"
    environment:
      - API_URL=http://localhost:3000/api
```

By following these best practices, your Webpack applications will have optimized production builds with minimal container size and maximum security.
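Why the `JSON.stringify` in the DefinePlugin example matters: DefinePlugin performs a direct textual substitution in your source, so the replacement text must itself be a valid JavaScript expression. A simplified sketch (not webpack's actual implementation) of what goes wrong without it:

```javascript
// Crude stand-in for DefinePlugin's find-and-replace (illustration only)
function substitute(source, replacements) {
  let out = source;
  for (const [name, value] of Object.entries(replacements)) {
    out = out.split(name).join(value);
  }
  return out;
}

const source = "const url = process.env.API_URL;";

// With JSON.stringify, the injected text is a quoted string literal...
const good = substitute(source, {
  "process.env.API_URL": JSON.stringify("https://api.example.com"),
});
// ...without it, the raw URL is pasted in as bare code
const bad = substitute(source, {
  "process.env.API_URL": "https://api.example.com",
});

console.log(good); // const url = "https://api.example.com";

let syntaxError = false;
try {
  new Function(bad); // `https:` is parsed as a stray label, `//...` as a comment
} catch (e) {
  syntaxError = e instanceof SyntaxError;
}
console.log(syntaxError); // true: the unquoted substitution is invalid code
```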
--- # Node.js with Yarn URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/nodejs/nodejs-yarn Description: Best practices for Dockerfile for Node.js with Yarn --- title: "Node.js with Yarn" excerpt: "Best practices for Dockerfile for Node.js with Yarn" description: "Best practices for Dockerfile for Node.js with Yarn" hidden: false sidebar_position: 2 slug: "/docker-builders/golden-dockerfiles/nodejs/nodejs-yarn" createdAt: "2025-04-15" updatedAt: "2025-04-15" --- ### 🐳 Annotated Dockerfile for Node.js with Yarn: ```docker # Use Node.js LTS as the base image for consistency and long-term support FROM node:lts-jod AS base # Stage 1: Install production dependencies using Yarn FROM base AS deps # Set working directory WORKDIR /app # Copy only package definition files first COPY package.json yarn.lock .yarnrc.yml ./ # Enable corepack RUN corepack enable # Install only production dependencies # Use yarn install --immutable for deterministic builds RUN --mount=type=cache,id=yarn,target=/usr/local/share/.cache/yarn \ yarn install --immutable # Stage 2: Build the application FROM base AS build WORKDIR /app # Copy package definitions to maintain consistent build context COPY package.json yarn.lock .yarnrc.yml ./ # Enable corepack RUN corepack enable # Install all dependencies (dev + prod) for building the app RUN --mount=type=cache,id=yarn,target=/usr/local/share/.cache/yarn \ yarn install --immutable # Copy entire source code COPY . . 
# Run build script defined in your package.json
RUN yarn build

# Stage 3: Create the final lightweight production image
FROM base

# Set working directory
WORKDIR /app

# Copy installed dependencies from the deps stage
COPY --from=deps /app/node_modules /app/node_modules

# Copy compiled application output (dist directory)
COPY --from=build /app/dist /app/dist

# Explicitly set environment to production
ENV NODE_ENV=production

# Default command to run your Node.js application
CMD ["node", "./dist/index.js"]
```

### 🔍 Why these are best practices:

✅ Multi-stage builds
- Smaller final images: Dependencies and build tools are discarded after use, reducing container size.
- Security: Fewer files and tools mean a smaller attack surface.

✅ Caching Yarn modules
- Faster builds: Reusing the Yarn cache reduces install times significantly.
- Lower CI/CD overhead: Speeds up continuous integration and deployment workflows.

✅ Using the --immutable flag
- Deterministic builds: Ensures exact versions from yarn.lock are used (`--immutable` is modern Yarn's replacement for the deprecated `--frozen-lockfile`).
- Fails if yarn.lock would need to be updated, preventing unexpected changes.

✅ Separating dependencies and build stages
- Clear separation of concerns: Each stage serves a single purpose, making it easier to debug and optimize.
- Improved cache efficiency: Changes in code don't trigger unnecessary reinstallation of unchanged dependencies.

✅ Minimal runtime image
- Performance and security: Only the essential runtime code is present, limiting potential vulnerabilities.
- Lower resource consumption: Optimized resource usage in production deployments.
### 🚀 Additional Dockerfile best practices you can adopt: #### Use a non-root user For enhanced security, run your app as a non-root user: ```docker FROM base # Create a non-root user RUN useradd -m appuser WORKDIR /app COPY --from=deps /app/node_modules /app/node_modules COPY --from=build /app/dist /app/dist ENV NODE_ENV production # Switch to non-root user USER appuser CMD ["node", "./dist/index.js"] ``` #### Use HEALTHCHECK directive Allows Docker to monitor container health automatically. ```docker HEALTHCHECK --interval=30s --timeout=3s \ CMD curl -f http://localhost:3000/health || exit 1 ``` #### Use explicit .dockerignore Prevent copying unnecessary files into your image. Example .dockerignore ``` node_modules dist coverage .git Dockerfile docker-compose.yml README.md *.log ``` #### Set resource limits explicitly When deploying containers, always set CPU and memory limits to avoid resource starvation or instability. Example in Kubernetes or Docker Compose (outside Dockerfile) ```yaml resources: limits: cpu: 1000m memory: 1Gi ``` #### Consider using Distroless or Alpine images Switch to even lighter-weight base images if you're comfortable handling potential compatibility issues: ```docker FROM node:22-alpine AS base ``` Or distroless: ```docker FROM gcr.io/distroless/nodejs22-debian12 AS final ``` By following these annotations and best practices, your Docker images become faster to build, more secure, smaller, and easier to maintain—ideal for modern production workflows. 
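One caveat for the deps stage above: `yarn install --immutable` installs devDependencies as well, so the copied `node_modules` is not production-only. With modern Yarn (Berry), pruning to production dependencies is done with `yarn workspaces focus`; a sketch, assuming Yarn 4 (on Yarn 3 the `workspace-tools` plugin must be imported first):

```docker
# In the deps stage: resolve from yarn.lock but keep only production
# dependencies in node_modules (requires Yarn Berry; Yarn 3 needs the
# workspace-tools plugin)
RUN --mount=type=cache,id=yarn,target=/usr/local/share/.cache/yarn \
    yarn workspaces focus --all --production
```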
--- # PHP with Composer URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/php/php-composer Description: Best practices for Dockerfile for PHP with Composer --- title: "PHP with Composer" excerpt: "Best practices for Dockerfile for PHP with Composer" description: "Best practices for Dockerfile for PHP with Composer" hidden: false sidebar_position: 1 slug: "/docker-builders/golden-dockerfiles/php/php-composer" createdAt: "2025-04-15" updatedAt: "2025-04-15" --- ### 🐳 Annotated Dockerfile for PHP with Composer: ```docker # Stage 1: Composer dependencies FROM composer:lts AS composer # Set working directory WORKDIR /app # Copy only the files needed for composer installation COPY composer.json composer.lock ./ # Install dependencies with Composer # --no-dev for production, remove for development environments # --no-interaction for CI/CD environments # --no-progress to reduce log output RUN --mount=type=cache,target=/tmp/composer-cache \ composer install --no-dev --no-interaction --no-progress # Stage 2: PHP application runtime FROM php:8.3-fpm-alpine # Install production dependencies and common extensions RUN apk add --no-cache \ icu-libs \ libpq \ && docker-php-ext-install \ pdo_mysql \ opcache # Configure PHP for production COPY docker/php/php.ini /usr/local/etc/php/conf.d/app.ini COPY docker/php/fpm.conf /usr/local/etc/php-fpm.d/zz-app.conf # Create a non-root user to run the application RUN addgroup -g 1000 appuser && \ adduser -u 1000 -G appuser -s /bin/sh -D appuser # Set working directory WORKDIR /var/www/html # Copy application files COPY --chown=appuser:appuser . 
/var/www/html/ # Copy Composer dependencies from the composer stage COPY --from=composer --chown=appuser:appuser /app/vendor/ /var/www/html/vendor/ # Set proper permissions for storage and cache directories (for Laravel projects) RUN if [ -d "storage" ]; then \ chmod -R 775 storage bootstrap/cache; \ fi # Switch to non-root user USER appuser # Expose port 9000 for PHP-FPM EXPOSE 9000 # Set the entrypoint CMD ["php-fpm"] ``` ### 🔍 Why these are best practices: ✅ Multi-stage builds - Separates dependency installation from the runtime environment. - Uses official Composer image for dependency management. - Eliminates build tools and dev dependencies from final image. ✅ Composer optimization - Installs production dependencies only with `--no-dev`. - Uses cache mounting for faster builds. - Copies only necessary files to optimize layer caching. ✅ Alpine-based image - Reduces final image size dramatically (from ~1GB to ~100MB). - Minimizes attack surface by including only necessary packages. - Keeps application lightweight and efficient. ✅ PHP configuration optimizations - Custom php.ini settings for production environment. - Properly configured PHP-FPM for containerized deployments. - Enables OPcache for better performance. ✅ Security best practices - Runs as a non-root user to enhance container security. - Sets proper file ownership and permissions. - Explicitly installs only required PHP extensions. 
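A note on health checks for this image: PHP-FPM on port 9000 speaks FastCGI, not HTTP, so an HTTP client cannot probe it directly. A sketch using `cgi-fcgi` against FPM's ping endpoint (assumes `ping.path = /ping` is enabled in your FPM pool configuration):

```docker
# The fcgi package provides cgi-fcgi, which can speak FastCGI to /ping
RUN apk add --no-cache fcgi
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD SCRIPT_NAME=/ping SCRIPT_FILENAME=/ping REQUEST_METHOD=GET \
      cgi-fcgi -bind -connect 127.0.0.1:9000 || exit 1
```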
### 🚀 Additional Dockerfile best practices you can adopt: #### Fine-tune OPcache settings Optimize PHP performance with production-ready OPcache settings: ```docker # Add this to your php.ini configuration COPY docker/php/opcache.ini /usr/local/etc/php/conf.d/opcache.ini ``` With `opcache.ini` containing: ```ini [opcache] opcache.enable=1 opcache.revalidate_freq=0 opcache.validate_timestamps=0 opcache.max_accelerated_files=10000 opcache.memory_consumption=128 opcache.max_wasted_percentage=10 opcache.interned_strings_buffer=16 opcache.fast_shutdown=1 ``` #### Configure for Laravel/Symfony projects For Laravel applications, add Laravel-specific optimizations: ```docker # In the final stage, run Laravel optimizations RUN php artisan optimize && \ php artisan config:cache && \ php artisan route:cache && \ php artisan view:cache ``` For Symfony: ```docker # Symfony optimizations RUN APP_ENV=prod APP_DEBUG=0 php bin/console cache:warmup ``` #### Add health checks Monitor container health: ```docker HEALTHCHECK --interval=30s --timeout=3s --retries=3 \ CMD curl -f http://localhost:9000/ping || exit 1 ``` #### Use .dockerignore Exclude unnecessary files from your Docker build context: ``` .git .github vendor node_modules storage/logs/* storage/app/* storage/framework/cache/* storage/framework/sessions/* storage/framework/views/* .env* Dockerfile docker-compose.yml README.md ``` #### Environment-specific builds Use build arguments to toggle between development and production builds: ```docker ARG APP_ENV=production # For development, include dev dependencies RUN if [ "$APP_ENV" = "development" ]; then \ composer install --no-interaction; \ else \ composer install --no-dev --no-interaction --optimize-autoloader; \ fi ``` #### Add development tools conditionally Include developer tools only in development images: ```docker ARG APP_ENV=production RUN if [ "$APP_ENV" = "development" ]; then \ apk add --no-cache git zip unzip && \ pecl install xdebug && \ docker-php-ext-enable 
xdebug; \ fi ``` #### Split web server and PHP-FPM For production deployments, use separate containers for web server and PHP: ```docker # Nginx configuration in a separate container FROM nginx:alpine COPY docker/nginx/default.conf /etc/nginx/conf.d/default.conf COPY --from=app /var/www/html/public /var/www/html/public EXPOSE 80 ``` #### Implement proper init process Handle signals correctly with proper init: ```docker RUN apk add --no-cache tini ENTRYPOINT ["/sbin/tini", "--"] CMD ["php-fpm"] ``` By following these practices, you'll create Docker images for your PHP applications that are secure, efficient, and optimized for both development and production environments. These approaches help minimize build times, reduce image sizes, and provide a consistent experience across different deployment environments. --- # Python with pip URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/python/python-pip Description: Best practices for Dockerfile for Python with pip --- title: "Python with pip" excerpt: "Best practices for Dockerfile for Python with pip" description: "Best practices for Dockerfile for Python with pip" hidden: false sidebar_position: 1 slug: "/docker-builders/golden-dockerfiles/python/python-pip" createdAt: "2025-04-15" updatedAt: "2025-04-15" --- ### 🐳 Annotated Dockerfile for Python with pip: ```docker # Start with a slim Python base image - 3.12 is recommended for modern features and performance FROM python:3.12-slim-bookworm AS base # Set environment variables ENV PYTHONFAULTHANDLER=1 \ PYTHONUNBUFFERED=1 \ PYTHONDONTWRITEBYTECODE=1 \ PYTHONHASHSEED=random \ PIP_NO_CACHE_DIR=off \ PIP_DISABLE_PIP_VERSION_CHECK=on \ PIP_DEFAULT_TIMEOUT=100 # -------------------------------------- # Stage 1: Builder - installs dependencies and prepares app # -------------------------------------- FROM base AS builder # Install build dependencies RUN apt-get update && apt-get install -y --no-install-recommends \ build-essential \ && rm -rf 
/var/lib/apt/lists/* # Set up a virtual environment RUN python -m venv /opt/venv # Ensure we use the virtual environment ENV PATH="/opt/venv/bin:$PATH" # Set working directory WORKDIR /app # Copy and install requirements first for better caching COPY requirements.txt . COPY requirements-prod.txt . # Install Python dependencies with caching RUN --mount=type=cache,target=/root/.cache/pip \ pip install -r requirements-prod.txt # Copy the rest of the application code COPY . . # If you have a build step (e.g., compiling assets), run it here RUN pip install --no-deps -e . # -------------------------------------- # Stage 2: Final production image # -------------------------------------- FROM base # Create a non-root user and group RUN groupadd -r appuser && useradd -r -g appuser appuser # Set working directory WORKDIR /app # Copy only the virtual environment from the builder stage COPY --from=builder /opt/venv /opt/venv # Copy application code COPY --from=builder /app /app # Set environment variables to use virtual environment ENV PATH="/opt/venv/bin:$PATH" # Set ownership for application files RUN chown -R appuser:appuser /app # Switch to non-root user USER appuser # Expose the port your application runs on EXPOSE 8000 # Run your application CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myapp.wsgi:application"] ``` ### 🔍 Why these are best practices: ✅ Multi-stage builds - Efficiently separates the build environment from the runtime environment. - Dramatically reduces final image size by excluding build tools in the final image. - Improves security by minimizing the attack surface in your production container. ✅ Virtual environments - Isolates application dependencies from system Python packages. - Ensures consistent environment for your application. - Makes it easier to manage dependency conflicts. ✅ Dependency caching - Uses Docker's build cache to avoid redundant pip downloads. - Dramatically speeds up build time, especially in CI/CD environments. 
- Reduces network usage and build time.

✅ Environment variable optimization
- PYTHONDONTWRITEBYTECODE=1 avoids creating .pyc files, reducing image size.
- PYTHONUNBUFFERED=1 ensures Python output is sent straight to the container logs.
- PIP_DISABLE_PIP_VERSION_CHECK=on eliminates unnecessary version checks.

✅ Non-root user
- Runs the application as a non-privileged user for enhanced security.
- Follows the principle of least privilege to reduce the risk of container escape.
- Required in many enterprise Kubernetes environments.

### 🚀 Additional Dockerfile best practices you can adopt:

#### Split requirements for dev and prod

Maintain separate requirements files for different environments:

```
# requirements.txt - Base requirements
flask==2.3.2
sqlalchemy==2.0.16
pydantic==2.0.2

# requirements-dev.txt - Development requirements
-r requirements.txt
pytest==7.3.1
black==23.3.0
flake8==6.0.0

# requirements-prod.txt - Production requirements
-r requirements.txt
gunicorn==20.1.0
```

#### Add a health check

Monitor the health of your container and enable automatic recovery:

```docker
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost:8000/health || exit 1
```

#### Use .dockerignore

Exclude unnecessary files from your Docker build context:

```
__pycache__/
*.py[cod]
*$py.class
.env
.venv
env/
venv/
ENV/
.pytest_cache/
.coverage
htmlcov/
.git/
.github/
.idea/
.vscode/
*.md
!README.md
```

#### Pin exact dependency versions

For deterministic builds, pin exact versions in your requirements.txt:

```
# Good - pinned exact versions
flask==2.3.2
sqlalchemy==2.0.16

# Bad - version ranges allow untested upgrades to slip in
flask>=2.3.0
sqlalchemy~=2.0.0
```

#### Consider using a dedicated Python app server

Use a production-ready WSGI (Web Server Gateway Interface) server such as Gunicorn instead of a framework's development server:

```docker
# Install production server
RUN pip install gunicorn

# Use in your CMD
CMD ["gunicorn", "--workers", "4", "--bind", "0.0.0.0:8000", "app:app"]
```

#### Pre-compile Python code

For slightly faster startup, pre-compile your Python modules:

```docker
# Pre-compile Python bytecode
RUN python -m compileall /app
```

By following these practices, you'll create Docker images for your Python applications that are efficient, secure, and optimized for both development and production environments.

---

# Python with Poetry

URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/python/python-poetry

Description: Best practices for Dockerfile for Python with Poetry

---
title: "Python with Poetry"
excerpt: "Best practices for Dockerfile for Python with Poetry"
description: "Best practices for Dockerfile for Python with Poetry"
hidden: false
sidebar_position: 2
slug: "/docker-builders/golden-dockerfiles/python/python-poetry"
createdAt: "2025-04-15"
updatedAt: "2025-04-15"
---

### 🐳 Annotated Dockerfile for Python with Poetry:

```docker
# Start with a slim Python base image - 3.12 is recommended for modern features and performance
FROM python:3.12-slim-bookworm AS base

# Set build arguments and environment variables
ARG POETRY_VERSION=1.8.2
ENV PYTHONFAULTHANDLER=1 \
    PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PYTHONHASHSEED=random \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1 \
    # Poetry configuration
    POETRY_NO_INTERACTION=1 \
    POETRY_VIRTUALENVS_IN_PROJECT=1 \
    POETRY_HOME="/opt/poetry"

# --------------------------------------
# Stage 1: Builder - installs poetry and dependencies
# --------------------------------------
FROM base AS builder

# Re-declare the build argument inside this stage (ARGs do not carry across stage boundaries)
ARG POETRY_VERSION

# Install a pinned version of Poetry with pip
RUN pip install --no-cache-dir "poetry==$POETRY_VERSION"

# Set working directory
WORKDIR /app

# Copy only requirements-related files to optimize caching
COPY pyproject.toml poetry.lock* ./

# POETRY_VIRTUALENVS_IN_PROJECT=1 (set above) makes Poetry create the
# virtual environment at /app/.venv, which the final stage relies on

# Install dependencies using Poetry with caching
# (Only dependencies, without development dependencies)
RUN --mount=type=cache,target=/root/.cache/pypoetry \
    poetry install --only main --no-root

# Copy the rest of the application code
COPY . .

# Install the project itself
RUN --mount=type=cache,target=/root/.cache/pypoetry \
    poetry install --only main

# --------------------------------------
# Stage 2: Final production image
# --------------------------------------
FROM base

# Copy the application and its virtual environment from the builder stage
COPY --from=builder /app /app

# Set working directory
WORKDIR /app

# Set proper path to include the virtual environment
ENV PATH="/app/.venv/bin:$PATH"

# Expose the port your application runs on
EXPOSE 8000

# Run your application using the virtual environment's Python
CMD ["python", "-m", "your_package.main"]
```

### 🔍 Why these are best practices:

✅ Multi-stage builds
- Efficiently separates the build environment from the runtime environment.
- Dramatically reduces final image size by not including build tools in production.
- Improves security by minimizing the attack surface in your production container.

✅ Poetry for dependency management
- Precise, deterministic dependency resolution with lockfiles.
- Clear separation between development and production dependencies.
- Ensures identical environments across development, testing, and production.

✅ Caching Poetry dependencies
- Uses Docker's build cache effectively to avoid redundant downloads.
- Significantly speeds up build time, especially in CI/CD environments.
- Reduces network usage and dependency resolution time.

✅ Environment variable optimization
- PYTHONDONTWRITEBYTECODE=1 avoids creating .pyc files, reducing image size.
- PYTHONUNBUFFERED=1 ensures logs are output immediately, improving visibility.
- POETRY_VIRTUALENVS_IN_PROJECT=1 keeps virtual environments in the project for better portability.

✅ Minimal final container
- Smaller attack surface with fewer installed packages.
- Faster container startup and less resource usage.
- Improved security posture by excluding build tools from production.
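In the same spirit of keeping the final image minimal, the lock file can also be consumed without Poetry at all: export it to a plain requirements file and let pip do the install. A sketch (assuming Poetry 1.2–1.8, where the `export` command ships via the bundled `poetry-plugin-export`):

```docker
# Builder stage: turn poetry.lock into a pip-compatible requirements file
RUN pip install --no-cache-dir "poetry==1.8.2" && \
    poetry export -f requirements.txt --output requirements.txt --without dev && \
    pip install --no-cache-dir -r requirements.txt
```

This trades Poetry's installer features for a smaller toolchain in the build stage; the lock file remains the single source of truth either way.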
### 🚀 Additional Dockerfile best practices you can adopt:

#### Create and use a non-root user

Enhance security by running your application as a non-privileged user:

```docker
FROM base

# Create a non-root user
RUN adduser --disabled-password --gecos "" appuser

COPY --from=builder /app /app
WORKDIR /app
ENV PATH="/app/.venv/bin:$PATH"

# Change ownership and switch to non-root user
RUN chown -R appuser:appuser /app
USER appuser

EXPOSE 8000
CMD ["python", "-m", "your_package.main"]
```

#### Add a health check

Monitor the health of your container and enable automatic recovery:

```docker
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost:8000/health || exit 1
```

#### Use .dockerignore

Exclude unnecessary files from your Docker build context:

```
__pycache__/
*.py[cod]
*$py.class
.pytest_cache/
.coverage
htmlcov/
.git/
.github/
.venv/
.vscode/
*.md
!README.md
```

#### Optimize for production builds with build arguments

Use build arguments to toggle between development and production builds:

```docker
ARG ENV=production
RUN if [ "$ENV" = "production" ] ; then \
      poetry install --only main ; \
    else \
      poetry install ; \
    fi
```

#### Separate dependency installation from code changes

To further optimize build caching:

```docker
# Copy and install dependencies first
COPY pyproject.toml poetry.lock* ./
RUN --mount=type=cache,target=/root/.cache/pypoetry \
    poetry install --only main --no-root

# Then copy application code (which changes more frequently)
COPY . .
```

By following these practices, you'll create Docker images for your Python applications that are efficient, secure, and optimized for both development and production environments.
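Build arguments can also pin the base image itself, not just the Poetry version. A small sketch (the `PYTHON_VERSION` argument is illustrative; note that an `ARG` declared before `FROM` must be re-declared inside a stage before it can be used there):

```docker
ARG PYTHON_VERSION=3.12
FROM python:${PYTHON_VERSION}-slim-bookworm AS base
ARG PYTHON_VERSION
```

The argument can then be overridden at build time, e.g. `docker build --build-arg PYTHON_VERSION=3.13 -t myapp .`.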
---

# Python with uv

URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/python/python-uv

Description: Best practices for Dockerfile for Python with uv

---
title: "Python with uv"
excerpt: "Best practices for Dockerfile for Python with uv"
description: "Best practices for Dockerfile for Python with uv"
hidden: false
sidebar_position: 3
slug: "/docker-builders/golden-dockerfiles/python/python-uv"
createdAt: "2025-04-15"
updatedAt: "2025-04-15"
---

### 🐳 Annotated Dockerfile for Python with uv:

```docker
# Use a slim and modern base image: Python 3.12 on Debian Bookworm
FROM python:3.12-slim-bookworm AS base

# --------------------------------------
# Stage 1: Builder - installs dependencies and prepares environment
# --------------------------------------
FROM base AS builder

# Copy pre-built uv binary from official image
COPY --from=ghcr.io/astral-sh/uv:0.6 /uv /bin/uv

# Enable uv optimizations:
# UV_COMPILE_BYTECODE=1 compiles Python bytecode for faster startup
# UV_LINK_MODE=copy ensures dependencies are copied (isolated env)
ENV UV_COMPILE_BYTECODE=1 UV_LINK_MODE=copy

# Set the working directory in container to /app
WORKDIR /app

# Copy dependency files
COPY pyproject.toml uv.lock /app/

# Create a virtual environment first
RUN uv venv

# Install dependencies from the lock file, excluding the project itself
# for better layer caching (uv.lock is not a requirements file, so use
# `uv sync` rather than `uv pip install -r`)
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --frozen --no-install-project

# Now copy the source code
COPY . /app

# Install your project itself
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --frozen
# --------------------------------------
# Stage 2: Final production image - minimal and optimized
# --------------------------------------
FROM base

# Copy prepared app environment from builder stage
COPY --from=builder /app /app

# Add virtual environment bin folder to PATH for easy command access
ENV PATH="/app/.venv/bin:$PATH"

# Expose port 8000 for application server
EXPOSE 8000

# Default command to start your Python app
CMD ["python", "app.py"]
```

### 🔍 Why these are best practices

✅ Slim Base Images (python:3.12-slim-bookworm)
- Minimizes image size, enhancing security and reducing container startup time.
- Debian Bookworm provides modern dependencies with stable long-term support.

✅ Using uv (Ultra-fast Python package manager)
- Faster and more efficient than traditional pip.
- Built-in dependency caching significantly improves build speeds.
- Ensures reproducible builds through lock files (uv.lock), avoiding dependency drift.

✅ Multi-stage Builds
- Keeps final image minimal by excluding build-time artifacts (e.g., cache files, temporary dependencies).
- Reduces production container size, resulting in lower resource usage and faster deployments.

✅ Dependency and Source Separation
- Copying dependency-related files separately allows Docker to reuse cached layers effectively.
- Changes in source code don’t trigger unnecessary reinstallations of unchanged dependencies.

✅ Mounting Cache (--mount=type=cache)
- Dramatically reduces build time in CI/CD environments by reusing cached downloads and installed packages.

✅ Environment Variables
- UV_COMPILE_BYTECODE=1 compiles bytecode, which optimizes startup times in production.
- UV_LINK_MODE=copy isolates dependencies clearly, simplifying management and ensuring immutability.

### 🚀 Additional best practices to consider

#### Run as a Non-root User

For enhanced security, switch to a non-root user in your production container.
```docker
FROM base

RUN useradd -m appuser

COPY --from=builder /app /app
ENV PATH="/app/.venv/bin:$PATH"
WORKDIR /app

USER appuser

EXPOSE 8000
CMD ["uvicorn", "uv_docker_example:app", "--host", "0.0.0.0", "--port", "8000"]
```

#### Add Health Checks

Integrate Docker’s built-in health monitoring to enable auto-recovery mechanisms (this requires adding a /health endpoint to your Python app):

```docker
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost:8000/health || exit 1
```

#### Use .dockerignore file

Avoid accidentally copying unwanted files (e.g., logs, .git, test folders):

```
.venv
__pycache__
*.pyc
.git
tests/
docker-compose.yml
```

#### Explicit Resource Limits

Set CPU and memory limits explicitly when running containers (via Kubernetes, Docker Compose, or runtime flags). Example (Docker Compose):

```yaml
services:
  web:
    build: .
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: "512M"
```

By following these annotations and additional best practices, you’ll achieve containers that are fast to build, secure, easy to maintain, and optimized for production environments.
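A variant of the dependency step worth knowing, following the pattern from uv's own Docker guidance, bind-mounts the manifest and lock file instead of COPYing them, so they never enter an image layer at all (sketch; assumes BuildKit):

```docker
RUN --mount=type=cache,target=/root/.cache/uv \
    --mount=type=bind,source=uv.lock,target=uv.lock \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv sync --frozen --no-install-project
```

Because the files are only mounted for the duration of the RUN, a change to unrelated metadata in `pyproject.toml` still invalidates the layer, but the files themselves add no size to the image.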
---

# Ruby with Bundler

URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/ruby/ruby-bundler

Description: Best practices for Dockerfile for Ruby with Bundler

---
title: "Ruby with Bundler"
excerpt: "Best practices for Dockerfile for Ruby with Bundler"
description: "Best practices for Dockerfile for Ruby with Bundler"
hidden: false
sidebar_position: 1
slug: "/docker-builders/golden-dockerfiles/ruby/ruby-bundler"
createdAt: "2025-04-15"
updatedAt: "2025-04-15"
---

### 🐳 Annotated Dockerfile for Ruby with Bundler:

```docker
# Use a specific version of Ruby from the official Ruby image repository
FROM ruby:3.3-slim-bookworm AS base

# --------------------------------------
# Stage 1: Builder - installs dependencies and prepares the environment
# --------------------------------------
FROM base AS builder

# Install system dependencies required for gem compilation
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    git \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

# Set working directory
WORKDIR /app

# Copy Gemfile and lockfile first for better caching
COPY Gemfile Gemfile.lock ./

# Configure Bundler: skip dev/test gems, deployment mode, parallel and resilient installs
ENV BUNDLE_WITHOUT="development:test" \
    BUNDLE_DEPLOYMENT=1 \
    BUNDLE_PATH="/usr/local/bundle" \
    BUNDLE_JOBS=4 \
    BUNDLE_RETRY=3

# Install gems, then strip caches and build artifacts
# (note: a cache mount on /usr/local/bundle would keep the installed gems out
# of the image layer, so the gems are installed into a normal layer here)
RUN bundle install && \
    bundle clean --force && \
    rm -rf /usr/local/bundle/cache/*.gem && \
    find /usr/local/bundle -name "*.c" -delete && \
    find /usr/local/bundle -name "*.o" -delete

# Copy the rest of the application code
COPY . /app

# --------------------------------------
# Stage 2: Final production image
# --------------------------------------
FROM base

# Install runtime dependencies only
RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq5 \
    tzdata \
    && rm -rf /var/lib/apt/lists/*

# Match the builder's Bundler configuration so `bundle exec` resolves correctly
ENV BUNDLE_WITHOUT="development:test" \
    BUNDLE_DEPLOYMENT=1 \
    BUNDLE_PATH="/usr/local/bundle"

# Create a non-root user to run the application
RUN groupadd -r appuser && useradd -r -g appuser appuser

# Set working directory
WORKDIR /app

# Copy gems from builder stage
COPY --from=builder /usr/local/bundle /usr/local/bundle

# Copy application code
COPY --from=builder /app /app

# Set ownership for application files
RUN chown -R appuser:appuser /app

# Switch to non-root user
USER appuser

# Expose port 4567 for the sinatra application
EXPOSE 4567

# Command to run the application
CMD ["bundle", "exec", "ruby", "app.rb", "-o", "0.0.0.0"]
```

### 🔍 Why these are best practices:

✅ Multi-stage builds
- Separates build environment (with compilers and dev dependencies) from runtime environment.
- Significantly reduces final image size by not including build tools in production.
- Improves security by having fewer binaries in the production image.

✅ Bundler optimization
- BUNDLE_WITHOUT excludes development and test gems from production.
- BUNDLE_DEPLOYMENT enables deployment mode for consistent environments.
- BUNDLE_JOBS accelerates gem installation by using multiple cores.
- BUNDLE_RETRY adds resilience to network issues during installation.

✅ Cleanup operations
- Removes build artifacts and temporary files to reduce image size.
- Deletes gem caches, source files, and object files that aren't needed at runtime.
- Uses `rm -rf /var/lib/apt/lists/*` after each `apt-get install` to remove package metadata.

✅ Asset precompilation (for Rails applications)
- Precompile assets during the build phase so they are optimized and ready to serve in production.
- A dummy SECRET_KEY_BASE can be supplied during precompilation to avoid requiring real secrets at build time.
✅ Non-root user
- Runs the application as a non-privileged user for enhanced security.
- Follows container security best practices to minimize potential damage from vulnerabilities.

### 🚀 Additional Dockerfile best practices you can adopt:

#### Configure Redis/Sidekiq

For applications using background processing with Sidekiq:

```docker
# Final stage CMD for Sidekiq worker
CMD ["bundle", "exec", "sidekiq", "-C", "config/sidekiq.yml"]
```

#### Add health checks

Monitor application health and enable automatic container recovery (adjust the port to your app; the example above serves on 4567):

```docker
HEALTHCHECK --interval=30s --timeout=5s --start-period=30s \
  CMD curl -f http://localhost:4567/health || exit 1
```

#### Use .dockerignore

Exclude unnecessary files from your Docker build context:

```
.git
.github
.gitignore
log/*
tmp/*
spec/*
test/*
vendor/*
*.md
!README.md
.DS_Store
.env*
.dockerignore
docker-compose*
```

#### Configure database migrations

For Rails applications, consider adding an entrypoint script to handle migrations:

```docker
# Create entrypoint.sh
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]
```

With `entrypoint.sh`:

```bash
#!/bin/bash
set -e

# Run pending migrations if any
bundle exec rake db:migrate

# Then exec the container's main process (what's set as CMD)
exec "$@"
```

#### Optimize for boot time

For faster application startup:

```docker
# Add bootsnap for faster boot
RUN bundle exec bootsnap precompile --gemfile app/ lib/
```

#### Consider using jemalloc for better memory management

For high-traffic Ruby applications:

```docker
# Install jemalloc in the final stage
RUN apt-get update && apt-get install -y --no-install-recommends \
    libjemalloc2 \
    && rm -rf /var/lib/apt/lists/*

# Enable jemalloc
ENV LD_PRELOAD="libjemalloc.so.2"
```

By following these practices, you'll create Docker images for your Ruby applications that are efficient, secure, and optimized for production environments.
These approaches will help ensure consistent deployment, faster performance, and better resource utilization for your Ruby applications.

---

# Rust with Cargo

URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/rust/rust-cargo

Description: Best practices for Dockerfile for Rust with Cargo

---
title: "Rust with Cargo"
excerpt: "Best practices for Dockerfile for Rust with Cargo"
description: "Best practices for Dockerfile for Rust with Cargo"
hidden: false
sidebar_position: 1
slug: "/docker-builders/golden-dockerfiles/rust/rust-cargo"
createdAt: "2025-04-15"
updatedAt: "2025-04-15"
---

### 🐳 Annotated Dockerfile for Rust with Cargo:

```docker
# Use Rust official image with debian bookworm (12) as base
FROM rust:1.86-slim-bookworm AS builder

# Set build arguments for customization
ARG APP_NAME=myapp
ARG PROFILE=release

# Create a new empty shell project (its dummy main.rs lets us pre-build dependencies)
WORKDIR /usr
RUN USER=root cargo new --bin ${APP_NAME}
WORKDIR /usr/${APP_NAME}

# Copy manifests first for dependency caching
COPY Cargo.lock Cargo.toml ./

# Build only the dependencies to cache them, then drop the dummy binary's fingerprint
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/${APP_NAME}/target \
    cargo build --profile ${PROFILE} && \
    rm -rf target/${PROFILE}/.fingerprint/${APP_NAME}-*

# Copy the real source code
COPY src ./src/

# Build the application and copy the binary to an accessible location
# (the binary must be copied out because the target directory is a cache mount)
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/${APP_NAME}/target \
    cargo build --profile ${PROFILE} && \
    mkdir -p /tmp/app && \
    cp target/${PROFILE}/${APP_NAME} /tmp/app/myapp

# --------------------------------------
# Stage 2: Final minimal runtime image
# --------------------------------------
# Use Debian slim for minimal runtime image
FROM debian:bookworm-slim

# Install only runtime dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Create a non-root user and group for running the application
RUN groupadd -r appuser && useradd -r -g appuser appuser

# Create app directory
WORKDIR /app

# Copy the built binary from the temp location
COPY --from=builder /tmp/app/myapp /app/myapp

# Set ownership
RUN chown appuser:appuser /app/myapp

# Switch to non-root user
USER appuser

# Expose application port
EXPOSE 8080

# Set the entrypoint to the binary
ENTRYPOINT ["/app/myapp"]
```

### 🔍 Why these are best practices:

✅ Multi-stage builds
- Dramatically reduces final image size from ~1.5GB to ~100MB.
- Eliminates all build dependencies, toolchains, and intermediate artifacts.
- Keeps only the compiled binary in the final image for enhanced security.

✅ Cargo registry caching
- Uses Docker's build cache efficiently by caching the Cargo registry.
- Speeds up build times dramatically on iterative builds.
- Avoids re-downloading dependencies when only source code changes.

✅ Dependency layering optimization
- Builds dependencies first, separately from the application code.
- Leverages Docker's layer caching for faster rebuilds when only the app code changes.
- Only re-compiles what's necessary on subsequent builds.

✅ Profile-based optimization
- Uses Cargo's build profiles (release, dev) for different environments.
- The release profile enables compiler optimizations; LTO (Link Time Optimization) can be enabled in Cargo.toml.
- Configurable via build arguments for flexibility.

✅ Minimal runtime container
- Uses debian:slim or distroless as the final image for minimal attack surface.
- Includes only what's needed to run the binary, not build tools.
- Rust statically links its crate dependencies, minimizing runtime requirements.

### 🚀 Additional Dockerfile best practices you can adopt:

#### Use Alpine for even smaller images

If binary size is critical and your application doesn't rely on glibc:

```docker
FROM rust:1.86-alpine AS builder

# Install required build dependencies
RUN apk add --no-cache musl-dev

# ... (build steps) ...

FROM alpine:3.21
RUN apk add --no-cache ca-certificates

# ... (final stage) ...
```

#### Cross-compilation for different architectures

For multi-architecture builds:

```docker
FROM rust:1.86-slim-bookworm AS builder

# Install cross-compilation tooling
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc-aarch64-linux-gnu libc6-dev-arm64-cross && \
    rustup target add aarch64-unknown-linux-gnu

# Tell Cargo which linker to use for the target
ENV CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc

# Build for ARM64
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    cargo build --release --target aarch64-unknown-linux-gnu
```

#### Use .dockerignore

Create a proper .dockerignore file to speed up builds by excluding unnecessary files:

```
.git
.gitignore
.github
.vscode
target
Dockerfile
README.md
*.md
```

#### Configure startup health checks

Ensure your container reports health correctly (note that curl must be installed in the runtime image):

```docker
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
```

#### Enable Cargo features selectively

For configurable container builds:

```docker
ARG FEATURES=default

# Then in the build command:
RUN cargo build --profile ${PROFILE} --features ${FEATURES}
```

#### Optimize binary size further

For the absolute smallest binaries, configure the release profile:

```toml
# In Cargo.toml
[profile.release]
opt-level = 'z'   # Optimize for size
lto = true        # Enable Link Time Optimization
codegen-units = 1 # Maximize size reduction optimizations
panic = 'abort'   # Abort on panic instead of unwinding
strip = true      # Remove debug symbols
```

By following these practices, you'll create Docker images for your Rust applications that are secure, efficient, and optimized for production environments. Rust's strong compilation guarantees and performance characteristics pair perfectly with Docker's containerization benefits, resulting in robust and reliable deployments.
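If the dummy-project trick for dependency caching feels brittle, `cargo-chef` — a third-party tool built for exactly this — automates the same layering. A minimal sketch (assumes the project builds with a plain `cargo build`):

```docker
FROM rust:1.86-slim-bookworm AS chef
RUN cargo install cargo-chef
WORKDIR /app

# Compute a dependency "recipe" from the manifests
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

# Build dependencies from the recipe, then build the application
FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN cargo build --release
```

The `cook` layer only changes when the dependency set changes, so edits to application code reuse it on rebuilds.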
---

# Scala with SBT

URL: https://www.warpbuild.com/docs/ci/docker-builders/golden-dockerfiles/scala/scala-sbt

Description: Best practices for Dockerfile for Scala with SBT

---
title: "Scala with SBT"
excerpt: "Best practices for Dockerfile for Scala with SBT"
description: "Best practices for Dockerfile for Scala with SBT"
hidden: false
sidebar_position: 1
slug: "/docker-builders/golden-dockerfiles/scala/scala-sbt"
createdAt: "2025-04-15"
updatedAt: "2025-04-15"
---

### 🐳 Annotated Dockerfile for Scala with SBT:

```docker
# Stage 1: Build the application
FROM sbtscala/scala-sbt:eclipse-temurin-21.0.6_7_1.10.11_3.6.4 AS builder

# Set working directory
WORKDIR /app

# Copy SBT build definition files first
COPY build.sbt ./
COPY project/ project/

# Cache SBT dependencies using a mount cache
RUN --mount=type=cache,target=/root/.ivy2 \
    --mount=type=cache,target=/root/.cache/coursier \
    --mount=type=cache,target=/root/.sbt \
    sbt update

# Copy source code
COPY src/ src/

# Build the application
RUN --mount=type=cache,target=/root/.ivy2 \
    --mount=type=cache,target=/root/.cache/coursier \
    --mount=type=cache,target=/root/.sbt \
    sbt "set ThisBuild / test := {}" assembly

# Stage 2: Create a minimal runtime image (JRE-only, Alpine-based)
FROM eclipse-temurin:21-jre-alpine

# Create a non-root user (Alpine uses addgroup/adduser rather than groupadd/useradd)
RUN addgroup -S scalauser && adduser -S -G scalauser scalauser

# Set working directory
WORKDIR /app

# Copy the built fat JAR from the builder stage
COPY --from=builder /app/target/scala-*/app-assembly-*.jar app.jar

# Set ownership and permissions
RUN chown -R scalauser:scalauser /app

# Switch to non-root user
USER scalauser

# Expose application port
EXPOSE 8080

# Configure JVM options for containerized environments
ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -Djava.security.egd=file:/dev/./urandom"

# Run the application
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -jar app.jar" ]
```

### 🔍 Why these are best practices:

✅ Multi-stage builds
- Reduces final image size.
- Separates build environment from runtime environment.
- Eliminates SBT, Scala compiler, and build tools from production image.

✅ SBT dependency caching
- Uses Docker's build cache to avoid downloading dependencies repeatedly.
- Caches Ivy2, Coursier, and SBT directories for faster builds.
- Dramatically improves build times in CI/CD pipelines.

✅ SBT assembly for fat JAR
- Creates a single, self-contained JAR with all dependencies.
- Simplifies deployment with minimal runtime requirements.
- Ensures consistent behavior across environments.

✅ JRE-only runtime image
- Uses minimal JRE instead of full JDK for smaller runtime image.
- Reduces attack surface by excluding development tools.
- Improves startup times and resource utilization.

✅ Security best practices
- Runs the application as a non-root user.
- Follows principle of least privilege for enhanced security.
- Sets appropriate file ownership and permissions.

### 🚀 Additional Dockerfile best practices you can adopt:

#### Skip tests during build

For faster builds when tests are run separately:

```docker
# Explicitly skip tests during the build
RUN sbt "set ThisBuild / test := {}" assembly
```

#### Optimize JVM for containers

Fine-tune JVM settings for containerized environments:

```docker
ENV JAVA_OPTS="-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -XX:+UseG1GC -XX:+ExitOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError"
```

#### Add health checks

Monitor application health:

```docker
HEALTHCHECK --interval=30s --timeout=3s --start-period=30s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
```

#### Use .dockerignore

Exclude unnecessary files from your Docker build context:

```
target/
!target/*.jar
.git/
.github/
.bsp/
.idea/
.bloop/
.metals/
project/project/
project/target/
*.log
```

#### Configure for different environments

Use build arguments to customize your builds:

```docker
ARG SBT_PROFILE=production

# Pass the profile to sbt as a system property (read it in build.sbt to adjust settings)
RUN sbt -Dbuild.profile=${SBT_PROFILE} \
    "set ThisBuild / test := {}" \
    "set ThisBuild / scalacOptions ++= Seq(\"-Xfatal-warnings\")" \
    assembly
```

#### Enable GraalVM native-image for Scala

For Scala 3 projects that support native compilation:

```docker
FROM ghcr.io/graalvm/native-image-community:21 AS builder
WORKDIR /app
COPY . .

# native-image ships with this image; sbt must be installed separately,
# and the project needs a plugin such as sbt-native-image
RUN sbt nativeImage

# A scratch base requires a statically linked native binary
FROM scratch
COPY --from=builder /app/target/native-image/app /app
ENTRYPOINT ["/app"]
```

#### Implement proper signal handling

Ensure your container responds to orchestration signals:

```docker
# Add tini for proper signal handling (via apk, since the runtime image above is Alpine-based)
RUN apk add --no-cache tini

ENTRYPOINT ["/sbin/tini", "--"]
CMD ["java", "-jar", "app.jar"]
```

#### Use a distroless base image for minimal runtime

Further reduce attack surface with distroless:

```docker
FROM gcr.io/distroless/java21-debian12:nonroot
COPY --from=builder /app/target/scala-*/app-assembly-*.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

By following these practices, you'll create Docker images for your Scala applications that are secure, efficient, and optimized for both development and production environments. These approaches help minimize build times, reduce image sizes, and ensure consistent behavior across different deployment environments, leveraging Scala's JVM foundation for robust containerized applications.

---
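As an alternative to hand-writing the Dockerfile, sbt-native-packager's Docker plugin can generate an image straight from the build definition. A sketch (plugin coordinates and version are illustrative — check the plugin's current release):

```
// project/plugins.sbt
addSbtPlugin("com.github.sbt" % "sbt-native-packager" % "1.10.4")

// build.sbt
enablePlugins(JavaAppPackaging, DockerPlugin)
dockerBaseImage := "eclipse-temurin:21-jre-alpine"
```

Running `sbt Docker/publishLocal` then builds the image into the local Docker daemon, with the packaging details derived from the sbt project itself.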