Testing Environment Setup

In this tutorial, you’ll create isolated testing environments for running QA, integration, and end-to-end tests with realistic data and infrastructure.

Time to complete: 30-40 minutes

Prerequisites:

  • Teabar account
  • Application with test suites
  • Basic understanding of testing concepts

What You’ll Build

A complete testing infrastructure with:

  1. Isolated test environments with production-like setup
  2. Seeded test data for consistent results
  3. Parallel test execution support
  4. Test result reporting and artifacts

Step 1: Create the Test Blueprint

Define a blueprint that mirrors your production environment:

# blueprints/testing.yaml
name: test-environment
version: "1.0"
description: Isolated testing environment with seeded data

parameters:
  test_suite:
    type: string
    default: all
    enum: [all, unit, integration, e2e, performance]
  seed_data:
    type: boolean
    default: true
  parallel_workers:
    type: integer
    default: 4

components:
  app:
    image: ${APP_IMAGE:-myapp:test}
    ports:
      - 3000:3000
    environment:
      NODE_ENV: test
      DATABASE_URL: postgres://postgres:postgres@database:5432/test_db
      REDIS_URL: redis://cache:6379
      LOG_LEVEL: debug
    resources:
      cpu: 2
      memory: 4G
    health_check:
      type: http
      path: /health
      port: 3000

  database:
    image: postgres:15
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: test_db
    volumes:
      - db-data:/var/lib/postgresql/data
    health_check:
      type: tcp
      port: 5432

  cache:
    image: redis:7-alpine
    health_check:
      type: tcp
      port: 6379

  # Test services
  mailhog:
    image: mailhog/mailhog:latest
    ports:
      - 1025:1025  # SMTP
      - 8025:8025  # Web UI

  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    ports:
      - 9000:9000
      - 9001:9001
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin

  selenium:
    image: selenium/standalone-chrome:latest
    ports:
      - 4444:4444
    shm_size: 2g

volumes:
  db-data:

hooks:
  post_create:
    - name: Seed database
      condition: ${seed_data}
      command: |
        npm run db:migrate
        npm run db:seed:test

expose:
  - component: app
    port: 3000
    public: true
  - component: mailhog
    port: 8025
    public: true
    path: /mail
  - component: minio
    port: 9001
    public: true
    path: /storage

Step 2: Create Test Data Seeds

Set up consistent test data:

// seeds/test-data.js
const { faker } = require('@faker-js/faker');

// Use fixed seed for reproducible data
faker.seed(12345);

module.exports = {
  users: [
    {
      id: 'user-1',
      email: '[email protected]',
      password: 'testpassword123',
      role: 'admin'
    },
    {
      id: 'user-2', 
      email: '[email protected]',
      password: 'testpassword123',
      role: 'user'
    },
    // Generate additional users
    ...Array.from({ length: 100 }, (_, i) => ({
      id: `user-${i + 3}`,
      email: faker.internet.email(),
      password: 'testpassword123',
      role: 'user'
    }))
  ],
  
  products: Array.from({ length: 50 }, (_, i) => ({
    id: `product-${i + 1}`,
    name: faker.commerce.productName(),
    price: faker.commerce.price(),
    category: faker.commerce.department()
  })),
  
  orders: Array.from({ length: 200 }, (_, i) => ({
    id: `order-${i + 1}`,
    userId: `user-${faker.number.int({ min: 1, max: 102 })}`,
    total: faker.commerce.price(),
    status: faker.helpers.arrayElement(['pending', 'processing', 'shipped', 'delivered'])
  }))
};
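The `faker.seed(12345)` call is what makes this file deterministic: the same seed always produces the same generated records. That guarantee is the same one any seeded PRNG provides; here is a self-contained sketch using mulberry32 (illustrative only, not what faker uses internally):

```javascript
// Minimal mulberry32 seeded PRNG: the same seed yields the same sequence.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6D2B79F5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Two generators seeded identically produce identical "test data".
const genA = mulberry32(12345);
const genB = mulberry32(12345);
const idsA = Array.from({ length: 3 }, () => Math.floor(genA() * 1000));
const idsB = Array.from({ length: 3 }, () => Math.floor(genB() * 1000));
console.log(JSON.stringify(idsA) === JSON.stringify(idsB)); // true
```

Because every CI run regenerates the same records, test assertions can rely on specific values instead of fuzzy matching.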

Step 3: Configure Test Runners

// jest.config.js
module.exports = {
  testEnvironment: 'node',
  setupFilesAfterEnv: ['./test/setup.js'],
  testTimeout: 30000,
  
  // Use Teabar environment URL
  globals: {
    BASE_URL: process.env.TEABAR_ENV_URL || 'http://localhost:3000'
  },
  
  // Parallel execution
  maxWorkers: Number(process.env.PARALLEL_WORKERS) || 4,
  
  // Coverage
  collectCoverage: true,
  coverageDirectory: './coverage',
  coverageReporters: ['text', 'lcov', 'html'],
  
  // Reporters
  reporters: [
    'default',
    ['jest-junit', {
      outputDirectory: './test-results',
      outputName: 'junit.xml'
    }]
  ]
};
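The CI pipeline in the next step runs `npm run test:unit` and `npm run test:integration`, and the blueprint's post-create hook calls `db:migrate` and `db:seed:test`. One way those scripts might be wired up in `package.json` (the file paths and script bodies here are assumptions; adjust to your project layout):

```json
{
  "scripts": {
    "test:unit": "jest test/unit",
    "test:integration": "jest test/integration",
    "db:migrate": "node scripts/migrate.js",
    "db:seed:test": "node scripts/seed.js"
  }
}
```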

// test/setup.js
const { Teabar } = require('@teabar/sdk');

beforeAll(async () => {
  // Wait for environment to be healthy
  if (process.env.TEABAR_ENV_NAME) {
    const teabar = new Teabar();
    await teabar.environments.waitHealthy(process.env.TEABAR_ENV_NAME, {
      timeout: 60000
    });
  }
});
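`waitHealthy` above delegates polling to the SDK. If you would rather not depend on it, the same behavior is easy to sketch with plain HTTP polling (the function name and retry policy below are assumptions, not the SDK's implementation; the `get` function is injected so the helper is unit-testable with a stub):

```javascript
// Poll a health endpoint until it returns 200 or the timeout elapses.
async function waitHealthy(get, url, { timeout = 60000, interval = 2000 } = {}) {
  const deadline = Date.now() + timeout;
  let attempts = 0;
  while (Date.now() < deadline) {
    attempts += 1;
    try {
      const res = await get(url);
      if (res.status === 200) return attempts;
    } catch (_) {
      // Environment not reachable yet; keep polling.
    }
    await new Promise((r) => setTimeout(r, interval));
  }
  throw new Error(`environment at ${url} not healthy after ${timeout}ms`);
}

// Example with a stub that fails twice, then succeeds.
const responses = [{ status: 503 }, { status: 503 }, { status: 200 }];
const stub = async () => responses.shift() ?? { status: 200 };

waitHealthy(stub, 'http://localhost:3000/health', { interval: 10 })
  .then((attempts) => console.log(`healthy after ${attempts} attempts`)); // 3 attempts
```

In a real setup you would pass the global `fetch` (Node 18+) as `get` and the environment's `/health` URL.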

Step 4: Create CI Pipeline

# .github/workflows/test.yml
name: Test Suite

on:
  pull_request:
  push:
    branches: [main]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npm run test:unit
      - uses: actions/upload-artifact@v4
        with:
          name: unit-coverage
          path: coverage/

  integration-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Teabar CLI
        run: curl -fsSL https://get.teabar.dev | sh

      - name: Authenticate with Keycloak
        run: |
          TOKEN=$(curl -s -X POST "https://auth.bcp.technology/realms/teabar/protocol/openid-connect/token" \
            -d "client_id=${{ secrets.KEYCLOAK_CLIENT_ID }}" \
            -d "client_secret=${{ secrets.KEYCLOAK_CLIENT_SECRET }}" \
            -d "grant_type=client_credentials" | jq -r '.access_token')
          teabar auth set-token "$TOKEN"

      - name: Create Test Environment
        id: env
        run: |
          ENV_NAME="test-${{ github.run_id }}-integration"
          teabar env create $ENV_NAME \
            --blueprint testing \
            --var test_suite=integration \
            --var seed_data=true \
            --wait
          
          URL=$(teabar env info $ENV_NAME --output json | jq -r '.url')
          echo "url=$URL" >> $GITHUB_OUTPUT
          echo "name=$ENV_NAME" >> $GITHUB_OUTPUT

      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      
      - name: Run Integration Tests
        env:
          TEABAR_ENV_URL: ${{ steps.env.outputs.url }}
          TEABAR_ENV_NAME: ${{ steps.env.outputs.name }}
        run: npm run test:integration

      - name: Cleanup
        if: always()
        run: teabar env delete ${{ steps.env.outputs.name }} --yes

      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: integration-results
          path: test-results/

  e2e-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        browser: [chromium, firefox, webkit]
    steps:
      - uses: actions/checkout@v4
      - name: Install Teabar CLI
        run: curl -fsSL https://get.teabar.dev | sh

      - name: Create Test Environment
        id: env
        run: |
          ENV_NAME="test-${{ github.run_id }}-e2e-${{ matrix.browser }}"
          teabar env create $ENV_NAME \
            --blueprint testing \
            --var test_suite=e2e \
            --var seed_data=true \
            --wait
          
          URL=$(teabar env info $ENV_NAME --output json | jq -r '.url')
          echo "url=$URL" >> $GITHUB_OUTPUT
          echo "name=$ENV_NAME" >> $GITHUB_OUTPUT

      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npx playwright install --with-deps ${{ matrix.browser }}
      
      - name: Run E2E Tests
        env:
          TEABAR_ENV_URL: ${{ steps.env.outputs.url }}
        run: npx playwright test --project=${{ matrix.browser }}

      - name: Cleanup
        if: always()
        run: teabar env delete ${{ steps.env.outputs.name }} --yes

      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: e2e-results-${{ matrix.browser }}
          path: |
            playwright-report/
            test-results/

Step 5: Parallel Test Execution

Run tests in parallel across multiple environments:

#!/bin/bash
# run-parallel-tests.sh

SHARDS=4
RUN_ID=$(date +%s)

# Create environments for each shard
for i in $(seq 1 $SHARDS); do
  teabar env create "test-$RUN_ID-shard-$i" \
    --blueprint testing \
    --var parallel_workers=1 \
    --async
done

# Wait for all environments
for i in $(seq 1 $SHARDS); do
  teabar env wait "test-$RUN_ID-shard-$i" --healthy
done

# Run tests in parallel
pids=()
for i in $(seq 1 $SHARDS); do
  URL=$(teabar env info "test-$RUN_ID-shard-$i" --output json | jq -r '.url')
  
  TEABAR_ENV_URL=$URL npx playwright test --shard=$i/$SHARDS &
  pids+=($!)
done

# Wait for all tests and record failures
FAIL=0
for pid in "${pids[@]}"; do
  wait "$pid" || FAIL=1
done

# Cleanup
for i in $(seq 1 $SHARDS); do
  teabar env delete "test-$RUN_ID-shard-$i" --yes &
done
wait

# Propagate any shard failure to the caller
exit $FAIL

Step 6: Test Data Management

Checkpoint-Based Data

Use checkpoints for consistent test data:

# Create a checkpoint with known-good test data
teabar env create test-data-golden --blueprint testing --var seed_data=true
# ... verify data is correct ...
teabar checkpoint create test-data-golden --name "golden-data-v1"

# Use checkpoint for tests
teabar env create test-run-123 --from-checkpoint golden-data-v1

Data Reset Between Tests

// test/helpers/reset.js
const { Teabar } = require('@teabar/sdk');

async function resetTestData() {
  const teabar = new Teabar();
  const envName = process.env.TEABAR_ENV_NAME;
  
  // Restore from checkpoint
  await teabar.checkpoints.restore(envName, 'clean-state');
}

module.exports = { resetTestData };

Step 7: Test Artifacts and Reporting

Collect and store test artifacts:

# teabar.yaml
hooks:
  post_test:
    - name: Collect artifacts
      command: |
        mkdir -p /artifacts
        cp -r /app/coverage /artifacts/
        cp -r /app/test-results /artifacts/
        cp -r /app/playwright-report /artifacts/
        
    - name: Upload to storage
      command: |
        teabar artifacts upload /artifacts \
          --env ${ENV_NAME} \
          --run ${RUN_ID}

View artifacts:

# List artifacts for a test run
teabar artifacts list --run test-123

# Download artifacts
teabar artifacts download --run test-123 --output ./results

Advanced: Performance Testing

Add performance testing to your environment:

# blueprints/performance-testing.yaml
components:
  # ... existing components ...
  
  k6:
    image: grafana/k6:latest
    volumes:
      - ./performance:/scripts
    
  influxdb:
    image: influxdb:2.7
    ports:
      - 8086:8086
    environment:
      DOCKER_INFLUXDB_INIT_MODE: setup
      DOCKER_INFLUXDB_INIT_USERNAME: admin
      DOCKER_INFLUXDB_INIT_PASSWORD: adminpassword
      DOCKER_INFLUXDB_INIT_ORG: testing
      DOCKER_INFLUXDB_INIT_BUCKET: k6

  grafana:
    image: grafana/grafana:latest
    ports:
      - 3001:3000
    environment:
      GF_AUTH_ANONYMOUS_ENABLED: "true"
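The k6 component mounts `./performance` as `/scripts`, so the run command below expects a `/scripts/load-test.js` inside the container. One possible shape for that script; the endpoint, load profile, and thresholds are assumptions, and it executes under the k6 runtime, not Node:

```javascript
// performance/load-test.js (sketch) — run by k6, not Node.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 20,            // 20 virtual users
  duration: '2m',     // for two minutes
  thresholds: {
    http_req_duration: ['p(95)<500'],  // 95% of requests under 500ms
    http_req_failed: ['rate<0.01'],    // less than 1% errors
  },
};

export default function () {
  const res = http.get(`${__ENV.BASE_URL || 'http://app:3000'}/health`);
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```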

Run performance tests:

teabar exec test-env --component k6 -- \
  k6 run /scripts/load-test.js \
    --out influxdb=http://influxdb:8086/k6

Summary

You’ve built a complete testing infrastructure that:

  • Creates isolated environments for each test run
  • Seeds consistent test data
  • Supports parallel test execution
  • Collects and stores test artifacts
  • Integrates with CI/CD pipelines

Next Steps
