AI-Assisted Backend Development with Go and TypeScript

An honest assessment of AI coding assistants for Go and TypeScript backend work — where they save real time, where they fall short, and how to integrate them without losing your edge.

AI · Development · Productivity

I’ve been using AI coding assistants daily for over a year now, primarily for Go and TypeScript backend work. The productivity gains are real but uneven — AI is excellent at boilerplate, test generation, and code review, but still struggles with complex architecture decisions and subtle concurrency bugs. This post is an honest assessment of what works, what doesn’t, and how I’ve integrated these tools into my workflow without becoming dependent on them.

Where AI Actually Helps: Go

Go’s simplicity and strong typing make it particularly well-suited for AI assistance. The language has few ways to express the same thing, so AI suggestions tend to be idiomatic. Here’s where I get the most value.

Boilerplate Generation

This is the killer use case. A repository implementation for a Go service is 90% predictable — CRUD operations, error handling, context propagation. I describe the entity and get production-quality code back:

// Example: AI-generated repository implementation
// Input: "Create a PostgreSQL repository for User entity with CRUD operations"

// user_repository.go
package repository

import (
	"context"
	"database/sql"
	"errors"
	"time"

	"github.com/google/uuid"
	"github.com/yourorg/yourapp/domain"
)

var (
	ErrUserNotFound = errors.New("user not found")
)

type UserRepository struct {
	db *sql.DB
}

func NewUserRepository(db *sql.DB) *UserRepository {
	return &UserRepository{
		db: db,
	}
}

func (r *UserRepository) Create(ctx context.Context, user *domain.User) error {
	query := `
		INSERT INTO users (id, email, name, created_at, updated_at)
		VALUES ($1, $2, $3, $4, $5)
	`
	
	_, err := r.db.ExecContext(
		ctx,
		query,
		user.ID,
		user.Email,
		user.Name,
		user.CreatedAt,
		user.UpdatedAt,
	)
	
	return err
}

func (r *UserRepository) GetByID(ctx context.Context, id uuid.UUID) (*domain.User, error) {
	query := `
		SELECT id, email, name, created_at, updated_at
		FROM users
		WHERE id = $1
	`
	
	var user domain.User
	err := r.db.QueryRowContext(ctx, query, id).Scan(
		&user.ID,
		&user.Email,
		&user.Name,
		&user.CreatedAt,
		&user.UpdatedAt,
	)
	
	if errors.Is(err, sql.ErrNoRows) {
		return nil, ErrUserNotFound
	}
	
	if err != nil {
		return nil, err
	}
	
	return &user, nil
}

func (r *UserRepository) Update(ctx context.Context, user *domain.User) error {
	query := `
		UPDATE users
		SET email = $2, name = $3, updated_at = $4
		WHERE id = $1
	`
	
	user.UpdatedAt = time.Now()
	
	result, err := r.db.ExecContext(
		ctx,
		query,
		user.ID,
		user.Email,
		user.Name,
		user.UpdatedAt,
	)
	
	if err != nil {
		return err
	}
	
	rowsAffected, err := result.RowsAffected()
	if err != nil {
		return err
	}
	
	if rowsAffected == 0 {
		return ErrUserNotFound
	}
	
	return nil
}

func (r *UserRepository) Delete(ctx context.Context, id uuid.UUID) error {
	query := `
		DELETE FROM users
		WHERE id = $1
	`
	
	result, err := r.db.ExecContext(ctx, query, id)
	if err != nil {
		return err
	}
	
	rowsAffected, err := result.RowsAffected()
	if err != nil {
		return err
	}
	
	if rowsAffected == 0 {
		return ErrUserNotFound
	}
	
	return nil
}

This code follows Go conventions, uses context properly, handles sql.ErrNoRows, and uses parameterized queries. I’d accept this as-is in a code review. The time savings on a 20-entity service are substantial — maybe 2-3 days of mechanical typing.

Test Generation

This is where AI shines brightest. Writing tests is repetitive, and AI handles the table-driven pattern well:

// Example: AI-generated tests for the UserRepository
// Input: "Write tests for the UserRepository"

// user_repository_test.go
package repository

import (
	"context"
	"database/sql"
	"testing"
	"time"

	"github.com/DATA-DOG/go-sqlmock"
	"github.com/google/uuid"
	"github.com/stretchr/testify/assert"
	"github.com/yourorg/yourapp/domain"
)

func setupMockDB(t *testing.T) (*sql.DB, sqlmock.Sqlmock) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("Error creating mock database: %v", err)
	}
	
	return db, mock
}

func TestUserRepository_Create(t *testing.T) {
	// Setup
	db, mock := setupMockDB(t)
	defer db.Close()
	
	repo := NewUserRepository(db)
	ctx := context.Background()
	
	userID := uuid.New()
	now := time.Now()
	user := &domain.User{
		ID:        userID,
		Email:     "test@example.com",
		Name:      "Test User",
		CreatedAt: now,
		UpdatedAt: now,
	}
	
	// Expectations
	mock.ExpectExec("INSERT INTO users").
		WithArgs(userID, "test@example.com", "Test User", now, now).
		WillReturnResult(sqlmock.NewResult(1, 1))
	
	// Execute
	err := repo.Create(ctx, user)
	
	// Verify
	assert.NoError(t, err)
	assert.NoError(t, mock.ExpectationsWereMet())
}

func TestUserRepository_GetByID(t *testing.T) {
	// Setup
	db, mock := setupMockDB(t)
	defer db.Close()
	
	repo := NewUserRepository(db)
	ctx := context.Background()
	
	userID := uuid.New()
	now := time.Now()
	
	// Expectations
	rows := sqlmock.NewRows([]string{"id", "email", "name", "created_at", "updated_at"}).
		AddRow(userID, "test@example.com", "Test User", now, now)
	
	mock.ExpectQuery("SELECT (.+) FROM users WHERE id = \\$1").
		WithArgs(userID).
		WillReturnRows(rows)
	
	// Execute
	user, err := repo.GetByID(ctx, userID)
	
	// Verify
	assert.NoError(t, err)
	assert.NotNil(t, user)
	assert.Equal(t, userID, user.ID)
	assert.Equal(t, "test@example.com", user.Email)
	assert.Equal(t, "Test User", user.Name)
	assert.Equal(t, now, user.CreatedAt)
	assert.Equal(t, now, user.UpdatedAt)
	assert.NoError(t, mock.ExpectationsWereMet())
}

func TestUserRepository_GetByID_NotFound(t *testing.T) {
	// Setup
	db, mock := setupMockDB(t)
	defer db.Close()
	
	repo := NewUserRepository(db)
	ctx := context.Background()
	
	userID := uuid.New()
	
	// Expectations
	mock.ExpectQuery("SELECT (.+) FROM users WHERE id = \\$1").
		WithArgs(userID).
		WillReturnError(sql.ErrNoRows)
	
	// Execute
	user, err := repo.GetByID(ctx, userID)
	
	// Verify
	assert.Error(t, err)
	assert.Equal(t, ErrUserNotFound, err)
	assert.Nil(t, user)
	assert.NoError(t, mock.ExpectationsWereMet())
}

func TestUserRepository_Update(t *testing.T) {
	// Setup
	db, mock := setupMockDB(t)
	defer db.Close()
	
	repo := NewUserRepository(db)
	ctx := context.Background()
	
	userID := uuid.New()
	now := time.Now()
	user := &domain.User{
		ID:        userID,
		Email:     "updated@example.com",
		Name:      "Updated User",
		CreatedAt: now,
		UpdatedAt: now,
	}
	
	// Expectations
	mock.ExpectExec("UPDATE users SET").
		WithArgs(userID, "updated@example.com", "Updated User", sqlmock.AnyArg()).
		WillReturnResult(sqlmock.NewResult(0, 1))
	
	// Execute
	err := repo.Update(ctx, user)
	
	// Verify
	assert.NoError(t, err)
	assert.NoError(t, mock.ExpectationsWereMet())
}

func TestUserRepository_Delete(t *testing.T) {
	// Setup
	db, mock := setupMockDB(t)
	defer db.Close()
	
	repo := NewUserRepository(db)
	ctx := context.Background()
	
	userID := uuid.New()
	
	// Expectations
	mock.ExpectExec("DELETE FROM users WHERE id = \\$1").
		WithArgs(userID).
		WillReturnResult(sqlmock.NewResult(0, 1))
	
	// Execute
	err := repo.Delete(ctx, userID)
	
	// Verify
	assert.NoError(t, err)
	assert.NoError(t, mock.ExpectationsWereMet())
}

These tests correctly use go-sqlmock, test both happy and error paths, and follow the arrange-act-assert pattern. The one thing I always review is mock expectations — AI sometimes generates overly permissive matchers that don’t actually verify the query content.

Documentation Generation

The third high-value use case. AI generates Go doc comments that are accurate and follow standard conventions:

// Example: AI-generated documentation for a service
// Input: "Document this service"

// user_service.go
package service

import (
	"context"

	"github.com/google/uuid"
	"github.com/yourorg/yourapp/domain"
	"github.com/yourorg/yourapp/repository"
)

// Note: method bodies below are elided for brevity. A real implementation
// would import "errors", "time", and "golang.org/x/crypto/bcrypt" (plus an
// HMAC-SHA256 pepper stage in front of bcrypt — see the password hashing
// section). They're omitted here so the snippet compiles as shown.

// UserService provides business logic for user management.
// It handles operations such as registration, authentication,
// and profile management while enforcing business rules.
type UserService struct {
	repo repository.UserRepository
}

// NewUserService creates a new UserService with the provided repository.
// The repository is used for persistence operations.
//
// Parameters:
//   - repo: An implementation of UserRepository for data access
//
// Returns:
//   - A new UserService instance
func NewUserService(repo repository.UserRepository) *UserService {
	return &UserService{
		repo: repo,
	}
}

// Register creates a new user account with the provided information.
// It validates the input, hashes the password, and stores the user.
//
// Parameters:
//   - ctx: Context for the operation
//   - email: User's email address (must be unique)
//   - name: User's full name
//   - password: User's password (will be hashed before storage)
//
// Returns:
//   - The created user or an error if registration fails
//
// Errors:
//   - ErrEmailAlreadyExists: If the email is already registered
//   - ErrInvalidEmail: If the email format is invalid
//   - ErrInvalidPassword: If the password doesn't meet requirements
func (s *UserService) Register(ctx context.Context, email, name, password string) (*domain.User, error) {
	// Implementation elided — see note above.
	return nil, nil
}

// Authenticate verifies user credentials and returns the user if valid.
//
// Parameters:
//   - ctx: Context for the operation
//   - email: User's email address
//   - password: User's password (will be compared with stored hash)
//
// Returns:
//   - The authenticated user or an error if authentication fails
//
// Errors:
//   - ErrInvalidCredentials: If the email or password is incorrect
func (s *UserService) Authenticate(ctx context.Context, email, password string) (*domain.User, error) {
	// Implementation elided — see note above.
	return nil, nil
}

// UpdateProfile updates a user's profile information.
//
// Parameters:
//   - ctx: Context for the operation
//   - id: ID of the user to update
//   - name: New name for the user
//
// Returns:
//   - The updated user or an error if the update fails
//
// Errors:
//   - ErrUserNotFound: If the user doesn't exist
func (s *UserService) UpdateProfile(ctx context.Context, id uuid.UUID, name string) (*domain.User, error) {
	// Implementation elided — see note above.
	return nil, nil
}

I wouldn’t normally write this level of documentation for every function, but AI makes it free. The Errors section is particularly useful — it documents the contract that callers need to handle.

Where AI Actually Helps: TypeScript

TypeScript’s rich type system gives AI more to work with. Type inference, generics, and discriminated unions all flow naturally from the type context.

Type Definition Generation

Describe your domain, get comprehensive types back. This is faster than writing them by hand and the AI tends to include edge cases you’d forget:

// Example: AI-generated type definitions
// Input: "Create types for an e-commerce system with products, orders, and customers"

// types.ts
/**
 * Represents a product in the e-commerce system
 */
export interface Product {
  id: string;
  name: string;
  description: string;
  price: number;
  category: ProductCategory;
  images: string[];
  inventory: {
    quantity: number;
    reserved: number;
    available: number;
  };
  attributes: Record<string, string>;
  createdAt: Date;
  updatedAt: Date;
}

/**
 * Product categories
 */
export enum ProductCategory {
  Electronics = 'ELECTRONICS',
  Clothing = 'CLOTHING',
  Home = 'HOME',
  Beauty = 'BEAUTY',
  Books = 'BOOKS',
  Other = 'OTHER',
}

/**
 * Represents a customer in the e-commerce system
 */
export interface Customer {
  id: string;
  email: string;
  name: {
    first: string;
    last: string;
  };
  shippingAddresses: Address[];
  billingAddresses: Address[];
  paymentMethods: PaymentMethod[];
  createdAt: Date;
  updatedAt: Date;
}

/**
 * Represents a physical address
 */
export interface Address {
  id: string;
  line1: string;
  line2?: string;
  city: string;
  state: string;
  postalCode: string;
  country: string;
  isDefault: boolean;
}

/**
 * Represents a payment method
 */
export interface PaymentMethod {
  id: string;
  type: PaymentMethodType;
  isDefault: boolean;
  // Only one of these will be populated based on type
  creditCard?: {
    last4: string;
    brand: string;
    expiryMonth: number;
    expiryYear: number;
  };
  paypal?: {
    email: string;
  };
}

/**
 * Payment method types
 */
export enum PaymentMethodType {
  CreditCard = 'CREDIT_CARD',
  PayPal = 'PAYPAL',
}

/**
 * Represents an order in the e-commerce system
 */
export interface Order {
  id: string;
  customerId: string;
  items: OrderItem[];
  shippingAddress: Address;
  billingAddress: Address;
  payment: {
    methodId: string;
    amount: number;
    currency: string;
    status: PaymentStatus;
  };
  status: OrderStatus;
  subtotal: number;
  tax: number;
  shipping: number;
  total: number;
  createdAt: Date;
  updatedAt: Date;
}

/**
 * Represents an item within an order
 */
export interface OrderItem {
  productId: string;
  quantity: number;
  unitPrice: number;
  subtotal: number;
}

/**
 * Order status values
 */
export enum OrderStatus {
  Created = 'CREATED',
  PaymentPending = 'PAYMENT_PENDING',
  Paid = 'PAID',
  Processing = 'PROCESSING',
  Shipped = 'SHIPPED',
  Delivered = 'DELIVERED',
  Cancelled = 'CANCELLED',
  Refunded = 'REFUNDED',
}

/**
 * Payment status values
 */
export enum PaymentStatus {
  Pending = 'PENDING',
  Authorized = 'AUTHORIZED',
  Captured = 'CAPTURED',
  Refunded = 'REFUNDED',
  Failed = 'FAILED',
}

Notice the inventory.available computed field, the type-tagged PaymentMethod, and the comprehensive enum values. Strictly speaking, PaymentMethod with its optional creditCard/paypal fields is a tagged shape rather than a true discriminated union; a union of interfaces keyed on type would let the compiler enforce exhaustiveness. I’d still review these against the actual domain — AI sometimes invents plausible-looking fields that don’t exist — but the structural work is solid.
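For reference, here is how I'd tighten PaymentMethod into a true discriminated union. This is a sketch mirroring the generated types above; describePayment is an illustrative helper, not part of the original output:

```typescript
// The `type` field is the discriminant: TypeScript narrows each switch
// branch, and the `never` default turns a forgotten variant into a
// compile error.
enum PaymentMethodType {
  CreditCard = 'CREDIT_CARD',
  PayPal = 'PAYPAL',
}

interface CreditCardPayment {
  id: string;
  type: PaymentMethodType.CreditCard;
  last4: string;
  brand: string;
}

interface PayPalPayment {
  id: string;
  type: PaymentMethodType.PayPal;
  email: string;
}

type PaymentMethod = CreditCardPayment | PayPalPayment;

function describePayment(pm: PaymentMethod): string {
  switch (pm.type) {
    case PaymentMethodType.CreditCard:
      // Narrowed to CreditCardPayment: last4/brand are available here.
      return `${pm.brand} ending in ${pm.last4}`;
    case PaymentMethodType.PayPal:
      // Narrowed to PayPalPayment: email is available here.
      return `PayPal (${pm.email})`;
    default: {
      // Exhaustiveness check: adding a variant breaks compilation here.
      const unhandled: never = pm;
      throw new Error(`unhandled payment method: ${unhandled}`);
    }
  }
}

console.log(
  describePayment({ id: '1', type: PaymentMethodType.CreditCard, last4: '4242', brand: 'Visa' })
);
// → Visa ending in 4242
```

The payoff is compile-time coverage: a new payment type added to the union forces every switch over it to handle the new case.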

React Component Generation

Full components with TypeScript, hooks, error handling, and loading states:

// Example: AI-generated React component
// Input: "Create a product list component with filtering and pagination"

// ProductList.tsx
import React, { useState, useEffect } from 'react';
import { Product, ProductCategory } from '../types';
import { fetchProducts } from '../api/productApi';

interface ProductListProps {
  initialCategory?: ProductCategory | 'ALL';
  itemsPerPage?: number;
  onProductSelect?: (product: Product) => void;
}

interface ProductFilters {
  category: ProductCategory | 'ALL';
  minPrice: number | null;
  maxPrice: number | null;
  searchQuery: string;
}

/**
 * ProductList displays a paginated, filterable list of products
 */
export const ProductList: React.FC<ProductListProps> = ({
  initialCategory = 'ALL' as ProductCategory | 'ALL',
  itemsPerPage = 10,
  onProductSelect,
}) => {
  // State
  const [products, setProducts] = useState<Product[]>([]);
  const [filteredProducts, setFilteredProducts] = useState<Product[]>([]);
  const [loading, setLoading] = useState<boolean>(true);
  const [error, setError] = useState<string | null>(null);
  const [currentPage, setCurrentPage] = useState<number>(1);
  const [filters, setFilters] = useState<ProductFilters>({
    category: initialCategory,
    minPrice: null,
    maxPrice: null,
    searchQuery: '',
  });

  // Fetch products on component mount
  useEffect(() => {
    const loadProducts = async () => {
      try {
        setLoading(true);
        const data = await fetchProducts();
        setProducts(data);
        setLoading(false);
      } catch (err) {
        setError('Failed to load products. Please try again later.');
        setLoading(false);
      }
    };

    loadProducts();
  }, []);

  // Apply filters when products or filters change
  useEffect(() => {
    let result = [...products];

    // Apply category filter
    if (filters.category !== 'ALL') {
      result = result.filter(product => product.category === filters.category);
    }

    // Apply price filters
    if (filters.minPrice !== null) {
      result = result.filter(product => product.price >= filters.minPrice!);
    }

    if (filters.maxPrice !== null) {
      result = result.filter(product => product.price <= filters.maxPrice!);
    }

    // Apply search query
    if (filters.searchQuery) {
      const query = filters.searchQuery.toLowerCase();
      result = result.filter(
        product =>
          product.name.toLowerCase().includes(query) ||
          product.description.toLowerCase().includes(query)
      );
    }

    setFilteredProducts(result);
    setCurrentPage(1); // Reset to first page when filters change
  }, [products, filters]);

  // Calculate pagination
  const totalPages = Math.ceil(filteredProducts.length / itemsPerPage);
  const startIndex = (currentPage - 1) * itemsPerPage;
  const paginatedProducts = filteredProducts.slice(
    startIndex,
    startIndex + itemsPerPage
  );

  // Handle filter changes
  const handleCategoryChange = (e: React.ChangeEvent<HTMLSelectElement>) => {
    setFilters({
      ...filters,
      category: e.target.value as ProductCategory | 'ALL',
    });
  };

  const handlePriceChange = (
    type: 'min' | 'max',
    e: React.ChangeEvent<HTMLInputElement>
  ) => {
    const value = e.target.value ? Number(e.target.value) : null;
    setFilters({
      ...filters,
      [type === 'min' ? 'minPrice' : 'maxPrice']: value,
    });
  };

  const handleSearchChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    setFilters({
      ...filters,
      searchQuery: e.target.value,
    });
  };

  // Handle pagination
  const handlePageChange = (page: number) => {
    setCurrentPage(page);
  };

  // Render loading state
  if (loading) {
    return <div className="loading">Loading products...</div>;
  }

  // Render error state
  if (error) {
    return <div className="error">{error}</div>;
  }

  return (
    <div className="product-list-container">
      {/* Filters */}
      <div className="product-filters">
        <div className="filter-group">
          <label htmlFor="category-filter">Category:</label>
          <select
            id="category-filter"
            value={filters.category}
            onChange={handleCategoryChange}
          >
            <option value="ALL">All Categories</option>
            {Object.values(ProductCategory).map(category => (
              <option key={category} value={category}>
                {category}
              </option>
            ))}
          </select>
        </div>

        <div className="filter-group">
          <label htmlFor="min-price">Min Price:</label>
          <input
            id="min-price"
            type="number"
            min="0"
            value={filters.minPrice || ''}
            onChange={e => handlePriceChange('min', e)}
          />
        </div>

        <div className="filter-group">
          <label htmlFor="max-price">Max Price:</label>
          <input
            id="max-price"
            type="number"
            min="0"
            value={filters.maxPrice || ''}
            onChange={e => handlePriceChange('max', e)}
          />
        </div>

        <div className="filter-group">
          <label htmlFor="search-query">Search:</label>
          <input
            id="search-query"
            type="text"
            value={filters.searchQuery}
            onChange={handleSearchChange}
            placeholder="Search products..."
          />
        </div>
      </div>

      {/* Product grid */}
      {paginatedProducts.length === 0 ? (
        <div className="no-products">No products found matching your criteria.</div>
      ) : (
        <div className="product-grid">
          {paginatedProducts.map(product => (
            <div
              key={product.id}
              className="product-card"
              onClick={() => onProductSelect && onProductSelect(product)}
            >
              <div className="product-image">
                <img
                  src={product.images[0] || '/placeholder.jpg'}
                  alt={product.name}
                />
              </div>
              <div className="product-info">
                <h3>{product.name}</h3>
                <p className="product-price">${product.price.toFixed(2)}</p>
                <p className="product-category">{product.category}</p>
              </div>
            </div>
          ))}
        </div>
      )}

      {/* Pagination */}
      {totalPages > 1 && (
        <div className="pagination">
          <button
            disabled={currentPage === 1}
            onClick={() => handlePageChange(currentPage - 1)}
          >
            Previous
          </button>
          
          {Array.from({ length: totalPages }, (_, i) => i + 1).map(page => (
            <button
              key={page}
              className={page === currentPage ? 'active' : ''}
              onClick={() => handlePageChange(page)}
            >
              {page}
            </button>
          ))}
          
          <button
            disabled={currentPage === totalPages}
            onClick={() => handlePageChange(currentPage + 1)}
          >
            Next
          </button>
        </div>
      )}
    </div>
  );
};

This is production-quality React with proper TypeScript typing. The one thing I’d change is the client-side filtering — in a real application with thousands of products, filtering and pagination should happen server-side. AI optimizes for the code it can see, not the architecture it can’t.
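If I were moving the filtering server-side, the first step is turning the filter state into query parameters and refetching whenever they change, instead of slicing a client-side array. A minimal sketch; the /products endpoint shape and the buildProductQuery helper are assumptions, not part of the generated component:

```typescript
// Turns filter/pagination state into a query string the server can use.
// Each change to filters or page would drive a refetch of this URL.
interface ProductQuery {
  category?: string;
  minPrice?: number;
  maxPrice?: number;
  search?: string;
  page: number;
  pageSize: number;
}

function buildProductQuery(q: ProductQuery): string {
  const params = new URLSearchParams();
  // 'ALL' is the client-side sentinel, so it's omitted from the request.
  if (q.category && q.category !== 'ALL') params.set('category', q.category);
  if (q.minPrice != null) params.set('minPrice', String(q.minPrice));
  if (q.maxPrice != null) params.set('maxPrice', String(q.maxPrice));
  if (q.search) params.set('search', q.search);
  params.set('page', String(q.page));
  params.set('pageSize', String(q.pageSize));
  return `/products?${params.toString()}`;
}

console.log(buildProductQuery({ category: 'BOOKS', search: 'go', page: 2, pageSize: 10 }));
// → /products?category=BOOKS&search=go&page=2&pageSize=10
```

The server then returns one page plus a total count, and the component's totalPages comes from the response rather than from filteredProducts.length.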

API Client Generation

Type-safe API clients with error handling and authentication:

// Example: AI-generated API client
// Input: "Create a TypeScript API client for a RESTful user service"

// userApi.ts
import axios, { AxiosInstance, AxiosRequestConfig, AxiosResponse } from 'axios';

// In-memory token store. Wiped on reload. Not reachable via
// `document.cookie` or `localStorage`, so an XSS payload has to
// reach this module's closure — higher bar than the default.
let inMemoryAuthToken: string | null = null;
export function setInMemoryAuthToken(token: string | null): void {
  inMemoryAuthToken = token;
}
function getInMemoryAuthToken(): string | null {
  return inMemoryAuthToken;
}

/**
 * User data transfer object
 */
export interface UserDTO {
  id: string;
  email: string;
  firstName: string;
  lastName: string;
  role: UserRole;
  createdAt: string;
  updatedAt: string;
}

/**
 * User role enum
 */
export enum UserRole {
  Admin = 'ADMIN',
  User = 'USER',
}

/**
 * Create user request
 */
export interface CreateUserRequest {
  email: string;
  firstName: string;
  lastName: string;
  password: string;
  role?: UserRole;
}

/**
 * Update user request
 */
export interface UpdateUserRequest {
  firstName?: string;
  lastName?: string;
  role?: UserRole;
}

/**
 * User list response
 */
export interface UserListResponse {
  users: UserDTO[];
  total: number;
  page: number;
  pageSize: number;
  totalPages: number;
}

/**
 * User API client for interacting with the user service
 */
export class UserApiClient {
  private client: AxiosInstance;

  /**
   * Creates a new UserApiClient
   * 
   * @param baseURL - Base URL for the user service API
   * @param config - Additional Axios configuration
   */
  constructor(baseURL: string, config: AxiosRequestConfig = {}) {
    this.client = axios.create({
      baseURL,
      ...config,
    });

    // Add request interceptor for authentication if needed.
    // NOTE: a real implementation should NOT pull the bearer token from
    // localStorage — any XSS on the page reads it and your session is gone.
    // The defaults I'd actually ship: a BFF (Backend-for-Frontend) that keeps
    // the token server-side and sets an HttpOnly, Secure, SameSite=Lax cookie
    // on the browser. If you must keep the token in the SPA, hold it in a
    // module-scoped variable (in-memory only, gone on reload) and pair it
    // with a short TTL + silent refresh. The snippet below uses an in-memory
    // token deliberately so the AI-generated pattern doesn't become the
    // copy-paste default.
    this.client.interceptors.request.use(config => {
      const token = getInMemoryAuthToken();
      if (token) {
        config.headers = config.headers || {};
        config.headers.Authorization = `Bearer ${token}`;
      }
      return config;
    });
  }

  /**
   * Get a list of users with pagination
   * 
   * @param page - Page number (1-based)
   * @param pageSize - Number of items per page
   * @returns Paginated list of users
   */
  async getUsers(page: number = 1, pageSize: number = 10): Promise<UserListResponse> {
    try {
      const response: AxiosResponse<UserListResponse> = await this.client.get('/users', {
        params: { page, pageSize },
      });
      return response.data;
    } catch (error) {
      this.handleError(error, 'Failed to fetch users');
      throw error;
    }
  }

  /**
   * Get a user by ID
   * 
   * @param id - User ID
   * @returns User data
   */
  async getUserById(id: string): Promise<UserDTO> {
    try {
      const response: AxiosResponse<UserDTO> = await this.client.get(`/users/${id}`);
      return response.data;
    } catch (error) {
      this.handleError(error, `Failed to fetch user with ID ${id}`);
      throw error;
    }
  }

  /**
   * Create a new user
   * 
   * @param user - User creation data
   * @returns Created user data
   */
  async createUser(user: CreateUserRequest): Promise<UserDTO> {
    try {
      const response: AxiosResponse<UserDTO> = await this.client.post('/users', user);
      return response.data;
    } catch (error) {
      this.handleError(error, 'Failed to create user');
      throw error;
    }
  }

  /**
   * Update an existing user
   * 
   * @param id - User ID
   * @param updates - User update data
   * @returns Updated user data
   */
  async updateUser(id: string, updates: UpdateUserRequest): Promise<UserDTO> {
    try {
      const response: AxiosResponse<UserDTO> = await this.client.put(`/users/${id}`, updates);
      return response.data;
    } catch (error) {
      this.handleError(error, `Failed to update user with ID ${id}`);
      throw error;
    }
  }

  /**
   * Delete a user
   * 
   * @param id - User ID
   * @returns True if deletion was successful
   */
  async deleteUser(id: string): Promise<boolean> {
    try {
      await this.client.delete(`/users/${id}`);
      return true;
    } catch (error) {
      this.handleError(error, `Failed to delete user with ID ${id}`);
      throw error;
    }
  }

  /**
   * Handle API errors
   * 
   * @param error - Error object
   * @param defaultMessage - Default error message
   */
  private handleError(error: unknown, defaultMessage: string): void {
    // Never log the raw server payload or error object. Browser consoles
    // are piped to Sentry/Datadog/LogRocket, so any PII the backend echoed
    // back (email, token fragments, stack traces) would leak into a third
    // party. Log only status + our own synthesized message.
    if (axios.isAxiosError(error)) {
      const status = error.response?.status ?? 'unknown';

      if (status === 401) {
        console.error(`[api] ${status}: authentication required`);
      } else if (status === 403) {
        console.error(`[api] ${status}: permission denied`);
      } else if (status === 404) {
        console.error(`[api] ${status}: resource not found`);
      } else {
        console.error(`[api] ${status}: ${defaultMessage}`);
      }
    } else {
      console.error(`[api] client error: ${defaultMessage}`);
    }
  }
}

// Create and export a default instance
const userApi = new UserApiClient(process.env.REACT_APP_API_BASE_URL || '/api');
export default userApi;

Where AI Falls Short

Let me be blunt about the limitations, because the hype cycle oversells these tools.

Architecture decisions. AI will generate code for whatever architecture you describe, even a terrible one. It won’t tell you that your event sourcing implementation is overkill for a CRUD app, or that your microservice boundaries are wrong. Architecture requires understanding the business context and trade-offs that AI doesn’t have.

Concurrency bugs. Go’s goroutine leaks, race conditions, and deadlocks are subtle enough that AI generates them routinely. I’ve seen AI produce channel-based code that looks correct but has a goroutine leak under cancellation. Always run the race detector (go test -race) against AI-generated concurrent code; go vet alone won’t catch data races.

Here’s a concrete example I’ve caught in review more than once. Asked for a “concurrent counter with a getter”, a model will happily produce:

// AI-generated. Looks fine. Isn't.
type Counter struct {
    mu    sync.Mutex
    count int
}

func (c *Counter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count++
}

func (c *Counter) Value() int {
    return c.count // <-- unsynchronized read
}

The Inc method is correctly guarded. The Value method reads count without holding the lock, which is a data race under -race and, on some architectures, a torn read. The fix is one line (c.mu.Lock(); defer c.mu.Unlock() inside Value, or switch to atomic.Int64), but you’ll only spot it if you’re specifically looking. The AI isn’t “wrong” in any obvious way — the code compiles, passes naive tests, and reads naturally. That’s exactly why it ships.
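For reference, the atomic variant I usually switch to. A minimal sketch; atomic.Int64 requires Go 1.19+:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Counter backed by an atomic integer: both Inc and Value are race-free
// without a mutex, so there is no lock to forget in the getter.
type Counter struct {
	count atomic.Int64
}

func (c *Counter) Inc()         { c.count.Add(1) }
func (c *Counter) Value() int64 { return c.count.Load() }

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc()
		}()
	}
	wg.Wait()
	fmt.Println(c.Value()) // → 100
}
```

This version passes go test -race by construction, which is exactly the property the mutex-with-unguarded-getter version only appears to have.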

Another pattern I see constantly: swallowed errors in generated retry loops.

// AI-generated retry helper. Spot the bug.
for i := 0; i < 3; i++ {
    resp, err := client.Do(req)
    if err != nil {
        time.Sleep(time.Second)
        continue
    }
    return resp, nil
}
return nil, errors.New("request failed")

The final error from the last attempt is thrown away and replaced with a generic string. When this fails in production, your logs show “request failed” with no cause, no status code, no DNS error, nothing. Return err from the last iteration or wrap it with attempt count.

Code smells in AI output (senior review checklist)

When I review AI-generated code I run through this list before anything else:

  • Unsynchronized access to shared state. Getter methods on mutex-guarded structs, map reads alongside map writes, package-level variables touched from handlers. Run -race even if the tests “pass”.
  • Context dropped or shadowed. ctx is accepted but never passed to the next call, or a new context.Background() appears mid-function. This breaks cancellation and tracing.
  • Error swallowing and generic error strings. if err != nil { return errors.New("failed") }. Always check that the original error is wrapped, not replaced.
  • Resource leaks on the error path. defer rows.Close() missing, stmt.Close() not checked, goroutines spawned without a cancellation path.
  • Off-by-one and boundary conditions. Slice bounds, loop terminators, pagination offsets. AI is bad at boundaries.
  • Security-adjacent primitives rebuilt from scratch. Custom password comparison (timing attack), custom token generation (low entropy), custom SQL string-building (injection). If you see a hand-rolled version of something crypto/subtle or a parameterized query would handle, stop.
  • Mocks that don’t reflect real behavior. Tests that always return success, no error paths exercised, no concurrent access tested.
  • Plausible-looking APIs that don’t exist. Fabricated struct fields, method names that match the other library. Cross-check against the actual package docs.

None of this is “AI is bad”. It’s “AI generates code that looks like code you’ve seen before, and your review habits have to adjust”. Reviewers who skim AI output the same way they skim a junior dev’s PR will ship these bugs.

Security-sensitive code. AI will generate authentication code that looks correct but has subtle vulnerabilities — timing attacks in password comparison, insufficient entropy in token generation, or missing CSRF protection. Always have security-critical code reviewed by a human who specializes in it.
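For the two failure modes named above — timing attacks and low-entropy tokens — the standard library already has the right primitives, and the review question is simply whether the generated code uses them. A sketch with illustrative names (`tokensEqual`, `newToken`); for stored passwords you'd reach for a real KDF like bcrypt rather than comparing raw bytes at all:

```go
package main

import (
	"crypto/rand"
	"crypto/subtle"
	"encoding/hex"
	"fmt"
)

// tokensEqual compares two secrets in constant time. A hand-rolled
// byte-by-byte loop with an early return leaks how many leading bytes
// matched; subtle.ConstantTimeCompare does not.
func tokensEqual(a, b []byte) bool {
	return subtle.ConstantTimeCompare(a, b) == 1
}

// newToken draws from crypto/rand, not math/rand — a math/rand-seeded
// token generator is the classic low-entropy vulnerability.
func newToken(nBytes int) (string, error) {
	buf := make([]byte, nBytes)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return hex.EncodeToString(buf), nil
}

func main() {
	tok, err := newToken(32)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(tok))                                  // 64 (hex chars for 32 random bytes)
	fmt.Println(tokensEqual([]byte(tok), []byte(tok)))     // true
	fmt.Println(tokensEqual([]byte(tok), []byte("guess"))) // false
}
```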

Domain logic. The more business-specific the logic, the less useful AI becomes. It can scaffold the structure, but the actual business rules need to come from you.

Integrating AI Into Your Workflow

Here’s how I actually use these tools, stripped of the vendor marketing:

# AI Coding Assistant Usage Guidelines

## Approved Use Cases

- Generating boilerplate code (repositories, DTOs, etc.)
- Writing unit tests
- Documenting existing code
- Converting between languages or frameworks
- Explaining complex code

## Restricted Use Cases (Require Review)

- Security-sensitive code (authentication, authorization)
- Financial calculations
- Data transformation logic
- Performance-critical sections

## Prohibited Use Cases

- Generating credentials or secrets
- Bypassing security controls
- Implementing undocumented features
- Generating code that violates licensing terms

## Best Practices

1. **Always review generated code** - AI assistants can make mistakes
2. **Test thoroughly** - Don't assume generated code works correctly
3. **Understand before committing** - If you don't understand the code, don't use it
4. **Cite AI assistance** - Add comments indicating AI-generated sections
5. **Iterative refinement** - Use AI as a starting point, then refine

Integrating with Development Workflows

AI coding assistants can be integrated at various stages of the development workflow:

Code Generation

Use AI to scaffold new components:

# Example: Using a CLI tool to generate code with AI
$ ai-code-gen --type "repository" --entity "User" --db "postgres" > user_repository.go

Code Review

Use AI to review pull requests:

# .github/workflows/ai-code-review.yml
name: AI Code Review

on:
  pull_request:
    types: [opened, synchronize]

# Least-privilege GITHUB_TOKEN: read the repo contents, comment on the PR,
# and nothing else. A compromised reviewer action cannot push to branches,
# create releases, or touch Actions secrets beyond what this scope allows.
permissions:
  contents: read
  pull-requests: write

jobs:
  ai-review:
    runs-on: ubuntu-latest
    steps:
      # Pin third-party actions to a full commit SHA, not a tag.
      # Tags are mutable; a rewritten tag on an action that sees your
      # OPENAI_API_KEY and GITHUB_TOKEN is a supply-chain compromise.
      # The trailing comment tracks the tag for Dependabot updates.
      - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
        with:
          fetch-depth: 0

      - name: AI Code Review
        uses: example/ai-code-review-action@<full-40-char-sha> # v1.x.y
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
          review-comment-prefix: "AI Review: "

Two things are worth naming here, because they are the failure modes that actually ship. First, third-party actions see every secret you pass them and can exfiltrate the PR diff to whatever endpoint they want — that is the entire AI-review workflow by design, so the trust decision is load-bearing. Pin by SHA, review the action source before bumping, and set a least-privilege permissions: block on GITHUB_TOKEN (as shown above — contents: read + pull-requests: write, nothing else) so a compromised reviewer can’t push to branches. Second, the PR body and diff are shipped to OpenAI; if your repo has customer data in fixtures, that data just left the building. Scrub fixtures or run the model against a local endpoint for private repos.

Documentation Generation

Use AI to generate and maintain documentation:

# Example: Using a CLI tool to generate documentation
$ ai-docs-gen --source "./pkg/service" --output "./docs/api" --format "markdown"

Measuring Impact

To justify investment in AI coding tools, measure their impact:

  1. Developer Productivity - Time saved on routine tasks
  2. Code Quality - Reduction in bugs and technical debt
  3. Onboarding Time - How quickly new developers become productive
  4. Knowledge Sharing - How effectively knowledge is distributed

Here’s an example of a simple tracking system:

// ai-usage-tracker.ts
interface AIUsageEvent {
  tool: string;
  action: 'suggestion' | 'acceptance' | 'rejection';
  codeType: 'function' | 'class' | 'test' | 'documentation' | 'other';
  language: string;
  lineCount: number;
  timestamp: Date;
  duration: number; // milliseconds from suggestion to decision
}

class AIUsageTracker {
  private events: AIUsageEvent[] = [];
  private apiEndpoint: string;

  constructor(apiEndpoint: string) {
    this.apiEndpoint = apiEndpoint;
    
    // Send events periodically
    setInterval(() => this.flushEvents(), 5 * 60 * 1000); // every 5 minutes
    
    // Send events before page unload
    window.addEventListener('beforeunload', () => this.flushEvents());
  }

  trackEvent(event: Omit<AIUsageEvent, 'timestamp'>) {
    const fullEvent: AIUsageEvent = {
      ...event,
      timestamp: new Date(),
    };
    
    this.events.push(fullEvent);
    
    // If we have a lot of events, flush immediately
    if (this.events.length >= 10) {
      this.flushEvents();
    }
  }

  private async flushEvents() {
    if (this.events.length === 0) return;
    
    const eventsToSend = [...this.events];
    this.events = [];
    
    try {
      await fetch(this.apiEndpoint, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        // keepalive lets the request outlive page teardown, so the
        // beforeunload flush actually completes instead of being
        // aborted mid-flight (payload must stay under ~64KB).
        keepalive: true,
        body: JSON.stringify({ events: eventsToSend }),
      });
    } catch (error) {
      console.error('Failed to send AI usage events:', error);
      // Put the events back in the queue, but cap it so a broken endpoint
      // doesn't grow the buffer unboundedly.
      const MAX_BUFFERED = 1000;
      const merged = [...eventsToSend, ...this.events];
      if (merged.length > MAX_BUFFERED) {
        const dropped = merged.slice(0, merged.length - MAX_BUFFERED);
        // At minimum, write dropped events to a local log so they're not
        // silently lost. In a real system this would be a ring buffer or
        // IndexedDB-backed fallback.
        console.warn(`AI usage tracker: dropping ${dropped.length} events, buffer full`, dropped);
        this.events = merged.slice(-MAX_BUFFERED);
      } else {
        this.events = merged;
      }
    }
  }
}

// Export a singleton instance
export const aiTracker = new AIUsageTracker('/api/ai-usage-tracking');

A note on the endpoint this posts to: /api/ai-usage-tracking needs authentication and rate limiting on the server side. If the endpoint is anonymous, anyone can spam it and poison your metrics; if it’s cookie-authenticated, it needs CSRF protection (SameSite=Lax on the session cookie, or a double-submit token). Rate limit per authenticated principal, not per IP — IP-based limits collapse behind corporate NAT and mobile carriers. The snippet leaves all of that to the server, because the tracker can’t know what auth shape the host app uses. That’s the usual gap: an AI will happily generate the client and never mention that the server side needs hardening too.
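On the analysis side, the raw events reduce to the headline numbers from the list above — acceptance rate, lines not typed by hand, and how long developers deliberate per suggestion. A sketch assuming the same AIUsageEvent shape as the tracker (summarize and UsageSummary are illustrative names):

```typescript
// ai-usage-summary.ts
interface AIUsageEvent {
  tool: string;
  action: 'suggestion' | 'acceptance' | 'rejection';
  codeType: 'function' | 'class' | 'test' | 'documentation' | 'other';
  language: string;
  lineCount: number;
  timestamp: Date;
  duration: number; // milliseconds from suggestion to decision
}

interface UsageSummary {
  acceptanceRate: number;   // accepted / (accepted + rejected)
  acceptedLines: number;    // rough proxy for "lines not typed by hand"
  medianDecisionMs: number; // median deliberation time per decided suggestion
}

function summarize(events: AIUsageEvent[]): UsageSummary {
  const accepted = events.filter((e) => e.action === 'acceptance');
  const rejected = events.filter((e) => e.action === 'rejection');
  const decided = [...accepted, ...rejected];

  // Median, not mean: a handful of walked-away-from-the-keyboard
  // suggestions would otherwise dominate the average.
  const durations = decided.map((e) => e.duration).sort((a, b) => a - b);
  const mid = Math.floor(durations.length / 2);
  const median =
    durations.length === 0
      ? 0
      : durations.length % 2 === 1
        ? durations[mid]
        : (durations[mid - 1] + durations[mid]) / 2;

  return {
    acceptanceRate: decided.length === 0 ? 0 : accepted.length / decided.length,
    acceptedLines: accepted.reduce((sum, e) => sum + e.lineCount, 0),
    medianDecisionMs: median,
  };
}

export { summarize };
export type { AIUsageEvent, UsageSummary };
```

An acceptance rate alone is easy to game; paired with line counts and decision time it at least tells you whether people are rubber-stamping suggestions or actually reading them.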

The Rules I Follow

After a year of daily use, here’s my framework:

  1. Use AI for the 80% that’s predictable. Repositories, DTOs, test scaffolding, API clients, documentation. These are patterns, and AI is great at patterns.
  2. Write the 20% yourself. Architecture decisions, business logic, security, concurrency. These require judgment that AI doesn’t have.
  3. Always review. I read every line of AI-generated code before committing it. Not skimming — actually reading. If you don’t understand a line, don’t ship it.
  4. Run the linters. go vet, staticcheck, eslint with strict rules. AI-generated code passes these most of the time, but when it doesn’t, the failures are instructive.
  5. Don’t stop learning the fundamentals. If you can’t write a repository implementation without AI, you can’t review one. AI makes you faster, not smarter — and the smartness is what matters when things break.

The developers who benefit most from AI tools are the ones who were already productive without them. They use AI to eliminate the tedious parts of their work so they can spend more time on the parts that require actual thought. If you’re junior, use AI to learn — study what it generates, understand why it made those choices, and verify against the documentation. But don’t let it become a crutch.
