03 · artificial intelligence
AI, shipped. Not sold.
Large-language-model pipelines, classical ML, and the unglamorous plumbing that makes both pay back. I read the papers; I also read the bill at the end of the month.
ai.01
LLM integration, done properly
Production RAG pipelines, agent orchestration, tool use. Cost and latency budgets modelled before the prompt is written. OpenAI, Anthropic, Gemini, open-weights on Bedrock and vLLM — whatever the workload rewards.
RAG · agents · function calling · evals
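"Modelled before the prompt is written" can be as simple as a back-of-envelope function. A minimal sketch, assuming illustrative per-million-token prices and per-call latency (not current vendor rates):

```python
# Back-of-envelope cost/latency budget for a sequential LLM call chain.
# All prices and latencies below are hypothetical placeholders.

def budget(calls: int, in_tokens: int, out_tokens: int,
           price_in_per_m: float, price_out_per_m: float,
           latency_per_call_s: float) -> tuple[float, float]:
    """Return (cost in USD, worst-case sequential latency in seconds)."""
    cost = calls * (in_tokens * price_in_per_m
                    + out_tokens * price_out_per_m) / 1_000_000
    return cost, calls * latency_per_call_s

# Example: a 3-call agent loop at a hypothetical $3/$15 per million tokens.
cost, latency = budget(calls=3, in_tokens=4_000, out_tokens=800,
                       price_in_per_m=3.0, price_out_per_m=15.0,
                       latency_per_call_s=2.5)
```

Running the numbers before writing the prompt makes the trade-off concrete: a three-step agent loop triples both the bill and the tail latency.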
ai.02
Classical ML in production
Recommenders, classifiers, scoring models shipped against real traffic. Feature stores, training pipelines, drift monitoring. The unglamorous plumbing that decides whether the model earns its keep once the launch announcement has aged.
feature stores · training pipelines · drift monitoring
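Drift monitoring need not be heavyweight. A minimal sketch using the population stability index (PSI) between a training-time feature distribution and live traffic; the 0.2 alert threshold is a conventional rule of thumb, not a fixed standard:

```python
# Population stability index (PSI) over two already-bucketed
# probability distributions of the same feature.
import math

def psi(expected: list[float], actual: list[float]) -> float:
    eps = 1e-6  # guard against empty buckets
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

train = [0.25, 0.25, 0.25, 0.25]  # distribution at training time
live  = [0.10, 0.20, 0.30, 0.40]  # distribution on live traffic
score = psi(train, live)
drifted = score > 0.2  # common heuristic: >0.2 warrants investigation
```

Run per feature on a schedule; a spike in PSI is often the first sign the model has stopped earning its keep.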
ai.03
Applied to your business, not your pitch deck
A cold eye over where AI actually helps — which is rarely where the hype lands. Internal tooling, ops leverage, and customer workflows tend to pay back long before the customer-facing chatbot does.
opportunity review · build vs buy · cost modelling
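Build-vs-buy usually reduces to a break-even volume. A toy sketch comparing per-request API pricing against a flat-rate self-hosted box; every number here is a hypothetical input, not a quote:

```python
# Toy build-vs-buy break-even: per-request API pricing vs a
# flat-rate self-hosted GPU with a small marginal cost per request.

def breakeven_requests(api_cost_per_req: float, monthly_fixed_cost: float,
                       selfhost_cost_per_req: float) -> float:
    """Monthly request volume above which self-hosting wins."""
    return monthly_fixed_cost / (api_cost_per_req - selfhost_cost_per_req)

# e.g. $0.004/request via API vs a $1,200/month GPU at $0.0005/request
n = breakeven_requests(0.004, 1200.0, 0.0005)
```

Below that volume, the API wins on cost alone; above it, the conversation about self-hosting is worth having.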
ai.04
Governance from the first line
Data lineage, prompt versioning, audit trails, red-team exercises. If the workload touches regulated data, the controls are in place before it ships — not bolted on after a procurement review blocks the rollout.
data lineage · red teaming · audit
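Prompt versioning and audit trails can start as content-addressing: hash each prompt template and log which version served each request. A minimal sketch; the field names are illustrative, not a standard schema:

```python
# Content-address each prompt template by hash, and emit one audit
# record per request recording which version was actually served.
import datetime
import hashlib
import json

def prompt_version(template: str) -> str:
    """Stable 12-hex-char identifier derived from the template text."""
    return hashlib.sha256(template.encode("utf-8")).hexdigest()[:12]

def audit_record(template: str, user: str, model: str) -> str:
    return json.dumps({
        "prompt_version": prompt_version(template),
        "user": user,
        "model": model,
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

v = prompt_version("Summarise the ticket in two sentences.")
```

Any edit to the template yields a new version identifier automatically, so the audit trail survives without anyone remembering to bump a number.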
providers
OpenAI · Anthropic · Google · Mistral · open-weights
inference
Bedrock · vLLM · Ollama · Groq · local GPU
tooling
LangGraph · Vercel AI · LlamaIndex · Weaviate · pgvector
agents
MCP servers · function calling · multi-step tool use