
Dash It All! Is AI Em Dash Addiction Real?
I tested 27 models on Amazon Bedrock for em dash usage. Llama produces zero. Claude and Palmyra can't stop. What does that tell us about how LLMs learn style?


My AI agent has access to my email and Slack. Here are four tactics I use to stop it from sending a career-ending message: system prompts, deterministic hooks, LLM-as-a-judge steering, and Cedar policies at cloud scale.

How to wire together AgentCore Browser, Nova Act, and IAM authentication to build a production browser agent on AWS, with complete working code.

I fine-tuned Qwen2.5-0.5B to always talk like a pirate using LoRA. The first attempt failed because of a system prompt in the training data. Here's what I learned about training data design.

Nine Python agent frameworks, compared honestly. Architecture, code samples, community sentiment, and what actually matters when you're picking one.

Context Hub is a curated documentation registry for coding agents. Here's how to add your API before someone else does.

AI agents, runtime software, and what comes after SaaS.

I joined Romain Jourdan on the AWS Developers Podcast to discuss OpenClaw, async agentic tools, Strands Labs, AI Functions, and the future of software development.

A check-in on the definition of an AI Engineer in 2026: from fine-tuning to agentic systems, tools, RAG, and MCP.

Moving beyond Software 3.0's generate-and-verify loop, AI Functions execute LLM-generated code at runtime, return native Python objects, and use automated post-conditions for continuous verification. This is Software 3.1: where AI doesn't just write code; it runs it.
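The loop described in that blurb (generate code, execute it at runtime, hand back a native Python object, check a post-condition) can be sketched in a few lines. Everything below is a hypothetical illustration: the `ai_function` decorator, the `generate_code` stub, and the `run` entry point are names I made up for this sketch, not the actual AI Functions API.

```python
# Hypothetical sketch of an "AI Function": LLM-generated code is executed
# at runtime, and its result is verified by an automated post-condition.
# The code generator is stubbed out; a real system would call an LLM.

def generate_code(prompt):
    # Stand-in for model output so the sketch is runnable. A real
    # implementation would send `prompt` to an LLM and get source back.
    return "def run(xs):\n    return sorted(xs)"

def ai_function(postcondition):
    def decorator(spec):
        def wrapper(*args):
            # Generate Python source from the function's docstring spec.
            source = generate_code(spec.__doc__)
            namespace = {}
            exec(source, namespace)           # run the generated code
            result = namespace["run"](*args)  # native Python object back
            # Continuous verification: every call is checked.
            assert postcondition(result), "post-condition failed"
            return result
        return wrapper
    return decorator

@ai_function(postcondition=lambda r: r == sorted(r))
def sort_numbers(xs):
    """Return xs sorted in ascending order."""

print(sort_numbers([3, 1, 2]))  # [1, 2, 3]
```

The key difference from a generate-and-verify workflow is that verification happens on every call, against the returned object, rather than once at code-review time.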