Kozlovski warns that the event-streaming industry has become oversubscribed. Too many companies chase a relatively small total addressable market, and even leader Confluent’s stock price has languished between roughly $20 and $30 for years. As cloud revenue growth decelerates, he predicts a wave of consolidation and questions whether real-time event streaming will ever match the scale of data warehousing.
To combat chaotic Microsoft Fabric workspaces filled with notebooks and pipelines, the author proposes a simple, opinionated structure. Numbered stage folders (e.g., 1_ingest/, 2_validate/, 3_transform/) keep ingestion, validation and transformation artefacts organised. Descriptive names and a clear distinction between stage-specific items and shared assets remove the need for pl_/nb_ prefixes and speed up onboarding.
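The numbered-stage convention can be sketched as a small scaffolding script. This is a minimal illustration, not from the post: the stage names match the examples above, while the `_shared` folder for cross-stage assets is a hypothetical addition.

```python
from pathlib import Path

# Stage folders from the post's example; "_shared" is a hypothetical
# home for assets used by more than one stage.
STAGES = ["1_ingest", "2_validate", "3_transform"]
SHARED = ["_shared"]

def scaffold(workspace: Path) -> None:
    """Create the numbered stage folders inside a workspace root."""
    for name in STAGES + SHARED:
        (workspace / name).mkdir(parents=True, exist_ok=True)
```

The numeric prefixes make the pipeline order visible in any file listing, which is what removes the need for pl_/nb_ naming prefixes on individual items.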
Benchmarks comparing Onehouse’s Quanton runtime with Amazon EMR reveal that EMR 7.12 delivers a 32% performance bump but still trails in price/performance: Quanton’s lakehouse-optimised Spark shows roughly 2.5× better price/performance on 10 TB TPC‑DS workloads. While the new EMR release narrows the gap, the post notes that EMR still imposes significant operational overhead and treats lakehouse table formats as external plug-ins.
This opening instalment argues that analytics translators remain vital even in the GenAI era. Venegas introduces the AI Solution framework (ideate–experiment–industrialize) and explains why analytics translators are more than just project managers: they bridge business and data teams, organise ideation and prioritisation exercises, and bring human creativity that large language models can’t match.
Müller explains how to combine more than 40 AWS news feeds via an unofficial AWS News MCP server to give AI tools a single endpoint. The unauthenticated server exposes functions such as getLatestNews, searchNews and getNewsStats, and can be plugged into Claude, Cursor, VS Code, Amazon Q and other AI assistants for real-time, structured access to AWS updates.
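The tool names above (getLatestNews, searchNews, getNewsStats) come from the post; everything else in the sketch below is an assumption. MCP clients invoke such tools via JSON-RPC 2.0 using the standard `tools/call` method, so a request to this server could be framed roughly like this:

```python
import json

def mcp_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 request for an MCP tool invocation.

    MCP frames tool calls as method "tools/call" with the tool name
    and its arguments in params. The argument key "query" below is
    hypothetical; the server's actual parameter names may differ.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Example: search the aggregated AWS feeds for Lambda-related items.
payload = mcp_tool_call("searchNews", {"query": "Lambda"})
print(json.dumps(payload, indent=2))
```

In practice an assistant such as Claude or Cursor builds and sends these frames itself; the sketch only shows what crosses the wire when one of the server's tools is invoked.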
On day one of Microsoft Ignite 2025, the Power BI team introduced powerbi‑modeling‑mcp, their first public Model Context Protocol (MCP) server. Built on the same APIs (TOM for metadata and ADOMD.NET for querying) that underpin Analysis Services and Power BI, it allows users to create and maintain models using natural language. The semantic interface supports synonyms, batch updates and transaction control. New capabilities include multi‑model orchestration, headless TMDL editing and cross‑platform independence.
FinOps engineer Deana Solis joins the show to discuss communication, avoiding bias in AI models, and building a career with purpose. The conversation emphasises empathy, continuous learning and cost-awareness as core skills for modern DevOps roles.
A comprehensive tutorial on constructing a self-healing, agentic data pipeline. CodeWithYu demonstrates multi-agent orchestration for extraction, transformation and monitoring, highlighting automated error handling and continuous optimisation.
TRAE positions itself as a “real AI engineer” that can understand requirements, execute tasks and deliver software solutions, effectively augmenting development teams and shortening delivery times.
Pylar offers governed data access for AI agents by converting SQL views into Model Context Protocol (MCP) tools, giving agents safe, controlled access to structured data stacks.
Warp’s terminal AI adds major upgrades, including full-terminal use for running interactive CLI programs, REPLs, debuggers, and database queries with transparent step-by-step execution; a /plan command for collaborative execution-plan review; interactive code review with diffs, inline comments, and agent-applied changes under human oversight; and first-party integrations with Slack, Linear, and GitHub Actions that bring agents into team workflows with real-time visibility and persisted records.
Materialize experts Pranshu Maheshwari and Sid Sawhney unpack v26 enhancements, showing how to handle upstream schema changes without downtime and highlighting updates that boost security, efficiency and production readiness. Includes live demo and Q&A.
A practical workshop on stabilizing ML pipelines, packaging models, managing dependencies and reducing drift. Includes demos on automating model lifecycle workflows with artifact repositories.