<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Machine Learning | Muhammad Hasan Ferdous</title><link>https://mhasanferdous.github.io/tags/machine-learning/</link><atom:link href="https://mhasanferdous.github.io/tags/machine-learning/index.xml" rel="self" type="application/rss+xml"/><description>Machine Learning</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 20 May 2024 00:00:00 +0000</lastBuildDate><image><url>https://mhasanferdous.github.io/media/icon_hu_982c5d63a71b2961.png</url><title>Machine Learning</title><link>https://mhasanferdous.github.io/tags/machine-learning/</link></image><item><title>Beyond Next-Token Prediction: Why Agentic AI Needs Causal Guardrails</title><link>https://mhasanferdous.github.io/blog/data-visualization/</link><pubDate>Mon, 20 May 2024 00:00:00 +0000</pubDate><guid>https://mhasanferdous.github.io/blog/data-visualization/</guid><description>&lt;p&gt;The AI industry is currently undergoing a massive shift: we are moving from &lt;strong&gt;Generative AI&lt;/strong&gt; (models that talk) to &lt;strong&gt;Agentic AI&lt;/strong&gt; (models that act). We are empowering LLMs to browse the web, execute code, and manage complex workflows. However, as we grant AI more &amp;ldquo;agency,&amp;rdquo; we are hitting a fundamental wall. Most current agents are brilliant at pattern matching but completely blind to &lt;strong&gt;causation&lt;/strong&gt;.&lt;/p&gt;
&lt;h3 id="the-intervention-gap"&gt;The Intervention Gap&lt;/h3&gt;
&lt;p&gt;Current agents operate primarily on the first rung of Judea Pearl’s “Ladder of Causation”: &lt;strong&gt;Association&lt;/strong&gt;. They see that “A” often follows “B” and assume they are related. But an agent doesn’t just observe; it &lt;strong&gt;intervenes&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;When an agent takes an action, it changes the system. To act reliably, it must therefore tell a spurious correlation apart from a true causal link. Without that distinction, agents fall into “hallucination loops”: repeating failed actions because they do not understand the mechanism behind the failure.&lt;/p&gt;
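&lt;p&gt;The gap between the two rungs fits in a few lines of simulation. The sketch below is a toy scenario, not any real system: the variable names (server load, alarms, latency) are invented for illustration, and it assumes NumPy is available. It shows a strong observational association that vanishes the moment an agent intervenes.&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder: server load drives both the alarm count and the latency.
load = rng.normal(size=n)
alarms = load + rng.normal(scale=0.5, size=n)
latency = load + rng.normal(scale=0.5, size=n)

# Rung 1 (association): regressing latency on alarms suggests a strong link.
slope = np.cov(alarms, latency)[0, 1] / np.var(alarms)
print(f"observational slope: {slope:.2f}")   # roughly 0.8

# Rung 2 (intervention): do(alarms := 0) severs the load-to-alarms arrow.
# Latency is generated by load alone, so it does not move at all.
latency_after = load + rng.normal(scale=0.5, size=n)
shift = np.mean(latency_after) - np.mean(latency)
print(f"latency shift under do(alarms := 0): {shift:.2f}")   # roughly 0.0
```

&lt;p&gt;Silencing the alarms does nothing to latency, because the association ran through the hidden load variable. This is precisely the kind of intervention an association-only agent cannot anticipate.&lt;/p&gt;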
&lt;h3 id="solving-the-messy-data-problem"&gt;Solving the &amp;ldquo;Messy Data&amp;rdquo; Problem&lt;/h3&gt;
&lt;p&gt;Most real-world data, especially in high-stakes fields like healthcare and climate science, is “messy.” In my research, I’ve focused on building frameworks like &lt;strong&gt;CDANs&lt;/strong&gt; and &lt;strong&gt;DCD&lt;/strong&gt; (Decomposition-based Causal Discovery) that perform temporal causal discovery even when the data is autocorrelated and non-stationary, shifting over time.&lt;/p&gt;
&lt;p&gt;By applying these methods to challenges like &lt;strong&gt;Arctic Sea Ice Prediction&lt;/strong&gt;, we’ve shown that causal models achieve significantly higher robustness under distribution shifts compared to purely correlation-based deep learning.&lt;/p&gt;
&lt;h3 id="the-path-to-robust-autonomy"&gt;The Path to Robust Autonomy&lt;/h3&gt;
&lt;p&gt;The future of Agentic AI isn&amp;rsquo;t just “more parameters.” It is the integration of &lt;strong&gt;Structural Causal Models (SCMs)&lt;/strong&gt; with the reasoning flexibility of LLMs. We need agents that don&amp;rsquo;t just ask &amp;ldquo;What comes next?&amp;rdquo; but &amp;ldquo;If I change this, what will happen—and why?&amp;rdquo;&lt;/p&gt;</description></item><item><title>Why Large Language Models Fail When the World Changes And Why Causality Is No Longer Optional</title><link>https://mhasanferdous.github.io/blog/project-management/</link><pubDate>Mon, 20 May 2024 00:00:00 +0000</pubDate><guid>https://mhasanferdous.github.io/blog/project-management/</guid><description>&lt;h2 id="from-fluency-to-fragility"&gt;From Fluency to Fragility&lt;/h2&gt;
&lt;p&gt;Large language models have become astonishingly fluent. They draft legal briefs, debug code, compose poetry, and generate explanations that feel reasoned and deliberate. This fluency, however, rests on a deceptively simple foundation: next-token prediction. Training reduces to minimizing prediction error over massive corpora of historical text. When the future resembles the past, this strategy works remarkably well. When it does not, the cracks begin to show.&lt;/p&gt;
&lt;p&gt;The widespread success of LLMs has encouraged a dangerous conflation of predictive accuracy with understanding. These systems do not learn mechanisms. They learn statistical regularities. They are powerful curve-fitting machines optimized to reproduce patterns that once minimized loss. As long as the environment remains stable, the distinction appears academic. The moment the environment shifts, it becomes decisive.&lt;/p&gt;
&lt;h2 id="prediction-without-awareness-of-time"&gt;Prediction Without Awareness of Time&lt;/h2&gt;
&lt;p&gt;A fundamental limitation of LLMs is their inability to recognize when their internal knowledge has gone stale. Models trained on historical data silently assume that political regimes, medical guidelines, and economic conditions persist. When these assumptions fail, the model does not update its beliefs. It continues to emit sentences that were once optimal under the training objective, not sentences that reflect current reality.&lt;/p&gt;
&lt;p&gt;In high-stakes domains, this failure mode is dangerous rather than inconvenient. Outdated medical advice, mispriced financial risk, or obsolete policy recommendations are delivered with confidence and coherence. Because the model has no representation of temporal causality, it cannot signal uncertainty when the world changes.&lt;/p&gt;
&lt;p&gt;This exposes a deeper issue. LLMs treat history as a static pool of correlations rather than as a dynamic process shaped by interventions, feedback, and structural change.&lt;/p&gt;
&lt;h2 id="spurious-correlations-and-the-cost-of-convenience"&gt;Spurious Correlations and the Cost of Convenience&lt;/h2&gt;
&lt;p&gt;During training, any regularity that improves predictive accuracy is retained, regardless of whether it reflects a causal relationship. If a dataset contains spurious associations, the model has no incentive to discard them. Doing so would increase loss.&lt;/p&gt;
&lt;p&gt;This becomes dangerous when surface cues stand in for causal signals. Names, demographics, stylistic features, or proxies may correlate with outcomes in historical data but fail under intervention. When context changes, the spurious cue persists. The model cannot ask whether the association would survive conditioning or manipulation because it has no representation of confounding.&lt;/p&gt;
&lt;p&gt;The result is a system that can be statistically impressive while remaining scientifically unsound. The training objective rewards correlation, not explanation.&lt;/p&gt;
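&lt;p&gt;A minimal sketch of this failure mode, assuming NumPy and entirely synthetic features invented for illustration (&lt;code&gt;x_causal&lt;/code&gt; carries the true mechanism, &lt;code&gt;x_spurious&lt;/code&gt; merely co-occurs with the label): an ordinary least-squares fit leans on the cleaner spurious cue, and its accuracy collapses once the co-occurrence pattern reverses.&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, p_spurious):
    """Label depends causally on x_causal; x_spurious merely co-occurs with it."""
    y = rng.integers(0, 2, n)
    x_causal = y + rng.normal(scale=0.8, size=n)           # true but noisy mechanism
    agree = np.less(rng.random(n), p_spurious)             # spurious co-occurrence rate
    x_spurious = np.where(agree, y, 1 - y) + rng.normal(scale=0.1, size=n)
    return np.column_stack([x_causal, x_spurious]), y

# Training data: the spurious cue agrees with the label 95% of the time and is
# far less noisy, so least squares puts most of its weight on it.
X_tr, y_tr = make_data(20_000, 0.95)
w, *_ = np.linalg.lstsq(np.column_stack([X_tr, np.ones(len(y_tr))]), y_tr, rcond=None)

def accuracy(X, y):
    scores = np.column_stack([X, np.ones(len(y))]) @ w
    preds = np.rint(np.clip(scores, 0, 1))
    return np.mean(preds == y)

X_te, y_te = make_data(20_000, 0.05)   # shift: the cue now points the other way
print(f"train accuracy:   {accuracy(X_tr, y_tr):.2f}")
print(f"shifted accuracy: {accuracy(X_te, y_te):.2f}")
```

&lt;p&gt;Nothing about the loss objected to the spurious weight; discarding it would have hurt training accuracy. Only the intervention-like shift in the test environment exposes it.&lt;/p&gt;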
&lt;h2 id="the-absence-of-counterfactual-reasoning"&gt;The Absence of Counterfactual Reasoning&lt;/h2&gt;
&lt;p&gt;Perhaps the most fundamental limitation of LLMs is their inability to reason counterfactually. They can generate statements that resemble expert reasoning because similar statements appear in training data. What they cannot do is evaluate alternative actions that were not taken.&lt;/p&gt;
&lt;p&gt;Without an explicit causal graph, the model cannot answer questions such as “What would happen if we did not intervene?” or “What would change if one variable were manipulated while others were held fixed?” It can only reproduce narratives about actions that historically occurred.&lt;/p&gt;
&lt;p&gt;This matters because decision-making is fundamentally about intervention. Prediction describes what tends to happen. Causation explains what will happen if we act.&lt;/p&gt;
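&lt;p&gt;An explicit causal model makes these interventional questions answerable. The toy structural causal model below, assuming NumPy and hypothetical clinical variable names, is deliberately confounded so that the observed association and the &lt;code&gt;do()&lt;/code&gt; query disagree in sign.&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

def scm(treatment=None):
    """Linear SCM: severity causes both treatment and recovery;
    treatment itself improves recovery by +1."""
    severity = rng.normal(size=n)
    if treatment is None:   # observational regime: sicker patients get treated
        treatment = np.rint(np.clip(severity + rng.normal(scale=0.5, size=n), 0, 1))
    recovery = 1.0 * treatment - 2.0 * severity + rng.normal(scale=0.5, size=n)
    return treatment, recovery

# Observational data: treatment *looks* harmful, since treated patients are sicker.
t_obs, r_obs = scm()
naive = r_obs[t_obs == 1].mean() - r_obs[t_obs == 0].mean()

# Interventional queries: do(treatment := 1) vs do(treatment := 0) sever the
# severity-to-treatment arrow and recover the true +1 effect.
_, r_do1 = scm(treatment=np.ones(n))
_, r_do0 = scm(treatment=np.zeros(n))
causal = r_do1.mean() - r_do0.mean()

print(f"naive observed difference: {naive:.2f}")   # negative: Simpson-style reversal
print(f"interventional effect:     {causal:.2f}")  # roughly +1.0
```

&lt;p&gt;A purely predictive model trained on the observational rows would confidently report the negative number. The graph is what licenses the second computation.&lt;/p&gt;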
&lt;h2 id="when-prediction-meets-control"&gt;When Prediction Meets Control&lt;/h2&gt;
&lt;p&gt;These limitations converge into a single structural failure. Systems optimized for prediction are not optimized for control. When deployed as chatbots, the harm is often limited. When deployed as clinical assistants, policy advisors, or autonomous agents, the harm propagates into the world and feeds back into future data.&lt;/p&gt;
&lt;p&gt;As LLM outputs increasingly shape their environments, feedback loops emerge. The model’s own predictions influence behavior, which then becomes training data, reinforcing the original bias. Without causal awareness, the system cannot recognize that it is shaping the very distribution it claims to model.&lt;/p&gt;
&lt;p&gt;This is how brittle systems become entrenched.&lt;/p&gt;
&lt;h2 id="why-causality-changes-the-equation"&gt;Why Causality Changes the Equation&lt;/h2&gt;
&lt;p&gt;Causal models are designed to survive distribution shift. They encode invariant mechanisms rather than contingent frequencies. A causal relationship remains valid even when policies change, instruments are replaced, or regimes evolve.&lt;/p&gt;
&lt;p&gt;Causality provides the machinery required for reasoning under intervention. It distinguishes correlation from mechanism and enables counterfactual reasoning. These are precisely the capabilities predictive models lack.&lt;/p&gt;
&lt;p&gt;Integrating causality into language models is not about replacing them. It is about grounding their outputs in structures that remain valid when the world changes.&lt;/p&gt;
&lt;h2 id="integrating-causality-with-language-models"&gt;Integrating Causality with Language Models&lt;/h2&gt;
&lt;p&gt;Several complementary research directions are emerging.&lt;/p&gt;
&lt;p&gt;Representation-focused approaches embed causal structure directly into model architecture, constraining information flow to respect directionality and conditional independence. Estimation-focused approaches train models to remain invariant across families of interventional distributions rather than a single observational one. Generation-focused approaches enforce causal consistency during decoding, either through symbolic checks or differentiable penalties.&lt;/p&gt;
&lt;p&gt;These methods are computationally expensive and still experimental. Yet even modest causal scaffolding has been shown to dramatically improve robustness under distribution shift.&lt;/p&gt;
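&lt;p&gt;The estimation-focused idea can be sketched in miniature. In the synthetic family of environments below (NumPy assumed, all names illustrative), the coefficient attached to the true mechanism stays fixed across environments while the spurious coefficient drifts; that stability is exactly the signal invariance-based training exploits.&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(3)

def environment(n, strength):
    """One training environment. The causal mechanism y = 2*x_c + noise is the
    same everywhere; the spurious feature's tie to y varies per environment."""
    x_c = rng.normal(size=n)
    y = 2.0 * x_c + rng.normal(scale=0.5, size=n)
    x_s = strength * y + rng.normal(scale=0.5, size=n)
    return x_c, x_s, y

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

causal_slopes, spurious_slopes = [], []
for strength in (0.5, 1.0, 2.0):
    x_c, x_s, y = environment(50_000, strength)
    causal_slopes.append(slope(x_c, y))
    spurious_slopes.append(slope(x_s, y))

# The causal coefficient is the one that stays put across environments.
print([round(s, 2) for s in causal_slopes])     # roughly [2.0, 2.0, 2.0]
print([round(s, 2) for s in spurious_slopes])   # drifts with the environment
```

&lt;p&gt;Training for invariance across such families, rather than for accuracy on a single pooled distribution, is what pushes a model toward the mechanism and away from the contingent cue.&lt;/p&gt;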
&lt;h2 id="from-research-prototype-to-practice"&gt;From Research Prototype to Practice&lt;/h2&gt;
&lt;p&gt;Until fully integrated causal-language systems mature, practitioners can adopt pragmatic safeguards. Identify key variables and interventions. Encode them in explicit causal graphs. Condition model outputs on those assumptions. Treat violations not as edge cases, but as signals that the system is operating outside its domain of validity.&lt;/p&gt;
&lt;p&gt;In this regime, the model ceases to be an oracle. It becomes a conversational partner whose claims are explicitly conditional. This is not a weakness. It is a return to scientific discipline.&lt;/p&gt;
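&lt;p&gt;One way to sketch such a safeguard, with a hypothetical adjacency list standing in for a real domain graph: encode the assumed edges explicitly, and flag any model claim that has no directed causal path behind it as out-of-domain rather than acting on it.&lt;/p&gt;

```python
# A minimal guardrail sketch: the edge list is an explicit, auditable assumption,
# not something learned. The variable names are purely illustrative.
CAUSAL_EDGES = {
    "price": ["demand"],
    "marketing": ["demand"],
    "demand": ["revenue"],
}

def has_causal_path(graph, cause, effect, seen=None):
    """Directed reachability: does the encoded graph allow cause to influence effect?"""
    if cause == effect:
        return True
    seen = seen or set()
    seen.add(cause)
    return any(has_causal_path(graph, nxt, effect, seen)
               for nxt in graph.get(cause, []) if nxt not in seen)

def vet_claim(cause, effect):
    """Treat unsupported claims as out-of-domain signals, not edge cases."""
    if has_causal_path(CAUSAL_EDGES, cause, effect):
        return f"claim '{cause} affects {effect}' is consistent with the graph"
    return f"claim '{cause} affects {effect}' is OUTSIDE the encoded model: escalate"

print(vet_claim("price", "revenue"))
print(vet_claim("revenue", "price"))
```

&lt;p&gt;A production system would use a proper causal inference library and a reviewed graph, but the discipline is the same: the model's claims are checked against stated assumptions instead of being taken at face value.&lt;/p&gt;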
&lt;h2 id="from-mirrors-to-windows"&gt;From Mirrors to Windows&lt;/h2&gt;
&lt;p&gt;LLMs are powerful mirrors of the past. Without causality, they remain confined to reflection. With causality, they begin to function as windows into possible futures. The glass is not yet clear, but the outline is visible.&lt;/p&gt;
&lt;p&gt;If AI systems are to guide decisions rather than merely narrate history, causality is not an enhancement. It is a prerequisite.&lt;/p&gt;</description></item><item><title>CDANs: Temporal Causal Discovery from Autocorrelated and Non-Stationary Time Series Data</title><link>https://mhasanferdous.github.io/publications/cdans/</link><pubDate>Tue, 01 Aug 2023 00:00:00 +0000</pubDate><guid>https://mhasanferdous.github.io/publications/cdans/</guid><description/></item></channel></rss>