Functional Neurological Disorders: The Brain as a Prediction Machine
1.0 The Predictive Brain: Perception as Inference
The last few decades have witnessed a paradigm shift in our understanding of brain function. Traditional models often depicted the brain as a passive, stimulus-response organ that simply processes sensory information as it arrives from the outside world. The Bayesian brain hypothesis (the idea that the brain works like a statistician, constantly weighing what it expects against what it actually senses) inverts this classical view, positing that the brain is not a passive receiver of information but a proactive prediction engine (a mechanism that is always trying to guess what sensory information it will receive next). Its fundamental task is not to react to the world, but to predict it. This forward-looking, proactive stance is not merely a feature of high-level cognition but a fundamental principle that governs every aspect of perception, action, and learning.
At the core of this framework is the concept of a generative model (an internal, working hypothesis about how the world causes the body's sensations). The brain is thought to embody a probabilistic model of how its sensations are caused (Parr, Pezzulo, and Friston, 2022). This internal model is not a static representation but a dynamic set of beliefs about the world that the brain uses to predict sensory data before it even arrives. For example, when reaching for a coffee cup, your brain predicts the specific tactile and thermal sensations of warm ceramic against your fingertips. This prediction prepares you for the interaction, and the subsequent sensory input serves primarily to update and refine this ongoing model.
The process by which the brain updates its generative model in light of new evidence is known as Bayesian inference. This is an optimal, probabilistic method for combining pre-existing beliefs with new sensory data to form an updated perception of the world. This process can be broken down into three key components:
Prior Beliefs (what the brain already expects or assumes to be true): These are the brain's pre-existing expectations or hypotheses about the causes of sensations, formed before receiving new sensory information (Parr, Pezzulo, and Friston, 2022). They represent the accumulated knowledge and context the brain brings to any given situation.
Likelihood (Sensory Evidence) (the new information coming from the senses): This is the incoming stream of sensory data (e.g., light hitting the retina, sound waves reaching the ear). This data provides evidence that either confirms or contradicts the brain's prior beliefs about the state of the world (Parr, Pezzulo, and Friston, 2022).
Posterior Beliefs (the final, updated conclusion the brain reaches about the world, or the "percept"): This is the brain's updated belief—what we experience as a "percept"—which is formed by integrating the prior beliefs with the likelihood of the sensory evidence, in accordance with Bayes' rule (Parr, Pezzulo, and Friston, 2022). The posterior is essentially a balanced compromise between what the brain expected to sense and what it actually sensed.
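The three components above can be sketched numerically. In the simplest Gaussian case, the posterior mean is a precision-weighted average of the prior mean and the sensory observation. The coffee-cup temperatures and precision values below are illustrative assumptions, not figures from the text:

```python
def posterior(prior_mean, prior_precision, obs, obs_precision):
    """Combine a Gaussian prior with Gaussian sensory evidence.

    Precision is the inverse of variance, so the posterior mean is a
    precision-weighted compromise between expectation and evidence.
    """
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean
                 + obs_precision * obs) / post_precision
    return post_mean, post_precision

# Prior belief: the cup is hot (~60 C), held with moderate confidence.
# Sensory evidence: fingertips report ~40 C, with higher reliability.
mean, precision = posterior(prior_mean=60.0, prior_precision=1.0,
                            obs=40.0, obs_precision=3.0)
print(mean)  # 45.0 -- pulled toward the more precise source
```

The resulting percept sits between expectation and evidence, weighted toward whichever stream carries more precision, which is exactly the "balanced compromise" described above.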
When there is a mismatch between the brain's predictions and the actual sensory input, a prediction error (the difference between what the brain expected and what it actually sensed, which signals "surprise") is generated (Parr, Pezzulo, and Friston, 2022). The brain's imperative is to minimise this prediction error, or surprise, over the long run. Formally, this is achieved by minimising a tractable proxy for surprise known as 'variational free energy' (a mathematical quantity that acts as the brain's internal measure of its own surprise) (Friston, 2010; Parr, Pezzulo, and Friston, 2022). The brain does this by continuously updating its internal model to make better predictions. This is the driving force behind both learning and perception.
However, the brain has another powerful way to minimise prediction error: it can actively change the world to make sensory inputs conform to its predictions.
2.0 Active Inference: Perception and Action as Two Sides of the Same Coin
Active Inference (the theory that both perception and action are strategies used by the brain to reduce prediction error) extends the Bayesian brain hypothesis by unifying perception and action under the single imperative of minimising prediction error. In this framework, action is not simply a response to a perception; rather, perception and action are two complementary strategies for resolving the discrepancy between the brain's model of the world and the sensory evidence it receives. The brain constantly strives to reduce "surprise" by making the world more predictable, and it can do so by changing its mind or by changing the world. The brain employs a dual mechanism for minimising prediction error, treating perception and action as two sides of the same coin:
Perceptual Inference (the process of changing the brain's mind/beliefs to match reality): This involves changing the brain's internal model to better align with sensory reality. When prediction errors signal a mismatch, the brain updates its beliefs and revises its predictions for the future. This belief-updating process is what we traditionally understand as perception (Parr, Pezzulo, and Friston, 2022).
Active Inference: This involves acting upon the world to make sensory inputs conform to the brain's predictions. Instead of changing its beliefs, the brain can change its sensory samples through action, thereby fulfilling its own top-down predictions (Parr, Pezzulo, and Friston, 2022). If you feel colder than you expect, you can either update your belief about the room's temperature (perception) or put on a sweater to make your bodily sensations match your preference for warmth (action). Similarly, if you see something surprising in your periphery, you can either revise your belief about what it is (perceptual inference) or you can saccade your eyes toward it to gather more precise data, making the sensory input conform to a prediction of a clearly seen object (active inference).
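The sweater example above can be reduced to a single scalar: the same prediction error can be cancelled either by revising the belief (perceptual inference) or by changing the world (active inference). The temperature values here are illustrative:

```python
def prediction_error(belief, sensed):
    """Signed mismatch between what is sensed and what is predicted."""
    return sensed - belief

# Initial state: the brain predicts a comfortable 22 C but senses 16 C.
belief, world = 22.0, 16.0
err = prediction_error(belief, world)     # -6.0

# Route 1 -- perception: update the belief to match the evidence.
belief_updated = belief + err             # 16.0; error is now zero
assert prediction_error(belief_updated, world) == 0.0

# Route 2 -- action: change the world (put on a sweater) so the
# sensed state matches the original prediction.
world_after_action = world - err          # 22.0; error is also zero
assert prediction_error(belief, world_after_action) == 0.0
```

Both routes leave the system with zero prediction error; which one the brain takes depends on the relative precision of its beliefs and its evidence, as discussed in the next sections.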
This perspective offers a radical reimagining of motor control. Under Active Inference, the brain does not issue motor "commands" to muscles. Instead, it generates top-down predictions of the proprioceptive sensations (the sense of body position and movement) that would accompany a desired action (Adams, Shipp, and Friston, 2013). For instance, to lift your arm, the motor cortex sends a prediction of the sensory consequences of a lifted arm down to the targets of descending motor pathways that engage spinal reflex arcs. The spinal reflex arcs then act to minimise the prediction error between this desired, predicted state and the current sensory feedback from the limb, thereby causing the muscles to contract and the arm to rise until the prediction is fulfilled.
For this process to work, a critical mechanism is required: sensory attenuation (the temporary 'muting' or down-weighting of sensory feedback from the body during an action). For a willed movement to occur, the brain must temporarily reduce the weight, or precision, it assigns to proprioceptive prediction errors. If it did not, any discrepancy between the current limb position (e.g., arm at rest) and the predicted limb position (e.g., arm raising) would simply be resolved through perceptual inference—the brain would "correct" its prediction back to "arm at rest," and no movement would ever be initiated (Brown, Adams, Parees, Edwards, and Friston, 2013; Parr, Pezzulo, and Friston, 2022). By attenuating, or momentarily ignoring, the sensory feedback that contradicts the desired movement, the brain allows action to resolve the prediction error and bring reality into line with its goals.
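The role of sensory attenuation can be demonstrated with a toy simulation loosely following the logic of Brown et al. (2013). A "reflex arc" moves the limb toward the believed position, while the belief itself is a precision-weighted blend of the motor prediction ("arm raised") and proprioceptive evidence from the still limb. All parameter values are illustrative assumptions, not fitted quantities:

```python
def simulate(sensory_precision, steps=50, rate=0.2):
    """Return the final limb position after a movement attempt."""
    arm = 0.0                  # actual limb position (0 = at rest)
    goal = 1.0                 # predicted/desired position (arm raised)
    goal_precision = 1.0
    for _ in range(steps):
        # Belief: precision-weighted blend of the motor prediction and
        # the proprioceptive evidence from the (initially still) limb.
        belief = (goal_precision * goal + sensory_precision * arm) \
                 / (goal_precision + sensory_precision)
        # The "reflex arc" moves the limb to cancel the error between
        # the predicted position and the actual one.
        arm += rate * (belief - arm)
    return arm

moved = simulate(sensory_precision=0.1)    # attenuated feedback
stuck = simulate(sensory_precision=100.0)  # unattenuated feedback
```

With attenuated (low-precision) proprioceptive feedback the limb converges on the predicted raised position; with full-precision feedback the belief collapses back toward "arm at rest" and the limb barely moves within the same time window, mirroring the failure of movement initiation described above.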
The delicate balance between trusting internal predictions versus trusting incoming sensory evidence is therefore actively managed by a key computational quantity known as precision (the brain's estimate of the reliability or confidence it should place in any piece of information).
3.0 Precision: The Currency of Belief and Attention
For the brain to effectively navigate the world, it must constantly weigh its own predictions against the stream of incoming sensory data. To do this, it needs to estimate the reliability or confidence of each source of information. In the Active Inference framework, this estimate of reliability is termed precision (confidence). Precision is the computational currency that determines the influence of different information streams on the brain's beliefs and, ultimately, on perception and action.
Formally, precision is defined as the inverse of variance, or uncertainty. In simpler terms, high precision corresponds to high confidence and low uncertainty, signalling that an information stream is reliable. Conversely, low precision signifies low confidence and high uncertainty, indicating that an information stream should be down-weighted or trusted less (Parr, Pezzulo, and Friston, 2022).
Precision plays a dual role in modulating the flow of information, shaping how the brain balances its internal model with external reality:
Precision of Priors (when the brain is highly confident in its existing beliefs/expectations): When the brain assigns high precision to its prior beliefs, top-down predictions exert a powerful influence over perception. This makes the agent's experience more reliant on its expectations and less sensitive to contradictory sensory evidence. The brain, in effect, trusts its own model more than the data coming from its senses.
Precision of Sensory Evidence (when the brain highly trusts the new information coming from the senses): When high precision is assigned to sensory evidence (i.e., prediction errors), bottom-up sensory data strongly drives belief updating. This allows the agent to be highly responsive to changes in the environment and to rapidly correct its internal model when its predictions are wrong.
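This dual role can be made concrete with the same precision-weighted update used in Bayesian inference: identical sensory evidence moves the belief a lot or a little depending on which stream is trusted more. The numbers are illustrative:

```python
def update(prior_mean, prior_precision, obs, obs_precision):
    """Posterior mean as a precision-weighted average of prior and data."""
    total = prior_precision + obs_precision
    return (prior_precision * prior_mean + obs_precision * obs) / total

prior, evidence = 0.0, 10.0

# High-precision prior: the percept stays close to the expectation.
dominated_by_prior = update(prior, prior_precision=9.0,
                            obs=evidence, obs_precision=1.0)   # 1.0

# High-precision evidence: the percept tracks the incoming data.
dominated_by_data = update(prior, prior_precision=1.0,
                           obs=evidence, obs_precision=9.0)    # 9.0
```

The same 10-unit discrepancy shifts the belief by only 1 unit when the prior dominates, but by 9 units when the evidence dominates; attention, on this account, is precisely the act of raising `obs_precision` for a chosen channel.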
This process of dynamically adjusting the precision of sensory signals provides a powerful neurocomputational account of attention (the process of boosting the confidence assigned to specific sensory signals, making them more influential). From the perspective of Active Inference, attending to a particular sensory stream is functionally equivalent to increasing the precision (or "gain") of the associated prediction errors (Feldman and Friston, 2010). By turning up the gain on a specific channel of information, the brain amplifies its influence on belief updating, effectively prioritising that information for conscious processing. In essence, attention is the computational mechanism for turning up the 'volume' on sensory data that matters most for resolving uncertainty.
When the mechanisms that control precision are properly calibrated, this system enables flexible and adaptive behaviour. However, aberrant or miscalibrated precision can lead to persistent false inferences, providing a powerful framework for understanding psychopathology.
4.0 A Computational Model of Functional Neurological Disorders (FND)
From a computational standpoint, Active Inference provides a formal, mechanistic account of Functional Neurological Disorders (FND) (a condition involving neurological symptoms that are genuinely experienced but cannot be explained by structural disease). This perspective reframes FND symptoms—such as functional paralysis (the inability to move a limb), tremors, or non-epileptic seizures—not as feigned, imaginary, or purely "psychological," but as the logical and predictable outcome of an underlying pathology in the brain's inferential mechanisms. On this view, FND is a computational pathology in which aberrant belief updating leads to genuinely experienced yet medically unexplained sensorimotor symptoms (Edwards et al., 2012; Parr, Pezzulo, and Friston, 2022).
A core hypothesis emerging from this framework is that FND symptoms arise from the assignment of abnormally high and inflexible precision to high-level, top-down prior beliefs (an overwhelming and fixed confidence in deeply ingrained assumptions about the body's condition, overriding real-time sensory data) about bodily states. These powerful prior beliefs (e.g., "my arm cannot move," "my leg is weak") come to dominate the process of perceptual inference, effectively overriding contradictory evidence from the senses and intentions.
The consequences of this aberrant precision can be illustrated using the example of functional paralysis:
A patient develops a strong, high-precision prior belief that their arm is paralysed. This belief may originate from various factors, including previous illness, injury, or psychological stress, but once established, it functions as a powerful prediction. This prior generates a continuous, top-down prediction of sensory inputs consistent with paralysis—namely, the absence of movement-related proprioceptive and motor feedback.
When the individual forms an intention to move their arm, this creates a desired future state that is in direct conflict with the deeply entrenched prior belief of paralysis. This conflict generates a massive prediction error.
In a healthy motor system, the intention to move generates a prediction error that is resolved via action. This is only possible because the brain attenuates the precision of proprioceptive signals confirming the limb’s current, static position, effectively 'ignoring' them to allow movement to occur. In the FND model, the aberrantly high precision of the 'paralysis prior' makes this crucial step impossible. The brain cannot attenuate sensory evidence from the motionless limb because the belief that the limb should be motionless is afforded overwhelming confidence. Consequently, the prediction error generated by the intention to move is resolved not by action, but by perception. The powerful prior belief quashes the motor prediction, and the brain's final inference—its perceptual reality—is that the limb is indeed paralysed (Brown, Adams, Parees, Edwards, and Friston, 2013).
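The sequence above can be caricatured in a few lines: the intention to move and the "paralysis prior" compete for the belief about the limb, weighted by their precisions, and the reflex arc chases that belief. When the paralysis prior carries overwhelming precision, the error is resolved perceptually and the limb stays still. The parameter values are illustrative assumptions, not fitted to any data:

```python
def infer_and_act(paralysis_precision, steps=50, rate=0.2):
    """Return the final limb position after an attempted movement."""
    arm = 0.0                                # actual limb position
    intention, intent_precision = 1.0, 1.0   # desired raised position
    paralysis_prior = 0.0                    # "the arm cannot move"
    for _ in range(steps):
        # The belief about the limb blends the movement intention with
        # the paralysis prior, weighted by their respective precisions.
        total = intent_precision + paralysis_precision
        belief = (intent_precision * intention
                  + paralysis_precision * paralysis_prior) / total
        # The reflex arc moves the limb toward the believed position.
        arm += rate * (belief - arm)
    return arm

healthy = infer_and_act(paralysis_precision=0.1)  # arm rises
fnd = infer_and_act(paralysis_precision=50.0)     # arm barely moves
```

With a weak paralysis prior the intention wins and the arm rises; with an aberrantly precise prior the belief (and hence the limb) remains pinned near rest, so the brain's final inference is that the arm is indeed paralysed.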
From this perspective, FND is a disorder of inference. It is a condition where deeply held (high-precision) beliefs about the body's state sculpt perception and action, creating a reality that persists even in the face of intentions and sensory evidence to the contrary.
5.0 Conclusion: Reframing FND as a Disorder of Inference
In summary, the Active Inference framework recasts FND not as a disorder of will or emotion, but as a disorder of perception itself—a failure of inference. It posits that symptoms are the direct, tangible consequence of the brain's predictive machinery becoming pathologically convinced of its own beliefs about the body, even when they contradict reality.
This computational model has profound implications for both our understanding and treatment of FND:
A Non-Stigmatising Framework: By grounding FND in the mathematics of belief updating and predictive processing, this approach moves the discourse away from historical and often pejorative notions of "hysteria," conversion, or malingering. It reframes FND as a disorder of brain mechanisms—a tangible, understandable pathology of inference that can be studied and, ultimately, treated.
Potential Therapeutic Targets: The model points directly toward novel therapeutic strategies. If FND symptoms are maintained by aberrantly precise prior beliefs, then effective treatment should aim to recalibrate this precision (to therapeutically adjust the confidence level the brain places on its faulty beliefs). This might be achieved through interventions that provide overwhelming, reliable, and unambiguous sensory feedback that contradicts the pathological belief, thereby forcing a belief update. Physical therapies, biofeedback, and virtual reality could be leveraged to systematically challenge and retrain the brain's predictive model of the body.
Ultimately, the value of computational frameworks like Active Inference lies in their ability to build mechanistic bridges between seemingly disparate fields. By providing a common language and formal structure, this approach helps to unify neurology, psychiatry, and the foundational science of the predictive brain, paving the way for a more integrated and effective approach to complex brain disorders.
6.0 Bibliography
Adams, R.A., Shipp, S. and Friston, K.J. (2013) 'Predictions not commands: active inference in the motor system', Brain Structure and Function, 218(3), pp. 611–643.
Brown, H., Adams, R.A., Parees, I., Edwards, M. and Friston, K. (2013) 'Active inference, sensory attenuation and illusions', Cognitive Processing, 14(4), pp. 411–427.
Edwards, M.J., Adams, R.A., Brown, H., Pareés, I. and Friston, K.J. (2012) 'A Bayesian account of “hysteria”', Brain, 135(11), pp. 3495–3512.
Feldman, H. and Friston, K. (2010) 'Attention, uncertainty, and free-energy', Frontiers in Human Neuroscience, 4, p. 215.
Friston, K.J. (2010) 'The free-energy principle: a unified brain theory?', Nature Reviews Neuroscience, 11(2), pp. 127–138.
Parr, T., Pezzulo, G. and Friston, K.J. (2022) Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. Cambridge, MA: MIT Press.