Illustration by Michela Buttignol

Must a theory be falsifiable to contribute to good science?

Four researchers debate the role that non-testable theories play in neuroscience.

Falsifiability is considered by some to be the cornerstone of any scientifically sound theory. Indeed, certain schools of thought within the philosophy of science label theories that are not falsifiable as “pseudoscience.” But is the falsifiability, or testability, of a theory actually crucial for good science? Or can non-testable theories still help drive progress?

Neuroscience is a relatively young scientific endeavor, so we may benefit from more unconstrained exploration of ideas and theories. Making room for non-testable theories may help expand the space of ideas in neuroscience and provide footholds to grapple with increasingly large and complex datasets. Also, given that non-testable theories already pepper the field of neuroscience, it is important to explore the role they are playing and how well suited they are for it.

To dig into these issues, I moderated a week-long conversation with three researchers—David Barack, Cian O’Donnell and Stephanie Palmer—over Slack to try to define what qualifies as “non-testable” and what traits such theories tend to have. We describe specific examples from neuroscience, with an emphasis on normative approaches. We also discuss challenges in assessing the quality of non-testable theories, and the role they play in inspiring testable predictions. By the end, we reconsider referring to these ideas as “theories” at all, opting for “frameworks” instead.

The discussion reveals the different, and even differing, intuitions and ideas that scientists within the same field can have. It also shows that the base philosophical assumptions underlying the science we do often go undiscussed. Unearthing these implicit assumptions might help clarify the benefits of different work styles and caution us about their limitations.

Our Slack exchange in its entirety appears below, only lightly edited for typos.

Discussion:

Wednesday, March 20th

Ok @channel we are here to discuss the role (if any) of non-testable theories in neuroscience. To start I think we need to try to offer a definition, and ideally some examples, of what “non-testable theories” are. First let’s imagine a theory that could only be tested by collecting an amount or type of data that is beyond any current experimental ability (e.g. we need to know all the synaptic strengths of all connections in the human brain). This would be non-testable in practice. That is not what we should focus on here. Rather, I think we should focus on theories that are non-testable in principle.

To consider what kind of theories those might be, we can first look to canonical “non-testable theories” in other domains. For example, the Church of the Flying Spaghetti Monster claims, satirically, that “an invisible and undetectable Flying Spaghetti Monster created the universe”. The point, of course, is to highlight that religious appeals to undetectable entities cannot be tested, falsified, or argued against. So if a theory in neuroscience/psychology appeals to a non-physical entity that cannot be measured in any way, this would also be non-testable in principle (and in practice). Other elements that can make a theory non-testable may be that key components of it are under-specified, intentionally vague, or shifting.

I’ll stop here to give everyone a chance to weigh in on this description of non-testable theories to make sure we are all on the same page.

Are any tests at all permitted? If yes, then even the FSM is testable–for example, if it leads to inconsistency, then it would have failed a test (but not an empirical one!). I’m happy to limit the relevant tests to something like empirical ones; though I fear that we might accidentally lose out on a lot of mathematical theories…

Ok @David Barack so you are saying that if a theory can be submitted to some kind of logical analysis (perhaps where logical contradictions could be found) then that is a form of “testing” and we could therefore still consider it “testable”?

It might also help (me) if we define “theory” a bit here, too. I’m thinking about things like the idea of evolution (not a theory on its own) versus the theory of evolution by natural selection (testable concept). In neuroscience, we have a lot of ideas, but fewer theories. Is anything that’s not testable simply not a theory? I’m influenced by early reading of Imre Lakatos’ work, when I was in college, but cannot claim even an iota of bona fides in philosophy smiles

Lakatos would say scientific theories need to predict novel facts. In his own words: “What really counts are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes.” (Source: “Science and Pseudoscience,” originally broadcast 30 June 1973 as Programme 11 of The Open University Arts Course A303, “Problems of Philosophy”; its text was subsequently published in Philosophy in the Open, edited by Godfrey Vesey, Open University Press, 1974, and as the introduction to Lakatos’s The Methodology of Scientific Research Programmes: Philosophical Papers Volume 1, edited by John Worrall and Gregory Currie, Cambridge University Press, 1978.)

So this seems stronger even than testable – a progressive research program needs a theory that is predictive

In neuroscience, we have a lot of ideas, but fewer theories. Is anything that’s not testable simply not a theory?

Stephanie Palmer

Thursday, March 21st

Ok @channel so for today, I want to start by building off of the idea that theories based on non-physical entities are not testable and ask a perhaps bold question: Does this mean that theories of mental capacities (commonly studied in cognitive science and psychology, for example) are not testable?

For example, in my own domain of attention, there has been debate over “early vs late selection” of attended items. This debate comes from Broadbent’s attentional bottleneck theory (or filter theory of attention), which says that where there is a bottleneck in information processing there must be a system that decides what information passes through that bottleneck. Whether the bottleneck occurs “early” in processing or “late” is the matter of debate. Now technically, none of these things—the bottleneck, the attention system, the information processor itself—are meant to refer to specific physical things (at least certainly not in the initial stages of the inquiry). So was none of this testable? And, separately, is it useful?

Will chime in at length later, but note that the recommendation of @Cian re: theory uses theory in the description, so it is not, as far as it goes, a specification of ‘theory’. I agree with taking a wide stance, however! Happy to chip in re: how phil sci conceptualizes theory (sneak peek: no one agrees! yay)
🤪 2  ✅ 1

The “no one agrees” part is a big reason why I want to sidestep defining it here, lol

+1 100% cannot agree more hahaha
👍 1

Also @grace.lindsay you say, “theories based on non-physical entities are not testable”–but that’s a stronger claim than simply requiring that a theory have empirical tests.

I’m just taking that as inspiration to consider what might be non-testable in neuroscience. Seems like starting with non-physical mental components or capacities that are still widely influential might get us somewhere in this conversation.

I want to know: if we have two non-testable theories A and B, how do we decide a priori which is more likely to inspire those experiments that will lead to a good testable theory?

Grace Lindsay

Agree with debating this topic Grace. I think many theories of brain function take this form: describe some non-physical thing which may refer to an implicit process (eg attention, learning rules), or a low-dimensional abstraction of some high-d more concrete thing (eg neural manifolds, fMRI functional connectivity), and attribute to it some kind of causal power in driving brain dynamics. But if these ethereal abstract entities don’t correspond to something we can see under a microscope or perturb with optogenetics, then are they really valid components of falsifiable theories?

Fwiw (at this stage of the debate) my view is 1) no, they are not falsifiable; 2) yes, they are nevertheless useful – as long as we don’t take them too seriously

Great, Cian. Can you expand on why you think they are nevertheless useful?

Friday, March 22nd

Ok @channel I want to start today by building on this. Cian is saying here that a theory can be useful (even if it is not itself testable) if it inspires new experiments to collect new data and that data leads to a testable theory.

I want to know: if we have two non-testable theories A and B, how do we decide a priori which is more likely to inspire those experiments that will lead to a good testable theory?

Models, theories, abstractions, they are all just tools to enable our puny minds to reason about this overwhelmingly complex object, the brain. So if a theory gives us the mental crutches needed to think of the next cool experiment to collect more data, and that data sheds some insight on how the brain works, then it is useful, regardless of whether the theory itself is directly falsifiable. I would like to think that even if we had a bunch of competing unfalsifiable theories, over time some winners would emerge because they proved most useful in helping us reason about brain function.

Followup question: can we decide a priori? Or do we randomly choose which nontestable theory to be influenced by?

Okay, I’m liking this prompt and will give it a shot!

Maybe we can discriminate between even untestable theories because of their generalizability or explanatory power? Presumably these theories account for some known phenomena; maybe we can rate them based on how rich that explanatory potential seems.

A sidebar on this: it seems that once you can test a theory, there’s a distinguishability criterion that should come into play/practice – like Theory A explains Phenomenon X, but if Theories B, C, D … do so as well, without any discriminability between the theories in terms of their predictions, we might be at a dead end, even though things are testable

 

Maybe we can discriminate between even untestable theories because of their generalizability or explanatory power? Presumably these theories account for some known phenomena; maybe we can rate them based on how rich that explanatory potential seems.

Stephanie Palmer

Point of clarification David: you are saying that false models can be useful, but we are asking if models/theories that are not falsifiable can be useful. What do you see as the connection there?
👍 2

Yes, I was making a distinct point about false models. By analogy, I would say something similar about unfalsifiable models. They may be useful for helping us understand whatever we are trying to explain; there are many roles for false or unfalsifiable models in science.  So I suppose I would say that the concepts or models in an unfalsifiable theory might provide us with new insight despite not being falsifiable. hmmm I can’t think of a ready example at hand…
👍 1

For the ‘able to see under a microscope’ I suppose I meant that it is an observable thing in principle. Eg early virologists might have hoped to one day see a virus under a future microscope, however I am pretty confident we will never be able to see ‘attention’ under a microscope.
👍 2

This is a tricky one. Suppose attention is implemented in some discrete network. When I dissect the brain, do I see ‘attention’ or just the ‘attention network’? Compare to, say, a tuner on a radio. When I turn the tuning knob, am I seeing the tuner or just the tuning knob? For attention, we can certainly see the brain bits whose activity implements or instantiates attention–we can even see (let’s suppose) the various electrical events that occur when the network is functioning. We might not know that those events implement the computations required for attention; but aren’t we seeing attention? In other words, we can see wings flapping when birds fly. Are we seeing flight? Hearts have the function to pump blood. We can see the heart pump in the sense of the visual compressions and rarefactions; do we see it perform the function of pumping blood? What of kidneys clearing waste from the blood? It might sound strange to us now to say we ‘see’ attention when we see the neural activity, but might that not reflect merely the novelty of the concept and its associated neural tissue?

So you can still be testable if you posit the existence of an observable thing, even if it is not observable now (or even very well-specified). But working with entities that by their nature are not directly observable under any circumstances may cause a theory to be non-testable.

This makes sense to me and would protect us from the “not even wrong” ditch that string theory finds itself in today.

I’m not sure–strictly speaking, we don’t obviously see cars, trees, and such, all we see are various colors and patches of illumination. To say we see these things directly is what’s called ‘naive realism’ in the philosophy of mind, and it is a very contentious view. So I think it depends on what is meant by ‘direct observation’. I think we can see all kinds of things; when I observe bubbles in a bubble chamber, am I seeing subatomic particles? Why not?

David what do you think about mental objects/capacities like attention or memory? Are they directly observable? How does that compare to the way that a car is directly observable?

I guess I’d class many things as observable even if what we see are the effects of their action on other materials/fields/objects in our measurements. I’m a theoretical physicist by training, so saying we observe something by only physically touching, seeing, or whatnot just isn’t in my list of necessary conditions
👍 1

I have a strong belief that some things physically exist in the world, like apples and proteins, and others don’t, like the concept of attention or an eigenvector

(But I’m an applied physicist by training, so maybe I would say that)

hahaha Interesting to see the theory vs applied contrast coming out in the views!

Presumably the concept of attention exists somewhere in the world, at least as a (distributed) representation in someone’s head. I would describe attention as a function of some set of neural regions; we can observe the substrate and the outputs or effects of attention. But we can’t see attention any more than we can see the filtering by the kidneys or the circulation of blood by the heart. They are functions; functions are inferred, not observed. So I think I agree with that.
👍 1

Thanks. This is a tangent but what do philosophers think about math? I know mathematicians argue about whether math is created vs discovered

Oooo there are dozens of views of metaphysics for math. Three big ones are nominalism, platonism, and formalism; whether math is created or discovered somewhat depends on your metaphysics. So, for example, I’m a nominalist (which makes me a bit of a rare bird); I don’t think the number 3 exists as such, but rather identify it with sets composed of three objects. On my nominalist view, math is discovered insofar as we discover truths about the world, such as relations between sets of actual things. Formalism, on the other hand, would be more consistent with creation. Formalism is the view that math just is demonstrating what follows from certain formal systems; but formal systems are systems that we create. So, on this view, math is created. But there are literally dozens of flavors here.

Monday, March 25th

@Stephanie Palmer are you ready to start our free energy principle (FEP) discussion?

Ok! Sorry about that, I thought that got nixed when David said it wasn’t generalizable. Will get some thoughts here ASAP

Just catching up now, I see the earlier thoughts 🙂

I think we can at least say FEP aims to explain a broad range of data, and we may consider that a good a priori trait of a non-testable theory, so I think it is worth exploring.
👍 2

First off, disclosures. I myself work on information bottlenecks for predicting the future, but I would call that prospection, not precisely predictive coding. I also obviously think about bottlenecks a lot, but the particular kinds of trade-offs that I like to work up, mathematically, and test with data, have flexibility on what we decide is relevant to the biological system.

The FEP essentially defines a trade-off between creating a model that reduces surprise about the new data arriving into the system and the cost of building that machinery. The idea is that the brain might minimize a form of free energy that is an explicit balance between complexity and accuracy in a model of the external world (either external to the organism or external to a brain region). There’s also room within this formulation for constraints on internal dynamics and external inputs. This is all, of course, very closely related to Bayesian inference and Kalman filtering.
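One standard way to write this balance, assuming the usual variational setup with hidden states s, observations o, and an approximate posterior q(s) (a gloss on the formalism, not part of the exchange):

F[q] = D_KL[ q(s) ‖ p(s) ] − E_q(s)[ ln p(o | s) ]   (complexity minus accuracy)
     = −ln p(o) + D_KL[ q(s) ‖ p(s | o) ] ≥ −ln p(o)

Because the KL term is non-negative, minimizing F minimizes an upper bound on the surprise, −ln p(o).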

Thanks @Stephanie Palmer! Building off what we previously discussed, I suppose we could say that free energy is a non-physical entity that we wouldn’t expect to ever observe directly, yes? And that is partly what makes FEP non-testable?

Just a reminder @channel to chime in here as we want to try to wrap this conversation up on Wednesday. Particularly, if we all agree FEP is non-testable, maybe we can move onto what it may offer that is helpful (if anything)?

Well I would imagine that it offers a body of formal thinking that can underlie testable hypotheses. But there’s another aspect I think is equally important. The FEP offers a set of concepts or ideas, an ontology, that can help us structure and interpret our evidence. Something similar is true for dynamical systems theory or for concepts drawn from topology for understanding, say, manifolds in neuroscience.

I guess non-testability of a theory doesn’t really bother me, so long as it is joined to a part that makes testable empirical predictions.

David Barack

Right! I think within that framework, you’ve gotta work out the functional consequence of the idea. Like: FEP says this should be optimized, so the filter from sensation to response in area Z should have this property.

An important next step is to compare that consequence to what happens in other frameworks. We’ve been working on this for comparing prospective filters you’d get for different timescales of prediction, and comparing to what a Kalman filter would do, or what an optimal linear system would do.

Finding the daylight between these things I think is some of the most important work of neuro theory
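To make “what a Kalman filter would do” concrete, here is a minimal one-dimensional sketch (illustrative Python with made-up dynamics and noise levels, not anyone’s actual analysis):

import numpy as np

rng = np.random.default_rng(0)

# Toy latent signal: x[t] = a * x[t-1] + process noise, observed with sensor noise.
a, q, r, T = 0.95, 0.1, 0.5, 200
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0, np.sqrt(r))

# Kalman filter: the optimal linear predictor of the next state from past observations.
mu, P = 0.0, 1.0                               # posterior mean and variance
preds = np.zeros(T)
for t in range(T):
    mu_pred, P_pred = a * mu, a * a * P + q    # predict step
    preds[t] = mu_pred                         # the filter's guess before seeing y[t]
    K = P_pred / (P_pred + r)                  # Kalman gain
    mu = mu_pred + K * (y[t] - mu_pred)        # update step
    P = (1 - K) * P_pred

print("prediction mean-squared error:", np.mean((preds - x) ** 2))

A candidate prospective filter can then be scored against this baseline on the same signal, which is the “daylight” being looked for.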

I think Matty Chalk and Gašper Tkačik also have some nice work in this domain (and many others, too)
👍 2

Tuesday, March 26th

I like the comparisons to dynamical systems theory and topology. I’d throw another one out there: Paul Cisek’s work, which is basically about encouraging neuroscientists to remember that the brain is an evolved system. This is meant to give us some structure to interpret our data around and to plan experiments. But the core claim isn’t really something to be tested; it is just a framework for thought. And in each topic area you need to work out how it would manifest.
👍 1

But I still feel like there might be something different about FEP, particularly in that its core claim (that the brain is always trying to reduce free energy) is more controversial than the other claims (i.e. that the brain is evolved, or that it is a dynamical system). So I’m wondering if the recipe for a more “successful” non-testable theory is that it starts with a fairly undeniable claim and just encourages using that fact to structure thought (not to say FEP hasn’t been successful, but it is more divisive than the others). Thoughts, @channel?
👍 1

Huge fan of considering the fact that brains have evolved! I think it’s a hard ask, but I do think brain evolution is testable, from a computational perspective. Our own attempts at that are something like: 1) measure sensory features of the ecological niche the animal occupies, 2) define (and test) the behavioral goals of that organism, focusing on “mission critical” things like survival and mating that are more prone to selective pressure, 3) work out the consequences of these constraints, computationally, on particular species and sensing systems, 4) try to lift off a more general “theory of the evolution of computation” from this. 4 is tough! 🙂

Coming back to FEP, though, I think there are some factors that contribute to the divisiveness of the idea, some very human/sociological: 1) papers, even reviews, that are really hard to parse/understand, even if you are an expert in the field, 2) a dearth of even discussions of how to test it
👍 2

If I may share a personal journey along these lines with info theory and the retina, it might help explain my take on some of this: when I was a postdoc, we looked to see if predictive information was maximized in retinal ganglion cell populations; we computed this thorny/abstract bound on max pred info and managed to measure it in real neurons: the neurons hugged the bound and we declared success, but reviewers were SOOOOO irritated and confrontational about the paper. This is when I got the worst review I have ever received in my entire career. Happy to say more, but my colleagues and I refer to it as the “orgasms and bananas review”, because those were the actual words used by the reviewer to shoot down the notion that anything interesting in the sensory world is predictable. I. Kid. You. Not! So why all the spiciness? Why all the extreme pushback?
😵 1

I took the best possible parts of these reviews and realized that we needed to compare what the RGCs did to what a well-fit LN neuron would do under the same stimuli (i.e. is this just a fancy and abstract way of defining a new function in the brain that is actually just a consequence of things we know already from the textbook picture of RGC encoding?)

This falls into the “couldn’t this just be …?” category of critiques
😅 1

So I did test the LN model; it failed to hit the pred info bound, and we had a much easier time publishing the paper and getting folks to pay attention to the result. We’ve done a bunch since then to tease apart possible mechanisms in the retina for this, embracing the question of how this might be implemented.
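For readers outside the retina world, here is a toy version of what such a linear-nonlinear (LN) null model looks like (a sketch with a hypothetical filter shape and nonlinearity, not the published analysis):

import numpy as np

rng = np.random.default_rng(1)

# White-noise stimulus, as in many retina experiments.
T = 5000
stim = rng.normal(0, 1, T)

# Linear stage: a biphasic temporal filter (shape chosen for illustration).
t = np.arange(40)
k = np.exp(-t / 8) * np.sin(2 * np.pi * t / 25)
drive = np.convolve(stim, k, mode="full")[:T]

# Nonlinear stage: a static rectifying nonlinearity maps drive to firing rate.
rate = np.log1p(np.exp(drive))        # softplus keeps rates positive

# Spiking stage: Poisson spike counts per time bin.
spikes = rng.poisson(0.1 * rate)      # 0.1 scales rate to counts per bin
print("mean spike count per bin:", spikes.mean())

The test described above amounts to asking whether a model like this, fit to the same cells, carries as much predictive information as the real neurons do.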

ASIDE OVER

So, coming back to FEP, I think, though I’m sure I’ve missed things, that there has been less of this testing/grappling/teasing apart/exploring mechanisms for that theory

thanks for sharing! Would you say this is a problem particularly for “normative” theories? I.e. those that are trying to answer a “why” question, usually by appealing to the idea that the brain is optimal in some way? The Bayesian brain idea was mentioned in the context of FEP as well, and this is also a normative theory that is sometimes considered untestable or at least not falsifiable

Maybe that’s part of it, but I think normative theories are testable. One might have the urge to say “look, I found some data that aligns with my big normative idea!” but that often leaves folks cold. Taking a few more steps is possible and crucial, IMHO, in differentiating theories and comparing to standard models and frameworks already in the zeitgeist

Perhaps the sociological factor is really strong here – maybe too many theoretical physicists (calling my own people out here!) came along early in neuro and tried to develop field theories of the brain or quantum explanations of consciousness and they were so poorly informed by data that they were straight-up useless and folks got fed up, or developed strong priors that that kind of discourse isn’t helpful if you want to understand how real brains work. Maybe it’s a really strong sociological prior that is having trouble getting updated?

What do you think of the critique, which I’ve heard most in the context of the Bayesian brain, that to some extent there are too many free parameters, so you can always find some formulation by which the brain can be considered “Bayes optimal”? I think one could make a similar claim about FEP, in that the exact way to map it to a specific problem is so flexible that it can’t be disproven; it just changes shape? This is, in some ways, just another way of saying it is untestable, but is this a flaw that reduces the utility of a non-testable theory?

Or perhaps, are the many mappings a benefit that helps us explore our data in more creative ways? And is this in line with these theories being more about inspiration than concrete theoretical progress?

I think the “so many parameters” issue makes some of this fall into the “not even wrong” category for sure, which is a problem. I think committing to a formulation that had some constraints, or took the constraints in neural systems into account in some way, would make this more solid and less controversial

sorry for my awolness. I did read yesterday’s discussion, late, but didn’t feel I had anything useful to contribute.

have enjoyed all the above discussion though, thanks Grace and Stephanie.

I guess the way I think of both Bayesian brain and FEP is that they are ‘distal’ theories in the sense that they are more like high level principles. To map them into specific predictions ‘proximal’ enough to be compared with data (even qualitative, never mind quantitative) requires specifying some additional model setup. The particular setup chosen by a researcher may be a matter of subjective taste, or appropriateness for the target data/experimental paradigm. So there is some flexibility in the model choice that means there is a substantial degree of researcher freedom in the mapping between ‘the brain is Bayesian’ and ‘here’s how far left I predict that person to move their arm in the manipulandum’

so it might be that in its ‘distal’ form (not entirely happy with that terminology) a theory is unfalsifiable – transcendental even. But once instantiated and embodied in a more proximal and specific and mortal model it could become falsifiable

IF we believe all that, then the role of unfalsifiable theories is to act as inspiration for more concrete theories – as Grace said.
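As a toy example of that distal-to-proximal move (entirely our construction, assuming Gaussian cue combination): “the brain is Bayesian” instantiated as a precision-weighted average of two cues about where to move the arm.

import numpy as np

# Two noisy cues about target position (say, visual and proprioceptive),
# each modeled as a Gaussian likelihood. Numbers are illustrative.
mu_vis, var_vis = 10.0, 1.0      # visual estimate (cm) and its variance
mu_prop, var_prop = 14.0, 4.0    # proprioceptive estimate and variance

# Bayes-optimal combination for Gaussians: a precision-weighted average.
w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
mu_post = w_vis * mu_vis + (1 - w_vis) * mu_prop
var_post = 1 / (1 / var_vis + 1 / var_prop)

print(f"predicted reach: {mu_post:.2f} cm (posterior sd {np.sqrt(var_post):.2f})")
# The proximal prediction: reaches cluster near 10.8 cm, pulled toward the
# more reliable (lower-variance) cue; that is something an experiment can falsify.

All of the researcher freedom described above lives in the modeling choices (which cues, Gaussian noise, how reliability is set), not in Bayes’ rule itself.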

btw your peer review story is very stressful @Stephanie Palmer. I am more of a newbie to normative modelling and will bear your advice in mind. I even had a plan to do your “look, I found some data that aligns with my big normative idea!” thing… now need to think harder about the null models etc I will compare to

I hope it’s useful info, and that maybe some community thawing has been achieved overall so that you won’t face any kind of pushback like that! 🙂

A lot going on today–sorry for the delay as well!

All good stuff.

Some thoughts on the various topics:

1. On evolution and normativity. The selective environment that gave rise to the present form of neural circuits is essentially unknown. We can make some guesses, but I think it is dangerous to infer from contemporary selection to historic selection. The best evolutionary biology works with organisms that have pretty rapid turnover–I think there’s an E. coli experiment that’s been going on for decades–but in the critters-with-substantial-brains category, that approach won’t fly.

I want to push back a little here – I think there are some pretty clean things one can glean from extant species, especially if you know their relatedness on an evolutionary tree and such – and there’s some very nice stuff from e.g. David Stern on evo of fly behaviors that I believe shows that not only is making some evo of computation arguments possible, it’s even testable

Oh totally! I’m a huge fan of the evolutionary approach, especially comparative. I just meant that we don’t have access to the selective environments of ancestral populations, so these kinds of inferences are fraught.

But there are dangers of homology vs analogy, etc

yep, I totally agree with “tread carefully, but go ahead and tread”
❤️ 1

Ooops–that thought wasn’t finished: Continuing: this is where normativity can really come in. By showing what should happen in various environmental contexts, we can rationalize where selection might be headed. We examine some aspects of a selective environment, and we say,

dammit, happened again! we say, ‘ok, here’s what should be selected for in light of this cost function’. that cost function can be max(# offspring) or min(energy outgo) or something like that. These kinds of optimality models are important for making inferences about the direction of selective pressures.

2. On optimal models vs model optima. Optimality models are essential in biology, in the foregoing fashion; they allow us to get a handle on the sorts of selective pressures that could exist in the environment and the sorts of computations that brains might have been selected for. But obviously, brains are often sub-optimal! So, while those optimality models can give us a clue, they might not specify the right model for whatever is being computed. This is where the model-experiment-compare-revise-repeat structure of explaining behavior in cognitive neurobiology steps in. To explain some target behavior, first, we specify a model. This initial model is often an optimality model. We then compare the model’s behavior to the animal’s. We find the discrepancy between the reality of the behavior and the optimality model. We then go back to the optimality model and we plug in new variables, or new constraints, to attempt to better explain the behavior by decreasing the divergence between the behavior and the model. We then repeat. The new models, however, may not be optimal. But when we examine the model’s performance, we find the optima for that model, the best that model can do. I think this confusion between optimality models and (sub-optimal) model optima is rife.
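As a schematic, runnable illustration of that loop (invented data and models, not anyone’s actual pipeline): start from an optimality model, measure where behavior departs from it, then add one constraint and refit.

import numpy as np

rng = np.random.default_rng(2)

# Simulated 'behavior': choices between two options with random values.
# The simulated animal picks the better option only probabilistically.
values = rng.uniform(0, 1, size=(1000, 2))
gap = values.max(axis=1) - values.min(axis=1)
p_best = 1 / (1 + np.exp(-5 * gap))       # true generative softmax, beta = 5
chose_best = rng.random(1000) < p_best

# Step 1: the optimality model says always take the max-value option.
print("agreement with optimality model:", chose_best.mean())   # well below 1.0

# Step 2: the discrepancy motivates a revised model with one constraint,
# a softmax temperature beta, fit by maximum likelihood over a grid.
def neg_log_lik(beta):
    p = np.clip(1 / (1 + np.exp(-beta * gap)), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(chose_best, np.log(p), np.log(1 - p)))

betas = np.linspace(0.1, 10, 100)
print("fitted beta:", betas[np.argmin([neg_log_lik(b) for b in betas])])  # near 5

The revised model is no longer optimal; its own best achievable performance (its model optimum) becomes the meaningful comparison point.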

I totally agree about confusion here, in general. But I do want to say that some of the best normative models of biological computation pitch towards places where the organism had a strong selective push to be bona fide optimal – like flies sensing visual contrast in a scene for threat avoidance and whatnot

Interesting! I typically just assume that evolutionary pressures haven’t been functioning long enough to arrive at the globally optimal thing to do. But threat avoidance can be a powerful selective pressure!

I’m also having fun lately thinking about things that have really strong selective pressures and effects, which has been a fun new playground for thinking about evo of computation (we’ve been talking with yeast folks and cyanobacteria (circadian rhythm) folks)

Ooo this sounds super interesting. Like, computation has to evolve under certain circumstances, to do the optimal thing?

exactly! The fun one in circadian rhythms, that’s actually testable because the little beasties divide every 20 mins, is to see how latitude and local weather sets internal rhythms and limits or facilitates migration to other environments
👍 1

we can compute optimal internal circadian frequencies (because folks know the circuits pretty well) as well as the optimal internal/external input tradeoffs to hedge on weather “noise” and see how well it predicts how the real things evolve
👀 1

lesson for me was: don’t be snobby about your nonlinearities! single cells compute a ton without neurons – ha!
💪 1  🙌 1

Yes, they do! Tons of single cell computations going on. Sam Gershman has been working on that vis-a-vis learning as well.
👍 1

Just had a great talk with him last year! 100% fun stuff

I’ve been thinking about this issue with regard to the typical tasks used in primate neuroscience (technically my home sub-discipline). Basically, I have increasingly little faith (decreasing faith? faith first derivative large and negative, at any rate) in just about every cognitive finding in that field. All the tasks are far, far too simple to explain what very, very metabolically expensive neocortex is doing. This is especially true of all those value-based decision tasks, which it looks like even plants can do?! C’mon now, OFC is huge, recently mega-expanded, and energy hungry. It’s not just representing the values of options!

…and might be that some of the low-D manifold findings reflect the task design over the more natural embedding space

ah, I see what you did there… No question that task complexity will inform and constrain the neural activity. But I stand by the manifold thesis. Note, maintaining that the manifold computations are the primary explanatory objects is not the same as claiming that low-D manifolds will always do so. The finding of low-D manifolds must be tested against more complex task designs. So, the low-D manifold findings might reflect more natural embedding spaces that are revealed in simpler task designs. That is just as much a viable hypothesis as the one you propose. But ‘simpler task design’ does not mean ‘simpler tasks’. For example, most decision tasks are stupid simple–no increase in complexity, such as number of options or range of values, will convince me that OFC is simply representing values. But latent state inference? That’s a more complex cognitive computation, one that OFC might excel at; and, using a simpler task design (with, say, two or few latent states) might in fact reveal the computation therein, even via manifold computations, which scales up. So that would be a case of simpler task design for a more complex task. And, there will be some low-D computations that are consistent across cognitive tasks, such as the Chaudhuri paper on head direction.

Yep! I concede all this! While maintaining it was fun to probe 🙂

Also a big fan of the Chaudhuri recent results

So, to go a bit deeper, do you think this relates to the necessity of computation in a finite, noisy neural network or is it simply the “best” way to compute?

Ahh hahaha I love these conversations. They’re the stuff that keeps me going!
🎉 1

Yeah, the Chaudhuri stuff is super important as well because they used a topological technique. When I run into philosophy people on this, they are usually like, ‘oh those manifolds are just linear models of neural activity’. The Chaudhuri paper is so on point as a ‘well, actually’ reply
💯 1

Hmmm… what do you mean by ‘necessity of computation’?

Just that the brain has to make computations on these representations – they don’t just exist in some abstract space to be read out by an infinite-size, well-designed decoder. Finite parts, finite energy, noise and all that – some downstream area has to do something with the representation

Ah, I see. I take you to be referring to what I would call the ‘reality’ or ‘objectivity’ of computation. That is, the model of transformations over representations refers to actual things in the system, not some fictive components that merely exist in a false model.

I certainly am not making any claims about computation with neural manifolds as ‘the best way’ to compute. I’ve got a paper on computation with neural manifolds that I’m happy to share, if you are interested. It’s almost wrapped up, but it gives a nice albeit brief overview of theories of computation in philosophy of mind/neuroscience. The key question to answer first is how the brain computes, not whether computation with one set of neural entities is better than another–I’d like to know how the brain does it, and then I can turn to answer whether the way that it does it is the most efficient, or most robust to noise, etc. But I do think that it is the right level at which to describe what’s going on, and further, I would argue that manifolds allow for the computation (that is, the transformation over representations) to be robust to changes in the neural population underlying the manifold. I don’t think you can reduce the manifold to that population because there is a one-to-many mapping from the computation to patterns of correlated activity; picking out any one of those patterns as the computation is a mistake.

I also happen to think that manifolds are the right level at which to describe regularities that will generalize across animals and species. So, they are pitched at the right level of description to capture regularities in the system. I think that made it into the Nature Reviews article; but regardless, it is a second argument for the approach in addition to the robustness one. Also, Juan’s lab’s recent Nature paper supports that second argument.

Incidentally, there’s an interesting question about computation and falsifiability. The mathematical theory of computation–e.g., Turing machines–is not falsifiable. It’s just a mathematical theory which can guide our thinking. What is falsifiable is whether any meaningful computations map onto neural activity.

Incidentally, I very much enjoyed this thread and would be keen on collaborating on an opinion piece or something!
🙌 1  🎉 1  💯 1

that sounds like a lot of fun, I’d be up for that!
🙌 1

3. Frameworks like the FEP or Bayesian approaches can be tremendously useful as optimality models. But we shouldn’t expect those frameworks to explain actual behavior–there’s no guarantee that the noisy history of selection for some organism landed on what’s optimal. Instead, we should start with those frameworks and then revise them in the fashion I outlined above. Further, those frameworks aren’t falsifiable; after all, they (often) state what is the global best that you can do at some task in the environment as judged by some cost function granted some minimal (…or not so minimal, depending on who you ask) assumptions. So there’s another useful role for those models. But we happily and readily depart from them once our explanatory project gets off the ground. I take it that this is basically how behavioral decision theory has operated. Expected utility theory is a (generally Bayesian) optimality framework. But humans and animals manifestly are not optimal according to EUT. So, we shift to prospect theory, or other nonlinear utility functions. Those actually do pretty awful as well. So, we shift to sets of heuristics, like Gigerenzer’s approach. And so on. I think this applies to the computations performed by neural circuits as well.

4. Distal theories vs proximal ones. In the philosophy of science, there is a ton of beautiful work showing how you need general theories–think, here, of Newton’s laws–and specific theories that apply those general theories to particular subject matter (such as applying Newton’s laws to different materials). It is often a substantial scientific advance to state a derived form of a general model that can apply to some specific subject. However, Newton’s laws are still falsifiable (I think–the physics tends to move pretty quick here, and there are certainly physicists who think the 3 laws are derivable from more abstract, mathematical structure; so e.g. it simply can’t be the case that F equaled anything other than ma, etc). So I think the Bayesian framework, or the FEP framework (which afaik builds in Bayesian assumptions), aren’t quite like distal theories because they aren’t theories to begin with. They are non-falsifiable frameworks that play a substantial role in providing concepts and tools, that help us state optimality models at the beginning of inquiry, and that presumably play other roles but that don’t constitute the empirical meat of our scientific theories.

5. Optimality models vs model optima redux: Grace, you said “Would you say this is a problem particularly for “normative” theories? I.e. those that are trying to answer a “why” question, usually by appealing to the idea that the brain is optimal in some way?” But I think there are two sorts of optimality at play here: a globally optimal sense present in optimality models, and the best that a constrained, realistic etc model can do–the model optima. BOTH are normative but in importantly different ways, because the latter take into account reality whereas the former essentially do not. So, I would predict some normative theories are on better grounds, namely those that account for the facts as we know them about some neural circuit; and that type of normative theory might be falsifiable by looking at the model’s performance (what I call model optima) and comparing it to behavior. And Stephanie, you said “I think normative theories are testable” and that “Taking a few more steps is possible and crucial, IMHO, in differentiating theories and comparing to standard models and frameworks already in the zeitgeist”. I 100% agree, and this just is that distinction between an optimality model, which is a model that states the best that one can do and to hell with reality, vs. finding model optima, which incorporates those next few steps which constrain the hell out of a model! But I’ll add that some normative theories manifestly are not testable and they are not designed to be, because they aren’t empirical theories at all–they are built to solve a problem precisely when ignoring any realistic constraints.
🙌 1  🤩 1

Ok, sorry for the text barf!
😅 1

Upon further thought, it seems to me that one might be able to treat any model as an optimality model or in the sense of using model optima to compare to behavior… so not 100% endorsing my previous claims. might need some tempering.

My takeaway is: we don’t take the models too seriously. Is the brain Bayesian? Kind of. Do photoreceptors encode natural stimuli efficiently? Approximately yeah. Some of these guesses will be woefully wrong, others look good if you squint a bit. We learn something and move on.

Cian O’Donnell


🙌 1

Yes exactly! But note, this doesn’t dissuade the optimality model purist–they don’t care so much about the pragmatics, and they don’t care that their model fails miserably. I think economists are a wonderful illustration of this; the vast majority still operate within an EUT framework, even though that model is miserable at explaining behavior (‘don’t let bad data ruin a good model!’).
🤪 2

Luckily economists have zero impact on the economic policies of our political representatives and therefore on the welfare of our people… right?
✅ 1  😱 1  😭 2

Wednesday, March 27th

Thanks for all that @channel! I want to push on one comment from Cian and then wrap up the discussion.

Cian said:

So my takeaway is: we don’t take the models too seriously. Is the brain Bayesian? Kind of. Do photoreceptors encode natural stimuli efficiently? Approximately yeah. Some of these guesses will be woefully wrong, others look good if you squint a bit. We learn something and move on.

But I want to clarify: thinking about the Bayesian brain approach, for example, is the beneficial outcome of exploring that idea just that in the end we can say “kind of” to the question “is the brain Bayes optimal?”? Or is the benefit that, by doing these detailed mappings to specific conditions and finding the constraints on the model that explain performance under a Bayes-inspired framework, we learn about those constraints and caveats? In other words, do we benefit because we can say the brain approximately matches the ideal theory, or do we benefit because we can say how it deviates from it?

Can’t it be both? That is, can’t we say that the Bayesian optimal thing to do is explanatory insofar as it sets the ‘gold standard’ which should be approximated by whatever the brain is in fact doing, but it is also explanatory insofar as investigating the violation of its assumptions helps explain why the brain deviates from it?
👍 1

Fair point!

Yes I agree. E.g. for the Bayesian brain, nobody thinks the brain is solving high-dimensional integrals analytically. So at minimum we are retreating to MCMC
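For readers who have not met MCMC: the point is that a sampler can approximate Bayesian inference with only cheap local computations, no integrals required. A minimal sketch (a generic random-walk Metropolis sampler on a made-up posterior, not a claim about neural implementation):

import numpy as np

rng = np.random.default_rng(3)

# Unnormalized log-posterior: a heavy-tailed prior times a Gaussian likelihood.
def log_post(x):
    return -np.log1p(x ** 2) - 0.5 * (x - 2.0) ** 2

# Random-walk Metropolis: propose a local move, accept it stochastically.
x, samples = 0.0, []
for _ in range(20000):
    prop = x + rng.normal(0, 1.0)
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop
    samples.append(x)

print("posterior mean estimate:", np.mean(samples[2000:]))   # discard burn-in

Averages over such samples converge to posterior expectations, which is the sense in which “retreating to MCMC” still counts as approximately Bayesian.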

Ok, @channel I think we can start to wrap things up. Looking back I see we have discussed:

  • Some features that may make a theory non-testable (with a focus on non-physical/non-observable constructs and what counts as such, including mathematics), plus some debate over the extent to which normative models are testable
  • The idea that non-testable theories are helpful if they inspire new experiments/data analyses by serving as “mental crutches”
  • That we may prefer non-testable theories that can be applied to a wider range of data
  • How these non-testable or “distal” theories/frameworks can be mapped to specific testable models under specific circumstances
  • The importance of determining the different predictions made by different theories, and of evaluating a non-testable “distal” theory by how well it leads to distinguishable (and ultimately accurate) hypotheses
  • How FEP/Bayesian brain play the role of “distal theories” that inspire specific approaches to the study of the brain
  • A potential sociological aversion to certain non-testable theories
  • How these non-testable theories sometimes represent an “ideal,” and how part of the work/payoff is finding the specific ways in which the brain deviates from or matches this ideal

If you have any corrections you want to make to that summary please do. But also let’s take this time to throw out any other thoughts you think should be included in this discussion, maybe things you expected we’d talk about but didn’t etc.

Sorry I’m on a plane to France now! But will think about it en route! 🇫🇷
✈️ 3

Thanks @grace.lindsay! I would not characterize these frameworks as theories, not even distal ones. See my comments above about that topic. But that might just be my preference?

Yes, perhaps that is another takeaway, that these things we are discussing are frameworks, not theories.
👍 2

As to anything missing: we didn’t get a chance to discuss how falsified frameworks or theories can still allow us to make progress and, even, provide some kind of understanding of a system! That’s been a hot topic in philosophy of science for the past few decades. Idealizations are, strictly speaking, false, and frameworks like FEP, Bayesian approaches, or EUT contain assumptions that look a lot like idealizations and are certainly false. Nonetheless, they can help us make progress and themselves provide understanding. At least, that’s a standard hot take. I think we did cover how they can help make progress by giving us a comparison point and by providing a set of assumptions whose violations might further our understanding of a system. I am not sure if we think they provide understanding themselves despite being false; in fact, I’m not even sure I agree with that. But it is an interesting idea!
👍 2

I’m also not sure where I fall on whether an unfalsifiable theory provides insight in and of itself. I think these frameworks can provide a good space for spawning new ideas/analyses/experiments (which is covered in other points in the summary), as long as poking holes in them and open discourse are allowed (pointing to another summary note about sociology). Reminds me of discussions of the utility of “false dichotomies” in scientific debate that start conversations and provide useful insights into where to go next, even if the premise for the dichotomy is flawed.
👍 2

I really like that summary @grace.lindsay, thanks!

As a wrap-up note, this was really fun and I’m glad we had time over the week to get to know each other, dig deeper on some threads, and explore different topics. The format was better than I thought it might be when we started and I’m really glad that you brought us all together this way, @grace.lindsay!
❤️ 2

Great! I’m glad our little experiment has set a good precedent

Yes agree, thanks Grace, it was a great format idea and you guided the discussion impeccably. I enjoyed it and learned a lot

Now I know much more about what I don’t know

One last q: if we agree with David that FEP etc are frameworks not theories, do we have any remaining examples of an unfalsifiable theory?

Yea I suppose if being testable is somehow inherent to the definition of theory, then no

Oooo I really like this! Unexpected conclusion but I think I agree?

Then again you do see the word ‘theory’ used in math, like set theory, but that could just be a different sense

Thank you Grace! I had a blast and your guidance was superb
