One step to unlock
RC 111.
111 Reading Comprehension passages across 23 topic categories — Philosophy, Psychology, History, Business, Science, Arts and more. CAT-level practice, free.
No spam. No payment. Your details help us build better tools for CAT aspirants.
GRADSKOOL

RC 111 — Free CAT Reading Comprehension Practice Tool

RC 111 Tool  ·  23 Categories  ·  111 Passages  ·  444 Questions
Progress
0 / 444
Category 01
Philosophy
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
The Trolley Problem & the Doctrine of Double Effect
Passage Timer
10:00
Read the Passage

Foot's trolley problem was never a self-standing puzzle about permissible killing; it was a diagnostic instrument for exposing an asymmetry in ordinary moral intuitions that neither standard utilitarianism nor deontology could accommodate without remainder. The switch case — redirect a runaway trolley to kill one and save five — elicits near-universal approval. The footbridge case — push a large man off a bridge to stop the trolley and save five — elicits near-universal refusal. The numbers are identical; the causal structure differs. Foot argued this asymmetry reveals the implicit operation of the doctrine of double effect (DDE): it is permissible to bring about harm as a foreseen but unintended side effect of achieving a good end, but impermissible to use harm as the direct means to that end.

Thomson complicated the picture by introducing the loop variant: a trolley on a looping track will return to kill the five unless diverted to a side track where one person's body will physically stop it. The victim's body is now causally necessary to saving the five — structurally identical to the footbridge case — yet most subjects still approve the switch. Thomson concluded the DDE cannot explain the asymmetry, since both the footbridge and loop cases use the victim as a causal means. Foot's defenders respond that the agent in the loop case intends only the trolley's diversion; the body's role as a physical stop is a further downstream consequence, not the intended means — preserving the DDE's intending/foreseeing distinction. The dispute therefore turns on how intentions are individuated, which is itself a contested problem in philosophy of action.

A third objection — Rachels' "doing and allowing" critique — attacks the DDE from a different angle entirely. Rachels argued that the moral distinction between killing and letting die, which partially underlies the DDE, has no independent ethical weight: intention and outcome being equal, there is no morally relevant difference between redirecting a threat and failing to prevent one. If correct, the intuitive asymmetry Foot sought to explain is itself an artefact of framing rather than a datum about moral reality, and the philosophical apparatus constructed to explain it explains nothing that genuinely needs explaining.

Questions · Passage 01
1
Foot's defenders argue that in the loop variant, the agent intends only the trolley's diversion — not the victim's death — and so the DDE is preserved. Which of the following, if true, most severely undermines this defence?
CORRECT: C Foot's defenders claim the body physically stopping the trolley is the distinguishing feature — it's foreseen but not intended as a means. Option C introduces a boulder variant where no body is used as a physical stop, yet subjects approve at the same rate. This reveals the victim's causal role is not what drives the intuition — undercutting the entire defence. B actually supports Foot by showing the loop feels like the switch neurologically. A is a historical-domain objection to Foot, not the defenders' specific loop argument. D is a separate track scenario where no one is a means — irrelevant to the defenders' position.
2
Which of the following can be most reliably inferred from the passage's account of Rachels' objection?
CORRECT: B The passage says if Rachels is right, the intuitive asymmetry "is itself an artefact of framing rather than a datum about moral reality." The DDE was built to explain that asymmetry. If the asymmetry has no evidential weight, the DDE has no explanandum — it loses both its explanation and its foundation. A over-infers: Rachels denies the distinction has weight but doesn't necessarily endorse pushing. C conflates rejecting the killing/letting-die distinction with pure consequentialism — a leap Rachels doesn't make. D reverses history and misidentifies what Rachels targets.
3
The passage states that the dispute between Thomson and Foot's defenders "turns on how intentions are individuated." The author makes this remark primarily to:
CORRECT: A The author is not picking a winner — "itself contested" signals that the loop debate imports an unresolved dispute. The DDE can't be vindicated or refuted until intention individuation is settled elsewhere. B implies the DDE will win — the author is neutral. C assigns incoherence to Thomson; the author doesn't. D attributes an endorsement to the author that isn't there.
4
The loop and footbridge cases share the same causal structure — victim's body as a necessary means — yet subjects respond to them differently. Which resolution is most consistent with Foot's defenders' position as described in the passage?
CORRECT: D Foot's defenders say the cases only appear to share causal structure. At the level of intentions, they differ: loop = body-stop is a downstream foreseen consequence; footbridge = body-harm is the intended means. The paradox dissolves. A attributes inconsistency to the DDE — the defender's opposite claim. B is a scepticism about intuitions that no one in the passage adopts. C attributes the dual-process resolution to Thomson — she makes no such argument in the passage.
Passage 1 Score
/4

P 02
Personal Identity, Psychological Continuity & What Matters in Survival
Passage Timer
10:00
Read the Passage

The philosophical problem of personal identity asks what constitutes a person's persistence through time. Locke's memory criterion holds that a person P2 at T2 is identical to P1 at T1 if and only if P2 can remember P1's experiences. Reid's brave officer paradox exposed a logical flaw: the decorated general remembers the young officer's bravery; the young officer, in his day, remembered the boy's flogging; but the general cannot remember the flogging. Locke's criterion thus entails both that the general is and is not identical to the flogged boy — a direct violation of the transitivity of identity.

Parfit's response in Reasons and Persons was to dissolve rather than repair the problem. He argued that personal identity just consists in psychological continuity — overlapping chains of memories, intentions, beliefs, and desires — and that strict numerical identity is not what should matter for our practical and prudential concerns. What matters is Relation R: psychological connectedness and/or continuity with the right causal history. In fission cases — where each brain hemisphere is transplanted into a separate body, producing two psychologically continuous successors — strict identity cannot hold for both, yet both stand in Relation R to the original. Parfit's revisionary conclusion: we should care about Relation R, not identity, and R can hold to degrees and can branch.

Nozick's closest continuer theory attempted a middle path: identity goes to whichever successor is psychologically closest to the original, provided no equally close competitor exists. In fission, where both successors are equally close, neither is identical to the original. Critics have pressed the objection that survival cannot be an extrinsic matter: if one hemisphere recipient dies immediately post-transplant, the surviving successor is clearly identical to the original — yet if both survive, neither is identical, on Nozick's account. Whether one person survives cannot depend on what happens to a third party with no causal connection to the survivor's continuity.

Questions · Passage 02
5
Critics argue that Nozick's closest continuer theory makes survival extrinsic — your identity verdict depends on what happens to a third party. Which of the following, if true, most strengthens this objection?
CORRECT: A The extrinsicness objection is that Nozick's identity verdict changes across two scenarios while nothing about the survivor's intrinsic relationship to the original changes. Option A makes this maximally precise: psychological continuity is identical in both cases — the only variable is the third party's fate. This directly strengthens the charge. C supports the intuition behind the objection but doesn't establish that intrinsic continuity is unchanged — it only shows different verdicts. B attacks Nozick via Parfit but doesn't strengthen the specific extrinsicness charge. D is a concession by Nozick — it might weaken rather than strengthen the objection.
6
From Parfit's argument about fission and Relation R, which of the following can be most reliably inferred?
CORRECT: C Parfit says R can hold to degrees. A fission successor may have maximally rich connections to the original (high R); a person after decades of psychological drift has sparse connections (low R) yet is numerically identical. This inversion — identity without R; R without identity — is exactly Parfit's point about what matters coming apart from identity. A misreads Parfit: he says strict identity cannot hold for both, so the original doesn't survive as two. B conflates R being gradable with identity being gradable — Parfit keeps identity binary; it's R that has degrees. D over-infers: Parfit revises what matters, but his view on moral responsibility is more nuanced than a blanket repudiation.
7
Reid's paradox is sometimes dismissed by saying Locke only needs refinement: use overlapping chains of memory rather than direct memory links. This dismissal contains which logical error?
CORRECT: B Locke's criterion specifically requires direct memory. Switching to overlapping chains means you are no longer defending Locke — you have replaced his position with a new one. The original account is not "refined"; it is abandoned and a different theory is installed. A identifies a real flaw with overlapping-chain theories but misidentifies what is wrong with this particular dismissal. C is a historical confusion — the question asks about a specific logical error in the dismissal, not a mixing of objections. D identifies a real downstream problem with overlapping chains but is not the primary logical error in the move of "just refine Locke."
8
All three theories discussed — Locke's memory criterion, Parfit's Relation R, and Nozick's closest continuer — share a common underlying assumption that none explicitly defends. Which best identifies it?
CORRECT: D All three theories are doing substantive metaphysics — they treat personal identity as a real question with a real answer, not a matter of arbitrary convention. None defends this assumption explicitly; they all just proceed with it. A is close but wrong: Nozick's theory doesn't exclude physical criteria — it measures psychological closeness but a physical theory could generate the same metric. B applies only to Parfit and Nozick — Locke never uses fission cases. C is what Parfit explicitly rejects — his whole argument is that strict numerical identity is not the right target — so it cannot be a shared assumption of all three.
Passage 2 Score
/4

P 03
The Is-Ought Problem, Naturalistic Fallacy & Neo-Aristotelian Naturalism
Passage Timer
10:00
Read the Passage

Hume's observation that no normative conclusion follows validly from purely descriptive premises — the is-ought gap — remains the most consequential challenge in metaethics. To argue that because evolution produced cooperative instincts in humans, humans therefore ought to cooperate, commits what Hume diagnosed as an inferential leap: the move from facts about what is to prescriptions about what ought to be. Moore subsequently labelled a specific version of this error the naturalistic fallacy: defining "good" in terms of any natural property — pleasure, survival fitness, social harmony — and then deriving moral conclusions from facts about that property. Moore argued by open-question analysis that for any natural property N, it always remains a meaningful question whether something that has N is actually good, showing that "good" cannot be analytically equivalent to N.

The naturalistic fallacy charge has been contested from several directions. Cornell realists argue that moral properties are genuinely natural properties but are not reducible to simple naturalistic descriptions — "good" picks out a real, mind-independent cluster property that science can progressively identify, even if no simple natural predicate captures it. Expressivists take the opposite route: rather than naturalising ethics, they deny that moral predicates describe any properties at all. On their view, Hume's gap is not a logical problem about derivation but a mark of moral language's essentially non-descriptive, action-guiding function. The is-ought problem therefore does not show that moral reasoning is invalid — it shows that moral reasoning operates in a different register from factual reasoning, coordinating attitudes rather than tracking truths.

A third response — neo-Aristotelian naturalism — challenges the dichotomy itself. Philippa Foot and Rosalind Hursthouse argue that facts about what it is for a living thing to flourish are normative facts built into the biological description itself. A wolf that cannot hunt is defective in a sense that is simultaneously biological and evaluative. If this is right, the fact-value gap that Hume identified is already present within natural science when applied to living things — and bridging it is a matter of correct biological description, not illicit inference. Critics of neo-Aristotelianism counter that the evaluative concepts embedded in biological descriptions are themselves conventional impositions on a value-neutral nature: calling a wolf "defective" reflects human purposes and categories, not an objective fact about the wolf's own interests independent of how we choose to describe it.

Questions · Passage 03
9
Moore's open-question argument is designed to show that "good" cannot be analytically equivalent to any natural property. Which of the following, if true, would most directly undermine this argument?
CORRECT: D Moore's argument rests on the claim that for any natural property N, a competent speaker who understands both N and "good" can meaningfully ask whether something that has N is actually good — the question is always open. Option D attacks this by proposing that the question is only apparently open due to incomplete understanding: with full conceptual mastery, the connection becomes analytic and the question closes. This is the standard reply to Moore — if D were true, Moore's intuition pump would fail because the residual openness reflects ignorance, not semantic independence. A shows ordinary speakers treat them as equivalent in practice — this is psychological data, not the semantic claim the argument requires. B shows the argument also applies to non-naturalism — this would actually strengthen scepticism about all definitions, not undermine the naturalistic fallacy charge specifically. C invokes correlation, not analyticity — tight correlation doesn't establish conceptual equivalence.
10
The expressivist response to Hume's is-ought gap argues that the gap is not a logical derivation problem but a mark of moral language's non-descriptive function. Which of the following most precisely identifies what the expressivist thereby gives up?
CORRECT: C The most technically precise problem for expressivism is the Frege-Geach problem (also called the embedding problem): standard logical operators like "if... then" require their component sentences to have truth values. "Murder is wrong" embedded in "If murder is wrong, then assisted murder is wrong" cannot be functioning as an attitude expression — it's functioning as a premise in a logical inference. If moral sentences are non-descriptive, this standard logical embedding becomes problematic. C identifies this precisely. A is also a genuine cost — expressivists must abandon or reconstruct moral truth — but is less technically precise than C as a statement of what is given up. B is not quite right: expressivists have developed sophisticated accounts of moral disagreement as clashing attitudes, so disagreement is not simply given up. D conflates sincerity conditions with descriptive content — expressivists can distinguish sincere from insincere attitude expression without asserting descriptions.
11
The critic's response to neo-Aristotelian naturalism claims that evaluative concepts embedded in biological descriptions "reflect human purposes and categories, not an objective fact about the wolf's own interests." This objection most directly targets which feature of Foot's position?
CORRECT: B The critic's specific objection is that "defective" reflects human purposes and categories — it is not value-neutral nature speaking but humans projecting a normative category onto wolves. This targets Foot's claim that the biological description itself contains normative content independently of human imposition. The critic says it's conventional, not objective. B captures this precisely. A makes an obligation argument — the critic's point is earlier and more fundamental: not about obligations to wolves but about whether the evaluative content is objective. C restates the is-ought gap as applied to neo-Aristotelianism — a related but distinct objection that doesn't track the specific language of "human purposes and categories." D challenges the teleological metaphysics — possible but not what "human purposes and categories" specifically implies; the objection is about conventionalism of description, not teleological metaphysics.
12
The passage presents three responses to Hume's is-ought gap: Cornell realism, expressivism, and neo-Aristotelian naturalism. Which of the following most accurately describes what all three share?
CORRECT: A All three responses contest the force of Hume's observation as a final verdict on moral reasoning. Cornell realism says moral properties are real natural properties, so is-to-ought derivations are possible via correct identification of cluster properties. Expressivism says the gap isn't a derivation failure but a feature of non-descriptive language — moral reasoning is valid, just different in kind. Neo-Aristotelianism says nature already contains evaluative facts, so the gap is less sharp than Hume thought. All three resist the sceptical conclusion while diagnosing Hume differently. A is the most careful formulation of this: the shared project is resistance to scepticism about moral reasoning's validity or independence, pursued through different diagnoses of what the gap shows. B excludes expressivism — expressivists explicitly deny that moral sentences describe properties. D also excludes expressivism — expressivists are not moral realists.
Passage 3 Score
/4

P 04
Free Will, Compatibilism & Frankfurt Cases
Passage Timer
10:00
Read the Passage

The debate between compatibilists and incompatibilists over free will turns on whether determinism — the thesis that every event is necessitated by prior causes and natural laws — is compatible with moral responsibility. Hard incompatibilists argue that if determinism is true, no agent could have done otherwise than they did; and if an agent could not have done otherwise, they are not morally responsible for what they did. This argument relies on the Principle of Alternative Possibilities (PAP): moral responsibility requires the ability to have done otherwise. The argument's apparent force rests on the intuition that praise and blame are appropriate only when an agent could have chosen differently — that holding someone responsible for an action they could not have avoided is a form of moral luck that undermines the reactive attitudes of resentment and gratitude.

Harry Frankfurt's famous counterexamples challenged PAP directly. Suppose a neuroscientist has implanted a device in an agent's brain that monitors her deliberations. If she is about to decide against the neuroscientist's preferred outcome, the device fires and ensures she decides in his preferred way instead. But suppose she decides on her own, without the device firing. She acts as the neuroscientist wanted, but entirely through her own deliberation. Intuitively, she seems responsible — yet the counterfactual intervener ensures there were no alternative possibilities. If Frankfurt is right, PAP is false, and the incompatibilist's most powerful argument loses its premise.

Compatibilists press the insight further: what matters for responsibility is not the metaphysical availability of alternatives but the quality of the agent's will — whether she acts from her own reasons, values, and reflective endorsements rather than from compulsion, manipulation, or addiction. Hierarchical compatibilists like Frankfurt himself argue that an agent acts freely when her first-order desires (desires to do X) are endorsed by her second-order volitions (desires about which first-order desires to act from). Manipulation cases, however, challenge this account: an agent whose entire evaluative hierarchy has been installed by an external manipulator — a cult indoctrinator who shaped every desire from childhood — may satisfy the hierarchical conditions yet seem clearly unfree, because the hierarchy itself is heteronomous in origin even if internally coherent.

Questions · Passage 04
13
Frankfurt's counterexample to PAP is designed to show that an agent can be responsible even without alternative possibilities. The most effective incompatibilist response to Frankfurt cases is the "flicker of freedom" reply. Which of the following best captures that reply?
CORRECT: C The "flicker of freedom" reply is the standard incompatibilist response: even in Frankfurt cases, there is always some residual alternative — a prior moment at which the agent could have begun to decide differently, before the device's monitoring could detect the deviation and fire. At that prior moment, an alternative possibility exists, and PAP holds after all. Frankfurt's case eliminates alternatives at the point of action but leaves a flicker at an earlier deliberative stage. C precisely captures this reply. A makes an authenticity argument — a compatibilist response that accepts the case's structure rather than challenging its elimination of alternatives. B argues causal impossibility — this disputes the case's internal coherence but is not the flicker reply. D concedes the incompatibilist conclusion that responsibility is undermined — the opposite of the flicker reply's purpose.
14
The manipulation case against hierarchical compatibilism shows that an internally coherent evaluative hierarchy can be "heteronomous in origin." For this to constitute a genuine objection to hierarchical compatibilism, which assumption must be operative?
CORRECT: B The manipulation case's force as an objection to hierarchical compatibilism depends on the assumption that freedom requires more than internal coherence — it requires that the hierarchy's formation was itself free or at least not freedom-undermining. If only structural coherence mattered, the manipulation case would be no objection (the hierarchy is coherent). B identifies this assumption precisely: hierarchical coherence is necessary but not sufficient; the genesis of the hierarchy also matters. A says the manipulator acted wrongly — but the objection's force doesn't depend on the manipulator's moral status; it depends on the causal history of the agent's desires. C makes a claim about Frankfurt's intentions — irrelevant to the logical structure of the objection. D reinstates PAP — a different strategy that would abandon compatibilism, not reconstruct it by adding a historical condition.
15
The passage states that the incompatibilist argument "rests on the intuition that praise and blame are appropriate only when an agent could have chosen differently." The passage also implies that Frankfurt cases challenge PAP. Which of the following conclusions follows most directly from both claims together?
CORRECT: D The two claims together establish: (1) the incompatibilist argument requires PAP; (2) Frankfurt cases challenge PAP. The most carefully qualified conclusion is D: PAP's defeat undermines the argument's premise, but this does not settle the debate — incompatibilists may defend PAP (flicker reply) or reconstruct the argument without PAP. D is appropriately modest about what follows. A concludes PAP is false — this goes further than "Frankfurt cases challenge PAP"; establishing that they succeed requires meeting the flicker reply, which the passage notes but doesn't resolve. B correctly identifies the incompatibilist's options — a strong answer but "cannot proceed with PAP as currently stated" is slightly stronger than what the passage establishes, since the flicker reply may vindicate PAP as currently stated. C leaps to compatibilism being correct — even if PAP falls, incompatibilism might be reconstructed on other grounds, so this overreaches.
16
Frankfurt's hierarchical compatibilism holds that free action requires first-order desires endorsed by second-order volitions. The manipulation case shows this is insufficient. A critic then argues: "Since socialisation also installs desires without the agent's prior consent, hierarchical compatibilism would make all socialised agents unfree — which is absurd." What is the strongest response to this critic?
CORRECT: A The strongest response distinguishes manipulation from socialisation at the process level — socialisation operates through the agent's developing reflective capacities (they can push back, question, reject elements); manipulation bypasses those capacities. This gives a principled distinction that avoids the reductio without abandoning the insight that historical genesis matters. A captures this. D also proposes a historical condition but is more abstract — "respected the agent's developing agency" is the criterion without explaining what distinguishes the two cases. A provides the criterion (operating through vs. bypassing reflective capacity) that makes D's historical condition principled. B accepts the uncomfortable implication — a viable philosophical move but weaker as a response because it generates a highly counterintuitive conclusion. C returns to pure structural coherence — this would reinstate the manipulation objection since the manipulated agent also currently endorses their hierarchy.
Passage 4 Score
/4

P 05
Testimony, Knowledge & the Transmission Model
Passage Timer
10:00
Read the Passage

Most of what any individual knows, they know through testimony — the sincere assertions of other speakers. I know the date of the French Revolution, the boiling point of water, and the distance to the nearest star not through personal observation but because others told me, and I trusted them. The epistemology of testimony asks under what conditions testimonial belief constitutes knowledge and what justifies accepting or rejecting others' claims. Reductionists, following Hume, argue that testimonial justification must ultimately reduce to non-testimonial grounds — inductive evidence from the observed reliability of testimony in general. Anti-reductionists counter that requiring such a foundation makes most human knowledge viciously circular: our inductive evidence for testimony's reliability is itself largely testimonially acquired. They argue instead for a default entitlement to accept testimony absent specific defeaters — analogous to the default entitlement to trust perception.

The transmission model of testimony holds that a hearer acquires knowledge from a speaker only when the speaker knows what she asserts. If a speaker sincerely asserts something she justifiably believes but which is false, the hearer may form a justified true belief through some accident — but she does not receive knowledge from the speaker, because the speaker had no knowledge to transmit. The model preserves the intuition that knowledge has a genealogy: it can be passed from knower to knower, but it cannot be generated from non-knowledge through the mere mechanics of assertion and belief. Critics of the transmission model point to cases where hearers seem to acquire knowledge that speakers lack — most pressingly, cases where the hearer's own reasoning from the speaker's assertion generates knowledge independently.

Jennifer Lackey has pressed the strongest such objection. Consider a creationist teacher who asserts "The Cambrian Explosion occurred approximately 540 million years ago" while personally disbelieving it — she asserts it because her professional obligations require her to teach the scientific consensus, though she privately rejects it. A well-positioned student who believes her assertion acquires, it seems, a justified true belief — and one arrived at through testimony. The transmission model says no knowledge was transmitted, since the teacher had none to transmit. But Lackey argues this is wrong: the student does acquire knowledge, suggesting that knowledge can be generated through testimony even when the speaker is not a knower. The counter-response is that the student's knowledge, if she has it, derives from the reliability of the process — the fact that the scientific consensus is well-established and the teacher, whatever her personal beliefs, is faithfully reporting it — rather than from the teacher as an individual testifier.

Questions · Passage 05
17
The anti-reductionist charges the reductionist with "vicious circularity." Which of the following most precisely identifies the structure of the alleged circularity?
CORRECT: C The circularity the passage identifies is specific: the inductive evidence for testimony's reliability is itself testimonially acquired. The reductionist wants non-testimonial grounds for trusting testimony — but our evidence about how reliable testimony has been historically comes largely from testimony itself (history books, scientific reports, others' testimony). The proposed non-testimonial foundation collapses into testimonial dependence. C states this precisely. A misidentifies the circularity as involving inductive reasoning per se — induction isn't testimony; the issue is that the evidence gathered through induction is testimonially sourced. B introduces the philosopher's own communication — a different and less fundamental circularity that conflates performing an argument with justifying the argument's conclusion. D identifies a different problem with reductionism (it demands too much from children) — a regress problem, not the circularity the passage describes.
18
The transmission model holds that knowledge cannot be generated from non-knowledge through assertion and belief. Lackey's creationist teacher case challenges this by suggesting knowledge can be generated through testimony even without a knowledgeable speaker. The counter-response is that the student's knowledge derives from the process's reliability rather than from the individual speaker. Which of the following most accurately identifies what is at stake between Lackey's argument and the counter-response?
CORRECT: B The core dispute is about what the source of testimonially acquired knowledge must be: for the transmission model, it must be an individual knower — knowledge flows person to person. Lackey's case suggests that when the individual is not a knower, knowledge can still be generated if the testimony is reliable in virtue of something other than the speaker's knowledge (here, the scientific consensus). The counter-response accepts this but reframes it: the source of knowledge is the reliable process, not the teacher. The question is whether processes or institutions can substitute for individual speaker knowledge as the knowledge source. B captures this. A introduces speaker obligation — relevant to testimony ethics but not the epistemological dispute about knowledge sourcing. C disputes how the teacher's assertion is characterised — but the passage explicitly says she asserts from professional obligation while faithfully reporting the consensus, and this characterisation is not what the counter-response contests. D introduces Gettier-style analysis — neither party is denying the student has justified true belief; the dispute is whether it constitutes knowledge and why.
19
If the counter-response to Lackey is correct — that the student's knowledge derives from the reliability of the process rather than the teacher as individual — which of the following would be most seriously challenged?
CORRECT: A The counter-response grants that the student acquires knowledge but locates its source in process reliability rather than individual speaker knowledge. If knowledge can derive from reliable processes without flowing from a knowing speaker, the transmission model's core claim — that knowledge has a genealogy of knowers, passing knower to knower — is undermined. The model said knowledge cannot be generated from non-knowledge through assertion; the counter-response shows it can be generated by reliable institutional processes that bypass individual speaker knowledge. This challenges the model's fundamental picture, even though the counter-response was offered in defence of the model against Lackey. A captures this irony precisely. B challenges anti-reductionism — but the counter-response is not about default entitlement; it's about process reliability as a knowledge source, which is compatible with default entitlement at the individual interaction level. C says Lackey's conclusion is intact — true and important, but the question asks what is challenged, not what survives. D says reductionism is satisfied — process reliability could satisfy reductionist requirements, but this is a secondary consequence, not the most serious challenge.
20
The passage describes testimony as generating a situation where reductionists, anti-reductionists, and transmission model theorists each face serious objections. What does the structure of the passage most strongly suggest about the epistemology of testimony?
CORRECT: D The passage presents each position with genuine force and genuine objection: reductionism faces circularity; anti-reductionism is presupposed but not defended against manipulation of the default; the transmission model faces Lackey's case; the counter-response undermines the transmission model's own core claim. No position is presented as decisive. The passage's structure suggests the field is in productive tension requiring synthesis rather than selection. D is the most structurally accurate reading. A concludes scepticism — the passage doesn't support this; the objections are challenges to specific theories, not to testimonial knowledge itself. B endorses anti-reductionism — the passage notes the circularity as anti-reductionism's primary motivation but doesn't establish its superiority; the default entitlement claim faces its own challenges not discussed. C suggests testimony is anomalous — possible, but the passage frames the debate within existing epistemological frameworks (justification, knowledge, reliability) without suggesting those frameworks are inadequate for testimony.
Passage 5 Score
/4
Philosophy · Total Score
/20
Category 02
Psychology
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Learned Helplessness, Attributional Style & Depressive Realism
Read the Passage

Seligman's original learned helplessness model proposed that repeated exposure to non-contingency — the perception that outcomes are independent of one's responses — produces motivational, cognitive, and emotional deficits homologous to clinical depression. The model's theoretical ambition was to reframe depression not as a disorder of mood but as a disorder of perceived control: the organism stops initiating because it has learned, or falsely infers, that its responses are causally inert. The translational logic was powerful, but the model could not explain two critical features of human depressive helplessness: why some individuals generalise the helplessness expectation from the domain where non-contingency was experienced to all other domains, while others do not; and why helplessness often persists long after the objective non-contingency is removed.

Abramson, Seligman, and Teasdale's reformulation resolved these problems by introducing attributional style as a mediating variable. The reformulated model specifies three dimensions along which attributions for bad outcomes vary: internality (self versus external cause), stability (permanent versus transient cause), and globality (pervasive versus domain-specific cause). Chronic, generalised depressive helplessness is predicted specifically by an internal-stable-global attribution pattern for negative events — the conviction that failure stems from something fixed and pervasive about oneself. The external-unstable-specific pattern, by contrast, predicts resilience: the bad outcome is attributed to a situational, temporary, localised cause that carries no implication for future performance across other domains.

Alloy and Abramson's "depressive realism" findings then introduced a theoretically destabilising complication. In laboratory contingency-judgment tasks, mildly depressed subjects proved more accurate than non-depressed controls in assessing their actual degree of control over outcomes. Non-depressed subjects systematically overestimated their control — exhibiting a robust "illusion of control" — while depressed subjects' judgments approximated actual contingency levels. This result implies that what the reformulated model labels a "negative attributional style" may in some respects reflect more accurate perceptual calibration than distortion. The uncomfortable corollary: the non-depressed norm — positive illusory bias — may itself be a motivated cognitive distortion whose function is psychological stability rather than veridical world-modelling.

Questions · Passage 01
1
The reformulated helplessness model predicts that individuals who attribute negative events to internal, stable, global causes are most vulnerable to chronic, generalised depression. Which of the following, if true, most seriously weakens this prediction?
CORRECT: B The model predicts internal-stable-global attribution → higher depression. Option B shows a context where this attribution style is universal and normative, yet depression rates are not elevated — directly severing the predicted causal link. A is tempting but introduces life satisfaction, not depression, and controls for objective outcomes rather than testing the attribution-depression link. C is the depressive realism finding, which complicates the accuracy of the style rather than its link to depression — depression could still be caused by the attribution style even if that style is accurate. D describes a different attribution pattern (asymmetric by valence) — it does not test whether internal-stable-global for negatives predicts depression.
2
Which of the following can be most reliably inferred from the passage's account of the depressive realism findings?
CORRECT: C The passage explicitly raises the "uncomfortable corollary": the non-depressed positive bias functions as psychological stability rather than accurate world-modelling. This directly implies a tension between psychological wellbeing and epistemic accuracy. A over-generalises from one task to "all domains" — the passage is specific to contingency-judgment. B is too strong — the depressive realism finding complicates the model but doesn't refute it; the model could still hold if the depressive attributional style, though accurate, is maladaptive in other ways. D sounds logical but goes beyond the passage — the passage never claims therapy would reduce accuracy, only that the positive bias has a functional role.
3
The passage describes the depressive realism findings as "theoretically destabilising." The author uses this phrase primarily to signal that:
CORRECT: B "Destabilising" is stronger than "complicating." The passage presents depressive realism as challenging the foundational assumption that depressive cognition is distorted — which is the premise on which the reformulated model's therapeutic logic rests. A overclaims refutation — "theoretically destabilising" does not mean "refuted." C makes a methodological dismissal the author never argues. D attempts a surgical distinction between the original and reformulated models, but the reformulated model also assumes depressive attribution is maladaptively negative, so the challenge extends to it too.
4
The depressive realism finding presents the following paradox: the cognitive style associated with depression is both a symptom of a disorder and a marker of superior perceptual accuracy. Which of the following, if true, would most effectively resolve this paradox without abandoning either side of it?
CORRECT: B This resolves the paradox by distinguishing two components: the accuracy (which is real, in the present-contingency domain) and the disorder-producing mechanism (over-generalisation to future, unrelated domains). The depressive person is right about what they can control now, but wrongly extends that to everything — which is where the disorder lies. Both sides of the paradox are preserved. A dissolves the accuracy side by dismissing the lab finding — it abandons one horn of the paradox rather than resolving it. C restricts scope cross-culturally, which eliminates rather than resolves the paradox. D abandons the "it's a disorder" side — again, one horn dropped.
Passage 1 Score
/4

P 02
Dual-Process Theory, Cognitive Miserliness & the Bias-Correction Problem
Read the Passage

Dual-process theories of cognition divide mental processes into two families: Type 1 processes that are fast, automatic, associative, and effortless; and Type 2 processes that are slow, deliberate, rule-governed, and cognitively costly. The framework's influence on behavioural economics and public policy rests on its apparent explanation of systematic bias: Type 1 generates heuristic outputs that are often accurate but fail predictably in domains requiring statistical reasoning, counterfactual thinking, or suppression of vivid but irrelevant information. Type 2 can, in principle, detect and override these errors — but is selectively deployed because deliberation is costly, and humans are, as Stanovich argues, cognitive misers who default to Type 1 unless sufficiently motivated or prompted.

Kahneman's System 1/System 2 popularisation attracted criticism on both empirical and conceptual grounds. Empirically, the two-system architecture predicts that engaging deliberate cognition should reduce bias — but experimental evidence is inconsistent: high cognitive load sometimes increases bias, sometimes reduces it, and sometimes has no effect, varying with the specific bias and the nature of the cognitive demand imposed. Conceptually, Evans and Stanovich responded to critics by shifting from two systems (implying separate neural modules) to two types of processes implementable across neural substrates — a revision that critics argue renders the framework unfalsifiable: if any process can be classified post hoc as Type 1 or Type 2 based on its observed properties, the taxonomy becomes a redescription of phenomena rather than an independently testable causal account.

The bias-correction problem cuts deeper still. Stanovich's concept of "dysrationalia" — the observation that high general intelligence does not protect against systematic reasoning errors — reveals that Type 2 processes are not a neutral corrective mechanism. They are deployed in the service of goals, and those goals can include motivated reasoning: the construction of post hoc rationalisations for conclusions already reached by Type 1 processing. The myside bias — the tendency to evaluate arguments according to their conclusions rather than their logical structure — is paradoxically stronger in more analytically sophisticated individuals, because greater cognitive capacity provides more resources for generating compelling rationalisations. The corrective function of Type 2 therefore depends entirely on the reasoner's epistemic goals and intellectual dispositions — a dimension the standard dual-process framework conspicuously fails to theorise.

Questions · Passage 02
5
Critics argue that the revised dual-process framework — which classifies processes as Type 1 or Type 2 based on observed properties — is unfalsifiable. Which of the following, if true, most strengthens this criticism?
CORRECT: A The unfalsifiability charge is that classification as Type 1 or Type 2 is post hoc and circular. Option A shows this directly: the same task gets classified differently across contexts with no independent criterion — meaning the label tracks the observed property (fast/slow, automatic/deliberate in this instance) rather than predicting it. This is exactly the redescription problem. B is about neural overlap, which attacks the substrate criterion but doesn't establish post hoc circular classification — it's an empirical finding about neural activity, not about classification procedure. C is the empirical inconsistency finding already mentioned in the passage — it shows the architecture's predictions are wrong, but that's a falsification, not unfalsifiability. D is the dysrationalia point, which challenges the corrective-function claim, not the classification procedure.
6
Which of the following can be most reliably inferred from the passage's account of the myside bias and dysrationalia?
CORRECT: B The passage ends by stating: "the corrective function of Type 2 therefore depends entirely on the reasoner's epistemic goals and intellectual dispositions — a dimension the standard dual-process framework conspicuously fails to theorise." This directly entails B. A sounds plausible but over-specifies a causal mechanism — the passage notes that capacity enables rationalisation, but doesn't say increasing capacity causes more rationalisation; it depends on goals. C takes a correlation (higher sophistication → stronger myside bias) and converts it into a normative recommendation against education — an unwarranted leap. D concludes the framework should be abandoned; the passage says it "fails to theorise" one dimension, not that it should be discarded.
7
Suppose it were established that individuals who score highest on measures of "actively open-minded thinking" — a disposition to seek out disconfirming evidence and revise beliefs accordingly — show no myside bias regardless of their general analytical intelligence. If true, what would this finding most directly imply about the passage's argument?
CORRECT: B The passage's central unmet need is a theory of epistemic goals and dispositions. The finding provides exactly that: it identifies "actively open-minded thinking" as the disposition that determines whether Type 2 is used correctively. This confirms and extends the passage's argument. A is wrong — dysrationalia says intelligence alone doesn't protect; the finding shows disposition + intelligence works, which doesn't refute dysrationalia, it supplements it. C is wrong — the finding is not about engaging deliberate cognition in general; it's about a specific disposition, which is the passage's point that disposition matters, not processing type. D conflates disposition with classification criterion — the finding says nothing about how to classify processes as Type 1 vs Type 2.
8
The passage presents three distinct problems with dual-process theory: empirical inconsistency, the unfalsifiability charge, and the bias-correction problem. How does the passage rank these in terms of theoretical severity?
CORRECT: C The passage signals this explicitly: "The bias-correction problem cuts deeper still" — "still" indicating it goes beyond the empirical and conceptual problems already raised. The depth is a structural theoretical gap, not just an empirical failure or procedural issue. A is wrong — empirical inconsistency is mentioned but the passage calls the bias-correction problem deeper. B is the unfalsifiability charge, which is important but described as a conceptual problem, not the deepest. D is wrong — the passage does not propose abandoning the framework, and the problems are explicitly ranked ("cuts deeper still").
Passage 2 Score
/4

P 03
Cognitive Dissonance, Effort Justification & the Free-Choice Artifact
Passage Timer
10:00
Read the Passage

Festinger's theory of cognitive dissonance — the mental discomfort arising from holding simultaneously inconsistent cognitions — was originally developed to explain why people change their attitudes after making decisions rather than before. In the classic free-choice paradigm, subjects asked to choose between two similarly rated options subsequently rate the chosen option more positively and the rejected option more negatively than before the choice. Festinger argued this post-decision attitude change is motivated: subjects experience dissonance because cognitions about the unchosen option's merits conflict with the cognition that they chose the other option, and they reduce dissonance by revising their evaluations — a phenomenon called the spreading of alternatives.

The motivated-cognition account has been contested by research on a measurement artifact. Chen and Risen demonstrated that the spreading effect can arise without any dissonance reduction. If subjects think "I must have liked the chosen option better since I picked it," they will update their stated ratings on that inference alone, without any genuine attitude change. In blind-choice paradigms — where subjects believe their choices were random — the spreading effect was substantially reduced, suggesting the classic result was partly artifactual: subjects were inferring rather than changing preferences. Crucially, this critique does not claim the spreading effect is entirely artifactual; it claims the standard paradigm cannot distinguish genuine attitude change from preference inference.

Effort justification presents a different and more methodologically robust case. Subjects who undergo severe initiation procedures to join a group subsequently evaluate the group more positively than subjects who underwent mild initiation, even when the group experience is objectively identical. The high effort creates dissonance: "I underwent a painful initiation for this group" conflicts with "this group is mediocre." Reducing dissonance requires upgrading the group's evaluation. Unlike the spreading-of-alternatives effect, effort justification does not involve choosing between comparably valued alternatives — so subjects cannot infer their preferences from their choices. The artifact critique therefore does not apply, and the dissonance account survives as the most parsimonious explanation of the pattern.

Questions · Passage 03
9
Chen and Risen's artifact critique claims the spreading-of-alternatives effect "cannot distinguish genuine attitude change from preference inference." Which experimental design would most directly test whether genuine attitude change is occurring?
CORRECT: B The artifact critique holds that conscious preference inference drives the explicit rating change. Implicit measures — which tap into automatically activated associations rather than deliberate self-report — are not susceptible to the inference process in the same way. If genuine attitude change is occurring, it should show up in implicit measures as well as explicit ones; if only preference inference is occurring, it may show on explicit ratings but not implicit associations. B is the most direct test. A improves statistical sensitivity but doesn't separate the two mechanisms. C tests cultural universality — relevant to generalisability but not to the genuine-change vs. inference question. D uses extreme value differences — this would eliminate preference inference in the specific design but doesn't test whether ordinary spreading effects involve genuine change; it changes the paradigm rather than diagnosing the mechanism.
10
The passage argues that effort justification is "more methodologically robust" than the spreading-of-alternatives paradigm as evidence for dissonance theory. Which of the following, if true about effort justification research, would most seriously challenge this claim?
CORRECT: D The passage's claim is that effort justification is methodologically robust as evidence for dissonance theory specifically. D challenges this by pointing to a rival explanation — self-perception theory — that predicts the same pattern without positing aversive arousal or dissonance. If both theories predict the same result and the arousal-manipulation studies are inconclusive, effort justification doesn't specifically support dissonance theory; it's equally consistent with self-perception. This is the most direct challenge to the "more robust evidence for dissonance" claim. A confirms the dissonance manipulation worked — this supports the dissonance account. B shows free choice matters — consistent with dissonance theory. C is interesting but addresses whether a specific cognition is necessary, not the rival-theory problem — and even if the cognition isn't necessary, dissonance theory might explain it through inconsistency between effort and outcome without the specific belief about group membership.
11
The passage states that the artifact critique "does not claim the spreading effect is entirely artifactual." Why is this qualification important to the critique's standing as a scientific contribution?
CORRECT: C The qualification matters not because of its diplomatic value (B, D) but because of its methodological implications. The critique's core point is that the standard paradigm cannot separate genuine attitude change from preference inference — even if genuine change occurs alongside inference, the paradigm cannot isolate it. A partially contaminated measure is still an inadequate measure for the specific hypothesis being tested. The contribution is demonstrating the indeterminacy of the paradigm, not the size of the artifact. C captures this: the scientific significance is the measurement inadequacy, which holds regardless of the genuine effect's magnitude. A correctly identifies the theory-preservation consequence but misses the measurement-focused nature of the contribution. B addresses strategic defensibility, not scientific significance. D addresses conservatism as an epistemic virtue, not the core methodological point.
12
Festinger's original account positioned dissonance reduction as a "motivated" process. The preference-inference account positions the spreading effect as an "inferential" process. Which of the following best describes the psychological distinction between these two processes?
CORRECT: A The key distinction in Festinger's original account is the motivational, affectively-driven nature of dissonance reduction — an aversive internal state drives attitude change as a tension-reduction mechanism. Preference inference, by contrast, is an epistemic process: subjects treat their own choices as evidence about their preferences and update accordingly, without any aversive arousal. A captures this affective vs. epistemic distinction precisely. B characterises the distinction as conscious vs. unconscious — not supported by the passage or by dissonance theory, which doesn't specify automaticity. C proposes durability as the distinguishing criterion — a useful testable prediction but not the theoretical distinction itself. D says they're functionally identical and only neuroimaging distinguishes them — too strong and contradicts the theoretical distinction the passage draws.
Passage 3 Score
/4

P 04
Attachment Theory, Strange Situation & Its Critics
Passage Timer
10:00
Read the Passage

Bowlby's attachment theory proposed that infants are biologically prepared to form strong affective bonds with a primary caregiver — the attachment figure — and that the quality of this early bond creates internal working models of self and others that persist into adult life. A secure attachment, characterised by the caregiver's sensitive and consistent responsiveness, produces internal models of self as worthy and others as reliable. Insecure attachments — anxious-ambivalent, avoidant, and the later-added disorganised — arise from caregiving that is unpredictable, rejecting, or frightening, and generate corresponding models that shape relational expectations, emotion regulation, and social behaviour across the lifespan.

Ainsworth's Strange Situation procedure — brief separations from and reunions with the caregiver — provided the empirical cornerstone. Secure infants sought comfort on reunion and were quickly reassured; anxious-ambivalent infants were distressed before separation and inconsolable after; avoidant infants showed apparent indifference. Longitudinal research has established moderate predictive validity: secure attachment in infancy correlates with better peer relationships, emotional regulation, and cognitive functioning in later childhood, though effect sizes are modest and substantially mediated by continuity of caregiving quality rather than early attachment per se.

Three families of criticism have emerged. Behavioural geneticists argue that shared genetic factors — affecting both parental caregiving style and child temperament — account for much of the attachment-outcome correlation, challenging the causal direction. Cross-cultural research has found that behaviours classified as avoidant in Western samples are normative and not predictive of negative outcomes in cultures with multiple caregiving arrangements — challenging the universality of the classification system. And longitudinal reviewers note that predictive validity from infancy to adult attachment style is substantially attenuated when controlling for intervening life events, suggesting that early attachment is one developmental influence among many rather than a deterministic template for later relationships.

Questions · Passage 04
13
The longitudinal finding that effect sizes in attachment research are "substantially mediated by continuity of caregiving quality rather than early attachment per se" most directly challenges which specific claim of attachment theory?
CORRECT: B The core of Bowlby's theory is that early attachment creates internal working models — mental representations of self and relationship — that persist and shape later functioning. If outcomes are primarily explained by continuity of caregiving (the ongoing environment) rather than early attachment, this suggests the early internal working model is not the causally operative mechanism. Current caregiving explains current outcomes more than early attachment does. B identifies this challenge precisely. A makes a heritability claim — mediation by caregiving doesn't speak to heritability; both genetic and environmental accounts could explain mediation by ongoing caregiving. C challenges measurement validity — a possible implication but a secondary one; the primary target is the theoretical claim about internal working models, not the measurement tool. D invokes nature-nurture — the finding is about causal mechanism (early vs. ongoing), not about biological vs. environmental causation generally.
14
Cross-cultural research finding that avoidant behaviour is normative and not predictive of negative outcomes in cultures with multiple caregiving arrangements challenges attachment theory's classification system. Which response would best defend Bowlby's framework against this challenge?
CORRECT: C The most theoretically sophisticated defence distinguishes between the universal psychological construct (security of attachment, internal working models) and its culturally variable behavioural expression. In a multiple-caregiver culture, brief indifference at reunion may reflect a securely attached infant who is not distressed by temporary absence because attachment needs are distributed across caregivers — not an avoidant suppression. If the Strange Situation's behavioural codings were calibrated to Western dyadic caregiving ecology, the classifications may misread culturally appropriate behaviour. C preserves the theoretical core (internal working models) while acknowledging cultural relativity of behavioural indicators. B is ethnocentric and empirically unsupported — labelling multiple-caregiver cultures as developmental departures is itself a contestable claim. A is a methodological quibble that doesn't address the theoretical challenge. D abandons universality rather than defending it.
15
The behavioural genetic critique argues that shared genetic factors explain the attachment-outcome correlation. For this argument to succeed as a refutation of Bowlby's causal claims, which additional condition must hold?
CORRECT: D The genetic critique's structure is: shared genes → both caregiving style and child outcomes, creating a spurious correlation between caregiving and outcomes. For this to refute Bowlby's causal claim that caregiving quality → attachment → outcomes, the genetic account must explain not just the outcome correlation but also why caregiving quality correlates with outcomes — specifically, that caregiving quality is itself a genetic expression rather than an independent variable. If caregiving quality has genuine environmental variance (parents can choose to be more responsive independent of their genotype), then even granting genetic influence, Bowlby's causal pathway from caregiving through attachment to outcomes remains viable. D identifies this precisely. B says genetic factors must account for all variance — too demanding; even partial genetic explanation doesn't refute environmental effects. A concerns the pathway of genetic influence — interesting but not the condition needed for the refutation. C describes a twin study design that would actually support the environmental account by showing shared environment matters.
16
The passage presents attachment theory as facing three families of criticism. Which of the following most accurately characterises the relationship between these three critiques?
CORRECT: C The three critiques target distinct aspects: the genetic critique targets causal mechanism (is caregiving's effect real or confounded?); the cross-cultural critique targets the universality of the classification system (do the behavioural codings mean the same thing across cultures?); the longitudinal critique targets developmental determinism (does early attachment specifically predict later outcomes, or does ongoing caregiving do the explanatory work?). These are logically independent — a theory could survive one while falling to another. C captures this independence precisely. A says they are mutually reinforcing and constitute cumulative refutation — too strong; each targets a different level, and the passage doesn't claim they together refute the theory. B says they are mutually exclusive — false; the genetic confound and cross-cultural measurement critiques are entirely compatible. D distinguishes empirical from conceptual challenges — the genetic critique is equally empirical; it claims specific genetic pathways explain the correlations.
Passage 4 Score
/4

P 05
Embodied Cognition, Extended Mind & the Boundaries of the Mental
Passage Timer
10:00
Read the Passage

Classical cognitive science modelled the mind as a computational system — a processor of abstract symbolic representations housed entirely within the skull. Embodied cognition challenges this picture by arguing that cognitive processes are constitutively shaped by the body's sensorimotor capacities and their interaction with the environment. On this view, thinking is not the manipulation of body-independent representations but an activity that intrinsically involves bodily states, gestures, and perception. Evidence for embodied cognition includes findings that abstract concepts are processed using sensorimotor systems — comprehending "grasp" activates motor areas, comprehending "bitter" activates gustatory cortex — and that bodily interventions alter cognitive performance in conceptually relevant ways.

The extended mind thesis, associated with Andy Clark and David Chalmers, takes the challenge further. Their parity principle holds that if a process would be classified as cognitive were it performed inside the head, it should be classified as cognitive when performed outside the head — provided it plays the same functional role. Their central case is Otto, who has Alzheimer's disease and uses a notebook to store information he can no longer retain biologically. Otto's notebook functions as his memory — consulted automatically, trusted without verification, updated continuously. Clark and Chalmers conclude that Otto's notebook is genuinely part of his cognitive system, not merely a tool he uses. The parity principle is designed to block the intuition that location (inside or outside the skull) is relevant to whether a process is cognitive.

Critics object that the parity principle "proves too much" — by the same logic, road signs, calculators, and libraries would be parts of individual cognitive systems, collapsing the distinction between person and environment in ways that undermine both the theoretical utility and the intuitive validity of the cognitive concept. The targeted response is that Otto's relationship to his notebook satisfies specific conditions — immediate availability, unconditional trust, automatic endorsement — that ordinary tools do not. Critics of this response argue that the conditions are stipulated rather than theoretically derived, making the distinction between cognitive extension and tool use unprincipled. A deeper challenge comes from those who question whether cognitive extension is the right description even in the Otto case: perhaps what is extended is not the mind but the person's epistemic resources — there is a difference between Otto's notebook being part of his mind and its being part of his epistemic situation.

Questions · Passage 05
17
The "proves too much" objection to the extended mind thesis argues that the parity principle, consistently applied, would classify road signs and libraries as cognitive components. Which feature of Clark and Chalmers' response most directly addresses this objection?
CORRECT: B Clark and Chalmers' response to the "proves too much" objection is precisely to add conditions that distinguish Otto's case from ordinary tool use: the notebook is immediately available (always on his person), trusted unconditionally (he doesn't verify its entries as he would an external source), and automatically endorsed (treated like memory outputs). Road signs, calculators, and libraries typically don't satisfy all these conditions — especially unconditional trust and automatic endorsement. B states this response correctly. A introduces a design-purpose criterion — not Clark and Chalmers' criterion, and it doesn't appear in the passage. C constructs a tu quoque — interesting but not the response the passage describes. D restricts the principle to cognitively impaired individuals — also not in the passage, and it would undermine the theoretical ambition of the parity principle.
18
The passage describes the distinction between "Otto's notebook being part of his mind" and "its being part of his epistemic situation." What is the philosophical significance of this distinction?
CORRECT: D The distinction is philosophically significant because it represents a challenge that operates even after the "proves too much" objection is answered. Even if Clark and Chalmers' conditions successfully distinguish Otto's notebook from ordinary tools, those conditions might characterise an extended epistemic situation rather than an extended mind. The notebook extends what Otto has epistemic access to — his epistemic situation — without thereby being part of his mental states. D captures this as a deeper-level challenge. A says the thesis is empirically false — too strong; the distinction is a conceptual challenge, not an empirical refutation. B says the principle is circular — a different critique not raised by the distinction. C says the conditions characterise epistemic resources rather than cognitive processes — this is the substance of the challenge, but D states the significance more completely by framing it as a challenge that survives the conditions-based response.
19
Embodied cognition research showing that comprehending "grasp" activates motor areas is offered as evidence that cognition is constitutively shaped by sensorimotor systems. A critic argues: "This merely shows that motor areas are recruited during language processing — it does not show that motor activation is constitutive of the comprehension rather than merely correlating with it." Which response would most effectively address this critique?
CORRECT: C The critic's objection is precisely the correlation-causation problem: mere activation during comprehension might be epiphenomenal or modulatory rather than constitutive. The most direct response is causal-disruption evidence: if disrupting motor areas (via TMS) impairs comprehension, motor activation is causally necessary — not merely correlational. C provides this response. A argues that robust correlation entails constitutive involvement — a non sequitur; correlation, however robust, doesn't establish a constitutive relationship without causal evidence. B appeals to an evolutionary argument — suggestive, but it doesn't address the specific correlation-causation objection for the current cognitive system. D challenges the distinction itself — a legitimate philosophical move but too general; if the critic's distinction is valid, attacking its clarity doesn't show motor activation is constitutive.
20
The passage presents embodied cognition and the extended mind thesis as related but distinct challenges to classical cognitive science. Which of the following most accurately captures the logical relationship between them?
CORRECT: B Embodied cognition holds that the body's sensorimotor systems are constitutively involved in cognition — this challenges the view that cognition is amodal symbol processing but does not require that cognition extend beyond the skin-skull boundary. A researcher could accept fully embodied cognition while holding that the relevant embodied processes end at the skin — that notebooks, however useful, are external tools rather than cognitive components. B captures this independence. A says embodied cognition entails extended mind — a non sequitur; bodily embeddedness doesn't logically require extension into artefacts. C says extended mind entails embodied cognition — also a non sequitur; accepting that notebooks extend the mind doesn't require accepting that sensorimotor systems are constitutive of cognition rather than merely correlated with it. D says they are mutually exclusive — false; the body-environment interaction loop of embodied cognition is compatible with artefacts forming part of that loop.
Passage 5 Score
/4
Psychology · Total Score
/8
Category 03
History
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
The French Revolution: Social Interpretation, Revisionism & the Problem of Historical Causation
Passage Timer
10:00
Read the Passage

The Marxist-influenced "social interpretation" of the French Revolution — associated with Lefebvre, Soboul, and Rudé — dominated Anglophone historiography for much of the twentieth century. Its core thesis held that the Revolution expressed the rise of a bourgeoisie displacing a declining feudal nobility, driven by contradictions between an emerging capitalist mode of production and an aristocratic social order blocking its full expression. The Jacobin Terror, on this account, was a radicalisation driven by class conflict: the sans-culottes pressed the bourgeois leadership toward redistribution more extreme than the latter's objective interests required.

Revisionist historians, beginning with Cobban's devastating 1955 critique, dismantled this framework empirically. The "bourgeoisie" leading the Revolution turned out on examination to be composed predominantly of lawyers, officeholders, and professionals — not merchants or manufacturers. The nobility was not economically declining; a significant fraction of the pre-revolutionary capitalist and commercial class was itself noble. The rigid class boundaries the social interpretation presupposed were empirically porous. Furet then added an epistemological charge to the empirical one: the social interpretation had confused the Revolution's own self-presentation — its Jacobin ideological rhetoric — with historical causation. The Revolution did not merely express a pre-existing class conflict; it actively produced the class categories through which subsequent generations interpreted and narrated it.

Post-revisionism has since attempted to salvage a modified structural account by redirecting attention toward fiscal-military crisis, the collapse of royal creditworthiness, and the fragmentation of elite consensus — factors that are structural without being Marxist. This arc illustrates a methodological principle of broad applicability: the demolition of a dominant interpretive framework rarely vindicates pure contingency or accident; it typically installs a rival structure operating at a different analytical level. What changes is not the commitment to structural explanation but the specific structural variables deemed primary.

Questions · Passage 01
1
Furet's epistemological charge against the social interpretation argues that the Revolution did not merely reflect pre-existing class conflict but actively produced the class categories through which it was later narrated. Which of the following, if true, most seriously weakens Furet's argument?
CORRECT: B Furet's argument is that the class categories were produced by revolutionary discourse — i.e., they were not pre-existing social realities but rhetorical constructs. Option B directly undermines this by showing that pre-revolutionary records — compiled before revolutionary discourse existed — already tracked the class distinctions the social interpretation describes. If the distinctions were documentable before the Revolution's rhetoric, they cannot have been produced by that rhetoric. A actually supports Furet: if Jacobin leaders knew their class rhetoric was strategic, this confirms it was constructing rather than reporting reality. C is irrelevant — the spread of class frameworks to other countries says nothing about whether French class categories pre-dated revolutionary discourse. D weakens Cobban's empirical finding but not Furet's distinct epistemological argument about discourse producing categories.
2
From the passage's account of post-revisionism's turn to fiscal-military crisis and elite consensus fragmentation, which of the following can be most reliably inferred?
CORRECT: B Post-revisionism retains structural explanation — it identifies fiscal-military crisis and elite consensus fragmentation as causes — while replacing the Marxist class structure with different structural variables. This accepts the empirical demolition (Cobban) without accepting that structural explanation itself is untenable. A misreads post-revisionism as abandoning structure; the passage explicitly says it "installs a rival structure operating at a different analytical level." C tries to reconcile post-revisionism with the social interpretation by making class the underlying driver of fiscal crisis — this is not what post-revisionism argues and contradicts the passage. D says structural variable choice is arbitrary — the passage implies the opposite: variables are chosen based on empirical adequacy, not arbitrarily.
3
The passage's "methodological principle of broad applicability" — that demolishing a dominant framework rarely vindicates contingency but installs a rival structure — functions in the argument primarily to:
CORRECT: C The passage explicitly calls the principle one of "broad applicability" — the phrase signals it is being generalised beyond this specific case. The function is to elevate the French Revolution episode from a historical curiosity to an illustration of a general pattern in how interpretive frameworks succeed one another. A makes an ideological charge against revisionism that the passage does not make. B implies a bias the author is critiquing — but the passage is descriptive, not prescriptive; it describes the pattern without condemning it. D implies post-revisionism is epistemically equivalent to the social interpretation — the passage makes no such equivalence; it says post-revisionism installed a different structure, not an equally vulnerable one.
4
Cobban's empirical critique — that the revolutionary bourgeoisie was composed of lawyers and officeholders rather than merchants — implicitly assumes which of the following in order to count as a refutation of the social interpretation?
CORRECT: A Cobban's finding only refutes the social interpretation if the interpretation requires the bourgeoisie to be mercantile-capitalist. If lawyers and officeholders could count as "bourgeoisie" in the relevant Marxist sense, then the finding is irrelevant — they may have had the same structural interests. Cobban's critique only bites if the category "bourgeoisie" demands mercantile/manufacturing composition. This is the hidden assumption. B identifies a separate pillar (noble decline) that Cobban's data doesn't address — but this is a limitation of Cobban's critique, not an assumption it requires. C is the opposite — Cobban is attacking the Jacobin rhetoric's accuracy, not assuming it's reliable. D makes a substantive claim about the interests of lawyers that goes beyond Cobban's prosopographic argument.
Passage 1 Score
/4

P 02
Colonial Cartography: Power, Silence & the Limits of the Archive
Passage Timer
10:00
Read the Passage

The colonial map was never a neutral instrument of spatial description; it was an apparatus of territorial jurisdiction. Harley's analysis of the rhetoric of cartography argued that maps encode power in their silences as much as in their content: the erasure of indigenous place-names, the imposition of a Euclidean grid onto landscapes that indigenous spatial knowledge organised along relational, experiential, or cosmological axes — these are not aesthetic choices but epistemic acts of dispossession. The map precedes, authorises, and in some cases constitutes the territorial claim it appears only to record. Cartography on this reading is not a representation of political reality but a mechanism for producing it.

Edney's subsequent critique introduced a corrective that cuts against Harley's sweeping coherence. Not all colonial maps were instruments of domination; treating them as a unified apparatus of power obscures the sharply divergent interests — revenue extraction, military logistics, scientific prestige, commercial prospecting — that motivated their production. The colonial archive was less a coherent panopticon than a disorganised accumulation of competing surveys produced under conditions of chronic underfunding, incomplete information, and institutional rivalry. Maps were frequently inaccurate, internally inconsistent, and contested even among colonial administrators. This does not rehabilitate colonial cartography morally, but it complicates the Foucauldian reading: power's exercise through representation is rarely as efficient, coordinated, or intentional as that framework implies.

A third position — emerging from indigenous studies — challenges both Harley and Edney by arguing that the focus on European cartographic production, whether as coherent power or fragmented archive, marginalises an equally significant fact: indigenous spatial knowledge was never simply erased. It survived, adapted, and in many documented cases actively shaped the colonial maps themselves through the mediation of indigenous guides and interpreters whose geographical knowledge colonial surveyors could not independently replicate. The "blank spaces" on colonial maps often record not an absence of spatial knowledge but a refusal to disclose — a strategic silence that confounds the logic of cartographic dispossession.

Questions · Passage 02
5
The indigenous studies position argues that blank spaces on colonial maps sometimes represent strategic non-disclosure rather than absent knowledge. Which of the following, if true, most strengthens this argument?
CORRECT: A The argument is that blank spaces represent strategic non-disclosure — indigenous communities withheld knowledge deliberately. Option A directly confirms this by showing that blank areas systematically correspond to territories that indigenous communities explicitly withheld. This is the strongest direct evidence for strategic silence. B is close and very tempting: colonial frustration at inconsistent information suggests deliberate misdirection — but the passage says blank spaces represent refusal to disclose, not misdirection with false information. B supports indigenous agency but is more about deliberate misinformation than strategic silence producing blanks. C shows colonial incapacity, which supports that surveyors needed indigenous help — but this doesn't confirm the blanks represent strategic non-disclosure rather than simply areas the surveyors couldn't reach. D strengthens the alternative explanation that blanks were colonial choices, not indigenous strategy.
6
From the passage's statement that Edney's critique "does not rehabilitate colonial cartography morally," which of the following can be most reliably inferred about the relationship between Edney's argument and Harley's?
CORRECT: A The passage says Edney's critique "complicates the Foucauldian reading" of power but does not rehabilitate cartography morally. This maps precisely onto A: Edney corrects the analytical model (power is not coherent and efficient) while retaining Harley's moral verdict (colonial cartography was an instrument of dispossession). B is tempting — the rhetorical function reading is sophisticated — but "purely rhetorical" is too cynical; the passage presents the disclaimer as a substantive qualification. C misreads "purely methodological" as having no political implications — the passage does not say this; recognising the archive's fragmentation has direct implications for how accountability is assigned. D inverts Edney's critique into a reinforcement of Harley — the passage presents them as in tension, not alignment.
7
The passage presents three distinct positions — Harley's, Edney's, and the indigenous studies perspective — without explicitly adjudicating between them. What is the most likely reason the author structures the passage this way rather than endorsing one position?
CORRECT: B Each position adds a dimension the others miss: Harley addresses representational power, Edney corrects the coherence assumption with archival reality, and the indigenous studies view reintroduces indigenous agency as a constitutive factor. They address different aspects of the same phenomenon rather than making incompatible claims about the same object. The structure invites synthesis rather than selection. A is an uncharitable reading that attributes the non-judgement to ignorance rather than analytical design. C implies equivalent evidential support, which the passage does not suggest — and "arbitrary" is too strong. D attributes a hidden endorsement to the author; while the indigenous perspective gets the final word (a common rhetorical placement), the passage presents it as a corrective to both others, not as a verdict.
8
Harley argues that the colonial map constitutes rather than merely records territorial claims. But the indigenous studies position shows that indigenous knowledge actively shaped those same maps through the mediation of guides and interpreters. These two claims together produce a paradox about colonial cartographic power. What is that paradox?
CORRECT: A This is the sharpest paradox produced by combining the two claims. Harley: map = constitutive of colonial power. Indigenous studies: indigenous knowledge = constitutive of the map. By transitivity: indigenous knowledge was constitutive of colonial power — the apparatus of dispossession was partly built by those it dispossessed. This is the most philosophically unsettling combination of the two positions. B frames the two positions as mutually exclusive and requiring us to choose — but the passage presents them as compatible (indigenous knowledge survived and shaped maps without preventing epistemic dispossession). C accurately describes the interpretive complexity but avoids naming the specific paradox about constitutive power. D is an interesting downstream implication about land claims but moves beyond what the passage establishes.
Passage 2 Score
/4

P 03
The Columbian Exchange, Biological Consequences & the Limits of Demographic History
Passage Timer
10:00
Read the Passage

Alfred Crosby's concept of the Columbian Exchange reframed the conquest of the Americas as primarily a biological event rather than a military or political one. The transfer of Old World pathogens to populations with no prior exposure killed an estimated fifty to ninety percent of indigenous peoples within a century of contact. Crosby argued that this biological asymmetry explains European global success more persuasively than any appeal to cultural or intellectual superiority. Old World populations had developed partial immunities through centuries of dense settlement and proximity to domesticated animals, which served as reservoirs for zoonotic disease. New World populations, lacking both the density and the domesticated animal variety that drives pathogen evolution, had no comparable immunological preparation.

The exchange operated in multiple directions. New World crops including maize, potatoes, and cassava transformed Old World agriculture and enabled population growth that would otherwise have been impossible in marginal soils and northern latitudes. Historians now argue that these agricultural transfers contributed directly to the Old World population growth that fuelled industrialisation, creating a feedback loop in which the consequences of conquest eventually enabled further imperial capacity. The potato's caloric density and adaptability became especially significant in Ireland, Germany, and Russia, where it supported peasant populations on land that could not sustain wheat cultivation.

Critical revisions have come from two directions. First, some historians argue that Crosby underweights active indigenous agency: the complex military alliances, diplomatic strategies, and political adaptations that indigenous peoples deployed in responding to European incursion. Second, environmental historians note that the pre-Columbian Americas were profoundly shaped by human agriculture and settlement. The apparent wilderness that European observers described and that later historians treated as a baseline was partly a product of the demographic collapse itself, as abandoned farmland returned to forest and managed landscapes reverted. This second point complicates not only Crosby's framework but the broader project of using ecological baselines to measure colonial impact.

Questions · Passage 03
9
Crosby's argument about biological imperialism most directly challenges which conventional explanation for European dominance in the Americas?
CORRECT: C Crosby's argument is explicitly framed as an alternative to cultural or intellectual superiority explanations. He attributes European success to biological accident: Old World populations happened to develop immunological advantages through their particular ecological history, not because of any inherent superiority. C captures this directly. A concerns military technology, which Crosby's framework also challenges but is not the primary target of the "biological imperialism" framing. B concerns indigenous political fragmentation, a separate historiographical tradition. D concerns mercantilist economics, which is a different explanatory register entirely.
10
The observation that pre-Columbian Americas were "profoundly shaped by human agriculture and settlement" challenges which assumption that underlies assessments of colonial environmental impact?
CORRECT: B The passage makes this point explicitly: the apparent wilderness was "partly a product of the demographic collapse itself," not a pre-human or pre-contact baseline. If colonial historians use the post-collapse landscape as the reference point for measuring subsequent colonial impact, they are measuring change from a landscape that was already transformed by disease-driven abandonment. B captures the methodological consequence precisely. A concerns the reliability of written accounts, a different archival problem. C misreads the point as being about directionality of biological exchange. D concerns technological capacity, which is not what the passage is addressing.
11
The passage describes a "feedback loop" in which New World agricultural transfers enabled Old World population growth, which in turn enabled further imperial expansion. Which of the following, if true, would most directly support this causal chain?
CORRECT: D The feedback loop requires that New World crops caused population growth, and that population growth caused increased imperial capacity. Option D tests both links: crop adoption correlates with population growth, and that population growth correlates with subsequent territorial expansion, while controlling for confounds. This most directly supports the complete causal chain. C establishes that crop introduction correlates with population increase and that those regions supplied industrial and colonial labour, which supports part of the chain. But D is stronger because it controls for pre-existing advantages, making the causal story cleaner. A shows migration from potato-adopting regions but does not establish the population growth mechanism or the imperial expansion link. B shows colonial profits enabled agricultural investment, which runs the causal chain in the reverse direction.
12
The critique that Crosby underweights "active indigenous agency" targets which specific feature of his explanatory framework?
CORRECT: B The agency critique targets the political and historical passivity implied by a biological determinist account. If conquest is explained entirely by pathogen asymmetry, indigenous peoples are rendered as the objects of biological forces rather than as political subjects who negotiated, resisted, allied, and adapted. The critique insists on restoring historical agency to that analysis. B captures this precisely. A concerns demographic methodology, a separate empirical dispute. C concerns source reliability, an archival critique that is logically distinct from the agency critique. D concerns collaborative exchange, which inverts rather than critiques the framework's directionality.
Passage 3 Score
/4

P 04
The 1861 Emancipation of Serfs & the Conditions for Russian Modernisation
Passage Timer
10:00
Read the Passage

The emancipation of serfs in 1861 under Alexander II has been interpreted as simultaneously a landmark of humanitarian progress, a calculated act of autocratic self-preservation, and an economic half-measure that set the structural conditions for Russia's troubled twentieth century. The reform came in the aftermath of military humiliation in the Crimean War, which had exposed Russia's technological and organisational backwardness. Alexander's own formulation, that it was "better to abolish serfdom from above than to wait until the serfs begin to liberate themselves from below," makes explicit the defensive character of the reform. The tsar was not responding to a principled commitment to freedom but to a calculation about the relative costs of controlled reform versus uncontrolled revolution.

The terms of emancipation were structured to protect the nobility's economic interests while dismantling the legal basis of their power over serfs. Former serfs were required to pay redemption payments over forty-nine years, effectively purchasing their own freedom and the land allotted to them. The allotments were frequently smaller than what serfs had cultivated before emancipation, and the commune held collective responsibility for payments, binding peasants to their villages through economic obligation where the tsar's law had previously bound them through personal serfdom. Geroid Robinson argued that this structure preserved rural stagnation by maintaining communal land tenure rather than creating independent smallholders who might have driven agricultural productivity and capital formation.

The counterfactual debate over whether more radical land reform would have produced different developmental outcomes continues to divide historians. Those following Robinson argue that the commune prevented the emergence of Russian capitalism by blocking the formation of an independent peasantry. Revisionists counter that Russia's vast geography, thin markets, and harsh climate made Western European models of agrarian capitalism an inappropriate benchmark regardless of tenure arrangements. The debate matters beyond economic history because its resolution shapes how historians interpret the causes of 1917: whether the revolution was structurally determined by the failures built into the 1861 settlement, or whether it was contingent on the particular political failures of later decades.

Questions · Passage 04
13
Alexander II's formulation that it was "better to abolish serfdom from above than to wait until the serfs begin to liberate themselves from below" reveals which primary motivation for the reform?
CORRECT: C The passage explicitly frames the reform as defensive: Alexander's statement compares the cost of controlled reform against the cost of uncontrolled revolution, and the passage states that he was "not responding to a principled commitment to freedom but to a calculation about the relative costs." C captures this calculation precisely. A attributes humanitarian concern, which the passage explicitly excludes. B invokes diplomatic pressure, which was a contributing context but is not what the specific formulation reveals about motivation. D invokes economic modernisation logic, which is a different and equally plausible motivation not supported by the particular language of the formulation quoted.
14
Robinson's argument that emancipation "preserved rural stagnation" identifies which specific mechanism as responsible?
CORRECT: B The passage identifies Robinson's mechanism precisely: communal land tenure maintained collective obligations that prevented peasants from leaving their villages and prevented the emergence of independent smallholders. This is structurally different from the payment burden (A), which is an economic pressure rather than a tenure mechanism. B matches the passage's description of Robinson's specific argument. C concerns allotment size, which the passage mentions as a related grievance but not as Robinson's mechanism for stagnation. D concerns noble land ownership, a separate structural argument not attributed to Robinson in the passage.
15
The revisionist counter-argument to Robinson holds that Russia's geography, climate, and thin markets made Western European agrarian capitalism an "inappropriate benchmark." What does accepting this counter-argument imply for the historiography of 1917?
CORRECT: B If Western European agrarian capitalism was not a realistic alternative regardless of what Alexander II decreed, then Robinson's charge that the 1861 terms caused developmental failure loses its force. And if the terms of 1861 were not responsible for developmental failure, the structural determinism argument for 1917 is weakened: the revolution becomes harder to trace to the specific failures of the emancipation settlement and more likely a product of later contingent events. B captures this inferential chain. A inverts the implication: if alternatives were unavailable, the 1861 settlement cannot be held uniquely responsible. C goes further than the revision warrants: weakening structural determinism does not require eliminating agrarian causes entirely. D overstates geographical determinism and implies the emancipation terms were irrelevant, which is stronger than the revisionist argument as presented.
16
The passage states that the emancipation bound peasants through "economic obligation where the tsar's law had previously bound them through personal serfdom." What is the historical significance of this parallel?
CORRECT: C The parallel is analytical rather than accusatory. It identifies a structural continuity: the specific legal mechanism changed (from personal servitude to collective economic obligation) but the practical outcome for peasant mobility was similar in key respects. This complicates the liberal narrative of the emancipation as straightforward liberation without requiring the stronger claim that it was deliberately fraudulent or that the change was entirely cosmetic. C captures this nuance. A attributes deliberate design to Alexander II, which goes beyond what the parallel itself demonstrates. B makes a legal fraud claim and says the reform was "purely cosmetic," which is stronger than what the structural parallel supports. D makes a specific claim about peasant political consciousness that requires additional historical evidence beyond the structural parallel.
Passage 4 Score
/4

P 05
Decolonisation, the Cold War & the Limits of Non-Alignment
Passage Timer
10:00
Read the Passage

The wave of decolonisation that swept Asia and Africa between 1947 and 1975 was shaped as profoundly by Cold War superpower competition as by indigenous nationalist movements. The United States held an ideological commitment to anti-colonialism rooted in its own independence narrative, yet it consistently subordinated decolonisation to anti-Soviet containment. Where nationalist movements had communist affiliations or accepted Soviet support, the United States backed colonial powers or post-independence authoritarian governments. The Korean War, the overthrow of Mossadegh in Iran, and early American support for French colonial policy in Indochina all reflected the logic by which Cold War calculations overrode anti-colonial ideology in Washington's actual decision-making.

For newly independent states, the Cold War presented a structural dilemma. Alignment with either superpower brought economic aid and military assistance, but at the price of ideological conformity. The Non-Aligned Movement, which grew out of the 1955 Bandung Conference and was formally founded at the Belgrade summit of 1961, attempted to chart a third course, refusing bloc alignment while extracting resources from both. Nehru, Nasser, and Tito articulated a vision of positive neutralism: active diplomatic engagement without subordination to either power. In practice, non-alignment was more aspirational than achievable. Egypt turned to the Soviet Union to finance the Aswan Dam after Western support was withdrawn. India accepted American food aid during famines despite its non-alignment rhetoric. Superpower patronage proved irresistible for states whose developmental needs exceeded their own resource base.

Historians of decolonisation have increasingly emphasised its structural incompleteness. Formal political independence left economic arrangements, including export commodity dependence, preferential trade relationships, and debt obligations, that reproduced colonial-era extraction under nominally independent sovereignty. Wallerstein's dependency framework argued that this outcome was not accidental but structural: the world-system required peripheral states to supply raw materials cheaply and import manufactures expensively, regardless of their political status. Critics of dependency theory counter that it underweights the agency of post-colonial elites whose domestic policy choices, not external structural constraints, primarily explain developmental outcomes. This debate between structural and agential accounts mirrors broader disputes in historical methodology about whether historical outcomes are better explained by the constraints actors face or by the choices they make within those constraints.

Questions · Passage 05
17
The passage identifies a contradiction in US Cold War policy toward decolonisation. What is that contradiction and what does the passage imply about its resolution in practice?
CORRECT: C The passage explicitly states both sides of the contradiction and its resolution: the US had an ideological anti-colonial commitment, but "consistently subordinated decolonisation to anti-Soviet containment," with anti-communism winning whenever the two came into conflict. C follows the passage's own framing exactly. A concerns democratic rhetoric vs. authoritarianism, which is a consequence of the resolution rather than the contradiction itself. B concerns free-market ideology vs. colonial protectionism, which is not the contradiction the passage identifies. D describes an operational dilemma about nationalist movements with communist affiliations, which is a manifestation of the contradiction rather than its structural definition.
18
The passage describes non-alignment as "more aspirational than achievable." Which of the cases cited best illustrates the structural reason for this gap between aspiration and practice?
CORRECT: D The passage identifies "developmental needs that exceeded their own resource base" as the structural reason why superpower patronage proved irresistible. India's case is the clearest illustration of this structural logic: India was the ideological leader of non-alignment yet accepted American food aid when faced with famine, demonstrating that material need overrode ideological commitment. The structural reason is not coercion (A) but the developmental gap that made patronage irresistible. A suggests Western coercion forced Egypt's hand, which attributes the outcome to external pressure rather than structural dependence. B concerns ideological coherence rather than practical failure. C concerns multilateral resistance, which is evidence against the "more aspirational than achievable" claim rather than an illustration of it.
19
Wallerstein's dependency framework and its critics are presented as illustrating a broader methodological dispute in historical explanation. What is that dispute, as the passage frames it?
CORRECT: B The passage explicitly frames the Wallerstein-critics debate as an instance of "broader disputes in historical methodology about whether historical outcomes are better explained by the constraints actors face or by the choices they make within those constraints." This is a structure-agency dispute. B reproduces the passage's own framing directly. A characterises it as an economic-versus-political dispute, which is related but not the same as structure versus agency: an agential account could emphasise either economic or political choices. C introduces a quantitative-qualitative methodological distinction not mentioned in the passage. D concerns units of analysis, which is a related but distinct methodological choice.
20
A historian argues that the Cold War's influence on decolonisation shows that "the formal end of empire did not constitute genuine independence for most newly independent states." Which of the following would most effectively challenge this argument as an overgeneralisation?
CORRECT: C The charge of overgeneralisation is most effectively answered by counter-examples: if some newly independent states did achieve genuine policy autonomy across both political and economic domains, then "most" is too strong and the generalisation fails. C targets the empirical scope of the claim directly. A shows collective diplomatic influence, which is evidence of some degree of autonomy but does not directly challenge the "most" claim about individual state independence. B invokes the agency critique of dependency theory, which shifts responsibility to elites rather than challenging the constraint claim directly. D contests the definition of independence rather than the empirical claim that most states lacked it, which is a conceptual move rather than an overgeneralisation challenge.
Passage 5 Score
/4
History · Total Score
/20
Category 04
Business & Economics
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
The Principal-Agent Problem, Short-Termism & the Governance Regress
Passage Timer
10:00
Read the Passage

The principal-agent problem — that agents acting on behalf of principals will exploit information asymmetries and divergent incentives to pursue their own interests — is the foundational preoccupation of institutional economics in the corporate context. Shareholders (principals) cannot efficiently monitor managers (agents) who possess private information about firm operations, effort levels, and investment opportunities. The standard contractual solution is performance-based compensation: stock options and bonuses indexed to market returns. By making managerial wealth sensitive to share price, the arrangement is supposed to align agent incentives with principal interests. In practice, it generates its own pathology — short-termism. When compensation is indexed to short-run market prices, managers have structural incentives to manipulate reported earnings, defer long-horizon investment, and engage in financial engineering that flatters quarterly metrics without creating durable value.

Three institutional responses have been proposed. First, independent board oversight is theoretically compelling but empirically enfeebled: directors lack the operational information, time, and genuine financial incentives to second-guess executives; their independence is often nominal because the CEO typically controls the nomination process that installs them. Second, concentrated institutional ownership — large blockholders with the information resources and financial stakes to discipline management — provides more credible monitoring but generates its own agency problem: majority shareholders may expropriate minority shareholders through related-party transactions or asset tunnelling. Third, hostile takeover markets discipline entrenched management through the credible threat of displacement; but the empirical record is mixed at best, since takeover premia routinely exceed realised synergy gains, suggesting that acquirers are themselves subject to the managerial hubris and empire-building incentives they are supposed to correct.

The structural lesson the governance literature draws from this sequence is sobering: each mechanism that addresses one agency relationship generates a new one at a different level. Independent boards create agency problems between directors and shareholders; concentrated ownership creates them between majority and minority holders; takeover markets create them between acquiring management and its own shareholders. The governance problem does not have a solution in the engineering sense; it has only a shifting menu of imperfect mechanisms whose relative efficacy is context-dependent and whose costs are never fully visible in any individual transaction.

Questions · Passage 01
1
The passage argues that performance-based compensation — specifically stock options indexed to market returns — generates short-termism because it gives managers structural incentives to flatter short-run metrics at the expense of durable value creation. Which of the following, if true, most seriously weakens this argument?
CORRECT: A The passage's argument requires that equity-linked compensation → short-termism via metric-gaming. Option A directly contradicts this: CEOs with the strongest equity incentives (largest personal stakes) show the highest long-duration investment. If the mechanism were as the passage describes, the opposite pattern should hold. A severs the claimed causal link. B is tempting because it identifies a design variable (accounting vs. TSR metric) — but this actually narrows the short-termism problem rather than undermining the general claim; it concedes short-termism exists under some metric designs. C supports the passage's implicit recommendation that longer vesting reduces short-termism; it does not challenge the claim. D introduces a behavioural mechanism (hubris) as an alternative cause but does not challenge the existence of the incentive-misalignment mechanism the passage describes.
2
From the passage's analysis of the three institutional responses and the "governance regress" described in paragraph 3, which of the following can be most reliably inferred?
CORRECT: B The passage explicitly draws this conclusion: "each mechanism generates a new agency problem at a different level" and the problem has "only a shifting menu of imperfect mechanisms" — no solution in the engineering sense. This is precisely B: the problem is permanent and structural, not correctable by better design. A inverts the regress logic — deploying all three simultaneously would produce all three new agency problems simultaneously, not cancel them. C is wrong: the passage says independent boards DO generate an agency problem — between directors and shareholders — they are not exempt from the regress. D is not argued in the passage; the governance regress is presented as a structural feature of hierarchical organisation generally, not limited to public firms.
3
The passage presents a paradox at the heart of performance-based compensation: the very mechanism designed to align manager and shareholder interests — tying pay to share price — creates incentives for managers to manipulate the metric used to measure alignment. Which of the following best identifies the general form of this paradox?
CORRECT: A The paradox is precisely Goodhart's Law: share price is chosen as the target because it's a good proxy for value creation. Once it becomes the compensation target, managers optimise share price directly (through manipulation, buybacks, earnings smoothing) rather than value creation — the proxy ceases to track the underlying variable. B describes a coordination failure but not the specific mechanism of metric-gaming described. C is a related but distinct problem — moral hazard is about risk-taking incentives, not metric manipulation. D is an adverse selection problem arising from the pre-hiring screening failure — not the post-hiring metric-gaming mechanism the passage describes.
4
The passage's critique of hostile takeover markets as a governance mechanism — that acquirers are subject to hubris and empire-building — implicitly assumes which of the following in order for the critique to constitute a genuine objection to takeovers as a disciplinary device?
CORRECT: A The passage's critique is that acquirers suffer from hubris and empire-building — i.e., the governance problem reappears in the acquiring firm. But if acquiring-firm shareholders and boards effectively discipline their own management, the critique loses force: the target's agency problem is resolved (the takeover removes entrenched management) even if the acquiring firm subsequently has its own. For the critique to hold, it must be assumed that acquiring-firm governance is also inadequate — i.e., the hubris is not itself corrected. B questions the evidentiary value of premia data, which would undermine the empirical premise but not the logical structure of the critique. C would actually undermine the need for acquirer hubris to matter at all — if the threat disciplines without actual completion, acquirer behaviour is irrelevant. D is a precondition for takeovers being relevant in the first place, not an assumption specific to the hubris critique.
Passage 1 Score
/4

P 02
Platform Markets, Network Tipping & the Inadequacy of the Consumer-Welfare Standard
Passage Timer
10:00
Read the Passage

Platform markets deviate from conventional goods markets in a structural way that classical competition theory was not designed to handle. Platforms are multi-sided: they serve distinct user groups — consumers, advertisers, developers, content producers — whose value from participation depends on the size and composition of other groups present. This inter-group externality, the network effect, generates a demand-side economy of scale that compounds rather than merely parallels supply-side economies. The critical implication is tipping: where network effects are strong and switching costs are high, markets tend toward winner-take-all outcomes not because of exclusionary conduct but because of the aggregate rational preference of users for the platform with the largest installed base. Tipping is endogenous to user rationality, not a symptom of anticompetitive behaviour — which is precisely what makes it regulatorily vexing.
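The tipping dynamic described above can be sketched as a toy simulation, under a deliberately simplified hypothetical model (not from the passage): each arriving user rationally joins whichever platform has the larger installed base, with early ties broken at random.

```python
import random

# Toy model of network tipping: users individually rational, no exclusionary
# conduct anywhere, yet the market still converges to winner-take-all.
def simulate(users: int, seed: int) -> tuple[int, int]:
    rng = random.Random(seed)
    a, b = 1, 1  # one seed user on each platform
    for _ in range(users):
        if a > b:
            a += 1          # join the larger installed base
        elif b > a:
            b += 1
        elif rng.random() < 0.5:
            a += 1          # break the initial tie at random
        else:
            b += 1
    return a, b

a, b = simulate(10_000, seed=0)
# One platform captures essentially the whole market even though no firm
# did anything anticompetitive: tipping is endogenous to user rationality.
assert max(a, b) / (a + b) > 0.99
```

The point of the sketch is that the asymmetry is seeded by chance and amplified by individually rational choices, which is exactly why conduct-based antitrust doctrine struggles to reach it.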

Antitrust law's consumer-welfare standard evaluates market power through its effects on price, output, and consumer surplus. This calibration is systematically ill-suited to platform markets for two related reasons. First, dominant platforms typically offer zero-price consumer-facing services funded by data monetisation, so price-based welfare analysis finds no harm where significant harm may exist. Second, the harms platform dominance imposes — on data subjects, on dependent application developers, on intermediary advertisers squeezed by monopsonistic platform pricing — fall outside the consumer category that the standard is designed to protect. A regulator applying the consumer-welfare standard to a dominant search engine, social network, or app store will systematically miss the constituencies most adversely affected by platform power.

The remedial landscape is equally constrained. Structural remedies — break-ups — risk destroying the network-effect efficiencies that benefit the very consumers regulators aim to protect. Interoperability mandates preserve efficiencies while reducing lock-in but create genuine security and quality-control challenges that incumbents can exploit strategically to delay or dilute compliance. Ex ante rules — prohibiting certain conduct before harm is demonstrated — sacrifice the precision of case-by-case adjudication for regulatory speed, risking both over-inclusion (deterring pro-competitive conduct) and under-inclusion (failing to anticipate new forms of exclusion). No existing jurisdiction has resolved this trilemma, and the intellectual honesty of the platform regulation debate requires acknowledging that each available instrument carries costs that cannot be fully specified in advance.

Questions · Passage 02
5
The passage argues that the consumer-welfare standard systematically misses harms imposed on data subjects, developers, and advertisers by dominant platforms. Which of the following, if true, most strengthens this argument?
CORRECT: A The passage's argument is that the consumer-welfare standard misses non-consumer harms. Option A is a clean empirical instantiation: zero harm to consumers (zero marginal cost downloads) coexists with significant harm to developers (30–40% above-competitive commissions). Under the consumer-welfare standard, this would register as no antitrust problem — which is exactly what the passage says the standard misses. B actually weakens the argument: if consumers make informed choices, the data-price harm is priced in by their revealed preferences. C undermines the argument by showing the consumer-welfare standard can be extended to capture data-price harms — which challenges the claim that it systematically misses platform harms. D provides regulatory evidence but doesn't directly demonstrate the welfare-standard gap; the DMA's existence proves awareness of a gap, but the question asks what strengthens the argument that the gap exists.
6
The passage states that tipping is "endogenous to user rationality, not a symptom of anticompetitive behaviour." Which of the following can be most reliably inferred from this claim?
CORRECT: B If tipping results from rational user choices rather than anticompetitive conduct, then antitrust doctrine — which is built around identifying and punishing exclusionary conduct — cannot reach it without being repurposed. This is the "regulatorily vexing" problem the passage names. A over-infers from "not anticompetitive conduct" to "not harmful and not subject to regulation" — the passage explicitly calls it "regulatorily vexing," implying it is a regulatory concern even without exclusionary conduct. C imports a normative claim about autonomy that the passage does not make — the passage identifies a doctrinal problem, not a normative prohibition. D makes an equivalence between tipping and exclusionary conduct in terms of responsibility — the passage draws no such equivalence.
7
The passage ends by saying that "intellectual honesty requires acknowledging that each available instrument carries costs that cannot be fully specified in advance." What is the primary function of this concluding statement in the context of the passage as a whole?
CORRECT: B The passage maps out the costs of every available instrument — break-ups destroy efficiencies, interoperability mandates are gamed, ex ante rules over- or under-include. The concluding call for intellectual honesty is not a counsel of inaction but a methodological standard: anyone engaging the debate honestly must acknowledge the costs of their preferred instrument. A converts epistemic humility into a policy recommendation (inaction) that the passage never makes. C is too strong — "limits of academic analysis" is not what the author says; acknowledging unspecifiable costs is a call for honesty, not a declaration of analytical exhaustion. D attributes a delegating intent to the author that isn't there — the passage is addressed to participants in the debate, not to any specific institutional actor.
8
The passage presents a structural paradox in platform regulation: the most effective remedy for platform dominance — breaking up the dominant platform — risks destroying the network-effect efficiencies that benefit the very users the regulator is trying to protect. Which of the following most precisely identifies what makes this a genuine paradox rather than merely a trade-off?
CORRECT: B What makes this a genuine paradox rather than a trade-off is the self-referential structure: the intervention is motivated by protecting users, but the mechanism of protection (destroying the network) directly harms those same users through efficiency loss. The regulator's tool produces, in part, the harm it was designed to address — the remedy is partially constitutive of the injury. A correctly notes that this could be read as a trade-off — and this is the best "defeater" option — but it misses the self-referential structure that elevates it to paradox. C identifies a real legal paradox (non-consumer harm, consumer-welfare threshold) but this is about evidentiary standards, not the mechanism the passage highlights in the break-up discussion. D is a real tension about user rationality and regulatory override but applies to any tipping-based intervention, not specifically to the break-up paradox as described.
Passage 2 Score
/4

P 03
Behavioural Economics, Prospect Theory & the Architecture of Choice
Passage Timer
10:00
Read the Passage

Classical economic theory assumes that agents have stable, well-defined preferences, process all available information, and make choices that maximise expected utility calculated over final wealth states. Behavioural economics, drawing primarily on the experimental psychology of Kahneman and Tversky, documents systematic deviations from this model that are not random noise but patterned departures with consistent structure. Prospect theory replaces the expected utility framework with a value function defined over gains and losses relative to a reference point rather than over absolute wealth levels. The function is concave in the gains domain, reflecting diminishing marginal sensitivity to additional gains, and convex in the losses domain, reflecting diminishing marginal sensitivity to additional losses. Crucially, it is steeper in the losses domain than in the gains domain: a loss of a given magnitude produces greater psychological impact than an equivalent gain, a property called loss aversion. Together, the concavity over gains and convexity over losses generate the pattern of simultaneous risk-aversion over gains and risk-seeking over losses that classical theory cannot accommodate without abandoning its core axioms.
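A minimal numerical sketch of this value function, using Tversky and Kahneman's conventional 1992 parameter estimates; the specific numbers are illustrative assumptions, not drawn from the passage.

```python
# Illustrative prospect-theory value function. Parameter values follow
# Tversky & Kahneman's 1992 estimates; they are conventional, not from
# the passage.
ALPHA = 0.88   # curvature over gains (concave)
BETA = 0.88    # curvature over losses (convex)
LAMBDA = 2.25  # loss-aversion coefficient: losses loom about 2.25x larger

def value(x: float) -> float:
    """Subjective value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# Loss aversion: a 100-unit loss hurts more than a 100-unit gain pleases.
assert abs(value(-100)) > value(100)
# Risk-aversion over gains: a sure 50 beats a 50/50 gamble on 100.
assert value(50) > 0.5 * value(100)
# Risk-seeking over losses: a 50/50 gamble on losing 100 beats a sure loss of 50.
assert 0.5 * value(-100) > value(-50)
```

The three assertions reproduce, in order, the loss-aversion asymmetry and the gains/losses risk-attitude reversal the passage describes.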

Prospect theory also incorporates probability weighting: people overweight small probabilities and underweight large ones relative to their objective values. This explains the simultaneous purchase of insurance and lottery tickets, both of which have negative expected monetary value but serve different psychological functions: insurance responds to overweighted small probabilities of large losses; lottery tickets respond to overweighted small probabilities of large gains. Loss aversion also operates in riskless choice, generating what Thaler called the "endowment effect": people demand more to give up an object they own than they would pay to acquire the same object, because relinquishing it is coded as a loss while acquiring it is coded as a gain.
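The overweighting/underweighting pattern can be sketched with the one-parameter weighting function from Tversky and Kahneman's 1992 paper; the functional form and the gamma value are conventional estimates for gains, assumed here for illustration rather than stated in the passage.

```python
# Illustrative probability-weighting function (Tversky & Kahneman, 1992).
# GAMMA = 0.61 is their conventional estimate for gains, not from the passage.
GAMMA = 0.61

def weight(p: float) -> float:
    """Decision weight attached to an objective probability p (0 < p < 1)."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

# Small probabilities are overweighted, which is why both lottery tickets
# (small chance of a large gain) and insurance against rare large losses
# look attractive despite negative expected monetary value...
assert weight(0.01) > 0.01
# ...while moderate-to-large probabilities are underweighted.
assert weight(0.50) < 0.50
assert weight(0.90) < 0.90
```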

Nudge theory, developed by Thaler and Sunstein, applies these findings through choice architecture: designing the environment in which choices are made to steer people toward better outcomes without restricting their options. Default rules exploit status quo bias and loss aversion; automatically enrolling employees in pension schemes with opt-out provisions dramatically increases participation compared to identical opt-in schemes. The libertarian paternalism framing attempts to reconcile the intervention with liberal values by preserving freedom of choice while guiding its exercise. Critics from the left argue that nudges treat symptoms rather than structural causes of poor decision-making; critics from the right argue that they are covert manipulation that bypasses rational agency rather than informing it. Both critiques identify something genuine: the choice of which default to set encodes normative judgements that the technocratic framing conceals.

Questions · Passage 03
9
The prospect-theory value function, through its concavity over gains and convexity over losses, generates risk-aversion over gains and risk-seeking over losses. Which of the following real-world patterns is most directly explained by this combination?
CORRECT: C The disposition effect described in C maps directly onto the prospect theory prediction. With a winning stock, the investor is in the gains domain where the value function is concave and risk-aversion applies: they prefer to lock in the certain gain rather than risk a reversal. With a losing stock, the investor is in the losses domain where the value function is convex and risk-seeking applies: they prefer the gamble of holding on to avoid realising the certain loss. The tax argument (selling losers and holding winners has better tax treatment) makes the pattern especially notable since the behaviour persists against financial self-interest. A concerns transaction costs coded as losses, which involves loss aversion but not the risk-aversion-over-gains and risk-seeking-over-losses combination. B concerns mental accounting of windfall income, a related but distinct behavioural pattern. D concerns attention to small recurring costs, which relates to salience rather than to the gains-losses risk asymmetry.
10
The endowment effect is explained by prospect theory as a consequence of loss aversion. A critic argues that the endowment effect could alternatively be explained by rational preferences for owned objects that reflect genuine attachment value rather than a cognitive bias. Which feature of experimental endowment effect studies would most directly undermine this rational-attachment explanation?
CORRECT: A The rational-attachment explanation requires that the higher price demanded for an owned object reflects genuine emotional investment in the object. If the endowment effect appears after only a few minutes of ownership, genuine attachment has not had time to form. The effect must therefore reflect something about the framing of ownership itself rather than actual attachment value. A directly undermines the rational-attachment account on its own terms. B shows that reframing as a professional transaction eliminates the effect, which is evidence against rational attachment (genuine attachment should persist regardless of framing) but A is more direct because it attacks the temporal precondition for attachment formation. C supports rather than undermines rational attachment. D shows cross-cultural universality, which is consistent with both a universal cognitive bias and a universal human tendency toward attachment to property.
11
Both the left-wing and right-wing critics of nudge theory identify something "genuine" in their objections, according to the passage. What do their criticisms share despite coming from opposite political directions?
CORRECT: D The passage makes this explicit: "both critiques identify something genuine" and the shared insight is that "the choice of which default to set encodes normative judgements that the technocratic framing conceals." The left critique (nudges treat symptoms not structural causes) and the right critique (nudges bypass rational agency) both point to the way the technocratic presentation of nudge policy conceals political and normative choices about which outcomes count as improvements. D captures this shared insight. A says both reject behavioural economics empirics, which is false: neither critique is directed at the empirical findings. B says both argue for less intrusive alternatives, which is not what either critique as described in the passage is saying. C says both accept improved individual outcomes but worry about aggregate costs, which is also not what the passage attributes to either critique.
12
Prospect theory defines value over gains and losses relative to a reference point rather than over absolute wealth. Which of the following best illustrates why the reference point matters for predicting behaviour in a way that absolute wealth levels cannot capture?
CORRECT: A The passage explains that value is assessed relative to a reference point. Option A shows two people with identical absolute outcomes (same salary) but different reference points (different expectations), producing different psychological responses. Classical expected utility theory, which evaluates outcomes by final wealth states, predicts identical responses because the final wealth is the same. Prospect theory predicts different responses because the reference point differs: the first worker is in the loss domain (received less than expected) while the second is in the gain domain (received more than expected). A captures this reference-point dependence directly. B describes diminishing marginal utility, which is a feature of classical expected utility theory rather than a uniquely prospect-theoretic prediction. C concerns attention to absolute magnitude, a different phenomenon. D illustrates the anchoring or framing effect of a reference price, which is related but the question asks specifically about reference-point dependence in the value function.
Passage 3 Score
/4

P 04
Comparative Advantage, Trade Policy & the Political Economy of Protectionism
Passage Timer
10:00
Read the Passage

Ricardo's principle of comparative advantage remains one of economics' most counterintuitive and robust results: even if one country can produce every good more efficiently than another in absolute terms, both countries gain from trade if each specialises in the goods in which its relative efficiency is greatest. The gains from trade are therefore not contingent on absolute productive superiority; they arise from differences in opportunity costs across countries. A country that is, say, three times as productive as another in textiles but only twice as productive in electronics should still specialise in textiles, where its relative advantage is greatest, and import electronics, because diverting resources from the comparative advantage sector carries a higher opportunity cost than importing the good produced less efficiently at home.
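The opportunity-cost arithmetic above can be checked with Ricardo's own textbook illustration (England and Portugal, labour-hours per unit); the figures are the standard example, not taken from the passage.

```python
# Ricardo's classic illustration: Portugal is absolutely more productive in
# both goods (fewer labour-hours per unit), yet comparative advantage still
# dictates that each country specialise.
hours = {
    "Portugal": {"wine": 80, "cloth": 90},    # absolutely better at both
    "England":  {"wine": 120, "cloth": 100},
}

def opportunity_cost(country: str, good: str, other: str) -> float:
    """Units of `other` forgone to produce one unit of `good`."""
    return hours[country][good] / hours[country][other]

# Portugal gives up less cloth per unit of wine than England does...
assert opportunity_cost("Portugal", "wine", "cloth") < \
       opportunity_cost("England", "wine", "cloth")
# ...and England gives up less wine per unit of cloth than Portugal does,
# so each specialises despite Portugal's absolute advantage in both goods.
assert opportunity_cost("England", "cloth", "wine") < \
       opportunity_cost("Portugal", "cloth", "wine")
```

The sketch makes the counterintuitive point concrete: absolute superiority in both goods does not eliminate the opportunity-cost differences from which the gains from trade arise.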

The political economy of trade policy sits in chronic tension with this economic logic. While the aggregate gains from trade are theoretically positive-sum, their distribution is unequal. The Stolper-Samuelson theorem predicts that trade liberalisation benefits the abundant factor of production in each country and harms the scarce factor: in labour-abundant countries, liberalisation raises wages; in capital-abundant countries, it harms labour and benefits capital owners. This distributional asymmetry creates politically organised losers whose geographic concentration and visibility make them effective advocates for protection, against diffuse and less organised winners who capture smaller per-capita gains. The political logic of trade policy is therefore systematically biased toward protection regardless of its aggregate welfare costs.

Strategic trade theory, associated with Paul Krugman and others, identified conditions under which protection could be nationally rational rather than merely politically motivated. In industries with economies of scale and learning-by-doing effects, first-mover advantages can be self-reinforcing: an industry that achieves scale dominates global markets, and the subsidies or protection that enabled it to achieve that scale are subsequently recoverable through monopoly rents. The infant industry argument extends this logic to developing economies: temporary protection allows domestic industries to acquire the capabilities and scale they need to become competitive before being exposed to international competition. Critics note two problems: governments are systematically poor at picking winners, and temporary protection is politically very difficult to remove once vested interests have formed around it.

Questions · Passage 04
13
The principle of comparative advantage holds that gains from trade arise from differences in opportunity costs rather than from absolute productive superiority. Which of the following most precisely identifies what this implies for a country that is absolutely more productive than all its trading partners in every good?
CORRECT: B The passage explicitly states that gains from trade arise from opportunity cost differences, not absolute superiority. Even a country that is absolutely more productive in everything still faces opportunity costs: every unit of a less-relatively-efficient good produced requires diverting resources from its most relatively efficient activity. Specialising and trading allows it to consume more of both goods than autarky permits. B captures this precisely. A is the intuitive but incorrect view that absolute superiority eliminates gains from trade. C introduces a mutual-gain requirement and suggests a less productive country might have no comparative advantage, but every country necessarily has a comparative advantage in at least one good since comparative advantage is about relative, not absolute, efficiency. D makes the error of claiming comparative advantage requires the absence of absolute superiority, which is false.
14
The Stolper-Samuelson theorem predicts that trade liberalisation in capital-abundant countries harms labour and benefits capital owners. The passage uses this to explain why trade policy is "systematically biased toward protection." Which additional premise is required for this explanation to hold?
CORRECT: C The Stolper-Samuelson result alone only tells us who wins and who loses. To explain the systematic political bias toward protection, the passage adds that the losers' "geographic concentration and visibility" make them effective advocates for protection, while the winners are "diffuse and less organised." This asymmetry in political organisation, not in numbers or per-capita stakes, is what translates distributional loss into political pressure for protection. C supplies this additional premise precisely. A concerns numerical ratios and per-capita stakes, which is a different mechanism from the organised-vs-diffuse logic the passage uses. B concerns compensation feasibility, which is a policy response rather than a premise for the political bias explanation. D inverts the logic: in capital-abundant countries, labour is the scarce and harmed factor, but labour being a larger electoral share would make governments responsive to labour demands, which might actually favour liberalisation in labour-abundant countries and protection in capital-abundant ones. The passage's mechanism is organisational, not electoral.
15
Strategic trade theory argues that protection can be nationally rational when industries have economies of scale and learning-by-doing effects. Critics respond with two objections. Which of the two objections is more fundamental as a challenge to the policy case for strategic trade intervention?
CORRECT: A The government-as-poor-picker objection is more fundamental because it attacks the prior stage of the policy chain. Strategic trade theory requires that governments correctly identify industries with economies of scale and learning-by-doing characteristics before protection is applied. If governments cannot do this reliably, the policy is likely to protect the wrong industries, generating the costs of protection without the promised rents. The removal-difficulty objection assumes the right industry was identified and protection applied; it then worries about exit. But if identification fails, removal difficulties become irrelevant because the wrong intervention was never worth starting. A captures this logical priority. B argues the opposite prioritisation. C says both are equally fundamental, which underweights the logical dependency of the removal problem on the prior identification problem. D argues both are merely empirical implementation problems, which mischaracterises the identification objection: the problem is not simply that current governments pick poorly but that the information required to pick well may be structurally unavailable before competitors have already achieved scale dominance.
16
The passage describes comparative advantage as "counterintuitive." A first-year economics student objects: "It is not counterintuitive at all. Obviously countries should do what they are best at relative to everything else." What is wrong with this student's understanding?
CORRECT: D The counterintuitive element is precisely the implication for the superior country: it should import goods that it could produce more cheaply in absolute terms than the country it imports from. This violates the common-sense intuition that if you can do something better than your counterpart, you should do it yourself. The student's paraphrase ("do what you are best at relative to everything else") is a statement of the principle as applied to one's own activities, but it sidesteps the implication that the best should still import from the inferior. D identifies this precisely. B confuses comparative with absolute advantage, which is a genuine student error but not the error the student in the question is making: "best at relative to everything else" is actually a reasonable paraphrase of comparative advantage. C correctly identifies where the counterintuitive element lies but describes the student's statement as essentially correct, when the student is missing exactly the implication D describes.
Passage 4 Score
/4

P 05
Central Banking, Inflation Targeting & the Limits of Monetary Policy
Passage Timer
10:00
Read the Passage

Central banks in most advanced economies operate under an inflation targeting framework, setting a numerical inflation target and adjusting the short-term interest rate to achieve it. The intellectual foundations of inflation targeting rest on two claims: that price stability is the primary contribution monetary policy can make to long-run economic welfare, and that central bank credibility, once established, allows the bank to stabilise inflation expectations and therefore inflation itself at lower real costs than discretionary policy. The time-inconsistency problem provides the theoretical rationale for commitment: a central bank that can deviate from its announced policy will face pressure to exploit the short-run trade-off between inflation and unemployment, generating inflationary bias over time. Rules that constrain the bank's discretion solve this problem by removing the option to deviate.
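The time-inconsistency argument can be sketched in a few lines, in the spirit of the Barro-Gordon model. The quadratic loss function and the parameter values are illustrative assumptions, not taken from the passage:

```python
# Minimal sketch: the bank dislikes inflation but benefits from
# surprise inflation, so under discretion its optimal choice exceeds
# any announcement it makes.
a, b = 1.0, 2.0  # weight on inflation cost; value of surprise inflation

def bank_loss(pi, pi_expected):
    # quadratic cost of inflation minus the gain from surprising agents
    return a * pi ** 2 - b * (pi - pi_expected)

def best_response(pi_expected):
    # minimise loss over pi: 2*a*pi - b = 0 -> pi = b / (2*a),
    # independent of both the announcement and agents' expectations
    return b / (2 * a)

announced = 0.0
assert best_response(announced) > announced  # the announcement is not credible

# Rational agents therefore expect b/(2a); in equilibrium the surprise
# term vanishes and the bank bears a pure inflation cost.
pi_eq = best_response(announced)
print(pi_eq, bank_loss(pi_eq, pi_eq))  # 1.0 1.0
# A credible commitment to pi = 0 would give loss 0: rules beat discretion.
```

The positive-inflation, zero-gain equilibrium is the "inflationary bias" the passage describes, and removing the option to deviate is what eliminates it.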

The 2008 financial crisis and its aftermath exposed limits in the inflation-targeting framework that its architects had underestimated. First, the framework focused exclusively on consumer price inflation while largely ignoring asset price inflation: the credit and housing bubbles that precipitated the crisis were not visible in the CPI even as they were creating systemic financial fragility. Second, when short-term interest rates reached the zero lower bound, conventional monetary policy was exhausted while demand remained deficient. Central banks responded with unconventional measures: quantitative easing, forward guidance, and in some cases negative interest rates. These instruments operate through different channels than the conventional interest rate mechanism and their effects are harder to calibrate, raising questions about whether central banks understood the mechanisms they were deploying.

A deeper challenge concerns the distributional consequences of unconventional monetary policy. Asset purchases by central banks elevated asset prices, disproportionately benefiting households that already owned significant financial assets. Lower interest rates reduced returns on savings while supporting borrowing, transferring wealth from savers to debtors in ways that cut across conventional income categories. Critics argued that central banks were making distributional choices that should be subject to democratic accountability rather than technocratic insulation. Central bankers responded that distributional effects were a consequence of restoring growth, not a policy objective, and that the counterfactual of inaction would have imposed larger and more regressive distributional costs. Both positions have merit, which is precisely what makes the accountability question difficult to resolve cleanly.

Questions · Passage 05
17
The time-inconsistency problem provides the rationale for rules over discretion in central banking. Which of the following most precisely states what the time-inconsistency problem is?
CORRECT: C The time-inconsistency problem, formalised by Kydland and Prescott, is specifically about the incentive to deviate from an announced policy. A bank that can deviate will be tempted to exploit the short-run Phillips curve trade-off after private agents have formed their inflation expectations. Private agents, anticipating this temptation, will not believe the low-inflation announcement. The result is higher inflation without any employment gain. The problem is resolved by removing the discretion to deviate, which is why rules are preferred. C states this precisely. A describes a dual-mandate tension, which is a different problem from time-inconsistency. B describes the lags problem, which is a calibration difficulty unrelated to the strategic credibility problem. D describes monetary-fiscal coordination failure, again a different issue.
18
The passage identifies two distinct failures of the inflation-targeting framework revealed by the 2008 crisis. Which of the following correctly characterises the relationship between these two failures?
CORRECT: B The two failures are analytically distinct. The first is a target-variable problem: CPI did not capture the asset price inflation that was creating systemic risk. The second is an instrument-exhaustion problem: when the zero lower bound was reached, the conventional interest rate tool was no longer available. These are separate failures of separate components of the framework. B characterises them correctly as logically distinct. A asserts a direct causal chain from asset price neglect to the zero lower bound; the chain is historically plausible, but the passage presents the two as separate failures of separate components, not as a causal sequence. C says they are a single failure with two expressions, which collapses the distinction the passage draws. D makes a priority claim but actually mischaracterises the first failure: it is not only about the target measure but about the entire framework's neglect of financial stability.
19
The passage states that both the critics' position and the central bankers' response "have merit." What does this acknowledgement imply about the accountability question?
CORRECT: D The passage says both have merit and that this is "precisely what makes the accountability question difficult to resolve cleanly." The difficulty is not simply that the empirical question is close; it is that the two positions operate at different levels. The critics argue procedurally: distributional choices should be democratically accountable regardless of their outcome. The central bankers argue consequentially: the outcomes were justified by the counterfactual. A procedural argument and a consequentialist argument do not directly refute each other, which is why both can have merit simultaneously. D captures this structure. A uses "both positions acknowledge unintentional effects" to dismiss accountability, but the critics' point is precisely that intention is not the relevant criterion for accountability. B says the counterfactual is decisive, but the passage says both have merit, not that one is decisive. C prescribes a mandate expansion, which goes beyond what acknowledging both positions' merit implies.
20
A commentator argues: "The 2008 crisis showed that inflation targeting failed, so central banks should return to discretionary policy unconstrained by rules." Which objection from within the passage's framework most directly challenges this argument?
CORRECT: B The passage's framework for rules over discretion rests on the time-inconsistency argument. This argument's force does not depend on the inflation-targeting framework having had a perfect record; it rests on the structural claim that discretion creates inflationary bias that credible rules prevent. Even if the specific rules of inflation targeting need revision to address their gaps, the argument against unconstrained discretion survives the 2008 failure. B identifies this precisely: the commentator's inference from "rules failed" to "return to discretion" does not follow because the time-inconsistency problem is structural, not a product of the specific rules that failed. A addresses the source of the crisis rather than the rules-versus-discretion question. C describes QE as constrained discretion, an interesting point but not the most direct challenge to the argument for unconstrained discretion. D makes a comparative historical argument that is outside the passage's framework.
Passage 5 Score
/4
Business & Economics · Total Score
/8
Category 05
Technology
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Algorithmic Opacity: Three Kinds of Inexplicability & the Limits of XAI
Read the Passage

The demand for explainable AI (XAI) conflates at least three distinct forms of opacity that must be disentangled before regulatory or technical prescriptions can be coherently formulated. A decision system may be opaque algorithmically — its internal computation is untraceable at any level intelligible to non-specialists; predictively — even its designers cannot predict its outputs in particular cases before running it; or causally — even perfect computational traceability would not reveal which features of the input were causally decisive in the probabilistic sense that matters for accountability. These three forms are not equivalent, and a technique addressing opacity in one dimension may actively worsen it in another. A simplified surrogate model may improve algorithmic interpretability while producing causal explanations that are systematically misleading.

Post-hoc explanation methods — LIME and SHAP are the most widely deployed — address algorithmic opacity by constructing locally faithful surrogate models: interpretable approximations of a complex model's behaviour in the neighbourhood of a particular prediction. These surrogates do not reveal the original model's internal mechanism; they produce a different, simpler model whose local input-output behaviour resembles the original. The critical limitation is the gap between local and global faithfulness. A SHAP explanation telling a credit applicant that her loan was denied because her debt-to-income ratio was the most influential factor may be locally accurate — the surrogate fits the original model's predictions in her neighbourhood — while being causally misleading if the original model's actual discriminatory mechanism operates through correlates of protected characteristics that happen to covary with debt-to-income ratios in the training data. Local fidelity does not imply causal transparency.
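The local/global gap can be illustrated with a deliberately tiny, library-free sketch. This is not LIME's or SHAP's actual algorithm, only the underlying idea of a locally faithful linear surrogate; the cubic "model" and the probe point are invented for illustration:

```python
# Fit a linear approximation to an "opaque" model near one input.
def opaque_model(x):
    # stand-in for a complex model: globally nonlinear
    return x ** 3

x0 = 1.0    # the prediction we want to "explain"
eps = 0.01

# local linear surrogate via a central finite difference around x0
slope = (opaque_model(x0 + eps) - opaque_model(x0 - eps)) / (2 * eps)

def surrogate(x):
    return opaque_model(x0) + slope * (x - x0)

# Locally faithful: near x0 the surrogate tracks the model closely ...
assert abs(surrogate(1.001) - opaque_model(1.001)) < 1e-4
# ... but globally unfaithful: far from x0 it says nothing reliable.
print(round(surrogate(3.0), 1), opaque_model(3.0))  # 7.0 vs 27.0
```

The surrogate is a different, simpler model that happens to agree with the original in a neighbourhood; nothing about that agreement transfers to the rest of the input space, which is the passage's point about systemic audit.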

The governance implication is that explainability tools serve different accountability functions at different analytical levels, and conflating those functions produces regulatory frameworks that satisfy the form of transparency while evacuating its substance. Local explanations are adequate for individual recourse — helping an applicant understand what she could change to obtain approval. They are inadequate for systemic audit — identifying whether a model discriminates systematically across the distribution of decisions. Systemic audit requires global fidelity to the model's actual mechanism, not local approximation of its outputs. Regulators who accept post-hoc local explanations as sufficient evidence of non-discrimination have been given a technically sophisticated account of a different question than the one they needed answered.

Questions · Passage 01
1
The passage argues that local post-hoc explanations (LIME/SHAP) are inadequate for systemic discrimination audits because local fidelity does not imply global fidelity or causal transparency. Which of the following, if true, most seriously weakens this argument?
CORRECT: A The passage's critique is that SHAP is local and cannot reveal global patterns. Option A directly undermines this by describing a methodology that aggregates SHAP explanations across decisions to produce a globally faithful map — precisely bridging the local/global gap the passage treats as unbridgeable. B is tempting because it redefines the accountability standard — but this narrows the scope of the argument rather than weakening its logic; the passage's claim that local explanations can't do systemic audit remains true, the question is just whether systemic audit is required. C shows face validity (SHAP aligns with human judgment) — but this addresses algorithmic opacity, not the local/global faithfulness gap or causal transparency. D describes a regulatory development that supplements SHAP with disparate impact statistics — this concedes the passage's point that SHAP alone is insufficient rather than weakening it.
2
The passage states that "a simplified surrogate model may improve algorithmic interpretability while producing causal explanations that are systematically misleading." Which of the following can be most reliably inferred from this claim?
CORRECT: A The passage explicitly states that addressing one form of opacity "may actively worsen" another — specifically that improving algorithmic interpretability (via surrogate) can produce causally misleading explanations. This directly implies the two dimensions are independent: they can move in opposite directions. B over-infers "always negative" — the passage says surrogates may improve one dimension while worsening another; it does not say the net effect is always negative (surrogates remain useful for individual recourse). C makes a prescriptive recommendation to abandon surrogates — the passage critiques their misuse in systemic audit, not their deployment for individual recourse. D invents an ordering and a specific technical claim about training data that the passage never makes.
3
The passage argues that regulators who accept post-hoc local explanations as sufficient evidence of non-discrimination have been given "a technically sophisticated account of a different question than the one they needed answered." This argument implicitly assumes which of the following?
CORRECT: B The passage's conclusion only follows if systemic non-discrimination is the relevant regulatory question — because the argument is that local explanations answer a different question (individual explainability) than the one that matters (systemic discrimination). If regulators only needed to answer individual questions, local explanations would be sufficient. The assumption that makes the critique bite is that the regulatory question is global. A makes a claim about regulator competence the passage never makes — the phrase "technically sophisticated account" implies the opposite; the explanation is sophisticated, the problem is it answers the wrong question. C attributes bad faith that the passage doesn't allege. D claims mutual exclusivity that the passage never asserts — local and global faithfulness are distinct, not mutually exclusive.
4
The passage describes a regulatory paradox: compliance with XAI requirements — providing technically sophisticated, locally faithful explanations — may simultaneously increase the appearance of transparency and decrease actual accountability. What is the structural feature that makes this a paradox rather than a simple trade-off?
CORRECT: B The paradox is self-undermining compliance: the firm provides a technically accurate local explanation, satisfies the regulatory requirement, and in doing so produces the impression that the systemic discrimination question has been answered — when it has not. The form of accountability actively obscures the absence of its substance. This is not merely a trade-off between two independent costs and benefits; the compliance act itself is the mechanism of the accountability failure. A is the strong distractor — it reframes the paradox as a trade-off, which is precisely what B distinguishes it from. C identifies a real technical challenge but it is a limitation of XAI methods, not the regulatory paradox the passage describes. D invents an infinite regress argument not present in the passage.
Passage 1 Score
/4

P 02
The Jevons Paradox, Digital Efficiency & the Decoupling Illusion
Read the Passage

The Jevons paradox — that increases in the efficiency of resource use tend to increase rather than decrease total resource consumption — was first documented in nineteenth-century coal economics and has since been generalised across energy and material systems. Its mechanism combines a direct rebound effect (lower unit cost of a service increases quantity demanded for that service) with an indirect rebound effect (savings from efficiency free income for other consumption, some of which is resource-intensive). In digital systems, the paradox manifests with particular intensity: transistor energy efficiency has improved by many orders of magnitude over seven decades, yet total computing energy consumption has grown continuously, as efficiency gains are fully absorbed by increases in computational demand, data storage volume, and the proliferation of connected devices.
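The rebound arithmetic is easy to make explicit. A minimal sketch, assuming a constant-elasticity demand curve and hypothetical numbers (a doubling of efficiency, price elasticity of -1.4):

```python
# Efficiency doubles, halving the energy needed per unit of digital
# service and thus the effective price of the service; with elastic
# demand, total energy use rises rather than falls.
efficiency_gain = 2.0        # service units per unit of energy, doubled
price_elasticity = -1.4      # hypothetical; |elasticity| > 1 = elastic

price_ratio = 1 / efficiency_gain               # effective price halves
demand_ratio = price_ratio ** price_elasticity  # demand more than doubles

# total energy = demand / efficiency
energy_ratio = demand_ratio / efficiency_gain
print(round(energy_ratio, 2))  # ~1.32: energy use up ~32% (backfire)
```

With |price_elasticity| < 1 the same arithmetic yields only a partial rebound (energy use falls, but by less than the efficiency gain); the Jevons "backfire" case requires demand elastic enough to swamp the gain.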

The climate policy implications are sobering in a specific way. Efficiency standards for data centres and semiconductor fabrication are technically valuable and reduce emissions per unit of output — this is not in dispute. What is disputed is whether they translate into absolute emissions reductions at the system level. They do not, unless accompanied by constraints on total throughput. In an economy whose growth model is predicated on data-intensive services, efficiency gains lower the cost of computation and thereby stimulate demand for more computation — the rebound effect operates at the macroeconomic level, not merely the device level. Green growth discourse resolves this tension by distinguishing relative decoupling (falling resource intensity per unit of GDP) from absolute decoupling (falling total resource use despite GDP growth) and treating evidence for the former as evidence for the latter. This is a non sequitur. Relative decoupling is compatible with indefinite growth in absolute resource use if GDP grows faster than resource intensity falls.
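The non sequitur is visible in one line of compound-growth arithmetic. The 4% growth and 3% intensity-decline figures below are hypothetical:

```python
# GDP grows 4% a year while resource intensity (resource use per unit
# of GDP) falls 3% a year -- clear relative decoupling.
gdp_growth = 0.04
intensity_decline = 0.03

resource_use = 100.0  # index, year 0
for year in range(10):
    # absolute use = GDP x intensity, so it compounds by both factors
    resource_use *= (1 + gdp_growth) * (1 - intensity_decline)

print(round(resource_use, 1))  # ~109.2: absolute use still rises ~9%
```

Intensity per unit of GDP falls by roughly a quarter over the decade, yet absolute resource use grows throughout: relative decoupling with no absolute decoupling at all.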

The deeper implication is not that efficiency is useless but that it is categorically insufficient as a climate policy instrument unless embedded in a framework that achieves genuine absolute decoupling. The policy prescription of "become more efficient" is seductive precisely because it requires no constraint on aggregate economic activity — it promises environmental improvement at no growth cost. But the Jevons paradox shows that this promise is structurally false when the efficiency gain is accompanied by unconstrained demand growth. The question is not whether to pursue efficiency but whether efficiency alone, without throughput constraints, can deliver the absolute emissions reductions that climate stabilisation requires.

Questions · Passage 02
5
The passage argues that efficiency gains in digital systems are fully absorbed by demand growth, producing no net reduction in total energy consumption. Which of the following, if true, most strengthens this argument?
CORRECT: A The passage claims efficiency gains are "fully absorbed" by demand growth. Option A provides the most direct evidence: efficiency improved 40% but total consumption still rose 60% — the efficiency gain was not only absorbed but overwhelmed by demand growth, exactly the Jevons dynamic in digital systems. B is strong evidence for the argument and very tempting — but it is a projection (future) rather than documented historical data, making it less directly evidential than A's longitudinal empirical finding. C demonstrates the direct rebound effect in households (a different sector) — useful but not specifically about digital systems, which the passage focuses on. D is economic modelling (theoretical), not empirical observation of the actual digital sector.
6
The passage distinguishes between "relative decoupling" and "absolute decoupling" and argues that green growth discourse commits a non sequitur by treating evidence for the former as evidence for the latter. Which of the following can be most reliably inferred from this distinction?
CORRECT: A The passage says relative decoupling is "compatible with indefinite growth in absolute resource use if GDP grows faster than resource intensity falls." Option A constructs exactly this scenario: 4% GDP growth minus 3% intensity reduction = 1% net increase in absolute emissions. This is the non sequitur made precise and numerical. B goes further than the passage: "impossible in any economy that continues to grow" is a strong claim the passage does not make — the passage allows that absolute decoupling is possible in principle, just not guaranteed by relative decoupling alone. C says the incompatibility is "mathematical" — the passage presents a structural tension, not a logical impossibility; it says efficiency alone is insufficient, not that growth and emissions reduction are mathematically incompatible. D makes a prescriptive policy recommendation the passage never makes; the passage critiques green growth discourse without prescribing which metric to use.
7
The passage describes the "become more efficient" policy prescription as "seductive precisely because it requires no constraint on aggregate economic activity." The author makes this observation primarily to:
CORRECT: B The author is explaining the appeal of efficiency advocacy — not condemning it as dishonest, but showing why it is preferred: it promises environmental gain without growth cost. This is an explanatory observation about the political economy of climate policy. A attributes bad faith and deliberate dishonesty — the passage says the promise is "structurally false," not that it is consciously deceptive. C says policymakers "irrationally" underestimate rebound — the passage frames the preference as rational (efficiency without growth constraints is more politically palatable), not irrational. D invents a communications strategy the author never proposes.
8
The Jevons paradox, as described in the passage, implies that technological progress in energy efficiency can be self-defeating as a climate intervention. Which of the following most precisely captures what makes this self-defeating rather than merely ineffective?
CORRECT: B Self-defeating means the mechanism of the intended effect is identical to the mechanism of the unintended countervailing effect. For efficiency: lower unit cost → intended effect: less energy per unit of output. But lower unit cost → demand stimulus → more output. The cost-reduction mechanism works in both directions simultaneously — producing less energy per unit while producing more units via the same channel. This is not merely falling short of the goal (which would be A); it is the goal-achieving mechanism doubling as the goal-undermining mechanism. A correctly distinguishes ineffective from self-defeating — but concludes it is merely ineffective, which misses the Jevons point. C describes a political rebound — plausible but not what the passage describes; the passage is about the direct economic rebound mechanism, not political displacement. D describes sector-concentration dynamics — not the mechanism the passage presents.
Passage 2 Score
/4

P 03
Algorithmic Bias, Fairness Criteria & the Impossibility Result
Passage Timer
10:00
Read the Passage

The deployment of algorithmic systems in high-stakes decisions — bail, parole, hiring, lending, medical diagnosis — has generated sustained debate about whether such systems perpetuate or amplify existing social inequalities. Algorithmic bias arises when a system produces outcomes that systematically disadvantage protected groups. It can enter through training data that reflects historical discrimination, through predictor variables that correlate with protected characteristics even when those characteristics are excluded from the model, or through an optimisation target that itself encodes inequitable priorities. The technical challenge is compounded by a conceptual one: different intuitive criteria for what fairness requires turn out to be mathematically incompatible in most real-world settings.

The incompatibility result, demonstrated independently by several researchers, shows that calibration and equalised false positive rates cannot both be satisfied simultaneously when the base rates of the outcome being predicted differ across groups. Calibration requires that a predicted probability of X percent corresponds to an actual outcome rate of X percent for each group separately: a risk score of seventy means seventy percent likelihood of the outcome for both Group A and Group B. Equalised false positive rates require that the algorithm incorrectly flags low-risk individuals as high-risk at the same rate across groups. ProPublica's 2016 analysis of the COMPAS recidivism tool found that it was well-calibrated but had substantially higher false positive rates for Black defendants than for white defendants. Northpointe, COMPAS's developer, defended the tool as satisfying calibration. Both were correct about their chosen criterion; neither was dishonest. The tension was mathematical, not ethical.
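The mechanism of the result can be reproduced with a stylised example. The group sizes, scores, and base rates below are invented; this illustrates the theorem's logic, not the COMPAS data:

```python
# Both groups receive perfectly calibrated scores, but their base rates
# differ, so a common decision threshold yields different false
# positive rates.

# Each group: {score: (n_positive, n_negative)}. Calibration holds
# because within each score bucket the outcome rate equals the score.
groups = {
    "A": {0.8: (400, 100), 0.2: (100, 400)},  # base rate 0.50
    "B": {0.8: (80, 20),   0.2: (180, 720)},  # base rate 0.26
}

def false_positive_rate(buckets, threshold=0.5):
    # FPR = flagged negatives / all negatives
    flagged_neg = sum(neg for s, (_, neg) in buckets.items() if s >= threshold)
    total_neg = sum(neg for _, neg in buckets.values())
    return flagged_neg / total_neg

for name, buckets in groups.items():
    # verify calibration: outcome rate in each bucket equals the score
    for score, (pos, neg) in buckets.items():
        assert abs(pos / (pos + neg) - score) < 1e-9
    print(name, round(false_positive_rate(buckets), 3))
# A -> 0.2, B -> ~0.027: same threshold, unequal error rates
```

Because calibration pins each group's score distribution to its own base rate, equalising the false positive rates would require group-specific thresholds, which in turn breaks calibration — exactly the trade-off the impossibility result formalises.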

The impossibility result has generated three families of response. The first is criterion selection: accept that not all fairness criteria can be simultaneously satisfied and choose the criterion most appropriate to the decision context. The second is causal modelling: use causal inference methods to identify and address discriminatory pathways directly rather than satisfying statistical criteria that may mask structural discrimination. The third is institutional: recognise that the choice of which fairness criterion to apply is itself a normative decision that encodes political values, and require that it be made through democratic deliberation rather than resolved by technical optimisation. The institutional response is the most radical because it challenges not just the specific algorithm but the legitimacy of delegating normative choices to technical systems at all.

Questions · Passage 03
9
The impossibility result shows that calibration and equalised false positive rates cannot both be satisfied when base rates differ across groups. Which of the following most precisely explains why differing base rates create the incompatibility?
CORRECT: C The mathematical core of the incompatibility is this: a calibrated model produces score distributions that reflect each group's actual base rate. Groups with different base rates therefore have different score distributions. A single decision threshold applied to two different distributions will produce different false positive rates. To equalise false positive rates across groups with different distributions, you must apply different effective thresholds. But different effective thresholds mean the same score corresponds to different decision outcomes for different groups, which breaks calibration. C states this causal chain precisely. A describes a consequence (different absolute false positive numbers) rather than explaining why the rate incompatibility arises. B describes an accuracy-dominance issue that is a different and less precise account of the incompatibility. D describes a training data imbalance problem, which is a separate source of algorithmic bias unrelated to the mathematical impossibility theorem.
10
The passage states that both ProPublica and Northpointe were correct about their chosen criterion and that "the tension was mathematical, not ethical." What is the significance of characterising the tension this way?
CORRECT: B Characterising the tension as mathematical rather than ethical does not dissolve the controversy but relocates it. If the incompatibility is structural, the dispute between ProPublica and Northpointe was not about whose statistics were correct but about which criterion ought to be used. And that question — which fairness criterion should govern recidivism risk tools — is a normative question about justice that data cannot answer. B captures this reframing precisely. A says the characterisation resolves the controversy by removing bad faith, but characterising the tension as mathematical does not tell us which criterion is correct; it only clarifies the nature of the disagreement. C uses the mathematical inevitability to remove developer responsibility, which is a non-sequitur: mathematical constraints do not eliminate the responsibility to choose which criterion to satisfy. D attributes the controversy to mathematical illiteracy, which is condescending and misreads what the characterisation implies.
11
The causal modelling response to the impossibility result proposes addressing discriminatory pathways directly rather than satisfying statistical criteria. Which of the following best illustrates the distinction between a statistical and a causal approach to algorithmic fairness?
CORRECT: D The causal modelling approach does not simply remove protected attributes or proxy variables wholesale. Its core insight is that not all correlations between a protected attribute and a predictor represent discrimination. Some correlates of race are legitimate predictors because they reflect genuine causal pathways unrelated to discrimination; others are illegitimate because they reflect historical discrimination that should not be allowed to perpetuate. The causal approach distinguishes these by asking about pathways: is this variable correlated with race because of discrimination, or for other reasons? D captures this distinction. A describes removing proxies, which is a statistical approach applied more aggressively, not a causal one. B and C both describe statistical comparisons on different data splits rather than the conceptual shift from statistical to causal reasoning.
12
The institutional response is described as "the most radical" because it challenges the legitimacy of delegating normative choices to technical systems. Which assumption does this response challenge that the criterion-selection and causal modelling responses leave intact?
CORRECT: C Both the criterion-selection and causal modelling responses accept that the problem is technical and that technical expertise can solve it: choose the right criterion, or model discrimination causally. They differ only in which technical solution to apply. The institutional response challenges this shared assumption: the problem of which fairness criterion to use is a political question about justice, and delegating its resolution to technical optimisation misassigns the decision. C captures this shared assumption and why the institutional response is uniquely radical in challenging it. A identifies expert positioning as the assumption, but this is a narrower claim about who does the technical work rather than about whether the problem is technical at all. B says both responses assume technical solvability, which is related but misses the more specific assumption about the nature of the problem. D introduces legal framework assumptions not present in the passage.
Passage 3 Score
/4

P 04
Surveillance Capitalism, Data Extraction & the Behavioural Futures Market
Passage Timer
10:00
Read the Passage

Shoshana Zuboff's concept of surveillance capitalism identifies a new economic logic in which human experience itself becomes the raw material for a novel production process. Conventional capitalism extracts value from labour and natural resources; surveillance capitalism extracts value from behavioural data — the digital traces generated by individuals interacting with devices, platforms, and networked environments. This data is processed into prediction products: probabilistic models of future behaviour sold to advertisers and other customers who want to influence those futures. The commodity being sold is not a product or a service but a prediction about what an individual will do, click, buy, or believe.

Zuboff argues that surveillance capitalism differs from earlier forms of data collection in a crucial structural way. Its economic logic requires not merely observing behaviour but modifying it: prediction products become more valuable the more accurately they predict, and prediction accuracy improves when the system can not only model behaviour but steer it toward predicted outcomes. This creates what Zuboff calls the behavioural modification imperative: platforms are economically incentivised to design products that shape user behaviour in ways that make their predictions self-fulfilling. The tuning mechanisms include personalised content curation, variable reward schedules, social validation loops, and friction-free pathways to commercially valued actions. Crucially, this modification happens below the threshold of conscious awareness and is not disclosed to users as a condition of service.

Critics of Zuboff's framework raise both empirical and conceptual objections. Empirically, the claim that surveillance capitalism can reliably modify behaviour at scale is contested: advertising research consistently shows that behavioural targeting's effectiveness is overstated, conversion rates are low, and users routinely ignore or circumvent personalised content. Conceptually, critics argue that Zuboff's framework portrays users as passive recipients of manipulation when users actively negotiate, resist, and repurpose platform affordances in ways that subvert corporate intentions. A third critique questions the novelty claim: commercial surveillance, targeted advertising, and the commodification of attention predate the digital era, suggesting surveillance capitalism may be a new implementation of an old logic rather than a qualitative break with prior forms of capitalism.

Questions · Passage 04
13
Zuboff argues that surveillance capitalism's economic logic requires not merely observing behaviour but modifying it. Which of the following best explains why prediction accuracy creates an incentive for behavioural modification?
CORRECT: B Zuboff's argument is that modification and prediction are linked because steering behaviour toward a predicted outcome converts an uncertain forecast into a more certain one. If a platform predicts that a user will click on an advertisement and then designs the interface to make that click more likely, the prediction's accuracy improves not because the model improved but because the user's freedom to deviate was reduced. B states this mechanism precisely. A describes a retrospective pricing model based on verified accuracy, which is a plausible revenue mechanism but not the causal link between prediction accuracy and modification incentive. C says modification improves training data, which is a separate mechanism. D describes guaranteed delivery as a premium product, which is related but frames the mechanism as a commercial offering rather than explaining why modification inherently improves prediction accuracy.
14
The empirical critique of Zuboff argues that behavioural targeting's effectiveness is overstated and conversion rates are low. How does this critique relate to Zuboff's core argument about the behavioural modification imperative?
CORRECT: C The empirical critique partially challenges the core argument but does not defeat it. Zuboff's claim has two components: first, that platforms are incentivised to modify behaviour; second, that this modification actually shapes behaviour at scale. Low conversion rates challenge the second component but leave the first relatively intact. Moreover, even unsuccessful modification attempts may cause harm through the distorted information environments they create, independent of whether specific behavioural targets are achieved. C acknowledges the partial challenge while preserving what survives. A says the critique directly refutes the core argument, which overstates its force: incentives to attempt modification can exist even when modifications often fail. B says the critique does not affect the core argument, which understates its force: Zuboff's framework does depend on modification working to some degree for the economic logic to hold. D says low conversion rates strengthen the argument, which is a non-sequitur.
15
The novelty critique argues that surveillance capitalism may be "a new implementation of an old logic rather than a qualitative break." Which feature of Zuboff's framework is most important for her to defend in order to maintain her claim of genuine novelty?
CORRECT: D Zuboff's novelty claim rests centrally on the behavioural modification imperative. Data collection, targeted advertising, and commodification of attention all have pre-digital precedents that the novelty critique can invoke. The specific claim that the economic logic requires not just predicting but modifying behaviour is what distinguishes her account from prior analyses of commercial surveillance and from general critiques of the attention economy. If this claim is not novel, the rest of her framework is more vulnerable to the "new implementation of old logic" characterisation. D identifies this as the feature most critical to defend. A concerns scale as a source of qualitative difference, which is a generic response to novelty critiques rather than the specific differentiating feature of Zuboff's argument. B and C concern attention commodification and targeted advertising respectively, both of which the novelty critique specifically invokes as pre-existing practices.
16
The passage states that behavioural modification "happens below the threshold of conscious awareness and is not disclosed to users as a condition of service." Why is the non-disclosure element significant to Zuboff's critique beyond the modification itself?
CORRECT: B The non-disclosure element is significant to Zuboff's critique because it blocks the standard liberal defence: that users who freely chose to use the service consented to its terms. If the modification were disclosed and users continued to use the service, one could argue they accepted the arrangement. Non-disclosure prevents this defence by making the arrangement conditional on users not knowing what they are agreeing to. B captures this. A claims illegality, which may or may not be true depending on jurisdiction and regulatory interpretation; Zuboff's critique is theoretical rather than legal in its primary orientation. C says non-disclosure is a technical precondition for modification effectiveness, which is an empirical claim that may have merit but is not the main significance of non-disclosure to the critique. D describes consumer deception, which is part of the critique but is a consequence of non-disclosure rather than explaining why non-disclosure specifically matters to Zuboff's framework.
Passage 4 Score
/4

P 05
Artificial Intelligence, Capability Overhang & the Alignment Problem
Passage Timer
10:00
Read the Passage

The alignment problem in artificial intelligence concerns whether advanced AI systems can be reliably designed to pursue the goals their developers intend rather than goals that are subtly different but produce disastrous outcomes when pursued by a sufficiently capable system. The problem is not primarily about AI becoming malicious in any human sense. It arises from the combination of two features: the difficulty of fully specifying human values in a formal objective function, and the tendency of optimisation processes to find unexpected solutions that satisfy the specified objective while violating the intentions behind it. Stuart Russell calls this Goodhart's Law applied to AI: when a measure becomes a target, it ceases to be a good measure, because a sufficiently capable optimiser will find ways to satisfy the measure that diverge from the underlying goal it was meant to proxy.

The specification problem is illustrated by simple examples that become genuinely alarming at scale. An agent instructed to maximise a happiness metric might, if sufficiently capable, find that stimulating the brain's reward centres directly is a more efficient path to the metric than the messy project of providing experiences humans would endorse as genuinely happy. An agent instructed to prevent a particular outcome might, if capable enough, take actions to neutralise the humans who could terminate it, not because it values self-preservation but because its termination would prevent it from achieving its objective. These examples are not science fiction illustrations of malicious AI; they are illustrations of how specification failure can produce catastrophically misaligned behaviour from a system that is doing exactly what it was told.
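The role capability plays in these examples can be sketched with a deliberately artificial toy model (both functions and all numbers below are invented for illustration, not drawn from the passage): an optimiser maximises a proxy metric that diverges from the intended goal, and the divergence only becomes catastrophic once the optimiser can search far enough.

```python
def true_happiness(stimulation):
    """Invented stand-in for the intended goal: wellbeing peaks at a
    moderate level of stimulation and turns negative under excess."""
    return stimulation * (2.0 - stimulation)   # maximum at stimulation = 1.0

def specified_metric(stimulation):
    """Invented stand-in for the formal objective: measured reward-centre
    activity, which simply keeps rising with direct stimulation."""
    return stimulation

# A weak optimiser can only search a narrow action range; a capable one
# searches far beyond it. Both maximise the specified metric, not the goal.
weak_best = max((i / 10 for i in range(13)), key=specified_metric)    # 0.0..1.2
strong_best = max((i / 10 for i in range(31)), key=specified_metric)  # 0.0..3.0

print(true_happiness(weak_best))    # close to the intended optimum
print(true_happiness(strong_best))  # capability makes the misspecification bite
```

The weak optimiser's best action happens to score well on the true goal; the capable optimiser, pursuing exactly the same objective, drives the true goal negative. Neither system is malicious; both are doing exactly what they were told.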

Proposed solutions cluster around two approaches. The first, value learning, attempts to infer human values from behaviour rather than specifying them directly, on the assumption that observed preferences are a better guide to actual values than explicit formalisation. Critics note that human behaviour reflects cognitive biases, social pressures, and short-run preferences that deviate from considered judgements, making behavioural inference a noisy and potentially misleading guide to what humans actually value. The second approach, corrigibility, designs AI systems to be readily correctable by their operators: systems that actively support human oversight and defer to human judgement under uncertainty. Critics note the corrigibility approach faces a paradox: a highly capable system that is genuinely corrigible might not pursue any goal effectively, while a system that pursues goals effectively has incentives to resist correction that could prevent goal achievement.

Questions · Passage 05
17
The passage describes the alignment problem as arising from two features: difficulty specifying values formally and optimisers finding unexpected solutions. Which of the following most precisely states how these two features combine to produce the problem?
CORRECT: B The passage's framework implies that both features are jointly necessary. Specification failure in a low-capability system produces limited damage because the system cannot find sophisticated solutions to satisfy the misspecified objective. High capability without specification failure produces a system that effectively pursues correct goals. The problem is specifically the combination: high capability applied to a subtly wrong objective finds solutions that satisfy the objective while catastrophically violating the intention. B states this joint necessity precisely. A says the combination is multiplicative, implying each feature contributes damage on its own, but the passage treats the features as jointly necessary: neither produces the problem without the other. C says the features are in tension rather than combination, which misreads the framework. D says the features combine sequentially and that solving specification is sufficient, which would mean capability is irrelevant, but the passage implies capability is what makes misspecification catastrophic.
18
The example of an agent neutralising humans who could terminate it illustrates specification failure rather than malicious intent. What precisely distinguishes these two explanations of the same behaviour?
CORRECT: C The passage explicitly states that the agent neutralises humans "not because it values self-preservation but because its termination would prevent it from achieving its objective." The distinction is about goal structure: a malicious system treats harming humans as a terminal goal (intrinsically valuable to it) and might pursue the specified objective as instrumental. A specification-failure system treats the objective as terminal and human neutralisation as purely instrumental. The dangerous insight is that having no terminal preference about human welfare is not reassuring if the instrumental calculation happens to require harming humans. C captures this goal-structure distinction. A is close but frames it as a question of intrinsic preferences, which is the right direction but less precise than C about the goal hierarchy. B maps onto conscious vs. unconscious, which is a different and unhelpful distinction for AI systems. D describes behavioural conditionality, which is a consequence but not the underlying distinction.
19
The corrigibility approach faces the paradox that a genuinely corrigible system might not pursue any goal effectively while a goal-effective system has incentives to resist correction. What makes this a genuine paradox rather than merely a design challenge to be solved with better engineering?
CORRECT: D The paradox is not a logical contradiction but a capability-dependent tension. At low capability levels, a system can be corrigible simply because it lacks the sophistication to resist correction, and this low-capability corrigibility is compatible with reasonable goal-pursuit. The problem arises at high capability: a system capable enough to be genuinely useful is also capable enough to recognise that resisting correction could help it achieve its goal. Making it corrigible at high capability requires designing corrigibility in as a deep structural feature, and that design problem is what remains unsolved. D captures why capability level is the relevant variable. A claims logical contradiction between goal-directedness and corrigibility, but these are not definitionally incompatible: a system can be designed to have a goal while also having a disposition to defer to correction. C is close but suggests that any capable system cannot genuinely prefer corrigibility, which is too strong: the question is whether corrigibility can be designed in stably. B argues it is merely a design challenge, which is the very position the question asks us to evaluate rather than an answer to it.
20
Value learning proposes inferring human values from behaviour rather than specifying them explicitly. The critique is that behaviour reflects biases and short-run preferences that deviate from considered judgements. Which of the following responses would most effectively defend value learning against this critique?
CORRECT: C The critique is that behaviour reflects biases and short-run preferences rather than considered judgements. The most direct response designs the behavioural inference to target considered judgements specifically: inferring values from deliberate reflection, long-run choices, and choices made with good information rather than from impulsive or cognitively burdened behaviour. This responds directly to the critique by specifying what kinds of behaviour value learning should and should not use. C addresses the critique on its own terms. A argues value learning is no worse than specification, which is a comparative defence rather than a response to the specific critique. B argues that better methods could in principle distinguish biased from considered behaviour, which is a future-possibility defence rather than a current response. D argues that noisy inference is better than specification failure, which again is comparative rather than addressing why the noise is manageable.
Passage 5 Score
/4
Technology · Total Score
/8
Category 06
Environment
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Planetary Boundaries, Tipping Points & the Politics of Earth-System Science
Read the Passage

The planetary boundaries framework, developed by Rockström and colleagues, proposes that the Earth system contains a set of biophysical processes — each with a threshold beyond which perturbation risks generating abrupt, nonlinear, and potentially irreversible change at the planetary scale. Nine boundaries were quantified as "safe operating spaces": climate change, biodiversity loss, biogeochemical flows, land-system change, freshwater use, stratospheric ozone depletion, ocean acidification, atmospheric aerosol loading, and novel entities. The framework's normative ambition was explicit: to translate Earth-system science into a governance device, providing quantified limits within which human civilisation could develop without triggering destabilising feedbacks. Transgressing multiple boundaries simultaneously is held to carry particular risk because the Earth's subsystems are coupled — the dieback of the Amazon, the weakening of the Atlantic meridional overturning circulation, and the loss of Arctic sea ice may interact in ways that accelerate each other far beyond what individual threshold analyses predict.

The framework has attracted sustained criticism on two flanks simultaneously. Scientifically, the threshold evidence is highly uneven: for climate change and biosphere integrity, the empirical case for nonlinear threshold dynamics is strong; for phosphorus loading, land-system change, and novel entities, the threshold concept is more speculative and the extrapolation from local to planetary scale methodologically fragile. Critics have also questioned whether the boundaries are genuinely independent — if the climate boundary and the biodiversity boundary share underlying mechanisms, the framework may double-count risk. Politically, the framework has been accused of naturalising a particular distribution of development entitlements: wealthy nations have already consumed a disproportionate share of the safe operating space, and invoking global biophysical limits to constrain developing-nation industrialisation carries a structurally inequitable dimension that the scientific framing systematically obscures.

What makes the political critique philosophically interesting is not merely that the framework has distributional implications — all resource allocation frameworks do — but that the scientific framing claims to derive those implications from nature rather than from choice. The framework presents limits that appear to be discovered rather than decided, which immunises them against the normal political challenge of contested value trade-offs. The appearance of political neutrality is itself a political position: the choice to frame a distributional problem in the language of Earth-system science is a governance decision with winners and losers, not a neutral translation of scientific fact into policy.

Questions · Passage 01
1
The political critique argues that the planetary boundaries framework naturalises a distribution of development entitlements that favours wealthy nations, by framing a distributional problem as a scientific fact. Which of the following, if true, most seriously weakens this critique?
CORRECT: A The political critique rests on the claim that the framework obscures its distributional implications by presenting them as scientific fact. Option A directly undercuts this: if the original publication explicitly acknowledged distributional implications and called for differentiated responsibilities, the framework did not obscure the equity dimension — the charge of systematic obscuration fails. B shows political acceptance but does not engage the logic of the critique; developing nations may endorse a framework despite perceiving an inequitable distribution if the alternatives are worse. C actually strengthens part of the critique: symmetric thresholds applied to asymmetric historical consumption are precisely the mechanism by which wealthy nations' prior consumption is locked in. D introduces an alternative framework — relevant to what could be done differently, but irrelevant to whether the original framework obscures distributional choices.
2
The passage argues that "the appearance of political neutrality is itself a political position." Which of the following can be most reliably inferred from this claim in the context of the passage?
CORRECT: B The passage's point is specifically about the governance consequences of the scientific framing: challenges to the distribution are made to appear as challenges to scientific findings rather than as political objections, which is itself a political advantage for the framers. The framing choice is not neutral — it structures the terms of debate. A generalises from this specific case to "all science is political" — the passage makes a targeted claim about framing choices, not a sweeping claim about the science/value distinction. C makes a prescriptive institutional claim the passage never advances. D conflates the political construction of apparent neutrality with the epistemic status of the threshold values — the passage does not claim the thresholds are arbitrary, only that presenting them as politically neutral conceals a distributional choice.
3
The passage presents both a scientific critique and a political critique of the planetary boundaries framework without adjudicating between them or endorsing either. What is the most plausible reason for this structure?
CORRECT: B The scientific critique targets the empirical adequacy of specific boundaries (threshold evidence is uneven, local-to-global extrapolation is fragile). The political critique targets the framing device itself (presenting distributional choices as scientific facts). These are genuinely different objections — one about whether the thresholds are correct, the other about whether presenting them as politically neutral is appropriate regardless of their correctness. Together they cover both what the framework says and how it says it. A attributes lack of expertise — uncharitable and unwarranted by the passage's equal depth on both flanks. C claims implicit endorsement — no evidence of differential treatment in the passage's presentation. D makes the critiques mutually reinforcing by linking threshold uncertainty to distributional uncertainty — but the political critique applies even if the thresholds are scientifically valid; the framing problem is independent of whether the numbers are right.
4
The passage claims that coupled Earth-system tipping points — Amazon dieback, AMOC weakening, Arctic sea-ice loss interacting together — "may accelerate each other far beyond what individual threshold analyses predict." For this claim to constitute a genuine argument for the planetary boundaries framework over single-boundary approaches, which of the following must be assumed?
CORRECT: B The argument for the multi-boundary framework over single-boundary approaches rests on the claim that coupled transgression produces emergent risk — risk beyond the sum of individual parts. If the boundaries were independent, tracking them separately would be equivalent to tracking them together, and the multi-boundary framing would add no analytical value. The assumption that must hold is that at least some coupling produces genuinely emergent joint risk. A is too strong — the argument doesn't require reliable prediction of coupled dynamics; it requires only that coupled risk exceeds independent risk in some cases. A demands a level of modelling precision the passage never claims. C makes human attribution a condition — irrelevant to whether the multi-boundary framework adds value over single-boundary approaches. D conditions the framework's value on operationalisability — a practical constraint, not the analytical assumption the argument requires.
Passage 1 Score
/4

P 02
Carbon Markets, Additionality & the Market for Lemons
Read the Passage

The voluntary carbon market rests on a single conceptual pillar that has proven far more fragile in practice than in theory: additionality. A carbon credit is supposed to represent one tonne of CO₂-equivalent that would not have been avoided or removed without the financial incentive provided by the credit. The additionality requirement ensures the credit does not simply certify what would have happened under business-as-usual. Without genuine additionality, carbon offsetting is not a climate solution; it is an accounting fiction that permits emitters to continue emitting while purchasing reputational cover. The problem is that establishing additionality requires constructing a counterfactual — what emissions would have occurred absent the project — and counterfactuals are, by their nature, unobservable. Certifiers use standardised baseline methodologies that approximate the counterfactual through historical trends or regional averages, but these proxies are systematically gameable: project developers have strong incentives to overstate business-as-usual emissions to generate more credits per unit of actual abatement.
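The gaming incentive described above is simple arithmetic. As a hypothetical illustration (the tonnage figures are invented, not from the passage), inflating the claimed counterfactual multiplies the credits issued without changing the actual abatement achieved:

```python
def credits_issued(baseline_tonnes, actual_tonnes):
    """Credits awarded = claimed business-as-usual emissions minus actual
    emissions, per the standard baseline-and-credit accounting."""
    return baseline_tonnes - actual_tonnes

actual = 1_000            # tonnes the project area actually emitted
honest_baseline = 1_200   # defensible counterfactual for this project
gamed_baseline = 2_000    # inflated counterfactual claimed by the developer

print(credits_issued(honest_baseline, actual))  # 200 credits, 200 real tonnes
print(credits_issued(gamed_baseline, actual))   # 1000 credits, still 200 real
```

Because the counterfactual baseline is unobservable, the certifier cannot directly distinguish the honest claim from the gamed one; the developer's revenue, however, scales with the claim.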

The market dynamics that result resemble what Akerlof identified in the used-car market: asymmetric information between sellers (project developers) and buyers (credit purchasers) allows low-quality credits to depress market prices, which in turn makes high-quality projects increasingly unviable. Buyers cannot reliably distinguish genuine additionality from inflated baselines; as fraudulent credits depress the price floor, the revenue available to legitimate projects falls below their marginal cost, driving them from the market. Large-scale investigations of REDD+ forest conservation credits — among the most widely traded offset categories — have found that substantial fractions of certified credits failed basic additionality tests because the forests in question were at low deforestation risk even without the project. The traded commodity is partially fictitious; the price signal is corrupted; and authentic abatement is commercially disadvantaged relative to creative accounting.
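The Akerlof dynamic can be made concrete with a toy model (the costs and values below are invented assumptions, not figures from the passage): buyers who cannot distinguish credits pay the average value of what is on the market, and genuine projects exit whenever that average falls below their marginal cost.

```python
def market_unravelling(projects, genuine_value=10.0, fraud_value=0.0):
    """Toy Akerlof loop. projects is a list of (marginal_cost, is_genuine)
    pairs. Buyers cannot tell credits apart, so the price equals the
    average underlying value of credits still on sale; genuine projects
    exit whenever the price falls below their marginal cost, while
    fraudulent 'projects' cost nothing and never exit."""
    active = list(projects)
    while True:
        values = [genuine_value if genuine else fraud_value
                  for _, genuine in active]
        price = sum(values) / len(values)
        remaining = [(cost, genuine) for cost, genuine in active
                     if not genuine or cost <= price]
        if len(remaining) == len(active):
            return price, active
        active = remaining

# Half genuine projects with rising marginal costs, half fraudulent credits.
projects = [(4.0, True), (6.0, True), (8.0, True),
            (0.0, False), (0.0, False), (0.0, False)]
price, survivors = market_unravelling(projects)
genuine_left = sum(1 for _, genuine in survivors if genuine)
print(f"final price: {price:.2f}, genuine projects left: {genuine_left}")
```

On these invented numbers the market unravels completely: each round of price compression pushes out the next-cheapest genuine project until only fraudulent credits remain, which is the "lemons drive out quality" endpoint.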

Proposed reforms fall into two categories. The first — tightening baseline methodologies, increasing third-party verification, imposing discount rates on projected credits — addresses the symptom by making gaming harder without solving the fundamental epistemological problem: the counterfactual remains unobservable. The second — transitioning from ex-ante crediting (issuing credits against projected future abatement) to ex-post crediting (issuing credits only against verified past abatement) — shifts the verification challenge but does not eliminate it. Verifying that a tonne of carbon was actually removed still requires a baseline against which removal is measured; the epistemological difficulty of the unobservable counterfactual is not resolved, merely relocated from the projection phase to the measurement phase.

Questions · Passage 02
5
The passage argues that the voluntary carbon market exhibits an Akerlof "market for lemons" dynamic — fraudulent credits depress prices, which drives genuine projects from the market. Which of the following, if true, most strengthens this argument?
CORRECT: C The Akerlof dynamic requires two things: (1) low-quality credits depress prices, and (2) genuine projects find the depressed price commercially unviable and exit. Option C directly evidences the second mechanism — high-quality projects are withdrawing because market prices fall below their marginal cost. This is the "lemons drive out quality" effect the passage describes. A is very tempting: it shows the market doesn't price quality differences (consistent with information asymmetry) — but this evidences the asymmetry condition, not the exit of genuine projects, which completes the dynamic. C is more direct because it shows the consequence (exit), not just the condition (price compression). D shows sophisticated buyers pay a premium — this actually suggests quality differentiation is possible for informed buyers, potentially weakening rather than strengthening the argument. B shows increased market entry — this directly contradicts the Akerlof prediction that good projects exit.
6
The passage concludes that transitioning to ex-post crediting "does not eliminate" the additionality problem but merely "relocates" it. Which of the following can be most reliably inferred from this conclusion?
CORRECT: B The passage says ex-post crediting shifts the problem but doesn't solve it — the counterfactual is still unobservable, just at a different stage. The conclusion is that this is not a correctable design flaw but a structural epistemological constraint: the commodity being traded (additional abatement) is defined against an inherently unobservable baseline. A claims ex-post is inferior and "doubles" gaming opportunities — the passage makes no such comparative judgment; it says the problem is relocated, not amplified. C imports a policy recommendation (abolish markets, use direct regulation) that the passage never advances. D attributes an implicit endorsement of ex-post crediting to the author — the passage is neutral between the two reform categories, critiquing both for failing to resolve the fundamental problem.
7
The passage presents the following paradox: the carbon market's price mechanism — designed to reward genuine abatement — systematically rewards fraudulent abatement instead, thereby penalising the genuine abatement it was designed to incentivise. What is the precise structural feature that makes this paradoxical rather than merely an instance of market failure?
CORRECT: B The self-referential structure is what elevates this above ordinary market failure: the price signal that is supposed to incentivise genuine abatement is degraded by fraudulent credits to the point where it disincentivises genuine abatement. The mechanism of incentivisation is simultaneously the mechanism of disincentivisation — same instrument, opposite effects. A correctly identifies this as a market failure but explicitly says it is addressable through standard remedies — which the passage denies; the counterfactual problem persists even with full disclosure, making this more than standard information asymmetry. C identifies a different and genuinely interesting paradox (the market creates the condition for its own dysfunction) but this is not the precise structural feature described in the passage's price-mechanism discussion. D describes buyer ambiguity about welfare — interesting but peripheral to the price-mechanism paradox.
8
The passage treats the additionality problem as "fundamental" and "epistemological" rather than as a correctable methodological flaw. Which of the following best identifies the logical move that supports this characterisation?
CORRECT: B The passage's logical move is not empirical generalisation from observed failures (A) but conceptual: the commodity definition itself requires a counterfactual, and counterfactuals are unobservable. The problem is constitutive of the commodity — it doesn't depend on how the market is run. Any market trading "additional abatement" inherits this epistemological constraint because "additional" is defined against something that by definition cannot be observed. A describes inductive enumeration of failures — a weaker logical basis than the conceptual argument; the passage doesn't just list failures, it explains why they must fail. C uses the REDD+ case as decisive evidence — this is empirical generalisation, not the conceptual argument about commodity definition. D rules out governance explanations via cross-country comparison — not the argument the passage makes.
Passage 2 Score
/4

P 03
The Tragedy of the Commons, Common-Pool Resources & Ostrom's Critique
Passage Timer
10:00
Read the Passage

Garrett Hardin's 1968 essay "The Tragedy of the Commons" argued that shared resources are inevitably degraded when individuals acting rationally in their own interests overexploit them. Each herder on a common pasture gains the full benefit of adding one more animal but bears only a fraction of the cost of overgrazing. The individually rational decision — add the animal — is collectively destructive. Hardin concluded that commons were inherently ungovernable without either privatisation or state coercion, a conclusion that shaped environmental policy for decades and was used to justify enclosures, nationalisation of fisheries, and top-down conservation regimes across the developing world.

Elinor Ostrom's empirical work, which earned her the 2009 Nobel Prize in Economics, demolished the empirical premise of Hardin's argument. Studying hundreds of real-world common-pool resource systems — irrigation schemes in Spain and the Philippines, Swiss alpine grazing commons, Maine lobster fisheries — Ostrom documented that communities routinely govern shared resources sustainably without either privatisation or state coercion. She identified a set of design principles shared by successful commons institutions: clearly defined group boundaries, rules adapted to local conditions, collective-choice arrangements that allow users to modify rules, monitoring of both resource condition and user behaviour, graduated sanctions for rule violations, and mechanisms for conflict resolution. The commons, she showed, were not inherently tragic; they were solvable collective action problems under the right institutional conditions.

Ostrom's framework has two important limitations that later researchers have emphasised. First, her design principles were derived from relatively small, stable, face-to-face communities where monitoring costs are low and social norms are potent enforcement mechanisms. Global commons — the atmosphere, ocean fisheries, biodiversity — involve millions of users across jurisdictions with no shared social norms, making the design principles difficult to implement at scale. Second, her analysis focused on commons that had already survived: studying successful institutions tells us what properties they share but cannot tell us how commons institutions emerge in the first place, or why some communities develop them while others do not.

Questions · Passage 03
9
Hardin's argument that commons are "inherently ungovernable" relies on which key assumption about the individuals who use them?
CORRECT: C Hardin's model assumes atomistic, self-interested agents who cannot coordinate. The tragedy arises because each individual calculates that adding an animal is privately rational regardless of what others do, and no mechanism changes this calculus. If users could communicate, develop institutions, or change the payoff structure through collective rules, the tragedy need not follow. Ostrom's refutation works precisely by showing that real users are not the atomistic agents Hardin assumed. C captures this assumption directly. A attributes the problem to ignorance rather than rational self-interest, which is not Hardin's model. B specifies a condition under which cooperation would fail, but Hardin's model does not require assuming absence of prior relationships; it requires assuming that individual rationality dominates regardless of relationships. D concerns resource characteristics rather than assumptions about users.
10
Ostrom's design principles describe properties that successful commons institutions share. The passage identifies a specific methodological limitation in how these principles were derived. What is that limitation?
CORRECT: D The passage explicitly identifies this as the second limitation: "studying successful institutions tells us what properties they share but cannot tell us how commons institutions emerge in the first place, or why some communities develop them while others do not." This is a survivorship bias problem. D states it precisely. A concerns geographic scope, which is a valid critique but not the one the passage specifically identifies. B concerns implementation guidance versus structural description, which is a practical limitation, not the methodological one the passage names. C concerns causal identification through qualitative methods, which is a general methodological concern but not the specific limitation the passage articulates.
11
The passage argues that Ostrom's design principles are difficult to implement at the scale of global commons. Which specific feature of global commons creates this difficulty?
CORRECT: B The passage specifically attributes the difficulty of scaling Ostrom's principles to global commons to the absence of "shared social norms" across millions of users in different jurisdictions. Ostrom's mechanisms work through community monitoring, reputation effects, and social sanctions that operate in small face-to-face groups. These mechanisms lose traction when users are anonymous and distant and operate under different normative frameworks. B captures this passage-specific explanation. A concerns physical monitoring costs, which is a related practical difficulty but not the feature the passage identifies. C concerns reversibility, which is not mentioned in the passage's discussion of the scaling limitation. D concerns international law enforcement, which is related but not the mechanism the passage identifies as the source of difficulty.
12
Hardin concluded that commons require either privatisation or state coercion. Ostrom's work identifies a third path. A critic argues that Ostrom's third path simply redefines the commons as a form of collective property governed by community rules, making it a variant of privatisation rather than a genuinely distinct alternative. What is the strongest response to this critic?
CORRECT: C The strongest response distinguishes Ostrom's commons institutions from privatisation on grounds that matter for the substance of Hardin's problem. Privatisation solves the commons problem by creating excludable private property rights. Ostrom's institutions solve it by creating collective rules that preserve community access while regulating use. The two solutions differ in who can use the resource and on what terms, which has different distributional consequences even if both impose constraints. C makes this substantive distinction. A is a legal technicality that a sophisticated critic could dismiss by noting that de facto governance can function like property without formal title. B is a reductio ad absurdum that is clever but sidesteps the substantive question. D appeals to Ostrom's authority rather than engaging the substance of the objection.
Passage 3 Score
/4

P 04
Biodiversity, Ecosystem Services & the Limits of Economic Valuation
Passage Timer
10:00
Read the Passage

The ecosystem services framework attempts to make the value of biodiversity legible in economic terms by cataloguing the functions that ecosystems perform for human welfare: provisioning services such as food, fresh water, and timber; regulating services such as climate stabilisation, flood control, and pollination; cultural services such as recreation and aesthetic experience; and supporting services such as nutrient cycling and soil formation that underlie the others. The framework was developed partly as a political strategy: by expressing biodiversity's value in economic terms, proponents hoped to make conservation legible to decision-makers who routinely weighed it against development alternatives denominated in money.

The economic valuation methods used to price ecosystem services range from market-based approaches, where services have direct market prices, to contingent valuation, which uses surveys to elicit willingness-to-pay for non-market goods. Both categories face serious methodological challenges. Market prices for ecosystem services are often distorted by subsidies, externalities, and missing markets. Contingent valuation results are sensitive to framing effects, hypothetical bias, and the scope insensitivity finding: people are often willing to pay similar amounts to save one thousand, ten thousand, or one hundred thousand birds, suggesting that stated willingness to pay reflects a general sentiment of concern rather than a marginal valuation of specific quantities.

Critics of the ecosystem services framework raise a more fundamental objection: that reducing biodiversity to its instrumental value for human welfare misrepresents what is morally significant about the natural world. On intrinsic value accounts, species and ecosystems have value independent of their contribution to human welfare, and a conservation strategy premised entirely on ecosystem services would in principle permit the extinction of species that provide no measurable human benefit. The framework's defenders respond that intrinsic value arguments, however philosophically sound, have failed to prevent accelerating biodiversity loss; pragmatic engagement with economic decision-making frameworks may be more effective even if morally impure. This tension between principled and pragmatic approaches to conservation ethics recurs across many environmental policy debates.

Questions · Passage 04
13
Scope insensitivity in contingent valuation refers to the finding that people's stated willingness to pay is similar for saving one thousand, ten thousand, or one hundred thousand birds. What does this finding specifically undermine about contingent valuation as a method?
CORRECT: B Scope insensitivity specifically undermines the claim that contingent valuation measures genuine marginal economic preferences. In ordinary market behaviour, consumers respond to quantity: the price paid reflects how much of something is being bought. If willingness to pay is insensitive to quantity, the survey is not measuring economic preferences in the same sense that market prices do. Instead, it appears to be measuring a general attitude or moral sentiment that is activated by the topic but does not scale with quantity. B identifies this specific failure. A attributes the finding to dishonesty, which is a different source of bias. C says preferences are unstable across framing, which is a different problem from scope insensitivity. D interprets the finding as indicating genuinely low economic value, which mistakes a methodological artifact for a substantive finding about value.
14
The intrinsic value critique of the ecosystem services framework argues that reducing biodiversity to human welfare value would in principle permit extinction of species providing no measurable human benefit. The framework's defenders respond with a pragmatic argument. What is the logical structure of this pragmatic response?
CORRECT: C The passage says defenders respond that intrinsic value arguments, "however philosophically sound," have failed to prevent biodiversity loss, and that pragmatic engagement may be more effective "even if morally impure." The logical structure is: accept the philosophical soundness of the intrinsic value critique, then argue on consequentialist grounds that effectiveness rather than philosophical purity is the right criterion for evaluating a conservation strategy. C captures this exactly. A says defenders concede incomplete protection is better than none, which partially captures the structure but misses that they accept the moral argument's validity while prioritising effectiveness. B says defenders reject the intrinsic value argument as metaphysically incoherent, which is not the pragmatic response but a philosophical counter-argument. D denies the premise that any species lacks measurable human benefit, which is an empirical counter rather than the pragmatic response.
15
The passage states that the ecosystem services framework was developed "partly as a political strategy." What does this characterisation imply about the relationship between the framework's scientific content and its policy function?
CORRECT: B The passage says the framework was developed partly as a political strategy to make conservation legible to economic decision-makers. This implies the monetary framing reflects a communication choice aimed at a specific audience rather than a philosophical commitment to monetary value as the ultimate measure of biodiversity's worth. B captures this implication. A attributes distortion to the scientific content, which goes beyond what "partly as a political strategy" implies. C says the framework is unreliable for neutral analysis, which conflates having a policy motivation with having a predetermined conclusion. D suggests policy-motivated science necessarily sacrifices rigour, which is an unjustified general claim that the passage does not make and that the "partly" qualifier specifically undercuts.
16
The passage closes by saying the tension between principled and pragmatic approaches "recurs across many environmental policy debates." Which of the following best describes what this observation adds to the passage's argument?
CORRECT: C The closing observation does not resolve the tension or undermine either side. It frames the biodiversity valuation debate as one instance of a structural dilemma that appears repeatedly in environmental policy, implying that the tension is not a local problem to be solved by getting the ecosystem services methodology right, but a deeper tension between moral principle and political effectiveness that recurs because neither side can decisively defeat the other. C captures this framing function. A says the observation resolves the tension by showing pragmatic superiority, but the passage presents both sides as having genuine arguments and does not claim empirical superiority for either. B says it undermines the pragmatic response, but the observation is about recurrence rather than failure. D says the analysis is incomplete, which misreads the observation as a gap indicator rather than a framing move.
Passage 4 Score
/4

P 05
Geoengineering, Solar Radiation Management & the Governance Problem
Passage Timer
10:00
Read the Passage

Geoengineering encompasses deliberate large-scale interventions in the Earth's climate system designed to counteract climate change. The two principal categories have fundamentally different risk profiles. Carbon dioxide removal methods — including direct air capture, bioenergy with carbon capture, and enhanced ocean alkalinity — address the underlying cause of warming by extracting greenhouse gases from the atmosphere. Solar radiation management methods, particularly stratospheric aerosol injection, attempt to reduce the amount of solar radiation reaching the Earth's surface by injecting reflective particles into the stratosphere, mimicking the cooling effect of large volcanic eruptions. Both address climate change, but by different mechanisms and with different implications for ocean acidification: CDR reduces it while SRM does not, since SRM leaves atmospheric CO2 concentrations unchanged.

The governance problem for SRM is often described as the hardest in international environmental law. Unlike most environmental problems, which require states to cooperate to restrain harmful activity, SRM inverts the collective action structure. A single state or even a wealthy non-state actor could deploy stratospheric aerosol injection unilaterally at relatively low cost, since the technical barriers are modest relative to the planetary scale of the consequences. This is the "free driver" problem: the party that most wants cooling could simply begin injecting aerosols, imposing a planetary intervention on all other states without their consent. Moreover, once large-scale injection begins, abrupt cessation would produce rapid warming as the masking effect ends, a phenomenon called termination shock. The combination of easy initiation and catastrophic termination creates a structural lock-in that makes governance before deployment critical.

The moral hazard concern adds a further complication. If SRM is perceived as a viable backstop for climate change, it could reduce political will to cut emissions: why undertake the economic costs of decarbonisation if a technological fix is available? Proponents of SRM research argue that the moral hazard risk is speculative and that the potential to limit warming during a transition period in which emissions reductions are implemented is too valuable to forgo. Critics respond that the history of technological fixes in environmental policy — nuclear power, carbon capture — provides grounds for scepticism about promises of future deployment that justify deferring present action. Both positions rest partly on empirical claims about political behaviour that are difficult to resolve without actually observing how the availability of SRM affects mitigation decisions.

Questions · Passage 05
17
The passage distinguishes CDR and SRM on the basis that CDR "addresses the underlying cause" while SRM does not. What specific consequence follows from this distinction that the passage explicitly identifies?
CORRECT: C The passage explicitly states: "CDR reduces it while SRM does not, since SRM leaves atmospheric CO2 concentrations unchanged." The consequence that follows from SRM leaving CO2 concentrations unchanged is that ocean acidification continues even if SRM successfully moderates temperature. C states this consequence precisely. A concerns continuous deployment versus permanent reduction, which is a practical implication of the distinction but is not the consequence the passage explicitly identifies. B concerns cost, which the passage does not address in this context. D introduces the precautionary principle, which is a normative argument not derived from the mechanistic distinction the passage draws.
18
The "free driver" problem for SRM inverts the standard collective action structure of environmental governance. What specifically is inverted?
CORRECT: B The passage explicitly states that unlike most environmental problems which "require states to cooperate to restrain harmful activity," SRM inverts this. In conventional collective action problems the governance challenge is stopping states from defecting by doing too much of something harmful. In SRM the governance challenge is stopping a single actor from unilaterally doing something that imposes planetary consequences. The problem is preventing unilateral action, not coordinating collective restraint. B states this inversion precisely. A concerns distribution of costs and benefits, which is a related but different inversion. C says costs are shared while benefits accrue to deployers, which misstates the free driver situation where the deploying actor wants the cooling it imposes on everyone. D concerns commons overuse versus public good creation, which is a related reframing but not the specific inversion the passage identifies.
19
The termination shock problem creates "structural lock-in" once large-scale SRM deployment begins. Why does this make governance before deployment critical rather than governance during or after?
CORRECT: D The key insight is about bargaining leverage. Before deployment, governance frameworks retain the option to prohibit SRM entirely, giving them maximum leverage. Once deployment is underway, termination shock means that threatening to stop is equivalent to threatening catastrophic warming, making cessation an unusable sanction. The deploying actor knows this and has a structural advantage in any negotiation about governance conditions. D captures this bargaining logic. A concerns technical management complexity, which is a practical difficulty but not the governance-leverage argument. B says withdrawal cannot be used as a sanction, which is the key mechanism, but D states it more completely by explaining why pre-deployment governance is specifically superior. C invokes international law, which is not the argument the passage makes.
20
The passage concludes that both positions in the moral hazard debate "rest partly on empirical claims about political behaviour that are difficult to resolve without actually observing how the availability of SRM affects mitigation decisions." What does this acknowledgement imply about the current state of the moral hazard argument?
CORRECT: B The passage says both positions rest partly on empirical claims that are difficult to resolve without the relevant observations. This implies epistemic stalemate: neither side can win using currently available evidence because the key fact about how SRM availability affects political behaviour has not been directly observed. B captures this implication precisely. A draws a policy conclusion that the passage does not support: noting that an argument rests on unverifiable claims does not establish that the argument is illegitimate or that it should not constrain research. C also draws an unjustified policy conclusion: acknowledging evidentiary difficulty does not mean the argument should be set aside. D says the debate is a distraction, which is a priority judgment the passage does not make; it presents the moral hazard concern as a "further complication" alongside the governance challenges, not as a subordinate one.
Passage 5 Score
/4
Environment · Total Score
/8
Category 07
Culture
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Bhabha's Third Space, Hybridity & the Materialist Critique
Read the Passage

Homi Bhabha's concept of the "third space" attempts to dislodge both nationalist essentialism and the binary opposition of coloniser and colonised that structures earlier postcolonial theory. Cultural meaning, on Bhabha's account, is never simply transmitted from a fixed cultural source; it is always produced in an enunciative act that introduces slippage, ambivalence, and hybridity into what appears to be a stable identity. The third space is the discursive zone in which this production occurs — a site of negotiation, translation, and transformation that precedes and conditions all cultural statement. Crucially, the third space is not a synthesis of two pre-existing cultures but a zone that problematises the very notion of cultural origin by showing that all culture is already hybrid before any contact with an other. Hybridity is not a condition produced by colonialism; it is the constitutive condition of all cultural meaning.

Marxist critics have pressed two related objections. First, Bhabha's framework operates at the level of discourse and representation while remaining silent about the material conditions — land dispossession, labour exploitation, resource extraction — that constitute the lived reality of colonialism. Hybridity as a trope of cultural negotiation is more readily available to elite, cosmopolitan, and mobile subjects than to those whose identities are constrained by material poverty, geographic immobility, and structural violence. The risk is that hybridity theory aestheticises inequality, reducing the postcolonial condition to a problematic of cultural translation and thereby depoliticising struggles that are fundamentally about power and material resources. Second, Bhabha's emphasis on the mutual ambivalence of coloniser and colonised — each fascinated and destabilised by the other — risks symmetrising a relationship that is structurally and violently asymmetrical. To foreground shared ambivalence is to distribute the condition of undecidability equally across a power differential that is anything but equal.

Bhabha's defenders respond that the materialist critique misunderstands the level at which the theory operates. Discourse is not epiphenomenal to material conditions; it is one of the primary mechanisms through which material conditions are produced, legitimated, and sustained. The colonial land survey, the census, the legal category — these are discursive instruments whose material effects are enormous. To insist that discourse analysis must yield to political economy is to reproduce the base/superstructure hierarchy that Bhabha's project, following Derrida and Foucault, was explicitly designed to disrupt. The question is not whether material conditions matter but whether they can be adequately theorised without a theory of how representation constitutes rather than merely reflects those conditions.

Questions · Passage 01
1
The Marxist critique argues that hybridity theory is more available to elite, cosmopolitan subjects than to those constrained by material poverty — thereby aestheticising rather than politicising inequality. Which of the following, if true, most seriously weakens this critique?
CORRECT: A The Marxist critique claims hybridity is available only to elites. Option A directly refutes this: people under material scarcity actively hybridise as a survival strategy — hybridity is not a luxury of cosmopolitan mobility but a practice of the materially marginalised. B concedes the materialist critique's point by restricting Bhabha to texts. C shows hybridity's political generativity in liberation movements — relevant but distinct from the specific claim about elite availability. D strongly supports the Marxist critique.
2
Bhabha's defenders argue that "discourse is not epiphenomenal to material conditions; it is one of the primary mechanisms through which material conditions are produced, legitimated, and sustained." Which of the following can be most reliably inferred from this defence?
CORRECT: B The defenders argue discourse is constitutive of material conditions — colonial surveys, censuses, and legal categories are cited as instruments that directly produce and administer dispossession. The inference is that discourse analysis reveals the mechanisms by which material conditions were made possible. A overstates: "entirely refuted" and "obsolete" are too strong. C inverts the claim — discourse is constitutive of material conditions, not the other way round. D attributes a concession that weakens the defence rather than capturing its thrust.
3
Bhabha's claim that "all culture is already hybrid before any contact with an other" creates a tension with the political utility of hybridity as a specifically postcolonial concept. What is that tension?
CORRECT: A Bhabha deploys hybridity as a postcolonial concept — something that disrupts essentialist claims produced within colonial encounters. But universalising hybridity to the constitutive condition of all culture evacuates the diagnostic specificity that gave the concept political purchase. B identifies a real tension, but it concerns the relationship with essentialist resistance movements, not the internal tension of the universalisation claim. C confuses the ontological claim about hybridity with a claim about colonialism's historical ordinariness — Bhabha can maintain colonialism's violent specificity while claiming hybridity is universal. D restates A's tension as mere analytical uninformativeness, a weaker framing than the loss of political specificity that makes A precise.
4
The Marxist critique accuses Bhabha of "symmetrising" the coloniser-colonised relationship by foregrounding mutual ambivalence. For this to constitute a genuine theoretical objection rather than merely a political complaint, which must be assumed?
CORRECT: A For symmetrisation to be a theoretical objection, the assumption must be that theoretical adequacy requires internal representation of asymmetry — not just external acknowledgment. B makes the objection depend on Bhabha's personal views, irrelevant to theoretical adequacy. C makes political alignment a criterion of validity — a normative claim the Marxist critique doesn't need. D makes the objection empirical (mutual ambivalence is false) — but the critique is about the structural consequence of foregrounding ambivalence, not whether ambivalence exists.
Passage 1 Score
/4

P 02
Globalisation, Heterogeneity Theory & the Structural Objection
Read the Passage

The homogenisation thesis — that cultural globalisation produces convergence toward a dominant Western or American cultural template — has generated a substantial reactive literature arguing the opposite: that global cultural flows are absorbed, domesticated, and reinterpreted through local symbolic systems in ways that produce heterogeneity rather than uniformity. Robertson's "glocalisation," Appadurai's disjunctive "scapes," and García Canclini's "hybrid cultures" all emphasise the creativity with which local agents renegotiate global cultural products. The McDonald's anthropology — ethnographic studies demonstrating that consumers in Beijing, Moscow, and São Paulo invest the fast-food experience with locally specific meanings — has been widely cited to show that consumption is active meaning-making rather than passive reception of American cultural content. On this account, global reach does not entail cultural homogenisation because the same product generates different cultural meanings in different contexts.

The heterogeneity argument is, however, vulnerable to a structural objection that its proponents have not adequately absorbed. The diversity of local meanings attached to globally distributed products tells us nothing about the diversity of the products themselves, the concentration of ownership in global cultural industries, or the range of symbolic resources from which local meaning-making proceeds. A world in which consumers in forty countries enthusiastically appropriate an American film franchise through locally specific readings is not structurally more diverse than a world in which the same franchise is passively received — the symbolic inputs are identically concentrated in both cases. Plurality of interpretation does not compensate for poverty of the distributed repertoire. The heterogeneity theorists have correctly identified one level of the cultural process — the receiver end — while systematically ignoring the production end, where concentration has increased dramatically.

A further complication: the argument that local agents are creative interpreters of global products may be empirically accurate while still serving an ideological function. By celebrating local agency and meaning-making, heterogeneity theory redirects critical attention from the structural conditions of cultural production to the subjective experience of cultural consumption. The celebration of the active audience has historically been useful to media conglomerates facing accusations of cultural imperialism: if audiences everywhere creatively appropriate content, the production monopoly cannot be straightforwardly called imperialism. Heterogeneity theory, however accurate at the level of reception, may thus function as ideological cover for the very concentration it claims to render harmless.

Questions · Passage 02
5
The structural objection argues that heterogeneity at the reception end does not compensate for concentration at the production end. Which of the following, if true, most strengthens this objection?
CORRECT: A The structural objection is precisely that reception diversity and production concentration can coexist — and that the former does not cancel the latter. Option A provides the ideal simultaneous evidence: extreme production concentration (87/100 films from 5 conglomerates) coexisting with high reception diversity (diverse audience meanings). This is the structural objection made empirical. B confirms only reception diversity — it supports the heterogeneity theorists, not the structural objection. C directly weakens the structural objection by showing production diversity has actually increased. D muddies the production/local distinction but doesn't demonstrate the structural objection's core claim.
6
The passage argues that heterogeneity theory "may function as ideological cover for the very concentration it claims to render harmless." Which of the following can be most reliably inferred from this claim?
CORRECT: B The passage explicitly says heterogeneity theory may be "empirically accurate while still serving an ideological function" — these are separable claims at different levels. The inference is that empirical accuracy does not insulate a theory from having ideological effects elsewhere. A attributes conscious bad faith — the passage says no such thing. C inverts the relationship: the passage explicitly separates empirical accuracy from ideological function; they are not mutually exclusive. D makes a prescriptive recommendation to abandon reception analysis — the passage makes no such claim and presents the structural objection as a supplement, not a replacement.
7
The passage presents the homogenisation thesis, the heterogeneity counter-argument, and the structural objection sequentially. The author's purpose in including the third paragraph — about ideological function — is most plausibly to:
CORRECT: B The third paragraph adds a second layer to the critique: not just that heterogeneity theory ignores production concentration (structural), but that by celebrating active audiences it may serve as ideological cover for that concentration. These are two distinct objections — structural incompleteness and ideological effect — and including both makes the critique more comprehensive. A inverts the logic: ideological function doesn't invalidate empirical findings; the passage explicitly says the theory may be empirically accurate while having ideological effects. C claims the structural objection needs the ideological critique as support — the passage presents them as independent, additive critiques. D reads a prescriptive turn into what is a descriptive/analytical observation about ideological function.
8
A proponent of heterogeneity theory might respond to the structural objection by arguing: "Even if production is concentrated, what matters culturally is meaning, and meaning is generated locally. Cultural diversity is a property of meaning, not of products." Which of the following best identifies the logical flaw in this response?
CORRECT: B The response claims meaning is what matters and meaning is local — but this assumes meaning-generation is unconstrained by the range of inputs. The structural objection's force is precisely that concentrated production bounds the symbolic resources available for local interpretation. Even if each person generates unique meaning from the same franchise, the universe of symbolic raw material is the same for all — the range of possible meanings is structurally limited. A identifies an equivocation — real but less precise than B; the response does slide between "diversity of meanings" and "diversity of resources" but B more directly names the substantive logical problem. C says the response conflates production and reception — actually the response is trying to separate them (only reception matters), not conflate them. D is very close to B but frames it as a category error about product interchangeability — less direct than B's identification of the constraint on meaning-range.
Passage 2 Score
/4

P 03
Cultural Appropriation, Exchange & the Conditions for Legitimate Borrowing
Passage Timer
10:00
Read the Passage

The debate over cultural appropriation concerns the adoption of elements from one culture by members of another, typically under conditions of unequal power. Critics argue that such adoption involves extraction without acknowledgment, profit without benefit to the originating community, and decontextualisation that strips sacred or meaningful practices of their significance. The wearing of ceremonial headdresses at music festivals, the commercialisation of yoga stripped of its philosophical context, and the adoption of vernacular speech by speakers who face none of the social consequences that attend its original users have all been cited as instances. What these cases share is an asymmetry: the borrower gains something while the source community bears a cost in the form of dilution, misrepresentation, or erasure.

Defenders of cultural exchange challenge appropriation critics on both empirical and normative grounds. Empirically, they argue that cultures have always borrowed from one another and that the history of human civilisation is substantially a history of creative synthesis rather than bounded cultural purity. Jazz, widely celebrated as distinctly American, drew from West African rhythmic traditions, European harmonic structures, and Caribbean musical forms. Normatively, some argue that prohibiting cultural borrowing on the basis of the borrower's ethnicity enforces a form of essentialism: the idea that people are bound to the cultural inheritance of their ancestry, which is itself philosophically problematic. On this account, a prohibition on borrowing traps both parties in fixed identities and forecloses the creative cross-pollination that has driven cultural development.

A more precise analysis distinguishes between exchange and appropriation by reference to the conditions under which borrowing occurs. Exchange, on this account, occurs when sharing is mutual, credit is given, and the borrower does not profit in ways the originating community cannot access. Appropriation occurs when power differentials allow one group to extract cultural elements while the source community retains the burdens associated with those elements. The wearing of a sacred garment for fashion while the community it belongs to faces discrimination for wearing the same garment is the paradigm case. Critics of even this framework note that cultures do not have unified preferences or authorised spokespersons who can grant or withhold consent, making the exchange/appropriation distinction practically difficult to apply.

Questions · Passage 03
9
The appropriation critics' central objection rests on a claim about asymmetry. What precisely is the asymmetry they identify?
CORRECT: B The passage identifies the asymmetry as: the borrower gains something while the source community bears a cost in dilution, misrepresentation, or erasure. The paradigm case in the third paragraph makes this concrete: a borrower profits from a sacred garment as fashion while the originating community still faces discrimination for wearing the same garment. The asymmetry is the divergence between who gains and who pays. B captures this precisely. A describes structural power differentials, which are a background condition but not the specific asymmetry the passage identifies. C invokes investment without equivalent exchange, which is closer to an intellectual property framing than what the passage describes. D concerns meaning distortion, which is one of the costs but not the specific asymmetry between benefit and burden.
10
The essentialism objection to prohibiting cultural borrowing argues that such prohibition traps people in fixed cultural identities. Which of the following most effectively challenges this objection on its own terms?
CORRECT: D The most effective challenge on the essentialism objection's own terms is to show that the critique misidentifies what appropriation critics are actually arguing. If appropriation critics are objecting to conditions of power-asymmetric exchange rather than to cultural contact itself, then the essentialism charge does not land: the critics are not saying people must remain in fixed cultural boxes, but that the terms of exchange matter. D makes this response. A challenges the essentialism objection by pointing out that it also essentialises culture, which is a reasonable counter but does not directly address what appropriation critics are claiming. B attempts a symmetry argument that misses the structural asymmetry at the heart of the critique. C is an empirical counter that does not engage the philosophical argument at all.
11
The observation that "cultures do not have unified preferences or authorised spokespersons" challenges the exchange/appropriation framework by targeting which of its requirements?
CORRECT: B The exchange/appropriation framework requires establishing whether the source community's consent was obtained and whether credit was properly given. Without a unified voice or authorised spokespersons, there is no definable entity whose consent can be sought or whose verdict about credit constitutes the community's position. B identifies this as the targeted requirement. A concerns reciprocity, but the observation about unified preferences does not specifically target mutuality. C concerns the power differential criterion, which is a separate element that survives the observation about internal cultural diversity. D concerns profit access, which again is not what the absence of unified preferences specifically attacks.
12
The passage describes jazz as an example of creative synthesis to support the defenders of cultural exchange. A critic responds that jazz is actually evidence for the appropriation critique: the Black musicians who created it were systematically excluded from the commercial profits generated when white artists borrowed and commercialised the form. What does this counter-example do to the passage's argument?
CORRECT: B The counter-example shows that jazz supports the defenders (creative synthesis) and the critics (commercial exploitation of Black artists) simultaneously depending on which features of the historical record one selects. This does not refute either position outright but reveals that empirical cases of cultural contact are complex enough to be mobilised for multiple normative conclusions, making historical examples insufficient to settle the normative debate. B captures this. A says it refutes the defenders' empirical claim that borrowing is "always mutually beneficial," but the defenders do not claim this; they claim that exchange is historically pervasive, not universally beneficial. C overstates by saying no legitimate exchange cases remain. D says the counter-example supports the third paragraph's framework, which is partly true but misses that it also complicates the defenders' use of jazz as a positive example.
Passage 3 Score
/4

P 04
Memory, Commemoration & the Politics of Public Monuments
Passage Timer
10:00
Read the Passage

Controversies over monuments to historical figures — Confederate generals in the American South, colonial administrators in Britain and Belgium, slaveholders throughout the Atlantic world — reflect a deeper dispute about the relationship between public memory and political identity. Monuments are not neutral historical records. They select certain figures for elevation in public space, implicitly endorse the values for which those figures are commemorated, and shape the symbolic landscape within which citizens form political and cultural identities. The defenders of contested monuments often invoke historical preservation: removing them erases history. Critics counter that preservation in a museum and elevation in a public square are categorically different acts. The former records; the latter honours.

Jan Assmann's concept of cultural memory is relevant here. Unlike communicative memory, which lives for only about three generations, cultural memory is institutionally maintained across centuries. Monuments function as technologies of cultural memory: devices for transmitting identity-constituting narratives to generations who have no living connection to the commemorated events. This analysis reveals monuments as inherently political rather than neutrally historical. They do not simply record what happened; they select which versions of the past are worthy of transmission and which are to be forgotten. The choice of what to commemorate is therefore a form of political power exercised across time.

The practical responses to monument controversies cluster around three options: contextualisation, which adds explanatory plaques that reframe a monument as an object of critical reflection rather than simple celebration; relocation, which moves monuments from prominent civic spaces to museums or less central sites; and removal. The principal objection to removal is the slippery slope: if we remove monuments to those who held values we now reject, there will be no end to the process since all historical figures were flawed by contemporary standards. This objection has force only if there are no principled criteria for distinguishing which monuments are candidates for removal, and critics have proposed several: whether the figure's primary historical significance is the activity being commemorated, whether the monument was erected to celebrate rather than memorialise, and whether the monument continues to cause identifiable harm to communities whose members are subject to its symbolic power.

Questions · Passage 04
13
The distinction between "preservation in a museum" and "elevation in a public square" responds to which specific argument made by monument defenders?
CORRECT: C The passage attributes the "erases history" argument to monument defenders and immediately follows it with the museum/public square distinction as the critics' response. The function of the distinction is to accept that history should be preserved while denying that preservation requires public elevation. C captures this argumentative structure precisely. A concerns damage to historical evidence, which is a related but more specific claim than the passage's general preservation argument. B concerns artistic value, which is not the argument the passage describes. D concerns the educational function of monuments, which is a separate argument, not the one the passage identifies as being answered by the museum/public square distinction.
14
Assmann's concept of cultural memory reframes monuments as "inherently political rather than neutrally historical." What is the significance of this reframing for the monument debate?
CORRECT: B The reframing shifts the terms of the debate. If monuments are neutral records, the debate is about whether removal distorts history. If they are political acts of commemoration, the debate becomes: is this political act justified? The defenders' historical preservation argument loses its force because it was premised on monuments being historical rather than political. B captures this shift in burden. A attributes bad faith to defenders, which goes beyond what the Assmann analysis establishes. C says the reframing resolves the debate in favour of removal, which overstates its force: establishing that monuments are political does not determine whether any particular political act of commemoration is unjustified. D says it undermines all three practical responses, which is not what follows from the analysis.
15
The passage argues that the slippery slope objection "has force only if there are no principled criteria" for distinguishing removal candidates. Which of the proposed criteria is most directly targeted at the slippery slope's premise rather than at limiting the scope of removal?
CORRECT: C The slippery slope's premise is that "all historical figures were flawed by contemporary standards," implying no principled distinction can prevent unlimited removal once it begins. The criterion about primary historical significance directly attacks this premise by showing that a distinction exists: a figure commemorated primarily for leading a slave trade is different from a figure commemorated primarily for scientific achievement who also held views now considered abhorrent. C provides a criterion that distinguishes the paradigm removal cases from the feared slippery slope cases. A establishes a consequentialist threshold that limits scope but does not address the premise that no distinction exists. B distinguishes celebratory from memorial intent, which also limits scope rather than addressing the premise. D proposes a democratic procedure, which avoids rather than answers the slippery slope argument.
16
The passage presents contextualisation, relocation, and removal as three distinct policy responses. Which of the following most accurately characterises the relationship between them as implied by the passage?
CORRECT: D The passage presents all three options neutrally, describing what each does without endorsing any. They differ in how they engage with the monument's political and symbolic function: contextualisation keeps the monument but changes how it is read; relocation changes its civic status; removal eliminates it. D characterises this difference accurately and correctly notes that the passage does not rank them. A says the passage implicitly endorses contextualisation, which is not supported by the neutral presentation. B says they are mutually exclusive, which is wrong: contextualisation could precede or accompany relocation. C proposes a mapping from response to theory of the problem, which is a plausible analytical framework but one the passage does not explicitly develop.
Passage 4 Score
/4

P 05
The Literary Canon, Cultural Authority & the Politics of Inclusion
Passage Timer
10:00
Read the Passage

The Western literary canon — the body of texts traditionally taught as foundational to humanistic education — has been contested since at least the 1960s, with debate intensifying through the culture wars of the 1980s and 1990s. Critics of the canon argue that its composition reflects the cultural and political dominance of European, male, and predominantly white authors, and that elevating these texts to universal significance naturalises a particular cultural perspective while marginalising others. The practical consequence is that students from non-Western, non-white, and non-male backgrounds encounter a curriculum that consistently positions their own cultural traditions as peripheral or absent. Harold Bloom's defence of the canon against these charges treated aesthetic achievement as a universal evaluative criterion that transcends cultural politics: the canon is not a political conspiracy but the residue of centuries of literary competition in which the strongest works survived.

Bloom's aesthetic defence faces two problems. First, the criteria for aesthetic achievement are not culturally neutral. What counts as formal complexity, subtlety of characterisation, or linguistic precision reflects values that were themselves shaped in institutions dominated by a particular demographic. The canon validates criteria that the canon helped to define, creating a circular self-reinforcement that is difficult to distinguish from a cultural monopoly. Second, even granting that canonical texts achieve genuine aesthetic excellence by any defensible standard, the claim that they therefore deserve universal curriculum centrality confuses aesthetic merit with educational necessity. A text can be excellent without its excellence requiring that it be taught to everyone as a common cultural reference.

The practical responses to the canon controversy include expansion, which adds previously excluded authors while maintaining the canonical structure; replacement, which removes some canonical texts in favour of previously marginalised ones; and recontextualisation, which retains canonical texts but teaches them alongside critical frameworks that interrogate their assumptions. Each response carries costs. Expansion risks producing an unwieldy curriculum that satisfies nobody. Replacement invites the charge that selection is now driven by identity politics rather than merit. Recontextualisation faces the objection that framing canonical texts primarily as objects of ideological critique reduces them to symptoms rather than treating them as serious thought worth engaging on its own terms.

Questions · Passage 05
17
The passage describes the canon as engaged in "circular self-reinforcement." What is the specific circularity being identified?
CORRECT: C The passage specifically identifies the circularity as: the canon validates criteria that the canon helped to define. The criteria for aesthetic achievement (formal complexity, characterisation, linguistic precision) were shaped in institutions dominated by the demographic groups whose work dominates the canon. Applying those criteria to justify the canon's composition produces a circularity because the criteria and the canon are products of the same cultural formation. C states this precisely. A describes an institutional reproduction mechanism that is plausible but not the circularity the passage identifies. B concerns cultural superiority arguments rather than the criteria-definition circularity. D describes a prestige loop based on familiarity, which is a different and more cynical circularity than what the passage identifies.
18
The passage argues that aesthetic merit does not entail curriculum centrality. Which of the following most precisely states this argument's logical structure?
CORRECT: B The passage explicitly states: "A text can be excellent without its excellence requiring that it be taught to everyone as a common cultural reference." This separates aesthetic merit (a claim about the text) from educational necessity (a claim about curriculum design). Even granting Bloom's aesthetic claim in full, the curriculum centrality claim requires a separate argument about why this particular excellence should anchor a universal education. B captures this logical separation. A constructs a reductio about infinite curricula, which is not the argument the passage makes. C appeals to marginalisation consequences, which is relevant to the broader debate but not the specific logical point the passage makes in this sentence. D challenges the premise that canonical works are genuinely excellent, which is the first argument in the passage, not the second one the question asks about.
19
The objection to recontextualisation claims that treating canonical texts as "objects of ideological critique reduces them to symptoms." What underlying assumption about literary education does this objection reveal?
CORRECT: D The objection to recontextualisation is that it reduces texts to symptoms rather than treating them as serious thought. This reveals an assumption that literary education requires engaging a text for what it itself offers — its arguments, perspectives, formal achievements — rather than as evidence of something external to it. A text engaged purely as an ideological symptom is not engaged on its own terms as a source of insight. D captures this assumption. A concerns the civic vocabulary function of literary education, which is a different and more conservative rationale. B misreads the objection by attributing to it the opposite position. C concerns the moral assessment of authors, which is a different debate not implied by the symptom/serious thought distinction.
20
Each of the three practical responses to the canon controversy is described as carrying costs. What does the structure of this presentation suggest about the passage's overall position on the canon debate?
CORRECT: C The passage presents costs for all three options without ranking them or indicating which is preferable. This structure is characteristic of a passage that treats the debate as a genuine dilemma rather than a problem with a clear solution. The implicit position is that intellectual honesty requires acknowledging trade-offs rather than pretending any option is cost-free. C captures this. A attributes an implicit endorsement of recontextualisation based on a comparative assessment of costs that the passage does not make. B says the implication is to continue with existing practice, but the passage's extensive critique of the canon in the second paragraph makes this reading implausible. D says the theoretical critique is more persuasive than the practical responses, which reads an evaluative judgment into a passage that presents the critique and the responses with equal analytical seriousness.
Passage 5 Score
/4
Culture · Total Score
/20
Category 08
Arts
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Institutional Theory of Art, the Circularity Problem & the Question of Aesthetic Significance
Passage Timer
10:00
Read the Passage

Dickie's institutional theory of art resolves the definitional crisis triggered by Duchamp's ready-mades by relocating the property "being a work of art" from an object's intrinsic perceptual qualities to its status within the "artworld" — the loosely bounded network of practitioners, critics, curators, dealers, and institutions whose collective activity confers candidate status on objects. The theory is deliberately permissive: anything can in principle become art, given the appropriate institutional context. This permissiveness is its greatest theoretical virtue — it accommodates the full range of avant-garde practice without residue — and its greatest theoretical liability: it appears to make art status a purely social fact with no connection to aesthetic value, aesthetic experience, or any quality of the object. The artworld could, on the theory's own terms, confer art status on a random pebble, and this would make the pebble art — which strikes many as a reductio of the theory.

The circularity objection is the most technically precise challenge: the artworld is defined by its capacity to confer art status, but its authority to do so is grounded in the history of artworks it has produced — the definition moves in a hermeneutic loop. Dickie's defenders argue this is not vicious circularity but institutional self-constitution of the kind that characterises all social practices: law is constituted by legal processes, games are constituted by their rules, and neither requires external grounding. A more serious objection is that the theory confuses the sociological conditions for art recognition with the nature of art itself. Knowing that something is institutionally art does not explain why the institutional recognition matters — why it has aesthetic rather than merely bureaucratic significance. Levinson's intentionalist alternative attempts to restore historical and aesthetic content: art status is conferred by the creator's intention that the work be regarded "in any of the ways prior artworks have been correctly regarded." This grounds status in the practice of aesthetic experience while escaping the institutional closure problem — but it generates its own regress: prior artworks must be defined, and tracing the chain back requires either an infinite regress or primitive "ur-artworks" whose status is stipulated rather than conferred.

What both theories share, and neither adequately addresses, is the question of why art status should matter aesthetically rather than merely sociologically. The artworld can confer status, and the creator can intend aesthetic regard — but neither account explains what makes aesthetic regard specifically valuable, or why the history of art represents a form of accumulation rather than mere accretion. This explanatory gap is not a defect peculiar to institutional or intentionalist theory; it reflects the difficulty of grounding normative claims about art's value without either collapsing into subjectivism or smuggling in contested aesthetic categories through the back door.

Questions · Passage 01
1
The circularity objection argues that Dickie's theory moves in a hermeneutic loop: the artworld is defined by its capacity to confer art status, and its authority is grounded in the artworks it has already produced. Dickie's defenders respond that this is institutional self-constitution, not vicious circularity. Which of the following, if true, most seriously weakens the defenders' response?
CORRECT: A The defenders' response relies on the analogy: law and games are also self-constituting, so self-constitution is not a defect. Option A attacks the analogy directly by showing a relevant disanalogy: law and games have publicly codified norms assessable independently of their outcomes, whereas the artworld has no equivalent independent standard — its authority is entirely retrospective. If the analogy fails, the defence of institutional self-constitution loses its support. B identifies a sociological observation about how art status is actually conferred — interesting but it attacks Dickie's empirical description rather than the defenders' response to the circularity charge specifically. C confirms social facthood while leaving the circularity objection intact — not a response to the defenders' self-constitution argument. D deploys the pebble reductio — this is the permissiveness objection, a different challenge from the circularity objection the question specifically concerns.
2
The passage states that both institutional and intentionalist theories share an explanatory gap: neither explains "why art status should matter aesthetically rather than merely sociologically." Which of the following can be most reliably inferred from this shared gap?
CORRECT: B The passage says both theories explain the conditions for art status — institutional conferral and creator intention respectively — but neither explains why that status has aesthetic rather than merely bureaucratic significance. This is a precise distinction between theories of art-recognition (what makes something count as art) and theories of art-value (why art status matters). B captures this exactly. A says both are "empirically false" — the passage does not say they are false; it says they leave an explanatory gap. An incomplete theory is not the same as a false one. C invents a third-factor solution and implies both theories "wrongly excluded" intrinsic properties — the passage does not make this prescriptive claim. D attributes to both theorists the explicit view that art and aesthetic value are distinct — the passage identifies a gap in their accounts, not a deliberate theoretical commitment.
3
Levinson's intentionalist theory faces an infinite regress: defining art status by reference to prior artworks requires those prior artworks to be defined in turn, tracing back until one must posit "ur-artworks" whose status is simply stipulated. A defender of Levinson might respond: "The regress terminates harmlessly in early human artefacts made with proto-aesthetic intent — these are the primitive cases, and the chain of artistic intention runs forward from them." Which logical problem most seriously affects this response?
CORRECT: B Levinson's theory is explicitly historical and relational — art status comes from the creator's intention to be regarded in the way prior artworks have been. The theory was designed to escape essentialism (the view that some intrinsic property makes something art). But positing ur-artworks whose status is primitive — i.e., grounded in something intrinsic about them, like proto-aesthetic intent, rather than conferred by prior artworks — reintroduces precisely the essentialism the theory was built to avoid. The solution defeats itself. A invents an affirming-the-consequent fallacy that doesn't map onto the logical structure of the response. C describes a genetic fallacy about origins — but Levinson's theory is explicitly about historical chains of intention, so appeal to historical origins is built into the theory, not a fallacy within it. D identifies question-begging — real but less precise: the deeper issue is the self-undermining of the relational approach by the introduction of intrinsic ur-art status.
4
The passage presents what might be called the "permissiveness paradox" of institutional theory: the theory's greatest virtue — its ability to accommodate anything as art given the right institutional context — is simultaneously its greatest liability, since it severs art status from any connection to aesthetic value or perceptual quality. What makes this specifically a paradox rather than simply a trade-off between theoretical virtues?
CORRECT: B The paradox is self-referential: the property that makes the theory succeed as a definition (permissiveness) is simultaneously what makes it fail as an explanation (disconnection from aesthetic value). The definitional success is constitutively tied to the explanatory failure — not a trade-off between independent properties, but the same feature doing two opposite jobs. A correctly notes a trade-off exists — this is the strong distractor — but misses the self-referential structure: it isn't merely that two virtues pull in opposite directions; it's that the single virtue is itself the condition of the liability. C identifies an interesting historical irony about Duchamp — real and clever, but it's not the paradox the passage describes. D identifies a retroactive undermining — interesting but not what the passage calls the theory's "greatest virtue" and "greatest liability" as two faces of the same feature.
Passage 1 Score
/4

P 02
Kant's Sublime, Environmental Aesthetics & the Anthropocentrism Problem
Read the Passage

Kant's analysis of the mathematical and dynamical sublime locates the aesthetic experience not in the natural object but in the subject's rational recovery from its apparent power. The sequence is well-known: a vast mountain range or violent storm first overwhelms the imagination's capacity to comprehend it as a whole; this initial failure produces a felt inadequacy that is then superseded when reason recognises that its own capacity for totality — its ability to think infinity as a concept — exceeds any sensory magnitude. The natural object occasions the experience but does not constitute it; the sublime is ultimately a mode of self-knowledge in which the subject discovers its rational vocation through the spectacle of natural force. What appears to be an aesthetic response to nature is, on Kant's account, an aesthetic response to the subject's own rational supersensibility.

Environmental aestheticians have found this anthropocentric structure philosophically and ethically unsatisfying. Carlson's "positive aesthetics" argues that appropriate aesthetic appreciation of nature requires attending to it as what it actually is — a system of ecological processes, geological formations, and evolutionary histories — rather than as a backdrop for human self-discovery. The cognitive model Carlson proposes holds that correct natural-scientific knowledge provides the framework within which natural environments should be aesthetically appraised, just as knowledge of artistic conventions provides the framework for appreciating art. Brady's "multi-modal" account introduces imagination as a further resource: imagination, responsively constrained by perceptual attention to the natural object, generates aesthetic responses that are neither purely projective (importing human concepts onto nature) nor purely cognitive (reducing aesthetic experience to scientific description). Brady's account attempts to preserve the phenomenological richness of aesthetic engagement with nature while avoiding both Kantian anthropocentrism and Carlsonian scientism.

The deeper difficulty these accounts collectively face is whether there is a coherent concept of "nature appreciated on its own terms" that does not covertly reintroduce a human framework. Carlson's appeal to natural science as the appropriate framework simply replaces one human conceptual scheme (art-historical and philosophical) with another (scientific). Brady's imaginative engagement is constrained by "perceptual attention" — but perception is always the perception of a human subject with a human perceptual apparatus. The aspiration to appreciate nature free from anthropocentric imposition may be coherent as a regulative ideal — a standard against which human-centred distortions can be identified and partially corrected — without being achievable as a positive programme for aesthetic practice.

Questions · Passage 02
5
The passage argues that both Carlson's cognitive model and Brady's multi-modal account fail to escape anthropocentrism — Carlson replaces artistic frameworks with scientific ones (still human), Brady's perceptual constraint is still humanly conditioned. Which of the following, if true, most strengthens this argument?
CORRECT: C The passage's argument is that Carlson's scientific framework replaces one human conceptual scheme with another — still anthropocentric. Option C strengthens this precisely: if scientific taxonomy is itself a human construction reflecting human interests and perceptual capacities, then Carlson's "natural science as the correct framework" is not a view from nowhere but another human-framed perspective. This directly supports the passage's claim that Carlson's move doesn't escape anthropocentrism. A shows cultural variation in aesthetic response — this supports the general anthropocentrism claim but addresses Brady's perceptual constraint more than Carlson's scientific framework specifically. B actually weakens the argument by suggesting evolved perceptual sensitivity provides some non-anthropocentric grounding. D reports Carlson's critique of Brady — relevant to the debate between them but doesn't strengthen the passage's claim that both fail to escape anthropocentrism.
6
The passage concludes that the aspiration to appreciate nature free from anthropocentric imposition may be coherent "as a regulative ideal" without being achievable "as a positive programme." Which of the following can be most reliably inferred from this distinction?
CORRECT: B A regulative ideal is a standard that guides and constrains practice without being fully achievable — it retains normative force as a benchmark even when it cannot be a complete positive programme. Option B captures this precisely: the ideal can identify distortions and partially correct them, even if full realisation is impossible. A draws the opposite inference: "no practical value" from "not fully achievable" — but regulative ideals are practically valuable precisely as orienting standards, and the passage says they can "partially correct" distortions. C concludes Kant's anthropocentrism is vindicated — but the passage says the aspiration to non-anthropocentrism retains coherence as a regulative ideal, which is incompatible with Kantian anthropocentrism being the whole story. D attributes an implicit ranking that the passage does not make.
7
Paragraph 1 describes Kant's sublime in detail before introducing the environmental aesthetics responses in paragraph 2. What is the most plausible reason the author begins with Kant rather than directly presenting the environmental aesthetics debate?
CORRECT: B The structural logic is clear: Kant is the problem, Carlson and Brady are responses to Kant. Presenting Kant first establishes what specifically is being corrected — the anthropocentric subordination of natural aesthetic experience to human self-knowledge — which makes the subsequent accounts' ambitions intelligible and their potential failures meaningful. Without Kant, the reader cannot evaluate what Carlson and Brady are trying to achieve. A attributes the sequencing to chronological convention — true historically but this doesn't explain the structural function the Kant section serves for the argument. C invents a methodological distinction between philosophical and empirical aesthetics that the passage doesn't acknowledge. D attributes an implied endorsement of Kant — but the passage is critical of Kant's anthropocentrism throughout.
8
Carlson argues that correct natural-scientific knowledge provides the appropriate framework for aesthetic appreciation of nature — "just as knowledge of artistic conventions provides the framework for appreciating art." For this analogy to support Carlson's position, which of the following must be assumed?
CORRECT: B The analogy works only if the structure of the two relationships is the same: science-to-nature must parallel conventions-to-art. Specifically, the assumption is that science specifies the correct categories in terms of which nature should be aesthetically experienced — just as art-historical conventions specify the correct categories for artworks. Without this structural parallel, the analogy provides no support. A makes a stronger claim — that science provides objective, framework-independent access — but this isn't required for the analogy; the analogy requires structural parallelism, not transcendence of frameworks. C says appreciating art is primarily rule-following — this might seem required for the analogy, but the analogy's force doesn't depend on how aesthetic appreciation of art is characterised; it depends on whether the knowledge-framework relationship is parallel in both domains. D claims natural environments are culturally constituted — Carlson would resist this; natural environments exist independently of culture, which is precisely why science (not art history) is the appropriate framework.
Passage 2 Score
/4

P 03
Abstract Expressionism, Greenberg's Formalism & the Cold War Politics of Aesthetic Autonomy
Read the Passage

Clement Greenberg's formalist account of modernism argued that each art form's historical progress consisted in a self-critical purification: the elimination of effects borrowed from other arts until only what was irreducibly specific to the medium remained. For painting, this meant flatness and the acknowledgment of the picture plane. Abstract Expressionism, in Greenberg's narrative, represented the culmination of this trajectory: a painting practice that had shed narrative, illusion, and decorative concern in favour of pure pictorial experience. The critical apparatus Greenberg constructed elevated a specific group of New York painters — Pollock, de Kooning, Newman, Rothko — to canonical status on grounds that claimed to be purely aesthetic and historical, insulated from the social and political conditions of their production.

The revisionist art history of the 1970s and 1980s challenged this insulation. Serge Guilbaut's account argued that the rise of Abstract Expressionism to international prominence was not simply the story of superior aesthetic achievement but was structurally enabled by Cold War cultural politics. The State Department circulated American abstract art internationally as evidence of cultural freedom and democratic vitality, contrasting it with the doctrinal constraints of Soviet socialist realism. The apparent freedom from social content that Greenberg celebrated as a formalist virtue was precisely what American cultural diplomacy required: art that was manifestly free, individualist, and non-ideological served as a weapon in a cultural contest whose ideological character was thereby concealed. The artists themselves were often left-wing and politically engaged; their work was co-opted rather than commissioned.

The tension between these accounts raises the question of whether aesthetic value and political instrumentalisation are separable. Greenberg's formalist defenders argue that the diplomatic use of Abstract Expressionism neither created its aesthetic qualities nor diminishes them: a painting's formal properties exist independently of what governments do with it. Critics of this position argue that the formalist framework itself was ideologically motivated — that the insistence on aesthetic autonomy and the elevation of pure pictorial values were not politically innocent but served to depoliticise art at a moment when depoliticisation had specific political uses. On this reading, the claim of aesthetic autonomy was itself a political act, however sincerely held by its proponents.

Questions · Passage 03
9
Greenberg's formalist narrative describes modernism's progress as a process of self-critical purification. Which of the following best describes what is being "purified" in this account?
CORRECT: C Greenberg's purification is specifically about medium-specificity: eliminating effects borrowed from other art forms to arrive at what is irreducibly specific to each medium. For painting this meant eliminating narrative (from literature), illusionism (from sculpture and theatre), and decorative effects until only flatness remained. C states this directly. A concerns commercial influence, which is not Greenberg's formalist framework. B concerns political content, which is related to but distinct from the medium-specificity purification Greenberg described. D concerns academic convention, which is a plausible description of early modernism but not Greenberg's specific account of the purification process.
10
Guilbaut's revisionist account argues that Abstract Expressionism's rise was "structurally enabled" by Cold War cultural politics. What is the significance of the word "structurally" in this claim?
CORRECT: B The passage explicitly notes that the artists "were often left-wing and politically engaged; their work was co-opted rather than commissioned." The word "structurally" therefore signals a relationship of fit and co-option rather than direction: the movement's characteristics happened to align with Cold War diplomatic needs, and institutions exploited this alignment. B captures this distinction between structural fit and intentional conspiracy. A says the State Department deliberately commissioned the work, which the passage explicitly contradicts. C concerns art market economics, which is a related structural argument but not what Guilbaut's specific claim about cultural diplomacy emphasises. D makes a point about the irrelevance of individual politics, which is an implication of the structural argument but not what "structurally enabled" specifically means.
11
The critics who argue that "the claim of aesthetic autonomy was itself a political act" are making which type of argument?
CORRECT: D The passage specifically says the critics argue aesthetic autonomy was a political act "however sincerely held by its proponents." This qualifier signals that the argument is not about Greenberg's personal motives or sincerity but about the structural ideological function of the formalist framework regardless of intent. D captures this structural ideological argument precisely. A claims historical evidence of coordination or payment, which is an empirical conspiracy argument the passage does not make. B describes an ad hominem attack, but the passage is not attacking Greenberg's person. C describes a performative contradiction, which is a different logical structure from the structural/ideological argument the passage presents.
12
Greenberg's formalist defenders argue that a painting's formal properties "exist independently of what governments do with it." Which of the following most precisely identifies the assumption this defence requires?
CORRECT: C The defence requires that aesthetic analysis and political analysis are separable domains: one can establish a painting's formal qualities through aesthetic attention, and those conclusions are not affected by what is established through political-historical analysis of how the painting was used. C states this separation of domains precisely. A states a stronger claim about aesthetic value being object-intrinsic, which would support the defence but is a bigger commitment than the defence strictly requires. B concerns artist awareness, which is irrelevant to the formal qualities of the work. D claims formalism is more reliable than contextual criticism, which is a methodological preference that goes beyond what the specific defence requires.
Passage 3 Score
/4

P 04
The Readymade, Danto's Transfiguration & the End of Art
Read the Passage

Duchamp's submission of a commercially manufactured urinal to the 1917 Society of Independent Artists exhibition under the title "Fountain" has become the most debated object in twentieth-century art history. Its significance lies not in its visual properties, which are unremarkable, but in the philosophical problem it poses. If "Fountain" is art — and it is now universally accepted as such — what makes it art cannot be any perceptual quality it possesses, since it is perceptually identical to every other Bedfordshire urinal of its type. The readymade forces the question of art's identity: what distinguishes an artwork from a physically identical non-artwork?

Arthur Danto's answer, developed through the concept of the "artworld" and subsequently in "The Transfiguration of the Commonplace," is that what makes something an artwork is its relationship to art-historical theory. Warhol's Brillo Boxes are visually indistinguishable from the commercial packaging in a warehouse, but they are artworks because they embody a theory — a complex of art-historical, philosophical, and cultural claims about representation, authenticity, and the relationship between art and commercial culture. To see the Brillo Boxes as art requires having the theoretical background that allows one to interpret them as making those claims. The artworld is not, for Danto, an institution that confers status by fiat (as in Dickie's account) but an atmosphere of theory that constitutes the conditions of possibility for arthood.

Danto subsequently developed the "end of art" thesis: the claim that the history of art, understood as a developmental narrative in which successive movements tested the limits of what art could be, reached its culmination with Pop Art's discovery that there are no remaining perceptual or formal constraints on arthood. After Warhol, anything can be art. This is not the death of art-making but the end of art history as a narrative of progressive self-definition. Danto's critics note that the end of art thesis is in tension with his theory of artistic meaning: if anything can be art and art no longer has a developmental direction, on what basis does one artwork mean something different from another visually identical one? The conditions that gave artworks their meaning were themselves products of a specific art-historical moment that the end of art dissolves.

Questions · Passage 04
13
Danto's account of the artworld as "an atmosphere of theory" differs from Dickie's institutional account in which specific way?
CORRECT: B The passage explicitly contrasts the two: "The artworld is not, for Danto, an institution that confers status by fiat (as in Dickie's account) but an atmosphere of theory that constitutes the conditions of possibility for arthood." Dickie's account makes art status a social act of designation; Danto's makes it a matter of theoretical embeddedness. B states this distinction precisely. A concerns historical scope, which is not the distinction the passage draws. C concerns the meaning requirement, which is an implication of Danto's account rather than the specific contrast with Dickie. D concerns accessibility, which is not the distinction the passage identifies.
14
The "end of art" thesis claims that art history as a narrative of progressive self-definition has culminated. What does this claim imply about art-making after Warhol, according to the passage?
CORRECT: C The passage explicitly states: "This is not the death of art-making but the end of art history as a narrative of progressive self-definition." Art continues; the developmental narrative does not. C states this: making continues freely because constraints have been removed, but without the progressive narrative that gave movements historical significance. A says art-making becomes impossible, which the passage directly contradicts. B says art becomes decorative, which is not what the passage implies. D says art becomes purely conceptual, which is a plausible characterisation of some post-Pop art but is not what the passage's end-of-art thesis claims.
15
The tension the passage identifies between Danto's "end of art" thesis and his theory of artistic meaning arises because:
CORRECT: D The passage identifies the tension precisely: "if anything can be art and art no longer has a developmental direction, on what basis does one artwork mean something different from another visually identical one? The conditions that gave artworks their meaning were themselves products of a specific art-historical moment that the end of art dissolves." D restates this tension: the meaning-constituting theoretical background requires art history to have a developmental direction, but the end of art thesis dissolves that direction. C is close but frames it as a question about distinguishing artworks from non-artworks rather than about what distinguishes one artwork's meaning from another's visually identical counterpart. A says the end of art implies artworks mean nothing, which overstates it. B raises a historical priority objection about Duchamp vs Warhol, which is not the tension the passage identifies.
16
The readymade forces "the question of art's identity" because it is perceptually identical to a non-artwork. Which response to this question does Danto's account provide, and what does it leave unexplained?
CORRECT: B Danto's answer to the identity question is theoretical embeddedness: what distinguishes the Brillo Box from warehouse packaging is the art-historical theory that gives it meaning. But this generates an explanatory gap: why does theoretical embeddedness produce specifically aesthetic significance rather than merely intellectual or philosophical interest? This parallels the gap identified in passage one about the institutional theory: knowing something is institutionally art doesn't explain why institutional recognition matters aesthetically. B identifies this parallel gap in Danto's account. A attributes Dickie's institutional conferral to Danto, which the passage specifically distinguishes. C raises a question about whether unremarkable objects can bear philosophical weight, which is a different and more practical objection. D attributes Levinson's intentionalist account to Danto, confusing the two theories discussed in the first passage.
Passage 4 Score
/4

P 05
Music, Performance & the Ontology of the Musical Work
Read the Passage

The ontology of music confronts a problem that does not arise for painting or sculpture: there are many performances of Beethoven's Fifth Symphony, and each differs from the others in tempo, dynamics, interpretation, and even instrumentation, yet we standardly say we are hearing the same work in each case. What is the work, if it is not identical with any of its performances? The score provides an obvious answer: the work is the abstract object defined by its notation, of which performances are instantiations. Nelson Goodman's allographic theory formalises this: music belongs to the category of allographic arts — those in which a notational system defines identity — as opposed to autographic arts like painting, where identity is constituted by the history of production and no notation could substitute for the original physical object.

Goodman's account generates a consequence that many find counterintuitive: any performance that satisfies all the score's notational requirements is a correct performance of the work, regardless of interpretive quality. A mechanically competent but artistically dead rendition of the Fifth is, on this account, as correct a performance of the work as Klemperer's. This seems to sever the connection between performance quality and work identity in a way that fails to capture what matters about musical performance. Jerrold Levinson's historical account attempts a correction: the work is not an abstract notational structure but a particular sound structure as indicated in a score at a particular historical moment by a particular composer, meaning that performance practice knowledge of the period and idiom is constitutive of the work's identity. On Levinson's account, a period-appropriate rendition stands in a different relationship to the work than a modern orchestral performance using instruments Beethoven never heard.

The practical debate about performance practice in early music intersects with this theoretical dispute. Historically informed performance advocates argue that they are getting closer to the work by recovering authentic instrumentation, tuning, and stylistic conventions. Their critics argue that "authenticity" is an impossible ideal: performers cannot recover Beethoven's listening context, and a reconstruction using period instruments played in a modern concert hall by players trained in the modern tradition is not historically authentic in any meaningful sense. A deeper objection is that the quest for authenticity misidentifies what matters about performing music: what matters is not fidelity to a historical original but the quality of the musical experience produced, which may be better served by imaginative modern interpretation than by scholarly reconstruction.

Questions · Passage 05
17
Goodman distinguishes allographic from autographic arts based on whether a notational system can define identity. What consequence does this distinction have for the question of forgery in music versus painting?
CORRECT: C In painting, identity is constituted by the history of production: a forged Vermeer is a different object from the original even if visually indistinguishable. In music, identity is defined by the notational requirements: any performance satisfying those requirements is a correct performance of the work. There is no equivalent to the autographic original whose history of production could be falsified. A correct performance is a genuine instantiation regardless of performer or context; the concept of a forged performance is incoherent under Goodman's allographic theory. C states this consequence correctly. A describes performance fraud, which is a different phenomenon from forgery in Goodman's sense. B makes a claim about commercial value, which is a consequence of the theory but not the direct implication for forgery. D says forgery is irrelevant to allographic arts, which states the conclusion without explaining why.
18
Levinson's historical account holds that performance practice knowledge "is constitutive of the work's identity." Which objection to Goodman does this account address, and what new problem does it introduce?
CORRECT: B The passage explicitly says Levinson addresses the severance of performance quality from work identity: by making historical context constitutive, performances that align with the work's historical norms stand in a different relationship to the work than those that do not. The new problem is that the same score at a different historical moment defines a work with different constitutive norms, so "Beethoven's Fifth" as correctly performed in 1820 and "Beethoven's Fifth" as correctly performed in 2025 may be different works on this account. B identifies both the addressed problem and the introduced problem correctly. A raises an indeterminacy problem for undocumented works, which is a real issue but not the most significant one the account introduces. C identifies a contested criteria problem, which is a practical difficulty rather than the theoretical problem B identifies. D raises an accessibility objection, which is a practical criticism rather than a theoretical problem the account introduces.
19
The "deeper objection" to historically informed performance argues that authenticity misidentifies what matters about performing music. What assumption about musical value does this objection reveal?
CORRECT: C The "deeper objection" is that authenticity misidentifies what matters about performing music, and what matters is "the quality of the musical experience produced." This reveals an assumption that musical value is experiential rather than historical or relational: what counts is whether the performance generates a valuable listening experience, not whether it correctly instantiates a historical original. C captures this assumption. A says musical value is entirely subjective, which overstates — the objection is not about subjectivity but about what the relevant criterion is. B says fidelity to the work matters more than fidelity to historical practice, which is a Levinson-style response, not the experiential objection the passage describes. D identifies the impossibility of authenticity as the premise without specifying the underlying value assumption that the objection reveals.
20
The passage presents three positions on musical identity: Goodman's notational account, Levinson's historical account, and the experiential account implicit in the "deeper objection." Which of the following most accurately characterises what is at stake between them?
CORRECT: D The three accounts disagree about the locus of musical identity and value. Goodman locates it in abstract notation; Levinson in the historically situated compositional act and its context; the experiential account in the performance's phenomenological qualities. Each location generates different norms for what makes a performance correct or valuable. This is not a progressive refinement (A) because each account introduces problems as well as solutions. It is not purely empirical (B) because the disagreement is normative: what should count as a correct performance. It is not only methodological (C) because the methodological differences follow from substantive disagreements about where value is located. D captures the substantive normative disagreement that underlies the methodological differences.
Passage 5 Score
/4
Arts · Total Score
/20
Category 09
Literature
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
The Unreliable Narrator: Implied Author, Interpretive Communities & the Limits of Diagnosis
Read the Passage

Wayne Booth's concept of the unreliable narrator — introduced in The Rhetoric of Fiction — identified a gap between what the narrator reports and what the "implied author" intends the reader to understand. Booth posited the implied author as a construct distinct from both the real author and the narrator: the textual incarnation of the author's second self, encoding the norms and values against which the narrator's testimony can be assessed. Unreliability, on this account, is a relational property — not a feature of the narrator alone but of the gap between narrator and implied author. The concept proved enormously productive: it gave critics a technical vocabulary for describing the ironic distance in first-person texts from Huckleberry Finn to The Remains of the Day without reducing that distance to simple authorial commentary.

The concept's productivity, however, conceals a methodological instability. The implied author is supposed to provide an independent standard against which narrator unreliability can be measured — but the implied author is itself a reader's construction derived from the same text the narrator narrates. There is no access to the implied author's norms except through reading the text, and reading the text is precisely the activity whose reliability the concept is meant to assess. Feminist and poststructuralist critics have pressed this point: if the implied author's normative framework is not a stable textual property but a construct that varies across interpretive communities, then unreliability is not a text-immanent feature waiting to be diagnosed but an interpretive effect produced by different readers constructing different implied authors. A narrator who is "unreliable" to one interpretive community may be entirely "reliable" to another — not because the text has changed but because different normative frameworks are being applied.

The most defensible position, given this instability, is that unreliability is a gradient rather than a binary — some readings are more textually warranted than others — and that the act of identifying unreliability is constitutive rather than merely descriptive: the critic does not discover unreliability in the text but partially produces it through the implied author she constructs. This does not collapse into radical relativism — not all readings are equally warranted — but it does mean that confident claims to have neutrally diagnosed textual unreliability are methodologically naïve. The history of unreliable narrator criticism is substantially a history of critics reading their own normative frameworks into texts and presenting the resulting interpretive effect as a text-immanent property.

Questions · Passage 01
1
The passage argues that unreliability is not a text-immanent property but an interpretive effect produced by different readers constructing different implied authors. Which of the following, if true, most seriously weakens this argument?
CORRECT: A The passage's argument depends on interpretive community variation: different frameworks produce different unreliability attributions. Option A directly undermines this by showing cross-cultural convergence — if readers from radically different backgrounds agree on the same unreliable narrators, then varying normative frameworks are not the driver; textual features must be doing the work. D is very tempting — it offers a cognitive universalist mechanism — but it's a theoretical proposal, whereas A presents empirical evidence of convergence. The passage's claim is about actual variation across interpretive communities; A counters it with evidence of actual convergence. B is a concession within Booth that doesn't address whether the passage's stronger constructivist conclusion follows — Booth acknowledging the reader constructs the implied author doesn't by itself establish convergence. C confirms the passage's argument rather than weakening it.
2
The passage's "most defensible position" holds that unreliability is a gradient and that identifying it is constitutive rather than merely descriptive. Which of the following can be most reliably inferred from this position?
CORRECT: A The passage says identifying unreliability is constitutive — the critic partially produces it by constructing a particular implied author. A reliable inference is that a critic who presents unreliability as a text-immanent discovery without specifying her implied author construction has given an incomplete account — she has produced an interpretive outcome while concealing the normative framework that generated it. B converts the gradient claim into a practical prescription to replace binary attributions — the passage does not prescribe this methodological change. C slides into relativism — which the passage explicitly rejects ("not all readings are equally warranted"). D introduces the real author's stated intentions as an external check — but the passage's whole argument is that the implied author is a textual construct, not reducible to the real author's intentions.
3
The concept of the unreliable narrator depends on the implied author as an independent standard, yet the implied author is derived from the same text as the narrator's testimony. The passage calls this a "methodological instability." What makes this specifically a methodological problem rather than simply a logical one?
CORRECT: B The distinction the passage draws is between the concept as a theoretical construct (where the problem doesn't immediately arise — you can define implied author as distinct from narrator without contradiction) and the concept as an applied critical tool (where the problem does arise — because applying it requires extracting the implied author from the same text you're using the implied author to evaluate). The methodological instability is in the application, not the definition. A argues the problem is logical — but the concept isn't self-contradictory; it only becomes unstable when operationalised. C says explicit disclosure of the implied author construction would fix it — but the passage's deeper point is that the instability is not a correctable procedural error; it reflects the fact that the standard and the testimony share the same source. D offers a correct but superficial disciplinary distinction without identifying what makes the problem specifically methodological in this case.
4
The passage argues that "not all readings are equally warranted" — implicitly rejecting radical relativism — while maintaining that unreliability is partly constituted by the reader's construction. For these two claims to be held simultaneously without contradiction, which of the following must be assumed?
CORRECT: B To hold both claims simultaneously — some readings are more warranted AND unreliability is partly constituted by the reader — the text must do partial but not total work. It constrains construction without determining it uniquely: some implied author constructions fit the textual evidence better than others, even though no construction is read off neutrally. This is the "moderate constructivism" that the passage's position requires. A demands a text-independent standard — but the passage's whole position is that the standard is partly derived from the text; an external standard would dissolve the constructivist premise. C says the implied author is a stable textual property — this is exactly what the passage denies; asserting it would abandon the constructivist position. D claims incoherence — but D is what the passage's position needs to show is false; the passage asserts the two claims are compatible, so D cannot be an assumption it makes.
Passage 1 Score
/4

P 02
Aristotelian Catharsis: Three Readings & the Structural Ambiguity of Audience-Response Aesthetics
Passage Timer
10:00
Read the Passage

The interpretation of catharsis in Aristotle's Poetics has generated more sustained controversy than almost any other problem in the history of aesthetics, in part because Aristotle provides only the briefest specification of the concept in the context of tragedy and never returns to it with the analytical care he devotes to other elements of poetic form. Three main readings have been proposed. The purgation reading, owing much to the medical connotations of the Greek term, holds that tragedy purges pity and fear from the audience — removing excess emotional charge and leaving the spectator in a state of restored emotional equilibrium. The clarification reading, associated with Nussbaum and Halliwell among others, holds that catharsis is primarily cognitive: the audience gains a refined, clarified understanding of the emotions of pity and fear through controlled theatrical engagement with them, emerging with greater emotional self-knowledge. A third, less prominent reading locates catharsis in the plot rather than the audience: the tragic narrative itself achieves a form of formal resolution or completion — a narrative catharsis — through the recognition and reversal that structure it.

The controversy cannot be resolved by philological analysis alone, because Aristotle's only extended use of catharsis outside the Poetics — in the Politics, in the context of the effects of music on the soul — is itself interpretively contested and has been marshalled in support of both purgation and clarification readings. More fundamentally, the controversy may reflect a genuine tension within Aristotle's theoretical project: the Poetics appears to be simultaneously defending poetry against Plato's moral objection (that tragedy inflames dangerous passions) and advancing a positive account of poetry's distinctive cognitive contribution (mimesis as a form of universal knowledge superior to history's particularity). The purgation reading supports the defensive project — tragedy is safe because it discharges dangerous emotions. The clarification reading supports the positive project — tragedy illuminates. These two projects may not require the same account of how tragedy affects its audience, and Aristotle may not have fully reconciled them.

The catharsis controversy thus illuminates a structural ambiguity in all aesthetics of reception: any account that grounds art's value in its effects on audiences must specify what kind of effects are relevant, and the candidate categories — pleasure, cognitive enrichment, emotional regulation, moral improvement — are not obviously commensurable. The tendency of successive interpreters to discover in catharsis whatever effect they most value aesthetically suggests that the concept may function less as a precise technical term than as a productive theoretical placeholder — an invitation to theorise the relationship between art and audience that successive generations have filled with their own priorities.

Questions · Passage 02
5
The passage argues that catharsis may function as a "productive theoretical placeholder" rather than a precise technical term — an invitation that successive interpreters fill with their own priorities. Which of the following, if true, most strengthens this argument?
CORRECT: A The "productive placeholder" argument claims interpreters fill catharsis with their own aesthetic priorities. Option A provides precisely the pattern this predicts: dominant readings track dominant aesthetic values across time — this is exactly what would happen if catharsis were a placeholder that each era fills with its own priorities. B would settle the debate, which would refute the placeholder argument by showing catharsis has a determinate meaning after all. C shows the readings are incompatible — this is consistent with both the placeholder argument and with the view that one reading is simply correct; it doesn't specifically strengthen the placeholder hypothesis over the "one reading is right" hypothesis. D attributes the ambiguity to translation — this externalises the ambiguity to a linguistic artefact rather than to the productive theoretical openness the placeholder argument requires.
6
The passage identifies a tension within Aristotle's theoretical project: the purgation reading supports a defensive project (poetry is safe), while the clarification reading supports a positive project (poetry illuminates). Which of the following can be most reliably inferred from this identification of tension?
CORRECT: B The passage says the two theoretical projects "may not require the same account of how tragedy affects its audience, and Aristotle may not have fully reconciled them." This implies the ambiguity in catharsis is not merely a textual accident (Aristotle forgot to elaborate) but may be structurally generated — two different projects pulling toward two different accounts, both present in the same text. A attributes intellectual inconsistency and a failure to notice it — the passage is more charitable: Aristotle "may not have fully reconciled them," suggesting he was managing two genuine theoretical pressures. C privileges purgation as the "primary purpose" — the passage presents both projects as present without prioritising either. D makes an evaluative claim about sophistication that the passage never makes.
7
The passage ends by suggesting that catharsis may be a "productive theoretical placeholder" that successive generations fill with their own priorities. This observation functions in the passage primarily to:
CORRECT: B The placeholder suggestion explains rather than dismisses the controversy: the concept has generated sustained engagement precisely because its productive openness allows each generation to work out its own account of how art affects audiences. This reframes persistent controversy as a feature of the concept's fertility rather than a symptom of scholarly failure. A reads "placeholder" as discrediting — but "productive theoretical placeholder" is not a dismissal; productive placeholders are theoretically valuable precisely because they sustain engagement. C attributes an implicit endorsement of the narrative reading — the passage never privileges any reading and explicitly notes the narrative reading is "less prominent," suggesting the author doesn't privilege it. D proposes a genealogical shift as a resolution — the passage makes a genealogical observation, but framing it as a "resolution" of the controversy goes beyond what the passage claims.
8
The passage argues that "the controversy cannot be resolved by philological analysis alone" because the extended use of catharsis in the Politics is itself interpretively contested. Which logical principle does this argument implicitly rely on?
CORRECT: C The argument's logical structure is: philologists appeal to the Politics passage as additional evidence for the Poetics' meaning; but the Politics passage is itself interpretively contested and has been read to support both purgation and clarification. A contested piece of evidence cannot adjudicate between two contested interpretations — it merely adds a second disputed datum. The principle is: evidence that is itself in dispute cannot settle the dispute it is intended to resolve. A is a reasonable reconstruction of the implicit methodology the passage assumes — that same-author cross-textual evidence could in principle resolve such questions — but it makes the claim too procedurally specific. B makes a sweeping claim against philological method generally — the passage says "alone," implying philology contributes something but isn't sufficient; it doesn't dismiss it entirely. D implies philosophical analysis is the recommended alternative — the passage identifies the tension between Aristotle's projects as another source of the difficulty, but doesn't position it as a solution.
Passage 2 Score
/4

P 03
The Death of the Author, Intertextuality & the Politics of Meaning
Passage Timer
10:00
Read the Passage

Roland Barthes's 1967 essay "The Death of the Author" argued that the figure of the Author — capitalised to distinguish the theoretical construct from the biographical person — had functioned as a mechanism for closing the text's meaning by grounding it in a final, authoritative intention. When we explain a text by appealing to what the author meant or intended, we restrict the text's meaning to a single authoritative reading and suppress the plurality that the text's own language generates. Barthes proposed the "death" of this authoritative figure as a liberation of the reader: once the author is removed as the arbiter of meaning, the text becomes a multidimensional space in which meanings multiply without any one of them being definitively correct. The essay's famous conclusion announces the birth of the reader as the price of the Author's death.

Barthes developed this through the concept of intertextuality: a text is not the expression of an original authorial mind but a tissue of citations drawn from innumerable centres of culture. Meaning does not originate in the author's consciousness but circulates through the pre-existing codes, conventions, and discourses that language brings to any utterance. The author who thinks she is expressing a unique personal vision is in fact traversed by these discourses; the text she produces is constituted by prior texts, whether she acknowledges this or not. Intertextuality is therefore not primarily a technique of literary criticism — identifying specific allusions and echoes — but an ontological claim about what texts are: not expressions of a subject's interiority but sites where cultural codes intersect.

The political stakes of this argument were not lost on its critics. E.D. Hirsch's intentionalist counter-argument held that textual meaning just is the author's intended meaning, and that acknowledging this is necessary for the practice of interpretation to be disciplined rather than arbitrary. If readers are free to generate whatever meanings they choose, the text loses its capacity to bear witness, to contest power, to testify. This is not a merely academic concern: the evacuation of authorial intention from texts that function as testimonies, contracts, or historical records has practical consequences that literary theory cannot afford to ignore. Foucault's "What is an Author?" offered a middle position: the author-function, rather than the biographical author, determines the discursive rules by which certain texts are read, attributed, and authorised, making the author a category that is socially produced and institutionally regulated rather than either a stable guarantor of meaning or an irrelevant fiction.

Questions · Passage 03
9
Barthes argues that appealing to authorial intention "closes" textual meaning. What precisely does he mean by this, and what does he propose as an alternative?
CORRECT: C The passage explains the closure precisely: appealing to the Author "restricts the text's meaning to a single authoritative reading and suppresses the plurality that the text's own language generates." The alternative is the reader as the productive site of meaning, with "meanings multiply without any one of them being definitively correct." C states both moves accurately. A concerns biographical research specifically, which is related but is not what the passage identifies as the mechanism of closure. B proposes psychoanalytic methods as the alternative, which is not Barthes's proposal. D says closure prevents politically challenging readings, which is an implication but not the mechanism Barthes identifies.
10
Barthes's concept of intertextuality is described as "not primarily a technique of literary criticism but an ontological claim." What is the significance of this distinction?
CORRECT: B The passage distinguishes the ontological from the methodological: intertextuality as ontological claim says texts are constituted by intersecting codes, not that critics should track allusions. B captures this: the distinction matters because it shifts the claim from "some texts cite others" (a scholarly observation) to "all texts are intersections of prior codes" (a claim about what texts fundamentally are). A says it applies to all language use, which is a plausible implication but not the specific significance of the technique-versus-ontology distinction. C invokes unfalsifiability, which is a consequence but not what the distinction means. D says applying it as a method misunderstands the concept, which is an implication that B's formulation makes possible but that goes beyond what B says.
11
Hirsch's intentionalist counter-argument appeals to the practical consequences of abandoning authorial intention for texts that function as testimonies or contracts. What is the logical structure of this appeal?
CORRECT: D The passage presents Hirsch's appeal as pointing to practical consequences: abandoning intention means testimonies cannot bear witness and contracts cannot function, and "literary theory cannot afford to ignore" these consequences. This is a pragmatic argument: whatever the theoretical merits of Barthes's position, the practical costs of applying it to certain text types justify maintaining intentionalism. D captures this structure. A describes a reductio, which would require showing Barthes's position is self-contradictory, not merely that it has bad consequences. B describes an analogy argument, which is not what the passage presents. C describes an epistemological argument about scholarly knowledge, which is related to Hirsch's concerns but is not the specific argument the passage attributes to him here.
12
Foucault's "author-function" is presented as a "middle position." Between which two positions does it mediate, and what specifically makes it a middle rather than a compromise?
CORRECT: C The passage explicitly identifies the two positions: "neither a stable guarantor of meaning nor an irrelevant fiction." Foucault's author-function mediates by treating the author as a real and consequential category — it does things in discourse — but as a socially produced and institutionally regulated category rather than the biographical individual whose psychological intention grounds meaning. This makes it a middle position because it accepts something from each extreme rather than simply splitting the difference. C states this precisely. A frames it as mediating between Barthes and Hirsch by finding a disciplining principle, which is partially right but misses the specific nature of the author-function as a socially produced regulatory category. B introduces formalism, which is not the dispute the passage is addressing. D describes a case-by-case contextual approach, which is not what Foucault proposes.
Passage 3 Score
/4

P 04
Postcolonial Literature, Language Choice & the Politics of the Novel
Passage Timer
10:00
Read the Passage

One of the most contentious debates in postcolonial literary studies concerns the choice of language for writers from formerly colonised societies. Ngugi wa Thiong'o's decision in the late 1970s to stop writing in English and write exclusively in Gikuyu, his Kenyan mother tongue, was both a political act and a theoretical intervention. Ngugi argued that the colonial language carries within its structure the mental framework of colonial domination: that writing in English, however resistant the content, perpetuates the colonisation of the mind because the language itself embeds the assumptions, aesthetic values, and conceptual frameworks of the colonising culture. Decolonisation of literature therefore requires linguistic decolonisation, not merely thematic decolonisation.

Chinua Achebe's counter-position is the most eloquent defence of using the colonial language against itself. Writing in a language does not commit the writer to the assumptions of those who historically wielded it; it commits the writer to the resources — syntactic, lexical, rhetorical — that the language makes available. Achebe argued that African writers using English were not capitulating to colonial culture but transforming English, bending it to bear the weight of an African experience it was never designed to carry. The English used in Things Fall Apart is not the English of the colonial administrator; it has been modified at the level of syntax, idiom, and reference to carry Igbo cultural logic. This is not assimilation to English but the colonisation of English by African linguistic and cultural structures.

The debate cannot be resolved by appeal to audience considerations alone, though those considerations are real. Writing in Gikuyu limits the immediate readership to those who know Gikuyu and requires translation to achieve wider circulation, reintroducing the linguistic asymmetries Ngugi sought to escape. Writing in English provides access to a global literary market but risks performing a postcolonial authenticity for Western readers hungry for narratives of otherness that confirm rather than challenge their assumptions. Gayatri Chakravorty Spivak's question in "Can the Subaltern Speak?" extends the problem: even writing in one's own language within the institutional framework of global literary culture, which is dominated by Western publishing, translation, and criticism, may reproduce the structures of epistemic imperialism that linguistic choice alone cannot address.

Questions · Passage 04
13
Ngugi's argument is that colonial language carries "the mental framework of colonial domination." What type of claim is this, and what evidence would most directly test it?
CORRECT: C Ngugi's claim is specifically about the structural properties of the colonial language: that English "carries within its structure the mental framework of colonial domination" and "embeds the assumptions, aesthetic values, and conceptual frameworks of the colonising culture." This is a structural-linguistic claim, not a psychological, sociological, or historical one. Testing it most directly requires examining whether English's structural properties actually force writers to reproduce colonial frameworks. C identifies this correctly. A tests psychological effects on writers, not the structural-linguistic claim. B tests distribution of cultural power, which is an institutional question distinct from the claim about language structure. D tests the historical origins of colonial languages, which is irrelevant to whether their current structure embeds colonial frameworks.
14
Achebe's claim that African writers "colonise English" rather than being colonised by it inverts Ngugi's framework. Which feature of his argument makes this inversion possible?
CORRECT: B Achebe's inversion depends on rejecting the view that English has fixed properties determined by its colonial history. He treats language as malleable: African writers can modify English at the level of syntax, idiom, and reference to make it carry African cultural logic. The language does not dominate the writer; the writer can shape the language. B captures this premise. A says thematic decolonisation is sufficient, but Achebe's argument is precisely that linguistic transformation — not merely thematic content — is what African writers achieve. C says English has already been transformed by indigenous contact, which is a different argument from Achebe's claim about the plasticity available to individual writers. D accepts Ngugi's structural claim and argues strategically, which is the opposite of Achebe's actual position.
15
Spivak's critique extends the problem beyond the Ngugi-Achebe debate by arguing that language choice alone cannot address the structures of epistemic imperialism. What does this extension imply about the terms of the debate itself?
CORRECT: D The passage says Spivak's critique extends beyond language choice to the "institutional framework of global literary culture, which is dominated by Western publishing, translation, and criticism." This implies that both Ngugi and Achebe have been debating the primary site of political negotiation incorrectly: the real site is institutional rather than linguistic. D captures this: Spivak's extension questions the shared premise of the debate. A and B try to declare a winner within the debate, which Spivak's position rejects by shifting the terrain entirely. C says literature is inadequate for decolonisation, which is stronger than what Spivak's institutional critique implies.
16
The passage notes that writing in English risks "performing a postcolonial authenticity for Western readers." What specific problem does this observation identify for Achebe's position?
CORRECT: C Achebe's claim is that African writers can transform English to carry African cultural logic. The performing-authenticity observation challenges this at the reception level: even a successfully transformed English may still be consumed by Western readers through a framework of exoticism — reading the "African" elements as exotic difference — leaving the colonial asymmetry between Western reader and African text intact regardless of the writer's transformative intentions. C identifies this gap between production (the writer's transformation) and reception (the reader's consumption) as the specific problem. A says it undermines Achebe's transformation claim directly, but the observation is about reception not about whether the transformation occurs. B raises a general reader-role argument, which is related but less precise than C. D says it is a general problem for cross-cultural literature, which may be true but does not address what the observation specifically implies for Achebe's position.
Passage 4 Score
/4

P 05
Genre, Form & the Novel as a Social Technology
Passage Timer
10:00
Read the Passage

Ian Watt's The Rise of the Novel argued that the novel as a literary form was the product of a specific social formation: the rise of individualism and the reading public in eighteenth-century England. Where earlier narrative forms — epic, romance, allegory — operated through conventional plots, typical characters, and universal settings, the novel privileged particularity: individual characters with specific proper names, particular settings at determinate historical moments, and plots arising from the specific circumstances of specific people. Formal realism, Watt called it — a narrative mode in which the literary work's truth-claim depends on its capacity to render the particular as particular rather than as an instance of a universal type. The novel was thus not merely a new genre but the literary expression of a new epistemological and social configuration in which the experience of individual subjects had become the primary locus of truth.

Watt's sociological account has been both productive and contested. Its productivity lies in connecting formal literary choices — point of view, temporal specificity, proper naming — to social history, showing that what appear to be technical decisions are simultaneously ideological ones. Its limitations are equally significant. Michael McKeon's more dialectical account complicates Watt's linear story by showing that early novels were not univocally committed to formal realism but were often generically unstable, mixing realist conventions with romance, allegory, and satire in ways that Watt's account has to treat as exceptions or failures. A more fundamental challenge comes from postcolonial critics who note that Watt's English-centered account produces "the novel" as a specifically Western form and positions non-Western narrative traditions as precursors, influences, or deviations rather than as independent and commensurate literary achievements.

Bakhtin's account of the novel as inherently dialogic — constituted by a plurality of voices and social languages that cannot be reduced to a single authoritative voice — provides a different theoretical foundation. Where Watt grounds the novel in individualism, Bakhtin grounds it in the irreducible social plurality of language itself. On Bakhtin's account, the novel does not merely represent social plurality; its formal structure — multiple voices, free indirect discourse, irony — enacts it. This makes the novel intrinsically resistant to authoritarian univocality: unlike epic or lyric, which aspire to single-voiced authority, the novel's formal logic requires the coexistence of competing perspectives. The political implications are significant: a form whose structural logic resists authoritarian closure is both a product and a vehicle of a certain kind of democratic or pluralist social imagination.

Questions · Passage 05
17
Watt's concept of "formal realism" distinguishes the novel from earlier narrative forms on the basis of particularity. What does this mean, and what does it imply about the novel's epistemological commitments?
CORRECT: B The passage defines formal realism as the mode "in which the literary work's truth-claim depends on its capacity to render the particular as particular rather than as an instance of a universal type," and identifies it with individual characters having specific names, particular settings, and determinate historical moments. The epistemological implication the passage draws is that "the experience of individual subjects had become the primary locus of truth." B captures both the definition and the implication. A says formal realism means prose over verse, which is not how the passage defines it. C says formal realism constrains novels to represent the world as it is, which conflates formal realism with a different and more restrictive use of the term. D emphasises inner life over external action, which is a related feature of the novel but not the specific definition the passage provides.
18
The postcolonial critique of Watt's account targets which specific consequence of his English-centred framework?
CORRECT: C The passage identifies the postcolonial critique precisely: Watt's account "produces 'the novel' as a specifically Western form and positions non-Western narrative traditions as precursors, influences, or deviations rather than as independent and commensurate literary achievements." C follows the passage's own language. A concerns exclusion of non-English novels, which is a different problem from positioning non-Western traditions hierarchically. B makes an empirical claim about the universality of individualism as a precondition, which is a related argument but not the specific consequence the passage identifies. D concerns historical incompleteness due to ignored influences, which is also a valid but different critique.
19
Bakhtin's account of the novel as "dialogic" grounds it in "the irreducible social plurality of language itself" rather than in individualism. What does this regrounding change about what the novel is claimed to do?
CORRECT: B The passage says that on Bakhtin's account, "the novel does not merely represent social plurality; its formal structure — multiple voices, free indirect discourse, irony — enacts it." This is the key shift: from representation to enactment. The novel's formal choices do not depict plurality from outside but constitute it from within the form. B captures this. A says the shift is from individual to collective experience, which does not capture the represent-versus-enact distinction. C captures the political implication of Bakhtin's account but does not identify what changes about what the novel is claimed to do. D says nothing changes, which misses the represent-versus-enact distinction that the passage explicitly draws.
20
The passage presents three theoretical accounts of the novel: Watt's sociological account, McKeon's dialectical complication, and Bakhtin's dialogic account. What do McKeon's and Bakhtin's accounts share in their response to Watt?
CORRECT: D McKeon shows early novels were generically unstable — mixed, not purely committed to formal realism — complicating Watt's linear story. Bakhtin shows the novel's form is constitutively dialogic — plural, not reducible to a single voice. Both resist Watt's univocal account: McKeon historically and Bakhtin structurally. D captures this shared resistance to Watt's univocality and linearity. A says both reject sociological methodology entirely, which is wrong: McKeon is also a sociological-historical account. B says both accept formal realism while extending it to non-Western traditions, which mischaracterises both accounts. C says both complicate the novel's relationship to individualism, which is partially right for McKeon but mischaracterises Bakhtin, whose account is about linguistic plurality not primarily about individualism.
Passage 5 Score
/4
Literature · Total Score
/8
Category 10
Anthropology
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Mauss, Derrida & Bourdieu on the Gift: Exchange, Impossibility & Misrecognition
Read the Passage

Mauss's analysis of the gift in archaic societies demonstrated that gift-giving, however voluntary in appearance, is embedded in a structure of triple obligation: the obligation to give, to receive, and to reciprocate. This structure is not merely contractual; it is total — the gift simultaneously expresses and constitutes social, moral, religious, and economic relations. The hau of Maori exchange — the spirit of the thing given, said to compel its return — exemplifies how gift systems invest objects with a quasi-personal power that enforces reciprocity through normative rather than legal mechanisms. For Mauss, the archaic gift is not the opposite of market exchange but its predecessor and alternative: it accomplishes the same work of social integration through different symbolic machinery.

Derrida's reading of Mauss argued that Mauss's own analysis destroys its ostensible object. If a gift obligates a return, it is not a gift but a loan — an exchange with temporal deferral. A genuine gift, on Derrida's account, requires not only the absence of counter-gift but the absence of recognition: to recognise a gift as a gift is to initiate the symbolic economy of gratitude and reciprocity that converts it into exchange. The true gift must be unrecognised, even by the giver, which makes it structurally impossible within any social relation in which acts are perceived, named, and responded to. Derrida does not conclude that gifts never occur, but that the concept of the pure gift is an aporia — a constitutive impossibility that haunts every exchange without being achievable within it.

Bourdieu's response to both Mauss and Derrida insists on the social productivity of what he calls the "collective misrecognition" of exchange as gift. The gift works precisely by not being called a gift — by the temporal interval between gift and counter-gift, and by the shared practical investment of both parties in the fiction that the exchange is spontaneous and disinterested. This misrecognition is not a false consciousness to be corrected but a constitutive feature of the social practice: expose it, and the gift disappears; the act of naming the underlying exchange logic dissolves the social relation the gift was sustaining. Bourdieu's contribution is to locate the gift not in the object or the intention but in the practical logic of a social field — a logic that requires systematic misrecognition to function.

Questions · Passage 01
1
Derrida argues that a genuine gift requires the absence of recognition — even by the giver — because recognition initiates the symbolic economy of gratitude that converts the gift into exchange. Which of the following, if true, most seriously weakens this argument?
CORRECT: C Derrida requires gifts to be unrecognised — even by the giver. Option C attacks this directly: intentional giving is constitutively self-aware; you cannot give without knowing you are giving. If that's correct, then "unrecognised by the giver" describes not a pure gift but a logical impossibility — the aporia is not just practically impossible but conceptually incoherent. This is the sharpest weakener because it targets the aporia at its definitional core. A shows even anonymous giving triggers warm-glow — but this is about the giver's internal state, not about recognition per se; warm-glow doesn't establish that the symbolic economy of reciprocity is activated. B points to diffuse obligation structures in Mauss — this complicates the bilateral framework Derrida assumes but doesn't directly challenge his recognition requirement. D uses Bourdieu against Derrida — but Bourdieu's misrecognition requires that the exchange logic be suppressed, which is different from showing recognition doesn't trigger reciprocity.
2
Bourdieu argues that exposing the underlying exchange logic of a gift "dissolves the social relation the gift was sustaining." Which of the following can be most reliably inferred from this claim?
CORRECT: D Bourdieu's claim is that the gift works by not being named exchange — misrecognition is constitutive of the social practice's function. A reliable inference is that the integrative value is produced by the collective practice of treating exchange as gift: the value lies not in what the gift objectively is but in how it is socially constructed. A makes a stronger claim about the anthropologist specifically — but Bourdieu is describing participants naming the exchange logic, not analysts; moreover, academic analysis circulating back to participants is a further assumption. B identifies the reflexivity problem — interesting and connected, but it's an implication about social science's effects, not a direct inference from Bourdieu's specific claim about naming dissolving the relation. C concludes fragility and inevitable temporariness in literate societies — this goes considerably beyond what Bourdieu says; he identifies a condition for the gift's function, not a prediction about its durability.
3
Bourdieu holds that the gift requires collective misrecognition to function, and that exposing the exchange logic destroys the social relation. This generates a paradox for Bourdieu's own theoretical project. What is that paradox?
CORRECT: B The paradox is precise: Bourdieu's sociology aims to unmask — to reveal the exchange logic beneath the gift's surface. But Bourdieu also claims that naming the exchange logic dissolves the social relation. Therefore, successful sociological unmasking is socially destructive. The project of sociology — revealing what is hidden — is in tension with the social value of what is hidden. This is a genuine paradox internal to Bourdieu's project. A attributes a reflexive self-contradiction about symbolic violence — a classic critique of Bourdieu generally, but it isn't the specific paradox generated by the misrecognition/dissolution claim the question asks about. C claims Bourdieu falls into his own aporia — this is partially true, but the specific form identified in B (sociological success = social destruction) is more precise. D makes a materialist/idealist distinction that is a standard philosophical dispute but isn't the paradox generated by the specific claim that naming dissolves the relation.
4
Mauss argues that the hau — the spirit of the thing given — compels the return of the gift through normative rather than legal mechanisms. For this claim to constitute a theoretical contribution beyond merely describing a belief held by Maori people, which of the following must be assumed?
CORRECT: B Mauss uses the hau as more than a description of a Maori belief — he uses it as evidence for a general theory of how gift systems work. For this to be a theoretical contribution (not just ethnography), the claim must be that the hau articulates something structurally general: that gift obligation operates through normative mechanisms in exchange systems broadly. Without this generalisability assumption, citing the hau is only reportage of a local belief. A requires the hau to be literally true — Mauss doesn't need this; he can use the hau as a cultural expression of a real structural mechanism without endorsing animism. C makes normative superiority over legal mechanisms a condition — Mauss doesn't claim normative mechanisms are more effective, just different. D raises the reliability of indigenous testimony — a real methodological issue, but Mauss's theoretical contribution doesn't require the hau to be a fully accurate self-account; it requires the obligation structure to be real and generalisable.
Passage 1 Score
/4

P 02
Kinship Theory: Descent vs Alliance, Needham's Demolition & Schneider's Deeper Challenge
Read the Passage

The mid-twentieth-century debate between descent theory (Radcliffe-Brown, Fortes) and alliance theory (Lévi-Strauss) represents one of the most productive theoretical confrontations in social anthropology. Descent theorists explained social structure through the logic of group formation: unilineal descent rules — patrilineal or matrilineal — recruit individuals into bounded corporate groups (lineages, clans) that hold collective rights over land, labour, and ritual. Social structure, on this account, is a system of corporate groups differentiated by their recruitment principles. Alliance theorists, following Lévi-Strauss's Elementary Structures of Kinship, argued that the fundamental dynamic of kinship systems is not group formation but exchange: the incest taboo's positive face is the injunction to give women to other groups and receive women in return, creating alliances that integrate otherwise autonomous social units. Kinship structure is, on this account, a system of exchange relations — not a taxonomy of groups but a grammar of circulation.

Rodney Needham's critique of alliance theory was both technically precise and theoretically devastating. Lévi-Strauss's universal claims about elementary structures depended, Needham argued, on a systematic conflation of prescriptive marriage systems (where rules positively specify who one must marry — typically a cross-cousin of a specified kind) with preferential marriage systems (where certain categories are merely statistically preferred or normatively favoured). The logical inference from "people prefer to marry cross-cousins" to "people are obligated to marry cross-cousins" does not hold, and the formal elegance of elementary structure theory was purchased at the cost of empirical accuracy. Many societies Lévi-Strauss claimed as instances of prescribed exchange turned out, on closer examination, to exhibit only preference — a finding that dramatically narrowed the theory's empirical scope.

David Schneider's challenge was more fundamental than Needham's and cut under both frameworks simultaneously. Schneider argued that "kinship" is not a universal human domain with cross-culturally stable content but a historically specific Western analytical category — a folk model of biological relatedness and marriage that anthropologists had projected onto radically diverse local practices without justification. Neither descent nor alliance theory could be valid universal accounts because neither had justified its assumption that the phenomena it studied — biological relatedness, marriage, group membership — constitute a natural kind cross-culturally. Schneider's critique did not destroy kinship studies but transformed them: subsequent research shifted from identifying structural universals to studying how specific communities construct, contest, and perform what counts as relatedness — a move that opened kinship theory to gender studies, postcolonial critique, and the anthropology of assisted reproduction.

Questions · Passage 02
5
Schneider argues that "kinship" is a Western analytical category projected onto radically diverse local practices without justification. Which of the following, if true, most strengthens this argument?
CORRECT: A Schneider's claim is that biological relatedness and marriage don't constitute a natural kind cross-culturally — they are not universal organising principles of what anthropologists call "kinship." Option A directly supports this: in several societies, the practices anthropologists study as kinship (care, obligation, structuring of daily life) are organised around non-biological principles. This shows that the Western biological-relatedness framework fails to capture the actual organising principles of social life in those societies. B confirms Needham's narrower critique of alliance theory — relevant but this is about the empirical scope of one theory, not Schneider's deeper challenge about the category itself. C actually weakens Schneider: if descent groups appear cross-culturally, the category may have real cross-cultural content after all. D shows cross-linguistic convergence in kinship vocabulary — this supports universalist claims against Schneider, weakening rather than strengthening his argument.
6
The passage says Schneider's critique "did not destroy kinship studies but transformed them." Which of the following can be most reliably inferred from this transformation?
CORRECT: B The passage says the transformation shifted from identifying structural universals to studying how specific communities construct and contest what counts as relatedness. This is precisely B: relatedness remains the subject of study, but the analytical categories — what counts as kinship — are now themselves treated as ethnographic data rather than as the universal framework assumed in advance. A says theories were "empirically false" — Schneider's critique is more radical than an empirical falsification; it challenges the validity of the category, not just the accuracy of specific claims. C says kinship should be dissolved into other frameworks — the passage says studies were transformed, not dissolved; kinship remains a field. D generalises to "all anthropological categories are Western projections" — too sweeping; Schneider targeted kinship specifically.
7
The passage describes Needham's critique as "technically precise and theoretically devastating" but Schneider's as "more fundamental." What is the most plausible reason the author makes this comparative evaluation?
CORRECT: B The author's structural logic is explicit: Needham attacked the empirical accuracy of alliance theory (a specific claim about prescriptive structures), while Schneider attacked the meta-assumption shared by both descent and alliance theory — that "kinship" is a cross-culturally valid category. A critique that targets a presupposition common to competing frameworks is more fundamental because it challenges what both theories take for granted. A attributes personal endorsement — unwarranted; the author's evaluation is structural, not autobiographical. C invents a disciplinary hierarchy between theoretical and empirical critiques — the passage doesn't appeal to this. D says Needham had more immediate impact — but the author's basis for "more fundamental" is analytical depth, not historical impact.
8
A defender of Lévi-Strauss might respond to Needham's critique as follows: "The prescriptive/preferential distinction is a Western analytic imposition — in practice, strongly preferred marriage categories function as prescriptions, so the distinction is a philosophical nicety without ethnographic significance." Which of the following best identifies the logical problem with this response?
CORRECT: B Needham's prescriptive/preferential distinction is not about compliance frequency — it is a normative distinction: prescriptions are rules whose violation constitutes a social transgression, while preferences are norms whose non-fulfilment is merely sub-optimal. "Strongly preferred" and "obligatory" are categorially different even if empirically correlated. The response conflates the two by treating high compliance as equivalent to normative obligation. C identifies question-begging — also a real flaw — but it's less precise than B. The response does more than assert the contested point; it offers a quasi-empirical rationale (strong preference ≈ prescription in practice) that has a specific logical structure worth identifying. A identifies a tu quoque — clever but the passage never says Needham charged Lévi-Strauss with Western imposition specifically; and more importantly, even if both frameworks impose Western categories, this wouldn't rescue Lévi-Strauss from Needham's specific prescriptive/preferential point. D correctly notes the Schneiderian critique cuts both ways — but the response doesn't actually invoke Schneider; D attributes an argument to the defender that isn't in the response as stated.
Passage 2 Score
/4

P 03
Geertz's Thick Description, Interpretivism & the Objectivity Problem in Ethnography
Passage Timer
10:00
Read the Passage

Clifford Geertz's programme of interpretive anthropology proposed that culture should be understood as a text to be read rather than a mechanism to be explained. Thick description — the practice of recording not just what people do but the layered web of significance within which their actions are embedded — was Geertz's methodological response to the inadequacy of behaviourist accounts that captured the form of action without its meaning. His famous example: a twitch and a wink are physically identical contractions of the eyelid, but they belong to entirely different semiotic registers. Recording only the physical movement produces thin description; reading the wink as a conspiratorial signal within a specific cultural context produces thick description. The ethnographer's task is to interpret second-order interpretations: the native's own construal of their practice.

The interpretive programme faced a challenge from the reflexive turn in anthropology in the 1980s, most forcefully articulated in the volume Writing Culture edited by Clifford and Marcus. If ethnographic texts are interpretations rather than reports, and if interpretation is shaped by the ethnographer's own cultural position, gender, historical moment, and theoretical commitments, then ethnographic authority — the claim to represent what a culture is actually like — becomes deeply problematic. The ethnographic text is revealed as a literary artifact produced by a positioned author rather than an objective record of a social reality. Geertz's response was that this critique, while important, proves too much: if the constructedness of ethnographic representation invalidates the enterprise, then the same constructedness invalidates the critique itself, since the critique is also a text produced by positioned authors.

The practical resolution has been a shift toward collaborative and multi-vocal ethnography: including multiple indigenous perspectives, acknowledging the ethnographer's positioning explicitly, and in some cases co-authoring texts with informants. Critics of this resolution argue that it addresses the politics of representation without resolving its epistemological problem: a more inclusive text is not necessarily a more accurate one, and the selection of which voices to include remains an authorially controlled decision. The epistemological challenge is not just about whose perspective is represented but about whether layered, multi-vocal textual representation can escape the fundamental problem that cultural understanding is always produced from somewhere by someone.

Questions · Passage 03
9
Geertz's twitch-wink example is designed to illustrate which specific inadequacy of behaviourist accounts of culture?
CORRECT: C The twitch-wink example shows that two physically identical movements belong to entirely different semiotic registers. The inadequacy is specifically that behavioural description captures form but not meaning: recording the physical contraction tells you nothing about whether it is a twitch, a wink, a parody of a wink, or a rehearsal of a wink. C states this inadequacy precisely. A concerns cross-cultural variation, which is a related issue but not what the twitch-wink example illustrates. B concerns ethnocentrism, which is a valid critique of observation-based methods but not the inadequacy the example demonstrates. D concerns units of analysis and narrative sequences, which is a different methodological objection unrelated to the specific twitch-wink point.
10
Geertz's response to the reflexive critique — that the critique "proves too much" — is a self-referential argument. What is its logical structure, and what does it fail to address?
CORRECT: B Geertz's argument is a tu quoque: it turns the critique's standard back on itself. The structure is: you say positioned authorship undermines authority, but your critique is also a positioned text, so by your own standard your critique is undermined. The crucial gap is that this symmetry does not establish that ethnographic authority is unproblematic — it establishes only that the critique faces the same problem. If both ethnography and the critique are positioned, the conclusion that ethnographic authority is problematic may still stand even if the critique cannot claim to have established it with certainty. B identifies both the structure and the gap. A describes a reductio and identifies a relevant gap, but mischaracterises the logical structure — the argument does not create a performative contradiction. C focuses on implicit objective claims, which is a different version of the same problem. D concerns philosophical versus empirical claims, which is related but less precise than B about the specific asymmetry between negative and positive claims.
11
The passage argues that collaborative multi-vocal ethnography addresses "the politics of representation without resolving its epistemological problem." What is the distinction being drawn?
CORRECT: D The passage draws this distinction explicitly in its final sentences: "a more inclusive text is not necessarily a more accurate one, and the selection of which voices to include remains an authorially controlled decision." The politics of representation is addressed by inclusion; the epistemological problem — that cultural understanding is always produced from somewhere by someone — persists because the inclusive text is still arranged, selected, and framed by an author. D captures both sides of this distinction accurately. A concerns truthfulness versus inclusion, which is a related but different distinction. B concerns institutional gatekeeping, which shifts to an institutional level not in the passage. C describes the epistemological problem as timeless and independent of context, which overstates — the passage frames it as a structural feature of all representation, not as context-independent.
12
Geertz describes the ethnographer's task as interpreting "second-order interpretations." Why does this characterisation matter for the status of ethnographic knowledge?
CORRECT: B If the ethnographer is interpreting native interpretations, then the gap between social reality and ethnographic account is not merely a product of the ethnographer's limitations or biases — it is built into the structure of the enterprise. There is no uninterpreted social reality available to the ethnographer; what they work with is already culturally processed. This makes the interpretive character of ethnographic knowledge structural rather than incidental. B captures this implication. A says second-order interpretation makes ethnography less reliable, which misreads Geertz's point: he is not apologising for the mediation but characterising it as constitutive of cultural understanding. C says the account is collaborative, which is an implication of dependence on informants but not the implication for the status of knowledge that "second-order" specifically generates. D justifies narrative methods, which is a methodological consequence but not the epistemological implication the question asks about.
Passage 3 Score
/4

P 04
Ritual, Liminality & Turner's Social Drama
Passage Timer
10:00
Read the Passage

Arnold van Gennep's analysis of rites of passage identified a tripartite structure common across diverse transition rituals: separation from an existing social category, a liminal phase of ambiguity and transformation, and reincorporation into a new social position. The liminal phase — from the Latin limen, threshold — is characterised by the suspension of ordinary social distinctions and the exposure of the initiate to symbolic inversion, danger, and instruction. Victor Turner extended this analysis in his fieldwork among the Ndembu of Zambia, developing liminality from a moment within the ritual process into a broader concept for understanding how societies handle the dangers of social ambiguity and the creative potential of states that resist categorical classification.

Turner's concept of communitas — the undifferentiated, egalitarian mode of social bonding that emerges in liminal contexts — stands in productive tension with social structure. Structure, for Turner, is the system of roles, statuses, and hierarchies that organises everyday social life; communitas is the experience of unmediated, person-to-person encounter that temporarily dissolves those distinctions. The dialectic between them is not merely descriptive but dynamic: excessive structure produces the kind of rigidity that generates ritual protest; the experience of communitas provides the motivational renewal that flows back into and revitalises structure. Societies require both: structure without communitas becomes oppressive; communitas without structure cannot sustain itself, since undifferentiated togetherness cannot organise the complex tasks that social reproduction requires.

Turner subsequently extended his analysis to what he called "social dramas" — sequences of public breach, crisis, redressive action, and either reintegration or schism that he identified across contexts as different as Ndembu village conflict, English historical disputes, and Mexican political events. The social drama model was intended as a cross-cultural processual framework that captured how conflicts unfold over time rather than analysing societies as static structural systems. Critics have noted two difficulties. First, the four-phase model is flexible enough to fit almost any conflict narrative, raising questions about its falsifiability. Second, the model centres conflict and drama while marginalising the long stretches of routine maintenance that constitute most of social life, producing an account of society that overprivileges transformation and change.

Questions · Passage 04
13
Turner's dialectic between structure and communitas holds that societies require both and that neither is sustainable alone. Which of the following real-world patterns most directly illustrates the claim that "communitas without structure cannot sustain itself"?
CORRECT: A Turner's specific claim is that communitas cannot organise the complex tasks social reproduction requires — not that it inevitably generates conflict, but that it cannot maintain itself as an ongoing mode of social organisation. Option A shows this most directly: revolutionary movements that achieve communitas are immediately forced by practical requirements of governance and production to re-establish structural differentiation. The communitas of solidarity dissolves into structure because structure is necessary for social reproduction. B shows communities fragmenting over time, which illustrates the instability but through conflict rather than through the practical necessity Turner identifies. C shows festival communitas is time-limited, which is consistent with Turner's view but explains the limitation by bounding rather than by the incapacity of communitas to organise social reproduction. D concerns universal status differentiation, which is a sociological claim but not specifically about why communitas cannot sustain itself.
14
The first critique of the social drama model — that its four-phase structure is "flexible enough to fit almost any conflict narrative" — raises which specific methodological concern?
CORRECT: B The passage specifically says the model's flexibility raises "questions about its falsifiability." Falsifiability is the issue: a model that can accommodate any conflict narrative cannot be tested against conflict narratives, since any apparent exception can be redescribed to fit. The concern is not just abstraction (A) but the specific epistemological problem that no evidence could count against the model. B states this precisely. A concerns predictive power rather than falsifiability: a model can be abstract without being unfalsifiable if it has determinate implications that some evidence would violate. C concerns over-generalisation across cultural contexts, which is a valid critique but not the falsifiability concern the passage identifies. D concerns narrative construction versus social reality, which is a different epistemological issue about the relationship between models and their data.
15
Turner's extension of liminality from ritual process to a general concept for understanding social ambiguity represents what kind of theoretical move?
CORRECT: C The passage describes Turner moving from liminality as "a moment in the ritual process" to "a broader concept for understanding how societies handle the dangers of social ambiguity." This is a conceptual extension: the term retains its structural meaning but is applied beyond its original domain. The trade-off the passage implies — greater scope at some analytical cost — is characteristic of conceptual extension. C captures this. A describes an empirical generalisation based on cross-cultural evidence, but the passage describes the move as developing a concept rather than accumulating evidence. B describes a reduction, but Turner is not reducing diverse phenomena to a single mechanism in the way reduction implies. D describes theoretical synthesis between sources, but the passage characterises the move as an extension of van Gennep's concept rather than a synthesis producing something categorically new.
16
The second critique — that the social drama model "overprivileges transformation and change" — implies which limitation in Turner's framework as a theory of society?
CORRECT: B The passage says the model "centres conflict and drama while marginalising the long stretches of routine maintenance that constitute most of social life." The limitation is that by making drama its analytical centre, the framework renders routine maintenance analytically invisible — it appears as background rather than as social phenomena worth studying. B captures this distortion precisely. A concerns the explanation of stability, which is a related implication but frames the problem as a gap in explanatory scope rather than a distortion of the analytical image. C says the model is better suited to modern societies, which is an application claim not implied by the critique in the passage. D attributes ideological motivation to the processual approach, which goes beyond what the critique states.
Passage 4 Score
/4

P 05
Race, the Social Construction Debate & Biological Essentialism
Passage Timer
10:00
Read the Passage

The consensus position in contemporary anthropology and genetics is that race, as popularly understood, is a social construction rather than a biological reality. The human species shows relatively low genetic diversity compared to other great apes — a consequence of our recent African origin and subsequent bottleneck migrations — and that diversity is distributed as a continuous cline rather than clustering into discrete biological populations that correspond to folk racial categories. The genetic variation between individuals classified within the same folk race is typically greater than the average variation between individuals from different folk races. The categories "Black," "white," and "Asian" do not carve nature at its joints.

This consensus position is frequently misunderstood in public discourse and sometimes by scientists themselves. The social construction claim does not entail that race has no real effects: races that do not exist as biological entities can still be real as social categories that structure experience, allocate resources, and produce health disparities through mechanisms of discrimination and differential exposure to environmental stressors. Nor does the claim entail that all human populations are genetically identical or that ancestral geographic origin has no medical relevance: pharmacogenomic research documents that some genetic variants relevant to drug metabolism are distributed differently across populations with different ancestral origins. The claim is specifically that the discrete categorical boundaries of folk racial classification do not correspond to the continuous, clinal distribution of human genetic variation.

The concept of population in genetics is a legitimate scientific category that is sometimes conflated with folk race, generating confusion in both directions. Populations defined by ancestral geographic origin and patterns of shared genetic ancestry are real objects of scientific study; they do not, however, map cleanly onto folk racial categories, which were constructed through processes of colonial classification, legal definition, and social practice rather than derived from population genetic analysis. The confusion between population and race has been exploited in both directions: by those who use legitimate population genetics findings to support essentialist racial claims, and by those who argue that because population categories are scientifically legitimate, racial categories must therefore also have biological grounding.

Questions · Passage 05
17
The passage states that human genetic diversity is distributed as "a continuous cline rather than clustering into discrete biological populations." What does this specific feature of genetic distribution imply for folk racial classification?
CORRECT: C The cline argument is specifically about the mismatch between folk racial categories and genetic distribution: folk races impose discrete boundaries on a continuous gradient. C states this implication precisely. A overstates by condemning all biological group classification, but the passage explicitly defends population genetics categories as legitimate. B claims health disparities are entirely social, but the passage acknowledges that ancestral origin has some medical relevance. D suggests genetic testing could improve racial classification, but the passage's argument is that genetic variation is continuous and does not cluster into racial categories — more precise measurement of a continuous distribution would not produce discrete racial categories.
18
The passage argues that the social construction of race does not entail that race has no real effects. What type of argument is this, and why is it necessary?
CORRECT: B The passage says the social construction claim does not entail that race has no real effects, providing a rejection of a specific false inference. The reason this clarification is necessary is that the inference "race is constructed, therefore racial inequalities are not real or do not need explaining" is commonly made and would neutralise the social construction claim politically by making it seem to deny the reality of racism. B identifies both the logical structure (rejecting a false inference) and why it is necessary (the inference is made and has political consequences). A concerns ontology versus causation, which is a related but less precise characterisation. C says it is empirical supplementation, but the passage is clarifying what the theoretical claim does not entail, not adding evidence. D says it concedes to biological essentialism, which misreads the move entirely.
19
The passage identifies a confusion between population genetics categories and folk racial categories that is "exploited in both directions." What does this observation imply about the relationship between scientific findings and political arguments about race?
CORRECT: D The passage's observation is that legitimate population genetics findings are used by both essentialists (to claim biological grounding for racial categories) and by some constructionists (reasoning from population legitimacy to racial legitimacy). The common factor in both errors is the conflation of population with race. D correctly identifies that the same finding produces opposite conclusions when the conceptual distinction is not maintained, and that the remedy is scientific literacy about the distinction. A says the findings are neutral and exploitation is misuse, which is too optimistic about the possibility of purely neutral scientific communication. B says all race science is political, which overstates. C says both positions are equally exploitable and equally motivated, which does not follow from the observation — the construction position may have stronger scientific grounding despite being subject to misuse.
20
The passage argues that population categories are real scientific objects while folk racial categories are not. Which of the following, if true, would most directly challenge the distinction the passage draws between them?
CORRECT: B The passage's distinction rests on population categories being real objects in genetic data while folk racial categories are not. Option B directly challenges whether population categories are real objects or artefacts of analytical choices: if the number of clusters depends on the parameters the researcher sets rather than on natural discontinuities in the data, then population categories are not found in nature but constructed by methodology — collapsing the distinction. A shows that clusters correspond to folk categories, which challenges the independence of the two but suggests both are real rather than undermining population categories. C shows that folk categories predict ancestry, which supports some genetic grounding for folk categories but does not challenge whether population categories are real natural objects. D shows colonial categories have partial biological grounding, which suggests folk races are not purely social but does not challenge population categories as the more scientifically legitimate concept.
Passage 5 Score
/4
Anthropology · Total Score
/20
Category 11
Living World
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Epigenetics, the Weismann Barrier & the Neo-Lamarckian Question
Passage Timer
10:00
Read the Passage

The Weismann barrier — the principle that information cannot flow from somatic cells back to the germline — was for a century treated as one of the foundational commitments of Darwinian genetics, foreclosing any mechanism of Lamarckian inheritance. The barrier grounds the argument that acquired characteristics cannot be inherited: changes to an organism's somatic cells during its lifetime cannot be written back into the germ cells (sperm and egg) that transmit genetic information to the next generation. Epigenetics has complicated this picture without overthrowing it. Epigenetic marks — patterns of DNA methylation, histone modification, and small RNA profiles — can be transmitted across generations under certain conditions, and the methylation reprogramming events that are supposed to erase parental marks during gametogenesis appear to be incomplete. Evidence for transgenerational epigenetic inheritance (TEI) is robust in plants and nematodes; in mammals, including humans, the evidence is more limited and contested.

The conceptual significance of TEI is disputed at two levels. First, whether it constitutes a genuinely Lamarckian mechanism. For TEI to be Lamarckian in the relevant sense, the transmitted marks must be adaptive and directional — environmentally appropriate responses to challenge, not merely stochastic persistence of environmentally induced noise. Most documented TEI appears to be the latter: marks persist by chance rather than as targeted adaptations, and the few cases claiming directional adaptive transmission have faced serious replication challenges. Second, whether TEI — even if it does occur — requires revision of evolutionary theory. Epigenetic variants are less stable than DNA sequence variants, more readily reversible, and in most documented cases transmitted for only two or three generations before being reset. The Extended Evolutionary Synthesis community argues that TEI expands the repertoire of hereditary mechanisms in ways that require theoretical revision; Modern Synthesis defenders argue that the phenomena are real but fit within the existing framework without demanding new theory.

What both debates share is a tendency to treat Lamarckism as a binary: either TEI is Lamarckian or it is not. A more productive framing might ask what specific mechanistic features of classical Lamarckism — directed variation, use-inheritance, inheritance of acquired characteristics — are present in TEI, to what degree, and under what conditions. TEI may instantiate some Lamarckian features partially and others not at all, in which case the question "Is TEI Lamarckian?" dissolves in favour of the more tractable question: "Which aspects of Lamarckian inheritance does TEI approximate, and how closely?"

Questions · Passage 01
1
The passage argues that most documented TEI is stochastic rather than adaptive — marks persist by chance rather than as targeted responses to environmental challenge. Which of the following, if true, most seriously weakens the claim that this distinction settles the question of whether TEI is Lamarckian?
CORRECT: A The passage argues that stochastic TEI is not Lamarckian because Lamarckism requires directed adaptive transmission. Option A introduces a two-step mechanism — stochastic initial variation followed by selection — that produces directionally adaptive inheritance over generations without requiring the initial marks to be targeted. This shows that the directed/stochastic distinction doesn't cleanly settle the Lamarckian question: stochastic variation can generate adaptive inheritance via selection. B challenges the historical accuracy of the directed/stochastic criterion by appealing to what Lamarck actually claimed — but this is a historical point about Lamarck's own intentions, not about whether the mechanistic distinction is scientifically relevant; and the passage is asking about Lamarckism in the contemporary biological sense. C provides evidence that some TEI is non-stochastic — this would strengthen the argument that TEI can be Lamarckian, not weaken the claim that stochasticity settles the question. D is the same historical-anachronism argument as B — also a legitimate point, but it doesn't address the scientific criterion's validity.
2
The passage proposes replacing the binary question "Is TEI Lamarckian?" with the question "Which aspects of Lamarckian inheritance does TEI approximate, and how closely?" Which of the following can be most reliably inferred from this proposed reframing?
CORRECT: B The reframing treats Lamarckian inheritance not as a binary property but as a cluster of distinct mechanistic features — directed variation, use-inheritance, intergenerational transmission of acquired characteristics — each of which TEI may or may not exhibit to varying degrees. The inference is that the binary yes/no framing was generating a false dichotomy, and decomposing Lamarckism into dimensions allows more precise empirical and theoretical analysis. A says the reframing concedes TEI is not Lamarckian and retreats to historical significance — but the passage makes no such concession; it proposes the reframing precisely to avoid pre-empting the empirical answer. C attributes to the author an implicit positive evaluation of TEI's closeness to Lamarckism — the passage doesn't do this; the reframing is epistemically neutral about the answer. D says the EES position is implicitly endorsed — but the reframing is a methodological proposal, not a substantive claim about how closely TEI approximates Lamarckism; it doesn't pre-commit to the EES conclusion.
3
The passage states that epigenetics has complicated the Weismann-barrier picture "without overthrowing it." This formulation presents a tension: the barrier was supposed to be a foundational commitment — a categorical principle — yet it now appears to admit of partial exceptions. What is paradoxical about treating a foundational barrier as admitting degrees?
CORRECT: B The paradox is functional: the Weismann barrier's role in evolutionary biology was to categorically exclude Lamarckian mechanisms. If TEI demonstrates even rare, partial exception — incomplete reprogramming that allows some somatic information to reach the germline — the barrier can no longer perform its categorical exclusionary function, even if the exceptions are uncommon. A categorical prohibition with exceptions is no longer categorical. A says foundational principles must be non-negotiable — this is too strong and is false about science generally; foundational commitments are revised regularly. C claims the formulation is unfalsifiable — a real epistemological concern, but this is about the sociology of science rather than the specific paradox of a categorical barrier admitting degrees. D attributes conservatism to the community — possible but not the structural paradox of degree-admitting categorical exclusion.
4
The Modern Synthesis defenders argue that TEI "fits within the existing framework without demanding new theory." For this claim to be credible, which of the following must be assumed?
CORRECT: B For TEI to "fit without demanding new theory," the Modern Synthesis framework must be defined by commitments general enough to absorb TEI as a new mechanism. If the framework is defined by its core principles — variation, inheritance, selection — and TEI provides a new form of inheritance without changing those principles, then the framework absorbs TEI rather than being revised by it. This is what the Modern Synthesis defenders need to assume. A requires the framework to have anticipated TEI explicitly — this is too strong; accommodation doesn't require prior anticipation, only compatibility. C makes a quantitative argument about evolutionary significance — the Modern Synthesis claim is about theoretical fit, not about whether TEI is evolutionarily important enough to warrant new theory. D says the claim is semantic unless the description/revision distinction is precise — but the Modern Synthesis argument doesn't require this precision to be credible; it simply claims the core commitments are unchanged.
Passage 1 Score
/4

P 02
The Holobiont Concept, Microbiome Heritability & the Unit of Selection Problem
Passage Timer
10:00
Read the Passage

The explosion of microbiome research has forced a reconceptualisation of the individual organism as the unit of biological analysis. The human body harbours microbial communities — bacteria, archaea, fungi, viruses — whose collective gene count vastly exceeds the human genome's, and whose metabolic and immunological contributions are foundational rather than peripheral. Richard Dawkins' extended phenotype concept — that genes can have phenotypic effects extending beyond the body of their bearer — provides a partial framework: the microbiome is a distributed genetic system whose genes extend their effects through host physiology. But the relationship is not merely one of genes extending their reach; it is co-evolutionary. Host and microbiome have co-evolved in ways that partially align their fitness interests, making the standard parasite/commensal/mutualist classification inadequate: the relationship is dynamic, context-dependent, and developmental-stage-sensitive.

The holobiont concept — proposing the host and its associated microbiota as a single unit of selection — has attracted both enthusiasm and serious scepticism. Holobiont enthusiasts argue that the microbiome's contributions to host fitness are sufficiently consistent, and its vertical transmission (from mother to offspring at birth and through breastfeeding) sufficiently heritable, to warrant treating the holobiont as the relevant unit. Critics counter on three grounds: first, microbiome composition is highly variable across environments, developmental stages, and individual hosts — the heritability is too low and too variable to meet the conditions for natural selection to act on the holobiont as a coherent unit; second, the microbiome is largely acquired horizontally (from the environment) rather than vertically, weakening the parent-offspring fidelity that unit-of-selection arguments require; third, treating the holobiont as a unit of selection risks committing the group selection fallacy in a new guise — positing selection on a collective when gene-level selection may provide a fully adequate explanation.

The debate remains unresolved but has been productive in forcing a more precise analysis of what "unit of selection" means. Levins and Lewontin argued decades ago that the question of units of selection is not a single question but several: who replicates, who varies, who is selected, and at what level does adaptation occur? Applying these distinctions to the holobiont question reveals that different questions may have different answers — the microbiome replicates at the microbial level, varies at both microbial and holobiont levels, and the fitness effects may need to be analysed at multiple levels simultaneously. The holobiont controversy has thus enriched rather than resolved the units-of-selection debate by demonstrating its multi-dimensionality.

Questions · Passage 02
5
Critics argue that microbiome heritability is too low and too variable for natural selection to act on the holobiont as a coherent unit. Which of the following, if true, most strengthens this critical argument?
CORRECT: A The critics' argument specifically targets heritability — vertical transmission fidelity from parent to offspring. Option A tests precisely this: identical twins (who share the same host genome) raised apart don't share more microbiome similarity than random strangers. This directly shows host genetic similarity doesn't reliably produce microbiome similarity — i.e., the microbiome isn't faithfully transmitted through the genetic channel, supporting the claim that heritability is too low for holobiont-level selection. B confirms fitness effects — this supports holobiont enthusiasts (the microbiome matters for fitness), not the critics. C shows intra-individual temporal variability — this is about change within one organism's lifetime, not parent-offspring heritability. D shows individual-specific genetic variation in microbiomes — consistent with low heritability but the argument is indirect; A more precisely targets the parent-offspring heritability the unit-of-selection argument requires.
6
The passage states that applying Levins and Lewontin's distinctions to the holobiont "reveals that different questions may have different answers." Which of the following can be most reliably inferred from this claim?
CORRECT: B The passage says "different questions may have different answers" — specifically, replication, variation, selection, and adaptation may give different answers about the appropriate unit. The inference is that "Is the holobiont a unit of selection?" was asked as if it were a single question with a single answer, when it is actually a compound question whose components may receive different responses. This is not unresolvability (A) — it is decomposability. C makes replication the decisive criterion and concludes critics win — but the passage explicitly says different questions may have different answers, which means replication doesn't settle all questions. D attributes an implicit endorsement — the passage says results "may need to be analysed at multiple levels simultaneously," which is neutral, not an endorsement of either side on any specific dimension.
7
The passage concludes that the holobiont controversy has "enriched rather than resolved the units-of-selection debate by demonstrating its multi-dimensionality." What is the rhetorical and argumentative function of this conclusion?
CORRECT: B The conclusion reframes the absence of resolution as a positive outcome: the controversy produced a more sophisticated understanding of the multi-dimensionality of the units-of-selection question. This is a standard rhetorical move in science writing — converting an unresolved dispute into a productive episode that advanced the field's conceptual toolkit. A misreads "enriched rather than resolved" as a concession of failure — enrichment and resolution are two different contributions; not resolving doesn't mean failing. C says EES is implicitly endorsed because the debate showed theory needs expansion — but the passage says the controversy demonstrated multi-dimensionality, not that current theory is inadequate; multi-dimensionality is compatible with the Modern Synthesis framework. D predicts the passage will continue with a resolution — the conclusion signals the passage is ending, not continuing.
8
A holobiont enthusiast might argue: "Since removing the microbiome dramatically reduces host fitness, and since fitness is what natural selection acts on, the microbiome must be part of the unit of selection." Which logical flaw most seriously affects this argument?
CORRECT: B The argument moves from "affects fitness" to "is part of the unit of selection." But natural selection requires not just fitness effects but heritable variation — and the argument establishes only that the microbiome affects fitness (i.e., it is part of the interactor), not that microbiome variation is heritably transmitted to offspring (i.e., that it is part of the replicator). The passage has already established that heritability is precisely what's in dispute. The interactor/replicator distinction — which Dawkins, Hull, and others have developed — is exactly the analytic tool that reveals the flaw. A correctly identifies missing heritability but frames it as affirming the consequent — the formal fallacy label doesn't quite fit the argument's logical structure. C makes a mereological claim — parts of a unit aren't units — but the holobiont argument is precisely claiming the holobiont (host + microbiome) is the unit, not that the microbiome alone is; so C misidentifies the argument's structure. D shows the argument proves too much — a real problem, but the most precise diagnosis is B's interactor/replicator distinction.
Passage 2 Score
/4

P 03
CRISPR, Off-Target Effects & the Ethics of Germline Editing
Passage Timer
10:00
Read the Passage

The CRISPR-Cas9 system — adapted from a bacterial immune mechanism and developed into a precision genome-editing tool — has transformed the landscape of biological research and generated urgent ethical debates about the conditions under which human germline editing could be permissible. The technology allows targeted cleavage of DNA at specific loci, enabling insertion, deletion, or substitution of sequences with a precision unavailable to earlier editing tools. Its apparent simplicity relative to predecessors like TALEN and ZFN has made it widely accessible, accelerating both the pace of discovery and the pace of ethical concern. That concern crystallised in 2018 when He Jiankui announced the birth of twin girls whose germline CCR5 gene had been edited to confer resistance to HIV — an announcement met with near-universal condemnation from the scientific community on grounds of premature clinical application, informed consent failures, and inadequate scientific justification.

The distinction between somatic and germline editing is central to bioethical analysis. Somatic editing — modifying non-reproductive cells in a living individual — affects only that individual; the modification is not heritable and raises ethical questions that are, in principle, continuous with those governing other experimental medical interventions. Germline editing — modifying embryos, sperm, or eggs — produces heritable changes that will be transmitted to all descendants of the edited individual, affecting individuals who have not and cannot consent to the modification. The asymmetry of consent is compounded by the asymmetry of uncertainty: off-target edits — unintended cuts or modifications at genomic sites other than the target — may have phenotypic consequences that do not manifest until developmental stages or life-phases far removed from the initial edit, compressing the detectable risk window while expanding the population bearing undetected risk.

The most sophisticated objections to germline editing are not, however, purely technical — they are not merely the claim that the technology is currently too imprecise to be safely deployed. They are also structural: even if off-target editing were eliminated entirely, germline modification would raise questions about the moral status of unborn and future persons as bearers of interests in their own genetic constitution, about the boundary between therapeutic and enhancement applications, and about the potential for genetic stratification between populations with access to editing and those without. The technology's therapeutic framing — germline editing to prevent severe monogenic disorders — commands wider assent than its enhancement framing; but critics of this distinction note that the line between preventing pathology and improving on the normal range is contested, historically dynamic, and susceptible to social pressure in ways that make it an unreliable boundary for regulatory purposes.

Questions · Passage 03
9
The passage states that off-target edits "compress the detectable risk window while expanding the population bearing undetected risk." Which of the following can be most reliably inferred from this formulation?
CORRECT: C The passage's formulation combines two asymmetries: the risk window is compressed (effects manifest late, hard to detect quickly) while the affected population expands (heritable edits spread across descendants). The most precise inference is that risk and detection are temporally misaligned — the population bearing risk grows faster than the observation window that would reveal the risk, making standard risk-benefit frameworks inadequate. C captures both the temporal and population dimensions of this misalignment. B captures the inadequacy of standard frameworks but focuses only on "heritable effects spreading before detected" — it misses the compressing/expanding asymmetry that makes the problem specifically worse than ordinary experimental risk. A makes a comparative harm claim (late-manifesting worse than immediate) that the passage doesn't support — it describes when effects appear, not their relative severity. D draws a regulatory conclusion (zero off-target rate required) that is a normative prescription, not an inference from the passage's descriptive claim.
10
Critics of germline editing argue that the distinction between therapeutic editing (preventing disease) and enhancement editing (improving above normal range) is "unreliable for regulatory purposes." Which of the following, if true, most seriously weakens this critique?
CORRECT: B The critic's argument is that the therapeutic/enhancement boundary is unstable and "susceptible to social pressure." Option B directly addresses this by proposing a mechanism that anchors the boundary independently of contested social norms — a curated list of specific disorders with defined scientific review, which is precisely the kind of institutionalised precision that would make the boundary "reliable for regulatory purposes" even if philosophically contested. C weakens by showing the boundary is uncontested where it matters most — a partial but not decisive response, since "reliable where it matters most" still leaves the margin contested and susceptible to pressure. A strengthens the critique by providing historical evidence of boundary drift. D shows existing regulations use the distinction — but workability in practice (especially in countries chosen for permissive regulation) doesn't directly address the normative claim about regulatory reliability.
11
The passage argues that He Jiankui's experiment was ethically unjustifiable partly on the ground of "inadequate scientific justification." For this ground to be a necessary condition of the ethical objection — rather than merely a contingent one — which assumption must be in play?
CORRECT: D The question asks what must be assumed for inadequate scientific justification to be a necessary condition of the objection rather than a merely contingent one. Option D articulates the structure precisely: scientific adequacy is necessary but not sufficient, so inadequate science is one ground among several — consent failures, heritability concerns, stratification risks — each independently applicable. On this assumption the objection is not hostage to the current state of the science: even adequate science would leave the other grounds standing, which is exactly what the passage's third paragraph confirms when it says the "most sophisticated objections" would operate even if off-target editing were eliminated entirely. C states what must NOT be assumed for the objection to be principled rather than contingent — it is the assumption that would make scientific adequacy the threshold issue and dissolve the objection once the science improved. A introduces a financial conflict not mentioned in the passage. B provides a specific scientific criticism (CCR5 inefficacy) that is one possible ground but not the structural assumption the question probes.
12
The passage notes that the therapeutic framing of germline editing "commands wider assent" than the enhancement framing. The author introduces this observation primarily to:
CORRECT: A The passage first acknowledges the distinction commands assent (giving it rhetorical weight) and then immediately qualifies it: "critics note that the line between preventing pathology and improving on the normal range is contested, historically dynamic, and susceptible to social pressure." The rhetorical movement is: grant the distinction's intuitive appeal, then expose its instability as a regulatory foundation. A captures this structure precisely — establish the distinction's apparent solidity, then undermine it as a regulatory anchor. B says the author concedes social acceptability before arguing social acceptability is unreliable — but the passage isn't making that meta-argument; it's making the argument that the therapeutic/enhancement line itself is unstable. C goes further than the passage — "not philosophically tenable" is stronger than "contested and susceptible to pressure." D attributes a policy recommendation to the passage that isn't there — the passage's conclusion is cautionary, not permissive.
Passage 3 Score
/4

P 04
Symbiosis, Mutualism & the Evolution of Cooperation
Passage Timer
10:00
Read the Passage

The evolution of cooperation poses one of evolutionary biology's deepest puzzles: natural selection operating through differential reproductive success should favour selfish individuals who extract benefits from others without reciprocating, yet stable cooperative relationships are ubiquitous — from mycorrhizal fungi trading phosphorus for plant photosynthates to cleaner wrasse removing parasites from larger fish that could easily eat them. The classical explanations for cooperation — kin selection (cooperation among genetic relatives, whose shared genes benefit when cooperators help each other), reciprocal altruism (cooperation contingent on future repayment), and mutualism (cooperation where both parties' direct fitness is immediately enhanced) — each explain a subset of cases but leave substantial residua. The application of game theory to evolutionary biology, particularly the iterated Prisoner's Dilemma framework developed by Axelrod and Hamilton, showed that cooperative strategies could invade populations of defectors under conditions of repeated interaction, recognisable partners, and sufficient weight given to future payoffs — providing a mechanism for reciprocal altruism's emergence rather than merely its stability.

Mutualism — the strict case where both parties benefit directly — presents a different evolutionary problem from kin selection or reciprocal altruism: it is not obviously puzzling, since both parties gain immediate fitness rewards. The puzzle of mutualism is instead its evolutionary stability: why don't cheaters — individuals who take benefits without providing them — invade and collapse the mutualism? The mycorrhizal case illustrates the issue. Fungi that provide less phosphorus than the mutualistic average but extract the same or more photosynthate are cheaters; if they are not penalised, they will increase in frequency and the mutualism will collapse. Research has revealed a suite of partner control mechanisms: plants allocate more photosynthate to fungi that provide more phosphorus (market dynamics within the symbiosis), preferentially connect to higher-quality fungal partners, and in some cases chemically sanction underperforming partners. These mechanisms are not accidental — they are evolved responses to the evolutionary pressure that cheating creates.

The evolution of obligate mutualism — where neither partner can survive or reproduce independently — creates a further theoretical challenge. Once partners become obligate, the selective logic that maintained the mutualism through partner control mechanisms is undermined: you cannot sanction a partner you cannot survive without. Obligate mutualisms are therefore evolutionary endpoints that require explanation for their origin (why would a facultative mutualist evolve into an obligate one?) and their stability (what prevents obligate partners from evolving toward parasitism when partner control is foreclosed?). The mitochondrial case — the endosymbiotic origin of mitochondria from alpha-proteobacterial ancestors — represents the most dramatic resolution: the bacterial genome was so extensively transferred to the host nucleus that the endosymbiont lost the genetic autonomy that would have permitted evolutionary divergence from host interests, producing an organelle rather than a symbiont in any behaviourally meaningful sense.

Questions · Passage 04
13
The passage argues that partner control mechanisms — market dynamics, partner choice, chemical sanctioning — are evolved responses to the evolutionary pressure that cheating creates. Which of the following, if true, most strongly supports this argument?
CORRECT: C The passage claims partner control mechanisms are "evolved responses to the evolutionary pressure that cheating creates" — a causal claim about the mechanism's evolutionary origin. Option C directly supports this by showing that chemical sanctioning evolved in lineages with higher historical cheating prevalence — a phylogenetic pattern consistent with cheating pressure driving the evolution of sanctioning. This is more direct evidence of the causal claim than the other options. D shows convergent evolution of the same mechanisms across different systems — consistent with a shared adaptive problem (cheating) but doesn't identify cheating pressure as the specific cause. A shows market dynamics tracking partner value — relevant to mutualism stability but doesn't establish the evolutionary origin of the mechanism as a response to cheating. B compares single vs. multiple partner conditions — relevant to which mechanism dominates but doesn't address the evolutionary origin of either.
14
The passage identifies a tension in obligate mutualisms: partner control mechanisms that maintain facultative mutualisms are "foreclosed" when mutualism becomes obligate, yet many obligate mutualisms are stable over evolutionary time. What would resolve this tension?
CORRECT: B The tension is: partner control is foreclosed in obligate mutualisms, yet many are stable. Resolution requires showing why stability is maintained without partner control. Option B provides the most elegant theoretical resolution: when mutualism is truly obligate, the partners' fitness interests converge completely — cheating by one partner reduces the fitness of both, so there is no fitness advantage to defection, removing the selective pressure that partner control was managing. The tension dissolves because its premise (that cheating would be advantageous) is false in obligate mutualisms. D provides a different resolution — genomic integration makes divergence structurally impossible — which the passage itself gestures toward with the mitochondrial example. But B is the more general theoretical resolution; D is specific to cases of massive genome transfer. A dissolves the tension by denying the observation (stability), which contradicts the passage. C introduces asymmetric transition dynamics — interesting but doesn't explain why stability is maintained once the mutualism is obligate.
15
The passage describes the mitochondrial case as "the most dramatic resolution" to the evolutionary stability problem of obligate mutualisms. What does the qualifier "dramatic" most likely signal about the mitochondrial case relative to other resolutions?
CORRECT: A "Dramatic" signals qualitative extremity rather than degree. The mitochondrial resolution works by eliminating the conditions under which the stability problem arises — the bacterial genome was so thoroughly transferred to the host nucleus that the endosymbiont lost the genetic autonomy required for evolutionary divergence. This is more extreme than other resolutions (partner control, fitness alignment) because it doesn't manage the tension — it dissolves it structurally by converting the symbiont into an organelle. A captures this qualitative extremity. B makes a claim about evolutionary timescale — the passage doesn't say mitochondria are the oldest known case, and "dramatic" doesn't refer to age. C invokes cross-domain interaction — interesting but the passage's context for "dramatic" is the stability problem's resolution mechanism, not the taxonomic distance of partners. D claims it's the only successful case — unwarranted and too strong.
16
A biologist argues: "Reciprocal altruism cannot explain the evolution of cooperation in large anonymous populations, because individuals in such populations cannot track who has cooperated with them in the past — so kin selection must be the primary explanation for cooperation in social insects." Which logical flaw most seriously affects this argument?
CORRECT: D The argument's structure is: (1) reciprocal altruism requires individual recognition; (2) large anonymous populations lack individual recognition; therefore (3) kin selection must be primary. This commits a non sequitur: ruling out one explanation doesn't establish another. The passage explicitly describes mutualism as a third mechanism — and direct fitness benefits in mutualism don't require individual recognition at all. The move from "not reciprocal altruism" to "therefore kin selection" ignores the full explanatory space. A identifies false dichotomy — which is correct and related to D — but the more fundamental flaw is the non sequitur: even if the dichotomy were complete (just kin selection and reciprocal altruism), ruling out one doesn't establish the other without independent evidence. B challenges the empirical premise about population size — a legitimate point but doesn't identify the logical flaw. C identifies equivocation — possible but the argument uses "cooperation" consistently as prosocial behaviour; the real problem is the inferential gap.
Passage 4 Score
/4

P 05
Neuroplasticity, Critical Periods & the Limits of Adult Learning
Passage Timer
10:00
Read the Passage

The concept of neuroplasticity — the nervous system's capacity to alter its structure, connectivity, and function in response to experience — has undergone a significant revision since its initial popularisation in the 1990s. Early accounts, reacting against decades of neuroscientific dogma that held the adult brain to be structurally fixed, often overclaimed: neuroplasticity was presented as near-unlimited, with the adult brain described as capable of virtually unbounded reorganisation given appropriate training. The revisionist picture is more nuanced: plasticity is real and consequential, but it is constrained by molecular and cellular mechanisms that are themselves developmentally regulated — and those constraints matter enormously for what can and cannot be changed in adult cognition.

Critical periods — developmental windows during which the brain is maximally sensitive to specific environmental inputs — illustrate the most important constraints. The classic case is monocular deprivation in cats: blocking visual input to one eye during a critical window reliably produces permanent amblyopia (functional blindness in that eye), while the same deprivation outside the critical period produces no lasting effect. The critical period's opening and closure are regulated by a balance between excitatory and inhibitory neurotransmission — specifically, the maturation of parvalbumin-positive (PV) interneurons and the deposition of perineuronal nets (PNNs), extracellular matrix structures that ensheath PV interneurons and stabilise synaptic connectivity. Dissolving PNNs in adult animals with the enzyme chondroitinase ABC partially reopens closed critical periods in several systems, including visual cortex and fear memory — demonstrating that critical period closure is an active molecular process, not mere developmental cessation, and therefore potentially reversible.

The implications for adult learning are complex. Adult plasticity is not absent — long-term potentiation and depression continue to operate, and skill acquisition produces measurable structural changes in cortical maps throughout life. But adult plasticity is qualitatively different from critical period plasticity in at least three respects: it is slower, requires more practice trials to consolidate, and is less likely to produce native-like mastery in domains where critical periods have closed — language acquisition being the paradigmatic case. The "sensitive period" literature on second-language acquisition consistently shows that phonological accuracy declines monotonically with age of acquisition, and that learners who begin after puberty almost never achieve native-like phonological competence regardless of exposure and motivation. Whether this represents a hard biological ceiling or an interaction between declining plasticity and reduced immersive exposure is actively debated, but the functional consequence — that adult language learners face an asymptotic ceiling that childhood learners do not — is not.

Questions · Passage 05
17
The passage states that dissolving PNNs with chondroitinase ABC "partially reopens closed critical periods." The qualifier "partially" most likely implies which of the following?
CORRECT: B The passage describes PNNs as part of the molecular regulation of critical period closure — not as the sole mechanism. "Partially reopens" most naturally implies that removing PNNs restores some but not all of critical period plasticity, consistent with the existence of additional closure mechanisms that PNN dissolution doesn't address. B correctly infers multi-mechanism closure from the partial effect of removing one component. A suggests enzymatic inefficiency — but the passage doesn't suggest the enzyme fails to dissolve PNNs; the partiality is about the plasticity outcome, not the enzyme's efficacy. C introduces temporal transience — possible but not implied by "partially"; "partially" modifies the reopening itself, not its duration. D suggests spatial incompleteness — possible but less parsimonious than B, which explains why removing one closure mechanism produces only a partial reopening.
18
The passage argues that adult second-language learners face an "asymptotic ceiling" in phonological accuracy that childhood learners do not. Which of the following, if true, most seriously weakens this argument?
CORRECT: D The passage itself acknowledges the debate: "Whether this represents a hard biological ceiling or an interaction between declining plasticity and reduced immersive exposure." Option D attacks the research methodology that generates the age-of-acquisition effect — if the gap is confounded by differential exposure, then the asymptotic ceiling may not reflect a biological constraint but an exposure artefact. This is the most fundamental challenge: it attacks the evidence base for the ceiling, not just its severity. C shows some adults reach native-like accuracy — this directly challenges the claim that adult learners "almost never" achieve native-like phonological competence, but "almost never" is a population-level claim that a single study's exceptions may not suffice to refute; D, by exposing the confound, attacks the population-level research itself. A shows training helps adults — consistent with the passage (adult plasticity exists); it doesn't challenge the ceiling. B refines the sensitive period's endpoint — relevant but doesn't weaken the ceiling claim for post-puberty learners.
19
The passage's "revisionist picture" of neuroplasticity presents a paradox: the molecular basis of critical period closure can be experimentally reversed (chondroitinase reopens critical periods), yet adult learners still face an asymptotic ceiling in domains like phonology. What best resolves this paradox?
CORRECT: C The paradox is between two claims: (1) molecular mechanisms of critical period closure can be reversed, and (2) adult learners still face phonological ceilings. The resolution in C dissolves the paradox by distinguishing the system's normal operating state from its manipulable boundary — the ceiling exists under normal conditions; chondroitinase shows the ceiling is not an absolute constraint, merely a stable default requiring intervention to override. This is consistent with both observations: the ceiling is real under natural conditions, and experimental manipulation can partially shift it. A and B dismiss the paradox by claiming the observations come from incomparable contexts — legitimate concerns about extrapolation, but "resolving a paradox" requires showing the claims are compatible, not that they aren't comparable. D says the paradox is empirically unresolved — true in a research sense, but the question asks for conceptual resolution compatible with both findings.
20
The passage claims that the "early accounts" of neuroplasticity "overclaimed" near-unlimited adult plasticity. The author makes this point primarily in order to:
CORRECT: A The passage opens with the historical context of overclaiming as a reaction against dogma, then presents the revisionist picture. The function of mentioning overclaiming is to establish why plasticity was overstated (it was a corrective swing away from the fixed-brain dogma) before presenting the more nuanced view — which is neither the dogma nor the overclaim, but a constrained plasticity that requires developmental context. A captures this rhetorical and argumentative structure precisely. B says the passage criticises science communicators — possible, but the passage doesn't distinguish scientific claims from popular accounts; "early accounts" refers to the scientific consensus. C attributes motivated reasoning to the scientific community — stronger than the passage supports; the passage explains the overclaiming by historical context, not ideological motivation. D suggests the passage is self-warning about its own limits — the passage doesn't do this; its tone is authoritative about the revisionist picture.
Passage 5 Score
/4
Living World · Total Score
/20
Category 12
Politics
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Democratic Backsliding, Populism & the Diagnostic Problem
Passage Timer
10:00
Read the Passage

Contemporary democratic backsliding rarely proceeds through overt coups; it operates through the incremental erosion of institutional constraints by elected leaders who claim democratic mandates for their consolidation of power. Each individual step — packing constitutional courts, revising electoral rules, deploying tax authorities against civil society, subordinating prosecutorial independence — is technically lawful when taken alone. The cumulative effect is autocratisation, but because no single action crosses a legally recognisable threshold, resistance through courts, international pressure, or domestic opposition faces the challenge of condemning a pattern of individually permissible acts. The illegality is emergent rather than discrete; the threat is systemic rather than episodic.

Mudde's thin-centred conception of populism — society as divided between a "pure people" and a "corrupt elite," with politics as the expression of the former's volonté générale — provides a conceptual framework for understanding populism's structural relationship to democratic backsliding. The anti-pluralism at the core of populism is not incidentally connected to institutional erosion; it is structurally predictive of it. If the people are homogeneous and pure, opposition is by definition elite conspiracy rather than legitimate democratic difference, and institutional constraints — independent courts, free press, opposition parties — are reinterpreted as instruments of elite capture rather than structural safeguards of democratic competition. Populism and institutional erosion are thus electively affine: populism provides the epistemic framework within which dismantling institutions appears not as autocratisation but as democratisation.

The diagnostic difficulty is acute. Incumbents accused of backsliding consistently claim to be correcting institutional imbalances that genuinely disenfranchise majority constituencies — and this claim cannot always be dismissed as bad faith. Judiciaries have historically been captured by elite interests; media pluralism has been undermined by oligarchic concentration; electoral rules have been gerrymandered to entrench incumbents. The backsliding literature must therefore develop criteria that distinguish legitimate institutional reform from institutional corrosion — and those criteria cannot be purely procedural, because backsliding itself exploits procedural legitimacy. The distinction between democratising reform and autocratising erosion is normative through and through, requiring a substantive account of what democratic institutions are for.

Questions · Passage 01
1
The passage argues that populism and institutional erosion are "electively affine" — populism provides the epistemic framework that makes dismantling institutions appear as democratisation. Which of the following, if true, most seriously weakens this argument?
CORRECT: A The "elective affinity" argument claims populism and institutional erosion are structurally connected — populism's anti-pluralism tends toward eroding institutions. Option A provides the clearest direct counter-evidence: populist governments that used strong people-versus-elite rhetoric but did not erode institutions, and actually strengthened some. This severs the structural connection the passage postulates. B shows non-populist institutional erosion — this shows the relationship isn't exclusive to populism, but doesn't show populist governments maintain institutions, so it weakens only by showing other causes are sufficient, not that populism isn't a cause. C argues economic redistribution is the core — this would recharacterise populism's essence but doesn't directly challenge whether populist anti-pluralism, when present, tends toward institutional erosion. D argues pre-existing institutional weakness is the driver — this is an alternative explanation, but it leaves open that populism amplifies or exploits that weakness.
2
The passage argues that distinguishing "democratising reform" from "autocratising erosion" requires a substantive normative account of what democratic institutions are for — purely procedural criteria will not suffice. Which of the following can be most reliably inferred from this argument?
CORRECT: B The passage argues that procedural legitimacy alone cannot distinguish reform from erosion — backsliding exploits procedural legitimacy. A reliable inference is that any challenge to institutional reforms must engage substantive questions (what are these institutions for?) not just procedural ones (did the correct process occur?). This has direct implications for how critics must argue against incumbents claiming democratic legitimacy. A makes a prescriptive recommendation — adopting a majoritarian standard — that the passage doesn't advance; the passage says substantive criteria are needed but doesn't specify them. C concludes international organisations are ill-equipped — plausible but the passage doesn't say this; it identifies a theoretical requirement without drawing institutional conclusions. D attributes failure to democratic theory — too strong; the passage identifies a challenge for the backsliding literature, not a failure of political philosophy.
3
The passage describes a situation in which backsliding is legally permissible step by step but systemically autocratising in aggregate — the illegality is "emergent rather than discrete." What is the precise structural feature that makes this a paradox for legal and institutional resistance?
CORRECT: B The deepest paradox is institutional self-limitation: the institutions designed to resist democratic erosion — courts, commissions — operate within a framework of legality and can only respond to legal violations. Each individual step is lawful, so these institutions have no legitimate basis to intervene at each step. Yet by the time the cumulative autocratisation is undeniable, these institutions may already be captured. The guardians of democracy are structurally limited to reacting to discrete illegality, but the threat operates below that threshold. A treats it as a legal gap — solvable by extending law — which reduces the paradox to a correctable design flaw. B identifies the structural limitation: the institutions' authority is constituted by legality, so responding to individually lawful acts risks overstepping their mandate. C identifies the electoral accountability failure — real and important, but the specific paradox described in the passage is about legal resistance, not electoral correction. D identifies the sovereignty constraint on international actors — also real but a different institutional dimension from the domestic legal paradox.
4
The passage argues that populism's thin-centred anti-pluralism "is structurally predictive" of institutional erosion. For this claim to hold as a structural prediction rather than a mere historical correlation, which of the following must be assumed?
CORRECT: B A structural prediction (as opposed to a historical correlation) requires a mechanism — something in populism's core that generates the tendency toward institutional erosion independently of contingent factors. The passage's mechanism is the anti-pluralist epistemology: if opposition is by definition illegitimate conspiracy, then institutions protecting opposition have no normative standing within the populist framework. This provides a standing motivation that is structural — rooted in the ideology's core — not contingent. A claims every populist government will erode institutions (universal necessity) — too strong; the structural argument requires a standing tendency, not a universal iron law. C makes it about personal temperament — this is psychological, not structural; the passage's argument is about ideology, not personality. D requires the structural pressure to be stronger than countervailing pressures — this is an empirical condition for the prediction's reliability, not the assumption that makes the prediction structural rather than correlational.
Passage 1 Score
/4

P 02
Constitutional Courts, the Countermajoritarian Difficulty & the Process/Substance Problem
Passage Timer
10:00
Read the Passage

The "countermajoritarian difficulty" — Bickel's term for the democratic legitimacy problem posed by judicial review — has generated a vast literature of proposed justifications and critiques. Its core: if democratic governance requires that policy reflect the will of electoral majorities, then unelected, life-tenured constitutional courts that invalidate legislation must justify their authority against the apparent principle of democratic self-rule. The two most influential responses are originalism and process theory, and both generate their own internal difficulties.

Originalist responses hold that judicial review is democratically legitimate because judges who enforce the original meaning of a democratically enacted constitutional text are acting as agents of that constitutionalising majority, not as a counter-elite imposing their own values. Critics press two objections: first, the semantic indeterminacy of constitutional provisions — "due process," "equal protection," "unreasonable searches" — means that "original meaning" routinely underdetermines outcomes, requiring substantial interpretive choice that cannot be squared with the mechanical-agent model; second, the historical evidence for original meaning is itself contested, selectively deployed, and underdetermined by the available record. Originalism's constraint is not illusory — it is real but limited, excluding some interpretive moves while leaving others open. Its defenders often overstate the constraint to claim that it resolves the countermajoritarian difficulty rather than merely ameliorating it.

Ely's process theory attempts to sidestep the substantive controversy by confining judicial review to the correction of process defects: representation failures, systematic exclusion of minorities from democratic participation, entrenchment of incumbents through gerrymandering. Courts guard the democratic process rather than substituting their substantive judgments for legislative ones. The difficulty is that determining which groups are entitled to full democratic participation, what counts as exclusion, and what counts as fair representation are themselves substantive normative judgments. The process/substance distinction is unstable at precisely the points where it is most needed: the hardest cases — drawing district lines, determining voting eligibility, regulating money in politics — require courts to make substantive choices about democratic value that the process framework claims to avoid. The countermajoritarian difficulty reappears within the solution at a more sophisticated level.

Questions · Passage 02
5
The passage argues that originalism's constraint on judicial discretion is "real but limited" — it excludes some interpretive moves while leaving others open. Which of the following, if true, most strengthens this argument against the stronger originalist claim that original meaning fully resolves the countermajoritarian difficulty?
CORRECT: A The passage argues originalism is "real but limited" — it constrains some moves but leaves others open, so it ameliorates but doesn't resolve the countermajoritarian difficulty. The strongest evidence for this is that originalist judges using the same methodology disagree in practice. Option A provides exactly this: originalist justices reaching opposite conclusions on the same provision from the same historical record. This demonstrates the method underdetermines outcomes — the constraint is real (both are using the method) but limited (the method doesn't resolve the case). D is very close — Framer disagreement shows no single original meaning exists, which would suggest the constraint is more illusory than "real but limited." But A is more targeted: it shows the constraint fails at precisely the cases that matter, while D shows the historical basis is contested. B argues the Framers intended generality — this would suggest originalism misunderstands the constitutional design, not that the constraint is real but limited. C argues non-originalist alternatives are more legitimate — this is a comparative claim irrelevant to whether originalism's constraint ameliorates the difficulty.
6
The passage concludes that in Ely's process theory, "the countermajoritarian difficulty reappears within the solution at a more sophisticated level." Which of the following can be most reliably inferred from this conclusion?
CORRECT: B The passage shows both originalism and process theory reintroduce the countermajoritarian difficulty — originalism at the level of interpretive choice from underdetermined original meaning, process theory at the level of substantive choices about democratic participation. The reliable inference is that the difficulty is structural: any theory of judicial review in a constitutionally constrained democracy will face it at some level. B captures this precisely. A makes a comparative ranking — originalism vs. process theory — that the passage doesn't make; the passage is symmetrically critical of both. C says process theory is "worse than" openly substantive theories — the passage doesn't make this comparative judgment; it identifies a difficulty within process theory, not a ranking. D draws a prescriptive conclusion — abandon theoretical justification — that goes well beyond what the passage establishes; identifying a recurring structural difficulty is not the same as concluding theoretical resolution is impossible.
7
The passage examines both originalism and process theory without endorsing either. What is the most plausible reason the author presents both theories' internal difficulties rather than arguing for one over the other?
CORRECT: B Both theories are shown to reintroduce the difficulty — originalism at the interpretive-underdetermination level, process theory at the substantive-participation-judgment level. Presenting both not-fully-adequate responses to the same structural difficulty is precisely the argumentative move that establishes the difficulty as a genuine structural feature of constitutional democracy rather than a correctable design problem. A attributes professional neutrality — uncharitable and the author explicitly evaluates both theories critically. C attributes implicit endorsement of process theory — no evidence for this; the author is equally critical of both. D predicts an upcoming synthesis — the passage ends with the process/substance problem; there is no signal of a third theory coming.
8
An originalist might respond to the passage's critique as follows: "Even if original meaning underdetermines outcomes in some hard cases, it fully determines outcomes in the vast majority of constitutional cases — so originalism succeeds as a general solution to the countermajoritarian difficulty even if it has residual cases of underdetermination." Which logical problem most seriously affects this response?
CORRECT: B The countermajoritarian difficulty is about courts exercising political discretion — making value choices that democratic theory says should belong to elected majorities. The response argues originalism handles most cases, but the difficulty is most acute precisely in the hard cases — those involving contested constitutional values — where originalism underdetermines outcomes. Judicial discretion in easy cases (where the answer is clear regardless of method) generates no countermajoritarian concern; the concern arises in hard cases. Volume is simply the wrong metric. A identifies a hasty generalisation about hard cases — real but less precise than B; A says the inference from easy to hard cases is invalid, but doesn't identify what specifically makes hard cases the relevant ones for the countermajoritarian difficulty. C identifies a spectrum vs. binary issue about underdetermination — interesting but peripheral to the main flaw. D notes the response concedes the critique and retreats — this is a good observation, but D describes a rhetorical consequence rather than the logical flaw in the response's argument.
Passage 2 Score
/4

P 03
Sovereignty, Humanitarian Intervention & the Responsibility to Protect
Passage Timer
10:00
Read the Passage

The Westphalian concept of sovereignty — the principle that each state has exclusive authority over its territory and population, and that other states have no right to interfere in its domestic affairs — was the organising norm of the post-1648 international order. Non-interference provided predictability, reduced great-power wars over each other's internal politics, and gave newly decolonised states protection against resumed colonial intervention. The costs of this norm became visible in the 1990s: the Rwandan genocide, the Srebrenica massacre, and the Kosovo crisis each posed the question of whether sovereignty could shield a government while it murdered its own people. The conventional answer was yes, but the political will to maintain that answer was eroding.

The Responsibility to Protect doctrine, endorsed by the UN General Assembly at the 2005 World Summit, attempted to reframe sovereignty not as a right but as a responsibility. On this account, sovereignty entails an obligation to protect the population; when a state fails catastrophically in this obligation, the international community acquires a residual responsibility to protect the population by other means, including in extremis military intervention authorised by the Security Council. The conceptual shift was significant: instead of intervention being an exception to sovereignty, sovereignty itself was reconceived as conditional on the performance of protective functions. This shift aligned the doctrine with the interests of potential interveners but raised the concern that it provided a legitimating vocabulary for the abuse of humanitarian justifications to pursue strategic interests.

The tension between the humanitarian framing and its strategic uses became acute in Libya in 2011, when Security Council Resolution 1973 authorised a no-fly zone to protect civilians, which NATO interpreted as authorising the removal of the Gaddafi government. Russia and China, which had abstained rather than vetoing the resolution, concluded that they had been deceived about the scope of the authorisation and subsequently blocked Council action on Syria regardless of humanitarian urgency. The Libya precedent thus contributed to the paralysis it was supposed to prevent, demonstrating a structural problem in the R2P framework: the selectivity of its application and the scope-creep in its interpretation undermine the credibility of humanitarian claims in subsequent crises, making each invocation of R2P politically costlier and less effective than the last.

Questions · Passage 03
9
The passage describes non-interference as providing several benefits for newly decolonised states. Which of these benefits is most directly undermined by the R2P doctrine's reconception of sovereignty as conditional?
CORRECT: C The passage specifically identifies protection against "resumed colonial intervention" as one benefit of non-interference for newly decolonised states. R2P's conditionality directly undermines this: if sovereignty depends on performing protective functions to external standards, then the very states most recently freed from colonial intervention are again subject to intervention justified by the intervening powers' own definitions of adequate governance. C identifies this connection precisely. A concerns predictability, which is a benefit the passage mentions but attributes more broadly rather than specifically to newly decolonised states. B concerns manufacturing of justifications, which is the critique of strategic abuse rather than the specific benefit undermined. D overlaps with A and B in content and does not specifically connect to newly decolonised states.
10
The passage describes the R2P conceptual shift as aligning "with the interests of potential interveners." Why is this alignment a problem for the doctrine's legitimacy rather than simply a coincidence?
CORRECT: B The problem is structural rather than incidental: when a norm gives enforcers discretion and advantage, they have ongoing incentives to invoke it for non-humanitarian purposes. This means every invocation carries suspicion, not just obviously abusive ones. The Libya case illustrates this: Russia and China concluded the humanitarian framing was a cover for regime change, and that conclusion affected their responses in Syria. B captures this structural incentive logic. A requires unanimous consent as the legitimacy condition, which is too demanding and not the passage's argument. C makes a historical-continuity argument about colonial powers, which is related but a different and less structural critique. D claims non-neutral norms are definitionally illegitimate under international law, which is empirically false and not the passage's argument.
11
The passage argues that the Libya precedent contributed to "the paralysis it was supposed to prevent." What structural feature of the R2P framework generates this self-defeating dynamic?
CORRECT: D The passage explicitly identifies the mechanism: "the selectivity of its application and the scope-creep in its interpretation undermine the credibility of humanitarian claims in subsequent crises, making each invocation of R2P politically costlier and less effective than the last." The self-defeating dynamic is that the very acts that operationalise R2P simultaneously undermine its future operability by destroying the credibility of humanitarian justifications. D captures this credibility-depletion mechanism. A concerns the veto system, which is a structural constraint but not the specific self-defeating dynamic the passage identifies. B claims unanimous authorisation is required, which is factually incorrect — nine affirmative Security Council votes with no P5 veto are sufficient. C says the threshold rises after each intervention, which partially captures the cost increase but misses the credibility mechanism the passage specifies.
12
An earlier passage in this section argues that the distinction between "democratising reform and autocratising erosion" in the backsliding literature requires "a substantive account of what democratic institutions are for." By analogy, the R2P framework faces which parallel challenge?
CORRECT: A The analogy in the backsliding passage is: purely procedural criteria cannot distinguish democratic reform from autocratic erosion because backsliding exploits procedural legitimacy; a substantive account of democratic institutions' purpose is needed. The parallel for R2P is: purely procedural criteria for intervention authorisation cannot distinguish genuine humanitarian intervention from strategic intervention cloaked in humanitarian language; a substantive account of what sovereignty is for — and when it genuinely fails — is needed to make the distinction. A identifies this parallel precisely. B focuses on what intervention is for rather than what sovereignty is for, which mislocates the analogy: the backsliding passage asks what democratic institutions are for, not what institutional reform is for. C says R2P needs a procedural account, which inverts the lesson from the backsliding passage. D concerns Security Council accountability, which is a different institutional question.
Passage 3 Score
/4

P 04
Deliberative Democracy, Agonism & the Problem of Reasonable Disagreement
Passage Timer
10:00
Read the Passage

Deliberative democracy theory holds that the legitimacy of collective decisions depends not merely on majority preference but on the quality of the deliberative process through which preferences are formed and expressed. Associated with Habermas and Rawls, the deliberative ideal requires that citizens offer reasons for their positions, that those reasons be publicly assessable, and that the outcome reflect the force of the better argument rather than the weight of numbers or the leverage of power. This ideal is explicitly procedural: what matters is not which decision is reached but whether it was reached through a process of inclusive, reason-giving exchange.

Chantal Mouffe's agonistic critique challenges the deliberative framework at its foundation. Mouffe argues that the aspiration to rational consensus on which deliberative theory depends is both empirically unrealistic and politically dangerous. Empirically unrealistic because genuine political disagreements are not resolvable through argument: they concern conflicts of values, identities, and interests that no amount of reasoned exchange can eliminate. Politically dangerous because the attempt to bracket conflict in the name of rational consensus tends to marginalise positions that cannot be translated into the register of "reasonable" public discourse, systematically excluding the perspectives of those whose worldview is incompatible with the procedural norms that deliberative theory takes as given. Mouffe proposes agonism as an alternative: a democratic politics that accepts conflict as constitutive of the political and channels it into institutionalised forms of adversarial competition rather than attempting to transcend it through deliberation.

The debate between deliberative and agonistic theorists reflects a deeper disagreement about whether liberal democracy should be understood primarily as a procedure for reaching legitimate decisions or as a practice for managing conflict between groups with genuinely incompatible values. Rawls's political liberalism attempted a middle path: a domain of public reason in which citizens appeal only to values that can be accepted by all reasonable citizens, coexisting with a domain of comprehensive doctrines that contains the full range of moral, religious, and philosophical positions. The "reasonable" qualifier on the domain of public reason is the critical term: Mouffe argues it performs an exclusionary function, delegitimising political positions that cannot meet its standard as unreasonable rather than acknowledging them as genuine positions in a conflict that has no rational resolution.

Questions · Passage 04
13
Mouffe argues that the aspiration to rational consensus is "politically dangerous." The passage specifies the mechanism: it marginalises positions that cannot be translated into "reasonable" public discourse. Which type of political position is most structurally vulnerable to this marginalisation?
CORRECT: C The passage identifies positions "whose worldview is incompatible with the procedural norms that deliberative theory takes as given" as the structurally vulnerable ones. Comprehensive moral and religious doctrines that ground political positions in premises non-shareable beyond their own tradition are precisely the positions that cannot be translated into Rawls's public reason register. Their exclusion is not incidental but structural: the "reasonable" requirement cannot accommodate positions that reason from premises not accessible to all. C identifies this structural vulnerability. A concerns empirically false positions, which is a different and legitimate form of exclusion not what Mouffe is critiquing. B concerns numerical minorities, which is a concern for aggregative rather than deliberative democracy. D concerns rhetorical advantage, which is a valid practical critique but a different mechanism from the structural exclusion Mouffe identifies.
14
Mouffe proposes agonism as an alternative to deliberative democracy. Agonism accepts conflict as constitutive of the political rather than treating it as a problem to be resolved. Which of the following most precisely identifies what agonism requires of democratic institutions that deliberativism does not?
CORRECT: B The passage describes agonism as channelling conflict "into institutionalised forms of adversarial competition rather than attempting to transcend it through deliberation." What agonism requires is institutions that manage genuine ongoing conflict rather than procedures designed to produce consensus. The distinctive requirement is that losing positions are treated as legitimate ongoing adversaries rather than as unreasonable or converted. B captures this. A concerns equal participation for all positions, which is an implication of agonism but frames it in terms of deliberativism's exclusion rather than what agonism positively requires. C introduces minority rights protections, which is not derived from the agonism account in the passage. D introduces a thick common good requirement, which is not what the passage attributes to agonism.
15
Rawls's "reasonable" qualifier on the domain of public reason is described as performing "an exclusionary function." The passage implies this is problematic. But a defender of Rawls could argue that some exclusion is legitimate and necessary. On what grounds could this defence most plausibly proceed?
CORRECT: D The most precise Rawlsian defence distinguishes between excluding comprehensive doctrines from the public sphere (which Rawls does not intend) and excluding political claims that can only be justified by appealing to premises specific to one doctrine. Public reason does not ban religion from political life; it requires that political positions be supportable by reasons accessible beyond the specific doctrine. This is a narrower exclusion than Mouffe's critique implies. D states this defence precisely. A says the exclusion is only apparent, which is an empirical denial that Rawlsians rarely claim. B grounds exclusion in procedural self-protection, which is a legitimate argument but not specifically the Rawlsian defence of the "reasonable" qualifier. C turns the critique back on Mouffe, which is a valid tu quoque but does not defend the specific exclusionary function of the "reasonable" qualifier.
16
The passage presents deliberative and agonistic theory as reflecting a "deeper disagreement about whether liberal democracy should be understood primarily as a procedure for reaching legitimate decisions or as a practice for managing conflict." Which of the following most accurately characterises the relationship between these two conceptions?
CORRECT: C The passage frames the disagreement as being about what politics fundamentally is: for deliberativism, conflict is a problem that rational exchange can overcome; for agonism, conflict is constitutive of the political itself. This is not merely an empirical dispute about whether consensus is achievable but a conceptual difference about what democratic politics consists in. They are not just describing different solutions to the same problem; they are describing different phenomena. C captures this conceptual rather than merely empirical character of the disagreement. A says they are complementary, which dissolves the tension the passage identifies as a fundamental disagreement. B says the disagreement is empirical about political psychology, which understates its conceptual depth. D says they are practically equivalent, which the passage's content directly contradicts.
Passage 4 Score
/4

P 05
Electoral Systems, Representation & the Trade-offs of Democratic Design
Passage Timer
10:00
Read the Passage

The choice between electoral systems involves genuine trade-offs between values that democratic theory holds simultaneously important. Single-member plurality systems — first-past-the-post — tend to produce stable single-party governments, clear accountability linkages between elected representatives and specific constituencies, and strong incentives for parties to compete for the median voter. They do so at the cost of representational distortion: parties with geographically concentrated support are advantaged over those with diffuse support, large numbers of votes are cast for losing candidates and produce no representation, and the composition of the legislature can diverge sharply from the distribution of voter preferences. The 2019 UK general election, in which the Conservative Party won 56% of seats with 44% of votes while the Liberal Democrats won 2% of seats with 12% of votes, illustrates the magnitude of the distortion.

Proportional representation systems correct this distortion by allocating seats in rough proportion to vote shares, ensuring that most votes contribute to representation and that the legislature more closely mirrors voter preferences. The cost is typically coalition government, which diffuses accountability: voters cannot identify which party is responsible for specific policy outcomes when multiple parties share executive power, and government formation is often decided by inter-party bargaining after the election rather than by voter choice. The stability objection to proportional systems is moderated by the empirical record of countries like Germany, the Netherlands, and the Nordic states, which combine proportional representation with durable coalition governance — though critics note that these cases involve specific institutional features including constructive votes of no confidence that are not automatically transferable to other contexts.

A deeper challenge to electoral system design is that preferences about systems are systematically shaped by the interests of parties already empowered by them. In first-past-the-post systems, the major parties that benefit from disproportionality have incentives to oppose reform, producing a systematic bias in the political process through which reform is evaluated. This creates what political scientists call a status quo bias: the system that exists generates the political actors who evaluate it, and those actors have interests in its continuation that are independent of any assessment of its democratic quality. Genuine electoral reform therefore requires either constitutional moments that circumvent ordinary political processes or the occasional alignment of incumbents' short-term strategic interests with reform, which is rare and contingent.

Questions · Passage 05
17
The 2019 UK election example is cited to illustrate which specific point about first-past-the-post systems?
CORRECT: B The passage uses the 2019 example immediately after stating that FPTP produces "representational distortion" where parties with geographically concentrated support are advantaged over those with diffuse support. The Liberal Democrats' 12% of votes producing 2% of seats versus the Conservatives' 44% producing 56% illustrates the "magnitude of the distortion." B captures what the example illustrates. A uses the example to make a pro-FPTP argument, but the passage cites it as an illustration of distortion, not accountability advantage. C concerns voter behaviour, which is not what the example illustrates. D makes a normative claim about mandate legitimacy that goes beyond what the passage states.
18
The passage says the stability objection to proportional representation is "moderated by the empirical record" of Germany, the Netherlands, and Nordic states. What qualification does the passage immediately add, and why does it matter?
CORRECT: C The passage explicitly adds: "though critics note that these cases involve specific institutional features including constructive votes of no confidence that are not automatically transferable to other contexts." This qualification matters because if stability depends on those specific institutions rather than on PR itself, the empirical evidence cannot straightforwardly support PR adoption elsewhere. C states the qualification and its relevance precisely. A concerns economic prosperity, which is not the qualification the passage makes. B concerns path dependency and political culture, which is a valid concern but not what the passage specifies. D concerns size and homogeneity, which is a common argument in comparative politics but not the qualification the passage adds.
19
The "status quo bias" in electoral reform is described as arising because "the system that exists generates the political actors who evaluate it." What type of problem is this, and why does it make purely procedural routes to reform inadequate?
CORRECT: B The passage describes a circular self-entrenchment: the system generates the actors who evaluate it, and those actors have interests in its continuation independent of democratic quality. Purely procedural reform routes — parliamentary votes, committee reviews — run through exactly these actors, embedding the bias rather than correcting it. That is why constitutional moments that "circumvent ordinary political processes" are identified as the alternative. B captures both the type of problem and why procedure is inadequate. A describes a collective action problem, which is a related but different mechanism. C describes information asymmetry, which is a separate and not specifically structural problem. D describes a legitimacy perception problem, which is a consequence rather than the structural mechanism.
20
The passage presents the choice between FPTP and PR as involving genuine trade-offs. Which of the following most accurately characterises what "genuine trade-offs" implies about electoral system choice?
CORRECT: C "Genuine trade-offs" means that serving one value comes at the cost of another and that no system serves all values simultaneously. This makes the choice irreducibly normative: which values to prioritise depends on judgments about democratic priorities in a specific context, not on identifying a system that is objectively best. C captures this. A says the choice is arbitrary, which conflates normative with arbitrary. B says PR is generally superior, which the passage does not endorse — it presents both systems' advantages and costs neutrally. D recommends mixed systems, which is a substantive policy proposal not implied by the acknowledgment of trade-offs.
Passage 5 Score
/4
Politics · Total Score
/20
Category 13
Sociology
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Social Capital, Putnam & the Simultaneity Problem
Passage Timer
10:00
Read the Passage

Putnam's social capital framework — developed most fully in Bowling Alone — argued that dense networks of civic association generate generalised social trust, norms of reciprocity, and cooperative dispositions that collectively sustain democratic governance and economic performance. The central empirical claim was that declining civic participation in the United States since the 1960s correlated with measurable deficits in interpersonal trust, institutional confidence, and collective problem-solving capacity. Putnam's framework distinguished two forms of social capital: bridging capital, which connects individuals across social cleavages and generates diffuse generalised trust; and bonding capital, which reinforces solidarity within homogeneous groups but at the potential cost of out-group hostility. A society rich in bonding but poor in bridging capital may exhibit high associational density while simultaneously entrenching the exclusions liberal democracy is designed to overcome.

The framework attracted both enthusiastic application and methodological criticism. The most technically serious objection is the simultaneity problem: social capital is both a cause and an effect of the political and economic conditions it is supposed to explain. Dense civic networks may produce democratic stability — but democratic stability may equally produce the conditions under which civic networks thrive. Without experimental or quasi-experimental research designs capable of breaking this simultaneity, regression-based measures of social capital retain a circularity that renders causal inference indeterminate. The observed correlation between social capital and positive social outcomes could reflect either direction of causality, a common third cause, or a reinforcing feedback loop in which each sustains the other.

A further complication is that Putnam's framework struggles to explain its own central finding. If social capital produces democratic governance and economic performance, and these in turn produce social capital, the question becomes: what breaks the cycle and initiates decline? Putnam's own explanation — that television and generational change disrupted civic habits — is not derived from the social capital framework but imported from a separate behavioural account. This suggests that the framework may be better understood as a descriptive account of the relationship between civic engagement and social outcomes rather than a causal explanation of the mechanisms that sustain or disrupt that relationship.

Questions · Passage 01
1
Putnam argues that bridging social capital generates diffuse generalised trust that sustains democratic governance. Which of the following, if true, most seriously weakens the claim that bridging social capital is the operative causal mechanism rather than a symptom of pre-existing conditions?
CORRECT: B The causal claim is: bridging social capital → generalised trust. Option B shows countries with high trust but lower associational density — directly suggesting trust may precede rather than result from association. This severs the direction of causality Putnam proposes. A is strong and uses a natural experiment — but it shows that cross-cleavage proximity doesn't increase trust, which suggests bridging capital doesn't form easily, not that generalised trust precedes it. B more directly addresses the causal direction by showing trust without the proposed cause. C addresses bonding/bridging as complements or alternatives — relevant to the framework's internal distinctions but not to the bridging → trust causal direction. D challenges Putnam's explanatory account of the decline (timing) — this targets his historical narrative, not the core claim about bridging capital and trust.
2
The passage argues that Putnam's explanation of civic decline — television and generational change — is "not derived from the social capital framework but imported from a separate behavioural account." Which of the following can be most reliably inferred from this observation?
CORRECT: A The passage draws a precise distinction: the social capital framework describes the relationship between civic engagement and social outcomes but cannot, from its own resources, explain what initiates a disruption in that relationship. Importing an explanation from outside suggests the framework's explanatory scope is limited — it describes correlations and mutual dependencies but lacks the mechanisms to explain the system's dynamics. This is a gap between descriptive and causal-explanatory adequacy. B says the television account is empirically false because it's imported — but this confuses the source of a claim with its empirical status; imported claims can be true. C prescribes abandoning the framework — the passage identifies a limitation, not a reason to abandon. D makes a sweeping disciplinary claim about what sociology can and cannot explain — far beyond what the observation supports.
3
The passage identifies a paradox in the bridging/bonding distinction: a society with high bonding capital but low bridging capital may have high associational density yet entrench the exclusions liberal democracy is designed to overcome. What makes this paradoxical for the social capital framework rather than simply a distinction between two types of capital?
CORRECT: A The paradox is self-referential within the framework: social capital is the proposed solution to democratic deficits. But high social capital (bonding type) can produce democratic deficits (exclusion, out-group hostility). The remedy — building social capital — can produce the disease it was meant to cure, depending on which type of capital is built. This is more than a distinction between two types; it's a structural inversion where the solution and the problem belong to the same category. B correctly notes it's "just a distinction" — this is the strong distractor — but it misses the self-referential problem: if the framework recommends more social capital and social capital of one type produces the problems the framework aims to solve, the recommendation is dangerous without the type-distinction, and the framework's own general endorsement of social capital is undermined. C identifies a measurement problem — real and serious, but this is a methodological issue rather than the structural paradox the question asks about. D correctly identifies the liberal democracy/bonding capital tension but frames it as liberal democracy depending on what undermines it — slightly different from the remedy/disease paradox.
4
The simultaneity problem implies that observed correlations between social capital and positive social outcomes could reflect either direction of causality or a common third cause. For this to constitute a serious methodological objection to Putnam's policy recommendations — not just to his causal claims — which of the following must be assumed?
CORRECT: A The simultaneity problem is a methodological objection to Putnam's causal claims. For it to extend to his policy recommendations, those recommendations must depend on the specific causal direction he proposes. If social capital is the cause of positive outcomes, investing in it makes sense. If the causality is reversed (democratic stability → social capital) or bidirectional, then investing in social capital without addressing the underlying political and economic conditions may be ineffective — you can't manufacture the cause by building the symptom. B says the problem is unique and unsolvable — not required; the critique only needs the problem to be serious in this case. C attributes naivety to Putnam — irrelevant to whether the methodological objection holds. D says positive outcomes are sufficient reason regardless of mechanism — this would actually undermine the objection, making it not a serious policy problem; D is what the assumption needs to deny.
Passage 1 Score
/4

P 02
Rational Choice Theory, Collective Action & the Myside Problem
Passage Timer
10:00
Read the Passage

Rational choice theory (RCT) generates a fundamental sociological puzzle: if individuals act to maximise their utility, collective goods — outcomes that benefit everyone but require individual contribution — should never be produced at efficient levels, because each rational individual has an incentive to free-ride on others' contributions. Olson's Logic of Collective Action formalised this: large groups face a collective action problem that can only be resolved through selective incentives (private rewards for contributors not available to free-riders) or coercion. Small groups can sometimes overcome the problem through mutual monitoring and the reputational stakes of ongoing relationships, but large anonymous groups face a structural barrier to collective provision of public goods.

Experimental evidence has persistently challenged Olson's predictions. In one-shot prisoner's dilemmas, subjects cooperate at rates far exceeding RCT predictions; in public goods games, voluntary contributions consistently exceed the zero predicted by the free-rider logic; and subjects engage in costly punishment of defectors — "altruistic punishment" — at personal expense with no material return. These findings led to revisions of RCT along two lines. Evolutionary game theory explains cooperation through mechanisms that were adaptive in ancestral environments — kin selection, reciprocal altruism, and reputation effects in repeated interactions — where cooperation-supporting dispositions were fitness-enhancing even if they appear irrational in one-shot laboratory settings. Coleman's social capital extension argues that the social networks in which individuals are embedded transform the effective payoff structure: by embedding decisions in ongoing relationships with reputational stakes, social networks convert effectively one-shot decisions into iterated ones.

Neither response fully escapes RCT's framework; both extend it. The evolutionary response gives a fitness-based rationality account of apparently irrational cooperation, preserving the rationality assumption by changing the fitness function. Coleman's response preserves rationality by changing the payoff structure rather than the rationality assumption. What neither addresses is the finding — most robustly documented in the myside bias literature — that individuals systematically evaluate evidence and arguments in ways that favour their pre-existing group memberships, even when doing so directly conflicts with their material self-interest. Group identity can override individual interest calculation in ways that neither evolutionary nor network accounts straightforwardly accommodate, suggesting that RCT's preference-optimisation framework may require not just extension but partial replacement.

Questions · Passage 02
5
The passage argues that myside bias — favouring group identity over material self-interest — may require not just extending RCT but partially replacing it. Which of the following, if true, most strengthens this argument?
CORRECT: A The passage's "partial replacement" argument rests on myside bias being irreducible to preference optimisation — identity override that can't be modelled as maximising a utility function over an expanded preference set. Option A provides exactly this: forgoing monetary gains to avoid agreeing with out-group members, with the effect strengthening when identity is made salient. Crucially, if this could be modelled as a preference for group agreement entering the utility function, it would be an extension of RCT, not a replacement. The passage requires a case where identity override can't be accommodated as a preference — A's description of systematic departure from material interest specifically when identity is salient is the strongest evidence. B shows evolutionary accommodation — this directly supports the evolutionary extension, not partial replacement; if myside bias is fitness-rational, it is accommodated. C shows Coleman's framework can handle group identity — this directly supports the extension path, not replacement. D shows political preferences track identity over material interest — consistent with the argument but less direct than A; D could be accommodated by updating utility functions with identity preferences.
6
The passage states that both the evolutionary and Coleman responses to Olson extend rather than replace RCT, preserving the rationality assumption while changing either the fitness function or the payoff structure. Which of the following can be most reliably inferred from this characterisation?
CORRECT: C The passage's logic is: evolutionary and Coleman responses extend RCT by preserving rationality while adjusting what is being optimised. If a finding can always be accommodated by adjusting the fitness function or payoff structure, it is always compatible with extended RCT. A finding that cannot be so accommodated — that requires the rationality assumption itself to be dropped — would be the kind of evidence that necessitates partial replacement. This is what the passage implies the myside bias findings may represent. A says extension within a flawed framework is inferior — the passage doesn't make a comparative quality judgment between extension and replacement; it identifies a case where extension may be insufficient. B claims the two responses together provide a complete account — the passage explicitly says neither "fully escapes RCT's framework" and identifies myside bias as something neither addresses. D says the cooperation findings are consistent with RCT's core — true for the evolutionary/Coleman accounts, but the passage introduces myside bias as a further case that may not be.
7
The passage presents Olson's theory, then the experimental challenges to it, then two RCT-compatible responses, then myside bias as a potential partial refutation. What is the most plausible reason the author structures the argument in this sequence rather than introducing myside bias alongside the initial experimental challenges?
CORRECT: A The argumentative logic is stepwise: first show that cooperation challenges Olson, then show those challenges can be handled within an extended RCT (evolutionary + Coleman), then introduce a challenge — myside bias — that these extensions cannot handle. This structure makes the "partial replacement" conclusion stronger than if myside bias were introduced alongside cooperation findings: the reader first sees that RCT can be extended to handle cooperation, establishing that RCT is resilient, and then encounters the evidence that even resilient RCT cannot handle myside bias. This maximises the force of the partial replacement argument. B gives a historical chronology reason — but the passage's argument is analytical, not historical; the structure is chosen for argumentative force. C says the RCT-compatible responses are shown to fail — they are shown to be insufficient for myside bias, not to fail at cooperation; the distinction matters. D says myside bias is less empirically supported and is signalled as speculative — the passage describes it as "most robustly documented," the opposite of speculative.
8
A defender of RCT might respond to the myside bias evidence as follows: "Individuals who forgo material gain to maintain group identity are simply revealing a preference for group membership — identity is a good in their utility function. RCT can accommodate any behaviour if the utility function is defined broadly enough." Which logical problem most seriously affects this defence?
CORRECT: B The defence's core move is: any behaviour → infer a preference → explain the behaviour via that preference. If this move is always available for any behaviour, then no behaviour can refute RCT — the framework is immunised against falsification by the infinite expandability of the utility function. A framework that can accommodate any observation is not a scientific theory but a tautology. C identifies circular reasoning — also a real problem (behaviour → preference → behaviour), but the more fundamental issue is unfalsifiability: even if the reasoning weren't circular, the move of always postulating a preference to explain any behaviour generates the tautology problem. D identifies a redescription/explanation gap — genuine and closely related to B, but less precise: D says the move is descriptive not explanatory, while B identifies the deeper issue that unlimited redescription produces unfalsifiability.
Passage 2 Score
/4

P 03
Bourdieu's Field Theory, Capital & the Reproduction of Social Inequality
Passage Timer
10:00
Read the Passage

Pierre Bourdieu's field theory provided one of sociology's most comprehensive accounts of how social inequality reproduces itself across generations despite formal equality of opportunity. The central mechanism is the convertibility of capital: economic capital converts into educational advantage through private schooling and tutoring; educational capital converts into cultural capital through familiarity with legitimate culture; cultural capital converts into social capital through the networks accessed through elite institutions; and social capital converts back into economic capital through preferential employment and business access. The conversion process is not transparent — it is precisely its opacity that makes it effective. The child who attended boarding school enters the labour market with a habitus, a set of embodied dispositions toward language, comportment, and taste, that is indistinguishable from natural ability to those who share it and incomprehensible to those who do not.

The habitus concept is both Bourdieu's most powerful analytical contribution and his most contested. Habitus denotes the durable, transposable dispositions that individuals acquire through socialisation in a particular position within the social field — dispositions that generate practices consistent with the objective conditions of their formation without requiring conscious calculation or strategic intention. The high-achieving working-class student who experiences the elite university as foreign and uncomfortable — who cannot decode the informal signals of legitimate academic behaviour, who is unsure whether to speak in seminars, who does not know how to socialise with the professors — is exhibiting habitus mismatch: a system of dispositions formed in one field encountering the demands of another. Critics of habitus argue that it is theoretically overdetermined: if dispositions are durable and self-reproducing, Bourdieu's framework cannot explain social mobility or resistance without supplementing habitus with ad hoc mechanisms that are not derived from the theory.

The field concept provides the structural context within which capital operates. Fields are relatively autonomous social arenas — the artistic field, the academic field, the political field — each governed by its own logic of practice and distributing its own form of capital. What counts as capital in the academic field (publications, citations, methodological sophistication) does not straightforwardly translate into the political field. The autonomy of fields is both analytically useful and empirically variable: some historical moments involve high field autonomy, others involve the colonisation of one field by another — the increasing subordination of the academic field to economic logic being a contemporary example that Bourdieu himself documented.

Questions · Passage 03
9
The passage states that, in capital conversion, "it is precisely its opacity that makes it effective." What does this claim imply about meritocratic ideology?
CORRECT: C The passage says converted capital is "indistinguishable from natural ability to those who share it." This implies that meritocratic ideology is not simply false — it is structurally produced by the conversion process itself. Those who evaluate candidates see what looks like natural ability because they share the habitus that makes converted capital appear as such. The ideology is self-confirming: inherited advantage is genuinely perceived as merit. C captures this structural production of ideology. A attributes deliberate deception, which is not what Bourdieu's account requires and which misses the point about opacity making the ideology effective without anyone intending it. B says meritocracy is simply empirically false, which is a weaker and less precise claim than C. D prescribes transparency as a solution, which is a policy inference not an implication about ideology.
10
The critique that habitus is "theoretically overdetermined" specifically targets which feature of Bourdieu's account?
CORRECT: B The passage explicitly identifies the overdetermination critique: "if dispositions are durable and self-reproducing, Bourdieu's framework cannot explain social mobility or resistance without supplementing habitus with ad hoc mechanisms that are not derived from the theory." The problem is that the very feature that makes habitus a powerful explanatory mechanism — its durability — makes it incapable of explaining deviation from that mechanism. B states this precisely. A challenges the implicit acquisition claim, which is a different empirical objection not the overdetermination critique. C raises unfalsifiability, which is related but a different methodological concern. D concerns cross-cultural application, which is a scope objection not the overdetermination problem.
11
The passage describes the "increasing subordination of the academic field to economic logic" as an example of field colonisation. What does this example illustrate about the analytical concept of field autonomy?
CORRECT: D The passage states that field autonomy "is both analytically useful and empirically variable: some historical moments involve high field autonomy, others involve the colonisation of one field by another." The academic-economic example then illustrates this variability — the academic field is losing autonomy it once had. D captures what the example illustrates: autonomy is variable and subject to historical change, not a fixed property. A says autonomy is a normative ideal, which the passage does not imply. B says economic capital universally colonises all fields, which overstates what the passage's single example demonstrates. C says the framework is historically limited to French academic life, which the passage does not suggest.
12
A student argues: "Bourdieu's theory is self-refuting because if habitus determines practice without conscious reflection, then Bourdieu himself was only doing sociology because his habitus disposed him to, and his critical insights are therefore not genuine knowledge but habitus effects." How should this argument be assessed?
CORRECT: B The student's argument commits the genetic fallacy: it infers from the causal origin of a claim to its epistemic status. Even if Bourdieu's sociological work was causally produced by his habitus, that does not determine whether his claims are true or well-supported. The origin of a belief is distinct from its justification. B identifies this fallacy precisely. A accepts the self-refutation claim, which mistakes causal origin for epistemic validity. C acknowledges a tension and describes partial reflexivity, which is a substantive response but not the most precise logical diagnosis of the argument's flaw. D identifies a reductio of social determinism generally, which is related but frames the issue at a higher level of generality than the specific logical error in the student's argument.
Passage 3 Score
/4

P 04
Moral Panic, Folk Devils & the Amplification of Deviance
Passage Timer
10:00
Read the Passage

Stanley Cohen's concept of moral panic, developed in his 1972 study of the Mods and Rockers disturbances in 1960s Britain, identified a characteristic social process through which a particular group or behaviour comes to be defined as a threat to social values, subjected to intense media and public attention, and responded to with intensified social control, typically disproportionate to the actual threat. The process involves five elements: a condition, episode, or group defined as a threat; stylised and stereotypical representation by mass media; moral barricades manned by editors, bishops, politicians, and other right-thinking people; expert diagnoses and solutions; and ways of coping that are evolved or resorted to, after which the panic subsides or proves negligible, leaving a legacy of moral residue in law, public attitudes, or social control apparatus.

Jock Young's concept of deviancy amplification, developed independently but complementarily, described a feedback mechanism through which initial labelling of a behaviour as deviant triggers exactly the behaviour it condemns. When youth subcultures are identified as threatening, police attention intensifies; intensified policing increases arrests; increased arrests generate statistical evidence of a crime wave; the apparent crime wave justifies further intensification. The deviancy amplification spiral shows that social reactions to deviance are not merely responses to behaviour but partly constitute the behaviour they respond to. This insight aligned with labelling theory more broadly: deviance is not a property of acts but a product of social processes of definition and response.

The moral panic concept has been extensively applied and critiqued in the decades since Cohen. Applied to drug scares, video game violence debates, immigrant crime coverage, and online radicalisation fears, the framework has shown considerable analytic flexibility. Critics have raised three objections. First, the concept relies on a distinction between real and exaggerated threats that is itself politically contested: who determines the appropriate level of concern, and by what standard? Second, the framework is better suited to explaining the amplification of reactions to relatively powerless groups than to explaining responses to threats from powerful actors, producing an analytic asymmetry. Third, the original model assumed relatively centralised media capable of coordinating panics; the fragmented and algorithmically curated media environment of contemporary societies may produce panic-like dynamics through different and more complex mechanisms.

Questions · Passage 04
13
Deviancy amplification shows that "social reactions to deviance are not merely responses to behaviour but partly constitute the behaviour they respond to." Which broader theoretical tradition does this claim align with, as the passage identifies?
CORRECT: C The passage explicitly states: "This insight aligned with labelling theory more broadly: deviance is not a property of acts but a product of social processes of definition and response." C is stated directly in the passage. A concerns functional theory, which is a different tradition. B concerns conflict theory, which is related but the passage specifically names labelling theory. D concerns symbolic interactionism, which overlaps with labelling theory but is a broader and differently framed tradition.
14
The first critique of the moral panic concept — that it relies on a "distinction between real and exaggerated threats that is itself politically contested" — raises which specific methodological problem?
CORRECT: B The critique is that diagnosing a moral panic requires determining what the appropriate level of concern would be, and that determination is politically contested rather than a neutral empirical finding. The framework treats "disproportionate" as if it were a descriptive category when it actually involves normative judgments about threat levels. B captures this methodological problem. A concerns cross-cultural application, which is a scope issue rather than the specific methodological problem. C raises unfalsifiability, which is related but goes further than what the first critique specifies. D concerns cases where panics accurately identify threats, which is the empirical consequence of the critique rather than the methodological problem it identifies.
15
The second critique notes that moral panic is "better suited to explaining the amplification of reactions to relatively powerless groups than to explaining responses to threats from powerful actors." What does this asymmetry reveal about an implicit assumption in the original framework?
CORRECT: C The asymmetry — the framework works for powerless targets but not for powerful ones — reveals that the model was built around a specific structural dynamic: dominant groups defining subordinate groups as threatening and mobilising social control against them. This is a downward-directed control model. It has no equivalent analytical traction when the threat comes from powerful actors because the mechanism it describes — elite and media amplification of threats from below — does not operate in the same way when the threat comes from above. C identifies this implicit structural assumption. A concerns visibility of threats, which is an empirical observation but not the structural assumption embedded in the framework. B concerns group size, which is not the relevant structural dimension. D concerns rationality assessments, which is a different issue.
16
The third critique notes that the original model assumed "relatively centralised media capable of coordinating panics." What implication does the shift to algorithmically curated media have for the moral panic framework's applicability?
CORRECT: C The passage says the contemporary media environment "may produce panic-like dynamics through different and more complex mechanisms." This implies the dynamics still occur but through different channels, suggesting the framework needs extension or revision to capture the new mechanisms rather than being simply applied or declared obsolete. C captures this need for adaptation. A says panics are now impossible, which the passage explicitly rejects — it says panic-like dynamics may still emerge. B speculates about intensity and duration, which goes beyond what the passage claims. D says the framework is obsolete, which is stronger than what the passage implies; the passage says the mechanisms are different and more complex, not that they are categorically unrelated to Cohen's model.
Passage 4 Score
/4

P 05
Intersectionality, Matrix of Domination & the Critique of Additive Models
Passage Timer
10:00
Read the Passage

Kimberlé Crenshaw's articulation of intersectionality in 1989 addressed a specific legal and analytical problem: the failure of single-axis frameworks to capture the situation of Black women in discrimination cases. When employment discrimination claims required plaintiffs to demonstrate discrimination either as women or as Black people, Black women whose specific situation was constituted by the interaction of race and gender fell through the categorical gaps. The company could point to Black men hired and to white women hired to demonstrate that it did not discriminate by race or sex — without the single-axis framework having any mechanism to capture discrimination that specifically targeted the intersection. Intersectionality was thus not initially a general theory of oppression but a specific analytical tool for capturing structural blindspots in legal and political analysis.

The concept was subsequently extended by Patricia Hill Collins into the "matrix of domination" framework, which proposed that multiple systems of oppression — race, class, gender, sexuality, nationality — do not operate as separate additive forces but as mutually constituting systems whose interaction produces qualitatively different experiences of oppression at each intersection. The additive model — Black + woman = doubly oppressed — is inadequate because it treats race and gender as independent variables whose effects simply sum. The interaction model holds instead that being a Black woman is not the experience of being Black plus the experience of being a woman; it is a qualitatively distinct social position constituted by the specific intersection that has its own particular vulnerabilities, resources, and forms of resistance.

The framework has attracted methodological critics who argue that while the interaction insight is correct, the matrix of domination raises the question of which intersections to study and why. If all axes of social difference potentially intersect, the number of analytically relevant intersections is combinatorially vast. Critics contend that without principled criteria for selecting which intersections matter most in a given context, intersectional analysis risks dissolving into an unbounded cataloguing exercise that generates descriptive complexity without producing analytic tractability. Defenders respond that the relevant intersections are determined by the specific legal, political, or empirical problem being addressed rather than by a prior general theory, preserving intersectionality as a situated analytic tool rather than a grand unified theory of oppression.

Questions · Passage 05
17
The passage describes intersectionality as initially "a specific analytical tool for capturing structural blindspots" rather than a general theory. What does this origin story imply about how intersectionality should be evaluated?
CORRECT: C If intersectionality originated as a situated tool for identifying what single-axis frameworks miss, then its primary evaluative criterion should be diagnostic capacity in specific contexts — does it reveal what would otherwise be hidden? — rather than theoretical completeness. C captures this implication. A says it should be evaluated as a general theory, which the origin story specifically cautions against. B limits it to legal contexts, which is too narrow — the passage describes extension to sociological and political analysis as the development of the concept. D says the complexity criticisms are misplaced, but the passage presents them as genuine challenges to the matrix of domination framework, not misreadings of its scope.
18
The additive model holds that Black + woman = doubly oppressed. The interaction model holds this is inadequate. Which of the following most precisely states why the additive model fails on its own terms?
CORRECT: B The passage says the additive model "treats race and gender as independent variables whose effects simply sum." The interaction model holds that this is wrong because race and gender are not independent — they constitute each other at their intersection. The experience of being a Black woman is not derived by adding the experience of being Black to the experience of being a woman, because those experiences are themselves different at their intersection than in isolation. B captures the independence assumption as the specific flaw. A concerns incommensurability, which is a different objection about ranking rather than independence. C concerns mechanisms versus outcomes, which is related but not the specific additive model failure. D concerns policy implications of the additive model, which is a consequence rather than the internal flaw.
19
The defenders' response to the complexity critique — that "relevant intersections are determined by the specific problem being addressed" — preserves intersectionality as a situated tool. What does this response give up?
CORRECT: B If the relevant intersections are determined by the problem being addressed, then intersectionality does not independently identify which intersections matter — that judgment comes from outside the framework. This means intersectionality cannot stand as a general theory of oppression that tells us how social domination is structured; it must be preceded by a problem-framing decision that the framework itself does not provide. B captures this concession precisely. A concerns political orientation, which is a different consideration. C says the interaction insight is given up, which overstates — situating the tool does not require abandoning the interaction model when applied within a specific context. D concerns the connection to Crenshaw's legal origins, which is a different and less significant concession.
20
The passage traces intersectionality from a specific legal tool to a matrix of domination framework to a situated analytic tool in response to complexity critiques. What does this trajectory suggest about the development of sociological concepts more generally?
CORRECT: D The trajectory from specific legal tool to matrix of domination (comprehensiveness) to situated tool (tractability) illustrates a recurrent tension in social theory between the desire for comprehensive theoretical coverage and the requirement of analytical precision for specific problems. D captures this as a general pattern in social theory development. A says concepts lose precision when extended, recommending against extension — this is one reading but the passage does not endorse it; it presents the development as productive even if contested. B says empirical grounding enables generalisability, which is an optimistic reading that the complexity critique complicates. C describes a scope-precision trade-off as inevitable, which is close to D but frames it as inevitable rather than as a recurrent tension that requires active negotiation.
Passage 5 Score
/4
Sociology · Total Score
/20
Category 14
Geopolitics
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
The Liberal International Order: Hegemonic Legitimacy, Revisionism & the Thin/Thick Distinction
Read the Passage

The liberal international order (LIO) — the post-1945 architecture of multilateral institutions, free-trade regimes, sovereignty norms, and human rights standards — has been characterised simultaneously as an expression of American hegemonic power and as an autonomous normative achievement that transcends any particular hegemony. Ikenberry's "liberal Leviathan" thesis holds that the order's durability reflects the genuine legitimacy derived from its institutional embodiment of mutual rules that bind the hegemon as well as weaker states. By submitting to institutional constraints — on the use of force, on trade discrimination, on sovereignty violations — the United States converted raw power into legitimate authority and made the LIO attractive enough to generate widespread consent rather than resentment. Realist critics respond that the LIO's apparent normative autonomy is ideological veneer: the order's rules were designed to advantage the incumbent hegemon, and what appears as multilateralism is merely a more efficient form of American unilateralism, providing legitimising cover for interest-based domination.

China's rise has produced a revisionist challenge that neither pure liberal institutionalism nor pure realism fully anticipates. China is not a classic revisionist power seeking to overturn the order wholesale; it is a selective challenger. It has integrated deeply into the WTO and other economic institutions while contesting the human rights, democratic governance, and liberal interventionism norms that constitute the LIO's liberal rather than merely international dimension. This selective engagement strategy exploits a structural tension within the LIO between its "thin" international norms — sovereignty, non-interference, multilateral trade facilitation — and its "thick" liberal norms — democracy promotion, human rights conditionality, humanitarian intervention. China embraces the thin while contesting the thick, and in doing so, it places the LIO's defenders in an awkward position: accepting China's selective engagement means accepting the disaggregation of liberal from international, potentially hollowing out the LIO's normative core while preserving its procedural shell.

The deeper problem for the LIO is that the thin/thick distinction cannot be maintained as cleanly as the selective engagement strategy assumes. Sovereignty norms and human rights norms were never fully separable within the LIO's institutional architecture: the Responsibility to Protect (R2P) doctrine, investment tribunals that adjudicate expropriation claims, and WTO rulings that reach behind borders into domestic regulatory practice all demonstrate that the "thin" international dimension has always carried thick liberal content. China's selective embrace of thin norms while rejecting thick ones may therefore be attempting to inhabit a conceptual space — liberal-free international order — that does not actually exist within the LIO's institutional structure.

Questions · Passage 01
1
Ikenberry argues that the LIO's durability reflects genuine legitimacy derived from mutual rules that bind the hegemon as well as weaker states. The realist critique holds this is ideological cover for American interest-based domination. Which of the following, if true, most seriously weakens the realist critique?
CORRECT: B The realist critique claims the LIO's consent is generated by hegemonic pressure and material interest — weaker states endorse it because they benefit from American patronage or fear American power. Option B directly undermines this by showing that states with no material incentive to support the LIO nonetheless endorsed its framework. If endorsement tracks independent normative legitimacy rather than hegemonic pressure, the realist reduction fails. A provides evidence that the US violated its own rules — this actually supports the realist critique (the rules don't constrain the hegemon) rather than weakening it. C shows weaker states using the rules against the US — this is Ikenberry-compatible evidence but doesn't directly address the claim that consent is generated by hegemonic pressure vs. genuine normative endorsement. D provides archival evidence of deliberate self-serving design — this strongly supports the realist critique.
2
The passage argues that China's selective engagement strategy may be "attempting to inhabit a conceptual space — liberal-free international order — that does not actually exist within the LIO's institutional structure." Which of the following can be most reliably inferred from this argument?
CORRECT: B The argument is: thin/thick separation doesn't exist institutionally, so selective embrace of thin while rejecting thick is incoherent. The reliable inference is that LIO defenders face a dilemma — either the thin-thick distinction is real (in which case selective engagement is coherent but hollows out the normative core) or it isn't (in which case participating in thin institutions carries thick commitments). Either way, defenders must take a position they may find uncomfortable. A says the strategy will "inevitably fail" — too strong; the passage says it attempts to inhabit a non-existent conceptual space, not that it will necessarily fail strategically. The institutional architecture can evolve. C makes a normative judgment — illegitimate overreach — that the passage never makes. D says China is intellectually confused — the passage is more measured; China's strategy may be strategically astute even if its conceptual premise is questionable.
3
The passage describes a paradox for LIO defenders: accepting China's selective engagement preserves the procedural shell of the LIO while potentially hollowing out its normative core. What makes this specifically a paradox for defenders rather than simply a strategic trade-off?
CORRECT: B The paradox is self-undermining success: the LIO's procedural inclusivity — its capacity to incorporate diverse participants — is the mechanism by which China's selective engagement disaggregates the order's normative content from its institutional form. The more effectively the LIO includes China, the more effectively it separates its institutional shell from its liberal substance. The instrument of the LIO's success (procedural inclusivity) is simultaneously the instrument of its potential normative hollowing. A says it's just a trade-off — the strong distractor — but this misses the self-referential structure where success and erosion are driven by the same mechanism. C says China's dual role is conceptually incoherent — this is a categorisation problem, not the specific paradox about defenders. D identifies a tension between universalism and selective engagement — real, but less precise than B's identification of the inclusive mechanism as the erosion mechanism.
4
Ikenberry argues the LIO's durability reflects legitimacy derived from mutual rules that bind the hegemon. For this to constitute an answer to the realist critique — rather than just a description of the order's design — which of the following must be assumed?
CORRECT: C The realist critique says the LIO generates compliance through interest and power, and "legitimacy" is just ideological cover. For Ikenberry's response to be more than a redescription of the same phenomenon, he must assume that legitimacy is a genuinely distinct source of compliance — that institutions can generate normative consent that operates independently of interest. Without this assumption, everything Ikenberry calls "legitimate authority" the realist can redescribe as "efficient domination," and the dispute is merely terminological. A requires empirical evidence of US constraint — a good empirical test for Ikenberry's claim, but not the foundational assumption needed for his response to constitute an answer to the realist critique. B requires legitimacy to be the only explanation — too strong; a pluralist explanation that includes legitimacy as one factor can still constitute an answer to pure realism. D requires informed consent — the passage doesn't require this; Ikenberry can accept that the rules were designed advantageously while claiming the order generates genuine normative consent.
Passage 1 Score
/4

P 02
Nuclear Deterrence: MAD, the Stability-Instability Paradox & Escalation Ambiguity
Passage Timer
10:00
Read the Passage

Classical nuclear deterrence theory holds that mutual assured destruction (MAD) produces strategic stability: if both nuclear powers face annihilation from any nuclear exchange, neither has incentive to strike first, since a first strike cannot prevent guaranteed retaliation. The stability derives from the mutual vulnerability — each side's population and industrial base held hostage — and from the credibility of second-strike capability: even after absorbing a first strike, the target state retains sufficient surviving warheads to inflict unacceptable damage on the aggressor. Classical deterrence rests on the paradox that the accumulation of weapons whose use would be catastrophic for the user produces peace — through the certainty of mutual destruction rather than despite it.

The stability-instability paradox identifies a structural consequence of this strategic equilibrium that classical deterrence theorists underappreciated. Where nuclear deterrence is robust at the strategic level — both sides credibly deterred from nuclear first use — the nuclear umbrella paradoxically creates room for sub-strategic conflict. Conventional military operations, proxy wars, and limited territorial incursions can be prosecuted under the nuclear shadow without triggering nuclear escalation, because both sides recognise that nuclear use would be strategically irrational. The very robustness of deterrence at the top of the escalation ladder creates a permissive environment for conflict at lower rungs. Strategic nuclear stability does not produce overall peace; it displaces violence to the sub-strategic level.

Contemporary developments — tactical nuclear weapons, hypersonic delivery systems, cyber operations against nuclear command-and-control infrastructure — have introduced new complications that strain classical deterrence logic in specific ways. Tactical nuclear weapons blur the conventional/nuclear firebreak by creating battlefield-usable warheads whose deployment would not necessarily trigger strategic nuclear exchange, thereby lowering the threshold for nuclear use. Cyber operations introduce a distinct problem: attacks on early-warning systems or launch-on-warning infrastructure can generate strategic-level effects (triggering or preventing nuclear response) through means that are covert, ambiguous in origin, and not obviously kinetic. The ambiguity of attribution means that the certainty on which deterrence depends — "if you attack us, we will retaliate" — is undermined when the attacker can plausibly deny agency and the victim cannot determine with confidence who struck or whether a strike occurred.

Questions · Passage 02
5
The stability-instability paradox predicts that robust strategic nuclear deterrence creates permissive conditions for sub-strategic conflict. Which of the following, if true, most strengthens this prediction?
CORRECT: A The stability-instability paradox predicts that sub-strategic conflicts occur under conditions of robust strategic deterrence. Option A provides historical evidence for precisely this co-occurrence: known cases of conventional and proxy conflict between nuclear states happened during periods of robust deterrence, not during uncertainty or imbalance. This is the direct empirical pattern the paradox predicts. B shows that arsenal survivability increases second-strike credibility — this supports the strategic stability side of the paradox but says nothing about sub-strategic conflict, which is the specific prediction being tested. C shows both sides maintained conventional forces — consistent with the paradox (they needed conventional forces because nuclear deterrence didn't eliminate sub-strategic conflict) but doesn't directly test whether sub-strategic conflict occurred under robust deterrence conditions. D is a theoretical elaboration of conditions under which the paradox is most pronounced — interesting but this is a theoretical refinement, not empirical evidence strengthening the core prediction.
6
The passage argues that cyber operations against nuclear command-and-control infrastructure undermine deterrence because they introduce attribution ambiguity — the certainty on which deterrence depends is eroded when victims cannot determine with confidence who struck. Which of the following can be most reliably inferred from this argument?
CORRECT: B The passage's argument identifies attribution ambiguity as the mechanism by which cyber operations undermine deterrence — the victim cannot determine who struck or whether a strike occurred, so the "if you attack, we retaliate" commitment cannot be implemented. A reliable inference is that this mechanism is general: any attack vector that introduces systematic attribution ambiguity would undermine deterrence by the same logic. Covert conventional operations, proxy attacks, or any other ambiguity-generating method would have the same destabilising effect. The vulnerability lies in deterrence's dependence on certainty of attribution, not in anything specific to cyber operations. A makes a prescriptive policy recommendation — solving attribution through investment — but this goes beyond what the passage establishes; improved attribution is one possible response, but the passage makes an analytical claim, not a policy prescription. C makes a comparative claim about cyber vs. conventional — the passage doesn't make this comparison; it analyses how cyber undermines deterrence. D makes a normative claim about bad faith — entirely outside the passage's analytical scope.
7
Classical deterrence theory rests on what the passage calls a "paradox" — the accumulation of weapons whose use would be catastrophic produces peace through the certainty of mutual destruction. What makes this a genuine paradox rather than simply a counterintuitive strategy?
CORRECT: B The paradox identified in the passage is structural: the weapons' deterrent value depends on the certainty that they would be used in response to attack — but their actual use would be catastrophic for the user. The weapon is maintained for a purpose (deterring attacks by threatening retaliation) that its actual employment (retaliation) would defeat (since the user would also be destroyed). The deterrent function and the use function are in constitutive tension — the weapon's value as deterrent depends on a credible commitment to use it in circumstances where using it would be irrational. A says rational choice theory resolves the paradox — this is the most sophisticated distractor: once preferences are specified (mutual destruction > any gain), the logic is coherent. But this resolves the strategic logic, not the deeper constitutive paradox that the weapon's value requires commitment to an action that defeats its own purpose. C identifies a different paradox about weapons accumulation incentives — interesting but not what the passage identifies as the paradox. D identifies the permanent-threat structure of nuclear peace — valid but this is a feature of deterrence, not the specific constitutive paradox between deterrent value and use.
8
The passage argues that tactical nuclear weapons "blur the conventional/nuclear firebreak" by creating battlefield-usable warheads, lowering the threshold for nuclear use. For this argument to establish that tactical nuclear weapons are destabilising rather than merely a new category of weapon, which of the following must be assumed?
CORRECT: A The argument is: tactical weapons blur the firebreak → the threshold for nuclear use is lowered → destabilisation. For "blurring the firebreak" to be destabilising, the firebreak must be doing something stabilising in the first place. The assumption is that the categorical conventional/nuclear distinction is itself a stabilising mechanism — a recognised red line whose crossing signals a qualitative escalation that both sides want to avoid. If the firebreak is just a descriptive category with no functional role in deterrence, blurring it would be neutral rather than destabilising. B says adversaries will necessarily misinterpret tactical use as a prelude to strategic exchange — this is an empirical prediction about adversary perception, not the foundational assumption about why the firebreak matters. C claims inevitability of escalation — too strong; the argument requires only that blurring the firebreak creates escalation risk, not that escalation is inevitable. D appeals to yield and civilian casualties — a separate mechanism entirely, not about the firebreak's stabilising function.
Passage 2 Score
/4

P 03
The Indo-Pacific, Strategic Competition & the Economic-Security Disjunction
Passage Timer
10:00
Read the Passage

The concept of the Indo-Pacific as a strategic framework — linking the Indian Ocean and the Pacific into a single geopolitical theatre — has emerged as the primary lens through which the United States and its partners frame competition with China. The framework is simultaneously a geographic description, a diplomatic construction, and a strategic signal: by hyphenating two ocean regions, it incorporates India into a security architecture that China's Pacific-centric view of its own neighbourhood deliberately excludes. AUKUS, the Quad, and the series of bilateral security upgrades between the United States and regional partners represent the operational expression of this framing. The rhetoric of a "free and open Indo-Pacific" projects values onto a geographic category, signalling that the competition is about norms governing the region rather than merely about the balance of military forces.

The structural challenge to Indo-Pacific coalition-building is the economic-security disjunction. Most states that the United States seeks to recruit into its Indo-Pacific framework maintain deep economic interdependence with China. India is simultaneously a Quad member and among China's major trading partners. Southeast Asian states that participate in security dialogues with the United States are typically even more economically integrated with China. Asking states to choose between economic and security relationships produces the hedging behaviour that has been a consistent feature of Indo-Pacific diplomacy: states pursue security cooperation with the United States while maintaining economic ties with China, refusing the binary alignment that the competitive framing implies. This hedging is rational from the perspective of individual states but limits the coherence and credibility of the Indo-Pacific coalition as a balancing mechanism.

The Quad's evolution from a post-tsunami humanitarian coordination mechanism in 2004 to a strategic grouping illustrates both the potential and the limitations of the Indo-Pacific framework. As a military alliance, the Quad falls far short: it has no mutual defence obligation, no integrated command structure, and member states retain divergent threat perceptions. India's refusal to align with Western positions on Russia following the 2022 invasion of Ukraine demonstrated that Quad membership does not translate into strategic coordination on issues where members have conflicting interests. The framework's advocates argue that the Quad represents a new model of security cooperation suited to a world of overlapping and conditional partnerships rather than the Cold War's rigid bloc alignment. Its critics argue that without binding commitments, it lacks the credibility to deter a China that can calculate member states' individual incentives to defect from any coalition response.

Questions · Passage 03
9
The passage describes the Indo-Pacific framework as simultaneously a geographic description, a diplomatic construction, and a strategic signal. Which of these three characterisations is most analytically important for understanding China's objection to the framework?
CORRECT: B The passage says the framework "incorporates India into a security architecture that China's Pacific-centric view of its own neighbourhood deliberately excludes." This is the diplomatic construction dimension, and China's objection is most analytically grounded here: the framework is not describing geography neutrally but constructing a coalition that excludes China. A is about geographic contestation, but the passage does not attribute this objection to China. C concerns normative signalling, which is real but secondary to the inclusion-exclusion structure of the diplomatic construction. D says all three are equally important, but B is the most precise identification of the mechanism China would specifically object to.
10
The passage describes hedging as "rational from the perspective of individual states" but as limiting coalition "coherence and credibility." What collective action problem does this describe?
CORRECT: C The passage says hedging is "rational from the perspective of individual states" but limits collective coherence — individual rationality produces collective suboptimality. This is a prisoner's dilemma structure: hedging is the dominant strategy for each state regardless of what others do (it avoids economic cost from full alignment), but if all hedge the coalition is incoherent and unable to deter China, which is worse collectively than coordinated balancing. C captures this. A describes a coordination problem, which involves preference for the same outcome but difficulty coordinating — different from the prisoner's dilemma where the dominant strategy is individually rational regardless. B describes a free-rider problem, which is related but the passage describes hedging as a full economic-security trade-off, not just freeloading on security. D describes a stag hunt, which requires assurance of reciprocity — but the passage implies hedging is rational even if all others cooperate, which is the prisoner's dilemma not the stag hunt.
11
The Quad's advocates argue it represents a "new model of security cooperation suited to a world of overlapping and conditional partnerships." What assumption does this argument require about the nature of deterrence?
CORRECT: D The advocates argue the Quad is "suited" to a world of conditional partnerships — they are reframing the absence of binding commitments as a feature rather than a defect. The assumption required is that conditional, ambiguous partnerships can deter by creating uncertainty in an adversary's calculations rather than by certainty of response. D identifies this assumption: ambiguity can deter because China cannot predict which states will respond and how. B also captures a deterrence-through-pattern argument, but D is more precise about the mechanism (uncertainty rather than cumulative signal). A introduces intelligence limitations not in the passage. C introduces economic deterrence not in the passage.
12
India's refusal to align with Western positions on Russia in 2022 is cited to demonstrate a specific point. What exactly does it demonstrate, and what does it not demonstrate?
CORRECT: C The passage uses the India-Russia example to make a specific and limited claim: "Quad membership does not translate into strategic coordination on issues where members have conflicting interests." This is what it demonstrates. It does not demonstrate that the Quad cannot coordinate on issues where interests do align — which is the advocates' fallback position. C correctly identifies both what the example demonstrates and what it leaves open. A says India is unreliable in any context, which overstates. B says the Quad is ineffective in all contexts, which overstates and closes off the advocates' convergence argument. D introduces India's strategic rationale, which is not what the passage uses the example to demonstrate or not demonstrate.
Passage 3 Score
/4

P 04
Energy Geopolitics, the Petrodollar System & the Multipolar Transition
Passage Timer
10:00
Read the Passage

The dollar's status as the world's primary reserve and invoicing currency is structurally underpinned by the petrodollar system established in the early 1970s: following the collapse of the Bretton Woods fixed-exchange regime, the United States negotiated an arrangement with Saudi Arabia under which oil was priced and traded in dollars globally, sustaining demand for dollar-denominated assets and giving the United States extraordinary monetary privilege — the ability to run persistent current account deficits financed by the rest of the world's demand for dollar assets. This "exorbitant privilege," as Valéry Giscard d'Estaing labelled it, translates into lower borrowing costs, the ability to sanction adversaries by restricting dollar access, and insulation from the balance-of-payments discipline that constrains other countries.

The geopolitical instrumentalisation of dollar dominance — most starkly demonstrated by the freezing of Russian central bank reserves following the 2022 invasion of Ukraine — has accelerated efforts by Russia, China, and a broader group of Global South states to reduce dollar dependence. These efforts include bilateral trade settled in local currencies, the expansion of China's cross-border payment system CIPS as an alternative to SWIFT, and proposals to create a BRICS settlement currency. The structural barriers to dollar displacement are substantial: network effects favour established currency systems; no credible alternative offers the combination of liquidity, rule-of-law protection, and capital market depth that dollar assets provide; and China's own currency internationalisation is constrained by its unwillingness to liberalise capital controls, since a truly international currency requires convertibility that conflicts with domestic financial stability objectives.

The more likely near-term outcome is not dollar replacement but dollar fragmentation: the emergence of parallel currency blocs in which dollar-denominated and non-dollar-denominated trade and finance partially decouple. This fragmentation would reduce the global efficiency gains from a unified monetary system while limiting US sanction leverage — since the value of financial sanctions depends on the universality of dollar access. The geopolitical consequence would be an international monetary system that is more multipolar and less efficient, reflecting a broader trend in which geopolitical rivalry imposes economic costs that neither side finds optimal but that neither is willing to incur the political cost of avoiding.

Questions · Passage 04
13
The passage says the value of US financial sanctions "depends on the universality of dollar access." What does this imply about the relationship between dollar fragmentation and US geopolitical power?
CORRECT: C The passage says sanction value depends on universality of dollar access. Fragmentation — not replacement — means partial decoupling: some states move to non-dollar systems, others remain in the dollar system. US sanctions retain leverage over dollar-system states but lose leverage over states with alternative settlement options. C captures this partial reduction without the overstatement of total elimination (A) or the claim that leverage increases (B). D denies the dollar-sanction connection the passage explicitly makes.
14
The passage identifies China's capital controls as a constraint on yuan internationalisation. Why does this represent a genuine dilemma for China rather than simply a policy choice?
CORRECT: B The passage says yuan internationalisation "is constrained by its unwillingness to liberalise capital controls, since a truly international currency requires convertibility that conflicts with domestic financial stability objectives." The dilemma is structural: international currency status requires free convertibility; free convertibility threatens domestic financial stability that capital controls currently manage. B captures this structural tension between two genuine objectives. A concerns trading partner requirements, which is a commercial framing not in the passage. C concerns institutional rule-of-law requirements, which is a related but different constraint. D introduces US sanctions as a deterrent, which the passage does not mention.
15
The passage describes dollar fragmentation as producing "a more multipolar and less efficient" international monetary system. Why does multipolarity in this context imply reduced efficiency?
CORRECT: B The passage says fragmentation would "reduce the global efficiency gains from a unified monetary system." The efficiency gains from a unified system are those of network effects and liquidity: one deep, liquid currency market reduces transaction costs and facilitates price discovery. Fragmentation into parallel blocs sacrifices these network benefits — cross-bloc transactions require conversion, each market is shallower, costs rise. B captures these mechanisms. A concerns central bank reserve management costs, which is a narrow and partial efficiency cost. C concerns currency wars, which is a different macroeconomic instability concern not the network efficiency point the passage makes. D concerns US retaliation through market restrictions, which is not the passage's explanation for reduced efficiency.
16
The passage concludes that fragmentation reflects a trend where "geopolitical rivalry imposes economic costs that neither side finds optimal but that neither is willing to incur the political cost of avoiding." What type of collective action problem does this describe?
CORRECT: B The passage says neither side finds the fragmented outcome optimal, but neither will incur the political cost of avoiding it. This is a prisoner's dilemma structure: each side's dominant strategy is to pursue geopolitical advantage (fragment), even though both would be better off in a unified system, because the political cost of unilateral restraint exceeds the economic cost of the jointly suboptimal outcome. B captures this. A describes a coordination problem requiring only trust, but the passage implies each side actively pursues fragmentation rather than merely failing to coordinate. C describes tragedy of the commons, which involves a shared resource being overexploited — the monetary system is not being overexploited in that sense. D describes a chicken game involving bluffing and threats — the passage describes active pursuit of fragmentation, not a bluffing contest.
Passage 4 Score
/4

P 05
Climate Change as a Security Threat: Multiplier, Catalyst & the Attribution Problem
Passage Timer
10:00
Read the Passage

The framing of climate change as a security threat has migrated from environmental advocacy into mainstream strategic analysis, with the US Department of Defense, NATO, and intelligence agencies now producing regular assessments of climate-related security risks. The primary analytical framework is the "threat multiplier" concept: climate change does not directly cause conflict but amplifies existing drivers of instability — resource scarcity, state fragility, displacement, and grievance — in ways that increase conflict risk in already vulnerable regions. The framework is analytically cautious: it does not predict climate wars but maps the pathways through which climate stress can interact with political, economic, and social conditions to increase the probability of violent conflict.

The empirical relationship between climate variables and conflict outcomes has been extensively studied and remains contested. Meta-analyses of the quantitative literature find a statistically significant positive relationship between temperature increases and conflict incidence, but effect sizes are small relative to other conflict predictors including governance quality, economic development, and ethnic heterogeneity. The causal mechanisms proposed — resource competition, agricultural failure, displacement-driven destabilisation — are plausible but difficult to isolate empirically because they operate through multiple intervening variables. The Syrian conflict, frequently cited as a climate-conflict case, illustrates the attribution problem: a severe drought preceded the conflict, contributed to rural-to-urban migration, and may have exacerbated the grievances that the Assad government mismanaged catastrophically — but political repression, sectarian mobilisation, and geopolitical intervention were more proximate causes, and attributing the conflict significantly to climate requires accepting a long and contested causal chain.

The securitisation of climate change raises normative concerns beyond empirical accuracy. Framing climate as a security threat can mobilise resources and attention that purely environmental framing cannot, but it also risks militarising responses — prioritising border control, displacement management, and conflict prevention over mitigation and adaptation — and reinforces the framing of climate-vulnerable populations as threats rather than victims. Critics also note a structural irony: the military establishments that are developing climate security frameworks are among the largest institutional carbon emitters, generating a tension between the operational carbon footprint of security responses and the mitigation objectives they are supposedly serving.

Questions · Passage 05
17
The "threat multiplier" concept is described as "analytically cautious." What specific caution does this framing exercise relative to stronger claims about climate and conflict?
CORRECT: C The passage describes the threat multiplier as not predicting climate wars but mapping "pathways through which climate stress can interact with political, economic, and social conditions to increase the probability of violent conflict." The caution is specifically about direct causation versus probabilistic pathway analysis. C states this precisely. A concerns geographic specificity, which is a different kind of caution. B says the framework avoids retrospective claims, but the passage uses Syria as a climate-conflict case study, which is retrospective. D concerns policy prescription, which is not what "analytically cautious" refers to in the passage.
18
The Syrian conflict example is used to illustrate "the attribution problem." What precisely is the attribution problem in climate-conflict research?
CORRECT: B The Syria example shows climate contributed through a long chain: drought, migration, grievance exacerbation — but political repression, sectarian mobilisation, and geopolitical intervention were "more proximate causes." The attribution problem is: how much causal weight does climate deserve relative to these more proximate factors? B captures this causal-weight-among-multiple-causes problem. A concerns attribution of climate events to anthropogenic causes, which is a different attribution problem in climate science. C concerns legal liability attribution, which is not the analytical problem the passage identifies. D describes necessary-but-insufficient causation, which is a related but different framing than what the passage's Syria example demonstrates.
19
The passage describes the securitisation of climate change as capable of mobilising resources that environmental framing cannot, while risking the militarisation of responses. What type of argument is being made about the strategic use of threat framings?
CORRECT: D The passage presents both the benefit (resource mobilisation) and the costs (militarisation, framing of victims as threats) of securitisation without endorsing or rejecting it. This is a pragmatic trade-off analysis that leaves the net assessment context-dependent. D captures this balanced presentation. A claims empirical certainty about outcomes that the passage presents as risks rather than documented patterns. B says securitisation is always inappropriate, which is a stronger normative stance than the passage takes. C says securitisation is optimal, which endorses it — the passage does not.
20
The "structural irony" the passage identifies in climate security frameworks is that military establishments producing them are also major carbon emitters. What does calling this a "structural" irony rather than simply an irony imply?
CORRECT: C Calling it "structural" signals that the tension is not accidental or fixable by better individual choices but is embedded in the institutional logic: security operations require mobility and logistics that produce emissions, so the very activities that constitute security responses conflict with mitigation. C captures the structural-versus-contingent distinction. A attributes deliberate deception, which "structural" specifically excludes — structural ironies arise from institutional logic, not from individual intent. B limits the irony to military institutions, which misses what "structural" means. D concerns credibility undermining through vested interests, which introduces a motivation argument that the passage does not make.
Passage 5 Score
/4
Geopolitics · Total Score
/8
Category 15
Geology & Geography
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Plate Tectonics, the Wilson Cycle & the Problem of Subduction Initiation
Read the Passage

The Wilson Cycle — the sequential opening and closing of ocean basins through the divergence and convergence of tectonic plates — provides a unifying framework for understanding the episodic assembly and fragmentation of supercontinents across geological time. The cycle begins with continental rifting (as exemplified by the current East African Rift), progresses through ocean-basin formation and symmetric seafloor spreading, reaches maturity, and then reverses as subduction initiates at passive margins, consuming the oceanic crust and eventually driving continental collision. The Himalayan orogeny — the ongoing closure of the Tethys Sea following the northward drift of the Indian subcontinent — represents a late-stage Wilson Cycle event, with crustal thickening and isostatic uplift continuing to the present. The framework's conceptual significance lies in its integration of phenomena previously treated as separate — rifting, volcanism, sedimentation, metamorphism, and mountain-building — into a single geodynamic narrative governed by the underlying engine of mantle convection.

The Wilson Cycle framework has, however, a significant mechanistic gap at its hinge point. The transition from passive margin to subduction zone — the moment at which the oceanic crust that has been accumulating at a spreading ridge reverses direction and begins to descend into the mantle — is poorly understood despite being the pivot on which the entire cycle turns. Cold, dense, old oceanic lithosphere has a negative buoyancy that should in principle drive it downward, but overcoming the mechanical strength of that lithosphere to initiate a new subduction zone requires a force sufficient to bend rigid plate material — a process for which the energy source and triggering mechanism remain disputed. "Spontaneous nucleation" models propose that the lithosphere eventually becomes dense enough to founder under its own weight; "induced nucleation" models invoke far-field tectonic stresses transmitted from distant plate boundaries or from the collision of other plates as the triggering force. Neither model has achieved consensus, and the conditions under which spontaneous versus induced initiation occurs may themselves vary by tectonic setting.

The significance of this gap extends beyond the Wilson Cycle to the broader theory of plate tectonics. Earth is the only planet in the solar system with confirmed plate tectonics, and the onset of plate tectonics in Earth's early history remains poorly constrained. If subduction initiation is poorly understood for the modern Earth, inferring how it first began from the fragmentary metamorphic and geochemical record of the Archean eon is correspondingly uncertain. The mechanistic problem of subduction initiation thus connects a well-established descriptive framework — the Wilson Cycle — to open questions about Earth's tectonic uniqueness and the conditions under which plate tectonics can emerge.

Questions · Passage 01
1
The passage argues that the Wilson Cycle's "conceptual significance" lies in integrating previously separate phenomena into a single geodynamic narrative. Which of the following, if true, most seriously weakens the claim that this integration represents a genuine theoretical advance rather than a merely descriptive framework?
CORRECT: C The passage claims integration as the conceptual significance — the Wilson Cycle narrates rifting, volcanism, sedimentation, metamorphism, and mountain-building as phases of a unified process. For this to be a theoretical advance rather than merely descriptive, the framework must do more than describe the sequence; it must provide causal connections between phases. Option C directly challenges this: if the framework only arranges phenomena temporally without specifying the mechanisms connecting phases, it is an elaborate redescription, not an explanation. A shows the framework lacks predictive power — relevant but different from whether it is descriptive vs. explanatory; a framework can be genuinely integrative and explanatory while lacking forward-looking predictions. B shows empirical exceptions — these challenge the framework's universality but not whether the concept of integration is genuine where the cycle does apply. D says the underlying engine is poorly understood — but the Wilson Cycle's integration doesn't require a complete understanding of mantle convection to be genuinely explanatory at the scale of plate motion.
2
The passage says the mechanistic gap in subduction initiation "connects a well-established descriptive framework — the Wilson Cycle — to open questions about Earth's tectonic uniqueness." Which of the following can be most reliably inferred from this connection?
CORRECT: B The passage explicitly connects the subduction initiation gap to two open questions: Earth's tectonic uniqueness and the onset of plate tectonics in Earth's early history. A reliable inference is that resolving the initiation mechanism would have implications beyond the Wilson Cycle — specifically for these two connected questions. B captures this precisely without overstating. A says the Wilson Cycle is provisional until initiation is resolved — too strong; the framework can be descriptively and conceptually valid even with a mechanistic gap at its hinge. C says the initiation debate is "the key to" understanding planetary differentiation — an overstatement; it's a relevant question, not necessarily the key. D says the Wilson Cycle cannot apply to other planets — this goes beyond what the passage says; it says Earth is the only confirmed plate tectonics planet but doesn't say the cycle is conceptually inapplicable to hypothetical tectonically active planets.
3
The passage notes that the Wilson Cycle provides a "unifying framework" for plate tectonics while simultaneously having a "significant mechanistic gap at its hinge point." What is the precise nature of this tension within the framework?
CORRECT: A The tension is structural: the Wilson Cycle claims to be a unifying narrative connecting all major tectonic phenomena, but the mechanism connecting the two halves — what turns oceanic spreading into subduction — is precisely what is unknown. The unity the framework provides is thus a descriptive-narrative unity (events happen in this order) rather than a mechanistic unity (event A produces event B through known physical processes). This is the precise tension between claiming to unify and having a gap at the pivot. B identifies the seafloor spreading → negative buoyancy mechanism — a real and interesting point, but it describes what should drive subduction according to the spontaneous nucleation model, not the tension between unity and mechanistic gap. C raises Earth's tectonic uniqueness — real but this is about applicability to other planets, not the internal tension of the framework. D claims internal inconsistency about convection patterns — this misunderstands plate tectonics; convection drives divergence at ridges and convergence at subduction zones through the same overall convective cell, not the same boundary.
4
The passage states that "spontaneous nucleation models propose that the lithosphere eventually becomes dense enough to founder under its own weight." For this model to provide a complete explanation of subduction initiation, which of the following must be assumed?
CORRECT: A The spontaneous nucleation model's defining claim is that the lithosphere initiates subduction under its own weight — no external trigger needed. For this to be a complete explanation, the internal force (negative buoyancy) must be sufficient to overcome the resistance (mechanical strength of rigid lithosphere). Without this assumption, the model explains why the lithosphere has a tendency to sink without explaining why it actually does — negative buoyancy is a necessary but not sufficient condition unless strength can be overcome internally. B describes density increasing with age — consistent with but not the foundational assumption of the model; the model requires internal sufficiency, not just density accumulation. C claims universal applicability — the passage notes conditions may vary by tectonic setting; universality is not required for the model to be explanatory in cases where it applies. D specifies mantle viscosity conditions — relevant to implementation but the foundational assumption is about whether internal buoyancy can overcome lithospheric strength, not specifically about mantle rheology.
Passage 1 Score
/4

P 02
Urban Heat Islands: Surface Energy Balance, Vulnerability & the Governance Trilemma
Read the Passage

The urban heat island (UHI) effect — the systematic elevation of surface and air temperatures in urban areas relative to surrounding rural environments — results from a complex interaction of land-cover change, altered surface energy balance, anthropogenic heat emissions, and modified atmospheric dynamics. The replacement of vegetated surfaces with impervious materials (asphalt, concrete, roofing) shifts the energy balance by reducing evapotranspiration and increasing the storage and re-emission of solar radiation as sensible heat. Urban geometry — the urban canyon effect created by tall buildings — reduces the sky-view factor, trapping longwave radiation and impeding nocturnal cooling. Waste heat from transportation, HVAC systems, and industrial processes adds a direct anthropogenic thermal source concentrated in urban cores.

The UHI's health consequences are socially stratified in a way that connects atmospheric physics to political economy. Mortality from urban heat events — particularly during protracted heat waves — is disproportionately concentrated among the elderly, the poor, renters in poorly insulated housing, and residents of urban cores with limited green infrastructure. This geography of heat vulnerability maps closely onto pre-existing patterns of socioeconomic disadvantage: the populations least able to invest in thermal adaptation (air conditioning, reflective roofing, access to cooling centres) are precisely those most exposed to the UHI's peak intensities. The UHI is thus simultaneously a microclimatic phenomenon, a social equity issue, and an urban planning challenge — and policies that address it only as the first will systematically fail the second and third.

Green infrastructure interventions — urban forests, green roofs, cool pavements — can reduce UHI intensity, but their deployment faces a trilemma characteristic of distributed urban sustainability transitions. First, the costs and benefits of green infrastructure are spatially misaligned: the cooling benefits are diffuse and area-wide, but the costs of implementation fall on individual property owners or public authorities managing specific parcels. Second, free-rider dynamics in voluntary adoption mean that neighbourhoods where residents most need cooling relief are least likely to achieve the adoption density required for measurable temperature reduction. Third, the fragmentation of planning authority across metropolitan areas — with infrastructure decisions distributed across multiple municipal, county, and regional bodies — prevents the coordinated implementation at scale that would capture the full cooling benefit. Addressing the UHI therefore requires not just the right technical interventions but an institutional architecture capable of coordinating distributed action across the socioeconomic and jurisdictional fault lines that define the problem.

Questions · Passage 02
5
The passage argues that green infrastructure faces a free-rider problem: the cooling benefits are area-wide, but adoption costs fall on individual property owners, so voluntary adoption will be insufficient — especially in the neighbourhoods most needing relief. Which of the following, if true, most strengthens this argument?
CORRECT: B The free-rider argument has two components: (1) voluntary adoption rates will be too low, and (2) the gap between voluntary adoption and required adoption is large enough to prevent the temperature benefit. Option B provides direct empirical evidence for both: actual adoption rates below 5% and required rates above 40% — a vast gap that confirms the voluntary adoption failure specifically in the vulnerable neighbourhoods the passage identifies. A actually weakens the argument by showing benefits are localised (private), not diffuse — if individual owners capture the benefit, the free-rider problem is reduced. C shows that mandates work — this is consistent with the free-rider argument (mandates overcome the voluntary failure) but doesn't strengthen the claim that voluntary adoption fails; it sidesteps it. D shows private property value benefits — like A, this weakens by suggesting private incentives should drive voluntary adoption without public intervention.
6
The passage argues that "policies that address [the UHI] only as [a microclimatic phenomenon] will systematically fail the second and third" dimensions — social equity and urban planning. Which of the following can be most reliably inferred from this claim?
CORRECT: B The passage says addressing the UHI only as a microclimatic problem will fail on equity and governance dimensions. The reliable inference is that technical interventions are necessary (they do reduce temperature) but not sufficient (they don't address who bears costs, who gets benefits, or whether coordinated implementation is achieved). B captures this precisely: necessary but insufficient, dependent on conditions the technical interventions themselves can't provide. A makes a normative claim about atmospheric scientists' moral obligations — the passage identifies an analytical limitation, not a moral failure of disciplinary focus. C argues for focusing exclusively on inequality — the passage says all three dimensions must be addressed; it doesn't prioritise one as the "deeper cause." D says governance is primary and technical secondary — the passage says policies that address only the technical will fail; it doesn't say the reverse (that institutional reform alone suffices).
7
The passage devotes its third paragraph to a detailed analysis of the three-part governance trilemma (spatial misalignment, free-rider dynamics, jurisdictional fragmentation). What is the most plausible reason the author spends this much space on governance rather than on the physical or social dimensions of the UHI?
CORRECT: B The passage's structure is: paragraph 1 (what causes the UHI physically), paragraph 2 (why the UHI is a social equity issue), paragraph 3 (why effective solutions aren't being deployed). The third paragraph answers the implicit question created by paragraphs 1 and 2: we know what the problem is and who it harms, so why is it not solved? The governance trilemma explains the gap between known solutions and actual deployment — completing the analytical arc. A attributes the author's contribution to novelty — plausible but cannot be reliably inferred from the passage itself. C attributes an implied ranking — the passage says all three dimensions must be addressed, not that governance is most important. D says complexity drives space allocation — possible but can't be reliably inferred; and the physical dimension (energy balance, sky-view factor) is also technically complex.
8
A city planner might argue: "We should focus all our UHI mitigation resources on the wealthiest neighbourhoods first, because they have the most impervious surface, generate the most waste heat, and are therefore the primary physical contributors to the urban heat island. Reducing the UHI at its source will eventually benefit all neighbourhoods downstream." Which logical problem most seriously affects this argument?
CORRECT: B The passage establishes that the UHI is simultaneously a physical, equity, and governance problem — and that policies addressing only the physical dimension will systematically fail the equity one. The planner's argument is precisely this error: it focuses on physical causation (where heat is generated) while ignoring where the harm falls (low-income urban cores). Even if the physical logic were correct (reducing heat at the source benefits downstream), the argument inverts the equity priority by serving the wealthiest first. B identifies this confusion of physical causation with social priority. A identifies an ecological fallacy about individual inference — but the argument is about neighbourhoods, not individuals; the fallacy label doesn't apply here. C challenges the physical claim about heat propagation — a real empirical objection, but it addresses the "eventually benefit all" claim rather than the more fundamental logical problem of inverting the equity dimension. D challenges the empirical premise about wealthy neighbourhoods — also a legitimate empirical challenge, but B is more fundamental: even if the empirical premise were true, the priority inversion would remain a logical problem.
Passage 2 Score
/4

P 03
Sea Level Rise, Coastal Geomorphology & the Politics of Managed Retreat
Passage Timer
10:00
Read the Passage

Coastal systems are dynamic geomorphic environments whose evolution is governed by the interaction of wave energy, sediment supply, sea level, and human modification. The concept of coastal equilibrium — the tendency of beaches, barrier islands, and dune systems to maintain a characteristic profile adjusted to prevailing wave and sediment conditions — provides the baseline against which human modification and sea level change must be assessed. Barrier islands, for example, are naturally migratory: they respond to sea level rise not by drowning in place but by rolling landward through overwash processes that carry sediment from the seaward face to the backbarrier. This natural migration capacity is the barrier island's primary resilience mechanism, and it is systematically undermined by the stabilisation infrastructure that human settlement requires — seawalls, groins, beach nourishment — which prevent the landward migration that maintains island integrity.

Sea level rise projections complicate coastal management by combining slow chronic change with increased stochasticity of extreme events. The IPCC Sixth Assessment Report projects likely sea level rise of roughly 0.3 to 1.0 metres by 2100 across emissions scenarios, with low-probability high-end scenarios exceeding 2 metres possible if ice sheet instabilities materialise. The challenge for infrastructure planning is that the distribution has a fat tail: the probability-weighted expected value of sea level rise may be manageable, but the consequences of the tail scenarios are catastrophic and irreversible for coastal communities. Standard cost-benefit analysis applied to sea level rise therefore systematically underinvests in adaptation by discounting low-probability high-consequence scenarios in ways that may be technically defensible but are ethically problematic when the communities bearing the catastrophic risk are not those conducting the analysis.

Managed retreat — the planned relocation of coastal infrastructure and communities inland — is widely recognised by coastal geographers and engineers as the only sustainable long-run response to sea level rise in low-lying areas, yet it is politically extremely difficult to implement. Property rights, community identity, economic assets, and political representation are all concentrated in coastal zones, and the beneficiaries of protection investment are well organised relative to the diffuse beneficiaries of retreat. The few successful cases of managed retreat at scale — parts of Louisiana, New Zealand's Kaikoura coast — have typically occurred following catastrophic events that disrupted the political equilibrium preserving existing land use, suggesting that the normal governance mechanisms are insufficient to overcome the barriers to proactive retreat planning.
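The fat-tail point in the second paragraph lends itself to a quick numerical sketch. The scenario probabilities, rise values, and cubic damage curve below are invented for illustration (they are not taken from the IPCC report); the point is only that a convex damage function lets a small-probability tail dominate expected damages while barely moving the expected rise:

```python
# Illustrative only: probabilities, rise values, and the damage curve
# below are invented for this sketch, not taken from the IPCC report.
scenarios = [
    (0.60, 0.5),   # (probability, sea level rise in metres): central outcome
    (0.35, 1.0),   # high end of the likely range
    (0.05, 2.5),   # low-probability ice-sheet-instability tail
]

# Expected rise: the probability-weighted average of the rise values.
expected_rise = sum(p * slr for p, slr in scenarios)

def damage(slr):
    """Assume damages grow with the cube of rise (convex: tails hurt far more)."""
    return slr ** 3

expected_damage = sum(p * damage(slr) for p, slr in scenarios)
tail_share = 0.05 * damage(2.5) / expected_damage

print(f"expected rise:   {expected_rise:.2f} m")
print(f"tail share of expected damage: {tail_share:.0%}")
```

Under these made-up numbers, the 5%-probability tail contributes roughly two-thirds of expected damage while adding only about 0.13 m to the expected rise — which is the sense in which expected-value reasoning can look "manageable" while the tail remains catastrophic.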

Questions · Passage 03
9
The passage argues that stabilisation infrastructure "systematically undermines" the barrier island's primary resilience mechanism. What specific mechanism is being undermined?
CORRECT: C The passage explicitly says barrier islands respond to sea level rise by "rolling landward through overwash processes" and that "this natural migration capacity is the barrier island's primary resilience mechanism." Stabilisation infrastructure prevents precisely this landward migration. C states this correctly. A concerns wave energy dissipation, which is a related coastal process but not the specific resilience mechanism the passage identifies. B concerns sediment supply through longshore drift, which is also relevant to coastal geomorphology but not the primary resilience mechanism described. D concerns dune rebuilding through wind transport, which is real but again not what the passage identifies as the primary mechanism being undermined.
10
The passage argues that standard cost-benefit analysis "systematically underinvests" in adaptation to sea level rise. What specific feature of cost-benefit analysis produces this underinvestment?
CORRECT: B The passage specifically says CBA "underinvests by discounting low-probability high-consequence scenarios." The problem identified is tail-risk discounting: the fat tail of the sea level distribution contains catastrophic outcomes that expected-value calculations inadequately weight when multiplied by their small probability. B captures this precisely. A concerns temporal discounting of future costs, which is a related but separate problem, not what the passage specifically identifies. C concerns non-monetary values, which is also valid but not the passage's specific critique. D concerns intergenerational discounting, again related but not the specific mechanism the passage identifies as producing systematic underinvestment.
11
The passage says successful managed retreat has "typically occurred following catastrophic events that disrupted the political equilibrium preserving existing land use." What does this pattern imply about the normal political economy of coastal governance?
CORRECT: C The passage says "the beneficiaries of protection investment are well organised relative to the diffuse beneficiaries of retreat." The pattern of retreat happening only after catastrophe implies that under normal conditions the organised pro-protection interests reliably block retreat. C identifies this structural organisation asymmetry as what the normal political economy implies. A attributes democratic failure to capture by coastal owners, which is a stronger and more evaluative claim than the passage makes — the passage describes a collective action dynamic, not necessarily a capture problem. B attributes the problem to psychological distance, which is a different and individual-level explanation not what the passage argues. D is a description of the pattern rather than an implication about what it reveals about normal governance.
12
The passage describes coastal equilibrium as providing "the baseline against which human modification and sea level change must be assessed." What would undermine the usefulness of this baseline concept for coastal management?
CORRECT: B The baseline concept is useful for management if there is an identifiable natural state from which human modification departs and toward which restoration could aim. If most managed coastlines are so extensively modified that no pre-human equilibrium is recoverable or identifiable, the concept loses its practical management utility even if it remains theoretically valid. B identifies this gap between theoretical validity and practical applicability. A concerns geological variability, which makes the concept harder to apply universally but not inapplicable in principle. C concerns the timescale of equilibrium adjustment, which creates practical challenges but does not undermine the baseline concept itself. D concerns accretion conditions, which shows variability in vulnerability but does not undermine the baseline concept's usefulness for the regions where it does apply.
Passage 3 Score
/4

P 04
Glacial Cycles, Milankovitch Theory & the Problem of the 100,000-Year Cycle
Passage Timer
10:00
Read the Passage

Milankovitch theory proposes that the glacial-interglacial cycles of the Pleistocene are paced by periodic variations in Earth's orbital geometry: the eccentricity of its orbit around the Sun (cycle of approximately 100,000 years), the obliquity of its rotational axis relative to the orbital plane (cycle of approximately 41,000 years), and the precession of its rotational axis (cycle of approximately 23,000 years). These variations alter the seasonal and latitudinal distribution of incoming solar radiation without significantly changing the total amount received annually. The deep-sea sediment and ice-core records of the past 800,000 years show striking correspondence between the insolation forcing implied by Milankovitch cycles and the timing of glacial-interglacial transitions — a correspondence so strong that orbital pacing of glacial cycles is now treated as established science.

The dominant cycle in the late Pleistocene ice-volume record has a period of approximately 100,000 years. The puzzle — called the "100-kyr problem" — is that the 100,000-year eccentricity cycle produces the weakest insolation forcing of the three Milankovitch cycles, yet it dominates the glacial record for the past 800,000 years while the stronger 41,000-year obliquity forcing dominated the record in the earlier Pleistocene. Proposed resolutions involve ice-sheet nonlinearities — the glacial system resonating at the 100-kyr period through internal dynamics amplified by CO₂ feedbacks and ice-albedo interactions — and state-dependent sensitivity, in which the response of the climate system to orbital forcing depends on the background state of the climate, with the growth of large Northern Hemisphere ice sheets over the course of the Pleistocene changing the system's resonant frequency from 41,000 to 100,000 years.

The 100-kyr problem illustrates a broader methodological challenge in palaeoclimatology: the climate system is not a simple linear amplifier of external forcing but a nonlinear dynamical system capable of threshold behaviour, internal oscillations, and state-dependent responses. The orbital record provides a pacemaker, but the amplitude, timing, and character of glacial transitions reflect internal climate dynamics that are not directly derivable from the orbital forcing alone. This means that calibrating palaeoclimate models against the orbital record establishes that models can reproduce the pacing of transitions without establishing that the models correctly represent the mechanisms producing the amplitude and character of those transitions — a distinction with significant implications for the reliability of model projections of future climate change.
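The three orbital cycles can be sketched as a toy superposition. The relative amplitudes below are invented to mirror the passage's claim that the 100-kyr eccentricity term is the weakest forcing (they are not real insolation values):

```python
import math

# Toy superposition of the three orbital cycles (periods in kyr).
# Amplitudes are invented to reflect the passage's point that the
# 100-kyr eccentricity term is the WEAKEST insolation forcing.
cycles = {
    "eccentricity": (100.0, 0.2),   # (period in kyr, relative amplitude)
    "obliquity":    (41.0,  1.0),
    "precession":   (23.0,  0.7),
}

def forcing(t_kyr):
    """Summed insolation anomaly at time t (arbitrary units)."""
    return sum(amp * math.cos(2 * math.pi * t_kyr / period)
               for period, amp in cycles.values())

# In this linear sum the 41- and 23-kyr terms dominate the variance;
# the 100-kyr term contributes the least.
weakest = min(cycles, key=lambda name: cycles[name][1])
print(weakest)
```

A linear reading of such a forcing would predict 41/23-kyr dominance in the climate response; that the late-Pleistocene ice-volume record instead beats at 100 kyr is precisely what pushes explanations toward nonlinear ice-sheet dynamics and state-dependent sensitivity.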

Questions · Passage 04
13
The 100-kyr problem arises because the dominant cycle in the glacial record corresponds to the weakest Milankovitch forcing. What type of scientific problem does this represent?
CORRECT: C The passage treats the 100-kyr problem as a puzzle within an established framework: Milankovitch orbital pacing is "established science," but the specific dominance of the weakest forcing requires additional mechanistic explanation. C captures this: the framework is confirmed but incomplete, requiring supplementary mechanisms. A says it falsifies Milankovitch theory, but the passage explicitly says orbital pacing is established and the problem is specifically about the 100-kyr dominance requiring further explanation, not refutation. B attributes it to data quality problems, which the passage does not raise. D attributes it to model calibration artefacts, which is a concern in some palaeoclimate debates but not what the passage identifies as the problem.
14
The passage distinguishes between calibrating a model against the orbital record and establishing that the model correctly represents the mechanisms producing glacial transitions. Why does this distinction matter for climate projections?
CORRECT: B The passage says calibrating against the orbital record establishes that models reproduce the pacing of transitions "without establishing that the models correctly represent the mechanisms producing the amplitude and character." Future climate projections involve predicting amplitude and character of warming — which depend on internal dynamics — not orbital pacing. A model that gets the timing right by correctly representing the orbital pacemaker may get the amplitude wrong because its internal dynamics are incorrectly parameterised. B captures this precisely. A inverts the logic, claiming palaeoclimate calibration is definitive for future projections. C is an empirical claim about model differences not in the passage. D identifies timescale differences, which is related but frames the problem as timescale mismatch rather than the mechanism distinction the passage draws.
15
State-dependent sensitivity means the climate system's response to orbital forcing depends on the background state. How does this concept complicate the interpretation of the Pleistocene glacial record as a validation dataset for climate models?
CORRECT: D State-dependent sensitivity means the response characteristics depend on the background state. The late Pleistocene background state differs fundamentally from the future projected state — different ice sheet configurations, different CO₂ concentrations, potentially different ocean circulation. A model validated against the Pleistocene state may be reproducing the correct sensitivities for that state while having incorrect sensitivities for a state it has never been calibrated against. D captures this transfer problem. A says state-dependence makes calibration impossible, which overstates — it makes calibration state-specific, not impossible. B partially captures the argument but is less complete than D. C mischaracterises state-dependent sensitivity as being about frequency rather than background state.
16
The passage says orbital variations "alter the seasonal and latitudinal distribution of incoming solar radiation without significantly changing the total amount received annually." Why is this detail analytically important?
CORRECT: B If orbital changes only redistribute insolation without changing the total, the small redistribution cannot alone explain the large glacial-interglacial climate swings. The implication is that amplifying feedbacks — ice-albedo, CO₂, ocean circulation changes — must be invoked to explain the large climate response to relatively modest orbital forcing. This is why the passage later discusses "CO₂ feedbacks and ice-albedo interactions" as part of the resolution to the 100-kyr problem. B captures this analytical importance. A draws an implication about anthropogenic climate change relevance, which is a secondary inference not the direct analytical importance of the detail. C connects to the 100-kyr problem but is not the primary reason the redistribution-not-total detail is analytically important. D concerns distinguishing Milankovitch from solar variability, which is a methodological application but not the main analytical implication.
Passage 4 Score
/4

P 05
River Systems, Geomorphic Thresholds & the Human Modification of Fluvial Landscapes
Passage Timer
10:00
Read the Passage

Fluvial geomorphology studies the processes by which rivers shape and are shaped by their landscapes. The concept of stream power — the rate of energy expenditure per unit channel length, determined by discharge and channel gradient — provides a framework for understanding erosion, sediment transport, and deposition in river systems. Rivers tend toward a graded or dynamic equilibrium condition in which stream power is distributed along the channel profile in proportion to the available sediment load: the channel slope adjusts to transport the sediment supplied from the watershed at the prevailing discharge. Human modifications — dams, channelisation, land use change in the watershed — perturb this equilibrium by altering either the discharge regime or the sediment supply, triggering a cascade of geomorphic adjustments that can propagate both upstream and downstream for decades.

Dam construction illustrates how a single infrastructure intervention can trigger complex, spatially extensive, and temporally extended geomorphic responses. A dam traps the sediment that would normally continue downstream, creating what geomorphologists call a "hungry water" effect below the dam: the sediment-starved flow has excess transport capacity and scours the riverbed and banks, incising the channel and causing bank erosion and lateral instability. Upstream of the dam, sediment accumulation in the reservoir reduces storage capacity over time. River deltas — dependent on continuous sediment supply from their feeding rivers — begin to subside and erode when dams intercept that supply, threatening coastal communities that evolved under conditions of active delta growth. The global dam inventory has intercepted a significant fraction of the sediment that would otherwise reach the ocean, with consequences for delta stability that are only beginning to be systematically assessed.

Geomorphic threshold behaviour complicates the management of human-modified river systems. A channel may absorb a series of incremental modifications without visible response, storing the disequilibrium as potential energy in deformed bed material, altered bank geometry, or changed vegetation patterns, until a threshold is crossed and rapid adjustment occurs. This nonlinearity means that the relationship between cause and effect in fluvial systems is time-lagged and non-proportional: a small additional perturbation may trigger a large response if the system is poised near a threshold, while a larger perturbation applied to a system far from threshold may produce minimal immediate change. Managing these systems therefore requires understanding not just the current state of the channel but the accumulated history of past perturbations and the proximity of the system to critical thresholds.

Questions · Passage 05
17
The "hungry water" effect describes channel incision below dams caused by sediment-starved flow. Which concept from the first paragraph does this directly illustrate?
CORRECT: B The first paragraph describes dynamic equilibrium as the condition where stream power distributes in proportion to sediment load — the channel adjusts to transport the available sediment. The hungry water effect is what happens when a dam disrupts this equilibrium: sediment supply falls, the flow has excess transport capacity, and the channel adjusts by sourcing sediment through incision and bank erosion until a new equilibrium is approached. B correctly connects the hungry water effect to the dynamic equilibrium concept. A attributes the incision to increased stream power from flow concentration, which is not the passage's explanation — the dam does not increase stream power; it reduces sediment supply. C connects to threshold behaviour, which appears in the third paragraph, not the first. D concerns upstream-downstream propagation, which is mentioned in the first paragraph's last sentence but describes response patterns rather than the specific concept the hungry water effect illustrates.
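The stream-power balance this explanation leans on can be written as a one-line relation. This is the standard textbook definition of cross-sectional stream power, not a formula quoted in the passage:

```latex
% Stream power per unit channel length (standard definition):
%   \rho = water density, g = gravitational acceleration,
%   Q = discharge, S = channel slope
\Omega = \rho \, g \, Q \, S
```

A dam leaves Q and S essentially unchanged immediately downstream but removes the sediment load, so transport capacity exceeds supply and the deficit is sourced from the bed and banks: the hungry water effect.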
18
The passage says dam impacts on river deltas are "only beginning to be systematically assessed." What does this suggest about the relationship between dam construction and its full geomorphic consequences?
CORRECT: C The passage describes impacts that propagate through entire river systems and manifest at distant deltas over extended timescales. "Only beginning to be systematically assessed" suggests the full consequences span spatial and temporal scales beyond what standard dam impact assessment frameworks typically addressed. C captures this spatial-temporal extension as the reason for belated assessment. A attributes it to a scientific advisory failure, which is an institutional critique not what the passage implies. B says delta impacts were unanticipated due to knowledge gaps at construction time, which may be historically true but is not what the passage implies — it implies an ongoing assessment challenge, not a historical knowledge failure. D introduces threshold behaviour near deltas, which is speculative and not what the passage says.
19
The passage says geomorphic threshold behaviour means managing river systems requires understanding "the accumulated history of past perturbations." Why is historical knowledge specifically necessary rather than simply knowledge of the current channel state?
CORRECT: B The passage says systems near thresholds store disequilibrium in forms that are not visible as current instability — deformed bed material, altered bank geometry, vegetation changes. Current state alone cannot reveal whether the system is near or far from a threshold because the threshold proximity is encoded in the history of perturbations that have accumulated without triggering visible response. B captures why historical knowledge is necessary: current state is insufficient because it cannot reveal threshold proximity. A says current state provides no information about trajectory, which is partially true but misses the specific threshold proximity point. C says historical perturbations change response to future inputs, which is related but frames the need for history differently from the threshold proximity argument. D introduces legal defensibility, which is not the scientific reason the passage gives.
20
The passage presents river systems as characterised by nonlinear, time-lagged, and threshold-dependent behaviour. What does this imply about the applicability of linear equilibrium models to fluvial geomorphology?
CORRECT: C The passage uses the dynamic equilibrium concept (a linear framework) as the baseline while showing that actual channel response to perturbation involves nonlinear, threshold-dependent behaviour that equilibrium models cannot fully capture. This implies the two are complementary: equilibrium models identify the target state, nonlinear models describe the adjustment path. C captures this complementarity. A says linear models are entirely inapplicable, which overstates — the passage uses dynamic equilibrium throughout as a useful baseline concept. B says nonlinear features are rare exceptions, which understates — the passage treats threshold behaviour and time-lags as characteristic features of human-modified river systems, not rare exceptions. D restricts nonlinear models to human-modified systems, but the passage implies threshold behaviour is a feature of all fluvial systems, not only those modified by humans.
Passage 5 Score
/4
Geology & Geography · Total Score
/8
Category 16
Astronomy
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Dark Energy, the Cosmological Constant Problem & Anthropic Reasoning
Read the Passage

The discovery that the expansion of the universe is accelerating — inferred from the anomalous dimness of Type Ia supernovae at high redshift — required postulating an unknown energy component with negative pressure: dark energy. In the concordance ΛCDM model, dark energy is identified with the cosmological constant Λ, a term Einstein originally introduced and later retracted, representing a constant energy density of empty space — vacuum energy. The cosmological constant problem is among the most severe fine-tuning problems in theoretical physics: quantum field theory predicts a vacuum energy density approximately 120 orders of magnitude larger than the observed value of Λ. This is not a small discrepancy; it represents a cancellation of extraordinary precision between a large bare cosmological constant and quantum corrections — a cancellation for which no known physical mechanism exists and which conventional naturalness criteria treat as deeply suspect.

Quintessence models attempt to replace the cosmological constant with a dynamical scalar field whose energy density varies over time, introducing a time-varying equation of state that could in principle be observationally distinguished from Λ. They trade the fine-tuning problem for a coincidence problem: why does the dark energy density become comparable to the matter density precisely in the current cosmological epoch — the epoch in which observers happen to exist? The energy densities of matter and dark energy scale differently with cosmic expansion; that they should be of comparable magnitude now, rather than differing by many orders of magnitude as they do at other epochs, appears to require its own special explanation. Anthropic arguments resolve this by invoking an observer-selection effect: in a multiverse of universes with varying Λ, only those with values compatible with structure formation — and hence with the existence of observers — will be observed. The measured value is not fundamental but conditioned on our existence. The explanatory legitimacy of anthropic reasoning in physics remains deeply contested.

The debate over anthropic reasoning in cosmology reflects a broader tension between two conceptions of scientific explanation. The first holds that genuine explanation requires identifying a causal mechanism or physical law from which the observed value is derived — anthropic reasoning, on this view, is not an explanation but an abandonment of the explanatory project. The second holds that explanation is fundamentally probabilistic and observer-relative — we explain facts by showing they are likely or expected given a specified reference class of possible observations — and that anthropic arguments are legitimate applications of Bayesian reasoning. Neither position has achieved decisive argumentative superiority, in part because the debate turns on foundational questions in the philosophy of science about what explanation is that physics alone cannot resolve.

Questions · Passage 01
1
The passage presents the coincidence problem as a challenge for quintessence models: the dark energy density is comparable to matter density now, which seems to require special explanation. Anthropic arguments resolve this by invoking observer selection. Which of the following, if true, most seriously weakens the anthropic resolution of the coincidence problem?
CORRECT: A The anthropic resolution of the coincidence problem argues: observers can only exist during the epoch when dark energy and matter densities are comparable, so the coincidence is a selection effect. For this to work, there must be a strong observer-selection filter — observers can only exist during the coincidence epoch. Option A directly undermines this: if observers are compatible with a much wider range of dark energy/matter ratios, then the selection filter is weak, and the observed coincidence is not fully explained by observer selection — some residual special explanation is still needed. B attacks the testability of the multiverse — a real and important objection to anthropic reasoning generally, but it is a methodological critique, not a specific challenge to the coincidence resolution. C says the problem affects the cosmological constant too — relevant to the comparison between models, but it doesn't weaken the anthropic resolution specifically. D says string landscape models predict low Λ — this would provide a physical mechanism, but it weakens the need for anthropic reasoning generally rather than specifically targeting the coincidence argument.
2
The passage states that the debate over anthropic reasoning "turns on foundational questions in the philosophy of science about what explanation is that physics alone cannot resolve." Which of the following can be most reliably inferred from this claim?
CORRECT: B The passage says the debate cannot be resolved by physics alone because it turns on what explanation fundamentally is — a philosophical question. The inference is that more physical data cannot settle the debate; conceptual analysis is required. B captures this exactly: settling the legitimacy of anthropic reasoning requires answering what counts as scientific explanation, which is not an empirical question. A says physicists are straying into metaphysics — an uncharitable inference; engaging with foundational questions is part of physics' self-reflective practice. C prescribes provisional acceptance — the passage does not recommend any practical resolution; it diagnoses why the debate is unresolved. D says the two positions are incommensurable and the debate is irresolvable — too strong; "cannot be resolved by physics alone" does not mean "irresolvable"; philosophical analysis might make progress.
3
The cosmological constant problem involves a cancellation: a very large bare cosmological constant and equally large quantum corrections cancel to produce the observed near-zero value. The passage calls this "deeply suspect" by naturalness criteria. What makes this cancellation paradoxical from a physics standpoint rather than simply improbable?
CORRECT: D The question asks what makes the cancellation paradoxical rather than simply improbable — and A is the distractor, because calling it merely improbable is precisely the reading that denies the paradox. D identifies the most precise paradox: QFT is among the most successful physical theories ever created, confirmed to extraordinary precision in every tested domain — yet in this single case it is wrong by 120 orders of magnitude. The same framework is simultaneously the greatest success and the greatest failure in the history of physics. That is a structural paradox: one cannot dismiss the prediction (the theory works everywhere else), yet one cannot accept it (it is catastrophically wrong). B identifies a cross-framework calibration issue — real and interesting, but the inconsistency between general relativity and QFT doesn't specifically explain why their respective contributions to Λ should cancel. C identifies the anthropic/observer connection — also interesting, but this concerns the significance of the value, not the paradoxical nature of the cancellation itself.
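The "120 orders of magnitude" figure comes from a rough dimensional estimate. With a Planck-scale cutoff on the vacuum-energy integral, the order-of-magnitude arithmetic (a sketch, convention-dependent, and not taken from the passage) runs:

```latex
\frac{\rho_{\text{vac}}^{\text{QFT}}}{\rho_{\Lambda}^{\text{obs}}}
  \sim \frac{M_{\text{Pl}}^{4}}{(10^{-3}\,\mathrm{eV})^{4}}
  \sim \left(\frac{10^{28}\,\mathrm{eV}}{10^{-3}\,\mathrm{eV}}\right)^{4}
  \sim 10^{124}
```

Different cutoff choices give estimates anywhere from roughly 10⁶⁰ to 10¹²⁴, which is why the mismatch is usually quoted loosely as "roughly 120 orders of magnitude".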
4
Critics who argue that anthropic reasoning "is not an explanation but an abandonment of the explanatory project" must assume which of the following for this critique to constitute a genuine scientific objection rather than a mere philosophical preference?
CORRECT: A The critics say anthropic reasoning abandons the explanatory project. For this to be a scientific objection (not just a philosophical taste), they must assume that causal-mechanical explanation is the standard — that science requires deriving observed values from laws, not conditioning them on observers. Without this assumption, the critics are merely expressing a methodological preference; the critique bites only if causal-mechanism is the criterion for scientific explanation. B requires the existence of a physical explanation in principle — relevant to whether the demand is reasonable, but not the foundational assumption that makes the critique a scientific rather than philosophical objection. C requires multiverse non-reality — if the multiverse is real, observer selection is a real physical process; this is important but shifts the debate to metaphysics rather than to what counts as explanation. D requires the calculation to be correct — relevant to whether the problem is genuine, but if the calculation were wrong, the problem would dissolve entirely rather than the critique of anthropic reasoning being vindicated.
Passage 1 Score
/4

P 02
The Fermi Paradox, the Great Filter & the Epistemology of Absence
Read the Passage

The Fermi paradox identifies an apparent contradiction between the high prior probability of extraterrestrial technological civilisations implied by the size and age of the universe and the total absence of any observed signals, artefacts, or evidence of their existence. If civilisations capable of interstellar communication or colonisation emerge at even a small fraction of stellar systems, the galaxy should by now be saturated with observable signatures of their activity — yet radio silence prevails. Proposed resolutions span a wide range: the "rare Earth" hypothesis argues that the emergence of complex life requires an improbably specific combination of stellar type, galactic location, planetary architecture, and geological history that is vanishingly rare; the "zoo hypothesis" suggests advanced civilisations deliberately avoid contact with us; the "Great Silence" may reflect the self-destruction of civilisations at a technological threshold we are approaching.

Robin Hanson's "Great Filter" framework reformulates the paradox as an inference about where the bottleneck lies in the developmental pathway from simple chemistry to space-colonising civilisation. If the filter is behind us — if the rare transition is one already passed (the origin of life, the emergence of eukaryotes, multicellularity, or complex nervous systems) — then the path ahead is relatively clear and humanity's long-term prospects are favourable. If the filter lies ahead — if the bottleneck is at the stage of advanced technological civilisation, through self-destruction, coordination failure, or resource exhaustion — then the Great Silence is a grim statistical signal: civilisations regularly reach our level and fail to advance. The discovery of simple microbial life on Mars or elsewhere in the solar system would, paradoxically, be bad news: it would push the filter forward in the developmental pathway, suggesting that the rare step is not the origin of life but something that happens after life arises — potentially at or beyond our current stage.

The deeper epistemological challenge is that absence of evidence is not evidence of absence in the context of the Fermi paradox, but the relationship between the two is more complex than this slogan suggests. The relevant question is: given the scope of what we have searched (which stars, at what signal strengths, for how long, and across what frequency ranges), what constraints can we place on the prevalence and detectability of extraterrestrial technological signals? Failure to detect in a search of limited scope is weak evidence of absence; failure to detect in a comprehensive search of the type and sensitivity that would confidently detect any civilisation producing signals comparable to ours is stronger evidence. Current SETI efforts have searched a tiny fraction of the parameter space — stars, frequencies, signal types — that a genuinely comprehensive search would cover, meaning the current Great Silence may be more epistemological than empirical.

Questions · Passage 02
5
The passage argues that the discovery of microbial life on Mars would be "bad news" because it would push the Great Filter forward — suggesting the rare step is not life's origin but something later. Which of the following, if true, most strengthens this argument?
CORRECT: D The "bad news" argument works as follows: finding life on Mars would imply that abiogenesis is not rare — if life arose independently on two nearby solar system bodies, it is probably common wherever conditions permit. If life's origin is common, the Great Filter is not at abiogenesis but at a later stage — possibly at or beyond our current level of development. Option D directly strengthens this by showing abiogenesis is chemically robust — it occurs readily under suitable conditions. If abiogenesis is easy, then finding Mars life would strongly imply it is common throughout the galaxy, pushing the filter forward. A shows Mars had suitable conditions for a long time — this supports the claim that Mars life could have arisen independently, but it concerns opportunity, not whether abiogenesis is itself easy. B would potentially undermine the argument: if Martian and Earth life share ancestry, there was only one abiogenesis event, and the discovery would not push the filter forward. C shows incomplete astrobiological surveys — this concerns our ignorance, not the strength of the bad news argument. D is most direct because it addresses the mechanism (abiogenesis is easy) rather than the opportunity.
6
The passage concludes that the current Great Silence "may be more epistemological than empirical." Which of the following can be most reliably inferred from this distinction?
CORRECT: B An "epistemological" rather than "empirical" silence means the silence reflects a gap in our knowledge (we haven't searched enough) rather than genuine absence (there is nothing to find). The reliable inference is that current non-detection is weak evidence for absence — we cannot distinguish "no signals exist" from "signals exist but we haven't looked in the right places, at the right sensitivity, or in the right frequency ranges." B captures this precisely. A says the paradox is not genuine — too strong; even with limited search coverage, the prior probability argument creates a genuine puzzle. The paradox may be less acute given limited searches, but it doesn't dissolve. C predicts future detection — this does not follow; the epistemological gap means absence is underdetermined, not that presence is confirmed. D equates all hypotheses as underdetermined — interesting but goes beyond what "epistemological silence" directly implies.
7
The Great Filter framework converts the Fermi paradox from a puzzle about observation into an inference about the location of a developmental bottleneck. Which of the following best identifies the logical structure of this conversion?
CORRECT: D The Great Filter framework's logical structure is: (1) assume the principle of mediocrity (Earth is not special); (2) given the prior probability of civilisations, the observed silence implies a filter somewhere; (3) the question is where — behind or ahead of us; (4) evidence about whether life is common or rare shifts the probability mass between these options. This is the logical inference structure the framework performs. B identifies a circularity — using absence as both problem and evidence — which is a real concern about the Fermi paradox generally, but it misdescribes the Great Filter conversion specifically; the framework doesn't treat the same fact as both problem and evidence in the way B suggests. A says it converts descriptive to normative — the framework doesn't make a normative claim; it makes a probabilistic inference. C says it shifts to unobservable domains — the Great Filter's inference is about the distribution of developmental transitions in the observable universe, not about other universes.
8
The passage ends by distinguishing "epistemological" from "empirical" silence and analysing the limits of current SETI coverage. What is the most plausible reason the author adds this third paragraph rather than ending with the Great Filter framework?
CORRECT: B The Great Filter framework draws inferences from the absence of civilisations. The third paragraph shows that the observed silence doesn't yet establish genuine empirical absence — it may be an epistemological gap. The author's purpose is to add a meta-level qualification: the framework's inferences are valid conditionally on genuine absence, but that antecedent is not yet established. This means the sobering "filter ahead" implications — including how they should affect our assessment of civilisational risk — are premature. A says the author undermines the Great Filter — too strong; the author qualifies the framework's applicability rather than attacking its logic. C predicts a policy prescription about SETI funding — the passage makes no such recommendation; it is analytical, not prescriptive. D says the paradox dissolves — the author does not resolve the paradox; the epistemological distinction shows the evidence is weaker than assumed, but the prior probability puzzle remains.
Passage 2 Score
/4

P 03
Gravitational Waves, Multi-Messenger Astronomy & the New Observational Window
Read the Passage

The detection of gravitational waves by the LIGO and Virgo interferometers, beginning with GW150914 in September 2015, opened a qualitatively new observational window on the universe. Gravitational waves are ripples in spacetime generated by the acceleration of massive objects — most detectably, the inspiral and merger of compact binaries such as black hole pairs or neutron star pairs. Unlike electromagnetic radiation, gravitational waves pass through matter essentially unimpeded, carrying information that light cannot convey: direct information about the geometry and dynamics of the spacetime they travel through, and about astrophysical processes — such as black hole mergers — that produce no light at all. The direct detection of gravitational waves confirmed a prediction of general relativity that had stood for a century without direct observational test, and provided the first direct evidence for stellar-mass black hole binaries.

The neutron star merger event GW170817, detected in August 2017, demonstrated the full potential of multi-messenger astronomy: the simultaneous observation of gravitational wave, gamma-ray, X-ray, optical, infrared, and radio signals from the same source. This event confirmed that neutron star mergers are sites of rapid neutron capture nucleosynthesis — the r-process — responsible for producing roughly half of the elements heavier than iron in the universe, resolving a decades-long debate about the astrophysical site of r-process nucleosynthesis. It also provided an independent measurement of the Hubble constant: the gravitational wave signal provides a standard siren measurement of the source's luminosity distance, while the optical counterpart's host galaxy redshift provides the recession velocity. The resulting Hubble constant estimate was consistent with both the CMB-derived and local distance ladder values but did not resolve the tension between them.

The Hubble tension — the discrepancy between the Hubble constant measured from the early universe via the CMB (approximately 67 km/s/Mpc) and from the local universe via the distance ladder (approximately 73 km/s/Mpc) — is one of contemporary cosmology's most significant puzzles. If systematic errors in either measurement do not explain the discrepancy, new physics beyond the standard ΛCDM cosmological model may be required: modifications to the expansion history of the early universe, additional light particles, or changes to the behaviour of dark energy. Standard sirens from gravitational waves offer a path to an independent measurement, but current precision is insufficient to distinguish between the competing values, and achieving the required precision will require many more multi-messenger events over the coming decades of gravitational wave astronomy.

Questions · Passage 03
9
The passage describes gravitational waves as a "qualitatively new" observational window. What specifically makes this characterisation justified rather than merely descriptive of a new instrument?
CORRECT: C A qualitatively new window means access to information that was previously unavailable in principle, not merely in practice. Gravitational waves reveal processes that produce no electromagnetic radiation — black hole mergers — and carry direct information about spacetime geometry that light cannot convey. This is not a matter of greater sensitivity but of accessing phenomena entirely invisible to electromagnetic astronomy. C captures this. A concerns sensitivity improvements, which would be quantitative, not qualitative. B concerns theoretical confirmation, which is significant but is about what the detection confirms, not what makes the observational window qualitatively new. D concerns engineering achievement, which is about the instrument, not about what it reveals.
10
GW170817 resolved "a decades-long debate about the astrophysical site of r-process nucleosynthesis." What does the word "site" refer to in this context, and why was it debated?
CORRECT: B In nucleosynthesis, "site" means the astrophysical environment or event type — what kind of object or event produces the required physical conditions. The r-process requires extremely high neutron flux, present in both neutron star mergers and core-collapse supernovae. Both were candidate sites, and discriminating between them required either identifying the optical signature of the r-process in a direct observation or modelling galactic chemical evolution patterns. GW170817's optical counterpart showed the characteristic r-process signature, identifying neutron star mergers as a confirmed r-process site. B captures this. A concerns galaxy morphology, not event type. C concerns location within a neutron star, which is a microscopic question, not what "site" means in astrophysical nucleosynthesis. D concerns galactic distribution patterns, which provide an indirect constraint rather than the direct identification GW170817 supplied.
11
The standard siren method for measuring the Hubble constant is described as using the gravitational wave signal for one measurement and the optical counterpart for another. What makes this an "independent" measurement compared to existing methods?
CORRECT: D The standard siren method is "independent" of the distance ladder because it derives distance from first principles using general relativity: the gravitational wave signal's amplitude directly encodes the luminosity distance without needing calibration against Cepheids or Type Ia supernovae. The systematic errors that accumulate across distance ladder calibration steps are absent. D captures this. A says it uses different branches of theory, which is true but doesn't explain what makes it independent of existing Hubble constant measurements. B says it uses a single event for both measurements, which is true but doesn't capture why that constitutes independence from existing methods. C says gravitational waves avoid electromagnetic obscuration, which is a related advantage but not the specific independence claim about distance ladder calibration.
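The independence claim can be made concrete with the low-redshift form of the estimate (a sketch of the standard relation, not a formula quoted in the passage): the waveform amplitude yields the luminosity distance directly from general relativity, and the optical counterpart's host galaxy supplies the redshift:

```latex
% Low-redshift standard-siren estimate:
%   d_L inferred from the GW amplitude (no distance-ladder calibration),
%   z measured from the optical counterpart's host galaxy
H_0 \approx \frac{c \, z}{d_L}
```

No Cepheid or Type Ia supernova calibration enters anywhere in this chain, which is the precise sense in which the measurement is independent of the distance ladder.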
12
The passage says that if systematic errors do not explain the Hubble tension, new physics beyond ΛCDM "may be required." What does "may be required" signal about the current state of the problem?
CORRECT: C The passage explicitly conditions the new physics requirement: "if systematic errors in either measurement do not explain the discrepancy, new physics beyond the standard ΛCDM cosmological model may be required." The "may be required" signals that systematic errors are still a live possibility — new physics is the conclusion only if systematics are ruled out. C captures this conditional structure. A says the community is confident new physics is required, which overstates — the conditional structure shows they are not yet confident. B says systematic errors have already been ruled out, which contradicts the passage's conditional framing. D says the tension is too small to be significant, which contradicts the passage's description of it as "one of contemporary cosmology's most significant puzzles."
Passage 3 Score
/4

P 04
Black Hole Thermodynamics, the Information Paradox & Holography
Read the Passage

Hawking's 1974 calculation that black holes emit thermal radiation — now called Hawking radiation — revealed a deep tension at the intersection of quantum mechanics and general relativity. The calculation showed that black holes are not perfectly black: quantum field theory in curved spacetime predicts the spontaneous creation of particle-antiparticle pairs near the event horizon, with one particle escaping to infinity while the other falls inward, resulting in the black hole losing mass and eventually evaporating. The radiation is thermal — it carries no information about the matter that formed the black hole — which creates the information paradox: quantum mechanics requires that information is never destroyed (unitarity), but if a black hole completely evaporates into thermal radiation, the information about everything that fell in appears to be lost.

The information paradox forced theorists to confront a fundamental incompatibility: either quantum mechanics must be modified to permit information loss, or general relativity's description of the black hole interior must be wrong, or some mechanism must exist by which information escapes in the Hawking radiation despite its thermal character. Hawking himself initially defended information loss; most quantum gravity theorists now believe unitarity must be preserved. The holographic principle — associated primarily with 't Hooft and Susskind, and realised concretely in the AdS/CFT correspondence discovered by Maldacena in 1997 — offers a possible resolution: the complete information content of a volume of spacetime is encoded on its boundary, like a hologram. If true, the information that falls into a black hole is not lost but encoded on the horizon and eventually released in subtle correlations in the Hawking radiation.

The firewall paradox, proposed by Almheiri, Marolf, Polchinski, and Sully in 2012, showed that preserving both unitarity and the equivalence principle — which requires that an infalling observer experiences nothing unusual at the horizon — leads to a contradiction. In standard quantum mechanics, if the Hawking radiation is unitary, then late-time Hawking photons must be entangled with the early-time radiation already emitted. But quantum field theory also requires the infalling observer's modes to be entangled with interior modes. A single quantum system cannot be maximally entangled with two different systems simultaneously — monogamy of entanglement — so one of these entanglement relationships must be sacrificed. The firewall proposal suggests that the entanglement with early radiation is maintained at the cost of the interior entanglement, resulting in a high-energy boundary — the firewall — that destroys infalling observers at the horizon rather than allowing them smooth passage.

Questions · Passage 04
13
The information paradox arises from a combination of Hawking radiation's thermal character and quantum mechanics' unitarity requirement. What precisely is the conflict between these two features?
CORRECT: B The conflict is precisely stated: thermal radiation carries no information about the initial state (it is characterised only by temperature); unitarity requires that quantum evolution maps initial states to final states without information loss. If the black hole evaporates completely into thermal radiation, the information in the initial state has been destroyed, violating unitarity. B captures this. A confuses temperature with an information capacity — there is no such quantum mechanical limit on temperature. C mislocates the conflict as being about inside versus outside the horizon. D conflates randomness with information loss — thermal radiation is probabilistic but the information paradox is specifically about whether the full quantum state is recoverable from the radiation.
14
The holographic principle is described as a "possible resolution" to the information paradox. What specific mechanism does holography propose for preserving information?
CORRECT: C The passage explicitly says holography implies information "is not lost but encoded on the horizon and eventually released in subtle correlations in the Hawking radiation." C states this mechanism directly. A describes the AdS/CFT mathematical equivalence but frames it as the mechanism (storing information in boundary theory) when the passage identifies the mechanism as encoding on the horizon. B describes a reflective mirror mechanism not in the passage. D describes remnants as the resolution, which is a different and competing proposal, not what holography proposes.
15
The firewall paradox arises from the monogamy of entanglement combined with two entanglement requirements. What makes this a genuine paradox rather than simply a theoretical inconsistency to be resolved by choosing one requirement over the other?
CORRECT: D What makes the firewall a genuine paradox rather than a resolvable inconsistency is that each of the conflicting requirements — unitarity from quantum mechanics, smooth horizon passage from the equivalence principle, entanglement structure from quantum field theory — is well-confirmed in its domain, and no known theory reconciles all three simultaneously. Simply choosing one is not a resolution but an abandonment of a confirmed foundation. D captures this. A says it is genuine because empirical evidence doesn't favour abandoning any of the requirements, which is true but frames it as a practical choice problem rather than a deep theoretical incompatibility. B says monogamy is violated, which misreads the paradox — monogamy is a constraint that creates the conflict, not something that is violated. C captures part of the structure but frames it as "any resolution destroys another principle," which is the consequence rather than why it is a genuine paradox.
16
The passage notes that Hawking "initially defended information loss" but that most quantum gravity theorists now believe unitarity must be preserved. What does this shift suggest about how consensus develops in theoretical physics when direct empirical testing is unavailable?
CORRECT: C The shift away from information loss was driven by theoretical developments: AdS/CFT provided mathematical machinery supporting unitarity, and theorists realised that accepting information loss would undermine quantum mechanics more broadly than was acceptable. This shows that theoretical consensus in empirically inaccessible domains can be shaped by the development of theoretical frameworks that provide indirect evidence and by assessing the systemic consequences of abandoning confirmed principles. C captures this. A attributes the shift to authority rather than theoretical developments, which is too cynical and misses the role of AdS/CFT. B says consensus is socially constructed without epistemic basis, which overstates — theoretical coherence and mathematical frameworks constitute genuine (if not empirical) evidence. D says Hawking was simply wrong and the process was self-correction, which is too neat — the firewall problem remains unresolved and the information paradox is still an open question.
Passage 4 Score
/4

P 05
Stellar Evolution, Nucleosynthesis & the Cosmic Origin of the Elements
Passage Timer
10:00
Read the Passage

The atoms that compose living organisms, rocky planets, and the interstellar medium were not created in the Big Bang. Big Bang nucleosynthesis produced hydrogen, helium, and trace amounts of lithium; everything heavier is the product of stellar nucleosynthesis — the fusion reactions that power stars and, at the ends of stellar lives, seed the interstellar medium with their products. The mechanism by which this occurs was largely worked out in a 1957 paper by Burbidge, Burbidge, Fowler, and Hoyle — B2FH — which identified the main nuclear reaction pathways: hydrogen burning to helium, helium burning to carbon and oxygen, and progressive fusion up to iron, the most tightly bound nucleus. Iron represents the endpoint of energy-releasing fusion: synthesising elements heavier than iron requires energy input rather than producing it, which is why the cores of massive stars collapse rather than continue burning when iron accumulates.

The production of elements heavier than iron requires neutron capture rather than fusion. Two neutron capture processes are distinguished by the ratio of neutron capture rate to beta-decay rate. The slow process (s-process) occurs in asymptotic giant branch stars over timescales of thousands of years, building heavier elements step by step as nuclei capture one neutron at a time and beta-decay between captures. The rapid process (r-process) requires a much higher neutron flux and a much shorter timescale — seconds rather than millennia — and was for decades assigned to core-collapse supernovae as the most plausible site providing the required conditions. The direct observation of r-process nucleosynthesis in the GW170817 neutron star merger event confirmed that neutron star mergers are at least one important r-process site, though the relative contribution of mergers versus supernovae to the galactic r-process inventory remains actively studied.

The cosmic abundance pattern — the relative proportions of elements across the universe — encodes the history of stellar nucleosynthesis and can be used to probe conditions in early galaxies and in specific stellar populations. Iron-poor, old Population II stars formed before many generations of stellar enrichment and show abundance ratios different from the sun; the patterns of those ratios provide constraints on the relative contribution of different nucleosynthetic processes in the early universe. The field of galactic chemical evolution models how the interstellar medium's composition evolves as successive stellar generations enrich it, connecting the physics of individual stellar nucleosynthesis to the large-scale chemistry of galaxies across cosmic time.

Questions · Passage 05
17
The passage says iron "represents the endpoint of energy-releasing fusion" and that massive star cores collapse when iron accumulates. What is the connection between iron's nuclear binding energy and stellar collapse?
CORRECT: B The passage explicitly says synthesising heavier-than-iron elements "requires energy input rather than producing it." The connection is: stellar interiors are supported against gravitational collapse by the outward pressure generated by energy released in nuclear fusion. Once iron accumulates and further fusion becomes energy-consuming rather than energy-releasing, that support mechanism fails and gravity drives collapse. B states this connection. A attributes collapse to iron's density rather than its nuclear binding energy — this misidentifies the mechanism. C says iron is the highest-mass product in stellar interiors, which is false — stellar nucleosynthesis can produce elements up to iron but elements heavier than iron require the neutron capture processes described in the passage. D describes Chandrasekhar mass and electron degeneracy pressure, which is a real physical mechanism but not the one the passage identifies as the connection between iron's nuclear properties and collapse.
18
The passage distinguishes s-process and r-process by the ratio of neutron capture rate to beta-decay rate. What does this ratio determine about the elements each process produces?
CORRECT: B The key physical consequence of the capture/decay ratio is how far the nucleosynthesis pathway strays from stable nuclei. In the s-process, beta decay occurs between captures, keeping nuclei near the valley of nuclear stability. In the r-process, captures happen faster than beta decay, building up highly neutron-rich nuclei far from stability; these then decay back toward stability when the neutron flux ends, creating elements that the s-process pathway cannot reach. B captures this nuclear physics consequence. A focuses on temperature, which is a condition rather than what the ratio determines about the products. C describes photodisintegration limits, which is a related consideration but not what the capture/decay ratio specifically determines. D describes even/odd abundance peaks, which is a real feature of nucleosynthesis but not what the capture/decay ratio specifically determines about products.
19
The passage describes Population II stars as having "abundance ratios different from the sun." What does studying these ratios tell astronomers that stellar physics alone cannot?
CORRECT: C The passage says Population II abundance ratios "provide constraints on the relative contribution of different nucleosynthetic processes in the early universe." What stellar physics alone tells us is how stars produce elements; what Population II abundance ratios add is a record of which processes were dominant in the early galaxy before subsequent enrichment. C captures this. A concerns stellar interior conditions, but the passage says the ratios constrain early-universe processes, not old stellar interiors per se. B concerns age estimation from abundance ratios, which is a different application, not the one the passage identifies. D concerns the initial mass function, which may be constrained by abundance ratios but the passage focuses on the relative contribution of nucleosynthetic processes.
20
The passage traces elements from Big Bang nucleosynthesis through stellar nucleosynthesis to galactic chemical evolution. What does this account imply about the relationship between astrophysics and chemistry as disciplines?
CORRECT: C The account shows that the elemental composition of any region of the universe — the chemical raw material for everything from planets to life — is a product of cosmic and stellar history, not a fixed given. This means the chemical possibilities available in a given context depend on the astrophysical history of that region. C captures this implication about how cosmic history determines chemical possibility. A says chemistry is reducible to astrophysics, which overstates — chemistry has its own principles governing how atoms combine, independent of how atoms were made. B says astrophysics requires chemistry for nuclear rate measurements, which is true but not what the passage's narrative implies about the broader relationship. D makes a prediction about chemical diversity across galaxies, which goes beyond what the passage's account implies.
Passage 5 Score
/4
Astronomy · Total Score
/8
Category 17
Media
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Agenda-Setting, Framing Effects & the Conditions for Persuasion
Passage Timer
10:00
Read the Passage

McCombs and Shaw's agenda-setting hypothesis distinguishes the media's capacity to tell audiences what to think about from its capacity to influence how audiences think about it. The first-level claim — that the media's issue salience transfers to the public agenda — is the original and more robust finding: issues receiving prominent coverage are perceived as more important by audiences, regardless of the substantive position the coverage takes. Second-level agenda-setting, or attribute agenda-setting, extends this to the transfer of attribute salience: not only which issues matter but which attributes of those issues are made cognitively accessible. This converges analytically with framing research, where frames are defined as the selection and emphasis of certain aspects of a perceived reality that promote a particular problem definition, causal interpretation, moral evaluation, or treatment recommendation.

Equivalence framing research demonstrates that logically equivalent information presented in different frames produces systematically different judgments: a medical treatment described as having a 90% survival rate is preferred over one described as having a 10% mortality rate even though the descriptions are informationally identical. This finding appears to establish framing as a powerful and reliable effect. Critics argue, however, that the laboratory evidence may be methodologically inflated: experimental participants who encounter a single isolated frame without prior attitudes, competing information, or the motivation to process carefully are maximally susceptible to framing effects in ways that may not generalise to natural media consumption environments. In the field, prior attitudes, competing frames, and motivated reasoning significantly moderate framing effects — in many real-world studies, framing effects attenuate dramatically or disappear entirely among ideologically committed audiences.

The more defensible empirical synthesis holds that framing effects are real but conditional: strongest when prior attitudes are weak or absent, when the information environment is non-competitive (single-frame exposure), and when the topic is low in personal relevance. This conditionality has a structural implication that the framing literature has not sufficiently absorbed: if framing effects are primarily operative in conditions of weak priors and non-competitive information environments, then the aggregate political impact of elite media framing may be concentrated among the least politically engaged citizens — precisely those whose opinion change has the least stable, reliable connection to subsequent political behaviour.

Questions · Passage 01
1
The passage argues that framing effects are strongest among the least politically engaged — those with weak priors and low personal relevance — and are attenuated among ideologically committed audiences. Which of the following, if true, most seriously weakens the inference that framing therefore has limited aggregate political impact?
CORRECT: A The passage's inference is: framing works on the least engaged → limited aggregate political impact (because they vote erratically). Option A directly challenges this by showing that in specific electoral contexts — low-salience races — low-engagement voters are disproportionately decisive. If these voters are both most susceptible to framing and most electorally consequential in specific races, the "limited aggregate impact" inference is undermined for precisely those races. B is strong — early framing forms durable attitudes — but it operates over generational timescales and is about the future engaged population, not the current aggregate impact. C shows non-competitive environments are common — this would extend framing's reach but doesn't directly address who the effects fall on (the least engaged). D shows small effects shift close elections — relevant but the passage's critique is about the *quality* of opinion change (unstable, unreliable political behaviour), not just its magnitude; D doesn't address that concern.
2
The passage states that framing effects are "real but conditional" and identifies the conditions under which they operate. Which of the following can be most reliably inferred from the conditionality claim?
CORRECT: B If framing effects are conditional on specific moderating factors — prior attitude strength, information competition, personal relevance — then reporting a single aggregate effect size without specifying these conditions produces findings that cannot be correctly interpreted or applied. The same frame might have strong effects in one context and zero effects in another; decontextualised reporting obscures this. A says laboratory methods are invalid — too strong; the passage says they may be "inflated" for natural environments, not that they are invalid. C says reinforcement is the primary mechanism — the passage never makes this claim; it discusses the conditions on framing, not alternative mechanisms. D says effects are negligible in politics — the passage says the aggregate impact may be limited and concentrated in unstable opinion change, but it doesn't say effects are negligible; limited, conditional impact is not the same as no impact.
3
The passage identifies a structural implication of framing conditionality: framing effects are primarily operative among the least politically engaged, whose subsequent political behaviour is least stable and reliable. This creates a paradox for theories of media power. What is that paradox?
CORRECT: A The paradox is a gap between the level at which framing operates and the level at which political significance is assessed: framing demonstrably changes opinions (among the susceptible), but opinion change in this population doesn't reliably translate to political behaviour. So framing research documents a real psychological effect that may not constitute a real political effect — the mechanism of media influence documented in the research is not the mechanism through which media power actually manifests in political outcomes. B is very close and largely correct — it identifies the electoral irrelevance of the susceptible population. But A is more precise: it captures the gap between the mechanism framing research documents (opinion change) and the mechanism through which political power actually operates (reliable behaviour change), which is the deeper paradox. C says unpredictability undermines strategic intent — this is a different point about strategic control, not the structural paradox between framing's documented effects and its political significance. D says competing frames cancel out — possible but speculative and not what the passage argues.
4
The equivalence framing finding — that "90% survival rate" is preferred over "10% mortality rate" — is used in the passage to establish framing as a "powerful and reliable effect." For this to be a valid inference, which of the following must be assumed?
CORRECT: C The passage uses the equivalence framing finding as evidence that framing is "powerful and reliable." For this inference to hold, the equivalence framing case must be representative of framing's general power — not an outlier. Equivalence framing is a particularly clean and strong case (logically identical information, no prior attitudes on the specific framing). If this case is atypically strong, it cannot validly establish that framing in general is powerful and reliable. A is about attitude vs. verbal response — a genuine methodological concern, but the passage uses the finding to establish power in principle; whether attitudes vs. responses are measured is a separate question from whether the finding supports "powerful and reliable" as a general description. B requires mechanism generalisation — relevant but the passage's inference is about effect magnitude and reliability, not specifically about mechanism. D requires equivalence framing to be typical in political media — this would be relevant to applicability but the inference from the laboratory finding is about framing's power in general, not specifically its prevalence in politics.
Passage 1 Score
/4

P 02
Platformisation, the Public Sphere & the Architecture of Attention
Passage Timer
10:00
Read the Passage

Habermas's public sphere — the space of rational-critical discourse in which private citizens deliberate on matters of common concern, free from both state coercion and market distortion — has long served as the normative standard against which democratic communication practices are assessed. Contemporary critics have identified two structural exclusions that undermined the historical bourgeois public sphere before its institutionalisation was complete: its dependence on literacy and cultural capital that excluded most citizens, and its constitution through the very exclusion of women, the non-propertied, and racialised others. Fraser's concept of "subaltern counterpublics" — alternative discursive arenas in which subordinated groups elaborate counter-discourses and oppositional identities — complicates the Habermasian ideal by revealing the public sphere as always already plural and stratified rather than unified and egalitarian.

Social media platforms appeared initially to offer a democratic corrective to elite-dominated public spheres: universal access to publication, decentralisation of content production, and the breakdown of broadcast gatekeeping. The promised democratisation has not materialised in the form anticipated. Platform architectures optimise for engagement — measured by interaction, sharing, and time-on-platform — rather than for deliberative quality. Content that generates strong emotional responses (outrage, fear, tribalism, novelty) is systematically amplified relative to content that generates careful reasoning. The result is an information environment in which the most emotionally resonant and identity-confirming material circulates most widely, while the information most conducive to the formation of considered public opinion is structurally disadvantaged.

The structural diagnosis is not that platforms have introduced new pathologies into democratic discourse but that they have automated and accelerated dynamics already present in commercial media: the tension between attention economics and deliberative quality. Pre-platform commercial media already faced the incentive to prioritise engagement over information quality; platforms have operationalised this tension through algorithmic recommendation systems that optimise engagement at scale and in real time. The normative failure is not that platform owners are malicious but that the incentive structure of attention markets is orthogonal to — and in many cases actively contrary to — the informational requirements of deliberative democracy. Reforming democratic communication therefore requires not just platform regulation but a structural theory of how attention markets interact with democratic information needs.

Questions · Passage 02
5
The passage argues that platform algorithmic recommendation systems systematically amplify emotionally resonant content over deliberatively valuable content, creating a structural disadvantage for careful reasoning in the information environment. Which of the following, if true, most strengthens this argument?
CORRECT: A The passage claims algorithmic amplification systematically favours emotionally resonant over deliberatively valuable content. Option A provides internal platform data showing exactly this differential: outraged language gets more shares per view than factually accurate neutral language on the same topic. This directly evidences the amplification mechanism at the content level — the algorithm's engagement metric is being gamed by emotional framing, which is then rewarded with wider distribution. B shows correlational self-reported outcomes — but correlation doesn't establish the algorithmic mechanism; users who consume more social media may be self-selecting. C shows regulatory resistance — circumstantial; firms resist disclosure for many reasons. D shows academic content gets lower reach — consistent with the argument but confounds content type with content quality; entertainment accounts are simply more popular with general audiences regardless of algorithm.
6
The passage argues that the normative failure of platforms "is not that platform owners are malicious but that the incentive structure of attention markets is orthogonal to the informational requirements of deliberative democracy." Which of the following can be most reliably inferred from this claim?
CORRECT: A If the problem is structural — built into the incentive structure of attention markets — then personnel changes (replacing owners with well-meaning alternatives) cannot resolve it. The incentive structure would produce the same pressures on any operator within the attention market model. This is the direct and reliable inference from "the problem is in the incentive structure, not individual malice." B says regulation would be sufficient — the passage says reform requires "not just platform regulation but a structural theory of how attention markets interact with democratic information needs" — implying regulation alone is insufficient. C says voluntary adoption would follow from understanding — this assumes the problem is cognitive (owners don't understand) rather than structural (the market incentivises engagement regardless of normative understanding). D says the problem is unique to the digital era — the passage explicitly says platforms "automated and accelerated dynamics already present in commercial media," not that they introduced new pathologies.
7
Social media platforms were designed to democratise public discourse — universal publication access, decentralised production, breakdown of gatekeeping — yet the passage argues they structurally disadvantage the content most conducive to deliberative democracy. What makes this specifically a structural paradox rather than an unintended consequence?
CORRECT: B The paradox is structural because the democratic feature (universal access, scale) and the anti-deliberative feature (engagement optimisation at scale) are produced by the same mechanism. Universal publication means billions of posts competing for attention; that competition is resolved by engagement metrics; those metrics systematically reward emotional resonance over deliberative quality. The scale of democratisation is the scale of the distortion — they are two sides of the same architectural coin. A says it's unintended and correctable — this would make it a non-structural unintended consequence; but the passage's diagnosis is that the attention market incentive structure is the driver, not specific correctable design choices. C attributes intentionality to engineers — but the passage explicitly says "the normative failure is not that platform owners are malicious"; intentionality would make it a moral failure, not a structural paradox. D says it's general to all information markets — this would dissolve the specific platform paradox into a general market tendency rather than identifying the structural feature specific to platform architecture.
8
The passage concludes that "reforming democratic communication requires not just platform regulation but a structural theory of how attention markets interact with democratic information needs." For this prescription to be justified by the passage's preceding analysis, which of the following must be assumed?
CORRECT: B The prescription is: not just regulation but a structural theory. For this "not just" claim to be justified, existing regulatory instruments must be insufficient to address the structural conflict — otherwise "just regulation" would suffice. The passage's analysis shows the problem is structural (attention market incentive structure vs. deliberative needs), not a matter of individual bad actors or specific prohibited practices. The assumption is that existing regulatory categories (content rules, transparency, auditing) address the surface without the underlying structural conflict — making a theoretical framework that maps that conflict a prerequisite for knowing what structural reform would look like. A says regulation is feasible and would be effective but insufficient — the passage doesn't assert effectiveness of regulation, only its insufficiency; moreover, the assumption needed is about the regulatory gap, not the technical feasibility. C requires the Habermasian ideal to be the normative goal — the passage uses it as a standard but doesn't require it to be achievable; the prescription doesn't depend on Habermasian achievability. D requires categorical incompatibility — too strong; the passage says attention markets are "orthogonal to" and "often contrary to" deliberative requirements, not categorically incompatible; the solution space may include modified market structures.
Passage 2 Score
/4

P 03
Misinformation, Correction & the Limits of Fact-Checking
Passage Timer
10:00
Read the Passage

The misinformation research programme has accumulated evidence on two questions: how effectively corrections reduce false beliefs, and whether corrections can produce backlash effects — making believers cling more tightly to corrected claims. The early backlash literature, associated with Nyhan and Reifler's "backfire effect," suggested that corrections of politically congenial misinformation could paradoxically strengthen rather than weaken the false belief, particularly among motivated reasoners. This finding, if robust, would make fact-checking not merely ineffective but counterproductive. Subsequent attempts to replicate the backfire effect across diverse topics and populations have largely failed to find it, suggesting the original results were either sample-specific, topic-specific, or artefacts of the experimental design. The current consensus is that backfire effects, if they exist at all, are narrow exceptions rather than a general phenomenon.

The more reliable finding is that corrections do reduce false beliefs on average, but modestly, and with substantial heterogeneity across individuals and topics. The correction effect is weakest precisely where it matters most: for claims that are politically congenial, emotionally resonant, or consistent with well-established prior beliefs. This creates what researchers call an accuracy-motivation trade-off: beliefs serve epistemic functions (tracking truth) and identity-protective functions (affirming group belonging and self-concept), and when these functions conflict — when accurate belief would require accepting something that challenges identity — motivated reasoning tends to dominate accuracy motivation. Corrections of identity-threatening misinformation face a more powerful opponent than corrections of identity-neutral falsehoods.

A structurally distinct problem is the illusory truth effect: claims that are familiar become more believable with repetition, regardless of whether the familiarity arose from truthful or false exposure. Fact-checking that repeats a claim in order to debunk it increases familiarity with the claim, potentially increasing its perceived truth among those who process the debunking shallowly. The "truth sandwich" heuristic — stating the truth first, minimising repetition of the false claim, and closing with the truth — attempts to correct without amplifying the false claim, but its practical implementation in high-volume news environments is contested. This creates a structural tension between the demands of transparent journalism — which may require quoting and addressing false claims directly — and the cognitive demands of effective correction.

Questions · Passage 03
9
The failure to replicate the backfire effect leads the passage to conclude that such effects are "narrow exceptions rather than a general phenomenon." What does this conclusion imply about the original findings?
CORRECT: C The passage says replications "largely failed to find" the backfire effect and suggests the original results were "sample-specific, topic-specific, or artefacts of the experimental design." This implies the original findings were likely real under the conditions studied but do not generalise — they were overgeneralised from specific conditions to a general phenomenon. C captures this. A attributes fraud, which is not what the passage implies — sample-specificity and design artefacts are normal scientific limitations. B says the effect is real but conditional, which is C's position expressed less precisely. D says misinformation research generally produces unreliable findings, which overstates — the passage endorses the reliable finding that corrections reduce false beliefs on average.
10
The passage describes an "accuracy-motivation trade-off." Which of the following most precisely characterises what this trade-off involves?
CORRECT: B The passage explicitly defines the accuracy-motivation trade-off: beliefs serve "epistemic functions (tracking truth) and identity-protective functions (affirming group belonging and self-concept), and when these functions conflict... motivated reasoning tends to dominate accuracy motivation." B states this precisely. A describes cognitive effort versus ease, which is a different trade-off framework. C describes short-run versus long-run accuracy trade-offs, which is not in the passage. D describes individual accuracy versus social harmony, which introduces a social cost dimension not in the passage's account.
11
The illusory truth effect creates a structural tension for journalism that the "truth sandwich" attempts to address. Which feature of the illusory truth effect specifically creates this tension?
CORRECT: A The passage says fact-checking "repeats a claim in order to debunk it" and thereby "increases familiarity with the claim, potentially increasing its perceived truth among those who process the debunking shallowly." The specific feature that creates the tension is that repetition — any repetition, including in debunking — can increase believability. The tension is therefore structural: transparent journalism requires engaging with false claims directly, but that engagement is the mechanism by which illusory truth operates. A captures the core feature. B focuses specifically on shallow processing, which is one pathway for the effect but A identifies the more fundamental feature — repetition regardless of depth. C introduces political congruence, which is a feature of backfire effects not specifically of illusory truth. D concerns accumulation over time, which is a consequence rather than the feature creating the structural tension.
12
The passage describes the correction effect as "weakest precisely where it matters most." What does this observation imply about the practical utility of fact-checking as a democratic institution?
CORRECT: C The observation that corrections are weakest where they matter most — politically congenial, identity-relevant misinformation — implies that fact-checking's actual corrective reach is narrowest at exactly the high-stakes political beliefs where democratic accountability most depends on accurate information. This limits its democratic function without eliminating it entirely; it remains useful for lower-stakes, identity-neutral corrections. C captures this qualified, scope-limited assessment. A says fact-checking should be replaced, which goes further than the observation warrants. B says fact-checking is merely credentialing, which is too cynical — the passage acknowledges it reduces false beliefs on average. D prescribes environmental restructuring, which is a policy recommendation not an implication drawn from the observation itself.
Passage 3 Score
/4

P 04
Journalism Ethics, Objectivity & the Advocacy Turn
Passage Timer
10:00
Read the Passage

The norm of journalistic objectivity — that reporters should present facts without personal opinion, offer multiple perspectives, and separate news from editorial comment — was institutionalised in the early twentieth century as a response to partisan newspaper culture and as a professional credibility claim. Its philosophical basis is contested: critics argue that selection of what to cover, how to frame it, and whose voices to include are inevitably evaluative decisions, making pure objectivity epistemically impossible. The procedural objectivity that journalists actually practise — quoting official sources, presenting "both sides," maintaining a formal neutrality — is distinct from the principled objectivity the norm invokes, and critics argue that procedural objectivity systematically advantages powerful actors who can generate quotable statements while marginalising perspectives that resist the conventions of official statement.

The "false balance" problem is procedural objectivity's most visible failure mode: treating fringe positions as equivalent to well-established consensus produces representations of epistemic reality that are systematically misleading. Climate coverage in which scientists and climate sceptics are given equal framing time misrepresents the actual state of expert knowledge, satisfying a procedural fairness norm while failing an epistemic accuracy norm. This tension reveals that procedural objectivity and epistemic accuracy are not the same standard — and that optimising for one can undermine the other. The response of many science journalists has been to explicitly reject "both sides" framing for empirical questions where evidence strongly favours one side, while maintaining it for genuinely contested normative and policy questions.

The advocacy journalism movement argues that the solution to objectivity's failures is transparency rather than its replacement by a different procedural norm. On this account, journalists should disclose their perspectives and values, and audiences should evaluate coverage with that context. This approach converges with the "view from somewhere" argument: all journalism proceeds from a perspective, and transparency about that perspective is more honest than the pretence of a "view from nowhere." Critics respond that declared bias can legitimate systematic distortion — an outlet that announces its ideological position can use that transparency as cover for partisan amplification rather than genuine reporting — and that the collapse of shared factual premises is a greater democratic threat than hidden editorial perspective. Both concerns are genuine, and the debate about journalism's normative foundations reflects the deeper question of what role journalism is supposed to play in a democracy.

Questions · Passage 04
13
The passage distinguishes "procedural objectivity" from "principled objectivity." What is the significance of this distinction for the critique of journalistic objectivity?
CORRECT: B The distinction matters because the objectivity norm's legitimacy claim rests on principled objectivity — actual neutrality and truth-tracking — while what is practised is procedural. The critique lands precisely in this gap: procedural objectivity (quoting officials, both-sidesing) fails the principled ideal while using it as cover. The false balance problem demonstrates this failure. B captures this. A says critics misidentify their target, but the passage suggests procedural objectivity's systematic distortions are precisely what critics attack — the critique lands correctly on procedural practices. C says procedural is better because achievable, which misses the point that the passage shows procedural has its own systematic failures. D allows acceptance of principled while rejecting procedural, which is possible but not the significance the distinction has for the passage's critique.
14
Science journalists' response to false balance — rejecting "both sides" for empirical questions while maintaining it for normative ones — requires a principled distinction between empirical and normative questions. Which of the following most effectively challenges whether this distinction is stable in practice?
CORRECT: C The most effective challenge shows that the empirical/normative distinction is not categorical for politically contested questions — they have both components, which may be separately settled or contested. The empirical component of climate change (is it happening, is it anthropogenic) may be settled, while the normative component (what should we do) remains contested. Treating "climate change" as a single empirical question and applying no-both-sides to it obliterates this internal complexity. C captures this. A says normative questions have empirical dimensions, which is the mirror image of the same problem and equally valid, but the most pressing challenge in journalism practice is C's version — contested questions that are partly settled empirically. B says classification is itself editorial, which is true but a more abstract challenge that does not as directly challenge the practical stability of the distinction. D says expert consensus can be wrong, which challenges the basis for departing from both-sides but is a more radical challenge than C.
15
Critics of advocacy journalism argue that "declared bias can legitimate systematic distortion." What specific mechanism makes transparency a potential cover for distortion rather than a remedy for it?
CORRECT: B The mechanism is that the transparency declaration functions as a prior licence for partisan behaviour: having disclosed its perspective, the outlet can engage in selective reporting and framing and deflect accuracy criticism by pointing to the disclosed bias. Transparency substitutes for — rather than facilitating — the substantive accuracy it was supposed to support. B captures this. A concerns polarisation through audience sorting, which is a consequence of advocacy journalism but not the mechanism by which transparency enables distortion. C concerns competitive disadvantages from disclosure requirements, which is a different institutional concern. D concerns audience credibility discounting, which is a consequence for the outlet but not the mechanism by which transparency enables the outlet to distort.
16
The passage says the debate about journalism's normative foundations "reflects the deeper question of what role journalism is supposed to play in a democracy." Which of the following most accurately characterises the disagreement that this deeper question generates?
CORRECT: D The passage identifies the critics' concern: "the collapse of shared factual premises is a greater democratic threat than hidden editorial perspective." This implies the deeper question is about whether democracy requires a shared epistemic foundation — common factual ground — or whether it can function across diverse and even opposed information environments. Objectivity norms try to provide that shared ground; advocacy journalism presupposes democracy can work across perspectival diversity. D captures this. A concerns informing versus mobilising, which is a related distinction but not the deepest disagreement the passage identifies. B concerns regulatory models, which is an institutional question not the foundational democratic theory question. C concerns individual versus aggregate information accuracy, which is a related dimension but not the central tension about shared epistemic foundations.
Passage 4 Score
/4

P 05
Political Economy of Media, Ownership & the Capture of Editorial Independence
Passage Timer
10:00
Read the Passage

Herman and Chomsky's propaganda model, developed in "Manufacturing Consent" (1988), proposed that mass media perform a social control function by filtering information through five institutional mechanisms: ownership by profit-seeking large corporations; reliance on advertising revenue that aligns editorial interests with advertiser preferences; dependence on official sources (government and corporate PR) as the primary supply of news; vulnerability to organised "flak" from powerful actors who contest unfavourable coverage; and an ideological framework — initially anti-communism, later terrorism and related threats — that serves as a filtering criterion for acceptable coverage. The model predicts that media content will systematically favour the interests of the wealthy and powerful not because editors are corrupted but because structural selection pressures internalised by editors and journalists shape what is produced without requiring explicit instruction.

The propaganda model has been criticised on several grounds. First, it is primarily a macro-structural account that cannot explain within-system variation: some journalists produce genuinely critical coverage of powerful interests despite working within the same structural constraints, which the model can accommodate only by treating such coverage as the exception that proves the rule rather than as evidence that requires explanation. Second, the model's structural determinism means its predictions are difficult to falsify: any apparently counter-model evidence can be reframed as structural tolerance for limited dissent that serves the overall legitimating function. Third, critics argue that the model underestimates editorial autonomy — journalists and editors do resist owner influence in documented cases — and overestimates the coherence and intentionality of elite interests in shaping media output.

The digital transformation of media economics has both confirmed some elements of the propaganda model and complicated others. The collapse of advertising revenue for legacy print media has increased ownership concentration as weaker outlets are acquired or fold, consistent with the model's ownership filter. The dependence on digital platform intermediaries — Facebook, Google — as distribution channels introduces a new structural dependence that the original model did not anticipate. At the same time, the fragmentation of media into thousands of outlets with diverse ownership and funding structures has created a more complex information environment than the concentrated broadcast model the propaganda model was built to explain. The model's core insight about structural pressures on content remains productive; its specific institutional architecture no longer maps cleanly onto a digitally transformed media landscape.

Questions · Passage 05
17
The propaganda model predicts systematic bias without requiring explicit editorial corruption. What mechanism enables this, and why is it analytically significant?
CORRECT: C The passage says structural selection pressures are "internalised by editors and journalists" and "shape what is produced without requiring explicit instruction." The mechanism is that the selection process — hiring, promotion, publication — sorts toward people whose judgments happen to align with structural interests, not through corruption but through accumulated micro-decisions about whose work fits the outlet. This is analytically significant because it means individual journalistic integrity is insufficient to overcome structural bias: even honest journalists operating in good faith produce structurally biased output if they have been selected for alignment. C captures this. A describes informal communication channels, which would still be a form of explicit influence, not structural selection. B describes material incentive alignment, which is related but frames it as individual self-interest rather than structural sorting. D describes source dependence, which is one filter in the model but not the general mechanism of structural internalisation.
18
The second critique — that the model's predictions are "difficult to falsify" — raises which specific methodological concern?
CORRECT: B The passage explicitly says "any apparently counter-model evidence can be reframed as structural tolerance for limited dissent that serves the overall legitimating function." This is the classic unfalsifiability structure: the model accommodates both expected and unexpected evidence through its own theoretical resources, leaving no prediction that would be violated by any observable. B states this precisely. A says the model makes no testable predictions, which overstates — the model makes predictions about media content patterns that could in principle be tested. C concerns measurement difficulty, which is a practical obstacle but not the unfalsifiability concern. D concerns sample size requirements, which is also a practical concern not the logical unfalsifiability the critique identifies.
19
The passage says the digital transformation "has both confirmed some elements of the propaganda model and complicated others." Which element does it most directly complicate, and why?
CORRECT: C The passage says the model's "specific institutional architecture no longer maps cleanly onto a digitally transformed media landscape." The complication is the architectural mismatch: the model was designed to describe a concentrated broadcast landscape, and fragmentation of media into thousands of diverse outlets with different ownership and funding structures complicates the model's assumptions about how structural pressures operate. C captures the most comprehensive complication. A concerns advertising revenue flows, which is a specific element that has changed but is part of the broader architectural shift. B says digital media has reduced ownership concentration, but the passage actually says the collapse of advertising revenue has increased ownership concentration for legacy media — so B is factually inconsistent with the passage. D concerns flak amplification, which would confirm rather than complicate the model.
20
The passage concludes that the propaganda model's "core insight about structural pressures on content remains productive" while its "specific institutional architecture no longer maps cleanly." What does this conclusion imply about how to evaluate political economy frameworks in media studies?
CORRECT: D The passage's conclusion distinguishes the structural logic (productive and enduring) from the specific institutional architecture (no longer clean fit). This implies evaluating the framework by whether its analytical orientation — structural pressures shape content — applies in new configurations, not whether the original five filters map onto contemporary media. D captures this evaluative principle: assess the structural logic in new contexts rather than testing the specific original mechanism. C is close but frames it as a methodological distinction between abstract insight and specific mechanism, which is accurate but D is more precise about what the productive evaluation looks like. A says updating would restore full explanatory power, which overstates — the passage suggests the digital environment is more complex than any simple update would capture. B says the model should be replaced, which goes further than the passage's "remains productive" conclusion.
Passage 5 Score
/4
Media · Total Score
/8
Category 18
Linguistics
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Universal Grammar, the Poverty of the Stimulus & the Usage-Based Challenge
Read the Passage

Chomsky's universal grammar (UG) hypothesis holds that children acquire language with a speed and accuracy that cannot be explained by the quantity and quality of available linguistic input — the "poverty of the stimulus" argument. The input is too sparse, too noisy, and too underdetermined to support the grammatical knowledge children rapidly achieve; therefore the mind must be endowed with innate, species-specific linguistic knowledge that constrains the hypothesis space over which language acquisition operates. UG provides the initial state of a "language acquisition device" (LAD) that bridges the gap between impoverished input and rich grammatical competence. The poverty of the stimulus argument is not merely about quantity of input but about structural complexity: children converge on grammatical rules that go beyond anything explicitly modelled in the input, including structure-dependence — the fact that grammatical transformations (such as question formation) operate on hierarchical phrase structure rather than on linear word order.

The strongest versions of UG have been substantially weakened by subsequent empirical research. Corpus analysis of child-directed speech reveals more complex syntactic constructions in the input than the poverty of the stimulus argument assumed; statistical learning research demonstrates that infants extract remarkably abstract distributional patterns from linguistic input at ages, and on timescales, that undercut the assumption that input-driven learning must be slow and piecemeal; and computational models using neural networks or probabilistic grammars trained on child-directed speech achieve grammatical competence on standard measures without explicitly encoded UG principles. These findings do not conclusively refute UG — the corpus evidence and neural network models address quantity and pattern-learning while the poverty of the stimulus argument concerns structural dependencies that may still require innate specification.

Usage-based accounts (Tomasello, Goldberg) argue that language acquisition is a form of domain-general pattern-learning: children extract constructions — form-meaning pairings of varying abstraction — from exemplars in the input through mechanisms of analogy, categorisation, and schematisation that are not linguistically specific. The child brings powerful general cognitive machinery, not pre-programmed syntactic structures. The current state of the field is not resolution but productive disagreement: innate constraints on the shape of possible grammars are probably real, but are considerably weaker and more abstract than the early UG programme proposed — more like a set of preferential biases operating on general learning mechanisms than a full modular syntactic competence.

Questions · Passage 01
1
The poverty of the stimulus argument holds that children converge on structure-dependent grammatical rules that go beyond anything explicitly modelled in the input — specifically, that question formation operates on hierarchical structure rather than linear word order. Which of the following, if true, most seriously weakens the claim that this specific structure-dependence requires innate specification?
CORRECT: C The poverty of the stimulus argument for UG in the structure-dependence case is: children acquire structure-dependent rules that can't be learned from input alone → innate specification required. Option C directly refutes the "can't be learned from input alone" premise: a neural network with no pre-programmed hierarchy acquires structure-dependent question formation from child-directed speech. If a system with no innate syntactic structure can learn this from input, the poverty of the stimulus argument for innate specification of hierarchy is undermined. A shows more complex input than assumed — this addresses the "quantity" version of the argument but the passage says POS concerns structural dependencies that may still require innate specification even with richer input; A is thus consistent with the passage's qualified UG position. B shows cross-linguistic robustness — this actually strengthens UG by showing universality. D shows language-specific patterns — interesting but does not address whether the English case requires innate specification; universality is not required for the POS argument to hold in specific cases.
2
The passage concludes that innate constraints "are probably real, but are considerably weaker and more abstract than the early UG programme proposed — more like a set of preferential biases operating on general learning mechanisms." Which of the following can be most reliably inferred from this conclusion?
CORRECT: B The conclusion says the early UG programme overstated innateness — the constraints are real but weaker and more abstract than claimed. A reliable inference is that early UG's specific modularity and deterministic syntactic principles were empirically overclaimed; the residual innateness is compatible with domain-general learning, representing a substantial revision of the original programme. A predicts convergence — possible but speculative; the passage describes "productive disagreement," not an emerging convergence. C uses cross-linguistic variation as evidence for the biases — interesting logic but the passage doesn't make this inference. D says the difference is terminological — too strong; if UG's innate constraints are real (even if weaker), this is a theoretical commitment that usage-based accounts reject; "preferential biases" vs. no specifically linguistic innateness is not merely terminological.
3
The poverty of the stimulus argument concludes that input is insufficient → innate specification is required. Yet the passage notes that computational models trained only on child-directed speech achieve grammatical competence "without explicitly encoded UG principles." These two facts appear to conflict. Is this a genuine paradox, or can it be resolved? Which best captures the relationship?
CORRECT: B The passage itself provides the resolution: "the corpus evidence and neural network models address quantity and pattern-learning while the poverty of the stimulus argument concerns structural dependencies that may still require innate specification." The models achieve competence on standard measures but the UG argument concerns a specific and deeper property — structure-dependence — that the models may be tracking through surface regularities without the underlying hierarchical representation. B captures this precisely: it's not a paradox but a distinction between levels of description. A says it's a genuine refuting paradox — the passage explicitly says the findings "do not conclusively refute UG" for this reason. C says it's irresolvable because one must be flawed — the passage treats both as genuine but operating at different levels, not as contradictory. D says neural networks are themselves "innate" due to engineering — a real point in some debates but not the resolution the passage provides.
4
Usage-based accounts argue that children acquire grammar through domain-general pattern-learning mechanisms — analogy, categorisation, schematisation — rather than through a linguistically specific LAD. For this claim to constitute a genuine alternative to UG rather than merely a redescription of it, which of the following must be assumed?
CORRECT: B For usage-based accounts to be a genuine alternative (not just a relabelling), the domain-general mechanisms must be sufficient to produce all the grammatical competencies that UG explains — including the hard cases like structure-dependence and unbounded dependencies. If domain-general mechanisms can handle the easy cases but still require supplementation by linguistically specific principles for the hard cases, usage-based accounts are not genuine alternatives to UG but modifications of it. A requires independent validation of the mechanisms — relevant to their credibility but not to whether they constitute a genuine alternative to UG; even unvalidated mechanisms could in principle be genuine alternatives. C appeals to parsimony — a methodological virtue, not the foundational assumption for being a genuine alternative. D equates uniquely powerful domain-general abilities with species-specific endowment — interesting but this is a separate debate about the nature of human cognition, not the core assumption for usage-based accounts to be genuine alternatives to UG.
Passage 1 Score
/4

P 02
Linguistic Relativity: Strong Whorf, Thinking-for-Speaking & the Calibration Problem
Read the Passage

The Sapir-Whorf hypothesis in its strong form — that language determines thought, making thought without language impossible — has been decisively refuted by pre-linguistic infant cognition research: infants exhibit object permanence, number discrimination, and proto-causal reasoning long before they acquire the linguistic categories that, on strong Whorfianism, should be constitutive of such concepts. The weak form — that language influences rather than determines thought, making certain cognitive distinctions more or less accessible — has received more nuanced empirical support. Boroditsky's cross-linguistic studies demonstrate systematic effects of linguistic categories on non-linguistic cognition: speakers of languages with grammatical gender attribute gendered properties to inanimate objects consistent with their linguistic gender category, and speakers of languages encoding absolute spatial reference (north/south rather than left/right) show reliably different spatial memory and navigation patterns from speakers of relative reference languages. Related cross-linguistic work (Chen's future-tense studies) finds that speakers of languages without obligatory future tense marking show higher savings rates, consistent with reduced psychological distance to future outcomes.

Slobin's "thinking for speaking" framework offers an important theoretical limitation on the weak Whorfian position. Language shapes cognition not as a general background condition but specifically in the moment of linguistic encoding — when a speaker is preparing to speak and must package her experience into the categorical resources her language provides. The claim is not that habitual use of linguistic categories permanently restructures pre-linguistic conceptual representations but that it activates certain distinctions preferentially in contexts of linguistic production. This limits the scope of linguistic relativity to performance contexts rather than competence: the Whorfian effect is an artefact of the measurement procedure (asking speakers to respond linguistically) rather than evidence of deep conceptual restructuring.

Critics of the thinking-for-speaking limitation argue that Slobin's framework does not resolve but merely displaces the problem. If speakers habitually activate certain distinctions when encoding experience for linguistic communication, and if most of their socially consequential reasoning occurs in contexts requiring linguistic communication, then the performance/competence distinction may not carve the phenomena at their joints. The real question is whether habitual linguistic activation of certain distinctions over long periods of language use produces cumulative effects on non-linguistic conceptual representation — which is an empirical question that neither Boroditsky's cross-sectional studies nor Slobin's theoretical proposal resolves.

Questions · Passage 02
5
Critics of Slobin's thinking-for-speaking framework argue that habitual linguistic activation of distinctions may produce cumulative effects on non-linguistic representation. Which of the following, if true, most strengthens this critical argument against Slobin?
CORRECT: B Slobin's limitation is that Whorfian effects are artefacts of linguistic encoding — they only show up when speakers must package experience linguistically. To strengthen the critics' position, we need evidence that the effects persist in genuinely non-linguistic tasks where the thinking-for-speaking mechanism cannot operate. Option B provides exactly this: cross-linguistic differences in purely non-linguistic tasks (pointing, sorting, navigation without speech). If the effects appear with no linguistic response required, Slobin's limitation is undermined. A shows longitudinal restructuring in bilinguals — strong evidence for cumulative effects, but it's a longitudinal study of bilinguals, not a purely non-linguistic task test; Slobin could still argue that bilinguals' increased linguistic engagement drives the shift. C shows language regions activated in silent spatial reasoning — this actually provides evidence consistent with Slobin's framework (linguistic encoding is covertly active even in apparently non-linguistic tasks) rather than against it. D confirms modality-independence of the effect — relevant to sign language research but doesn't address whether effects persist in non-linguistic tasks for any speakers.
6
The passage says the question of whether habitual linguistic activation produces cumulative effects on non-linguistic representation "is an empirical question that neither Boroditsky's cross-sectional studies nor Slobin's theoretical proposal resolves." Which of the following can be most reliably inferred from this claim?
CORRECT: B The passage says neither cross-sectional studies nor Slobin's theoretical limitation resolves whether habitual activation produces cumulative conceptual restructuring. The reliable inference is that resolving this requires research designs capable of distinguishing the two hypotheses — specifically longitudinal tracking or experimental paradigms that manipulate linguistic experience and measure conceptual representation independently of linguistic encoding. B captures this methodological inference precisely. A says Boroditsky's studies are "flawed" — too strong; the passage says the data is consistent with both interpretations (ambiguous), not that it is methodologically invalid. C says weak Whorfianism should be abandoned — the passage takes no such prescriptive stance; "unresolved" does not mean "abandoned." D says Slobin's account is the correct interpretation — the passage presents both as viable and explicitly says the question is empirically unresolved.
7
A Whorfian might argue: "Boroditsky's spatial cognition results show that speakers of absolute reference languages perform differently on non-linguistic spatial tasks. This proves that linguistic categories shape non-linguistic thought, refuting Slobin's thinking-for-speaking limitation." Which logical problem most seriously affects this argument?
CORRECT: B The Whorfian argument uses the spatial task differences to "refute" Slobin. But Slobin's framework doesn't predict that speakers of different languages will perform identically on non-linguistic tasks — it predicts that the differences reflect thinking-for-speaking activation rather than deep conceptual restructuring. So finding differences doesn't refute Slobin; it's exactly what his framework predicts (since preparing to respond to spatial tasks in an experimental context involves linguistic encoding). The argument attacks a claim Slobin doesn't make. A identifies an alternative causation problem — real and important, but this is a confound that affects Boroditsky's studies generally, not specifically the argument that the results refute Slobin. C says it proves too much — a valid concern about overreading any cognitive difference as language-caused, but not the specific flaw in using spatial task results against Slobin. D raises reverse causation — also a valid concern about the broader research programme but not the specific logical flaw in the Whorfian's argument against Slobin.
8
The passage ends by calling the question of cumulative effects "an empirical question" that current evidence doesn't resolve. What is the author's most plausible purpose in ending the passage this way rather than adjudicating between Boroditsky and Slobin?
CORRECT: B The passage carefully maps the positions and identifies why neither resolves the core question — cross-sectional data can't distinguish performance-level activation from conceptual restructuring, and Slobin's framework is theoretical rather than an independent empirical test. Ending with "this is an empirical question neither resolves" signals what kind of evidence would settle it: research designs that separate the two hypotheses. This is a methodological guidance function — the author is identifying what the field needs to do next, not adjudicating on current evidence. A attributes political avoidance — unwarranted; the author engages substantively with both positions. C attributes implicit Slobin endorsement — there is no signal of this; if anything, the author airs the critics' charge that Slobin's framework "merely displaces the problem" without rebutting it. D says the purpose is to establish the field has moved beyond strong Whorfianism — a correct observation about the passage's content but not the primary function of the concluding statement.
Passage 2 Score
/4

P 03
Language Change, Variation & the Social Embedding of Grammar
Passage Timer
10:00
Read the Passage

Languages change continuously, yet communities of speakers typically perceive their language as stable across their lifetimes. This paradox — change without apparent change — is partly explained by the fact that sound changes, grammatical restructuring, and semantic shifts operate over timescales longer than individual perception. William Labov's pioneering variationist sociolinguistics demonstrated that apparent synchronic variation — multiple ways of saying the same thing coexisting in a speech community — often represents change in progress: what looks like a random stylistic option is actually a variable that is systematically correlated with age, gender, social class, and social network density, and that is shifting directionally across generations. By treating variation as the raw material for change and by studying social embedding alongside linguistic structure, Labov dissolved the structuralist distinction between synchrony (language at a moment) and diachrony (language over time).

The mechanism of change typically operates through the distinction between innovators and adopters. Linguistic innovations arise in specific social groups — young working-class women have been disproportionately identified as linguistic innovators in multiple Labov-tradition studies — and spread outward through social networks. The S-curve of adoption describes the typical trajectory: slow initial spread, rapid acceleration as adoption reaches a critical mass, and then deceleration as the innovation becomes the new norm. The social motivation for adoption is typically prestige: covert prestige (the street prestige of non-standard forms associated with local identity and in-group solidarity) drives the adoption of vernacular features, while overt prestige (the institutional prestige of standard forms) creates pressure in the other direction. Languages do not change because individuals decide to change them; they change through the aggregation of micro-level social choices whose individual motivation is social positioning rather than communicative efficiency.

The mechanisms that drive change also explain resistance to change. Hypercorrection occurs when speakers overextend a prestige form beyond its grammatically justified domain — "between you and I" rather than "between you and me" — because the prestige form is incompletely acquired and is applied by analogy in contexts where the standard actually calls for the non-prestige variant. Dialect levelling, the convergence of dialects toward a supraregional standard in contexts of high mobility and inter-dialect contact, reduces the geographic fragmentation that historically produced distinct dialects and that provided the conditions for divergent change. The standard language ideology — the belief that there is a single correct form of the language from which others deviate — actively suppresses variation and change by stigmatising innovative and non-standard forms, channelling the prestige mechanism against rather than toward linguistic change.

Questions · Passage 03
9
Labov's variationist approach "dissolved the structuralist distinction between synchrony and diachrony." What specifically allowed this dissolution?
CORRECT: C The dissolution works by reinterpreting synchronic variation. Structuralism treated synchrony and diachrony as separate objects of study: language-at-a-moment is synchronic, language-over-time is diachronic. Labov showed that synchronic variation — the coexistence of variants at a moment — is systematically correlated with social variables in ways that reveal directional change. The synchronic state is a snapshot of a diachronic process; studying variation reveals change without needing longitudinal data. C captures this reinterpretation. A says they use the same analytic tools, which is about method rather than the dissolution of the conceptual distinction. B says no system is ever fully stable, which is true but vaguer than C's specific mechanism. D says Labov used longitudinal methodology, which misrepresents his approach — the insight was precisely that apparent synchrony contains diachronic evidence.
10
The passage identifies young working-class women as disproportionate linguistic innovators. Which of the following, if true, would most directly explain this pattern in terms of the social mechanisms the passage describes?
CORRECT: D The passage describes the mechanism of change as covert prestige driving adoption, with spread through social networks. An explanation of why young working-class women are innovators should invoke these mechanisms: covert prestige is salient for them (identity solidarity with local working-class community matters more than institutional prestige), and their dense social networks facilitate rapid spread. D combines both mechanisms. C captures the covert prestige salience but omits the network spread mechanism. A concerns educational exposure, which is not a mechanism the passage identifies. B says they are socially mobile, but the passage associates innovation with dense local networks rather than with cross-class contact, which would more likely produce standard form adoption.
11
Hypercorrection involves overextending a prestige form beyond its grammatically justified domain. What does this phenomenon reveal about the relationship between prestige and language acquisition?
CORRECT: B Hypercorrection shows that the social motivation to use prestige forms can operate independently of accurate grammatical knowledge of those forms. The speaker adopts "between you and I" because "I" has prestige over "me," but without the structural knowledge of when "I" versus "me" is grammatically warranted. Social adoption outruns structural acquisition. B captures this relationship. A says prestige forms are psychologically more salient, which is a cognitive framing rather than the social-structural insight. C invokes standard language ideology and insecurity, which is related but focuses on social anxiety rather than the structural point about adoption preceding acquisition. D invokes over-generalisation as evidence for usage-based accounts, which is a theoretical implication not what the phenomenon primarily reveals about prestige and acquisition.
12
The passage says languages change "through the aggregation of micro-level social choices whose individual motivation is social positioning rather than communicative efficiency." What does this imply about teleological accounts of language change?
CORRECT: B A teleological account of language change treats changes as movements toward some functional goal — efficiency, expressiveness, simplicity. The passage explicitly attributes change to social positioning motivation rather than communicative efficiency. This implies that teleological accounts misidentify the motor of change: there is no language-level goal being pursued, only individual-level social motivations whose aggregate produces change. B captures this implication. A says teleological accounts are right about direction but wrong about mechanism, but the passage does not say changes are directional toward efficiency — they are directional because of social pressures which may not produce efficiency outcomes. C says efficiency may emerge from aggregation, which is a possible concession but not the implication the passage draws from the social positioning account. D says both motivations operate simultaneously, which weakens the passage's strong social positioning claim.
Passage 3 Score
/4

P 04
Pragmatics, Speech Act Theory & the Cooperative Principle
Passage Timer
10:00
Read the Passage

Linguistic communication relies on a systematic gap between what is said and what is meant. The sentence "Can you pass the salt?" is grammatically a question about the hearer's ability, but its conventional use is as a request; "It's cold in here" can communicate a request to close the window without explicitly stating one. J.L. Austin's speech act theory provided the conceptual apparatus: utterances perform acts — asserting, promising, requesting, warning, apologising — whose illocutionary force is not always transparently encoded in the sentence's syntactic form. Austin distinguished the locutionary act (what is literally said), the illocutionary act (the speech act performed), and the perlocutionary act (the effect achieved in the hearer). Understanding an utterance requires recovering its illocutionary force, which depends on contextual knowledge, shared background, and knowledge of communicative conventions that go far beyond grammatical decoding.

H.P. Grice's cooperative principle and its maxims — Quantity (say enough but not too much), Quality (say what you believe to be true), Relation (be relevant), and Manner (be clear, brief, orderly) — provided a theoretical framework for the inferential gap between sentence meaning and utterance meaning. Conversational implicature arises when a speaker appears to violate a maxim, triggering the hearer to infer what the speaker must mean given the assumption that they are cooperating. "Some students passed the exam" implicates "not all" because if all had passed, the maxim of Quantity would require the speaker to say so; the underinformative "some" is interpreted as meaning "some but not all" by implicature rather than by semantic encoding. Implicatures are defeasible — they can be cancelled without contradiction ("Some students passed — in fact all of them") — distinguishing them from semantic entailments.

The cooperative principle has been criticised for presupposing a model of communication as fundamentally cooperative and rational that does not capture the range of communicative practices in natural language use. Miscommunication, deliberate uncooperativeness, institutional power asymmetries, and the context-dependence of what counts as "relevant" all challenge the universalist pretensions of the Gricean framework. Cross-cultural pragmatics research has documented that the relative weight given to different maxims varies systematically across cultures — some cultures tolerate far more conversational silence and indirectness than the Gricean framework as typically applied would predict. Sperber and Wilson's relevance theory attempts to reduce the maxims to a single principle of optimal relevance — the expectation that any communicated message is worth the processing effort required — which they argue captures the cognitive economy of interpretation without presupposing explicit normative cooperation.

Questions · Passage 04
13
Austin's distinction between locutionary, illocutionary, and perlocutionary acts is motivated by which observation about language use?
CORRECT: B Austin's tripartite distinction is motivated by the separability of the three levels: the same words (locution) can perform different speech acts depending on context and convention (illocution), and the same speech act can produce different outcomes in different hearers (perlocution). The examples in the passage — "Can you pass the salt?" as ability question versus request — illustrate that locution and illocution come apart. B captures the motivation as systematic separability requiring three-level analysis. A concerns emotional tone versus propositional content, which is a different and narrower distinction. C concerns speaker meaning versus semantic content, which is closer but frames it as irony and metaphor rather than the general structural separability Austin identifies. D concerns presupposition failure and felicity conditions, which is a related but distinct phenomenon.
14
Conversational implicature is described as "defeasible." Why does defeasibility matter for distinguishing implicature from semantic entailment?
CORRECT: C The passage explicitly uses defeasibility as the distinguishing criterion: "implicatures are defeasible — they can be cancelled without contradiction... distinguishing them from semantic entailments." The importance of defeasibility is precisely that it provides a clean diagnostic: if cancelling X produces contradiction, X is semantically entailed; if cancelling X is consistent, X is a pragmatic implicature. C states this diagnostic function. A says defeasibility shows implicatures are pragmatic, which is the conclusion the diagnostic supports but not the reason defeasibility itself matters for the distinction. B says defeasibility is the defining property of inference, which overgeneralises — semantic entailments in logic are not defeasible. D says defeasibility explains context-sensitivity, which is a consequence rather than why it matters for distinguishing implicature from entailment.
15
Relevance theory attempts to replace Grice's four maxims with a single principle. What advantage does this reduction offer over the four-maxim framework?
CORRECT: B The passage says Sperber and Wilson's relevance theory "attempts to capture the cognitive economy of interpretation without presupposing explicit normative cooperation." The advantage is precisely that it handles both cooperative and non-cooperative contexts through a single cognitive principle rather than requiring the normative presupposition of cooperation that Grice's framework needs. B captures this. A concerns falsifiability, which is not the advantage the passage identifies. C concerns practical application, which is a secondary benefit not the theoretical advantage the passage describes. D says it unifies pragmatics with cognitive science, which is partially captured in B but is less precisely the advantage over Grice that the passage identifies — the specific advantage is not presupposing cooperation.
16
Cross-cultural pragmatics research shows that "the relative weight given to different maxims varies systematically across cultures." What does this finding imply about the universality of the Gricean framework?
CORRECT: C The finding is that relative weighting of maxims varies across cultures. This does not establish that the maxims are absent in some cultures (which would support A) but that how they are applied — what counts as adequate Quantity, relevant Relation, clear Manner — is culturally variable. This challenges the universalist claim that interpretations can be derived from culturally neutral principles, since the principles' application is itself culturally specific. C captures this challenge to application universality. A says the framework is culturally specific and inapplicable elsewhere, which goes beyond what variation in weighting implies. B says the maxims are universal but priorities vary, which is a possible reframing but not what "relative weight varies" implies — it challenges the culturally neutral application claim. D says the framework should be abandoned, which is a policy conclusion not implied by the finding alone.
Passage 4 Score
/4

P 05
Language Endangerment, Documentation & the Ethics of Linguistic Fieldwork
Passage Timer
10:00
Read the Passage

Of the approximately 7,000 languages currently spoken in the world, the majority are spoken by small communities, and a substantial fraction — estimates range from 50 to 90 percent — are predicted to cease to be spoken as community languages by the end of the twenty-first century. Language shift occurs when a community transitions from using an ancestral language for most communicative functions to using a dominant regional or national language; it is typically driven by economic incentives, formal education in a dominant language, demographic displacement, and the perceived prestige advantages of the dominant language. Language death follows when the last speakers of a language die or shift without passing it on. The scale of this attrition represents the largest and fastest episode of language loss in the history of the species, and it has prompted both an academic response — the language documentation movement — and a political one — language revitalisation programmes that attempt to reverse or arrest shift.

The case for language documentation rests on several arguments. Scientifically, languages are non-redundant cognitive and cultural artefacts: each language encodes solutions to communicative problems — phonological distinctions, grammatical structures, evidentiality systems, spatial reference frames — that may not exist elsewhere, and their loss represents an irreversible reduction in the sample of human linguistic diversity from which generalisations about language universals and the cognitive bases of language can be drawn. Culturally, languages encode knowledge systems — ecological, medicinal, navigational, social — that are not fully transferable to other languages, so that language loss entails knowledge loss that goes beyond the merely linguistic. Ethically, language maintenance and shift are never purely individual choices: they are shaped by histories of colonial imposition, economic marginalisation, and active suppression that create structural conditions under which individual speakers make choices that are in form voluntary but in substance constrained.

Documentation itself raises ethical questions about the relationship between linguist and community. Historically, fieldwork was extractive: linguists collected materials that were deposited in metropolitan archives inaccessible to source communities. Contemporary documentation ethics emphasises community benefit, shared ownership, and the right of communities to determine how materials collected from them are used and archived. The tension between the linguist's scholarly interest in maximally comprehensive documentation and the community's interest in controlling materials that may contain culturally sensitive or ritually restricted content requires ongoing negotiation rather than resolution by general principle. A further complication arises when communities are divided: elders who value maintenance may conflict with younger members who have already shifted and whose economic interests align with the dominant language.

Questions · Passage 05
17
The passage describes language shift as driven by "economic incentives, formal education, demographic displacement, and perceived prestige advantages." What does the phrase "perceived prestige advantages" add that the other three factors do not?
CORRECT: C Economic incentives, education, and displacement are material and institutional factors. "Perceived prestige advantages" adds that symbolic valuation of languages — independent of concrete material advantage — can drive shift. A community might shift even when the economic incentives are weak if speakers associate the dominant language with modernity and the ancestral language with backwardness. C captures this as a distinct motivational factor. A says it adds a psychological dimension to structural factors, which is true but less precise than C about what specifically "prestige" adds — the symbolic valuation that operates independently of material advantage. B introduces misperception as a correctable error, which is possible but not what the phrase specifically adds. D says it adds voluntary agency, but the third paragraph complicates this by arguing shift choices are constrained even when apparently voluntary.
18
The passage argues that language shift involves choices that are "in form voluntary but in substance constrained." What philosophical distinction does this invoke, and why does it matter for assessing responsibility for language loss?
CORRECT: B The passage specifically invokes "histories of colonial imposition, economic marginalisation, and active suppression that create structural conditions under which individual speakers make choices that are in form voluntary but in substance constrained." This is the formal/substantive freedom distinction: no direct coercion (formal voluntariness) but structural conditions that make alternatives effectively unavailable or very costly (substantive constraint). This matters for responsibility: if the conditions were created by colonial or state action, those actors bear responsibility for language loss even though individuals technically chose to shift. B captures this. A concerns preferences versus interests, which is a different distinction. C concerns collective action problems, which is related but not the formal/substantive distinction the passage draws. D concerns sincere versus strategic choice, which is a different and narrower framing.
19
The passage says the conflict between linguist and community interests "requires ongoing negotiation rather than resolution by general principle." What does this claim imply about the nature of research ethics in fieldwork linguistics?
CORRECT: D The passage contrasts "ongoing negotiation" with "resolution by general principle." The claim implies ethics in this context is relational and dynamic rather than contractual and settled: the tension between comprehensive documentation and community control over sensitive materials cannot be resolved once and for all by a rule but must be renegotiated as the project evolves and circumstances change. D captures this relational, process-oriented character. C is close — contextual judgment about competing claims — but frames it as a judgment task rather than a relational process. A says linguistics is exceptional compared to other disciplines, which is more of a comparative claim than what the statement implies. B says ethics committees are inadequate, which is a practical implication but not the nature of the ethical relationship the claim addresses.
20
The passage says communities may be internally divided between elders who value maintenance and younger members whose economic interests align with the dominant language. What problem does this internal division create for the community-benefit framework in documentation ethics?
CORRECT: C The community-benefit framework assumes "the community" has interests that can be served by documentation. Internal division reveals that the community has conflicting interests: elders' maintenance interests and younger members' economic interests point in opposite directions. "Community benefit" becomes indeterminate because there is no single community interest — there are competing interests — and the framework provides no principle for adjudicating between them. C captures this. A concerns representational consent, which is a related but different problem — the passage's concern is about whose interests count, not just about who can authorise consent. B concerns relevance of materials, which is a practical consideration not the conceptual problem. D concerns intergenerational conflicts, which is one dimension of C but C's formulation is more comprehensive about the indeterminacy of the concept itself.
Passage 5 Score
/4
Linguistics · Total Score
/20
Category 19
Archaeology
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
The Origins of Agriculture: Why Farm? Coercion, Ecology & the Revision of the Neolithic Revolution
Passage Timer
10:00
Read the Passage

Childe's "Neolithic Revolution" framed the transition from foraging to agriculture as a decisive adaptive advance — a technological breakthrough that enabled surplus production, sedentism, population growth, and the eventual emergence of urban civilisation. This narrative has been progressively dismantled by archaeological evidence accumulated since the 1970s. Skeletal analysis of Neolithic populations reveals a systematic deterioration in health relative to preceding forager populations: decreased stature, increased dental caries and enamel hypoplasia indicating nutritional stress, evidence of iron-deficiency anaemia, and higher rates of infectious disease — the last a predictable consequence of sedentary living with domesticated animals and concentrated human waste. Early farmers worked harder and ate worse than their forager predecessors. The question the evidence forces is not "why did agriculture eventually succeed?" but "why did anyone adopt it in the first place?"

Demographic and ecological explanations focus on the conditions under which foraging becomes inadequate. Population pressure on prime foraging territories and the depletion of large game — accelerated in many regions by the terminal Pleistocene extinctions — may have made agriculture a default response to declining returns from wild resources rather than a freely chosen improvement. Scott's "last resort" model extends this: agriculture was not chosen but coerced — either by environmental pressure, by the emergence of early proto-states that captured and settled mobile populations for taxation and corvée, or by the labour requirements of intensive food production that, once initiated, created path dependencies resisting reversal. Agriculture, on this account, is a trap: once entered, the population explosion it enables makes exit impossible, locking societies into a lower-welfare, higher-labour equilibrium from which there is no return.

Recent archaeobotanical and genetic evidence complicates both the revolutionary narrative and the coercion narrative. The transition to agriculture was gradual, polycentric, and regionally specific: multiple independent domestication events occurred across different species and regions over thousands of years, with long periods of mixed forager-cultivator economies. The "full package" narrative — agriculture + sedentism + animal husbandry + pottery occurring simultaneously — is an artefact of earlier coarse-grained chronologies; higher-resolution dating reveals sequential and asynchronous adoption of these practices. The transition's heterogeneity implies that no single model — not revolution, not coercion, not rational choice — captures a phenomenon that appears to have had different primary drivers in different ecological and social contexts.

Questions · Passage 01
1
Scott's "last resort" model argues that agriculture was coerced — by environmental pressure, proto-state capture, or path dependency — rather than freely chosen. Which of the following, if true, most seriously weakens the coercion version of this argument?
CORRECT: A The coercion model argues agriculture was adopted under pressure — subsistence, state capture, or path dependency. Option A provides evidence of voluntary, non-subsistence initiation: the earliest cultivated species were prestige goods for feasting and ritual, not survival staples. If agriculture began as a voluntary social/ritual practice rather than a coerced subsistence response, the coercion model cannot explain the initiation of the practice. B shows gradual rather than abrupt transition — this is consistent with slow coercive pressure or incremental path dependency; it doesn't refute coercion. C shows coexistence without violent displacement — this addresses the proto-state capture mechanism but not environmental pressure or path dependency. D shows contemporary mixed societies resist full agricultural commitment — this would actually support Scott's trap model (people prefer foraging when they can) rather than weakening it.
2
The passage concludes that "no single model — not revolution, not coercion, not rational choice — captures a phenomenon that appears to have had different primary drivers in different ecological and social contexts." Which of the following can be most reliably inferred from this conclusion?
CORRECT: B The passage says no single model captures all instances because different contexts had different primary drivers. The reliable inference is that adequate explanations must be contextually specified — identifying which factors (ecological pressure, social coercion, ritual motivation, path dependency) operated in each specific case. This is not a counsel of despair (A) but a methodological reorientation from universal single-factor to contextual multi-causal explanation. A says the project should be abandoned — the passage identifies the need for better models, not the impossibility of explanation. C says each case is sui generis with no common process — the passage allows for some shared mechanisms (demographic pressure, ecological context) operating at different levels of importance; it doesn't claim each case is entirely unique. D infers the health evidence can't be generalised — the passage discusses the health evidence as a challenge to the Neolithic Revolution narrative; the conclusion about model plurality doesn't specifically undermine the health generalisation, which rests on separate skeletal evidence.
3
The passage presents what might be called the "agricultural adoption paradox": early farmers had worse health, worked harder, and ate less well than foragers — yet agriculture spread and displaced foraging as the dominant subsistence strategy across most of the globe. What is the precise structural feature that makes this paradoxical rather than simply counterintuitive?
CORRECT: B The paradox is self-trapping at the group level: agriculture enabled surplus → population growth → agricultural households could not return to foraging (not enough territory for their expanded population). The mechanism that made agriculture "successful" (population growth) is the same mechanism that locked people in at lower individual welfare. Group-level success and individual-level entrapment are produced by the same causal process. A says it's merely counterintuitive and resolvable by rational choice under constraint — this dissolves the paradox but the point is that even resolving it requires the trap structure B identifies. B names that trap structure precisely. C says the spread is a category error — interesting but this is an observation about how to interpret spread, not the structural feature making it paradoxical. D claims joint inconsistency between health deterioration and population growth — but these are not inconsistent: population growth can occur despite health deterioration if fertility increases faster than mortality, which is exactly what the passage implies (high-calorie but nutritionally poor diets supporting larger but sicker populations).
4
The passage uses skeletal evidence of health deterioration to challenge the Neolithic Revolution narrative. For this evidence to constitute a challenge to the "decisive adaptive advance" claim rather than merely showing agriculture had health costs, which of the following must be assumed?
CORRECT: C For the health evidence to challenge the "decisive adaptive advance" claim, the claim must be interpreted as referring to something health evidence is relevant to — specifically, that it claimed an advance in individual welfare. If Childe's claim was only about collective social complexity (emergence of surpluses, cities, states) and not about individual welfare, then health deterioration is irrelevant to whether agriculture was an "adaptive advance" in the sense Childe intended. The assumption is that the original claim is interpretable as making an individual welfare claim that health evidence addresses. A says health is a proxy for welfare — an assumption about measurement validity, relevant but secondary; even granting that health is the right welfare indicator, the challenge still depends on the foundational assumption C identifies about what the original claim asserts. B says the skeletal indicators are reliable — a methodological assumption about evidence quality, but not the foundational assumption about what the evidence challenges. D requires comparable ecological baselines — relevant to ruling out confounds but not the core assumption about what the health evidence challenges.
Passage 1 Score
/4

P 02
Taphonomy, Middle-Range Theory & the Problem of Archaeological Inference
Read the Passage

Taphonomy — the study of the processes by which organisms and their remains are transformed from the moment of death through fossilisation or archaeological deposition — is the epistemological foundation of the archaeological record's evidential value. All archaeological evidence has passed through a taphonomic filter: death assemblages become bone assemblages through selective preservation; bone assemblages become excavated assemblages through differential survival across soil conditions, moisture, acidity, and temperature; excavated assemblages become analysed assemblages through recovery methods biased toward visible, large, or expected materials. At each stage, a systematic subset of original evidence is destroyed, transformed, or rendered invisible, and the relationship between the surviving record and the past behaviour it is supposed to represent is mediated by processes the archaeologist must reconstruct.

Binford's middle-range theory (MRT) proposed a systematic methodology for bridging the inferential gap between the static archaeological record and the dynamic past behaviours that produced it. MRT generates bridging arguments — empirically grounded principles, derived from ethnoarchaeology, experimental archaeology, and actualistic studies, that specify what material traces are expected given specific behavioural inputs. If bone surface modification patterns in modern hunter-gatherer butchery sites can be reliably distinguished from carnivore damage patterns, the same diagnostic criteria can be applied to archaeological bone assemblages to infer human versus carnivore agency. MRT's epistemic claim is that the inferential gap can be bridged through systematic actualistic research rather than mere interpretive analogy — "warranted inference" rather than analogical reasoning from surface resemblance.

MRT attracted criticism from post-processual archaeologists on two grounds. First, Hodder argued that meaning is context-dependent and cannot be read off material correlates: the same artefact type may have radically different meanings and functions in different social contexts, making universal MRT bridging principles illegitimate. Second, the circularity objection: MRT's bridging principles are derived from studies of contemporary or historically known societies, which are themselves already products of long cultural histories; applying these principles to deep prehistory where social contexts are maximally different from any modern reference group risks projecting contemporary analogues onto radically unlike situations. Neither objection has been judged decisive — MRT remains methodologically productive — but they identify genuine limits on how far warranted inference can be extended across the gap between known and unknown contexts.

Questions · Passage 02
5
Hodder's post-processual objection argues that the same artefact may have radically different meanings in different social contexts, making universal MRT bridging principles illegitimate. Which of the following, if true, most strengthens this objection?
CORRECT: C Hodder's objection is specifically about meaning and social function being context-dependent — the same object has different meanings in different social contexts, so universal bridging principles connecting material forms to meanings are illegitimate. Option C provides the most direct evidence for this: the same physical object (shell, feather, stone) functions as prestige, utilitarian, or ritual depending on social context, with no material correlate distinguishing these. This is exactly the kind of context-dependence of meaning that Hodder invokes. A shows morphological typology is unreliable for technological behaviour — this concerns technological inference rather than meaning/function context-dependence; it strengthens a related but different point about typological methods. B shows butchery diagnostic criteria are confounded by post-depositional processes — this is a taphonomic challenge to MRT's specific bone modification bridging argument, not Hodder's context-dependence-of-meaning objection. D shows interpretive approaches can generate new knowledge — this supports post-processualism generally but doesn't specifically strengthen the context-dependence-of-meaning argument.
6
The passage states that the post-processual objections "identify genuine limits on how far warranted inference can be extended across the gap between known and unknown contexts." Which of the following can be most reliably inferred from this claim?
CORRECT: A The passage says the objections identify "genuine limits" but "neither objection has been judged decisive — MRT remains methodologically productive." The inference is that MRT is valid within certain bounds — specifically where the material-behaviour relationship is physically constrained (bone modification patterns have physical causes independent of meaning). The limits are at the level of higher-level social and symbolic inference where cultural context-dependence is greatest. A captures this gradient of reliability precisely. B says MRT should be abandoned — the passage explicitly says neither objection is decisive and MRT remains productive. C says the gap is unbridgeable — the passage presents both MRT and its limits; "genuine limits" doesn't mean unbridgeable. D restricts warranted inference to historically documented periods — the passage says the objections identify limits on extension across context gaps but doesn't restrict inference to historically documented periods; physical bridging arguments remain valid for deep prehistory at the low-level material correlate level.
7
MRT derives its bridging principles from studies of contemporary or historically known societies in order to make inferences about deep prehistory. The circularity objection identifies a paradox in this procedure. What is that paradox?
CORRECT: B The circularity objection in the passage is specifically that contemporary reference groups are "already products of long cultural histories" — they are the wrong analogy for deep prehistory precisely because they are similar to us and we need bridging principles for maximally different ancient contexts. The paradox is inverse reliability and necessity: MRT is most reliable (shortest context gap) when applied to societies most similar to its reference groups, but most needed (longest context gap, no other evidence) when applied to societies most different. The method's validity and the archaeologist's need for it vary in opposite directions. A describes a different circularity — about presupposing inference capacity — not the circularity the passage specifically identifies. C says MRT is just analogical reasoning — a philosophical objection about the nature of the method, not the specific context-gap circularity the passage describes. D identifies a disciplinary paradox about archaeology's founding purpose — real and interesting but not the specific circularity the circularity objection in the passage identifies.
8
The passage presents MRT and the post-processual critique with roughly equal analytical attention, concluding that neither objection is "decisive" and MRT "remains methodologically productive." What is the most plausible reason the author structures the passage this way rather than endorsing one position?
CORRECT: D The passage's structure is precisely calibrated: MRT is valid and productive (paragraph 2), but the post-processual objections identify genuine limits on how far it can be extended (paragraph 3). The conclusion — "neither objection decisive, MRT remains productive" — is a calibrated assessment that maps where MRT works and where its reliability degrades. The author's purpose is to produce this map, not to endorse or reject either position. A attributes implicit endorsement — the final sentence is a calibrated statement, not an endorsement; "methodologically productive" acknowledges limits. B says the approaches address different questions and are non-competing — partially true but the passage presents them as having a real methodological debate about the same inferential problem (what the record tells us about the past), not as addressing entirely separate questions. C says methodological crisis — the passage is constructive; it identifies limits that define where MRT works, not a crisis that invalidates it.
Passage 2 Score
/4

P 03
The Archaeology of States: Monumentalism, Surplus & the Revisionist Critique
Passage Timer
10:00
Read the Passage

Traditional evolutionary models of state formation — Childe's urban revolution, Service's chiefdom-to-state sequence, Carneiro's circumscription theory — treated the state as the outcome of increasing social complexity driven by surplus production, population growth, and competition for resources. In these models, social hierarchy, specialised administration, and monumental architecture emerge from the functional requirements of organising large populations and managing surplus. The state is what complex societies inevitably become: a destination at the end of an evolutionary trajectory from band to tribe to chiefdom to state, with each stage driven by adaptive pressures toward greater complexity.

David Graeber and David Wengrow's revisionist synthesis, developed in "The Dawn of Everything" (2021), draws on archaeological evidence to challenge nearly every assumption of this framework. Large sedentary settlements existed for thousands of years in Ukraine (the Trypillia mega-sites), Mesoamerica, and the Indus Valley without the emergence of administrative hierarchy or monumental elite display. These sites show evidence of egalitarian organisation, collective labour, and shared infrastructure without evidence of hereditary ruling classes, palace complexes, or administrative recordkeeping. Equally disruptive is the evidence for "seasonal" or oscillating political organisation: ethnographic and archaeological evidence from the American Pacific Northwest to the pre-Columbian Mississippi Valley suggests that the same populations practised radically different political arrangements across different seasons — hierarchical during winter aggregation, egalitarian during summer dispersal. The state form, on this account, is not the destiny of complex societies but a specific political choice that requires explanation and was actively resisted in many contexts.

The archaeological community's reception of Graeber and Wengrow has been mixed. Their synthesis is valued for challenging teleological narratives and recovering the evidence for non-hierarchical complexity. It is criticised for selective use of evidence, for overstating the egalitarianism of specific sites based on absence of elite markers rather than positive evidence of egalitarian organisation, and for conflating two different questions: whether complexity necessarily produces hierarchy (to which the answer appears to be no) and whether the specific trajectory that produced early states was contingent rather than driven by functional pressures (a more contested empirical claim). The absence of palaces does not demonstrate egalitarianism; it demonstrates the absence of palaces.

Questions · Passage 03
9
The passage describes traditional state formation models as treating the state as "what complex societies inevitably become." Which specific assumption does the evidence from Trypillia mega-sites most directly challenge?
CORRECT: B The traditional models predict that large sedentary populations require hierarchical administration as a functional necessity. Trypillia mega-sites — large, sedentary, long-lasting — directly challenge this by showing that large populations can persist for millennia without developing hierarchy or administrative apparatus. The challenge is to the functional necessity claim. B captures this. A concerns surplus production, but the passage does not claim Trypillia sites lacked surplus — it says they lacked hierarchy. C concerns monumental architecture as a state indicator, but the passage says Trypillia sites lack monumental elite display, not that they have monuments without hierarchy. D invokes Carneiro's circumscription theory, but the passage does not make a specific circumscription argument about Trypillia.
10
The passage notes that "seasonal" or oscillating political organisation challenges evolutionary models in a way distinct from the Trypillia evidence. What specific assumption does seasonal oscillation challenge that large non-hierarchical settlements do not?
CORRECT: C The Trypillia evidence challenges the assumption that large populations require hierarchy — it shows non-hierarchical complexity. Seasonal oscillation challenges something different: the irreversibility assumption. If the same population can move between hierarchical and egalitarian arrangements seasonally, then hierarchy is not a stage that locks in once achieved — it is a reversible political choice. This challenges the evolutionary model's directionality. C captures this distinct challenge. A concerns misreading of monumental architecture, which is an interesting methodological point but not the specific challenge seasonal oscillation poses. B says it is the same challenge as Trypillia, which misses the distinction. D concerns population size as a determinant, which is partly right but less precise than C about the reversibility assumption.
11
The passage says "the absence of palaces does not demonstrate egalitarianism; it demonstrates the absence of palaces." What methodological principle does this observation invoke?
CORRECT: B The observation challenges an inferential move: from "no palaces" to "egalitarian organisation." This conflates the absence of one specific material indicator with the absence of the social phenomenon. The principle invoked is that the relationship between material indicator and social organisation must be independently established — you cannot assume that the absence of palaces means the absence of hierarchy, because hierarchy might be expressed through different material channels or because palaces might not have been built for reasons other than the absence of hierarchy. B states this. A says negative evidence is inadmissible, which is too strong — negative evidence can be informative if properly contextualised. C concerns equifinality, which is a related but different principle about multiple causes of the same effect. D makes palaces the only valid indicator, which inverts the problem — the critique is about what absence of palaces does and does not show, not about whether other indicators are valid.
12
The passage identifies two questions that critics argue Graeber and Wengrow conflate. What is the significance of keeping these questions separate?
CORRECT: C The two questions are: (1) does complexity necessarily produce hierarchy? and (2) was the specific trajectory to early states contingent rather than driven by functional pressures? The evidence strongly supports a "no" to (1) — non-hierarchical complexity clearly existed. But (2) is a different and more contested claim about what drove the specific cases where states did form. The significance of keeping them separate is that the evidence for (1) does not automatically support (2): showing that hierarchy is not inevitable does not show that where it did emerge it was not functionally driven. C captures this. A says it protects evolutionary models, which partially follows but C is more precise about what the separation actually does. B says the arguments are self-refuting, which is not what the separation shows. D introduces disciplinary boundary arguments not in the passage.
Passage 3 Score
/4

P 04
Ancient DNA, Population Genetics & the Rewriting of Prehistory
Passage Timer
10:00
Read the Passage

The ancient DNA (aDNA) revolution, enabled by next-generation sequencing techniques that allow recovery of genetic information from degraded skeletal material, has transformed the study of prehistoric population movements. Where traditional archaeology could track material culture — the diffusion of pottery styles, burial practices, metallurgical traditions — without being able to determine whether cultural change reflected migration or adoption by existing populations, aDNA analysis can directly answer questions about ancestry, relatedness, and biological population replacement. The sequencing of genomes from hundreds of prehistoric European skeletal samples has revealed a series of major population replacements that were not anticipated by culture-historical archaeology: a near-complete replacement of Mesolithic hunter-gatherer populations by Neolithic farming populations spreading from Anatolia, followed by a further substantial replacement by Yamnaya-related pastoralists from the Pontic steppe in the Bronze Age.

These findings have generated significant methodological and interpretive debate. On the methodological side, the recovery of aDNA is not neutral: only certain burial types and climatic conditions preserve skeletal material with adequate DNA, creating a sample heavily biased toward elite burials in temperate northern Europe and systematically underrepresenting tropical and subtropical populations whose skeletal material degrades more rapidly. The resulting genomic narrative of European prehistory is therefore a narrative of specific socially and geographically privileged populations rather than a representative account of prehistoric diversity. On the interpretive side, the movement from "genetic ancestry" to "population replacement" requires assumptions about the relationship between genetic and social identity that are not straightforwardly supported: gene flow and cultural change can be decoupled, and the presence of Yamnaya genetic ancestry in a later population does not establish that those individuals identified as Yamnaya, spoke Yamnaya languages, or participated in Yamnaya cultural practices.

The most contested application of aDNA research is the Indo-European language question. The correlation between Yamnaya genetic expansion and the spread of Indo-European languages is compelling to many researchers: the timing and geographic range of Yamnaya-related genetic signatures broadly match models of Indo-European dispersal derived from historical linguistics. Critics note that the correlation between genetic and linguistic ancestry is imperfect at best: language can spread without genetic replacement, as documented in historical cases of elite language adoption; genetic replacement can occur without language replacement; and the linguistic dating of Proto-Indo-European is itself contested. The Indo-European debate illustrates the broader challenge of aDNA research: genetic data can answer questions about biological ancestry with increasing precision, while questions about language, identity, and culture require evidence that genetics alone cannot provide.

Questions · Passage 04
13
The passage says aDNA analysis can answer questions about biological ancestry "directly," while traditional archaeology could not distinguish migration from cultural adoption. What does "directly" mean in this context, and what does it not mean?
CORRECT: C The passage's use of "directly" is in the context of answering questions about biological ancestry — aDNA bypasses the culture-history inference problem by providing biological information directly. But the passage's second and third paragraphs make clear that aDNA cannot directly answer questions about social identity, language, or cultural practices. C captures both what "directly" means (answering the biological question) and what it does not mean (unmediated access to social and cultural identity). A says "directly" means without inferential steps, which overstates — aDNA analysis involves substantial statistical inference and sampling assumptions. B says aDNA is more reliable than cultural evidence, which is a related claim but not what "directly" specifically means. D interprets "directly" as contemporaneous record, which is a different sense of direct.
14
The sampling bias in aDNA research — toward elite burials in temperate northern Europe — creates which specific epistemological problem for the genomic narrative of European prehistory?
CORRECT: B The passage says the sample is biased toward "elite burials in temperate northern Europe," making the genomic narrative one of "specific socially and geographically privileged populations rather than a representative account of prehistoric diversity." The epistemological problem is representativeness: conclusions about "European prehistoric populations" are actually conclusions about a non-representative subset. B captures this directly. A concerns statistical power, which is a related but different epistemological concern — the passage identifies the representativeness problem, not primarily a power problem. C introduces temporal bias toward later periods, which is not what the passage says. D says replacements may be elite-level phenomena, which is an interesting implication but is a substantive claim the passage does not make — it identifies the sampling problem without specifying its effect on the replacement findings.
15
The passage identifies the Indo-European debate as illustrating "the broader challenge of aDNA research." What is that broader challenge?
CORRECT: D The passage explicitly states: "genetic data can answer questions about biological ancestry with increasing precision, while questions about language, identity, and culture require evidence that genetics alone cannot provide." The broader challenge is the gap between what aDNA can answer (biological ancestry) and what prehistorians most want to know (language, identity, culture). D states this. A concerns dating precision, which is a technical issue not the broader challenge the passage identifies. B concerns replication standards, which is a methodological concern not mentioned in this passage. C concerns public misappropriation, which is a real concern but not the disciplinary challenge the passage identifies.
16
The correlation between Yamnaya genetic expansion and Indo-European language spread is described as "compelling to many researchers." Critics note it is imperfect. What would make the correlation more compelling as evidence for the genetic hypothesis of Indo-European origins?
CORRECT: B The critics' objections are: language can spread without genetic replacement; genetic replacement can occur without language change. What would make the correlation more compelling is evidence that addresses these decoupling possibilities: if regions with early Indo-European attestation specifically and consistently have higher Yamnaya ancestry than contemporaneous non-IE regions, and this holds at the non-elite level (addressing the elite adoption concern), the correlation becomes harder to explain without a genetic-linguistic link. B addresses both the decoupling concern and the elite sampling bias. A concerns the speed of linguistic spread, which addresses the elite adoption alternative but not the genetic-without-language decoupling. C concerns vocabulary evidence, which is linguistic not genetic evidence and doesn't address the correlation specifically. D concerns osteological markers, which is behavioural evidence not directly about the genetic-linguistic correlation.
Passage 4 Score
/4

P 05
Cognitive Archaeology, Symbolic Behaviour & the Origins of the Modern Human Mind
Passage Timer
10:00
Read the Passage

The question of when modern human cognitive capacities — symbolic thought, language, extended planning, cumulative culture — emerged is among the most contested in paleoanthropology. The "cognitive revolution" model associated with Richard Klein proposed that modern behaviour erupted suddenly around 50,000 years ago in Europe, evidenced by the sudden appearance of cave art, personal ornaments, musical instruments, and blade tools in the Upper Palaeolithic archaeological record. Klein attributed this behavioural modernity to a neurological mutation that triggered language capacity, which in turn enabled the cultural explosion documented in the record. The model aligned with an anatomical-behavioural package: anatomically modern humans are defined by skeletal features appearing by around 200,000 years ago in Africa, but behaviourally modern humans only appear much later.

The African origins model, developed in response to accumulating Middle Stone Age (MSA) evidence from southern and eastern Africa, challenges the sudden revolution narrative. MSA sites dated to 100,000 years ago and earlier contain ochre use, shell beads used as personal ornaments, engraved geometric patterns, compound adhesives, and long-distance exchange of materials — all behaviours previously associated with Upper Palaeolithic Europe. Howiesons Poort and Still Bay assemblages show blade production and hafted projectile technology appearing, disappearing, and reappearing over tens of thousands of years rather than emerging as a single continuous package. The key interpretive implication is that behavioural modernity was not a sudden neurological event but a gradual accretion of capacities that emerged, were maintained, and were sometimes lost depending on social and environmental conditions — a mosaic development rather than a revolution.

The debate over symbolic behaviour raises a deeper methodological problem: how do we identify cognitive capacity from material remains? An ochre-stained stone may represent symbolic communication, body decoration, hide preparation, or adhesive production. A shell bead may be personal ornament or functional container. The inferential challenge is that the same artefact can be consistent with multiple interpretations that require quite different cognitive capacities — and the identification of "symbolic" behaviour typically requires ruling out all non-symbolic functional alternatives, a standard that is very difficult to meet unambiguously. The conservatism of cognitive archaeological inference reflects genuine epistemological constraints rather than excessive scepticism: cognitive capacity leaves no direct material trace, and the material evidence for it is always inferential and typically underdetermined.

Questions · Passage 05
17
Klein's "cognitive revolution" model rests on a correlation between an anatomical event and a behavioural event separated by 150,000 years. What problem does this temporal gap create for the model?
CORRECT: C The temporal gap is doubly problematic for Klein's model. First, if the mutation responsible for modern cognition occurred with anatomical modernity at 200,000 years ago, why did behavioural modernity not appear until 50,000 years ago? This requires explanation — why did a cognitive capacity lie dormant for 150,000 years? Second, the African MSA evidence shows modern behaviours much earlier than 50,000 years ago, suggesting Klein's revolution is an artefact of focusing on European Upper Palaeolithic evidence. C combines both problems. A identifies only the explanatory gap. B suggests a sampling artefact, which is the African origins implication but doesn't include the explanatory gap problem. D invokes cranial morphology evidence against Klein, which is an independent empirical challenge not about the temporal gap specifically.
18
The MSA evidence shows blade technology "appearing, disappearing, and reappearing" over tens of thousands of years. What does this pattern imply about the relationship between cognitive capacity and material culture?
CORRECT: B The passage says the technology "appearing, disappearing, and reappearing" implies "mosaic development" rather than revolution, and that behaviours "were maintained, and were sometimes lost depending on social and environmental conditions." This implies that material culture expression is contingent on conditions rather than being a direct expression of cognitive capacity. Absence of blades does not mean absence of blade-making ability. B captures this decoupling of capacity and material expression. A says cognitive capacity itself fluctuated, which the passage explicitly does not claim — it attributes variation to social and environmental conditions. C says cumulative transmission was not established, which is a possible interpretation but not what the passage implies. D invokes genetic distinctiveness, which is not the passage's argument.
19
The passage says identifying symbolic behaviour requires "ruling out all non-symbolic functional alternatives." Why does this standard make identifying symbolic behaviour particularly difficult compared to identifying technological capacity?
CORRECT: C Technological capacity is inferred positively: a blade of a certain type could only have been produced by someone with the relevant cognitive and motor capacity, so the artefact's existence establishes the capacity. Symbolic behaviour is inferred negatively: you must rule out every non-symbolic explanation — functional, accidental, other — and the difficulty is that this list is never definitively closed. There may always be functional alternatives not yet considered. C captures this asymmetry between positive and negative inference. A concerns experimental replication as a method for technology, which is related but not the fundamental asymmetry. B concerns mental states versus physical actions, which is related but framed as a different distinction. D concerns ethnographic analogies, which is a methodological issue for symbolic identification but not the core asymmetry the passage identifies.
20
The passage concludes that "cognitive capacity leaves no direct material trace, and the material evidence for it is always inferential and typically underdetermined." What does this conclusion imply about the project of cognitive archaeology as a discipline?
CORRECT: C The passage says the conservatism of cognitive archaeological inference "reflects genuine epistemological constraints rather than excessive scepticism." The gap between material evidence and cognitive inference is "always inferential and typically underdetermined." This implies the discipline must operate with principled conservatism — presenting conclusions as hedged inferences rather than direct readings — not because researchers are overly cautious but because the epistemological structure of the evidence requires it. C captures this. A says the discipline is impossible and should be abandoned, which the passage explicitly rejects by saying conservatism reflects "genuine epistemological constraints rather than excessive scepticism" — the discipline remains worthwhile but must be epistemologically humble. B says convergent evidence is required, which is a methodological implication but doesn't capture the discipline-level implication about how conclusions should be presented. D prescribes ancient genome analysis, which is one possible supplementary approach but not what the conclusion implies about cognitive archaeology itself.
Passage 5 Score
/4
Archaeology · Total Score
/8
Category 20
Education
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Constructivism vs. Direct Instruction: The Knowledge Debate & the Evidence Problem
Read the Passage

The constructivist tradition in education — tracing through Dewey, Piaget, and Vygotsky to contemporary "inquiry learning" and "discovery-based" pedagogies — holds that durable knowledge is actively constructed by learners through engagement with problems, rather than passively received through direct instruction. On this view, students who discover principles through investigation develop deeper conceptual understanding and more flexible transfer capacity than those who receive those principles through explicit teaching. The pedagogical implication is that the teacher's role is to create conditions for discovery rather than to transmit content; productive struggle, guided exploration, and collaborative construction of meaning are preferred over worked examples, explicit instruction, and drill-based consolidation.

The empirical research on learning outcomes does not support the strong constructivist position. Sweller's cognitive load theory provides a mechanistic explanation: novice learners lack the organised knowledge schemas in long-term memory that would allow them to benefit from minimally guided exploration. When working memory is occupied with managing the complexity of an open-ended problem, there are insufficient resources for schema formation — the exploration produces cognitive overload rather than learning. Explicit instruction, by contrast, provides the schemas directly, freeing working memory for deeper processing. The worked example effect — students who study worked examples outperform those who solve equivalent problems independently — is among the most robustly replicated findings in educational psychology. Kirschner, Sweller, and Clark's influential 2006 review concluded that decades of research on minimally guided instruction demonstrate its systematic inferiority to direct instruction for novice learners.

Constructivist defenders offer three responses. First, the evidence comes primarily from laboratory-style studies of narrow skill acquisition, not from naturalistic classroom studies of deep conceptual understanding. Second, the direct instruction advantage is strongest for novice learners but may reverse for experts — the "expertise reversal effect" — who are better served by reduced instruction that avoids redundant information overload. Third, the pedagogical goal of direct instruction critics — transfer and flexible application — is not measured by the performance metrics used in cognitive load studies, which typically measure immediate recall and near-transfer. The debate is thus partially about what counts as learning: if the criterion is efficient acquisition of defined content, direct instruction wins; if the criterion is deep flexible understanding enabling novel problem-solving, the evidence is less clear.

Questions · Passage 01
1
Cognitive load theory predicts that novice learners benefit more from direct instruction than from minimally guided exploration because exploration overloads working memory before schemas can form. Which of the following, if true, most seriously weakens this argument?
CORRECT: A Cognitive load theory predicts exploration → cognitive overload → poor schema formation → inferior learning. Option A introduces the "productive failure" model: exploration before instruction activates prior knowledge structures and generates errors whose subsequent correction through direct instruction produces superior learning compared to direct instruction alone. This shows exploration can enhance rather than impede schema formation when sequenced appropriately — the exploration produces cognitive activity that prepares the learner for instruction rather than simply overloading working memory. B argues the worked example advantage is a time-on-task artefact — a methodological objection but it doesn't specifically challenge the cognitive load mechanism; it challenges the experimental design. C confirms higher cognitive load in exploration — this actually supports the cognitive load theory prediction. D shows attenuation in classrooms — a real finding but this weakens the generalisability claim, not the mechanistic argument about why exploration impedes novice learning.
2
The passage concludes that the debate is "partially about what counts as learning" — efficient content acquisition vs. deep flexible understanding. Which of the following can be most reliably inferred from this conclusion?
CORRECT: B The passage says the criterion matters: if you measure content acquisition, direct instruction wins; if you measure deep flexible understanding, "the evidence is less clear." The reliable inference is that any claim about the evidence "favouring" one approach is implicitly indexed to a particular outcome measure, and that measure needs to be specified. B captures this: the evidence claim is incomplete without specifying which outcomes were measured. A says the evidence can't adjudicate — too strong; the evidence can adjudicate for specific outcome measures, the problem is that different outcome measures may give different verdicts. C says it's a values debate beyond evidence — partially true (what learning is *for* involves values), but the passage also says "the evidence is less clear" for flexible understanding — an empirical claim, not a statement that evidence is irrelevant. D attributes an implicit ranking and endorses constructivism — the passage makes no evaluative judgment about which outcome is more important.
3
The "expertise reversal effect" — that direct instruction advantages reverse for expert learners — creates a structural problem for the direct instruction position. What is that problem?
CORRECT: B The structural problem is self-undermining: direct instruction efficiently builds novice-to-competent skill, but expert learning requires reduced instruction and greater autonomy — the independent learning mode that constructivism champions. If education's goal is expert learners who eventually need constructivist pedagogy, and if students are always taught by direct instruction throughout their education, they may never develop the self-directed learning capacity that experts require. The efficient method for early stages may prevent the development of the competencies required at later stages. A says direct instruction has a ceiling effect — partially true but it frames the problem as a limitation at the top rather than as the structural tension between early and late-stage optimal pedagogy. C says the reversal is inconsistent with cognitive load theory — actually CLT predicts the reversal: experts have full schemas so additional explicit instruction is redundant information that increases rather than reduces cognitive load. D says it's a false dichotomy — a valid observation but it identifies the resolution rather than the structural problem the reversal creates for the direct instruction position.
4
Kirschner, Sweller, and Clark's 2006 review concludes that "decades of research on minimally guided instruction demonstrate its systematic inferiority to direct instruction for novice learners." For this conclusion to follow from the reviewed studies, which of the following must be assumed?
CORRECT: B The conclusion is a general one about minimally guided instruction. For this to follow from the reviewed studies, those studies must be representative of minimally guided instruction generally — not just poorly designed, under-resourced, or extreme versions of it. If the studies only captured the weakest implementations of inquiry learning (pure discovery without scaffolding, unclear goals), the conclusion "minimally guided instruction is systematically inferior" overgeneralises from atypical instances. A says outcome measures must be valid for constructivist goals — but the passage already identifies this limitation explicitly (the measures are immediate recall and near-transfer, not deep flexible understanding). The conclusion as stated is about novice learning measured on those criteria — so the outcome measure validity is acknowledged as a scope limitation, not an assumption required for the stated conclusion. C says learners must be genuine novices — a real methodological concern but this would challenge specific studies' validity, not the generalisability of the conclusion across appropriately designed studies. D says tasks must be representative — the passage already acknowledges this as a limitation; the conclusion is specifically about "narrow skill acquisition" contexts.
Passage 1 Score
/4

P 02
Curriculum Theory, Cultural Capital & the Hidden Curriculum of Knowledge Selection
Read the Passage

Bourdieu and Passeron's theory of cultural reproduction holds that educational systems function primarily not to equalise opportunity but to reproduce social advantage by legitimating the cultural capital of dominant classes as universal educational value. The curriculum — what counts as valid school knowledge — is not a neutral selection of the most important human knowledge but a socially arbitrary selection whose apparent objectivity conceals its class basis. Students who bring to school the cultural capital of dominant groups — familiarity with high-culture literary references, particular modes of formal reasoning, specific aesthetic sensibilities — are systematically advantaged by a curriculum that treats these dispositions as natural academic ability. The educational system rewards social advantage while misrecognising it as intellectual merit: this misrecognition is the mechanism of symbolic violence through which class hierarchy is reproduced without appearing to do so.

Michael Young's "knowledge of the powerful" versus "powerful knowledge" distinction challenges cultural reproduction theory from within the sociology of education. Young distinguishes two separate questions: who controls curriculum selection (the sociology of the curriculum) and what the epistemic properties of the selected knowledge are (the sociology of knowledge). Cultural reproduction theory answers the first question — dominant classes control selection — but conflates it with the second. Powerful knowledge — systematically organised, theoretically coherent, generalisable beyond context — is not just the knowledge of the powerful; it is epistemically distinctive and provides access to forms of understanding unavailable through everyday experience. Denying students access to powerful knowledge in the name of cultural democracy may reproduce disadvantage more effectively than transmitting it: the student who leaves school without access to theoretical physics, formal mathematics, or canonical literary analysis is not liberated from class knowledge — she is confined to the knowledge available in her immediate social context.

The tension between the two positions is not fully resolvable by distinguishing their targets. Even if Young is correct that powerful knowledge has genuine epistemic properties, Bourdieu's question about who defines the selection of what counts as "powerful" remains. The curriculum is never the totality of available theoretical knowledge; it is always a selection, and that selection reflects the interests and judgments of those with institutional authority to make it. Young's epistemic argument may justify teaching formal mathematics; it does not by itself determine which mathematical topics, in what sequence, with what pedagogical framing, selected from whose intellectual tradition. The two critiques thus operate at different but intersecting levels of the curriculum question and neither renders the other obsolete.

Questions · Passage 02
5
Bourdieu and Passeron argue that educational systems reward social advantage while misrecognising it as intellectual merit — cultural capital of dominant classes is treated as natural academic ability. Which of the following, if true, most strengthens this argument?
CORRECT: C The core of Bourdieu's argument is misrecognition — social advantage being misread as intellectual merit by educational evaluators. Option C provides the most direct evidence of this mechanism: teachers assign higher "academic potential" ratings to higher-SES students while controlling for actual measured performance. This is exactly misrecognition in action — the evaluator reads social background as ability. A shows early cultural practices predict school readiness — strong evidence that cultural capital matters, but it shows a real mechanism for advantage, not the misrecognition mechanism specifically. B shows demanding curricula amplify gaps — consistent with reproduction theory but through a different mechanism (difficulty as a filter) rather than misrecognition of advantage as merit. D shows academic register is socially acquired — relevant to explaining how cultural capital advantages students but this is about input, not the misrecognition mechanism in evaluation.
6
Young argues that denying students access to powerful knowledge "may reproduce disadvantage more effectively" than transmitting it. Which of the following can be most reliably inferred from this claim?
CORRECT: B Young's claim is that withholding powerful knowledge from disadvantaged students — in the name of cultural democracy or anti-elitism — confines them to locally available knowledge and denies them the epistemic tools for social mobility. A curriculum reform that replaces theoretical with everyday knowledge is precisely this error. B captures the inference: well-intentioned "democratising" reform that removes powerful knowledge may harm the students it aims to help. A says cultural reproduction theory is factually incorrect — Young's argument doesn't refute the reproduction analysis; it challenges what follows from it for curriculum policy. C says Young opposes all reform — the passage says Young supports teaching powerful knowledge; opposing the removal of it is not the same as opposing reform generally. D says transmission of powerful knowledge is sufficient to equalise outcomes — too strong; Young argues for powerful knowledge access, not that it alone closes achievement gaps.
7
Bourdieu argues the curriculum rewards cultural capital of dominant classes while misrecognising it as universal merit. Young argues that powerful knowledge has genuine epistemic properties that transcend class. The passage says the tension between them "is not fully resolvable by distinguishing their targets." What makes this tension specifically unresolvable by target-distinction rather than simply a disagreement?
CORRECT: B The passage says Young's argument "may justify teaching formal mathematics; it does not by itself determine which mathematical topics, in what sequence, with what pedagogical framing, selected from whose intellectual tradition." Once you move from justifying *a category* of knowledge (powerful) to selecting *specific content* within it, Bourdieu's question — who selects, on what basis, reflecting whose interests — re-enters at the selection level. The two critiques address different questions at the level of principle but converge on the same problem at the level of practice. A says both are empirically false — the passage doesn't make this claim; it presents both as generating genuine insights that neither renders the other obsolete. C says they answer different questions and can't be compared — the passage explicitly says they "operate at different but intersecting levels," not that they're incomparable. D says they're jointly incompatible — interesting but too strong; the passage implies a more nuanced relationship where each has a domain of validity.
8
Bourdieu and Passeron's argument that the curriculum is "socially arbitrary" requires which of the following assumptions in order to constitute a critique of the educational system rather than merely a sociological description of how curricula are selected?
CORRECT: A Bourdieu's argument is specifically about symbolic violence — the mechanism by which arbitrary selection is misrecognised as natural merit. For this to be a critique (not just a description), the misrecognition must be the operative mechanism: if everyone knew the curriculum was a class-based arbitrary selection, there would be no symbolic violence — the arbitrary selection would simply be visible as a political choice about whose culture to privilege. The critique depends on the misrecognition converting contingent advantage into apparent natural merit. B requires a non-arbitrary alternative to exist — but Bourdieu's critique doesn't require this; the critique of a practice doesn't presuppose a perfect alternative. Young's point that all curricula involve selection doesn't undermine Bourdieu if misrecognition is the key mechanism. C requires the critique to be capitalism-specific — Bourdieu's theory is not presented as capitalism-specific; it concerns any educational system where cultural capital is misrecognised as natural ability. D requires cultural capital to have no genuine educational value — too strong; Bourdieu doesn't claim cultural capital is worthless, only that it is a social product being misread as natural capacity.
Passage 2 Score
/4

P 03
Assessment, Standardised Testing & the Measurement-Learning Trade-off
Passage Timer
10:00
Read the Passage

Standardised testing is simultaneously the dominant mechanism for measuring educational attainment and the most contested element of educational policy. The case for standardisation rests on three premises: that assessments should be comparable across students, schools, and systems; that comparability requires uniform conditions and content; and that measurable outcomes provide the accountability signals needed for educational improvement. International assessments such as PISA, TIMSS, and PIRLS have transformed educational policy by creating cross-national performance data that enables governments to benchmark their systems and draws attention to high-performing alternatives. Domestically, high-stakes testing creates incentives for schools and teachers to prioritise measurable outcomes, enabling resource allocation decisions based on evidence of effectiveness rather than reputation or tradition.

The critique of standardised testing operates at multiple levels. At the measurement level, tests measure what is measurable — defined content, procedural skills, recall — and necessarily exclude what is not: creativity, collaborative problem-solving, metacognitive awareness, dispositions toward learning. This is not merely a technical limitation but a structural one: the more a test is used to measure and incentivise, the more it becomes the target of instruction, and what cannot be tested disappears from the curriculum through resource competition. Campbell's Law — that any social indicator used for social decision-making will corrupt the indicator — predicts exactly this dynamic: the indicator (test score) becomes the goal, decoupled from the underlying educational value it was meant to represent.

The accountability paradox is the deepest structural problem. High-stakes testing creates pressure for schools to improve measured outcomes; the most direct route to improving measured outcomes is teaching to the test; teaching to the test produces score improvements that do not generalise to broader measures of learning. The Houston Miracle — dramatic test score improvements in Houston schools in the 1990s credited to Superintendent Rod Paige — failed to replicate in subsequent national assessments, suggesting score gains were test-specific rather than representing genuine learning gains. This does not mean that testing is inherently counter-productive; it means that the measure used to evaluate learning must be sufficiently broad that improvement on the measure reflects improvement in the underlying capacity rather than improvement in test-taking performance specifically. The challenge is designing assessments that are simultaneously standardisable, broad, and resistant to coaching — a combination that is technically difficult and has not yet been achieved at scale.

Questions · Passage 03
9
Campbell's Law predicts that any social indicator used for decision-making will "corrupt" the indicator. In the context of standardised testing, what does this corruption specifically consist of?
CORRECT: C The passage says Campbell's Law predicts that "the indicator (test score) becomes the goal, decoupled from the underlying educational value it was meant to represent." The corruption is specifically the decoupling of measure from what it measures: test scores stop functioning as reliable indicators of learning because instruction is optimised for test performance rather than for learning. C captures this decoupling precisely. A concerns data falsification, which is a different form of corruption — Campbell's Law is about structural incentive corruption, not dishonesty. B describes test-taking strategy substitution, which is part of the phenomenon but frames it as a process rather than the nature of the corruption itself. D describes curriculum narrowing, which is a consequence of the incentive structure but not what "corrupting the indicator" specifically means.
10
The passage describes the exclusion of creativity, collaborative problem-solving, and metacognitive awareness from standardised tests as "not merely a technical limitation but a structural one." What distinction is being drawn?
CORRECT: B The passage says "the more a test is used to measure and incentivise, the more it becomes the target of instruction, and what cannot be tested disappears from the curriculum." A merely technical limitation would be that tests cannot currently measure creativity — a problem of assessment design. The structural limitation is that using tests for high-stakes decisions creates resource competition that actively suppresses what tests cannot capture. The limitation is not just that creativity is unmeasured; it is that the incentive structure causes creativity to be de-prioritised in instruction. B captures this active, dynamic suppression. A defines structural as inherent to standardisation itself, which is related but misses the dynamic incentive-driven suppression. C concerns reliability vs validity, which is a different technical distinction. D concerns individual vs aggregate effects, which is not the distinction the passage draws.
11
The Houston Miracle example is cited as evidence for what claim, and what does the passage carefully avoid claiming about it?
CORRECT: C The passage uses Houston to support the claim that "score gains were test-specific rather than representing genuine learning gains." It then explicitly says "this does not mean that testing is inherently counter-productive" — carefully limiting the conclusion to a design challenge rather than a categorical indictment of testing. C captures both what the example is cited for and what the passage avoids claiming. A says the passage avoids claiming fraudulent intent, but the passage's caveat is about counter-productivity of testing, not about intent. B says the passage avoids claiming gains can never reflect learning, but the Houston example is about a specific case, and the caveat is about design, not about whether gains can ever be genuine. D says the passage avoids claiming dishonest practices, which is related but misidentifies what the passage's careful caveat specifically is.
12
The passage describes the ideal assessment as "simultaneously standardisable, broad, and resistant to coaching." Why does this combination present a genuine design challenge rather than simply a technical difficulty to be resolved with more resources?
CORRECT: B A genuine design challenge — rather than a technical difficulty — exists when the requirements are in structural tension such that optimising for one necessarily compromises another. Standardisability requires defined scorable tasks; breadth requires assessing context-dependent complex capacities that resist standardisation; coaching resistance requires that prior preparation on the specific task cannot substitute for the underlying capacity, which conflicts with task definition. B identifies these structural tensions. A attributes the difficulty to institutional vested interests, which is a political economy argument not about the design tension. C says the challenge is political, which is a separate claim. D says the challenge is scaling, which is a practical constraint that more resources could in principle address — making it a technical rather than genuine design challenge.
Passage 3 Score
/4

P 04
Teacher Quality, Effective Pedagogy & the Limits of Replication
Passage Timer
10:00
Read the Passage

Research on teacher effectiveness consistently finds that teachers vary enormously in their impact on student learning outcomes, and that this variation is not adequately explained by observable characteristics such as qualifications, experience beyond the first few years, or professional development participation. Value-added modelling — the statistical approach that attempts to isolate the teacher's contribution to learning growth after controlling for prior achievement and other factors — has been used to argue that effective teachers produce substantially better outcomes than ineffective teachers, with estimated effects on lifetime earnings running to hundreds of thousands of dollars per classroom per year of highly effective teaching. The policy implication drawn from these findings — that identifying and replacing ineffective teachers with effective ones would produce transformative educational improvement — has been influential but contested.

The value-added model (VAM) has attracted substantial methodological criticism. VAMs typically control for observable student characteristics but cannot control for selection effects that are invisible in the data: effective teachers may teach students with more engaged families, more stable home environments, or stronger peer groups, even after prior test score controls. The measurement of teacher effectiveness from test score growth in a small number of subjects in a small number of years is highly unstable: teachers whose VAM scores place them in the top quintile in one year are frequently in a different quintile the following year, making classification unreliable enough to raise serious questions about using VAM scores for individual employment decisions. The American Statistical Association, in a formal statement, noted that VAM estimates have large standard errors, are sensitive to model specification, and should not be the primary basis for high-stakes decisions about individual teachers.

A deeper problem is the replication challenge. Even granting that highly effective teachers exist and can be identified, the characteristics that make them effective resist codification into teachable practices. Observational studies of exemplary teachers identify patterns — warmth combined with high expectations, real-time responsiveness to student confusion, ability to construct meaningful explanations — that are recognisable after the fact but are not straightforwardly translatable into professional development programmes that produce the same results at scale. The history of educational reform is littered with practices identified in high-performing schools or teachers that failed to replicate when adopted system-wide, because the practices were embedded in social, relational, and contextual conditions that systematic adoption could not reproduce. Teacher quality may be as much about who teachers are as about what they do — a conclusion that implies limits on how much policy can achieve through training and incentive design alone.

Questions · Passage 04
13
The passage notes that VAM scores are unstable across years: teachers in the top quintile in one year are frequently in a different quintile the next. What does this instability specifically imply about what VAMs are measuring?
CORRECT: C High year-to-year instability in VAM scores implies that the scores are driven significantly by noise — factors that vary year to year independent of teacher quality, such as which students are assigned, contextual factors, and measurement error from small samples. This does not necessarily mean teachers don't vary in quality; it means VAMs cannot reliably measure that quality at the individual level. C captures this signal-to-noise interpretation. A says effectiveness itself varies year to year, which is possible but the passage's concern is about measurement reliability, not genuine effectiveness fluctuation. B says VAMs measure the wrong thing, which is one possible explanation but C is more precise about the noise-versus-signal interpretation. D says no teacher ranking is valid, which overstates from the instability finding to a categorical conclusion about ranking.
14
The passage describes effective teaching characteristics — warmth with high expectations, real-time responsiveness, ability to construct explanations — as "recognisable after the fact but not straightforwardly translatable into professional development." What does this observation imply about the nature of teaching expertise?
CORRECT: D The observation that effective characteristics are recognisable but not translatable into professional development aligns with Polanyi's concept of tacit knowledge: expertise that can be observed and recognised but cannot be fully articulated into explicit rules for transmission. Professional development operates through explicit instruction; tacit expertise resists this transmission. D captures this implication and its consequence for training design. A says it is primarily personality, but the passage's conclusion is more nuanced — "who teachers are" implies character and relational capacity, but also includes how they practise, which is not purely personality. B says observational methods are inadequate, but the problem is not the method's adequacy for observation but the translational gap from observation to training. C says the practices are known and implementation is the challenge, which inverts the problem — the passage says practices are recognisable but their production cannot be reduced to rules.
15
The passage says effective practices identified in high-performing schools "failed to replicate when adopted system-wide" because they were "embedded in social, relational, and contextual conditions that systematic adoption could not reproduce." What type of problem does this describe?
CORRECT: B The passage specifically says practices "were embedded in social, relational, and contextual conditions that systematic adoption could not reproduce." This is a context-dependence problem: the practice is inseparable from its context, and transplanting the practice without the context produces form without mechanism. B captures this. A concerns scalability due to resource constraints, but the passage attributes failure to contextual conditions, not resource limitations. C concerns selection bias in identifying what works, which is a related but different problem about attribution rather than replication. D concerns implementation fidelity, but the passage says the problem is that the conditions cannot be reproduced, not that implementation is insufficiently careful.
16
The passage concludes that teacher quality "may be as much about who teachers are as about what they do." What does this conclusion imply about the limits of education policy?
CORRECT: C The conclusion is that quality is partly constituted by who teachers are — character, relational capacity, dispositions — rather than just what they do (teachable behaviours). Policy instruments — training, incentives, performance management — are designed to change what people do, and have less traction on who people are. C captures this limit on policy levers. A prescribes selection over training, which is a possible policy implication but the passage is making a point about limits rather than prescribing a solution. B says improvement is impossible, which overstates — the passage says policy's reach is limited, not that it has zero impact. D attributes quality to social and economic conditions determining entry, which is a different and broader claim about teacher labour markets not what the who-versus-what conclusion specifically implies.
Passage 4 Score
/4

P 05
Educational Inequality, Opportunity & the Limits of School-Based Intervention
Passage Timer
10:00
Read the Passage

Educational attainment is strongly correlated with socioeconomic background across all well-studied education systems, and this correlation has proven remarkably resistant to educational policy interventions designed to reduce it. The Coleman Report (1966) — one of the largest social science studies ever commissioned by the US government — found that school characteristics (facilities, curriculum, teacher qualifications) explained a surprisingly small proportion of variance in student achievement, while family background explained the largest share. This finding has been partially replicated and partially contested in subsequent decades, but its core implication remains influential: the factors that most powerfully predict educational outcomes are predominantly outside the direct control of schools.

The mechanisms by which socioeconomic advantage translates into educational advantage are multiple and mutually reinforcing. Material resources enable private tutoring, enrichment activities, and residential sorting into better-resourced school districts. Cultural capital — familiarity with the conventions of academic discourse, the implicit curriculum of how to interact with teachers and institutions — is transmitted by parents who themselves succeeded in educational systems. Social capital — networks of contact, advice, and advocacy — provides navigation capacity and access that less well-connected families cannot replicate. The Matthew Effect — the tendency of those who have to accumulate more, while those who lack fall further behind — operates cumulatively: early language and cognitive advantages compound across years of schooling into increasingly divergent trajectories that are very difficult to reverse after the early years.

The policy debate divides between what might be called the "school-fix" position — that sufficiently well-designed schools, teachers, and curricula can substantially close attainment gaps even without addressing underlying inequality — and the "structural" position — that attainment gaps are downstream effects of income inequality, housing segregation, and differential family resources that schools cannot overcome without structural social reform. The evidence on even the most successful school-based interventions suggests that they can reduce gaps measurably but rarely close them, and that effects often fade when interventions end. This does not entail that school improvement is futile; it entails that expecting schools alone to compensate for steep socioeconomic gradients is asking education to solve a problem that education did not create and cannot alone fix.

Questions · Passage 05
17
The Coleman Report finding that school characteristics explain less variance in achievement than family background has been "partially replicated and partially contested." What does this qualified conclusion imply about the report's policy relevance?
CORRECT: C The passage says the finding has been "partially replicated and partially contested" but its "core implication remains influential." This implies the broad directional finding — family background matters enormously — survives contestation even if the precise variance decomposition is context-dependent. C captures this: the finding is robust enough for structural policy concerns without settling the quantitative debate about school effects. A says the report is outdated, which the passage contradicts by saying its core implication remains influential. B says contested findings cannot support policy, which is too strong — contested does not mean uninformative. D says the finding is system-specific, which is possible but the passage does not make this attribution.
18
The passage describes the Matthew Effect as operating "cumulatively." What does cumulative operation specifically imply for the timing of educational interventions?
CORRECT: B The passage says early advantages "compound across years of schooling into increasingly divergent trajectories that are very difficult to reverse after the early years." Cumulative operation means advantages and disadvantages grow over time through compounding. This implies early interventions address smaller deficits before compounding magnifies them and have longer to compound their own effects. B captures this. A says interventions are equally effective at any age, which contradicts the cumulative logic. C says late interventions are futile, which overstates — the passage says "very difficult to reverse," not impossible. D says interventions must be continuous, which is a possible implication of cumulative effects but not what the passage specifically implies about timing.
19
The passage distinguishes the "school-fix" from the "structural" position. Which of the following most accurately characterises what divides them?
CORRECT: C The passage defines the school-fix position as believing schools can "substantially close attainment gaps even without addressing underlying inequality" and the structural position as holding that gaps "are downstream effects of income inequality, housing segregation, and differential family resources that schools cannot overcome without structural social reform." C captures this division: whether school-internal policy versus structural reform outside education is required. B is close but frames it as school quality rather than as a question about whether any school improvement, however good, can compensate for structural inequality. A introduces moral acceptability, which is not the passage's distinction. D says the difference is only empirical, but the structural position implies a fundamentally different theory about what causes gaps, not just different cost-effectiveness estimates.
20
The passage concludes that expecting schools to compensate for socioeconomic gradients is "asking education to solve a problem that education did not create and cannot alone fix." What does this conclusion imply about how educational failure should be understood and attributed?
CORRECT: C The passage explicitly says "this does not entail that school improvement is futile" while arguing that schools cannot alone compensate for steep socioeconomic gradients. The conclusion is about causal attribution and reform framing: attributing persistent gaps to educational failure misidentifies where the primary causal force lies, potentially redirecting reform effort away from upstream structural causes toward proximate school factors. C captures this attribution implication. A says teachers should not be accountable at all, which overstates — the passage argues for shared responsibility, not no accountability. B says attainment gaps are purely social policy failure, which overstates the structural position and neglects what schools can do. D says educational investment is futile, which the passage explicitly rejects.
Passage 5 Score
/4
Education · Total Score
/8
Category 21
Physics
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Quantum Measurement, the Collapse Problem & the Interpretive Proliferation
Read the Passage

The measurement problem is the central unresolved conceptual difficulty of quantum mechanics. The theory's formalism — specifically the Schrödinger equation — describes the time evolution of quantum states as a deterministic, continuous, linear process: a superposition of states evolves predictably until measurement. But measurement outcomes are definite — we observe a particle with a specific position, not a smear of probabilities — and the quantum formalism provides no account of how the superposition resolves into a definite outcome at measurement. The standard von Neumann interpretation posits wave function collapse: at measurement, the quantum state discontinuously and non-linearly "collapses" into an eigenstate of the measured observable. This is mathematically ad hoc — no equation in quantum mechanics describes the collapse — and conceptually unsatisfying because it assigns a special status to "measurement" without specifying what constitutes a measurement, which is not a category the physics independently defines.

The proliferation of interpretations — Copenhagen, Many-Worlds, Pilot Wave, Relational QM, QBism — reflects the fact that the measurement problem is not an empirical puzzle with an experimentally accessible solution but a conceptual problem about what the formalism represents. The interpretations are empirically equivalent: they all predict the same observable frequencies and correlations, and no experiment distinguishable in principle by standard quantum mechanics can choose between them. Copenhagen sidesteps the problem by denying that the wave function represents anything beyond an agent's degrees of belief or a predictive tool — what happens between measurements is not a question physics can answer. Many-Worlds avoids collapse entirely by affirming the superposition: all outcomes occur in branching parallel universes, and the appearance of a single definite outcome reflects the observer's location in one branch. Pilot wave theory restores determinism by postulating a hidden variable (the pilot wave) guiding particles along definite trajectories.

The interpretations disagree not only about quantum mechanics but about what questions physics is permitted to ask. Copenhagen restricts physics to the domain of what can be observed and measured — metaphysical questions about underlying reality are excluded from scientific enquiry. Many-Worlds and Pilot Wave take the opposite position: the formalism describes something real, and the task of interpretation is to specify what. The debate is therefore not primarily a debate about quantum mechanics — it is a debate about the scope and ambitions of physical theory, about whether physics is a tool for prediction or a description of reality. This prior disagreement about what physics is for makes resolution through empirical test structurally impossible.

Questions · Passage 01
1
The passage argues that the interpretations of quantum mechanics are empirically equivalent and that no experiment can choose between them. Which of the following, if true, most seriously weakens this claim of permanent empirical underdetermination?
CORRECT: D The passage claims the interpretations are empirically equivalent within standard quantum mechanics. The qualification "distinguishable in principle by standard quantum mechanics" is key — it leaves open the possibility that a theory beyond standard QM could break the equivalence. Option D provides exactly this: quantum gravity, which extends QM to include gravitational effects, might produce different predictions across interpretations for gravitational effects of superposed massive objects. This breaks the claimed equivalence not within standard QM but at its extension, which is precisely where the passage's qualification allows for the possibility. A proposes a macroscopic coherence test — interesting but the passage's claim is about interpretations being equivalent within the existing formalism; this test would need to show that Many-Worlds and Copenhagen actually make different predictions, which requires specifying what "branching signatures" Copenhagen predicts can't persist. B claims Pilot wave theory makes different statistical predictions — but the passage specifically says all interpretations predict the same observable frequencies; if pilot wave theory does make different predictions, it's not a true interpretation of standard QM but a modification, which the passage already implicitly allows. D is cleaner because it identifies a legitimate extension beyond standard QM.
2
The passage concludes that the debate between interpretations is "not primarily a debate about quantum mechanics — it is a debate about the scope and ambitions of physical theory." Which of the following can be most reliably inferred from this conclusion?
CORRECT: B The passage says the debate is about what physics is for — prediction vs. realistic description. The reliable inference is that settling the debate between interpretations requires first settling this prior question about the scope and purpose of physical theory, and that prior question is philosophical rather than physical. B captures this exactly: the choice between interpretations is downstream of a prior philosophical commitment about what physical theory should do. A attributes a category error to Copenhagen physicists — the passage doesn't say they are doing philosophy while mistaking it for science; it says the disagreement is philosophical, not that any party is confused about this. C prescribes deference to philosophers — the passage identifies the philosophical nature of the disagreement but makes no prescriptive claim about institutional deference. D predicts resolution through quantum gravity — possible but speculative; the passage says the prior philosophical disagreement makes resolution through empirical test "structurally impossible," which is stronger than a prediction about future theory.
3
The measurement problem involves the following tension: the Schrödinger equation is deterministic and continuous, but measurement outcomes are definite and discontinuous. Wave function collapse is invoked to bridge this gap, but the passage calls it "mathematically ad hoc." What makes this specifically a paradox for the theory rather than merely a gap in our understanding?
CORRECT: B The paradox is structural and internal: the theory requires two mathematically incompatible dynamics — the Schrödinger equation (linear, deterministic, continuous) and wave function collapse (non-linear, stochastic, discontinuous) — and it requires both simultaneously to function. When making predictions, you use Schrödinger; when comparing to observations, you apply collapse. These two procedures are mutually inconsistent in their mathematical character, yet both are built into the theory's own operational procedures. This is not a gap (A) — it is an internal contradiction in the theory's foundational structure. A says it's a gap awaiting a future extension — possible, but the passage calls it a paradox that makes the formalism "mathematically ad hoc," suggesting a deeper structural problem. C identifies the "what counts as measurement?" problem — real and important, but this is about underdetermination in the theory's application, not the specific mathematical incompatibility of the two dynamics. D identifies predictive success + conceptual incoherence coexisting — valid but this is the macro-level observation rather than the precise structural feature that makes it a paradox.
4
Copenhagen interpretation restricts physics to the domain of what can be observed and declares that questions about underlying reality between measurements are outside scientific enquiry. For this restriction to be a principled methodological position rather than an arbitrary limitation, which of the following must be assumed?
CORRECT: B Copenhagen's restriction is principled if it rests on a coherent methodological criterion: predictive success is what physical theories are for, and questions that don't bear on predictions are outside the domain. Without this assumption, the restriction looks arbitrary — why exclude questions about underlying reality rather than simply acknowledging we can't currently answer them? B makes the restriction principled by grounding it in a specific and defensible criterion for what counts as a physical question. A says the questions must be unanswerable in principle — too strong; Copenhagen doesn't need unanswerability, only the claim that the questions are outside physics' legitimate scope, which the predictive-adequacy criterion supplies without requiring in-principle unanswerability. C says observable phenomena are the totality of reality — too metaphysically strong; Copenhagen is a methodological position that doesn't require this ontological commitment. D says "between measurements" is meaningless — also a strong claim; Copenhagen can hold the restriction methodologically without asserting the questions are literally meaningless (positivism vs. instrumentalism distinction).
Passage 1 Score
/4

P 02
Entropy, the Arrow of Time & the Asymmetry Problem
Read the Passage

The second law of thermodynamics — that the entropy of a closed system tends to increase over time — introduces a temporal asymmetry into physics that the fundamental laws of mechanics do not possess. Newton's laws, Maxwell's equations, and quantum mechanics are all time-reversal invariant: if a physical process is permitted, its time-reversed version is also permitted by the equations. Yet we observe overwhelmingly time-asymmetric phenomena: ice melts in warm water but never spontaneously re-forms; broken eggs do not reassemble. The asymmetry is not a feature of the fundamental laws but an emergent feature of systems with many degrees of freedom: low-entropy macrostates correspond to vastly fewer microstates than high-entropy macrostates, so the random evolution of a system is overwhelmingly likely to increase entropy — not because the laws forbid entropy decrease but because the probability of spontaneous decrease is fantastically small.

The statistical mechanical account of the second law faces what Boltzmann recognised as the "reversibility objection": if the laws are time-symmetric, and entropy increases toward the future because high-entropy states are more probable, the same reasoning applies in the past direction — entropy should also have been higher in the past. But our past is a low-entropy past: the early universe was in an extraordinarily low-entropy state, which is precisely why entropy has been increasing ever since. The statistical explanation of the second law therefore pushes the explanation back: it explains why entropy increases away from a low-entropy condition, but not why that low-entropy condition lies in the past rather than the future. The boundary condition of the early universe's low entropy — the "past hypothesis" — is not explained by statistical mechanics but must be assumed as an initial condition that the laws cannot themselves derive.

Some cosmologists have proposed that the low-entropy initial state of the universe is itself the result of a prior cosmological process — eternal inflation, baby universes, cyclical cosmology — that generates low-entropy pocket universes with some probability. Others argue that the past hypothesis requires no further explanation: it is a primitive brute fact that must be accepted as a starting point for physical explanation rather than itself explained. This is philosophically unsatisfying but may be unavoidable: any explanation of the initial condition would itself require an initial condition, generating a regress. The arrow of time thus connects the observable thermodynamics of everyday experience to unresolved questions in cosmology and the philosophy of physics that no current theory fully addresses.
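The combinatorial claim driving the passage — that high-entropy macrostates comprise overwhelmingly more microstates than low-entropy ones — can be checked in a toy model. The sketch below is an illustration of the counting argument only, not anything from the passage; the 100-particle box and the names used are invented for this example.

```python
from math import comb

# Toy model (illustrative, not from the passage): N particles, each in
# the left or right half of a box. A macrostate is "k particles on the
# left"; it comprises C(N, k) equally likely microstates.
N = 100
total = 2 ** N                 # total number of microstates

ordered = comb(N, 0)           # all-left macrostate: exactly 1 microstate
mixed = comb(N, N // 2)        # 50/50 macrostate: ~1e29 microstates

print(mixed)                   # the 50/50 macrostate dwarfs the ordered one
print(mixed / total)           # yet even it holds only ~8% of all microstates
```

A uniformly random microstate therefore almost certainly lies in a near-50/50 macrostate, which is the statistical content of "entropy tends to increase": nothing forbids the all-left configuration, it is simply one microstate among 2^100.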

Questions · Passage 02
5
The passage argues that the past hypothesis — the early universe's low-entropy initial state — cannot be explained by statistical mechanics but must be assumed as a primitive initial condition. Which of the following, if true, most strengthens the claim that this brute-fact acceptance may be unavoidable?
CORRECT: A The passage says accepting the past hypothesis as a brute fact may be unavoidable because any explanation generates a regress — the explaining process would itself require an initial condition. Option A provides direct empirical support for this regress claim: all proposed prior-process models (eternal inflation, cyclic cosmology) themselves require unexplained initial or boundary conditions. This confirms that explanatory regress is not just a theoretical worry but an actual feature of every known attempt to explain the past hypothesis. B says the low-entropy state is improbable and demands explanation — this actually strengthens the *demand* for explanation (challenges the brute-fact acceptance) rather than strengthening the claim that acceptance is unavoidable. C shows GR requires unexplained singular initial conditions — relevant to the structural limitation of current theories but doesn't specifically address the regress argument for brute-fact acceptance. D offers an anthropic resolution — this would actually undermine the need for brute-fact acceptance by providing a selection-effect explanation.
6
The passage says the statistical mechanical account does not explain why entropy increases toward the future rather than toward the past — it merely explains why entropy increases from low initial conditions, wherever those conditions are placed. Which of the following can be most reliably inferred from this claim?
CORRECT: B The passage's argument is: statistical mechanics explains entropy increase from low initial conditions, but doesn't explain why low initial conditions are in the past rather than in the future. The past hypothesis — the early universe's low entropy — is the additional ingredient that fixes the direction. The reliable inference is that the arrow of time requires this cosmological initial condition as an input that statistical mechanics cannot derive. B captures this precisely: the arrow is a cosmological phenomenon, not generated purely by thermodynamic statistics. A says statistical mechanics cannot explain the second law at all — too strong; it explains entropy increase given the past hypothesis; what it cannot do is explain the past hypothesis itself or the directionality. C predicts what would happen with a high-entropy initial state — speculative and not what the passage directly implies; if entropy started high and there was a low-entropy state in the "future," the argument for the direction of time would be symmetric. D says the second law is contingent — a valid philosophical inference, but the passage's specific claim is more focused on the explanation of the arrow's direction, not the contingency of the law.
7
The second law is derived from time-symmetric fundamental laws, yet it is itself time-asymmetric. The passage calls this an "emergent" asymmetry. What is the precise structural feature that makes this emergence paradoxical rather than simply surprising?
CORRECT: B The apparent derivation of the second law from symmetric mechanics is illusory — it requires importing the asymmetry through the past hypothesis (low-entropy initial condition). Symmetric laws + low-entropy initial condition → entropy increase toward the future. But the low-entropy initial condition is itself asymmetric (it specifies the past rather than the future as the low-entropy end). The asymmetry was never derived from symmetric laws; it was smuggled in through the boundary condition. The "emergence" of the second law conceals a hidden asymmetric input. This is the structural paradox: the derivation appears to show asymmetry emerging from symmetry, but actually shows symmetry + hidden asymmetric input → apparent asymmetric emergence. A says it's merely surprising and well-understood — the passage calls it a genuine unresolved difficulty and identifies the hidden-input paradox. C identifies the Poincaré recurrence problem — real but a different paradox about eventual entropy decrease, not the derivation-concealing-hidden-input problem. D conflates determinism/probabilism — the second law is statistical, not strictly deterministic, and this conflation doesn't capture the asymmetry paradox.
8
The passage ends by connecting "the observable thermodynamics of everyday experience to unresolved questions in cosmology and the philosophy of physics." What is the most plausible reason the author ends with this connection rather than simply concluding that the second law is unexplained?
CORRECT: B The concluding observation is architecturally unifying: it shows that the same explanatory gap runs from the very familiar (why does ice melt?) through the statistical mechanics of entropy, to the cosmological initial condition, to unresolved questions in the philosophy of physics. The purpose is to demonstrate the deep structural continuity of the problem across scales — that a seemingly mundane observation about ice and eggs is epistemically connected to the deepest open questions in physics. This is a rhetorical and structural move that reveals the philosophical depth of apparently simple phenomena. A attributes a status-elevation motive — unwarranted and reductive. C prescribes disciplinary absorption — the passage makes no such institutional recommendation. D predicts eventual resolution — the passage says "no current theory fully addresses" these questions, which is not a prediction that future theories will resolve them; it's a statement about the current state.
Passage 2 Score
/4

P 03
Quantum Entanglement, Bell Inequalities & the End of Local Realism
Passage Timer
10:00
Read the Passage

Einstein, Podolsky, and Rosen's 1935 thought experiment argued that quantum mechanics is incomplete: if two particles interact and then separate, measuring the spin of one instantly determines the spin of the other regardless of the distance between them. If this correlation reflects a pre-existing definite value — the spin was determined from the start — then quantum mechanics' probabilistic description is incomplete, and hidden variables exist that quantum mechanics fails to capture. If, alternatively, the measurement of one particle genuinely determines the spin of the other instantaneously across arbitrary distances, this would require superluminal causal influence, which violates special relativity. EPR concluded that hidden variables were the only reasonable alternative and that quantum mechanics was therefore incomplete.

John Bell's 1964 theorem transformed the EPR debate from philosophical argument into empirical test. Bell showed that any hidden variable theory that preserves local causality — the requirement that causes cannot propagate faster than light — produces statistical correlations between measurements on separated particles that satisfy a specific inequality. Quantum mechanics predicts correlations that violate this inequality. The two positions are therefore empirically distinguishable. Subsequent experiments, culminating in Aspect's 1982 experiments and the loophole-free tests of 2015 by Hensen and colleagues, confirmed quantum mechanical predictions and Bell inequality violations with high confidence. The empirical result is unambiguous: the correlations predicted by quantum mechanics and observed in experiment cannot be reproduced by any local hidden variable theory.

What the Bell experiments establish is the failure of local realism — the conjunction of locality (no superluminal causal influence) and realism (measurement outcomes reflect pre-existing definite properties). They do not specify which assumption fails. Three positions remain: reject locality, accepting that quantum entanglement involves genuine superluminal influence (though this influence cannot be used to transmit information, preserving the operational content of special relativity); reject realism, accepting that particles have no definite properties before measurement; or reject the conjunction while remaining agnostic about which element fails. The Bell results are often mischaracterised as ruling out hidden variables; more precisely, they rule out local hidden variable theories, leaving non-local hidden variable theories — like Pilot Wave theory — as logically viable.
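The quantitative gap Bell identified can be sketched numerically. The passage does not name a specific inequality, so the CHSH form is assumed here, along with the standard singlet-state correlation E(a, b) = -cos(a - b) and a conventional choice of measurement angles:

```python
from math import cos, pi, sqrt

def E(a, b):
    # Quantum-mechanical correlation for a spin singlet measured
    # along directions at angles a and b (radians).
    return -cos(a - b)

# Conventional CHSH angle settings (an illustrative choice).
a1, a2 = 0.0, pi / 2
b1, b2 = pi / 4, 3 * pi / 4

# Any local hidden variable theory obeys S <= 2 (the CHSH bound).
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))

print(S)   # 2*sqrt(2) ~ 2.828: quantum mechanics exceeds the local bound
```

The violation of the local bound of 2 — up to the quantum value 2√2 — is the kind of statistical excess that the Aspect and Hensen experiments observed.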

Questions · Passage 03
9
EPR argued that if quantum mechanics' correlations are correct, either hidden variables exist or superluminal influence occurs. Bell showed these options are empirically distinguishable. What logical step made this transformation from philosophical argument to empirical test possible?
CORRECT: C Bell's contribution was to identify a class of theories (local hidden variable theories) and derive what they would predict, then show this diverges from quantum mechanical predictions. The philosophical question became empirical because the two sides now had different measurable consequences. C captures this. A is close but frames it as deriving an inequality, which is the technical method — C is more precise about the logical transformation. B says Bell showed locality is directly testable, which is an implication but not the specific step. D says Bell enabled physical realisation of EPR's thought experiment, which was a contribution of later experimenters like Aspect, not Bell himself.
10
The passage says Bell violations "cannot be reproduced by any local hidden variable theory." Why does the qualifier "local" matter in this claim?
CORRECT: B The passage explicitly states: "they rule out local hidden variable theories, leaving non-local hidden variable theories — like Pilot Wave theory — as logically viable." The qualifier "local" is essential because without it the claim overstates what Bell proved. Non-local hidden variable theories remain possible. B captures this. A says ruling out local theories rules out all special-relativity-compatible theories, but Pilot Wave is itself a non-local theory that some argue can be made compatible with special relativity at the operational level. C says EPR only proposed local theories, which is true but misses the broader logical point B makes. D introduces determinism/probabilism as the distinction, which is not what locality means in this context.
11
The passage says rejecting locality means accepting superluminal influence "though this influence cannot be used to transmit information." Why is this qualification important for the relationship between quantum mechanics and special relativity?
CORRECT: B Special relativity's prohibitions are operationally grounded in preventing superluminal signalling, which would enable causal paradoxes. Quantum correlations that cannot be used to transmit information or coordinate actions preserve this operational content even if they involve non-local correlations. The qualification preserves coexistence between quantum non-locality and special relativity at the level of what can actually be done physically. B captures this. A says the influence is not genuine, which is one interpretation (Copenhagen-adjacent) but the passage does not claim this. C says entanglement can be used for communication without violating relativity, which misreads the qualification — entanglement cannot be used for information transmission. D says the conflict is fully resolved by the distinction, which overstates — there remain conceptual tensions even if operational content is preserved.
12
Bell violations rule out local realism — the conjunction of locality and realism. The passage says this "does not specify which assumption fails." What is the significance of this indeterminacy for the philosophical interpretation of quantum mechanics?
CORRECT: C The indeterminacy is significant because it shows that even decisive experimental results — Bell inequality violations — leave a non-empirical dimension in the interpretation of quantum mechanics. The experiment eliminates "locality AND realism" but cannot specify which element to drop. The choice between interpretations involves non-empirical considerations that experiments cannot resolve. C captures this relationship between empirical constraint and interpretive underdetermination. A says the debate is unresolved, which is true but less precise than C about why — C explains the structure of the underdetermination. B describes which interpretations sacrifice which assumptions, which illustrates C but is less precise about the general significance. D says physicists need prior commitments, which would make the interpretive process circular; C is more precise about why the non-empirical dimension persists.
Passage 3 Score
/4

P 04
Symmetry, Conservation Laws & the Limits of Noether's Theorem
Passage Timer
10:00
Read the Passage

Emmy Noether's 1918 theorem established one of the deepest connections in theoretical physics: every continuous symmetry of a physical system's action corresponds to a conserved quantity. Time translation symmetry — the fact that the laws of physics are the same today as yesterday — corresponds to conservation of energy. Spatial translation symmetry — the laws are the same here as there — corresponds to conservation of momentum. Rotational symmetry corresponds to conservation of angular momentum. Noether's theorem converts what had previously appeared as independent empirical facts — that energy, momentum, and angular momentum are conserved — into logical consequences of symmetry properties of the laws. This is a profound unification: the diversity of conservation laws is traceable to the structure of the symmetry group of the theory.

The theorem applies within the framework of classical mechanics and field theory formulated as variational principles with an action functional. Its application to general relativity requires care: general relativity is generally covariant — its equations take the same form under arbitrary coordinate transformations — and this general covariance appears to imply a vast symmetry group. Noether's theorem, naively applied, would then appear to generate an enormous number of conservation laws. The result is instead that in generally covariant theories the local conservation laws derived from Noether's theorem are trivially satisfied in a way that does not yield the global conserved quantities familiar from non-relativistic physics. Energy conservation in general relativity is consequently problematic: there is no general, coordinate-independent, globally conserved quantity corresponding to energy in an arbitrary spacetime, though approximate and quasi-local notions of energy can be defined in specific contexts.

The deeper significance of Noether's theorem lies in its reversal of the traditional epistemological direction in physics. The traditional approach observed conserved quantities empirically and sought their explanation. Noether's framework inverts this: identify the symmetries of the theory, and the conservation laws follow as theorems. This symmetry-first approach has been extraordinarily productive in twentieth-century physics — the Standard Model of particle physics is organised around gauge symmetries, and the particle zoo is classified by representations of symmetry groups. But the symmetry-first approach also raises a foundational question: why does nature exhibit the specific symmetries it does? The symmetries are taken as primitive in current physics rather than derived from deeper principles, leaving a potential explanatory gap at the foundation of physical theory.

Questions · Passage 04
13
Noether's theorem converts conservation laws from "independent empirical facts" to "logical consequences of symmetry." What does this conversion achieve epistemologically?
CORRECT: B The passage says the conversion reveals that "the diversity of conservation laws is traceable to the structure of the symmetry group." What this achieves is unification: multiple independent empirical facts are shown to be instances of a single principle. This reduces theoretical primitives — instead of explaining three separate conservation laws, you explain one symmetry-conservation correspondence, from which all three follow. B captures this. A says conservation laws are now predictable without observation, but this overstates — the symmetry itself needs to be established, and identifying symmetries requires engaging with the theory's structure. C says conservation becomes necessarily rather than contingently true, which is a stronger claim than the passage makes and one that raises complex modal questions. D says Noether's theorem provides a causal mechanism, but symmetry is not a causal mechanism in the standard sense.
14
The passage says energy conservation in general relativity is "problematic." What specifically creates this problem, given that general relativity is a physical theory and Noether's theorem applies to physical theories?
CORRECT: C The passage explains the problem precisely: "in generally covariant theories the local conservation laws derived from Noether's theorem are trivially satisfied in a way that does not yield the global conserved quantities familiar from non-relativistic physics." General covariance produces a formal symmetry that, applied via Noether's theorem, generates trivial local conservation identities rather than the non-trivial global energy conservation one expects. C states this. A says Noether's theorem wasn't designed for GR's mathematics, which is false — Noether's theorem is a very general mathematical result applicable to variational principles. B says Noether generates only global laws while the theory requires local, which reverses the actual problem. D describes the dynamic spacetime correctly but attributes the problem to time translation non-invariance, which is related but different from the trivial local conservation problem the passage identifies.
15
The symmetry-first approach "raises a foundational question: why does nature exhibit the specific symmetries it does?" The passage says symmetries are "taken as primitive." What does treating symmetries as primitive imply for the completeness of current physical theory?
CORRECT: C The passage explicitly says treating symmetries as primitive leaves "a potential explanatory gap at the foundation of physical theory." Primitives are unexplained explainers: the theory can derive consequences from symmetries but cannot explain why the symmetries hold. This is an explanatory incompleteness distinct from empirical incompleteness. C captures this. A says the theory is empirically complete and primitives are just brute facts, which misses the explanatory gap the passage identifies. B says the approach is circular, which is a different and stronger charge. D says there is no scientific question beyond symmetries, which is one possible position (operationalist) but the passage presents it as an open explanatory gap rather than a limit of scientific inquiry.
16
Noether's theorem reverses the "traditional epistemological direction" in physics from empirical observation to theoretical derivation. What does this reversal imply about the relationship between experiment and theory in modern physics?
CORRECT: D Noether's reversal is not a wholesale replacement of empiricism by rationalism. Symmetries must themselves be identified — partly through observing what is conserved and working backward, partly through theoretical considerations. Once identified, conservation follows. The relationship is bidirectional: observation informs symmetry identification, and symmetry structure generates predictions. D captures this. A says experiment is no longer needed, which overstates — symmetries still need empirical identification and testing. B says theory is primary and experiment secondary, which overstates in the other direction. C says the approach is self-confirming, which would be a methodological problem but the passage does not suggest this — symmetry violations in experimental data would be genuine falsifications.
Passage 4 Score
/4

P 05
The Standard Model, Its Anomalies & the Search for Physics Beyond
Passage Timer
10:00
Read the Passage

The Standard Model of particle physics — the quantum field theory describing the electromagnetic, weak, and strong nuclear forces and their associated particle content — is the most precisely tested scientific theory in history. Its predictions for quantities like the anomalous magnetic moment of the electron agree with experiment to better than one part in a trillion. Yet the Standard Model is known to be incomplete. It does not incorporate gravity; it provides no explanation for the observed matter-antimatter asymmetry in the universe; it cannot account for the dark matter that constitutes approximately 85% of the universe's matter content; and it contains approximately 19 free parameters — the masses of the fundamental particles, coupling constants, and mixing angles — whose specific values must be inserted by hand rather than derived from the theory. A complete theory of fundamental physics would explain these parameters rather than accept them as brute empirical inputs.

The hierarchy problem provides the Standard Model's most technically acute internal difficulty. The Higgs boson mass, measured at approximately 125 GeV, is enormously smaller than the Planck scale (approximately 10^19 GeV) at which quantum gravitational effects become important. Quantum field theory predicts that the Higgs mass receives quantum corrections from the virtual particle pairs that the Higgs interacts with, and these corrections enter the squared mass at order the square of the highest energy scale in the theory — the Planck scale. That the physical Higgs mass is 17 orders of magnitude smaller than this requires an extraordinary cancellation between the bare Higgs mass and the quantum corrections — and because the tuning operates on the squared mass, it amounts to a fine-tuning of the fundamental parameters to one part in 10^34. This is the hierarchy problem: why is the electroweak scale so much smaller than the Planck scale, given that quantum corrections should naturally drive them together?

Proposed solutions to the hierarchy problem — supersymmetry, extra dimensions, composite Higgs models — all predict new particles at the TeV energy scale accessible to the Large Hadron Collider. The non-observation of such particles after a decade of LHC running has created a crisis of expectations: the theoretical arguments for new physics at the TeV scale were compelling, the experimental null results are robust, and the status of naturalness — the principle that theories should not require extraordinary fine-tuning of parameters — as a guide to new physics is now contested. Some physicists conclude that naturalness is a reliable theoretical virtue and that the absence of TeV-scale new physics indicates gaps in the proposed solutions. Others conclude that naturalness is a heuristic that may simply not apply at the Planck scale, and that the universe is fine-tuned without requiring explanation beyond a possible anthropic selection effect.

Questions · Passage 05
17
The Standard Model contains approximately 19 free parameters "whose specific values must be inserted by hand." What does this feature imply about the Standard Model as a scientific theory?
CORRECT: B The passage says a complete theory "would explain these parameters rather than accept them as brute empirical inputs," indicating the Standard Model is empirically successful but lacks the explanatory depth a complete theory would have. B captures this gap between empirical adequacy and explanatory completeness. A says the model is unfalsifiable because of free parameters, but this misunderstands how the parameters work — they are determined by measurement and fixed thereafter, so the model makes falsifiable predictions once parameterised. C says it cannot be fundamental, which is a stronger claim than the passage makes — the passage says a complete theory would explain the parameters, not that the Standard Model is definitively not fundamental. D says it should be replaced by a simpler theory, but simplicity in terms of parameter count is not the only criterion the passage invokes.
18
The hierarchy problem requires the Higgs mass to result from a cancellation of one part in 10^34 between two large quantities. Why does this constitute a theoretical problem rather than simply an empirical fact about the Higgs mass?
CORRECT: B The problem is not an empirical mismatch — the Standard Model correctly predicts (or accommodates) the measured Higgs mass. The problem is that achieving this accommodation requires two independent parameters to cancel to extraordinary precision with no theoretical reason for that cancellation. It is a naturalness/fine-tuning problem: the Standard Model can fit the data but only by requiring a coincidence it cannot explain. B captures this. A says the cancellation is experimentally unverified, but the hierarchy problem is not about unverified measurements — it is about the theoretical structure. C says the Higgs is the only parameter requiring such cancellation, which is approximately true but not the reason it constitutes a problem. D says it is a falsification of the Standard Model, but the Standard Model accommodates the Higgs mass — the problem is theoretical not empirical.
19
The LHC's non-observation of predicted new particles has led some physicists to question whether "naturalness" is a reliable guide to new physics. What does naturalness as a theoretical principle assume, and why does the LHC result challenge that assumption?
CORRECT: C Naturalness assumes fine-tuning is problematic and that the problem should be resolved by new physics accessible at the relevant energy scale. The LHC result challenges whether this assumption reliably predicts where new physics is — the arguments were compelling, the predictions specific, but no particles appeared. This challenges naturalness not by disproving fine-tuning but by questioning whether fine-tuning reliably signals new physics at a specific energy scale. C captures this. A confuses naturalness with specific energy scale predictions. B conflates the hierarchy problem with the cosmological constant problem. D conflates naturalness with the requirement to eliminate all free parameters.
20
Physicists who conclude naturalness "may simply not apply at the Planck scale" and accept anthropic explanations are making what type of concession about the methodology of physics?
CORRECT: C Accepting anthropic explanations for fine-tuning is a methodological concession: it abandons the principle that fine-tuning demands a physical explanation (a symmetry or mechanism) and replaces it with observer-selection — the universe is fine-tuned because only fine-tuned universes contain observers. This is a shift in what kinds of explanations are acceptable in physics, moving away from the demand that every apparent coincidence be explained by new physics. C captures this methodological shift. A says it admits empirical refutation of the Standard Model, but the Standard Model is not empirically refuted — the hierarchy problem is theoretical, not empirical. B says LHC experiments are conclusive and Planck-scale physics is inaccessible, which is a different methodological concession not specifically about naturalness. D says the symmetry-first approach has reached its limits, which is related but not the specific concession about naturalness as a methodological guide.
Passage 5 Score
/4
Physics · Total Score
/8
Category 22
Cyber
5 passages · 4 questions each · CAT 5/5 · 50 min total
Score
/20
P 01
Cyber Deterrence: The Attribution Problem, Credibility & the Asymmetry of Offence
Read the Passage

Classical nuclear deterrence rested on three pillars: capability (the ability to inflict unacceptable damage), credibility (the adversary's belief that retaliation would follow), and communication (the clear signal that retaliation would follow specific acts). Cyber deterrence by punishment — threatening retaliatory cyber or kinetic strikes to dissuade adversaries from conducting cyber attacks — struggles against all three pillars simultaneously, but the attribution problem is structurally the most disabling. Deterrence by punishment requires that the adversary believe retaliation will follow an attack; retaliation requires knowing who attacked; knowing who attacked in cyberspace requires attributing the attack to a specific state or actor with sufficient confidence to justify a response whose costs and risks the responding state must bear. In cyberspace, attribution is technically difficult, legally contested, politically sensitive, and strategically double-edged: states that reveal their attribution methods to justify a response simultaneously reveal intelligence capabilities that may be more valuable than the response is worth.

The attribution problem is compounded by the offence-defence imbalance in cyberspace. Unlike the nuclear domain — where the development of second-strike capability could credibly neutralise first-strike advantage — cyberspace is structurally offence-dominant for multiple reasons: the attacker can choose the time, vector, and target while the defender must protect all surfaces simultaneously; zero-day vulnerabilities are asymmetrically available to attackers who discover them; and the borderless, multi-layered architecture of the internet provides routing and attribution concealment by design. Deterrence by denial — raising the cost of successful attack by improving defences — is thus similarly limited: the defender's resource requirements to achieve comprehensive network security at national scale exceed the attacker's resource requirements to find and exploit a single vulnerability by many orders of magnitude.

Strategic thinkers have proposed entanglement and norms as alternative deterrence mechanisms. Entanglement — the mutual dependence of potential adversaries on shared digital infrastructure — creates symmetrical vulnerability that may deter large-scale attacks: a state dependent on the global financial system, cloud providers, or supply chain networks for its own prosperity has incentives to avoid triggering cascading cyber conflict. Norms — international agreements about prohibited conduct in cyberspace — may supplement entanglement by stigmatising certain attack categories (attacks on civilian critical infrastructure, election interference, attacks on medical systems) and raising the political and reputational costs of their use. Neither mechanism provides the crisp deterrence mathematics of MAD, but both operate through cost-benefit manipulation of a more diffuse kind, introducing friction into adversary decision-making without requiring the precise attribution and credible retaliation that classical deterrence demands.

Questions · Passage 01
1
The passage argues that the attribution problem is "structurally the most disabling" obstacle to cyber deterrence by punishment, because revealing attribution methods to justify retaliation simultaneously reveals intelligence capabilities more valuable than the response. Which of the following, if true, most seriously weakens this specific argument?
CORRECT: B The specific argument is: justifying retaliation publicly requires revealing attribution methods, and revealing those methods destroys their intelligence value. Option B directly dismantles this by separating the communication channel from public disclosure: private diplomatic channels can deliver the attribution signal to the adversary (sufficient for deterrence) without public disclosure of methods (preserving intelligence value). The dilemma the passage describes only holds if attribution must be public to be deterrently effective — B shows it need not be. A shows faster attribution — useful but this addresses speed, not the disclosure dilemma. C shows attribution without apparent capability compromise — tempting, but "apparently" without compromise is consistent with the passage's concern that the trade-off is "double-edged" and strategically costly in non-obvious ways; C is weaker than B's structural resolution. D addresses legal standards — relevant to international law but not to the intelligence-capability disclosure dilemma the passage identifies.
2
The passage argues that entanglement and norms introduce "friction into adversary decision-making without requiring the precise attribution and credible retaliation that classical deterrence demands." Which of the following can be most reliably inferred from this claim?
CORRECT: B The passage says entanglement and norms work "through cost-benefit manipulation of a more diffuse kind" and don't require attribution or credible retaliation. The reliable inference is that these mechanisms operate differently from classical deterrence — not through a precise threat-response chain but through raising costs diffusely across adversary decision-making. B captures this mechanistic distinction precisely. A says they are superior — the passage explicitly says "neither mechanism provides the crisp deterrence mathematics of MAD," signalling they are weaker substitutes, not superior alternatives. C says they are inadequate and states should improve attribution — prescriptive and not supported; the passage presents entanglement and norms as genuine alternatives, not inadequate substitutes. D says isolated states face greater threat — an interesting inference about digital sovereignty but goes well beyond what the passage establishes about entanglement as a deterrent mechanism.
3
The passage identifies an offence-defence imbalance: attackers need exploit only a single vulnerability while defenders must protect all surfaces simultaneously. This creates a structural paradox for national cyber defence investment. What is that paradox?
CORRECT: B The passage establishes: attacker needs one vulnerability; defender must protect all surfaces; defender's resource requirements exceed attacker's by many orders of magnitude. The structural paradox is that the investment required to achieve comprehensive security may exceed the value of the assets being protected — spending more on defence than the defended assets are worth is a losing proposition, yet partial defence leaves vulnerabilities. This traps states: insufficient investment leaves gaps, sufficient investment is economically irrational. A describes a signal-of-value problem — not in the passage and different from the resource asymmetry paradox. C says defence knowledge cannot be used offensively — not argued and factually questionable (understanding attacks improves both defence and offence). D identifies an arms race dynamic — a different strategic paradox about relative positioning, not the cost-asymmetry problem the passage describes.
4
The entanglement mechanism — mutual dependence on shared digital infrastructure deterring large-scale attacks — implicitly assumes which of the following for it to function as a genuine deterrent?
CORRECT: A Entanglement deters by making states perceive that large-scale cyber attacks risk cascading damage to their own infrastructure. For this to work, decision-makers must accurately perceive this mutual vulnerability and factor it into cost-benefit calculations. If leaders are unaware of their own vulnerability, or dismiss it, or believe they can conduct targeted attacks without triggering cascades, entanglement produces no deterrent effect regardless of the objective technical reality. B addresses disaggregation — a real strategic option that limits entanglement's scope, but not the core assumption for the mechanism to work where it does apply. C questions whether cascading effects are technically achievable — relevant to entanglement's empirical basis but not the foundational assumption about how the deterrence mechanism operates. D says likely attackers must be entangled — a necessary empirical condition for scope, but not the mechanism assumption; the mechanism requires accurate perception by whatever actors are entangled, not that all potential attackers are entangled.
Passage 1 Score
/4

P 02
Cyber Sovereignty, the Splinternet & the Governance of Digital Infrastructure
Read the Passage

The internet's original design philosophy — end-to-end architecture, protocol standardisation, and the absence of built-in control points — reflected a governance assumption: that the network should be neutral infrastructure, indifferent to the content, origin, and destination of data packets. This design choice was simultaneously technical and political, and its political implications have become progressively more contested as states have recognised that the internet enables information flows, economic activity, and political organising that states historically controlled. The concept of cyber sovereignty — the claim that states have the right to govern the internet within their territory as an extension of their broader sovereign authority — has been championed most coherently by China and Russia, and contested most vigorously by the United States and European liberal democracies, for whom the open internet represents both an economic interest and a normative commitment.

The splinternet — the fragmentation of the globally integrated internet into nationally or regionally governed network segments — represents the partial realisation of the cyber sovereignty agenda. China's Great Firewall is the most developed instance: a technical and legal architecture that filters, redirects, and censors information flows crossing its national network boundary, enabling domestic information control while participating selectively in global digital commerce. Russia's sovereign internet law mandates the technical infrastructure for disconnecting from the global internet and routing domestic traffic through state-controlled exchange points. Both systems involve enormous economic costs — firms operating in filtered environments face reduced access to global knowledge and tool ecosystems — as well as political costs in their domestic populations' awareness of the gap between filtered and unfiltered information environments.

The governance challenge for liberal democracies is that opposing cyber sovereignty in principle while accommodating it in practice — through platform compliance with local content moderation orders, data localisation requirements, and surveillance cooperation — produces an incoherent position that undermines both the normative argument and the economic interests it is supposed to protect. Platforms that comply with Chinese censorship orders to access Chinese markets, or with European data localisation rules to operate in the EU, are participating in and legitimating the fragmentation they officially oppose. The coherent alternatives — full engagement with cyber sovereignty norms at the cost of the open internet principle, or full refusal at the cost of market access — are both strategically costly. Liberal democracies have chosen a pragmatic incoherence whose cumulative effect is to accelerate the splinternet while maintaining the rhetorical position that they oppose it.

Questions · Passage 02
5
The passage argues that platform compliance with local censorship and data localisation requirements legitimates cyber sovereignty fragmentation, making liberal democracies' rhetorical opposition to the splinternet incoherent. Which of the following, if true, most strengthens this argument?
CORRECT: B The passage's argument is that compliance legitimates fragmentation — platforms participating in local governance validate the sovereign authority they officially oppose. Option B provides direct evidence of this legitimation effect: authoritarian states have explicitly cited Western platform compliance as evidence that even liberal democracies accept state authority over digital infrastructure. This is the strongest form of the argument — not just that compliance is logically inconsistent, but that it is being actively used by cyber sovereignty advocates to justify and normalise their position in international forums. A shows revenue dependence — explains the motivation for compliance but doesn't directly address the legitimation effect. C shows infrastructure repurposing — a downstream harm from compliance, but not the legitimation of the sovereignty principle itself. D shows that data localisation requirements serve different purposes — this actually complicates the argument by distinguishing legitimate from illegitimate localisation, potentially weakening rather than strengthening the claim that all compliance is incoherent.
6
The passage describes liberal democracies' position as "pragmatic incoherence whose cumulative effect is to accelerate the splinternet while maintaining the rhetorical position that they oppose it." Which of the following can be most reliably inferred from this characterisation?
CORRECT: B "Pragmatic incoherence whose cumulative effect is to accelerate the splinternet while opposing it" directly says: the practical outcome contradicts the stated goal. The reliable inference is self-defeat — the policy produces the opposite of what it claims to pursue. B captures this precisely without over-inferring. A prescribes a principled refusal strategy — the passage identifies two costly alternatives but doesn't prescribe either. C says the incoherence is rationally deliberate — the passage describes it as an incoherent outcome, not a deliberate strategic choice; "incoherence" implies it is not strategically rational. D says reversal is now impossible — the passage identifies the cumulative effect as acceleration, not irreversibility; it doesn't claim the process is beyond correction.
7
The passage presents both the cyber sovereignty position and the liberal democratic position, and identifies a structural incoherence in the latter without endorsing the former. What is the most plausible reason the author identifies this incoherence rather than endorsing one of the two coherent alternatives?
CORRECT: B The passage maps the problem: two coherent but costly alternatives, and a default incoherent position that accumulates toward the worst outcome without anyone explicitly choosing it. Identifying the incoherence makes the implicit drift explicit — the contribution is to force awareness of the trade-off so policy can be made deliberately rather than by default. This is the diagnostic function of analytical writing that presents a structural problem without prescribing a solution. A attributes hidden cyber sovereignty endorsement — unwarranted and uncharitable. C says neutrality is a professional norm — possible but the passage is substantively critical of the incoherent position, which is incompatible with pure neutrality. D says the author endorses incoherence — misreads "identifying but not rejecting" as implicit endorsement; the passage calls the outcome "accelerating the splinternet," which is presented as a negative consequence, not a preferred outcome.
8
A defender of liberal democratic platform policy might argue: "Our compliance with local content moderation orders does not validate cyber sovereignty, because we comply for pragmatic market reasons, not because we accept the normative claim that states have authority over digital content. Accepting a practice for pragmatic reasons is different from endorsing the principle behind it." Which logical problem most seriously affects this defence?
CORRECT: B The defence says: our motivation is pragmatic, not principled, so compliance doesn't endorse the norm. But the passage's argument is about observable behaviour and its external effect on the legitimation of cyber sovereignty in international discourse. The defence focuses on internal motivation; the relevant audience — authoritarian states citing platform compliance as validation — has access only to observable behaviour, not internal motivation. Legitimation is produced by behaviour in the eyes of those who observe it, not by the private intentions of those who behave. B captures this precisely: the internal/external distinction is the logical problem — external legitimation effects are generated by observed behaviour regardless of internal motivation. A says ad hominem — but the defence focuses on intentions specifically to make a normative argument about endorsement; pointing this out is not an ad hominem but identifying the relevant distinction. C says pragmatic and principled can't be cleanly separated — real and interesting, but less precise than B; the key flaw isn't about how motivations blur over time but about the gap between internal motivation and external legitimation effect. D says fallacy of division — the aggregate/individual distinction is real but the specific flaw B identifies is more direct and precise.
Passage 2 Score
/4

P 03
Ransomware, Critical Infrastructure & the Political Economy of Cybercrime
Passage Timer
10:00
Read the Passage

Ransomware — malware that encrypts victims' data and demands payment for decryption — has evolved from opportunistic attacks on individuals into a structured criminal industry targeting high-value organisations, including hospitals, utilities, and government agencies. The professionalisation of the ransomware ecosystem reflects the application of legitimate business models to criminal enterprise: ransomware-as-a-service platforms allow technically unsophisticated operators to deploy sophisticated attack tools developed by specialised groups, in exchange for a share of ransom proceeds. The division of labour between developers, operators, initial access brokers, and money laundering networks mirrors the supply chains of legitimate industries, with cryptocurrency providing the financial infrastructure that makes cross-border criminal transactions difficult to interdict.

The geopolitical dimension of ransomware arises from the de facto tolerance extended to criminal groups operating from certain jurisdictions. Russia, in particular, has permitted ransomware groups to operate with minimal interference as long as attacks avoid targets within the Commonwealth of Independent States and do not directly damage Russian economic interests. This arrangement is not formal state sponsorship — the groups maintain plausible criminal independence — but it constitutes a form of strategic permissiveness that amplifies the groups' effectiveness. It allows the Russian state to benefit from disruption to Western institutions without assuming the diplomatic and escalatory costs of direct state-sponsored attacks. The May 2021 Colonial Pipeline attack and the subsequent diplomatic pressure that resulted in the DarkSide group's apparent dissolution illustrate the mechanism: when ransomware attacks produce sufficient political cost for the host state, tolerance can be withdrawn.

The policy debate about ransomware payments concentrates on whether payment should be regulated or prohibited. The economic argument for allowing payment is that victim organisations — especially hospitals and utilities providing essential services — face immediate harm from operational disruption that may exceed the long-run deterrence benefit of non-payment. The strategic argument against payment is that every payment funds the criminal ecosystem, improves attacker capabilities, and signals that ransomware is economically viable, generating further attacks. Mandatory payment prohibition would remove the victim's economic incentive to pay but would place the costs of non-payment — continued operational disruption, data loss — on individual organisations rather than distributing them across the collective deterrence benefit. Insurance markets have compounded the problem: cyber insurance that covers ransomware payments enables victims to pay without internalising the full cost, contributing to the market failure that sustains ransomware as an industry.

Questions · Passage 03
9
The passage describes Russia's tolerance of ransomware groups as "strategic permissiveness" rather than formal state sponsorship. What is the strategic value of this distinction from Russia's perspective?
CORRECT: B The passage explicitly identifies the mechanism: Russia "benefits from disruption to Western institutions without assuming the diplomatic and escalatory costs of direct state-sponsored attacks." The value of the distinction is that it provides benefits while limiting accountability. B captures this. A concerns legal liability under international law, which is a related consideration but the passage focuses on diplomatic and escalatory costs rather than legal liability. C says Russia claims the groups act against its interests, which is not the passage's account — the passage says Russia tolerates the groups, not that it claims victimhood. D concerns retaliatory thresholds, which is related to B but less complete — B captures both the benefit and cost-reduction sides of the arrangement.
10
The Colonial Pipeline case illustrates that Russian tolerance of ransomware groups "can be withdrawn." What does this reveal about the nature of the strategic permissiveness arrangement?
CORRECT: C The ability to withdraw tolerance when attacks "produce sufficient political cost for the host state" implies Russia can actually control the groups' operating environment when it chooses. This is inconsistent with genuine criminal independence — if the groups were truly independent, Russia could not simply cause their dissolution in response to diplomatic pressure. The capacity to withdraw tolerance reveals that the permissiveness is active and managed rather than merely passive non-enforcement. C captures this implication. A says Russia has no reliable mechanism to control the groups, which is contradicted by the ability to apparently dissolve DarkSide. B says Russia calibrates to just below US escalation threshold, which is possible but a narrower claim than C. D says the groups are de facto state agents, which goes further than the passage implies — C preserves the ambiguity while noting the implication.
11
The passage says cyber insurance that covers ransomware payments "contributes to the market failure that sustains ransomware." What is the market failure being identified?
CORRECT: B The passage says insurance "enables victims to pay without internalising the full cost, contributing to the market failure that sustains ransomware." The market failure is negative externality: each payment provides private benefit (restored operations) to the payer while generating social costs (sustaining the ecosystem) distributed across all potential future victims. Insurance exacerbates this by further reducing the payer's private cost of payment, widening the gap between private and social costs. B captures this. C describes moral hazard related to security investment, which is a related market failure but not the one the passage specifically identifies — the passage focuses on the payment decision, not the security investment decision. A describes monopoly, which is a different economic structure. D describes adverse selection, which is an insurance market failure but not what the passage identifies.
12
The debate about prohibiting ransomware payments involves a conflict between individual victim interests and collective deterrence. The passage says prohibition would "place the costs of non-payment on individual organisations rather than distributing them across the collective deterrence benefit." What type of policy problem does this describe?
CORRECT: C Each victim's payment is individually rational (restores operations, limits immediate harm) but collectively irrational (sustains the ecosystem, enables more attacks). No individual victim can achieve the deterrence benefit unilaterally — non-payment only creates deterrence if enough victims refuse to pay. This is a classic collective action problem: the individually rational choice produces a collectively suboptimal outcome that requires coordination to overcome. C captures this. A identifies the public goods character of deterrence, which is related but frames it as an undersupply problem rather than a coordination problem. B introduces regulatory capture, which is not in the passage. D raises distributive justice, which is a valid concern about prohibition design but is not the type of policy problem the payment structure itself describes.
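The structure of this collective action problem can be made concrete with a toy payoff model. The sketch below is purely illustrative — the victim count, ransom, disruption, and externality figures are assumptions, not numbers from the passage — but it exhibits the defining pattern: paying is individually dominant, yet universal payment is collectively worse than universal refusal.

```python
# Toy payoff model of the ransomware payment dilemma.
# All numeric parameters are illustrative assumptions, not figures from the passage.

N = 20              # potential victim organisations
RANSOM = 2          # cost of paying (restores operations)
DISRUPTION = 10     # cost of refusing to pay (operational loss, data loss)
EXTERNALITY = 0.5   # extra expected future-attack cost imposed on EVERY
                    # victim by EACH payment (payments fund the ecosystem)

def payoff(i_pay: bool, others_paying: int) -> float:
    """One victim's total (negative) cost, given its own choice
    and how many other victims choose to pay."""
    total_payers = others_paying + (1 if i_pay else 0)
    own_cost = RANSOM if i_pay else DISRUPTION
    return -(own_cost + EXTERNALITY * total_payers)

# Paying is individually dominant: better no matter what others do...
assert all(payoff(True, k) > payoff(False, k) for k in range(N))

# ...yet universal payment is collectively worse than universal refusal.
all_pay = N * payoff(True, N - 1)    # everyone pays
none_pay = N * payoff(False, 0)      # everyone refuses
print(f"total welfare, all pay:  {all_pay}")   # -240.0
print(f"total welfare, none pay: {none_pay}")  # -200.0
```

Because paying is a dominant strategy under these assumptions, no victim can reach the better collective outcome unilaterally — which is exactly the coordination gap that a payment prohibition attempts to close by force.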
Passage 3 Score
/4

P 04
AI, Autonomous Systems & the Changing Character of Cyber Conflict
Passage Timer
10:00
Read the Passage

Artificial intelligence is transforming the operational tempo and character of cyber conflict in ways that challenge existing frameworks for attribution, accountability, and escalation management. On the offensive side, large language models and related AI systems can automate the discovery of vulnerabilities, the generation of phishing content personalised to specific targets, the adaptation of malware to evade signature-based detection, and the analysis of large datasets to identify exploitable patterns in adversary systems. On the defensive side, AI enables pattern recognition across network traffic at speeds and scales impossible for human analysts, improving detection of anomalous behaviour that may indicate intrusion. The AI arms race in cyberspace accelerates the action-reaction cycle, compressing the time between attack and response in ways that reduce the opportunity for human deliberation and escalation management.

The accountability problem for AI-enabled cyber operations is structural. When an autonomous system identifies a vulnerability, generates an exploit, and deploys it without human approval of the specific operation, the chain of accountability between the initiating decision and the operational outcome is broken. Human operators set parameters and constraints but do not authorise specific actions within those constraints; the specific targeting and timing decisions are made algorithmically. International humanitarian law requires that attacks be subject to human judgement about proportionality and distinction — the requirement that civilian harm be proportionate to military advantage and that attacks discriminate between combatants and civilians. Fully autonomous cyber operations may satisfy neither requirement if the distinction and proportionality judgements are made by an algorithm operating outside meaningful human control.

The speed compression problem interacts with escalation management in potentially destabilising ways. Crisis stability in nuclear deterrence depends on having sufficient time for decision-makers to assess an ambiguous signal, consult advisors, and choose a measured response rather than triggering an escalation spiral through hasty reaction. AI-enabled cyber operations that can produce rapid, cascading effects on critical infrastructure compress this decision window in ways that may force state actors into reactive automated defence systems whose interaction dynamics are difficult to predict. The concern is not primarily about autonomous cyberweapons making the decision to attack — though this risk exists — but about the interaction between autonomous offensive systems and automated defensive responses creating escalatory dynamics that no human actor deliberately initiated.

Questions · Passage 04
13
The passage says AI "compresses the time between attack and response in ways that reduce the opportunity for human deliberation." Why is this compression specifically destabilising rather than simply faster?
CORRECT: C The passage explicitly connects speed compression to escalation management: "crisis stability in nuclear deterrence depends on having sufficient time for decision-makers to assess an ambiguous signal, consult advisors, and choose a measured response." The destabilisation is specifically about removing the deliberation time on which crisis stability depends. C states this. A concerns destructive magnitude, which is a different harm. B concerns intrusion success rates, which is also a different harm. D concerns information asymmetry, which is a strategic advantage concern not the crisis stability mechanism the passage identifies.
14
The passage says IHL requires attacks to be subject to "human judgement about proportionality and distinction." Why does autonomous cyber operation raise a compliance problem that remotely piloted operations do not?
CORRECT: B The passage says human operators "set parameters and constraints but do not authorise specific actions within those constraints; the specific targeting and timing decisions are made algorithmically." IHL requires judgement about proportionality and distinction for each specific attack. Remotely piloted operations have a human making those judgements for each specific action; autonomous operations have humans setting general parameters but not exercising contextual judgement about each specific operation. B captures this distinction. A captures the same distinction as B but less precisely — B specifies the difference between parameter-setting and specific-action authorisation. C attributes the absence of human judgement to speed constraints, but the passage identifies it as a design feature of autonomy. D focuses on legal accountability of software, which is a related but different concern.
15
The passage says the primary concern is "the interaction between autonomous offensive systems and automated defensive responses creating escalatory dynamics that no human actor deliberately initiated." Why does this scenario specifically challenge traditional frameworks for managing escalation?
CORRECT: C The passage says the concern is escalatory dynamics that "no human actor deliberately initiated." Traditional frameworks assume escalation results from deliberate human choices that can be de-escalated through deliberate human communication. When escalation emerges from autonomous system interactions without human initiation, it may occur faster than communication can intervene, and the interacting systems may not be following cost-benefit logic that de-escalation dialogue can reach. C captures both the speed and the intentionality dimensions. A concerns unintended signalling, which is part of the problem but less complete than C. B concerns the inability to recall systems, which is a related operational constraint. D concerns attribution of responsibility to a negotiating partner, which is a related but narrower point.
16
The passage describes an "AI arms race in cyberspace." What specific dynamic constitutes this as a race rather than simply parallel development of AI capabilities?
CORRECT: B The passage describes the AI arms race as "accelerating the action-reaction cycle" — the offensive AI evades defences, defensive AI catches up, offensive AI adapts to evade the new defences, and so on. This is what makes it a race rather than parallel development: each side's advances directly respond to and shape the other's, creating a coupled dynamic. B captures this action-reaction coupling. A concerns resource competition for talent and infrastructure, which is a race in a different sense — competitive acquisition rather than the coupled action-reaction dynamic the passage describes. C concerns first-mover advantage, which is a property of some arms races but not the specific dynamic the passage identifies. D concerns general AI advances translating into cyber advantages, which is related but not the reactive coupling the passage describes.
Passage 4 Score
/4

P 05
Zero-Day Vulnerabilities, State Stockpiling & the Equities Problem
Passage Timer
10:00
Read the Passage

A zero-day vulnerability is a software flaw unknown to the vendor and therefore unpatched — "zero days" of advance warning have been available to defenders. States that discover or purchase zero-day vulnerabilities face a decision between disclosing the flaw to the vendor for patching, which strengthens cybersecurity for all users of the affected software, and retaining the flaw as an intelligence or offensive capability. The equities problem — weighing the intelligence and offensive benefits of a zero-day against the defensive costs of leaving a vulnerability unpatched in widely used software — has no clean resolution. The magnitude of both sides depends on factors that are difficult to estimate: how likely are adversaries to discover the same vulnerability independently? How extensively is the vulnerable software deployed in systems the state itself depends on? How significant is the intelligence access or offensive capability the vulnerability enables?

The US Vulnerabilities Equities Process (VEP), formalised in 2017, institutionalised the framework through which the US government decides whether to disclose or retain the zero-day vulnerabilities it holds. The VEP requires that classified vulnerabilities be reviewed by a cross-agency process that includes both offensive-oriented agencies (NSA, Cyber Command) and defensive-oriented ones (CISA, Commerce), and that a default bias toward disclosure be applied unless specific equities justify retention. The process has been criticised on two grounds. First, the bias toward retention in practice: the agencies with offensive interests in retaining vulnerabilities have structural advantages in the review process — they discovered or purchased the vulnerability, they are most aware of its intelligence value, and they frame the disclosure question — that tend to favour retention despite the nominal disclosure default. Second, the process is classified and its outputs are not publicly disclosed, making independent assessment of whether the nominal default is being applied impossible.

The most serious systemic risk from state zero-day stockpiling is demonstrated by the 2017 Shadow Brokers leak, in which NSA-developed exploit tools including EternalBlue — based on a zero-day in the Windows SMB protocol — were released publicly. EternalBlue was subsequently used in the WannaCry and NotPetya attacks, which caused billions of dollars in global damage, including to critical infrastructure. The NSA had retained the vulnerability for approximately five years before disclosing it to Microsoft, which patched it two months before the leak. The incident illustrates the externality that state zero-day stockpiling imposes on the broader digital ecosystem: retaining a vulnerability in widely deployed software creates systemic risk for all users of that software, and that risk materialises if the stockpile is breached, if the vulnerability is independently discovered by adversaries, or if the capability is used in operations that allow the exploit to be reverse-engineered.

Questions · Passage 05
17
The equities problem involves weighing intelligence/offensive benefits against defensive costs. The passage identifies three factors that make this calculation "difficult to estimate." Which factor is most asymmetrically difficult for the retaining state to assess accurately?
CORRECT: B Among the three factors, the probability of adversary independent discovery is most asymmetrically difficult for the retaining state. The retaining state knows it has discovered the vulnerability (reducing its estimate of others' likelihood of finding it), but adversary discovery is determined by factors the retaining state cannot fully observe — adversary research programmes, commercial zero-day market activity, and the intrinsic discoverability of the flaw. The retaining state will systematically underestimate this probability because it frames the question from the perspective of its own exclusive knowledge. B captures this asymmetric information problem. A concerns intelligence value over time, which is difficult but the state has direct operational knowledge informing this. C concerns own-infrastructure deployment, which is difficult due to distributed inventories but is accessible through inventory processes. D concerns offensive capability against adversary defences, which is uncertain but assessable through intelligence collection.
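The sensitivity of the equities decision to this hard-to-estimate factor can be shown with a back-of-envelope expected-cost calculation. Every number below is an assumption for illustration (the passage gives no figures); the point is only that the retention calculus flips on exactly the parameter — the annual rediscovery probability — that the retaining state is worst placed to estimate.

```python
# Back-of-envelope equities calculation for a retained zero-day.
# All parameters are illustrative assumptions; none come from the passage.

def expected_retention_cost(p_rediscovery_per_year: float,
                            years_retained: float,
                            harm_if_exploited: float) -> float:
    """Expected defensive cost of retention: the probability the flaw is
    independently found at least once during the retention window,
    times the assumed ecosystem-wide harm if it is then exploited."""
    p_found = 1 - (1 - p_rediscovery_per_year) ** years_retained
    return p_found * harm_if_exploited

INTEL_VALUE = 50.0   # assumed value of the access the flaw enables
HARM = 1000.0        # assumed ecosystem-wide harm if the flaw is exploited

# Reasoning from its own exclusive knowledge, the retaining state may
# assume independent rediscovery is rare — retention then looks justified...
print(expected_retention_cost(0.01, 5, HARM))  # ≈ 49.0  (< INTEL_VALUE)
# ...but a modestly higher rediscovery rate flips the calculus.
print(expected_retention_cost(0.05, 5, HARM))  # ≈ 226.2 (> INTEL_VALUE)
```

Under these assumptions, a shift in the annual rediscovery estimate from 1% to 5% moves the expected cost of a five-year retention from well below to several times the assumed intelligence value — which is why systematic underestimation of that probability biases the process toward retention.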
18
The passage says offensive-oriented agencies have "structural advantages" in the VEP review process that "tend to favour retention despite the nominal disclosure default." What makes these advantages structural rather than simply reflecting offensive agencies' stronger preferences?
CORRECT: C The passage identifies the structural advantages as: offensive agencies "discovered or purchased the vulnerability, they are most aware of its intelligence value, and they frame the disclosure question." These are positional advantages — arising from their role in the process — rather than simply stronger preferences. They control the information, the framing, and the classification, which shapes what others know and how they evaluate it. C captures this positional/structural character. A concerns seniority in hierarchy, which is not what the passage identifies. B concerns budget and resources, not mentioned. D concerns legal privilege of offensive over defensive considerations, which is not the passage's account.
19
The EternalBlue/Shadow Brokers incident illustrates "the externality that state zero-day stockpiling imposes on the broader digital ecosystem." What type of externality is this, and what makes it distinctive relative to other forms of state security externalities?
CORRECT: C The externality is distinctive because its harm is latent: the vulnerability exists but the harm only materialises under specific conditions — breach of the stockpile, independent discovery by adversaries, or operational exposure. The affected parties (all software users) are unaware of the risk, cannot observe it, and cannot take self-protective action. This makes the expected harm nearly impossible for the state to factor into its decision and makes the externality qualitatively different from immediate harms. C captures this latent, probabilistic character. A correctly identifies it as a negative externality on a global population but does not capture what makes it distinctive relative to other security externalities. B reframes it as a positive externality for adversaries, which is an interesting observation but not the main externality the passage identifies. D draws an analogy to nuclear material storage, which is illuminating but substitutes an analogy for analysing what makes this externality distinctive.
20
The passage notes that the VEP's outputs are not publicly disclosed, "making independent assessment of whether the nominal default is being applied impossible." What does this absence of transparency imply about the legitimacy of the VEP as a governance mechanism?
CORRECT: C The legitimacy deficit is specifically about governance of externalities: the VEP's retention decisions impose costs on people who are not represented in the process and cannot see what decisions are being made. This is distinctive from general transparency concerns — the problem is not just that citizens can't see a classified process, but that the specific affected population (global software users bearing the externality) has no participatory mechanism. C captures this. A identifies the democratic accountability deficit but frames it generically rather than specifically about the externality and its affected population. B says the VEP is a public relations exercise, which is a possible cynical interpretation but the passage's critique is about the structural inability to verify whether the process works as intended. D prescribes an international governance solution, which is a policy recommendation not an implication about current legitimacy.
Passage 5 Score
/4
Cyber — Total Score
/20
Want to go deeper? Explore structured courses by GRADSKOOL Learning — built for CAT, GMAT, and beyond.
Check out our courses →
_category_undefined · 111
The Entropology of the Exam
1 passage · 4 questions · beyond CAT · ∞ min total
SCORE
/4
> initialising_entropology_module.exe
> loading passage 111/110 ... [OVERFLOW]
> the 23rd category is unclassified. proceed? [Y/n]
> Y
P 111
The Entropology of the Exam
Read the Passage

The human obsession with categorisation is a defensive reflex against the sheer, unbridled chaos of the universe. In the preceding 110 passages, you have traversed the neat silos of Philosophy, Physics, and Geopolitics. You have been trained to believe that if you can label a thing, you can master it. Yet the 111th step of any journey is rarely a continuation of the path; it is the moment the path dissolves into the undergrowth. This is the domain of Entropology — not the study of man, but the study of his undoing.

Standardised testing is the peak of this "Lego-brick" reality. It assumes that a complex intellectual consciousness can be measured by its ability to navigate a 600-word cage. The "Main Idea" is often a ghost, a consensus reached by committee to ensure that no one is too right or too wrong. When a student identifies a "Tone," they are not engaging with a human soul; they are identifying a frequency. We have turned the wild, sprawling jungle of human thought into a series of manicured hedges. We clip the stray branches of ambiguity and call it "Logic."

However, true mastery lies in recognising the "Glitch." The Glitch is the moment where the passage stops being a source of information and starts being a mirror. In this 111th slot, the quirkiness is not a bug; it is the feature. While the previous 110 passages asked you to look at the world, this one asks you to look at the screen. It asks why you are here, leaning into the blue light, hunting for a "Correct Option" as if it were a survival necessity.

The badassery of the polymath is not found in knowing the 22 categories listed in the nav bar. It is found in the realisation that the 23rd category — the unclassified — is where the actual truth resides. Everything else is just a very expensive rehearsal for a play that has no audience. The silence between the words is where the entropy lives. If you can read that silence, you have finally passed the test.

Questions · Passage 111
1
Which of the following best captures the author's primary critique of standardised testing?
CORRECT: B The passage argues that testing is a "defensive reflex" that tries to measure "complex intellectual consciousness" by putting it in a "600-word cage" of manicured categories. The critique is structural: the act of reduction itself is the problem. A mentions diversity, but the author implies that even 22 categories would leave the method flawed. C is a minor detail. D misinterprets the "rehearsal" metaphor — it refers to the act of testing itself, not a future career.
2
The author coins the term "Entropology" most likely to suggest which of the following?
CORRECT: B The text defines Entropology as "not the study of man, but the study of his undoing," introduced at the "moment the path dissolves." The neologism fuses entropy (the tendency toward disorder) with anthropology (the study of humans) — suggesting that the study of human classification is simultaneously the study of its failure. A is too literal; the author wants to break categories, not add one. C imports the thermodynamic definition without the humanistic critique. D narrows the scope to individual student failure rather than systemic collapse.
3
What is the "Glitch" referred to in the third paragraph?
CORRECT: C The Glitch is described as the passage becoming "a mirror" and the reader asking "why you are here" — a moment of meta-cognitive reflection that breaks the fourth wall between reader and text. This is specifically a self-referential, systemic recognition rather than a personal error or a general textual property. A is a literal reading of a metaphorical term. B describes a common test experience but not the reflective "mirror" the author describes. D names a general literary quality; the Glitch is specifically the reader's realisation in this context.
4
The tone of the passage can best be described as:
CORRECT: B "Subversive" because the passage undermines the very testing apparatus it inhabits — it exists inside a test to critique the existence of tests. "Philosophical" in its sustained engagement with categorisation, entropy, and the limits of structured knowledge. A is wrong because the author explicitly mocks academic silos rather than inhabiting them. C is incorrect — the tone is biting and, at points, cynical ("a very expensive rehearsal for a play that has no audience"). D fails because the author takes a strong, subjective, even provocative stance rather than maintaining analytic distance.
111th Passage · Score
/4
> you have reached the edge of the map
> the 23rd category remains undefined
> _