FEBRUARY 17, 2026

The Paradox

Simpson's Paradox — where an aggregate trend reverses inside its own subgroups — has operated inside the Federal Reserve for fifty-three years. The choice of inflation average has never been neutral.

Quick Summary

Simpson's Paradox — when a trend visible in aggregated data reverses inside every subgroup — is among the most consequential phenomena in statistics. The FOMC Insight Engine tested whether the same paradox operates inside the Federal Reserve's treatment of inflation data across fifty-three years.

The archive confirms a persistent institutional pattern spanning five chairmanships and six inflation episodes. The Federal Reserve repeatedly decomposes aggregate inflation into components, identifies technically defensible reasons to discount whichever components argue for tighter policy, and presents the resulting "underlying" measure as analytically neutral when it is analytically motivated. The pattern operates regardless of whether the measurement critique is wrong, correct, or mixed — and it has been identified in real time by internal dissenters who were systematically overridden.

Bottom line: the choice of inflation average is the policy.

In 1973, the University of California, Berkeley was accused of discriminating against women in graduate admissions. The aggregate numbers were damning: 44 percent of male applicants were admitted, compared to 35 percent of female applicants. A federal investigation seemed justified.

Then a statistician named Peter Bickel decomposed the data by department. In four of the six largest departments, women were admitted at rates equal to or higher than men. The aggregate bias vanished — and in some cases reversed — once the subgroups were examined separately. The explanation was structural: women applied disproportionately to the most competitive departments, which had low admission rates for everyone. The aggregate told one story. The decomposition told the opposite. Both were arithmetically correct. The question was which average to present — and the choice of average determined whether Berkeley appeared guilty or innocent.

The phenomenon is called Simpson's Paradox: a trend that appears in aggregated data reverses when the data is decomposed into subgroups. The paradox resolves once you recognize that the aggregation method — the choice of how to weight the components — determines the conclusion. At Berkeley, presenting both averages resolved the controversy. In most institutional settings, only one average is shown. The choice of which one is rarely announced as a choice.
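The reversal is easy to reproduce. Below is a minimal sketch with invented numbers (not the actual Berkeley figures) in which women are admitted at a higher rate in every department, yet a lower rate overall:

```python
# Invented illustration of Simpson's Paradox (not the actual Berkeley data):
# within each department women are admitted at a higher rate than men,
# yet the aggregate admission rate runs the other way.
applicants = {
    # dept: (men_applied, men_admitted, women_applied, women_admitted)
    "easy": (800, 480, 100, 70),   # men 60%, women 70%
    "hard": (200, 40, 900, 225),   # men 20%, women 25%
}

def rate(admitted, applied):
    return admitted / applied

# Within every subgroup, women are ahead.
for dept, (ma, mad, wa, wad) in applicants.items():
    assert rate(wad, wa) > rate(mad, ma)

# Aggregate: women applied mostly to the hard department, so the
# implicit weighting flips the comparison.
men = (sum(v[0] for v in applicants.values()),
       sum(v[1] for v in applicants.values()))
women = (sum(v[2] for v in applicants.values()),
         sum(v[3] for v in applicants.values()))

print(rate(men[1], men[0]))      # 0.52  (men, aggregate)
print(rate(women[1], women[0]))  # 0.295 (women, aggregate)
```

Which average "describes reality" depends entirely on whether the department weighting is held fixed, which is the choice Bickel made visible by publishing both.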

The same paradox operates inside the most consequential number in the global economy: the Federal Reserve's chosen measure of inflation. For fifty-three years, the committee has possessed the full decomposition — every component, every sector, every measurement artifact — and has consistently presented the public with whichever aggregation produces the most accommodative reading. The choice is never announced as a choice. It is presented as the signal extracted from noise. But the archive records every deliberation, every alternative metric considered, and every dissenting voice that identified the pattern in real time.

These are testable claims.

We searched 230,000+ passages across 90 years of Federal Reserve documents.

• • •

The Fine Reasons

On September 27, 1994, Gary Stern, President of the Federal Reserve Bank of Minneapolis, delivered the most concise diagnosis of a pattern that the FOMC Insight Engine would later document across five decades of deliberations.

"I'm concerned about our ability to analyze the price data and come up with all these fine reasons why something unusual is going on and the rise in prices won't last. We can tell those kinds of tales, but I must say that that kind of analysis has led to past policy errors."
Gary Stern, President, Federal Reserve Bank of Minneapolis, FOMC Meeting, September 27, 1994

Stern was not objecting to compositional analysis itself. He was identifying a directional bias in how the committee deployed it. The analytical framework — decompose inflation into components, identify the volatile or idiosyncratic elements, argue that the "underlying" trend is more benign than the headline — was technically defensible in any individual instance. The problem was cumulative. Each time the committee found a component to exclude, the exclusion supported the same conclusion: true inflation was lower than measured, and therefore tighter policy was unnecessary.

He sharpened the point in an earlier meeting that July, naming the institutional mechanism with a precision the archive would validate thirty years later.

"But if we continue to look at it this way, we are always going to say, at least at first glance, why not avoid a tighter policy or why not indeed go to an easier policy."
Gary Stern, President, Federal Reserve Bank of Minneapolis, FOMC Meeting, July 6, 1994

The archive tested Stern's claim against every subsequent decade. In the 1970s, the components excluded were food and energy — "exogenous" factors attributed to OPEC and weather rather than monetary policy. In the 1990s, the exclusion took a different form: a measurement discount, applied to the entire Consumer Price Index, based on the Boskin Commission's finding that CPI overstated true inflation by roughly 1.1 percentage points. In 2017, the excluded component was wireless telephone service prices — a specific line item whose decline the staff labeled "idiosyncratic," justifying continued accommodation despite a tightening labor market. In 2021, the exclusion was used cars and supply chain effects, packaged under the word "transitory." In 2023 and 2024, the excluded component was shelter — a lagged measurement artifact consuming one-third of the CPI basket.

In each of these episodes, the technical observation was defensible. Food prices do spike with weather. CPI did overstate inflation. Cellular data plans did experience a one-time price drop. Supply chains were disrupted. Shelter measurement does lag market rents. The "fine reasons" were valid in isolation. Stern's warning was that their validity is precisely what makes them dangerous — because the committee can always find one, and the one it finds systematically argues for the same policy direction.

Loretta Mester, President of the Cleveland Fed, identified the same pattern twenty-seven years after Stern, in September 2021, when she warned that the "transitory" label had become "a less useful description of the inflation situation." The Engine scored her prescience at 1.0 — the highest rating in the system. Within three months, the committee was forced to retire the word. Within nine months, inflation reached forty-year highs. The "fine reasons" had held for exactly as long as Stern predicted: until the systemic damage was already done.

The question the archive forces is not whether the committee's compositional analysis was ever wrong. It is whether the committee has ever used compositional analysis to argue that measured inflation was too low — that the headline understated the true inflationary pressure and therefore tighter policy was warranted. The answer requires tracing the pattern to its origin.

• • •

The First Exclusion

The concept of core inflation — the exclusion of food and energy prices from the aggregate measure — was not a statistical discovery. It was a policy construction, and the archive documents its invention with a clarity that subsequent decades of institutionalization have obscured.

In 1973, as inflation accelerated past the levels that conventional monetary theory attributed to demand pressure, Chairman Arthur Burns began publicly decoupling the headline price index from what he called the "basic trend." His framing was explicit.

"The upsurge of the price level this year hardly represents either the basic trend of prices or the response of prices to previous monetary or fiscal policies."
Arthur Burns, Chairman, Federal Reserve Board, Public Statement, 1973

The rhetorical move was not to deny that prices were rising. It was to deny that the rise was the Fed's responsibility. Staff had presented a compositional breakdown showing that food and fuel accounted for a large share of the increase. Burns converted this factual decomposition into a causal exoneration: if food and fuel were "exogenous" — driven by OPEC and drought rather than monetary expansion — then the "basic trend" was more benign, and the policy stance was appropriate. By 1974, he was telling the public that "60 per cent of the rise in the consumer price index in 1973 stemmed from increased prices of food and fuel," as though a compositional breakdown were an absolution.

The committee's internal deliberations reveal that the framing was not accidental. Vice Chairman George Mitchell asked staff to assess the proposition that "recent increases in prices of oil and foods were one-time events which had led to unavoidable increases in the general price level but which had no necessary implications for the subsequent rate of inflation." Governor J. Charles Partee argued that food price increases resulting from drought meant that "fundamentally there really has not been an underlying deterioration." By 1978, staff had formalized the exclusion, producing tables for "All items less food and energy" that showed core inflation running 100 to 200 basis points below the headline. In May 1978, the core rate was reported at 8.5 percent while headline CPI stood at 10.7 percent.

One voice challenged the architecture directly. Lawrence Roos, President of the Federal Reserve Bank of St. Louis, refused to accept the premise that inflation could be decomposed into the Fed's fault and someone else's fault.

"Sometimes I think we like to believe that we are prisoners of exogenous factors. What part of that 10 percent is a reflection of monetary policy and what part is a reflection of energy and food prices?"
Lawrence Roos, President, Federal Reserve Bank of St. Louis, FOMC Meeting, July 11, 1979

Roos was making the anti-Simpson's-Paradox argument: stop decomposing the aggregate to find a convenient subgroup. The aggregate is the aggregate. Sustained inflation of any component requires monetary validation — without accommodative policy, supply shocks produce one-time level adjustments, not persistent multi-year price spirals. By excluding food and energy, the committee had constructed an alternative average that told a different story from the headline, and it was presenting that alternative as the "basic trend" while the headline was treated as noise.

The Volcker shock proved Roos right — not that supply shocks are imaginary, but that their persistence is monetary. When Paul Volcker raised the federal funds rate to 20 percent in 1981, inflation broke — food, energy, and all. The "exogenous" factors that Burns had argued were beyond the Fed's control proved entirely dependent on monetary accommodation for their persistence. The Engine scored Roos's prescience at 1.0. The committee had spent a decade excluding the components that told it to tighten. It took a new chairman, willing to ignore the decomposition and respond to the aggregate, to end the inflation.

Core inflation survived Burns's chairmanship. It became the standard analytical lens — not because it was proven analytically neutral, but because it had become institutionally indispensable. The question was whether the committee would ever use it symmetrically.

• • •

The Measurement Discount

In 1996, the Boskin Commission reported that the Consumer Price Index overstated true inflation by approximately 1.1 percentage points due to substitution bias, quality adjustment failures, and outlet effects. The finding was technically legitimate. The Bureau of Labor Statistics subsequently implemented corrections. This was genuine measurement science, not motivated reasoning about the data.

The committee converted it into a policy instrument anyway.

Chairman Alan Greenspan led the conversion. In July 1996, he told the committee that the emphasis on CPI had been a mistake, and began constructing a framework in which measured inflation could be systematically discounted.

"The emphasis that we have been putting on the consumer price index, I think in retrospect, is turning out to have been a mistake... the CPI is biased not only with respect to the absolute amount of change — the 1/2 to 1-1/2 percentage point bias — but there is also increasing evidence that the bias is increasing."
Alan Greenspan, Chairman, Federal Reserve Board, FOMC Meeting, July 2, 1996

Staff had presented the Boskin findings as a range of uncertainty — David Stockton told the committee in September 1996 that "a range roughly of 1/2 to 1-1/2 percent is still a reasonable estimate for the measurement bias." Staff memos cautioned that "year by year, these differences fluctuate considerably and can give conflicting signals." This was honest analytical work: a range, with caveats, presented as one input among many.

Greenspan stripped away the uncertainty. By January 1997, he was testifying before Congress in terms that bore no resemblance to the staff's cautious range.

"There is virtually no chance that the CPI as currently published understates the rate of growth... there is almost a 100 percent probability that we are overcompensating."
Alan Greenspan, Chairman, Federal Reserve Board, Congressional Testimony, January 30, 1997

A "reasonable estimate" of a range had become "100 percent probability." The transformation served a specific purpose: if inflation was systematically overstated, then a measured 3 percent inflation rate might represent "true" price stability, and the committee did not need to respond to the low unemployment rates that Phillips Curve models indicated would trigger acceleration. The measurement discount created room for accommodation that the raw data did not provide.

The committee chose the Boskin Commission's larger bias estimate over the Congressional Budget Office's more conservative range of 0.25 to 1.0 percent — because the larger discount provided more room. It rejected a staff proposal to incorporate asset prices into the inflation index — because that would have produced higher readings during the dot-com expansion. It rejected the staff's own suggestion of a "methodologically consistent" index when the alternative would have been more hawkish. At each fork in the road the archive documents, the institution chose the path that produced the lower inflation reading.

The decisive test came seven years later, when Janet Yellen discovered that the measurement argument cuts both ways. In 2004, she observed that a methodology change had revealed that "inflation was really worse in 2004 than we thought" — a technical revision that ran in the opposite direction from the Boskin discount. The committee treated this finding as an anomaly rather than as evidence that measurement uncertainty is bidirectional. (The Measure documented this institutional tendency — the Fed's choice of inflation gauge serving convenience rather than analytical rigor.)

The Boskin episode proved something that the Burns era could not. Burns's food/energy exclusion was arguably wrong — Roos showed that monetary policy was the primary driver. The committee could dismiss the 1970s pattern as a mistake corrected by Volcker. But the Boskin critique was right. CPI did overstate inflation. And the committee weaponized the valid finding just as readily as Burns had weaponized the invalid one. The pattern was independent of whether the underlying technical observation was correct. What mattered was what the institution did with it.

Greenspan himself demonstrated a remarkable self-awareness of what he was building. In July 1997, he warned his colleagues that the institutional environment was shifting "toward finding any reason to presume that a sound-money, hawkish view is the wrong view." He was describing the dynamic he was leading. The measurement discount had become what the archive identifies as a one-way ratchet: when bias suggested inflation was overstated, the committee incorporated it immediately; when revisions suggested inflation was understated, the finding was deferred for further study. The ratchet turned, in every episode the archive documents, in the accommodative direction.

But the Burns and Boskin episodes share a limitation: both are visible only in retrospect, through transcripts released with a five-year lag and assessed against outcomes that took a decade to materialize. What the archive could not yet show was whether the pattern operated in real time — whether the three institutional layers could be caught in the act of transforming a precise technical finding into an unfalsifiable public narrative, with each stage of dilution documented as it happened.

• • •

The Pipeline

The 2022–2024 inflation episode provided exactly that case, because the three institutional layers — technical staff analysis, committee deliberation, and public communication — produced records that the Engine could triangulate against each other while the episode was still unfolding.

By October 2022, Federal Reserve staff had identified that shelter inflation — comprising roughly one-third of the CPI basket — was being driven by a mechanical lag between market rents (measured by Zillow, CoreLogic, and Apartment List) and the official CPI measure (which tracks the stock of existing leases rather than the flow of new ones). The lag was approximately twelve months. Private-sector indices had peaked in April 2022; the official measures had not yet turned. This was technically valid and analytically useful. It meant that a significant portion of the inflation the public was seeing in the published data reflected housing costs that had already peaked in the market.
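The mechanics of the lag can be sketched in a few lines. Under a stylized assumption (made here for illustration, not taken from the staff analysis) that every lease runs twelve months and renewals are spread evenly, the official stock measure behaves like a trailing twelve-month average of new-lease rents, so it keeps climbing well after market rents have flattened:

```python
# Stylized model of the shelter lag. Assumption for illustration:
# 12-month leases renewing uniformly, so the official stock measure is
# a trailing 12-month average of new-lease (market) rents.
market = [100 + 2 * min(t, 24) for t in range(48)]  # new-lease rents flatten at month 24

def stock_measure(rents, t, term=12):
    # average rent across leases signed in the last `term` months
    window = rents[max(0, t - term + 1): t + 1]
    return sum(window) / len(window)

official = [stock_measure(market, t) for t in range(48)]

# Market rents peak at month 24, but the official measure is still
# rising toward its plateau a year later.
assert market[24] == max(market)
assert official[24] < official[35]
```

Private indices such as Zillow's track the market series; CPI shelter tracks the stock series. Both are correct measures of different objects, which is what made the staff's decomposition technically valid.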

Three governors — Philip Jefferson, Christopher Waller, and Lisa Cook — all used an identical quantitative framework in their October 2022 speeches, drawing from the same staff briefing.

"If the price index for housing services continues to increase at the recent monthly average rate of around 0.6 percent for the next several months, then other core price increases would need to moderate considerably, to a monthly average of a bit less than 0.2 percent."
Philip Jefferson, Governor, Federal Reserve Board, Public Speech, October 4, 2022

This was a precise, conditional, falsifiable statement. It defined a 40-basis-point "inflation budget": shelter was consuming nearly all of the allowable inflation for the 2 percent target to hold. For the target to be met, either shelter had to drop to 0.2 percent monthly or the entire non-housing basket had to run at near-zero. The conditionality was load-bearing — it told the committee and the public exactly what had to happen for the disinflationary scenario to materialize.
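The budget arithmetic can be reconstructed approximately. Assuming a shelter weight of roughly one-third (the CPI figure cited above; the staff's exact weights are an assumption here), shelter at 0.6 percent monthly contributes about 0.2 points per month, which annualizes to more than the entire 2 percent target:

```python
# Back-of-envelope version of the inflation budget. The one-third shelter
# weight is taken from the CPI figure cited above; the staff framework's
# exact weights are not given here, so treat this as an illustration.
shelter_weight = 1 / 3
shelter_monthly = 0.6      # percent per month, per the October 2022 speeches
annual_target = 2.0        # percent per year

shelter_contrib = shelter_weight * shelter_monthly        # ~0.2 pp per month
shelter_annualized = ((1 + shelter_contrib / 100) ** 12 - 1) * 100

# Shelter alone annualizes to roughly 2.4 percent, exhausting the 2 percent
# budget, so the non-housing two-thirds of the basket must average near zero.
assert shelter_annualized > annual_target
monthly_room = annual_target / 12 - shelter_contrib       # negative: no room left
assert monthly_room < 0
```

This is why the conditionality was load-bearing: the disinflationary scenario only closes if shelter falls to 0.2 percent monthly or everything else runs essentially flat.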

Then the information passed through the institutional layers, and the conditionality was stripped away.

On November 2, 2022, Chair Jerome Powell translated the staff's quantitative framework into a metaphor.

"The implication is that there are still, as people, as non-new leases roll over and expire, right, they're still in the pipeline... But at some point, once you get through that, the new leases are going to tell you — what they're telling you is, there will come a point at which rent inflation will start to come down."
Jerome Powell, Chair, Federal Reserve Board, Press Conference, November 2, 2022

The "pipeline" metaphor preserved the direction of the staff's finding but erased its conditions. The 0.6 percent threshold disappeared. The 0.2 percent non-housing requirement disappeared. The conditionality — "if X continues at Y, then Z must happen" — became an unconditional narrative: disinflation is in the pipeline and will arrive. The "when" became vague. The "how much" vanished entirely. The critical insight — that shelter was consuming the entire inflation budget — was simply dropped.

By March 2024, the pipeline metaphor had degraded further. The expected convergence had not materialized on schedule. The lag was proving longer than twelve months. Some governors were warning internally about a shelter resurgence. Powell's public language had evolved from the conditional quantitative framework of October 2022 through the unconditional pipeline narrative of November 2022 to its final form.

"There's a little bit of uncertainty about when that will happen, but there's real confidence that they will show up eventually over time."
Jerome Powell, Chair, Federal Reserve Board, Press Conference, March 20, 2024

The dilution gradient was now complete. The technical layer had produced: "0.6 percent monthly shelter, 0.2 percent required for non-housing — specific, conditional, falsifiable." The committee layer converted this to: "pipeline of new leases will flow through — directional, unconditional, unfalsifiable." The public layer reduced it to: "real confidence they will show up eventually — content-free reassurance." The Engine estimated 80 percent information dilution between staff analysis and public communication on this specific topic — measured as the fraction of quantitative content (numbers, conditions, ranges, explicit uncertainty markers) present in the technical layer but absent from the public layer.
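That definition of dilution has a simple operationalization. The Engine's actual tokenization is not published, so the sketch below is a toy version: count quantitative tokens (numbers, conditionals, qualifiers) in the technical statement and measure the fraction that fail to survive into the public one:

```python
import re

# Toy version of the dilution metric (the Engine's real tokenizer is not
# published; the pattern and token classes here are illustrative).
QUANT = re.compile(
    r"\b\d+(?:\.\d+)?\b|\bpercent\b|\bif\b|\bthen\b|\baround\b|\bless than\b",
    re.IGNORECASE,
)

technical = ("If the price index for housing services continues to increase "
             "at the recent monthly average rate of around 0.6 percent, "
             "then other core price increases would need to moderate to a "
             "monthly average of a bit less than 0.2 percent.")
public = ("There's a little bit of uncertainty about when that will happen, "
          "but there's real confidence that they will show up eventually.")

tech_tokens = {m.lower() for m in QUANT.findall(technical)}
pub_tokens = {m.lower() for m in QUANT.findall(public)}

# Fraction of the technical layer's quantitative content missing in public.
dilution = 1 - len(tech_tokens & pub_tokens) / len(tech_tokens)
print(round(dilution, 2))  # 1.0 on this pair: every quantitative token is lost
```

On this particular pair of statements the toy metric reads 1.0; the Engine's 80 percent figure is an average over the full body of shelter communication, not a single quote.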

While the public was hearing "real confidence," Governor Christopher Waller was raising an alarm that did not reach the press conference podium.

"I have been concerned since May about a resurgence of housing services inflation... this number heightens my concern that housing services inflation has not slowed, and may not slow, to the rate needed to sustain a return to our 2 percent target."
Christopher Waller, Governor, Federal Reserve Board, Public Speech, October 18, 2023

Waller had been among the three governors who used the precise 0.6 percent framework in October 2022. He was the person closest to the technical analysis and the first to recognize when the pipeline narrative was failing. The committee majority, which had received the same staff briefing but internalized only the metaphorical version, was slower to update. By May 2024, the FOMC minutes conceded that "incoming data pointed to more persistence in inflation" than the March projections had anticipated. Some participants argued that "the recent increases in inflation had been relatively broad based and therefore should not be discounted as merely statistical aberrations."

They were making Stern's argument from 1994, updated for a new decade: stop telling tales about why the price increases will not last. By September 2024, Powell was forced to acknowledge that the lag was taking "several years rather than just one or two cycles of annual lease renewals" — an admission that the original pipeline framing had been too confident, but delivered in language soft enough that the admission itself required no accountability, because the original framing had been stripped of the conditionality that would have made its failure measurable.

This is the institutional consequence of information dilution. The technical layer produces conditional, updatable analysis. The public layer produces unconditional narrative commitments. When the conditions change, the technical layer can update. The public layer is locked into its prior narrative and must either contradict itself or quietly shift the goalposts. The committee chose the latter — from "several months" to "over the year ahead" to "eventually over time" to "several years" — and each shift eroded credibility precisely because the original public framing had been stripped of the uncertainty that would have allowed graceful revision. (The Position traced the full architecture of this institutional filtering — from staff precision through committee framing to public simplification.)

The shelter episode demonstrated one half of the pattern: when a component is running inconveniently high, the committee identifies a technically valid reason to discount it and presents the resulting "underlying" measure as more representative of true price pressures. But Stern's warning implied a stronger claim — that the analytical framework is deployed asymmetrically, systematically in the accommodative direction. Proving asymmetry required finding an episode where the same analytical framework was deployed in the opposite direction.

• • •

The Instrument That Sees Both Ways

The evidence arrived from a corner of the price index that most inflation commentary ignores entirely: the non-market-based components of the Personal Consumption Expenditures deflator.

Core PCE — the Federal Reserve's preferred inflation gauge — includes components that do not reflect market transactions. Portfolio management services are priced as a share of assets under management, so that rising equity valuations mechanically register as higher service prices even when the actual cost of financial advice has not changed. Nonprofit hospital charges reflect administered Medicare and Medicaid payment schedules rather than competitive pricing. Imputed financial services are statistical constructions with no transactional referent.
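The portfolio-management mechanic is worth making concrete. Because the measured "price" of the service is fee revenue per account, and fees are a fixed share of assets, an equity rally registers as service-price inflation even when the fee schedule never changes. A minimal sketch with invented figures:

```python
# Minimal illustration of the AUM mechanic, using invented numbers:
# the measured "price" of portfolio management is fee revenue per account,
# so a market rally shows up as service-price inflation even though the
# fee rate never changes.
fee_rate = 0.01                # 1% of assets under management, unchanged
aum_before = 1_000_000        # account value before a rally
aum_after = 1_200_000         # same account after a 20% equity rally

price_before = fee_rate * aum_before   # measured "price" of the service
price_after = fee_rate * aum_after     # registers as a 20% price increase

measured_inflation = price_after / price_before - 1
assert abs(measured_inflation - 0.20) < 1e-9
```

The 20 percent "inflation" here carries no information about the cost of financial advice; it is the stock market leaking into the price index.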

As early as 1999, David Stockton, Director of the Division of Research and Statistics, questioned whether these components belonged in the Fed's policy instrument at all.

"Clearly, one might argue that the nonmarket price portion of the PCE is not so relevant."
David Stockton, Director, Division of Research and Statistics, FOMC Meeting, February 2, 1999

By 2005, David Wilcox, Deputy Director of the same division, had reached a more alarming conclusion about the measurability of these components.

"The very essence of these components we're talking about is that it's vastly more difficult, maybe even impossible, to do any kind of benchmark study to know whether we have the measurement right."
David Wilcox, Deputy Director, Division of Research and Statistics, FOMC Meeting, August 9, 2005

Wilcox was saying that roughly a quarter of the core PCE index — the metric used to calibrate the most important price in the global economy — was unmeasurable in any verifiable sense. Staff could not test whether the imputed prices corresponded to reality because there was no "third source of the real world" to validate them against. The Fed's preferred inflation gauge contained a structural layer of noise that could not be separated from signal.

The staff did what competent technicians do: they built a workaround. By 2010, "Market-Based PCE" — which excluded the imputed and administered components — was formally included in Tealbook projections. Staff provided the committee with specific basis-point estimates of the wedge between market-based and total core PCE. In 2017, staff analysis quantified that administered healthcare prices alone were contributing 30 fewer basis points to core PCE than their historical average, concluding that core PCE would read 1.7 percent rather than its published level if healthcare services inflation were at its benchmark.

This is where the bidirectional deployment becomes visible. During 2013–2017, the non-market components were suppressing core PCE — making inflation look lower than market-based prices alone would suggest. The committee used this finding to argue that "true" inflation was actually higher than measured, closer to target than the headline indicated. This supported continued accommodation: the economy was closer to price stability than the published number suggested, so there was no need to normalize rates.

During 2022–2024, the logic reversed. Portfolio management fees — mechanically inflated by a rising stock market — were pushing core PCE higher than market-based prices alone. The committee used the same analytical framework, in the opposite direction, to argue that "true" inflation was lower than measured. This supported patience on further tightening: the inflation reading was distorted by a statistical artifact, so there was no need for additional rate increases.

Same analytical framework. Opposite direction. Same institutional function: justify the accommodative stance.

James Bullard, President of the St. Louis Fed, pushed in 2018 for the committee to formally decompose core PCE and exclude the non-market "blue components" that did not represent actual inflation dynamics. The staff had already built Market-Based PCE. The committee had it in the Tealbook. The question was whether to adopt it publicly as the primary gauge — to show the public both averages, as Bickel did with the Berkeley admissions data, rather than one.

The committee declined. It rejected the switch to Market-Based PCE for the same reason it had rejected the Trimmed-Mean PCE proposal in 2018 and the asset-price-inclusive index in 1997: the alternative metric would constrain policy flexibility. The committee feared that changing the public target would "look like they were manipulating the data to hit their targets" — an admission that the choice of metric was understood, internally, as a policy-relevant decision rather than a neutral analytical one. The contaminated index was retained precisely because its known flaws provided optionality. When the noise pushed the number down, the committee could invoke the noise to argue inflation was really higher. When the noise pushed it up, the committee could invoke the same noise to argue inflation was really lower.

This is the recognition that completes Stern's diagnosis. The committee does not merely use compositional analysis to argue that inflation is lower than measured — the pattern the Burns, Boskin, and shelter episodes document. It uses compositional analysis to argue whatever the policy requires. The direction of the decomposition tracks the policy preference, not the measurement science. The constant across five decades is not the direction of the choice. It is that in every episode the archive documents, the choice supports accommodation.

The constraint is now visible. The committee cannot adopt Market-Based PCE, Trimmed-Mean PCE, or any robust alternative as its public target without surrendering the flexibility that the contaminated index provides. It cannot show the public both averages — the published measure and the internal alternative — without revealing that the choice between them has been a policy decision all along. And it cannot continue presenting a single, selected average without perpetuating the asymmetry that Stern, Roos, Mester, and Waller identified. The institution built better instruments and chose not to use them, because the instruments it retained were more useful precisely in their imprecision. That choice, once made, is self-reinforcing: each year the committee relies on the flexible metric, the cost of switching to the accurate one — measured in credibility, in the retroactive admission of what the switch implies — grows larger. The degrees of freedom have been exhausted. The pattern will continue not because the institution lacks the capacity to measure inflation cleanly, but because clean measurement would constrain the policy discretion the institution values more.

Three findings would falsify this thesis. A sustained episode in which the committee used compositional decomposition to argue that measured inflation was too low and tightened policy in response. A public commitment to reporting both the headline and a robust alternative metric, with symmetric use of the gap. Or documented cases in which the committee chose the more hawkish decomposition when both readings were plausible. The archive spans ninety years. These episodes should be easy to find if the pattern is not real.

• • •

The Paradox

Bickel resolved the Berkeley paradox by showing both averages. The aggregate admission rate and the department-by-department rates were laid before the reader, who could judge which number described reality. The analytical contribution was not the discovery of either number. It was the insistence on showing both.

The Federal Reserve has never shown both. Across fifty-three years, five chairmanships, and six distinct inflation episodes, the committee has possessed the full decomposition — every component, every alternative metric, every robust estimator — and has presented the public with whichever aggregation supports its current policy preference. Burns presented "All items less food and energy" as the "basic trend" while headline CPI ran two points higher. Greenspan presented the Boskin discount as "100 percent probability" while staff called it a range. Powell presented the "pipeline" as inevitable while the 0.6 percent condition that made it falsifiable was stripped from public communication. In every documented episode, the alternative average existed internally. In every documented episode, the committee chose which one the public would see. And in every documented episode, the institutional dissenters — Roos, Stern, Mester, Waller — who identified the pattern were overridden until the inflationary consequences forced a belated correction. The choice of which average to present was the policy. It always has been.

The archive does not show an institution that cannot measure inflation accurately. It shows an institution that possesses multiple accurate measures and selects among them based on which story the selection tells. The selection is the policy instrument. The price index is the output. And the paradox — that the aggregate and the decomposition tell opposite stories — is not a statistical curiosity discovered in a university admissions office. It is the structural foundation on which five decades of Federal Reserve inflation communication rest.

Search the Archive Yourself

The FOMC Insight Engine contains 230,000+ searchable passages from Federal Reserve transcripts, Tealbooks, minutes, and speeches spanning 1936–2025. Every claim in this article can be verified.

Explore the Archive →
Konstantin Milevskiy • Builder of the FOMC Insight Engine • konstantine.milevsky@gmail.com