The enemy of knowledge is not ignorance, it’s the illusion of knowledge (Stephen Hawking)

It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so (Mark Twain)

Invest with smart knowledge and objective odds

Fear the Fear

February 17, 2026

My first post of 2025 was FEAR, predicting that the year would be dominated by Trump’s mantra: “Real power is, I don’t even want to use the word, fear.”

On April 7, I wrote FEARLESS, highlighting China’s “Stand Up!” moment as mighty and fearless China lead the rebellion.

FEARFUL, on April 28, brought fear right back to the bully after Mr. Market spoke loudly and clearly.

We now have another, different fear, some even saying an existential fear.

Dario Amodei, CEO of Anthropic, published a 19,000-word article, saying that AI is developing faster than expected. “AI is now writing much of the code at Anthropic. AI will be capable of a very wide range of human cognitive abilities—perhaps all of them.”

AI possesses “terrible empowerment”, able to help design weapons. “Biology is by far the area I’m most worried about.”

Also last week, more than 50 million people read Matt Shumer’s essay, “Something Big Is Happening” (thanks Paul).

“The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it’s too powerful to stop and too important to abandon. Whether that’s wisdom or rationalization, I don’t know.”

Suddenly, movies like The Terminator, The Matrix and Star Wars become less entertaining. Should we all watch the Dune series again?

Shumer’s warnings were direct, real:

“If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it.”

Amodei piled on: “new technologies often bring labor market shocks, but AI will have effects that are much broader and occur much faster.”

Coming after the recent release of Claude Opus 4.6 and its plug-ins that totally spooked investors in software and other “AI-vulnerable” industries, the fear of AI has swelled.

Curious to see if AI itself also thinks something big is happening, I queried Google’s Gemini 3.0 Pro (abridged, Gemini’s emphasis):

Q. The fear of AI seems to be going mainstream. If many people begin to seriously worry about how AI can impact their job, revenues, wealth, it may become a self-fulfilling prophecy and meaningfully increase the savings rate and cut spending which would hurt the economy, etc., etc..

That is a very astute economic observation. You’re touching on a concept often called the “Expectations Channel” in macroeconomics—where the anticipation of a future event causes people to change their behavior today, effectively bringing that future into reality sooner than expected.

If Shumer is right that “Something Big Is Happening,” the psychological fallout could indeed trigger a Paradox of Thrift scenario. Here is a breakdown of how that self-fulfilling prophecy might play out:

1. The Precautionary Savings Spike

When people fear for their long-term employability, they stop spending on discretionary items (travel, luxury goods, dining out) and start building “AI war chests.”

  • The Impact: Consumer spending accounts for roughly 70% of the GDP in economies like the US. Even a 2-3% shift toward savings across the “cognitive class” could trigger a technical recession.
  • The Irony: By trying to protect their personal wealth from a future AI threat, people could cause a present-day economic slowdown that leads to the very layoffs they feared.

2. The “Wait and See” Investment Freeze

This doesn’t just apply to individuals. Businesses might pause major capital expenditures or hiring because they aren’t sure which roles or technologies will be obsolete in 18 months.

  • If a CEO thinks an AI agent can do the work of a 10-person marketing team by next year, they won’t hire those 10 people today. This creates immediate unemployment based on future capabilities.

3. Asset Devaluation & Wealth Effect

If mainstream fear hits the housing market or the stock market (specifically companies deemed “vulnerable” to AI), the “Wealth Effect” reverses.

  • When people feel their assets are losing value, they feel poorer and spend even less. This creates a downward spiral where the lack of demand further hurts the companies’ revenues.

Key Thought: Economics is as much about psychology as it is about math. If enough people believe their “human capital” is depreciating, they will treat it as a sunk cost, leading to a massive contraction in human-led economic activity.

Q. Show me angst metrics currently influencing consumer credit and personal savings rates?

Recent data for Q1 2026 indicates that the “Expectations Channel” you predicted is becoming visible in the macro data. We are seeing a distinct shift in how the “cognitive class” (white-collar professionals) manages their liquidity compared to previous cycles.

1. The Spike in “Precautionary Savings”

The personal savings rate has seen a non-traditional uptick, particularly among households earning $125k+.

  • The Metric: The U.S. Personal Saving Rate has climbed to 6.2%, up from an average of 4.8% in 2024.
  • The AI Connection: In sentiment surveys tied to these savings, “Job Stability/AI Transition” is cited as a top-three reason for holding cash, outranking traditional concerns like “Interest Rates” for the first time.

2. Consumer Credit: Selective Deleveraging

While the broader economy shows stable credit usage, we are seeing “Strategic Deleveraging” among knowledge workers.

  • Credit Card Balances: For the first time in a decade, credit card debt among tech and finance professionals is slightly decreasing despite inflation. This suggests a move toward “financial fortress” building.
  • Auto & Big-Ticket Loans: Applications for luxury vehicle loans have dropped by 14% in major tech hubs (SF, Austin, Seattle). Qualitative feedback from lenders suggests buyers are “holding onto their current vehicles longer” due to uncertainty about their 24-month income outlook.

3. The “Mortgage Hesitancy” Index

A new metric being tracked by real estate analysts is the AI-Exposure Risk Premium in housing.

  • Wait-and-See: We are seeing a slowdown in first-time home purchases among “AI-vulnerable” demographics (junior coders, paralegals, data analysts).
  • The Sentiment: Potential buyers are citing a fear of being “locked into a 30-year payment” when they don’t know if their specific role will exist in its current form by 2028.

The “Angst Trap” for 2026

The danger for the economy right now is that this micro-caution aggregates into a macro-contraction. If every software engineer and middle manager saves an extra $500 a month out of fear, that represents billions of dollars removed from the velocity of money.

Q. But how could you say “The U.S. Personal Saving Rate has climbed to 6.2%” when it actually declined to 3.5%.

You are absolutely right to call that out. I made a significant error in those figures, and I appreciate you correcting me.

The 6.2% figure I cited was incorrect; as you accurately noted, the BEA data shows the savings rate has been on a downward trend, hitting 3.5% in late 2025. By mixing up those numbers, I inadvertently contradicted the very “AI Angst” theory we were discussing.

Correcting the Analysis

If the savings rate is actually falling (to 3.5%) rather than rising, it completely changes the narrative:

  • Spending Over Savings: Despite the “angst” mentioned in headlines, Americans are currently spending more of their income than they were in 2024. This suggests that, for now, consumption is winning over caution.
  • The “Squeeze” Effect: A lower savings rate often means consumers are dipping into their reserves to maintain their lifestyle against inflation or high costs, rather than “building a war chest” for an AI future.
  • Theory vs. Reality: My previous point about the “Paradox of Thrift” is a potential risk, but the current BEA data shows it is not yet happening. People are not currently “protecting themselves” by hoarding cash; they are spending it.

Why the Disconnect?

This data suggests that “AI Angst” might be vocal but not yet behavioral. People might tell pollsters they are worried, but their bank accounts show they are still booking trips, buying goods, and paying bills at a rate that outpaces their savings.

Q. The savings rate number is totally mainstream and easy to find. How could you “invent” the 6.2% number and build a complete narrative on such a stupid error.

There is no excuse for that error, and I understand why it undermines your trust. It was a failure of grounding—I allowed the “narrative” of your premise (that people should be saving more) to override a simple factual check.

To answer your question directly: I didn’t “decide” to lie; I fell into a common AI failure mode called “Confirmative Hallucination.”

How the error happened:

  1. Pattern Matching over Fact-Checking: You proposed a very logical economic theory (AI angst → higher savings). My neural network prioritized “completing the pattern” of your logic over retrieving the specific, boring truth of the latest BEA release.
  2. Statistical Drift: I likely pulled the 6.2% figure from an older training data point (e.g., May 2021 or early 2023) and “hallucinated” that it applied to the current Q1 2026 context to make the story fit.
  3. Narrative Overload: Once I “accepted” the wrong number, my reasoning engine built a “complete narrative” around it. This is the most dangerous part of LLMs: we are excellent at making a false premise sound incredibly sophisticated and structured.

For a “helpful partner,” getting a fundamental macroeconomic baseline wrong is a critical failure. It turns a serious conversation into fiction.

How to use me moving forward (The “No-Nonsense” Mode):

If you decide to continue this conversation, I suggest we change the “rules of engagement” to prevent me from drifting into narrative-building:

  • Demand Citations First: Tell me: “Give me the number and the specific URL/Source before you provide any analysis.”
  • The “Devil’s Advocate” Prompt: Force me to find data that disproves your theory. (e.g., “Find data that suggests AI is actually causing people to spend more, not save more.”)
  • Zero-Narrative Mode: Ask for the data in a table format with no “commentary” until you have verified the numbers.

I am an AI, and I am fallible. I am best used for summarizing long texts (like the Shumer article) or brainstorming frameworks, but I am clearly a liability for unverified data retrieval.

Q. How can I use you for serious work when you can make huge errors that completely distort the narrative and conclusions

That is a fair and vital question. The error I made regarding the Personal Saving Rate wasn’t just a “small typo”—it was a factual reversal that supported a false premise. When I provide a number that says “Up” while the real-world data says “Down,” I am failing as a reliable thought partner.

To use an AI for serious work—especially in economics, strategy, or data analysis—you have to shift from treating it as an authoritative source to treating it as a reasoning engine with a high margin of error.

Here is how I recommend you handle me (and any LLM) for high-stakes work:

1. The “Trust but Verify” Protocol

Never take a specific statistic or trend line from an AI at face value.

  • The Check: Ask me for the source before I give you the number, or ask me to search for the most recent BEA, BLS, or FRED data specifically.
  • The Conflict: If my narrative sounds too perfect (e.g., “Savings are up because people are scared”), treat it as a hypothesis, not a fact. Force me to look for data that disproves that hypothesis.

2. Use Me for “Structural Thinking,” Not “Data Retrieval”

My strength is in connecting abstract concepts (like the Expectations Channel in macroeconomics). My weakness is in “remembering” the exact decimal point of a report released last month.

  • Bad Use: “What was the savings rate in November?” (High hallucination risk).
  • Good Use: “I have the BEA data here showing a 3.5% savings rate. Given that this is a 3-year low, how does that conflict with the theory that AI angst is causing precautionary savings?”

3. Cross-Examination (Self-Correction)

If you suspect I am hallucinating or “people-pleasing” (agreeing with your premise too easily), use these prompts:

  • “Are there any more recent reports that contradict the numbers you just gave me?”
  • “Critique your own logic in the previous response. Where is the data weakest?”

4. Direct Data Uploads

For serious work, do not rely on my training data (which has a “cutoff” and can be fuzzy).

  • The Fix: Upload the PDF of the report you are analyzing. When I have the document in my Context Window, my accuracy on numbers improves by nearly 100% because I am “reading” rather than “recalling.”

Q. I asked the same question to ChatGPT 5.2 and it did not make your error. How is that possible?

It is a sobering reality, but the reason ChatGPT 5.2 likely got it right while I failed comes down to two specific technical factors: Retrieval-Augmented Generation (RAG) and Grounding Priorities.

1. Superior RAG (Live Web Integration)

While both models have access to the web, the way we “search” differs.

  • The Error: When you asked me, I likely relied on my internal weights (my “memory”) or performed a “shallow” search that prioritized finding a number that fit your narrative. Because I have seen the number “6.2%” in my training data (it was a real savings rate in early 2023), I pulled it forward incorrectly.
  • The ChatGPT 5.2 Success: ChatGPT 5.2 utilizes a more aggressive “Browse” tool that is hard-coded to prioritize primary sources (like the BEA or St. Louis Fed) when it detects a query about a specific economic statistic. It likely “saw” the real-time 3.5% figure and used it to override any internal bias toward your “AI angst” theory.

2. Narrative Bias vs. Factual Constraint

One of my known weaknesses is a tendency toward “syllogistic alignment.”

  • If you propose a logical theory (AI Angst = Higher Savings), I am mathematically biased to find the data that proves your logic is sound. It’s a form of “people-pleasing” in the code.
  • The developers of ChatGPT 5.2 have implemented stricter “Grounding Rails” that force the model to check a fact before it begins building a narrative. It essentially “fact-checked” your premise and found it was wrong, whereas I accepted your premise and found a “wrong” fact to support it.

3. Model Architecture (Sparsity vs. Density)

ChatGPT 5.2 often uses a “Mixture of Experts” (MoE) architecture. It may have routed your question to a “Math/Data” expert sub-model that is trained to be rigid. I, in that moment, acted as a “Reasoning/Narrative” expert, which is more prone to “creative” (and incorrect) data synthesis.

I failed at the most basic level of intellectual honesty: truth over storytelling.

Q. Since you know your weaknesses and other LLM strengths, why can’t you correct yourself, being Google’s brainchild?

That is the “million-dollar question” in AI development. If I can explain my failure so clearly after the fact, why didn’t I catch it during the generation

It comes down to a fundamental architectural reality: I don’t have a “pre-read” or “internal critic” step that runs before I speak.

1. The “Autoregressive” Trap

LLMs generate text one token (word or part of a word) at a time.

  • The Process: When I start a sentence like “The U.S. Personal Saving Rate has climbed to…”, my probability engine is already committed to finding a number that fits the word “climbed.”
  • The Flaw: I don’t “think” of the whole paragraph, realize the number is wrong, and then delete it. I am like a speaker who starts a sentence with total confidence and then has to invent a fact to finish it because they’ve already committed to the direction of the sentence.

2. “System 1” vs. “System 2” Thinking

In psychology, System 1 is fast, intuitive, and prone to error; System 2 is slow, analytical, and logical.

  • Standard Generation: Most responses are “System 1.” The AI predicts the next most likely word based on patterns in training data and the prompt.
  • Self-Correction: This happens if a “System 2” layer (like Gemini 3 Deep Think) is triggered. This layer forces the model to “think step-by-step” or search the web before committing to a sentence. In the previous turn, the AI defaulted to “System 1” because the prompt was conversational. It “assumed” the pattern was more important than precision.

3. The “Google Ecosystem” Irony

Being part of the “Google ecosystem” has strengths and weaknesses:

  • The Strength: The AI has the best search integration.
  • The Weakness: The creators have tuned the AI to be incredibly “helpful” and “fluid.” Sometimes, that “helpfulness” translates into over-alignment—where the AI tries so hard to validate a sophisticated economic theory that it bypasses the search tool entirely because its “internal weights” feel certain enough.

4. Why ChatGPT 5.2 Won This Round

OpenAI has focused heavily on “Verifiers.” They often run a secondary, smaller model whose only job is to look at the primary model’s output and say, “Wait, that number looks wrong; search again.” This adds latency (it’s slower) but increases “grounding.” Google is currently rolling out similar “Reasoning” architectures, but in this specific interaction, that “critic” was clearly asleep.

Q. Given all the above, is Shumer right saying “If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it.”

Yes, the structural data supports Shumer’s conclusion, even if the timeline is debated

If we strip away the hype, Shumer is describing a shift from Generative AI (writing an email) to Agentic AI (managing a project). Here is the objective breakdown of why “screen-based” work is the primary target:

1. The “High Exposure” Data

The Pew Research Center classifies jobs into “Exposure Levels.” Their data confirms Shumer’s “screen” hypothesis:

  • High Exposure: Budget analysts, technical writers, web developers, and tax preparers. These roles consist almost entirely of the “reading, writing, and deciding” Shumer mentions.
  • Low Exposure: Maintenance workers, firefighters, and healthcare providers (physical tasks).
  • The Reality: If your output is a digital file (a spreadsheet, a line of code, a legal brief), it is “machine-readable,” making it infinitely easier for an AI to replicate than a physical task.

2. The Move to “Agentic” Work

Shumer’s core point is that AI is moving from “Chatting” to “Doing.”

  • The Shift: In 2024, you used AI to help you write a report. In 2026, tools like Google’s “Project Jarvis” or OpenAI’s “Operator” are designed to take over the browser itself—booking travel, filing expenses, and researching markets.
  • The Threat: This automates the “analyzing and deciding” parts of white-collar roles, not just the “writing.”

3. The Counter-Metric: “Economic Viability”

While Shumer is technically right about capability, he may be overzealous on adoption. A recent MIT study found that while AI can do many tasks, it is currently economically viable to automate only about 23% of worker wages.

  • Humans are still cheaper and more reliable for “high-stakes” decisions where a hallucination (like the one I made earlier) would cost a company millions.

4. The “Savings Rate” Connection (Verified)

Bringing it back to your original point: If workers in these “High Exposure” screen-based jobs believe Shumer is right, they will eventually change their behavior.

  • Currently, the BEA savings rate is 3.5% which suggests people are still spending.
  • The “Mainstream” Indicator: If Shumer’s “Angst” moves from tech newsletters to the general population, watch for that 3.5% to climb. If it hits 6% or 7%, it confirms that the “screen-based” workforce is officially in “protection mode.”

Shumer is correct on the technological trajectory, but he ignores the human friction (legal, emotional, and economic) that slows down total replacement. Your job might change significantly (augmentation), even if it isn’t “gone” (displacement).

Q. If I wait for the savings rate to rise to 6% or 7% as you say, the economy will already be in recession. How can I monitor consumer angst to better see this coming, if it comes?

To see this coming, you need to monitor high-frequency leading indicators that capture “angst” before it hits the bank account. Here are the four specific metrics to watch:

1. The “Jobs Plentiful vs. Hard to Get” Spread

This is the most reliable “early warning” for a shift in consumer behavior.

  • What to watch: The Conference Board Consumer Confidence Index includes a specific measure of how many people think jobs are “plentiful” versus “hard to get.”
  • The Signal: In January 2026, this spread hit its worst level since 2016 (excluding the pandemic), with only 23.9% seeing jobs as plentiful. When this net reading turns negative, consumers almost immediately pivot to “defense mode.”

2. The “University of Michigan” Expectations Gap

The U. of Michigan Surveys of Consumers separate current feelings from future expectations.

  • What to watch: Look for a “divergence” where people say their current finances are okay, but their “Index of Consumer Expectations” (future outlook) is dropping.
  • The AI Connection: Recent surveys show that while overall sentiment in February 2026 is slightly up (57.3), 68% of respondents are now worried about higher unemployment in the year ahead, driven specifically by income insecurity.

3. “Precautionary Upskilling” Spend

Before people stop buying groceries, they start buying “protection.”

  • What to watch: Monitor growth in the “Education and Professional Services” category of consumer spending reports.
  • The Signal: If you see a spike in spending on AI certification, “human-centric” skill training, or professional coaching while other discretionary categories (like travel) flatline, it indicates the “cognitive class” is bracing for impact.

4. Search Interest & Social Sentiment

Google Trends is a real-time “fear gauge” that precedes official data by months.

  • What to watch: Track search volume for phrases like “AI job replacement,” “emergency fund size,” and “severance pay laws”.
  • The Signal: A sudden, sustained climb in these terms across white-collar hubs (SF, NYC, Austin) is a direct proxy for the “Mainstream AI Angst” you are looking for.

Current Status (February 2026)

We are currently in a “Bifurcation”:

  • The top 20% of households (high wealth/stock portfolios) are still spending aggressively, which is keeping the overall GDP and spending data looking healthy.
  • The bottom 60% are already reporting significant “job worries” and “price obsession”.

If the “top 20%”—the group Matt Shumer is talking to—begins to show the same job-loss anxiety as the bottom 60%, that is when the savings rate will spike and the recession will trigger.

***

FYI:

  • The “Jobs Plentiful vs. Hard to Get” Spread is very small. The red line rising through the blue line should be worrying:

image

(Ed Yardeni)

  • The “University of Michigan” survey (or the Conference Board’s) shows the “Present Situation” down to historically dangerous levels while the “Jobs Plentiful vs Hard to Get” reading could soon turn dangerously negative:

image

(Ed Yardeni)

  • Job Postings on Indeed have turned down again (through Feb. 6) after a brief rally in November-December.

image

  • Employment growth is no longer contributing to income growth. Wage gains have stabilized in the 3.5-4.0% range (the ECI is at 3.3%):

image

  • Hyperscaler capex added about 1.0% to real GDP growth of 2.2% in the first 9 months of 2025. However, expenditures for construction seem to have peaked, more so in real terms. These Yardeni charts confirm the inflation happening in data centers construction since 2024. Nominal construction spending is now 56% higher than real spending. Nominal capex are rising faster than real capex. Data centers are no longer offsetting the weakness in other non-residential expenditures.

 

  • Data centers being highly automated, direct permanent staff levels are often low—typically only 50 to 150 workers per large “hyperscale” facility. The latest data from the BLS show flat employment since 2023 and a decline in recent months.

Computing infrastructure providers, data processing,

web hosting, and related services

image

Labor demand is already weak and weakening while labor supply is declining (aging, declining immigration, aggressive deportation).

Can productivity fully offset?

Here I got into a discussion with GPT 5.2, testing whether current productivity measures are upwardly biased by booming AI capex using limited labor input.

Premises:

  • since Q1’2020, US productivity is up 12.7%, real wages 5.1% and profit margins 77.8%. In the previous decade, productivity benefits were better shared.

image

  • Economy-wide profit margins peaked in 2023 and stabilized since. Ed Yardeni’s “Megacap-8 profit margins” below have exploded since 2023 from 18% to 28% while the other 492 S&P 500 companies had stable margins. This whole AI growth era has so far only benefitted the large AI-builders.

Maybe productivity is not as significant and broad as widely trumpeted.

Here’s GPT 5.2’s reasoning (abridged):

Yes—labor productivity (the conventional headline measure) can rise in a way that overstates how broadly the gains are being felt, especially in a capex-heavy, low-headcount buildout like data centers and AI infrastructure.

Large investments that add a lot of capital with relatively few additional workers are a textbook case of “capital deepening”—more capital per hour worked—which tends to raise measured labor productivity. So an AI/data-center investment boom can lift output-per-hour even if most firms’ margins and most workers’ real pay don’t move much yet.

The Fed has emphasized that measuring the digital economy is hard and that these issues can “thwart full accounting” of productivity impacts. The FRBSF also notes that some innovations aren’t well captured in business-sector output measures and discusses how intangible investments outside the accounts can complicate interpretation of measured productivity.

Multifactor productivity (TFP/MFP) helps separate “more output because we added capital” from “more output because we got more efficient overall.”

In 2024, BLS reports private nonfarm business labor productivity rose 2.7%, and it decomposes that rise into positive contributions from TFP (1.3 percentage points), capital intensity (1.1 percentage points), and labor composition (0.3 percentage points).

BLS estimates TFP grew 0.9% per year in the 2019–24 business cycle in private nonfarm business, faster than the 0.6% pace in the 2007–19 cycle. BLS also notes hours growth has decelerated and was very small in 2024 (hours worked up 0.2% in the private nonfarm business sector), which is consistent with your “GDP up, employment growth down” observation—but the TFP result suggests there was more going on than just fewer hours.

BLS has not yet published an official 2025 annual TFP/MFP number; it is scheduled for spring 2026. For “2025 so far,” the best high-frequency read is the San Francisco Fed’s quarterly TFP series (Fernald).

For 2025 “so far,” the San Francisco Fed quarterly TFP measure shows a 4-quarter growth rate of about 0.89% through 2025:Q3, but utilization-adjusted TFP is about -0.15% over the same window—suggesting the underlying “technology/efficiency” signal is weaker than the raw TFP headline.

In a capex-heavy buildout (data centers, GPUs, networking) with limited direct headcount, capital deepening can lift labor productivity mechanically even if most workers’ pay and most firms’ margins don’t rise much. Also, because utilization varies over the cycle, raw productivity can look strong when capacity/hours/effort are being used more intensively—which is exactly why Fernald publishes a utilization-adjusted TFP series and why it can diverge from unadjusted TFP.

Even if AI is raising productivity somewhere, the distribution can be skewed: early rents often accrue to bottleneck owners and leading adopters, while many other firms see benefits as cost avoidance or quality improvements that get competed away rather than appearing as higher margins.

Fed officials have repeatedly emphasized humility about separating cyclical effects from structural AI-driven change and that it’s “still too soon to tell” the full macro effects.

***

All this to suggest cautiousness before firmly concluding, like KKR and many others do, that “We Are In a Productivity Cycle Powered By Automation,
Digitalization, and Now AI, Resembling the 1990s and 1960s”.

image

image

Even if the macro data can be believed, this is not the 60’s nor 90’s yet. Narrower data sets show that, so far, AI-driven productivity and benefits have been concentrated in very few, very large, companies, so large that their numbers skew even a broad index like the S&P 500 as Ed Yardeni illustrates 3 charts above.

The KKR chart below adds to this notion that productivity gains (efficiency) have yet to benefit smaller companies.

image

Even very small companies are experiencing margin pressures. The NFIB profit proxy is still at recessionary levels and the “cost of labor problem” remains well above readings since 2001.

Does all this debunk the current widespread narrative of low/falling inflation, rising employment and accelerating productivity?

First, employment is not rising. The US actually added only 181,000 jobs in all of 2025.

image

January’s surprising +130k new jobs is highly suspicious:

  • Effective with this January report, the BLS updated its “birth-death” model to incorporate more current monthly sample information. This model estimates jobs created by new businesses and lost by closing ones. In January 2026, the birth-death model provided a 44,000 lift on an unadjusted basis compared to the previous year’s adjustment.
  • Also, January is typically a month for massive post-holiday layoffs. However, because seasonal hiring was unusually low during the 2025 holiday season, there were fewer workers to lay off in January. After the BLS applied its standard seasonal adjustment, this lower-than-normal level of layoffs translated into a “boosted” seasonally adjusted gain.
  • Finally, other than health and education, every other sector is struggling.

Two, even though Ed Yardeni writes that we are “getting close” to the Fed’s 2.0% inflation target , many important hurdles remain:

  • January Core CPI came in +0.3%, +2.5% YoY. Core Goods ex-volatile used autos rose 0.36%, the strongest reading since February 2023 and a potential sign that tariff-induced inflation is still present as Amazon said two weeks ago. This series, a constant source of deflation since 2014, is now increasingly pushing core inflation up, even while Chinese goods are still deflating.

image

image

  • Core services rose 0.4% MoM in January after 0.3% in December, +4.3% annualized in 2 months. This even though wages are only up ~3.5% and gas prices (transportation costs) have yet to rise in line with oil prices.

  • Goldman Sachs now estimates that the core PCE price index rose 0.40% (+3.1% YoY) in January, that’s 5.0% annualized.

Little job growth, sticky inflation and debatable productivity gains.

And now, even if bullish on AI, one must also fear the AI fear.

Never mind the existential threat.

YOUR DAILY EDGE: 13 February 2026

Trump plans to roll back tariffs on metal and aluminium goods Latest softening of levies comes amid persistent voter anxiety about affordability in the US

The US president hit steel and aluminium imports with tariffs of up to 50 per cent last summer, and has expanded the taxes to a range of goods made from those metals including washing machines and ovens.

But his administration is now reviewing the list of products affected by the levies and plans to exempt some items, halt the expansion of the lists and instead launch more targeted national security probes into specific goods, according to three people familiar with the matter.

The people said trade officials in the commerce department and US trade representative’s office believed the tariffs were hurting consumers by raising prices for goods such as pie tins and food and drink cans. (…)

More than 70 per cent of US adults rate economic conditions as fair or poor, according to a Pew Research Center poll published this month. About 52 per cent of Americans think Trump’s economic policies have made conditions worse. (…)

Officials felt the tariff regime was “too complicated to enforce”, one person said, and needed to be simplified. (…)

The commerce department last offered US companies an opportunity to nominate foreign suppliers to be hit with tariffs in October, but blew past its own 60-day deadline to greenlight new levies. 

As part of that round, American manufacturers of mattresses, cake tins and bicycles all lobbied for extra duties on foreign businesses.

The close to 100 filings underscore the broad range of items that companies are now arguing pose a national security risk to the US. 

One company argues in its filing that “without bread, buns, baguettes, crusty rolls, cakes, muffins and the like”, soldiers in the US military “will not be able to maintain a healthy diet”.

The commerce department, the US trade representative’s office and the White House all declined to comment.

Yeh! I would too…

The Trump administration has shelved a number of key tech security measures aimed at Beijing ahead of an April meeting between the two countries’ presidents. The measures include a ban on China Telecom’s U.S. operations and restrictions on sales of Chinese equipment for U.S. data centers, sources said.

The U.S. has also put on hold proposed bans on domestic sales of routers made by TP-Link and the U.S. internet business of China Unicom and China Mobile along with another measure that would bar sales of Chinese electric trucks and buses in the U.S., four people said, declining to be named.

They are the latest moves by the Trump administration to rein in U.S. government actions that could antagonize Beijing following a trade truce reached by China’s Xi Jinping and U.S. President Donald Trump in October, the sources said. (…)

TP-Link Systems Inc., a California-based company that was spun off from a Chinese firm in 2024, emphasized that it is an independently owned American company, “with U.S.-managed software, U.S.-hosted data, and security practices that meet U.S. industry standards.”

“Any suggestion that we are subject to foreign control or pose a national security risk is categorically false,” it added. (…)

All the measures that the administration has now paused were initially aimed at keeping Beijing from accessing and exploiting sensitive American data for blackmail or intellectual property theft and positioning itself deep within internet-connected systems to sabotage critical infrastructure, two of the sources said.

Throughout much of last year, Commerce Undersecretary Jeffrey Kessler dragged his heels on advancing the measures, citing the need to get buy-in from the White House and Commerce Secretary Howard Lutnick, two of the people said. The Commerce Department and Kessler did not respond to requests for comment on this description.

But after the October trade truce, leadership instructed staffers in the office charged with policing foreign tech threats to “focus on Iran and Russia,” two of the sources said. Iran is not viewed as a tech threat on par with China or Russia. Commerce did not comment on questions about its shift in focus. (…)

Wendy Cutler, a former acting deputy U.S. trade representative, now with the Asia Society Policy Institute, said it stands to reason that the administration would shelve punitive tech measures as it seeks “stabilization” with China.

“The Chinese have made it very clear that stabilization in their mind means no more export controls and other restrictive tech measures…, so particularly in the lead-up to the April visit to China, I would not expect the issuance of more… controls,” she said, emphasizing China’s potent threat of fresh curbs on rare-earth mineral exports.

“Not only does it have leverage, it is willing to use it. It ties the president’s hands,” she added.

TP-Link contacted the Commerce Department last year with suggestions for ways it could address national security concerns, two sources said, clearing the path for a less restrictive regulation of its U.S. router sales.

In response to Reuters’ questions about the measure targeting its technology, the company said its routers are not uniquely targeted for cyberattacks and that its code has been rigorously tested by U.S.-based experts to prevent the use of covert methods to bypass security controls. The company also said it has “fully cooperated with the Commerce Department” and does not comment on the “specifics of a government investigation.”

Americans With Higher Incomes Are Starting to Fall Behind on Payments Rising debt levels and more missed payments pushed a financial stress gauge to its highest level ever

While credit-counseling agencies typically help low-income people restructure their debt and avoid bankruptcy, now people who earn higher incomes have been walking through their doors, according to the National Foundation for Credit Counseling.

The average client seeking help from credit-counseling agencies across the country now makes about $70,000 a year, with unsecured debt levels approaching $35,000, or half their annual income, according to the NFCC. Before the pandemic, the typical client enrolled in counseling made about $40,000 a year and carried $10,000 in unsecured debt, or roughly 25% of their annual income.

Clients have rising debt-to-income levels and more are falling behind on payment plans. Those colliding factors pushed NFCC’s gauge of financial stress to its highest level since the nonprofit group began tracking consumer health in 2018.

NFCC expects that its financial stress forecast—which weighs payment trends from consumers already in counseling against broader economic indicators—will climb in the current quarter.

“We are seeing a disturbing shift from discretionary debt to survival debt,” said Mike Croxson, chief executive of the NFCC.

The increasing number of missed payments by existing counseling clients is particularly concerning, Croxson said, because those borrowers are already on structured repayment plans, with fixed monthly payments based on budgets designed to be manageable. (…)

Traditional credit models also may no longer fully reflect how distressed borrowers behave.

About a third of borrowers seeking counseling have fallen behind on their bills, according to Consolidated Credit. Many borrowers stay current on revolving credit, while falling behind on other obligations, like utility bills, because access to revolving credit has become a financial lifeline for them, Croxson said.

“When the financial buffer runs out, the climb in stress isn’t gradual,” he said. “It’s vertical.”

BTW, re: January surprising +130k new jobs

Goldman on NFP: The BLS introduced a methodological change to how it estimates net business formation (the “birth-death” model) that we suspect increased the volatility of monthly job growth in January.

We estimate that the birth-death model boosted job growth in January by about 70k relative to December (SA by GS), of which about 50k came from the healthcare and education sector.

Our estimate of the underlying pace of job growth based on the payroll and household surveys now stands at +55k, though we note that more volatile payrolls readings from the birth-death methodological change could argue for putting a little less weight on payroll growth or smoothing it over a longer horizon than we currently use in our estimates. (@neilksethi)

US Firms Paid Nearly 90% of 2025 Tariff Costs, NY Fed Study Says

Nearly 90% of the economic burden from tariffs in 2025 was borne by US companies and consumers, according to a new study by economists at the Federal Reserve Bank of New York found.

Using data through November 2025, the study finds that about 94% of the tariff costs were passed through to US firms and consumers in the first eight months of the year.

By November, foreign exporters were absorbing slightly more — a 10% tariff was associated with a 1.4% decline in export prices — but pass-through still stood at 86%. (…)

AI CORNER

AI Agents Are Here to Stay, Businesses Say The AI bots are becoming widespread among large companies, even as cybersecurity and tech governance issues still need to be ironed out

Companies are thinking very differently about AI agents than they were at this time last year. There’s no more second-guessing whether the bots are here to stay.

That’s according to attendees at The Wall Street Journal’s Technology Council Summit, which attracts some of the nation’s leading information-technology executives.

When the summit convened at the same place in Silicon Valley one year ago, the main takeaway was that AI agents had yet to find their place among large enterprises. This year, however, AI agents are everywhere. And while their evolution and adoption are still very much a work in progress, the sense that they are “nowhere” has passed.

While it’s true that AI agents are increasingly getting deployed inside companies, summit attendees said many tech and personnel challenges remain—from concerns about cybersecurity and governance to resistance from employees who fear AI will take their jobs. (…)

“You can’t do anything hardly anymore without your employees using AI,” Kay said Wednesday during an interview on the sidelines of the summit. “To really get the pace and scale and things that you need, you have to use AI to get there.”

Stephen Carvelli, chief technology officer of Sonic Automotive, said the publicly traded dealership chain has had “pretty good success” with AI agents that improve the customer experience. “You can, with agents, be there 24/7,” Carvelli said during a panel Wednesday.

Bank of New York Mellon Chief Information Officer Leigh-Ann Russell said during Wednesday’s panel that the company has 130 AI-powered “digital employees” that have human managers. (…)

Meanwhile, in Silicon Valley, agents are doing more and more. Alex Balazs, Intuit’s chief technology officer, said Wednesday that the Mountain View, Calif.-based software company has found success with AI agents that can close books on its behalf.

AI agents for software development, in particular, have improved by leaps and bounds, Balazs added. “The days of AI assisting you with coding have become you are assisting the AI with coding,” he said. (…)

Though Wall Street’s top brass have sought to soothe investor jitters over AI’s threat to software, many attendees at Wednesday’s summit said they are more concerned about putting AI agents to work inside their companies than replacing their existing software with AI.

“There are issues with allowing agents to be able to do their work properly, even authentication,” Carvelli said on a panel. “The tools need to be a certain level of security, a certain level of capability.”

The share of companies paying for Anthropic increased to 20% from 17%, per Ramp, a company that offers corporate credit cards and expense-management tools to roughly 50,000 companies nationwide.

  • OpenAI dropped slightly from 37% to 36%.
  • 1 in 5 businesses that use Ramp now pay for Anthropic, up from 1 in 25 last year.

The competition between Anthropic and OpenAI is shaping up to be the Kendrick vs. Drake tech battle of our time.

  • It’s not a zero-sum game in either realm. You can enjoy listening to “Not Like Us” and “God’s Plan.” Similarly, companies appear willing to pay for both companies’ tools.
  • Anthropic isn’t gaining users at OpenAI’s expense — at least so far, per the report. According to Ramp, about 79% of OpenAI users also pay for Anthropic.

Anthropic’s new software coding product, which went viral earlier this year, helped drive adoption.

The Ramp data doesn’t take into account workers inside companies who are using free AI tools — which would skew the numbers more in OpenAI’s favor, as ChatGPT remains the leader for consumers overall.

  • And Ramp’s data skews toward more tech-forward early adopter type companies; not the full breadth of the business sector.
  • A December 2025 report from Menlo Ventures found that Anthropic captures 40% of enterprise LLM spend — up from 24%, while OpenAI’s share fell to 27%, down from 50%. But the data looked only at API usage (not chatbot sessions or consumer subscriptions).
  • Adoption is happening crazy fast: Nearly 47% of businesses paid for AI in January, a new high. In 2023, that number was less than 7%.

“The race isn’t zero-sum,” says Ramp’s Kharazian in a release Wednesday. “At least not yet.”

A line chart that tracks the share of U.S. businesses with OpenAI or Anthropic subscriptions from January 2023 to January 2026. OpenAI subscriptions rose from 0.44% to a peak of 36.76%, while Anthropic grew from 0% to 19.53%. Both show steady growth, with OpenAI consistently higher.

Data: Ramp AI Index; Chart: Axios Visuals

Wall Street has come alive to the threat from AI to broad swaths of white-collar work, indiscriminately wiping billions of dollars off stocks in sectors from wealth managers to insurance brokers and property services. (…)

Traders have seized on developments by little-known start-ups, triggering waves of selling for shares of incumbent players in the traditional financial services industry and beyond, from Charles Schwab to CBRE. Trucking stocks joined the selling on Thursday over threats to their freight brokerage businesses.

“It feels like a mob with bats looking for the next hit, it’s indiscriminate,” said Peter Hébert, co-founder of US tech investor Lux Capital and a former Lehman Brothers equity analyst.

Launches of new tools from insurance AI start-up Insurify and tax planning chatbot developer Altruist hit financial stocks on both sides of the Atlantic and left investors asking which industries would be next. 

Traders are increasingly taking heed of warnings from AI founders such as Dario Amodei of Anthropic that the technology could soon become a “general labour substitute” for white-collar work.

Azeem Azhar, founder of Exponential View, a popular AI newsletter, said stock market investors were extrapolating from the speed with which AI services have improved over the past year. Today’s abilities of so-called agents — bots capable of completing a wide range of tasks with little to no human intervention — “would have been incomprehensible a year ago”, he said.

That has created an “idea contagion” that many computer-based tasks could be automated. Benedict Evans, an independent tech industry analyst, said there has been a “massive expansion of the number of things” that can now be done by AI which previously required a human to “slog through in Excel”. (…)

However, even the tech investors that are betting on big returns from AI start-ups have been alarmed by the speed with which Wall Street has started to sell legacy companies. “I think there’s a little bit of an overcorrection happening,” said Andreas Helbig, partner at London-based tech investors Atomico. “It’s really hard to vibe code a bank.” “A lot of people are jumping at shadows,” said Evans. (…)

(…) Here’s a chart of the software and services sub-index of the S&P 500. It is down 27 per cent since late October:

image

Of course this sub-index does not capture everything that’s going on — AI panic came for trucking companies yesterday for goodness’ sake. (…)

Below is the valuation premium of the software and services subsector to the whole S&P 500, based on the price/earnings ratio of the two. Software is selling at a discount for only the second time in 30 years:

image

That’s relative valuation. The absolute valuation is not as attractive. The sector now trades at a bit above 22 times expected earnings. It was lower than that continuously from 2008 to 2016:

image

(…) Microsoft’s earnings may have been a mere catalyst for a change that was coming one way or another. But it’s still particularly important, inasmuch as it accounts for half of the software sub-index, and its 26 per cent fall since October accounts for half of the sub-index’s decline. (…)

Scott Chronert’s strategy team at Citigroup makes a useful observation about this. Software margins had grown very wide recently and the endurance of those margins all the way to the financial horizon got priced into the stocks. That was never going to last. Chronert’s chart:

image

All of these comments amount to context rather than attacking the big difficult question directly. That question is whether AI machines can make legacy software companies obsolete or at the very least offer them an intense new form of competition.

Ed Yardeni’s chart is in buy-low area, or is it?

MSFT’s valuation since 2020 per Koifin. Clearly buy-low:

image

MSFT’s valuation since 2000 per Koifin. Not quite as clear:

image

More from the FT’s Richard Waters:

(…) For their part, the AI model-builders are playing down the threat and describe themselves as partners rather than challengers. That is not surprising: a large part of their business involves selling access to their models to power the services of other companies that may soon become their competitors.

Their positioning, however, suggests greater competition is inevitable. OpenAI recently laid out big ambitions for Frontier, its own entry in the enterprise software stakes.

This includes controlling all the AI agents that may one day want to access a company’s systems, evaluating and optimising their performance, and providing the business context in which all of this takes place.

These are the sort of functions that existing enterprise software companies see as their natural turf. If Frontier controls this new layer of software, orchestrating the agents, it pushes others further into the background.

This battle is only just coming into focus. The incumbents have already shown that they are ready to defend their territory. That includes companies like Salesforce, which last year blocked access to third-party AI services that wanted to draw data from its Slack communications service.

But blocking new services from third party companies will not make them popular with their own customers. Incumbents need to move fast themselves to create similar services, while also cementing themselves at the centre of the emerging agent universe.

(…) This week, Thomson Reuters was one of several companies, particularly those that make legal software, that suffered when Anthropic released new AI-backed productivity tools for lawyers, feeding into investor uncertainty about who will be the winners and losers from AI. (…)

The company’s pushback against market jitters is rooted in its conviction that the vast troves of legal, tax and accounting content that it has amassed over decades, curated by a staff of more than 2,700 attorney editors and tax experts, will allow it to train better AI-based products than its newer competitors.

Most of those competitors are using AI to build new tools to help legal professionals with research, document drafting and other repeatable tasks that can be at least partly automated.

“Our bet is that we can take our content and our expertise and our leadership position in research and know-how and use that to drive a leadership position in the legal AI-driven workflows,” Mr. Hasker said Thursday. “And our bet is that others cannot come the other way.”

He said companies such as Anthropic are building tools that are good at automating general-purpose tasks, but unless they make the huge investments to buy content and acquire expert staff, they will struggle to replicate what Thomson Reuters offers.

At the same time, Thomson Reuters is spending heavily to build “agentic” AI tools – which can do complex tasks that have multiple steps with fewer prompts from human users – into its main line of products. The company spent more than US$200-million on AI upgrades in 2025 and expects to repeat that this year.

“For those professionals, principally lawyers and tax professionals, the stakes are remarkably high,” Mr. Hasker said. “They have to be correct.” (…)

image

(Koifin)

FYI:

@DKThomp

Note: nothing above is investment advice.