Is “Meet Or Beat” Pricing Anti-Consumer?

The four biggest British supermarket chains all offer some form of price-match guarantee, promising that their customers could not save any money by shopping elsewhere.
--from The Economist

Any savvy, empowered consumer will have a predictable response to claims like "You won't save money by shopping elsewhere." We laugh inwardly and disbelieve. A claim like this works out great for the claimer only if no one actually tests it.

So when a claim like this is made by four separate grocery store chains in the same region at the same time, it's a "what you talkin' 'bout, Willis" moment. A real credulity-stretcher.

Obviously, and by definition, not every store can be the cheapest. But every store benefits by fooling its customers into not checking!

Which takes us to the counterintuitive economics of "meet or beat" pricing.

When a store in your community adopts meet or beat pricing, it sounds at first like a great idea. Theoretically, you always know that you'll get the best price by going there. So, you can just go there.

Wrong! What real-world examples teach us is that meet or beat pricing is inflationary--it actually makes prices go up. To see why, though, you have to ask the second-order question: And then what?

This is exactly what all the other competing stores in town are asking themselves: How can I compete when I know this meet or beat store will always match or beat my price?

It then becomes a game theory situation. The other stores can try to cut prices, but they'll only hurt themselves. Worse, the store adopting meet or beat pricing is usually that market's low-cost provider anyway.

You don't compete on price unless you can compete on price. So, the other stores must compete some other way. Thus the correct game theory response--if you're not the store with the lowest cost structure--is to raise prices. It sounds counterintuitive, but this is actually what happens in any market where one player embraces a meet or beat strategy.

In this case, the market creates its own pricing umbrella, and all the retailers win. At your expense. Your prices go up, even as you think you're getting the best deal in town.
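To make the game theory concrete, here's a toy model--my own sketch, with entirely made-up numbers, not anything from the article or The Economist. It shows the higher-cost rival's dilemma: cutting price wins nothing, because the meet-or-beat store matches instantly, so raising price toward what its loyal shoppers will tolerate is the better move.

```python
# Toy model of a higher-cost rival facing a meet-or-beat competitor.
# All numbers (costs, shares, reservation price) are hypothetical.

def rival_profit(price, cost=1.20, loyal_share=0.5, reservation=2.00):
    """Profit for the higher-cost rival at a given price.

    The meet-or-beat store matches any price cut, so cutting price
    attracts no new shoppers: the rival sells only to its loyal
    customers, and only while its price stays at or below what
    they're willing to pay (their reservation price).
    """
    demand = loyal_share if price <= reservation else 0.0
    return demand * (price - cost)

# Option 1 -- price war: undercut to just above cost. Margin nearly vanishes.
undercut = rival_profit(1.30)   # 0.5 * (1.30 - 1.20) = 0.05

# Option 2 -- pricing umbrella: raise price to what loyal shoppers tolerate.
umbrella = rival_profit(2.00)   # 0.5 * (2.00 - 1.20) = 0.40

print(undercut, umbrella)  # raising prices is the more profitable response
```

Under these (made-up) assumptions, raising prices earns the rival eight times what a price war would--and once rivals stop undercutting, the meet-or-beat store's "matched" prices can drift upward too.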


A 30 Day Experiment with Mini Habits

Today's post returns to the elegant ideas of Stephen Guise, author of Mini Habits and How to Be an Imperfectionist. I wanted to share with readers the results of a month-long experiment I ran to test out the usefulness of mini habits, a cornerstone of Guise's unusually creative approach to personal development.

A quick word on what mini habits actually are--and the best way to describe them is by explaining what they're not. They are not aggressive resolutions like "READ 200 PAGES EVERY DAY!" or "RUN 10 MILES EVERY DAY!!!" or whatever. Those are exactly the kinds of unsustainable goals that don't become habits. They're too hard. They drain your willpower. And you'll resist them and eventually quit on them.

A mini habit operates under completely different incentives. The idea is to make the habit so small, so easy, that you have no resistance whatsoever to doing it. Guise gives his own amusing example of building a surprisingly robust workout habit based on the mini habit of doing one pushup a day. If he does his one pushup, he "worked out."

You might snicker at this at first, but once you think through the psychology of it, you'll realize the sheer elegance and intuitiveness of such a laughably easy goal.

First, put yourself in the place of someone who was never able to make fitness a regular habit, as Stephen Guise was for many years. A "one pushup" mini habit was a device that got him to start doing something. What typically stops us from doing things (and produces procrastination as well as frustration with ourselves) is our resistance to getting started.

This is particularly true if the goal has some enormity to it, like READ 200 PAGES TODAY! Unfortunately, the subtext to a goal like this is: AND IF YOU READ ONLY 199 PAGES YOU ARE A COMPLETE LOSER!

In stark contrast, the mini-goal mechanism lowers the entry fee. The goal is something easy--hilariously easy--to do. And because it gets you started, you sidestep procrastination and inner resistance.

And, all along you have the option to continue or to quit. You can do your one pushup and stop. Or you can do a few more, if you want. Or a lot more. It doesn't matter! You've met your goal already so it's all gravy. It takes away all the pressure.

This totally altered Guise's mental construct of what "working out" meant, and it changed his image of what it meant to build an exercise habit. Moreover, setting the bar so low annihilated his exercise perfectionism, which had been a substantial obstacle between him and fitness.

Contrast this with a person who does 80 pushups but feels like a failure because he "failed" in his goal of doing 100. As somebody who tried (a few times) to follow the 100 pushups workout (and for whatever reason I never was able to get much above 70 pushups in a row), this resonates with me. I would do 74 pushups yet feel like a putz because I couldn't do more. Sad! It just goes to show how rubber our yardsticks can be when we measure ourselves.

Note also: It shouldn't be a surprise that under this kind of self-imposed negative reinforcement, I kind of... slipped out of the habit of doing pushups. Which takes us to the key psychological takeaway here: it's impossible to build a healthy and sustainable habit out of something that's a source of failure and frustration.

Okay. Clearly, there are many reasons why the mini habit concept makes intuitive sense. But I still wanted to test it for myself. And what I didn't know was that there was yet another gigantic advantage to this seemingly innocuous mental hack. I'll get to it in just a minute.

So, I picked two imperfectionist-friendly mini habits and trialed them both for 30 days, just to see what would happen. My mini habits were:

1) Write for 20 minutes
2) Read 25 pages of any book

The thing is, lately I hadn't been reading or writing as much as I would like. Entire days would go by where I wouldn't write at all, and I'd often go a day or two without reading anything long-form--books that really teach you and change your thinking in ways short-form reading cannot.

I wasn't satisfied with this. At all. There's so much to learn, so many insights to gain out there... and yet I seemed to be passively letting myself waste time consuming useless information like the news, or people's political rants on Facebook.

So, I set goals that, for me, were the equivalent of "do one pushup." After I reached them, I'd permit myself the freedom to stop, yet grant myself the success of having met my goal and taken a small step forward.

I'm guessing readers conversant in the psychology of goal setting already know where this is going.

Had I merely met the laughably easy minimums for each day, over 30 days I would have read approximately 750 pages (30 days x 25 pages) and written for about 600 minutes, or 10 hours (30 days x 20 minutes). This is nothing to be ashamed of: it's actually quite a decent amount of reading and writing.

But what actually happened was I read a grand total of 1,195 pages and wrote for about 1,200 minutes. I exceeded the reading mini habit by some 60% and crushed the writing goal by 100%. More importantly, it felt easy. Weirdly easy. A lot easier than I expected.
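For the record, here's the arithmetic above checked in a few lines (the numbers are the ones reported in this post):

```python
# Baseline (mini-habit minimums) versus actual results over the 30 days.
days = 30
min_pages, min_minutes = 25, 20            # the two daily minimums
actual_pages, actual_minutes = 1195, 1200  # totals reported after 30 days

baseline_pages = days * min_pages       # 750 pages
baseline_minutes = days * min_minutes   # 600 minutes (10 hours)

pages_over = (actual_pages - baseline_pages) / baseline_pages        # ~0.59
minutes_over = (actual_minutes - baseline_minutes) / baseline_minutes  # 1.0

print(f"{pages_over:.0%} over the reading minimum")    # "some 60%"
print(f"{minutes_over:.0%} over the writing minimum")  # exactly 100%
```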

As I mentioned above, the central idea is that these mini habits got me to sit down and start. In both domains, reading and writing, that often got me into a groove, but not always. Some of those brief writing sessions ended the minute the timer went off: I wasn't feeling it, so I quit. But that was okay too: I had met my minimum, so I was cool with it. I didn't castigate myself. (Side benefit: no negative reinforcement!)

On some days, however, I kept going. Sometimes, while writing, I never even heard the timer go off as I slipped effortlessly into a glorious flow state. Interestingly, I never knew which type of day it was going to be until I sat down and started. Which meant there was a strong positive incentive to try each and every day.

The same thing happened with reading: many of the days I read just the minimum, and that was okay. But on other days I'd get engrossed and read double, triple or even five times that minimum page count. And once again, I never knew which day was going to be which.

This was an unmitigated success, and I recommend readers try out their own mini habits in domains they wish to explore. Who wouldn't want to fit 20 hours of writing and some 5-6 books of reading into a given month, and have it feel easy? These mini goals really work.

Nobody Wants to Find the Errors

I wanted to share one more thought about the various crises in "studies show" science.

We're finding a lot of errors in a lot of past studies, and--hopefully--we're fixing them. Or at the least we've been given the opportunity to change our beliefs when it turns out they were based on erroneous or unreplicable studies. This is good. And it's a halfway decent attempt at actually using the scientific method.

But think about this: Imagine you're a "studies show" scientist, and consider the various pressures out there arrayed against you if you take it upon yourself to uncover these types of errors. It takes precious time away from your own research. You look vaguely like a jerk for criticizing your peers. You get stonewalled when you ask to see people's data. And the research world is small: nobody wants to find errors in the work of someone they might work with (or worse, work for) in the future.

Worst of all by far: you don't get paid for it.

There is absolutely no incentive structure out there for finding study errors. In fact there are enormous incentives not to find them.

So it makes you even more cynical about "studies show" science: if they're finding as many errors as they are--despite all the pressures and reasons not to find them--how many more errors must there be?

You May Now Ignore All Scientific Studies

Readers, I've developed a cognitive rule of thumb for "scientific studies" that I now use whenever I hear or read about any study. Here it is:

Heuristic: Any study you see, ever, anywhere, that happens to reach you through the media is wrong in some fundamentally significant way. You may safely ignore it.

My first halting step toward this admittedly cynical mental rule came a long time ago: when "scientists" decided--and later undecided--that margarine was better for you than butter. I took another halting step toward this cynical rule when the "don't eat too many eggs" study came out, a study that blissfully ignored the fact that dietary cholesterol is not blood serum cholesterol.

My steps became a lot less halting thereafter, as I thought through what "scientists" used to think was true. Things like:

* "Healthy whole grains."
* An entirely upside down food pyramid.
* Recommendations for statin meds because of a "studies show" link to cardiovascular health that turned out to be wholly imaginary.
* The fact that the medical profession doesn't seem to know what normal blood pressure is.
* That toast causes cancer.

There are many, many more examples, of course, some amusing, some not funny at all, some actually life-or-death. And all wrong. And if you have even a cursory understanding of decline effects[1] and the great crisis of reproducibility,[2] you will lose any remaining faith in "studies show" science.

Still more severe examples include South Korea's misguided war on thyroid cancer[3], or the highly counterintuitive discoveries that annual mammography screenings have zero effect on life expectancy and that, shockingly, prostate cancer screenings negatively affect life quality and life expectancy.[4]

And when it seems "studies show" science couldn't be any more wrong, it gets worse: We've discovered that even some of the most important and foundational studies in twentieth century psychology cannot be reproduced.[5] We're seeing major, domain-shattering acts of sheer data fabrication--see vaccines[6] and social psychology[7] for two object examples. Even in the food industry, one of America's best-known dietary scientists, Cornell University's Brian Wansink, just got caught torturing his data to find links that don't really exist.[8]

And this is to say nothing of the various types of errors that inevitably show up in the study industry, as well as the rampant errors common to media coverage trumpeting study findings. These include errors like baseline risk error (a large relative increase in a risk that's too small to matter is still too small to matter), p-hacking (mining data for statistical anomalies first, then forming a post hoc hypothesis around something that is almost assuredly spurious), and many others, some of which we've already discussed elsewhere at Casual Kitchen. And don't forget the well-known pressure to "publish or perish" in academia, which is most likely the prime driver of many of the industry's data fabrication and data-torturing scandals.
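To see why p-hacking is so effective at producing spurious findings, consider the standard family-wise error rate calculation (my own illustration, not drawn from any of the studies cited here): test enough hypotheses on pure noise at the conventional p < 0.05 threshold, and a "significant" result becomes nearly guaranteed.

```python
# Probability of at least one spurious "significant" result when running
# many independent tests on pure noise at significance threshold alpha.

def chance_of_false_positive(num_tests, alpha=0.05):
    """Family-wise error rate: 1 minus the chance that ALL tests
    correctly come back non-significant."""
    return 1 - (1 - alpha) ** num_tests

for n in (1, 5, 20, 100):
    print(f"{n:>3} tests -> "
          f"{chance_of_false_positive(n):.0%} chance of a false positive")
```

Just 20 tests on noise already give roughly a 64% chance of a publishable "finding," and 100 tests make one all but certain--no fraud required, only a willingness to keep looking.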

Last but certainly not least, there's the structural fact that the media--and it does not matter which media--inevitably oversimplifies or exaggerates all study claims to the point of making them into anti-information. And I have yet to read an article in my life saying "XYZ perfectly normal everyday activity shows no link to cancer." It's always the opposite.

You can safely ignore it all.

Finally, for any readers who consider this article to be somehow anti-science, keep in mind: "studies show" science is not science. It never was.

[1] A readable article on the "Decline Effect," a genre of the problem of reproducibility.

[2] More good articles on the reproducibility crisis here and here.

[3] Korea and its misguided search for thyroid cancer as a textbook example of two types of silent risks: overscreening risk and overdiagnosis risk.

[4] For excellent discussions of why prostate screenings and mammography screenings carry silent risks, see Gerd Gigerenzer's book Risk Savvy and Gilbert Welch's book Less Medicine, More Health.

[5] Classic and foundational studies in psychology cannot be successfully replicated.

[6] See for example the famously fraudulent “vaccines cause autism” study. Ironically, the paper pointing out the fraudulence itself had to be later corrected because it never acknowledged financial support from MMR vaccine makers. (!)

[7] “2011: A Year of Horrors” in social psychology.

[8] Bombshell allegations of data mining, p-hacking and other types of statistical data torture shatter the credibility of dietary science professor Brian Wansink's entire department at Cornell University: here, here, and most depressingly: here.

The Very Best of Casual Kitchen 2017

With 2017 almost in the bag, here's my annual retrospective of Casual Kitchen's best posts of the year.

Once again, I want to thank you, readers, for your time, your attention, and your support. I'm deeply grateful. See you all in January!

Top Posts of 2017

8) How To Beat Inflation

7) Why Bad Blogs Get More Readers

6) When Food Advocates Tell You What To Serve Your Customers

5) If I Can't Give Advice (!) How Do I Evangelize Frugality and Anticonsumerism?

4) Nine Terrible Ways to Make Choices (That You Probably Didn't Know You Were Using)

3) Using Your Sophistication and Great Taste Against You

2) Running Towards Humps

1) Checkers... and Chess

Note for new readers: If you're new here and you'd like to look over Casual Kitchen's best work over the years, a great place to start is the "Best Of" posts from each year:

Best of Casual Kitchen 2016

Best of Casual Kitchen 2015

Best of Casual Kitchen 2014

Best of Casual Kitchen 2013

Best of Casual Kitchen 2012

Best of Casual Kitchen 2011

Finally, let me thank all readers, new and old, for generously supporting my work with your kind purchases at Amazon via the links at this site!