How Our Shopping Behavior Changed With Remote Work


Remote work affects where we can live (farther from the job), our childcare (women do more), and what we wear (more casual).

Also, our grocery shopping changes.

WFH Grocery Shopping

Between 2019 and 2026, the fraction of our WFH (work from home) days soared from 7% to slightly more than 25%. As a result, for more than 35 million people, where we shop, what we buy, and how much we spend have changed.

Where We Shop

As you might expect, partially and fully remote households do more online grocery shopping, and relatively more of those purchases happen on weekdays. Meanwhile, their weekday trips to the store have declined relative to weekend visits. Still, when they do go to the store, they make more trips overall, each of shorter duration.

You can see the online increase:

[Figure: shopping behavior — the rise in online grocery purchases among remote households]

How Much We Spend

The data also indicate that remote worker households spend 1% more on groceries. One reason is that they don't seem to mind paying higher prices, and they take advantage of fewer deals. An economist would say their demand is relatively price inelastic: caring less about price, they cut back little when prices rise.
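To make "inelastic" concrete, here is a minimal sketch of the standard midpoint (arc) elasticity formula, using made-up numbers for illustration — the quantities and prices below are not from the study:

```python
def arc_elasticity(q1, q2, p1, p2):
    """Midpoint price elasticity of demand:
    (% change in quantity) / (% change in price),
    with each % change measured against the midpoint."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# Hypothetical shopper: price rises from $4.00 to $5.00,
# but quantity barely falls, from 10 units to 9.
e = arc_elasticity(10, 9, 4.00, 5.00)
print(round(e, 2))  # -0.47
```

Because the absolute value is below 1, a roughly 22% price increase produced only an 11% drop in quantity — the inelastic pattern the study attributes to remote shoppers.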

What We Buy

The data for this study were precise. For example, researchers could distinguish between purchases of 20 oz. bottles of Coca-Cola and 12-packs of cans. As a result, they could be sure that, for remote consumers, spending was up for food and general merchandise and down for health and beauty items. Furthermore, they found that the breadth of product selection expanded, with more distinct items purchased. Also, when the husband in a married household switched to remote work, he took over more of the shopping, and prices mattered less.

Our Bottom Line: Tradeoffs

Throughout, remote workers' shift in shopping behavior reflects their tradeoffs. Because the opportunity cost of a decision is the sacrificed alternative, I suspect the remote worker's tradeoffs primarily relate to time. Whether it's the convenience of online shopping, fewer trips to the store, or less attention to price, the common thread is the extra time the alternative would have required.

My sources and more: Most of today’s facts are from this NBER paper. Then, we found more WFH stats here while econlife explained the increase in dogwalkers.






A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed many users of the AI and of the X social media platform to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that over a nine-day period, Grok users made 4.4 million "undressed" or "nudified" images, 41% of all images created in that span.

X, xAI, and their safety and child-safety teams did not immediately respond to a request for comment.

The wave of “undressed” images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating their policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed (PDF) by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that seems realistic. The new complaint compares Grok’s self-proclaimed “spicy AI” generation to the “dark arts” with its ease of subjecting children to “any pose, however sick, however fetishized, however unlawful.”

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI’s servers, which makes the company liable, the complaint argues.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 was first alerted to the fact that abusive, AI-generated sexual material of her was circulating on the web by an anonymous Instagram message in early December. The filing says she was told about a Discord server by the anonymous Instagram user, where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.




