Vana, P., Lambrecht, A., & Bertini, M. (2018). Journal of Marketing Research, 55(6), 852-868.
The authors examine purchase behavior in the context of cashback shopping—a novel form of price promotion online in which consumers initiate transactions at the website of a cashback company and, after a significant delay, receive the savings promised to them. Specifically, they analyze panel data from a large cashback company and show that, independent of the predictable effect of cashback offers on initial demand, cashback payments (1) increase the probability that consumers will make an additional purchase via the website of the cashback company and (2) increase the size of that purchase. These effects pass several robustness checks and are also meaningful: At average values in the data, an additional $1.00 in cashback payment increases the likelihood of a future transaction by .02% and spending by $.32—figures that represent 10.03% of the overall impact of a given promotion. Moreover, the authors find that consumers are more likely to spend the money returned to them at generalist retailers, such as department stores, than at other retailers. They consider three explanations for these findings; the leading hypothesis is that consumers fail to treat money as a fungible resource. They also discuss implications for cashback companies and retailers.
Vana, P., & Lambrecht, A. (2021). Marketing Science, 40(4), 708-730.
Online product reviews constitute a powerful source of information for consumers. Past research has studied the effect of aggregate measures of reviews (such as average product rating and number of reviews) on consumer behavior. In this study, we investigate how individual reviews displayed on a product webpage affect consumers’ purchase likelihood. Identifying this effect is challenging because retailers are free to select which reviews to display on the product page and in what order, making the display of reviews in particular positions potentially endogenous. We address this challenge by using an empirical context in which the retailer displays reviews by recency, and we exploit the variation in review positions generated as newer reviews are added on top of older ones. We find that individual reviews have a strong effect on consumer purchase decisions. These effects are particularly pronounced when individual reviews contrast with the aggregate information that is immediately visible on the product page or help consumers resolve uncertainty about the product.
Complementing Human Effort in Online Reviews: A Deep Learning Approach to Automatic Content Generation and Review Synthesis
Carlson, K., Kopalle, P., Riddell, A., Rockmore, D., and Vana, P. (2021). Forthcoming in the International Journal of Research in Marketing.
Online product reviews are one of the most ubiquitous and helpful sources of information available to consumers today for making purchase decisions. Consumers particularly rely on reviews by experts who professionally critique products for a variety of experience goods such as books, movies, and wines. While these experts are capable of objectively evaluating a product, they may need assistance articulating their opinions into engaging reviews given the sheer volume of reviews they are typically tasked to write. In this paper, we seek to address this challenge by asking a broader question: “To what extent can a machine learn to write an expert review that is as engaging, informative, and appropriate as a human-written review?” We use a deep learning approach based on the Transformer network that takes as input a list of traits of the wine and generates as output a human-readable text review of the wine. We apply our model to 125,000 expert reviews from Wine Enthusiast and the associated metadata, including winery, style, reviewer's name, and rating. Our results suggest that the model generates reviews close to human quality, with descriptions that closely reflect the wine. In the spirit of a Turing Test, we assess through an experiment on MTurk whether humans can distinguish the machine-generated reviews from human-generated ones. We find no significant difference in respondents’ identification of whether a review was written by a machine or a human being. We thus show that machines can indeed learn to write “human-quality” reviews. While extant literature focuses on using natural language processing to generate text that resembles human-written text, our main intended contribution in this research is to demonstrate that machines are capable of performing the critical marketing task of writing expert reviews, which until now has been an exclusively human task.
Further, to our knowledge, there is no research that directly tests human- versus machine-generated reviews. We suggest three possible applications of our model and approach and provide directions for future research.
Brands In Unsafe Places: Effects of Brand Safety Incidents on Consumers’ Brand Attitudes
Grewal, L.S., Vana, P., & Stephens, A.T. (2022). Revising for 2nd round at the Journal of Marketing Research.
In numerous well-publicized incidents, brands’ reputations have come under threat because their content has appeared adjacent to “unsafe” content (e.g., offensive, harmful, or violent posts). Despite improvements in digital media platforms’ content-moderation algorithms, which attempt to identify and remove unsafe content before users ever see it, it is currently impossible for platforms or brands to fully control the digital environments in which a brand’s content appears. Thus, for a brand posting digital content or advertisements, there is always a risk that the brand will appear “with” unsafe content, particularly when the platform’s content is sourced from users. Conventional wisdom suggests that this must be harmful to brands; indeed, brand managers and Chief Marketing Officers consider so-called “brand safety incidents” a serious threat. This paper examines whether this is the case and identifies conditions under which any harmful effects of brands appearing adjacent to unsafe content in digital media are mitigated. Combining experiments with archival data on brand safety incidents, the authors show how and when unsafe content appearing adjacent to brand content can influence consumers’ brand attitudes and evaluations. Additionally, the authors provide insights into what brand managers can do to mitigate brand safety threats in digital media.
The Impact of Algorithmic Components on Contributions in Charitable Crowdfunding
Vana, P., & Lambrecht, A. (2022). Under review at Management Science.
Crowdfunding platforms host thousands of projects and typically use ranking algorithms that, based on a set of project-specific variables, determine the rank order of projects to facilitate contributors’ choices and allow the platform to achieve specific goals. Here, we examine the role of the individual components entering a ranking algorithm. We ask how such components affect project completion. We then explore whether ranking algorithms can help direct funding toward underprivileged groups. Last, we examine the trade-offs a crowdfunding charity faces between directing funding toward underprivileged groups and having a large number of projects complete. Our study is based on data and the ranking algorithm of the educational crowdfunding platform DonorsChoose. We develop a structural model of donors’ contributions using a multiple discrete-continuous choice framework and report estimation results as well as counterfactual outcomes if the charity reweighted algorithmic components. We find that, among the algorithm’s components, the amount remaining for a project, as well as other variables related to a project’s progress, strongly affects whether a project will be fully funded. At the same time, our results indicate that prioritizing projects from high-poverty schools in the algorithm significantly increases contributions to such schools. Encouragingly, our findings further suggest that, at least in our empirical context, using the algorithm to direct funding toward high-poverty schools does not compromise the platform’s goal of collecting a large amount of funding overall.
Vana, P., & Pachigolla, P. (2021). Working paper.
A prominent channel for the online sale of illegal goods such as drugs, weapons, and counterfeit products is Darknet markets, which are platforms where buyers and vendors transact on the Dark Web. The Dark Web offers its users a high degree of anonymity and security through encryption technology and the use of cryptocurrency. Law enforcement agencies have responded to Darknet markets by conducting covert bust operations that shut these markets down. In this research, we investigate whether bust operations deter the activity of buyers and vendors in other Darknet markets that are not subject to the bust. We leverage a joint bust operation conducted by the FBI and Interpol in November 2014 in which Silk Road 2.0, a large Darknet market, was shut down. Our results indicate that, following the bust, prices dropped and the number of transactions per month per vendor increased in two other large Darknet markets that were not busted. Consequently, the bust did not deter criminal activity in these markets, and it became cheaper to buy illegal products. We explore the mechanism behind the price drop and find that vendors lowered prices to attract buyers wary of shopping in Darknet markets because of the increased risk of getting caught after the bust. We offer recommendations for law enforcement agencies on the sequencing of busts as well as the characteristics of markets to target so that unintended consequences in other markets are minimized.