Crowd-sourced technologies reveal the tensions between information providers and consumers
A company named Waze Mobile has developed a GPS navigation app, a social mobile application that provides free, turn-by-turn navigation based on real-time traffic conditions as reported by users. The greater the number of drivers who use the app, the more beneficial it is to its users.
Waze is but one example of the proliferation of technologies that purvey “the wisdom of the crowd.” Technologies such as these collect, maintain and disseminate information provided by users to establish reputations and recommendations for practically everything – high schools, restaurants, doctors, travel destinations, and even religious gurus.
Our recent research looks at this phenomenon through the lens of mathematical models to shed light on the optimal strategy for sharing and using such information. Our work underscores the difficulty of managing such crowd-sourced technologies, which must find a way to balance the tension inherent in a system in which agents are both providers and consumers of information. Companies must carefully manage this conflict of interest, and deciding just how much information to reveal is, we find, a surprisingly tricky matter.
Waze provides an apt case in point. When a customer logs in to Waze with a smartphone, he continuously sends information to Waze about his speed and location. This information and information sent by others enable Waze to recommend to this driver and others an optimal route to their destinations. But in order to provide good recommendations, Waze must have drivers on every possible route. Indeed, as Waze’s own president and co-founder has admitted, Waze sometimes recommends a particular route to a driver even though – and indeed, exactly because – the service does not have information about that route. The information transmitted by this “test” driver is then used to better serve future drivers. But, in order not to deter drivers from using the system, Waze must be very careful about how often it “sacrifices” drivers by subjecting them to an exploratory drive intended to improve the experiences of others, rather than to provide them with the quickest trip. Thus, the tension: All users benefit from the experiences of the explorers, but no user wants to be thrust into the role of explorer.
The wisdom of the crowd is far from perfect – as anyone who has acted on a crowd-sourced recommendation that went awry has experienced first-hand. This is largely because of one of the important characteristics of these new reputational markets, namely, the feedback effect, wherein users are consumers as well as generators of information. A policy that ignores this effect and simply provides the most accurate, current recommendations will lead in the long run to insufficient exploration of available information – think of the alternative routes in the case of Waze – and, hence, a suboptimal outcome.
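The explore/exploit trade-off described above can be sketched with a simple "epsilon-greedy" rule. This is a hypothetical illustration of the general idea, not Waze's actual algorithm: with small probability the recommender "sacrifices" a driver to an exploratory route, and otherwise recommends the route currently believed fastest, updating its estimates from each driver's report.

```python
import random

def recommend_route(avg_times, epsilon=0.1):
    """Epsilon-greedy: usually recommend the route believed fastest,
    but occasionally send a driver down another route to learn about it."""
    if random.random() < epsilon:
        return random.randrange(len(avg_times))  # exploratory "test" drive
    # exploit: the route with the lowest estimated travel time
    return min(range(len(avg_times)), key=lambda r: avg_times[r])

def update(avg_times, counts, route, observed_time):
    """Fold the travel time reported by the latest driver into the
    running average for the route they were sent on."""
    counts[route] += 1
    avg_times[route] += (observed_time - avg_times[route]) / counts[route]
```

Setting epsilon to zero gives the "most accurate current recommendation" policy the article warns about: routes never tried again are never re-evaluated, so the system can get stuck on a stale estimate. A larger epsilon improves long-run estimates at the cost of the individual drivers sent exploring.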
Clearly, a policy not to reveal any information would not be helpful in any way. However, a policy of full transparency is not ideal either.
This applies to a wide array of situations, not all of them stemming from new technological innovations. Consider, for instance, the controversy over healthcare report cards for evaluating medical providers that are being discussed in the UK, the US and elsewhere. These systems propose public disclosure of patient health outcomes for individual physicians or hospitals. Supporters argue that the system gives providers powerful incentives to improve quality while providing patients with important information. Sceptics counter that report cards may encourage providers to “game” the system by avoiding sick patients, seeking healthy patients, or both. Finding a way to balance these incentives and make the information beneficial is difficult. The performance record of physicians or medical centres just embarking on certain procedures will be difficult to evaluate – well-established people and places will have many cases to cite, while those new to the procedure will have so few that their record will not be statistically significant. At the same time, society has an interest in seeing that newcomers learn these procedures, rather than sending all patients to one “best” provider. Sometimes too much transparency comes at a cost and does not produce the best outcome.
Indeed, the difference between the “best” and “second best” may be negligible. Consider, for instance, the situation with TripAdvisor, the dominant online source in the hospitality industry, with more than 75 million reviews generated by some 40 million visitors per month. TripAdvisor’s Popularity Index is a company secret, yet it is apparent that its exact strategy differs from a simple aggregation. The closer a property is to a Number One ranking in its given market, the more numerous its direct online bookings. For example, a property ranked Number One generates 11 per cent more bookings per month than one ranked Number Two. The difference is particularly striking given that in most cases the difference between similarly ranked hotels is minor. TripAdvisor’s revenue is generated through advertising, and as a result, the company’s main concern is the volume of visitors to its site.
These are just a few of many fascinating examples of the rapid growth in the number of rankings and league tables published in recent years, and they may well be the face of things to come – affecting the reputations of many, perhaps even university professors. Our work suggests that managers of these websites face a difficult tension between gathering information from users and making good recommendations to those same users.
About the authors
Yishay Mansour is a computer science professor at Tel Aviv University.
This article is based on Implementing the Wisdom of the Crowd, a working paper, available here