Comparing public polls with prediction markets, it is hard not to point out a distinct advantage of the prediction-market framework. Public-polls methodology traditionally incorporates a two-day lag period, which roughly corresponds to the time between collecting the data and publishing the results. Such a practice was probably justified ten or fifteen years ago, but current technological development has very likely changed the real lag time. Moreover, the adjusted-polls method is essentially a form of historical data analysis, and there is no need to recall how fundamentally wrong every quantitative technique based purely on historical data has turned out to be. On the other side, every price obtained in prediction markets represents a real-time value, with the additional advantage of being easily interpreted, since it depends only on the particular contract design. Prediction markets also offer the convenience of being able to predict several different metrics simultaneously. Political prediction markets further profit from some economic arguments, the most obvious in this context being the equilibrium price. Nevertheless, public polls and prediction markets should by no means be considered mutually exclusive techniques. On the contrary, they possess complementary characteristics. Arnesen and Bergfjord (2014) state that, while survey respondents represent the voters, market traders interpret the voters and anticipate their behavior. Polls can, in theory, be used to improve the accuracy of prediction markets; prediction markets, on the other hand, are unlikely to have a positive influence on the accuracy of the polls, because the average citizen who takes part in a public survey probably does not possess the financial literacy necessary to participate in a prediction market. Such a view, however, seems accurate only up to a certain point.
Market traders use various sources, including polls, as input for their trading decisions, but they also have to rely on other perspective-taking methods. Prediction markets could have an impact on the results of public polls, but it is improbable that this impact would be positively correlated with accuracy. There is a famous argument that being informed about politics is itself an irrationality.50 “Nothing strikes the student of public opinion more forcefully than the paucity of information most people possess about politics.”51 As a combination of expressing opinions and acting upon their interpretation, prediction markets seem able to overcome some of the persistent flaws of modern society, even in the ambiguous field of politics.
In contrast, little is known about the applicability and performance of prediction markets over the long term. Results from an analysis of the Foresight Exchange (FX),26 a prediction market aiming at assessing long-term developments, suggested that prediction markets might perform well for longer forecasting horizons (Pennock et al. 2001a). Launched in 1994, the FX is an online play-money prediction market that aims at forecasting events in the far future. For example, at the time of writing, one claim (‘Lunar Excursion Tourism by 2025’) predicted a 20% chance that a private tourist will land on the surface of the moon by January 1, 2025; thus, this claim cannot be judged before January 1, 2025. For 161 contracts that referred to ‘yes’ or ‘no’ questions, Pennock et al. (2001a) recorded FX forecasts thirty days before the respective outcome was known. They found that the FX forecasts strongly correlated with outcome frequencies. However, a forecasting horizon of thirty days does not allow for drawing conclusions on the long-term forecasting performance of prediction markets. Furthermore, the analysis lacked a comparison to a benchmark forecast.
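The kind of calibration check Pennock et al. ran on the 161 FX contracts can be sketched as follows. The data here are synthetic stand-ins (the real FX forecasts are not reproduced), and the five-bin scheme is an assumption made purely for illustration.

```python
import random

random.seed(0)

# Synthetic stand-in for the 161 (forecast, outcome) pairs: each outcome is
# drawn with probability equal to the forecast, i.e. a perfectly calibrated
# market, so observed frequencies should track mean forecasts per bin.
forecasts = [random.random() for _ in range(161)]
pairs = [(p, 1 if random.random() < p else 0) for p in forecasts]

def calibration_table(pairs, n_bins=5):
    """Group forecasts into bins; report mean forecast vs. outcome frequency."""
    bins = [[] for _ in range(n_bins)]
    for forecast, outcome in pairs:
        idx = min(int(forecast * n_bins), n_bins - 1)
        bins[idx].append((forecast, outcome))
    rows = []
    for b in bins:
        if b:
            mean_forecast = sum(f for f, _ in b) / len(b)
            frequency = sum(o for _, o in b) / len(b)
            rows.append((mean_forecast, frequency, len(b)))
    return rows

for mean_forecast, frequency, n in calibration_table(pairs):
    print(f"mean forecast {mean_forecast:.2f}  outcome frequency {frequency:.2f}  (n={n})")
```

A market whose forecasts "strongly correlate with outcome frequencies" is one where the two columns of this table roughly agree; the thirty-day caveat from the text applies unchanged, since a well-calibrated table says nothing about longer horizons.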
to population average belief. Wolfers and Zitzewitz (2006) confirm this result, extending it to a broad class of models. Both papers agree that the estimate of average beliefs provided by prediction market prices can be biased, but they quantify that bias as small. Conversely, Manski (2006) shows that under risk neutrality the divergence between the price and the average belief can be large. He and Treich (2012) summarize and complement the previous results, providing conditions for the equilibrium price to match the average belief: for every possible beliefs distribution it is necessary and sufficient that agents share a logarithmic utility, or, alternatively, for every possible strictly concave utility, it is necessary and sufficient that the beliefs distribution is symmetric around one half. In recent years, attention has increasingly switched to the case of repeated prediction markets, which share more similarities with other financial market models. Beygelzimer et al. (2012) and Kets et al. (2014), in a dynamic version of the previous models, assume that in every time step a prediction market on a binary event is held. The success probability of the binary event, constant over time, is assumed unknown. Traders have a certain amount of wealth, and in every period they decide, according to their beliefs, how much to invest in the prediction market and what position to take. In every time step a central auctioneer collects the orders and establishes the price accordingly. At the end of the period the outcome of the event is revealed, agents’ wealth is updated, and the process is repeated. This dynamic framework has the advantage of providing an objective probability (the one that drives the Bernoulli trials) that acts as a benchmark for evaluating the correctness of the market price and of agents’ beliefs. Beygelzimer et al. (2012) find that, under the assumption that agents bet according to the Kelly criterion (Kelly, 1956), the market price adapts to the success probability at the optimal rate (i.e., in a Bayesian fashion) and provides a prediction that is only slightly worse than that of the best agent. Kets et al. (2014) go further and show that with Kelly investors only the agents with the most accurate beliefs survive in the long run and, consequently, market prices converge to their beliefs. Based on extensive numerical simulations, they also suggest that if Kelly traders bet only a small fraction of their wealth, according to the so-called “fractional Kelly” rule (see for instance MacLean et al., 1992, 2004, 2005), then more than one agent will survive in the long run and the market is efficient, as the expected price seems to converge to the true probability of the binary event.
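A minimal simulation of this repeated-market dynamic can be sketched as follows. It relies on the standard result that, with (fractional) Kelly bettors, the clearing price in each period equals the wealth-weighted average belief, and that a bettor who effectively backs the probability lam*b + (1-lam)*price has wealth multiplied by that stake's return after each draw. The beliefs, the true probability, and the horizon are illustrative choices, not values from the papers.

```python
import random

random.seed(1)

def simulate(beliefs, p_true, rounds, kelly_fraction=1.0):
    """Repeated binary prediction market with (fractional) Kelly bettors.

    Each period the clearing price is the wealth-weighted average belief;
    agent i effectively backs lam*b_i + (1-lam)*price, so wealth is
    multiplied by effective/price after a success and by
    (1-effective)/(1-price) after a failure.
    """
    wealth = [1.0] * len(beliefs)
    price = None
    for _ in range(rounds):
        total = sum(wealth)
        price = sum(w * b for w, b in zip(wealth, beliefs)) / total
        outcome = random.random() < p_true
        for i, b in enumerate(beliefs):
            effective = kelly_fraction * b + (1 - kelly_fraction) * price
            wealth[i] *= (effective / price) if outcome else ((1 - effective) / (1 - price))
    total = sum(wealth)
    return price, [w / total for w in wealth]

beliefs = [0.1, 0.3, 0.5, 0.7, 0.9]
price_full, shares_full = simulate(beliefs, p_true=0.7, rounds=500, kelly_fraction=1.0)
price_frac, shares_frac = simulate(beliefs, p_true=0.7, rounds=500, kelly_fraction=0.5)
print(f"full Kelly: price {price_full:.3f}, max wealth share {max(shares_full):.3f}")
print(f"half Kelly: price {price_frac:.3f}, max wealth share {max(shares_frac):.3f}")
```

With full Kelly betting, wealth concentrates on the agent whose belief is closest to the true probability (0.7 here), and the price converges to that belief, mirroring the Kets et al. selection result; with half-Kelly betting, wealth remains somewhat more dispersed over the same horizon.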
Department of Finance, University of Iowa
Prediction markets for future events are increasingly common, and they often trade several contracts for the same event. This paper considers the distribution of a normative risk-neutral trader who, given any portfolio of contracts traded on the event, would choose not to reallocate that portfolio even if transaction costs were zero. Because common parametric distributions can conflict with observed prediction market prices, the distribution is given a nonparametric representation together with a prior distribution favoring smooth and concentrated distributions. Posterior modal distributions are found for popular vote shares of the U.S. presidential candidates in the 100 days leading up to the elections of 1992, 1996, 2000, and 2004, using bid and ask prices on multiple contracts from the Iowa Electronic Markets. On some days, the distributions are multimodal or substantially asymmetric. The derived distributions are more concentrated than the historical distribution of popular vote shares in presidential elections, but do not tend to become more concentrated as time to election diminishes.
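The idea of reading a full distribution out of multiple contract prices can be sketched in a toy form. Under risk neutrality, the price of a contract paying $1 if the vote share lands in a given interval equals the probability assigned to that interval, so normalized mid-quotes form a discrete distribution whose modes and asymmetry can be inspected directly. The bins and quotes below are hypothetical, not IEM data, and this naive normalization is far cruder than the paper's nonparametric posterior analysis.

```python
# Hypothetical bid/ask quotes (in dollars) for contracts paying $1 if the
# candidate's two-party vote share falls in the given bin.
quotes = {
    "(0.44, 0.46]": (0.04, 0.06),
    "(0.46, 0.48]": (0.18, 0.22),
    "(0.48, 0.50]": (0.30, 0.34),
    "(0.50, 0.52]": (0.14, 0.18),
    "(0.52, 0.54]": (0.16, 0.20),
    "(0.54, 0.56]": (0.05, 0.07),
}

def implied_pmf(quotes):
    """Mid-quote prices, normalized so the risk-neutral probabilities sum to 1."""
    mids = {k: (bid + ask) / 2 for k, (bid, ask) in quotes.items()}
    total = sum(mids.values())
    return {k: m / total for k, m in mids.items()}

def local_modes(pmf):
    """Bins whose probability exceeds both neighbours (a crude multimodality check)."""
    keys, probs = list(pmf), list(pmf.values())
    return [keys[i] for i in range(len(probs))
            if (i == 0 or probs[i] > probs[i - 1])
            and (i == len(probs) - 1 or probs[i] > probs[i + 1])]

pmf = implied_pmf(quotes)
print({k: round(p, 3) for k, p in pmf.items()})
print("modes:", local_modes(pmf))
```

On these invented quotes the implied distribution has two local modes, one just below and one just above 50%, which is the kind of multimodality the abstract reports for some trading days.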
For trade-based manipulations, Allen and Gale introduce a theory of manipulation via signalling. According to their model, a manipulative trader succeeds in imitating a well-informed insider within a population of traders with rational expectations if the manipulator's intentions are not revealed [Alle 92]. Bohm and Sonnegaard describe the vulnerability of prediction markets to coalitions performing circular trading [Bohm 99]: a coalition member A could offer a large number of shares to the market at a very low price, whereupon coalition member B immediately purchases all standing offers. At a later date, with higher prices in the same share, coalition member B offers to sell all shares at very high prices and coalition member A immediately accepts all standing offers. This procedure is particularly attractive when the queue of bids is short at the times of both actions. Hansen et al. [Hans 04a] describe an incentive for manipulation in a prediction market (i) which is covered by the media and (ii) in which a decisive-vote illusion can be created. They present the behavioural model in Figure 2.2 as a basis for this type of manipulation. The figure shows the rationale of a political stock market without (left) and with (right) coverage by mass media. They argue that even if the probability of a single vote having a decisive influence on the outcome of the real-world event is infinitesimally small (see e.g. Owen and Grofman [Owen 84]), a setting in which a vote illusion is present, i.e. one in which the real-world deciders believe in the influence of their vote, can occur. Together with extensive mass-media coverage, this motivates the above-mentioned willingness of manipulators to accept losses in order to move a share's price, which Hansen et al. call the ’circle of influence’.
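The circular-trading scheme described by Bohm and Sonnegaard can be sketched as a simple two-entry ledger. The point of the sketch is that the coalition's combined cash and share holdings are unchanged by the round trip, while the last printed price has been moved at essentially no cost to the coalition (ignoring fees and any outsiders stepping into the short queues); the quantities and prices are invented for illustration.

```python
# Ledger in integer cents to avoid floating-point residue.
cash = {"A": 0, "B": 0}       # cash deltas of the two coalition members
shares = {"A": 100, "B": 0}   # A starts out holding 100 shares

def trade(seller, buyer, qty, price_cents):
    """Transfer qty shares from seller to buyer at the given price."""
    shares[seller] -= qty
    shares[buyer] += qty
    cash[seller] += qty * price_cents
    cash[buyer] -= qty * price_cents
    return price_cents  # the price this trade prints to the market

# Step 1: A offers 100 shares cheaply, B lifts the offer (prints a low price).
last_price = trade("A", "B", 100, price_cents=10)
# Step 2: later, B offers the shares at a high price and A buys them back.
last_price = trade("B", "A", 100, price_cents=90)

coalition_cash = cash["A"] + cash["B"]
coalition_shares = shares["A"] + shares["B"]
print(f"printed price: {last_price} cents, coalition cash delta: {coalition_cash}, "
      f"coalition shares: {coalition_shares}")
```

The manipulation cost is borne only if an outsider trades against one leg of the cycle, which is why the scheme is most attractive when the order queues are short.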
With the above-mentioned requirements for potential instruments to monitor venture capital investment in mind, the following chapter takes a closer look at prediction markets, especially their ability to perform internal control.
Prediction markets in general are artificial markets wherein participants trade contracts with payoffs tied to a future event, thereby yielding prices that can be interpreted as market-aggregated forecasts. Modern prediction markets are dominated by service providers (e.g., Prediki) that use a question-and-multiple-answer structure. Participants therefore trade answer-related contracts of the corresponding questions. The price movement of each answer can be monitored over time, which provides useful information on the probability of each answer.
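How such per-answer prices translate into monitored probabilities can be sketched as follows. Whether a given provider normalizes exactly this way is an assumption, and the question, answers, and price history are invented for illustration.

```python
# Hypothetical price history (one snapshot per day) for the answers of a
# single question; prices are in [0, 1] and need not sum exactly to one.
history = {
    "Candidate X": [0.48, 0.51, 0.55, 0.60],
    "Candidate Y": [0.44, 0.42, 0.38, 0.33],
    "Other":       [0.06, 0.06, 0.05, 0.05],
}

def implied_probabilities(history, day):
    """Normalize the answers' prices on one day into probabilities."""
    snapshot = {a: p[day] for a, p in history.items()}
    total = sum(snapshot.values())
    return {a: price / total for a, price in snapshot.items()}

for day in range(4):
    probs = implied_probabilities(history, day)
    print(f"day {day}: " + ", ".join(f"{a}: {p:.2f}" for a, p in probs.items()))
```

Tracking the normalized series rather than raw prices keeps the per-answer figures interpretable as a probability distribution over the answers even when the raw quotes drift away from summing to one.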
Why do these markets predict so accurately? There are no satisfying explanations thus far. As Berg and Rietz (2006) state, “exactly how prediction markets become efficient is something of a mystery.” The main goal of this paper is to provide and formally illustrate a theory. In what I shall call the information acquisition explanation, traders have stronger incentives to acquire information about the unknown outcome the larger their endowment. Consequently, high-endowment traders are better informed. Moreover, high-endowment traders have a larger impact on the market price, because they can buy more assets. This interaction implies that a few, but well-situated, traders can move the market price (interpreted as the prediction) in the right direction, thereby explaining the observed accuracy. Unlike many financial market models, the explanation relies neither on the presence of insiders nor on the ability of traders to infer information from asset prices. Even markets whose traders have systematically biased opinions about the outcomes can produce accurate forecasts, because of effective incentives for information acquisition and weighting by investment volume.
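The interaction can be made concrete with a deterministic toy calculation. Assume, as the explanation posits, that a trader's belief gets closer to the true probability as endowment grows; here this is encoded by a simple shrinkage rule, belief = (1 - s) * truth + s * 0.5 with s = 1/(1 + endowment), which is an illustrative assumption, not a model from the paper. The wealth-weighted price then sits closer to the truth than the unweighted average opinion.

```python
TRUTH = 0.8        # true probability of the event (illustrative)
PRIOR = 0.5        # uninformed default opinion

endowments = [1, 1, 1, 1, 100]   # four small traders, one well-endowed trader

def belief(endowment):
    """More endowment -> more information acquired -> less shrinkage to the prior."""
    s = 1.0 / (1.0 + endowment)
    return (1 - s) * TRUTH + s * PRIOR

beliefs = [belief(w) for w in endowments]
unweighted = sum(beliefs) / len(beliefs)
weighted = sum(w * b for w, b in zip(endowments, beliefs)) / sum(endowments)

print(f"average opinion:        {unweighted:.3f}")
print(f"wealth-weighted price:  {weighted:.3f}  (truth: {TRUTH})")
```

The single well-endowed trader both holds the most accurate belief and carries the most weight, so the market price tracks the truth even though the average opinion is noticeably biased toward the prior.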
An evolutionary explanation is not just compatible with the information acquisition explanation; it complements it. Successful investing increases endowment, which improves incentives for information acquisition and thereby future success, along with weight in the market. Recently, Blume and Easley (2006) asked whether agents with more accurate beliefs survive over time in the market, whereas agents with inaccurate beliefs vanish. In an infinite-horizon consumption model, they show that risk-averse agents in complete markets indeed survive only if they have correct beliefs, given that someone else does. If no one has accurate beliefs, then those with beliefs closest to the truth survive, assuming homogeneous discount factors. However, their results may not be directly applicable to prediction markets. For instance, they do not model the entry of new traders, and if there is continuous entry of new, biased traders, then the population will never be completely free of biased beliefs. Moreover, selection with respect to beliefs is retrospective, i.e., survivors have had correct forecasts. Consequently, selection need not guarantee accurate forecasts in the future if novel events come up for prediction. A question for empirical research is whether there is a forecasting ability, so that past accuracy is positively related to future accuracy, or whether selection just favors those who “got lucky,” in which case there would be a regression-toward-the-mean effect.
Since the beginning of prediction market research, various applications have been discussed. From predictions of political election results to supply chain information and project management, the spectrum is widely diversified. While the application spectrum does not lack variety, there has been little diversification in scientific research. Although an enormous number of articles on “open to the public” prediction markets are available, the academic presence of internal control prediction markets is surprisingly small (Christiansen 2007). Despite the potential of internal control prediction markets as corporate governance tools, there are only a few relevant articles on comprehensive “real world” experiments such as the one Ortner conducted (Henderson & Abramowicz 2007; Ortner 2000). In addition, most of the articles have focused on the accuracy and general applicability of prediction markets as internal control instruments. The subject of sensitivity, especially regarding the general development of the prediction target, and of event sensitivity, concerning the identification of specific event influences, has neither been discussed from an internal control perspective nor examined in a comprehensive field experiment.
Suggested Citation: Choo, Lawrence; Kaplan, Todd R.; Zultan, Ro'i (2019): Manipulation and (mis)trust in prediction markets, FAU Discussion Papers in Economics, No. 12/2019, Friedrich-Alexander-Universität Erlangen-Nürnberg, Institute for Economics, Nürnberg
This Version is available at: http://hdl.handle.net/10419/210649
According to Servan-Schreiber et al. (2004) and Rosenbloom and Notz (2006), play-money markets are as accurate as real-money markets. They argue that real-money markets may better motivate information discovery, while play-money markets may yield more efficient information aggregation. Luckner et al. (2008) find that play-money markets for the FIFA World Cup are about as accurate as betting markets, which are strongly incentivized. Gruca et al. (2008) argue that there is no difference in forecast accuracy as long as there is a lot of publicly available information; otherwise, real-money markets perform better. In order to set some incentives in play-money markets, the usual procedure is to raffle prizes among participants. Various winning schemes are possible. Luckner and Weinhardt (2007) find that rank-order winning schemes lead to the best results in terms of prediction accuracy, due to the risk aversion of traders in competitive environments. In contrast, Wolfers and Zitzewitz (2006a) state that rank-order tournaments potentially provide an incentive to add variance to one’s true beliefs. As fixed payments do not stop traders from being irrationally active in their experiment, Luckner et al. (2008) conclude that traders are not driven by monetary incentives alone. Spann and Skiera (2003) point out that the motivation to participate decreases if payout dates are too far in the future. If the disbursement is limited, the question arises how to motivate traders intrinsically. Christiansen (2007) describes an accurate prediction market forecasting rowing events in the UK with no monetary or prize incentives at all; furthermore, he speculates that reputation within the rowing community and passion for the sport itself generate enough motivation. Cowgill et al. (2009) find, when adding “fun markets”2 to serious business-related markets, that the volumes in both markets are positively correlated, suggesting that the former might increase participation.
2.1. Fundamentals of Prediction Markets
Throughout history, business people have tried to forecast the future in order to improve the performance of their companies. Commodity futures can be traced back to the Middle Ages, when farmers and merchants faced the risk of price changes as a result of weather conditions or wars. In recent years, a relatively new approach to information aggregation has gained importance in the area of forecasting, namely prediction markets. Prediction markets bring a group of participants together and let them trade contracts whose payoff depends on the outcome of uncertain future events. The contracts thus represent a bet on the outcome of those future events. Once the outcome is known, traders receive a cash payment in exchange for the contracts they hold. Several studies describe how such markets have been applied to predicting future events or developments in the fields of politics (Forsythe et al., 1992), sports (Luckner et al., 2007), medicine (Polgreen et al., 2007), and entertainment (Pennock et al., 2000). Moreover, companies like Siemens or Hewlett-Packard have employed prediction markets in order to improve their decision making (Chen and Plott, 2002; Ortner, 1997). This section contains a definition of prediction markets (2.1.1), a description of their operational principle (2.1.2), and their theoretical foundations (2.1.3).
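The operational principle just described, buying a contract on an uncertain event and receiving a cash payment once the outcome is known, can be sketched as follows. The winner-take-all contract type and all numbers are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class WinnerTakeAllContract:
    """Pays a fixed amount if the event occurs, nothing otherwise."""
    event: str
    payoff_if_true: float = 1.0

    def settle(self, event_occurred: bool) -> float:
        return self.payoff_if_true if event_occurred else 0.0

# A trader buys 50 contracts at a market price of $0.62 each; the price can
# be read as the market's probability estimate for the event.
contract = WinnerTakeAllContract(event="Candidate X wins the election")
quantity, price = 50, 0.62

cost = quantity * price
payout = quantity * contract.settle(event_occurred=True)
print(f"cost ${cost:.2f}, payout ${payout:.2f}, profit ${payout - cost:.2f}")
```

Buying below one's own probability estimate is profitable in expectation, which is what channels dispersed private information into the price.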
This dissertation aims at the development of a novel framework for statistical motion prediction. Its main contributions are the derivations of the Bernoulli-Gaussian mixture model (BGMM) as well as the variational Bayesian Bernoulli-Gaussian mixture model (VBBGMM) for density approximation in the presence of mixed-feature variables. Further, their respective application to statistical long-term motion prediction has been thoroughly investigated and demonstrated. Mixed-feature variables often occur in artificial intelligence, since many real-world applications involve both continuous and categorical feature variables. The BGMM facilitates approximating unknown distributional shapes arbitrarily well and is therefore able to model a wide variety of random phenomena. Another important property of the BGMM is that it can capture correlations between mixed-feature variables due to a factorized component distribution, which is described in this thesis for the first time. Further, its Bayesian extension, the VBBGMM, exhibits several valuable advantages: it does not suffer from overfitting, it provides increased numerical stability, it allows for a seamless integration of prior knowledge, and its complexity can be determined automatically while the computational overhead is minor. In addition, both models facilitate online parameter inference. Prediction is accomplished by conditioning the joint PDF of input and output variables on the observed input. Thereby, instead of a single point estimate, a complete PDF is obtained as the final prediction, from which individual hypotheses can be extracted. Since the models are mixture distributions, they exhibit excellent approximation properties and can even be multimodal.
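The conditioning step can be illustrated with a minimal two-component Bernoulli-Gaussian mixture. Each component factorizes into a Bernoulli part for a binary feature and a Gaussian part for a continuous feature; conditioning on the observed binary value reweights the components, and the predictive density of the continuous variable is the reweighted Gaussian mixture (summarized below by its mean). The parameters are invented for illustration and do not come from the thesis.

```python
import math

# Two components, each factorized as Bernoulli(theta) x Normal(mu, sigma).
components = [
    {"weight": 0.5, "theta": 0.9, "mu": 0.0, "sigma": 1.0},
    {"weight": 0.5, "theta": 0.1, "mu": 5.0, "sigma": 1.0},
]

def responsibilities(b):
    """Posterior component weights after observing the binary feature b."""
    likes = [c["weight"] * (c["theta"] if b == 1 else 1 - c["theta"])
             for c in components]
    total = sum(likes)
    return [l / total for l in likes]

def predictive_mean(b):
    """Mean of the conditional mixture p(x | b)."""
    return sum(r * c["mu"] for r, c in zip(responsibilities(b), components))

def predictive_density(x, b):
    """Full conditional density p(x | b): a reweighted Gaussian mixture."""
    return sum(r * math.exp(-0.5 * ((x - c["mu"]) / c["sigma"]) ** 2)
               / (c["sigma"] * math.sqrt(2 * math.pi))
               for r, c in zip(responsibilities(b), components))

print("E[x | b=1] =", predictive_mean(1))  # pulled toward the first component
print("E[x | b=0] =", predictive_mean(0))  # pulled toward the second component
```

Because the result of conditioning is itself a mixture, the predictive density can remain multimodal, and individual hypotheses correspond to its component modes rather than to a single point estimate.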
At the other extreme, Eq. 9 is also tested to see whether it can be applied to predict sediment discharge with very fine bed material that is generally regarded as wash load (sediment size finer than 0.07 mm; Partheniades, 1977). Fig. 5 shows the predictions for laboratory data with d50 = 0.011 mm from Kalinske and Hsia (1945) and field data with d50 = 0.02 to 0.07 mm from Indian canals by Chitale (1966). For comparison, other equations are also included in Fig. 5, and it can be seen that Eq. 9 provides a reasonable prediction. Hence, it can be concluded that wash load can also be predicted, since the motions of coarse and fine sediment are governed by identical physical laws (Partheniades, 1977).
Action itself (more on this shortly) then needs to be reconceived. Action is not so much a response to an input as a neat and efficient way of selecting the next “input”, and thereby driving a rolling cycle. These hyperactive systems are constantly predicting their own upcoming states, and actively moving so as to bring some of them into being. We thus act so as to bring forth the evolving streams of sensory information that keep us viable (keeping us fed, warm, and watered) and that serve our increasingly recondite ends. PP thus implements a comprehensive reversal of the traditional (bottom-up, forward-flowing) schema. The largest contributor to ongoing neural response, if PP is correct, is the ceaseless anticipatory buzz of downwards-flowing neural prediction that drives both perception and action. Incoming sensory information is just one further factor perturbing those restless pro-active seas. Within those seas, percepts and actions emerge via a recurrent cascade of sub-personal predictions forged (see below) from unconscious expectations spanning multiple spatial and temporal scales.
The dual labor market theory of Peter B. Doeringer and Michael J. Piore (1971) is based on the hypothesis that labor markets are divided into segments, which are distinguished from each other by a separate system of rules, job behavior requirements, and different skills. This division is the result of employee characteristics such as gender, age, and race, which define their work environment and lifestyle. For example, human resources policies include preferences for recruiting white male workers to managerial positions by offering training, pay gains, promotion, and job security. This theory allows the analysis of issues such as barriers to satisfying the structural labor demand by women and teenagers, the availability of unstable and low-productivity jobs in advanced economies, the employment of immigrants in jobs that are not attractive for local workers, barriers to making unattractive jobs more appealing through market mechanisms such as raising wages, and the acceptance of unattractive jobs by socially vulnerable groups (Kogan 2007).
Figure 6.1: Example of a simulation step to compare the two interpretations of prediction intervals. Grey points represent data points in the test data; small dark points mark the borders of prediction intervals estimated on the training data. The x-axis is the index of the observation in the test data; the sample is ordered by the size of the corresponding prediction interval. Left: heuristic sample interpretation. Right: conditional interpretation; horizontal lines represent the borders of the intervals we are interested in. Nevertheless, Meinshausen (2006) used the heuristic sample interpretation to confirm the coverage of prediction intervals based on random forests. We call this interpretation heuristic; as we clearly stated, this view is not incorrect, since correctly specified intervals should of course attain the coverage over a whole sample with different x. Yet we also gave an example of how looking only at the sample coverage can be misleading, as intervals can cover (1 − α) · 100% of a new sample while being completely out of range for many specific x_new.
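The distinction between the two interpretations can be sketched in a small simulation: a single global interval is tuned to hold the nominal (1 − α) coverage over the whole sample, yet fails badly for specific x where the noise level is highest. The data-generating process is invented for illustration.

```python
import random

random.seed(0)
ALPHA = 0.10
N = 20000

# Heteroscedastic data: the noise standard deviation grows with x.
xs = [random.random() for _ in range(N)]
ys = [random.gauss(0.0, 0.1 + x) for x in xs]

# One global interval from the empirical alpha/2 and 1-alpha/2 quantiles of y:
# by construction it covers about (1 - alpha) of the WHOLE sample.
sorted_ys = sorted(ys)
lo = sorted_ys[int(N * ALPHA / 2)]
hi = sorted_ys[int(N * (1 - ALPHA / 2))]

def coverage(points):
    """Fraction of points whose y falls inside the global interval [lo, hi]."""
    return sum(lo <= y <= hi for _, y in points) / len(points)

sample = list(zip(xs, ys))
high_x = [(x, y) for x, y in sample if x > 0.9]   # conditional slice: noisiest x

print(f"marginal coverage:      {coverage(sample):.3f}")
print(f"coverage given x > 0.9: {coverage(high_x):.3f}")
```

The marginal (sample) coverage sits at the nominal level while the conditional coverage in the noisy region falls well below it, which is exactly the failure mode the heuristic sample interpretation cannot detect.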