ProgressVote: Discussion of scientific integrity

There are several counter-arguments against society participating in measuring its own progress. The following lists some of these concerns and how ProgressVote would address them.

People might just vote anything
Argument: Users may just vote randomly

Ideas:
 * The website could control for random clicking, for example by not counting users who moved through the site very quickly (i.e. just clicked through) or who did not look at all visualisations (e.g. users who are required to view several series and fail to do so). Such random votes could be filtered out automatically.
 * Users could be offered the choice to publish their vote on social networks. This would incentivise rationality: people invest in building their networks and do not want to publish that their opinion about social progress is just “something”.
 * Viewing expert statements rationalises the voting.
 * Asking whether the user agrees with some expert's statement, rather than just asking for a vote, may encourage less random responses. Who would want to randomly align with a statement they have not read?
 * Those who work through data visualisations of time series on social progress might care enough to make a conscious choice.
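The first idea above, filtering out click-through votes, can be sketched in code. A minimal illustration in Python; the session fields (`seconds_on_site`, `series_viewed`, `series_required`) and the 60-second threshold are hypothetical assumptions, not part of the proposal:

```python
from dataclasses import dataclass

# Hypothetical session record; the field names are illustrative assumptions.
@dataclass
class VoteSession:
    seconds_on_site: float
    series_viewed: int      # number of data visualisations the user opened
    series_required: int    # number the user was required to view

def is_plausible_vote(s: VoteSession, min_seconds: float = 60.0) -> bool:
    """Filter out likely click-through votes: too fast on the site,
    or the user skipped required visualisations."""
    return (s.seconds_on_site >= min_seconds
            and s.series_viewed >= s.series_required)

sessions = [
    VoteSession(seconds_on_site=5, series_viewed=1, series_required=4),
    VoteSession(seconds_on_site=180, series_viewed=4, series_required=4),
]
valid = [s for s in sessions if is_plausible_vote(s)]
```

Only the second (careful) session survives the filter; the thresholds would of course have to be calibrated against real usage data.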

People would protest-vote
Hard version of the argument: People who generally dislike the government would always vote negatively, and people who are loyal to the government would always vote positively; voting would follow ideological conviction rather than the data presented.

Ideas:
 * This problem alone would only have level effects on the index and would not affect rates of change (this is the usual Solow-model-type argument about the savings rate s: it shifts the level of the steady state but not the long-run growth rate).
 * The experts would be asked to keep their evaluation statements close to the data and not provide any general "bad/good government"-type statements. Then users would have a harder time just making a partisan selection.
 * One could also provide a space on the website where users could leave their "general view about the government / social progress irrespective of the shown data" and direct them to leave all "general views" there, keeping their voting restricted to the data considered. This extra space would purge the index-creating questions (1 & 2) of protest votes: people get another way to get the anger off their chest without distorting the index. The idea is that even an ideological user may become able to vote on just the data if offered an outlet for general convictions.
 * Some people would still always vote negatively or positively, but again: these would be constants, and the interesting thing about an index is its changes, whatever its level, and those changes would not be affected by people who always vote ideologically.
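The level-versus-changes argument can be illustrated numerically: adding a constant block of ideological votes shifts the index level but leaves every quarter-on-quarter change untouched. A toy Python sketch with made-up numbers:

```python
# Toy illustration: a fixed block of always-negative (or always-positive)
# voters shifts the index level by a constant but leaves its changes intact.
true_scores = [0.50, 0.55, 0.53, 0.60]   # hypothetical quarterly index values
ideological_bias = -0.10                  # constant protest-vote effect
observed = [s + ideological_bias for s in true_scores]

true_changes = [b - a for a, b in zip(true_scores, true_scores[1:])]
observed_changes = [b - a for a, b in zip(observed, observed[1:])]

# The constant cancels out of every first difference.
assert all(abs(t - o) < 1e-9 for t, o in zip(true_changes, observed_changes))
```

The same cancellation holds for any constant bias, which is exactly why persistent ideological voters leave the index's rates of change intact.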

Soft version of argument: The indicator would alter according to changes in popular mood about the government, independent of changes in the data.

Ideas:
 * True, but voting on data should still be comparatively rational, measured against any other means by which citizens can today express their view on social progress. Just consider the mood swings characterising today's surveys. Given the more informed setting of a panel of data visualisations, voting out of a mood should be much less pronounced in the ProgressVote setting.

People just vote according to the indicator with the most prominent visualisation
Ideas:
 * We certainly need to think carefully about the visualisations used and the framing effects they cause. This is not a problem of the app in general; it is a problem that could occur for particular forms of visualisation. Visualisation is simply a key job, and we need to do it well.
 * One aspect of this problem is recall bias: the user might not recall all the visualisations they saw but will certainly remember the last one, which might affect their vote disproportionately. They would then not vote on "social progress" altogether, but only on the aspect of social progress displayed by the last data series. This can be resolved by programming ProgressVote so that the order in which users see the different visualisations is randomised. The bias, if it exists, then disappears as the sample grows, simply because different users would have different biases that cancel each other out.
 * The same removal of recall bias applies to the choice of experts. The order in which expert statements are presented would hence also be randomised.
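Randomising the presentation order, as described above, is straightforward. A minimal Python sketch; the per-user seed is an assumption added so that the order stays stable if the same user reloads the page:

```python
import random

def presentation_order(items, user_seed):
    """Return a per-user random order of visualisations (or expert
    statements), so that last-seen effects average out across users."""
    rng = random.Random(user_seed)  # per-user seed: stable order on reload
    order = list(items)             # copy, so the catalogue is not mutated
    rng.shuffle(order)
    return order

series = ["gdp", "health", "education", "environment"]
order_user_a = presentation_order(series, user_seed=17)
order_user_b = presentation_order(series, user_seed=42)
```

Every user sees the same set of series, but in an order that differs across users, which is what makes the individual recall biases cancel in a large sample.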

People mix up stocks and flows
Ideas:
 * In line with Stiglitz-Sen-Fitoussi, we should generally focus on stock variables (GDP is a good flow; the trick is complementing it with a non-arbitrary composite of stocks). Users would hence not face the wild mix of stocks and flows used in some indices and should not be confused.

People do not understand composite indicators and fail to separate inputs from outputs
Ideas:
 * Easier said than done, but let us try hard to assemble data for output indicators only. Then this problem does not arise.

People might vote several times
Ideas:
 * A cookie could limit each browser to one vote per quarter.
 * Require users to sign in with their FB account and allow votes only from users with at least n friends. Since making friends on FB is costly due to the time confirmation takes, people would not create a fake FB account just to gain more voting power in the index.
 * The more professional measures discussed for Adhocracy could also be applied.
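The cookie idea amounts to a per-user, per-quarter rate limit. A minimal Python sketch; the `VoteLimiter` class and its in-memory set are illustrative assumptions (a real deployment would persist this server-side, keyed by cookie or account id):

```python
from datetime import date

def quarter_key(d: date) -> str:
    """Label a date with its calendar quarter, e.g. 2024-03-30 -> '2024-Q1'."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

class VoteLimiter:
    """Accept at most one vote per user id (cookie, account, ...) per quarter."""
    def __init__(self):
        self._seen: set[tuple[str, str]] = set()

    def try_vote(self, user_id: str, today: date) -> bool:
        key = (user_id, quarter_key(today))
        if key in self._seen:
            return False        # already voted this quarter
        self._seen.add(key)
        return True
```

A user who votes in February is rejected again in March but accepted in April, when the new quarter begins.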

Unrepresentative voter → unrepresentative index
Argument: The voters might all be geeks, so the index would not capture a representative image of the public's evaluation of social progress.

Ideas:
 * How geeky the user base is can be tested. The potential extra question about the general evaluation of the government (see "People would protest-vote" above), which we might include as an outlet to purge the index questions of ideology, coincides with standard opinion polls. The average satisfaction with the government amongst ProgressVote users (as inferred from that third question) can hence be compared with the results of standard surveys, testing for significant divergence. If ProgressVote users coincide with the public in their average opinion of the government, then the index constructed from their votes on the real questions should also be considered representative.
 * Is this “Lucas-proof”? The suggested strategy for testing representativeness would fail if users foresaw that their index loses value once it is deemed non-representative, and hence tried to imitate the mainstream in their response to the general happy-with-government question. But isn't this concern far-fetched? A convinced supporter of the opposition would not vote pro-government just to fake the index; particularly in a large sample, the emotional loss from falsely voting against or for a government one does or does not support should outweigh any expected benefit from misreporting, making truthful preference revelation incentive-compatible.
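The proposed representativeness check, comparing the share of government-approving ProgressVote users with a standard opinion poll, could be run as a two-proportion z-test. A sketch in Python using only the standard library; the function name and the sample figures are illustrative assumptions:

```python
from math import erf, sqrt

def two_proportion_z_test(k1: int, n1: int, k2: int, n2: int):
    """Two-sided z-test for equality of two proportions, e.g. the share of
    government-approving ProgressVote users (k1 of n1) versus the share in
    a standard opinion poll (k2 of n2)."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # pooled standard error
    z = (p1 - p2) / se
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical figures: 600 of 1000 app users approve vs. 400 of 1000 polled.
z, p_value = two_proportion_z_test(600, 1000, 400, 1000)
```

A small p-value (say, below 0.05) would flag the ProgressVote user base as diverging from the polled public on this dimension, and hence as potentially unrepresentative.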

General Behavioural Economics concerns
Argument: Various behavioural objections could be raised as to why public voting may not work.

Idea: We would seek advice from the Max Planck Research School on Adapting Behaviour, a leading institute for Behavioural Economics, to design the app as well as possible.
