Doudtful
made a comment:

Thank you for pointing me toward the blog. Unfortunately, I'm not a blogger, but I am interested in forecasting. Is there a particular scholarly journal, book (other than Tetlock's), or other source you'd recommend that would be helpful to a graduate student interested in tackling forecasting for a grad thesis? For that matter, is there a specific direction in the field you find intriguing for a thesis topic? :)

Inactive-43
made a comment:

The research papers are scattered across many journals. A Google Scholar search on the keyword IARPA should bring up hits, as IARPA is credited in the acknowledgements at the end of each publication. Some of these are free to the public; others you have to purchase. Or start with the names of known researchers listed in Tetlock's book, and then with each paper you pull up you'll get more names. Also, even if you aren't a college student, the reference library of just about any publicly supported college or university can help. Ask the reference librarian for assistance.

Doudtful
made a comment:

thank you : )

Inactive-43
made a comment:

@Doudtful Please keep up the good work with your research and analyses. If perchance you were to lose big time on your first few forecasts, don't get discouraged. I was an official superforecaster in the Good Judgment Project, and I've made plenty of bad calls. What counts is what you do, on average, over the long run.

Doudtful
made a comment:

Q: What's the probability of relatively outperforming seasoned superforecasters over the next 3 months, even with much-needed and appreciated encouragement?
A: I'm nowhere near a superforecaster, but even I know it's not good; nonetheless, I'll try my best : )

Inactive-43
made a comment:

@Doudtful Guess what: given how open this forecasting competition is, many of the newcomers are beating the pants off of many superforecasters, including me. The leaderboards aren't terribly good indicators because too much weight is on luck. But one of my fellow supers, @Heffalump, and I surveyed all players in the game, and found several newbies who have been scored on 20 or more questions, whose Brier score divided by the median Brier score is less than 0.8, and who have good "accuracy scores" for all challenges. We found only 21 people that good out of some 18,000 players. These 21 are, by our measures of success, better than 99.9% of all players. All we superforecasters had to do was merely be in the top 2%. Yet several of today's super-superforecasters joined way late and nevertheless caught up with and surpassed superforecasters like me.
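
In code, the screen we ran looks roughly like this. It's a minimal Python sketch: the player records, field names, and numbers below are invented for illustration (and it omits our third criterion, the per-challenge "accuracy scores").

```python
# Minimal sketch of the screen: 20+ scored questions, and a Brier score
# below 0.8x the median Brier. Field names and data are hypothetical.

def passes_screen(questions_scored, brier, median_brier,
                  min_questions=20, ratio_cutoff=0.8):
    """True if a player meets both the volume and relative-Brier criteria."""
    if questions_scored < min_questions:
        return False
    return brier / median_brier < ratio_cutoff

players = [
    {"user": 10192, "questions_scored": 35, "brier": 0.28, "median_brier": 0.40},
    {"user": 17875, "questions_scored": 12, "brier": 0.25, "median_brier": 0.40},
]

top = [p for p in players if passes_screen(p["questions_scored"],
                                           p["brier"], p["median_brier"])]
print([p["user"] for p in top])  # only 10192 passes both screens
```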

Here are the best of the latecomers, meaning those with user #s over 10,000: user #10192 @Counterintelligence, user #11157 @STLDCA, user #11678 @christianmaas, and user #12302 Sjens.

You can generally tell who the old superforecasters are by the user #s. If you look at the web link above, yours, for example, is https://www.gjopen.com/memberships/17875/scores, so you are #17875.

Most of the first 200 are supers, but not all, and some supers have higher user numbers.

That said, people whom I know to be superforecasters, and who are also members of this top one-tenth of one percent of winners, are @GeneH, @Gonzo, @morrell, and @NickLutz.

The bottom line is that some people who joined late, and therefore are likely to be novice forecasters, nevertheless have done spectacularly well. I hope to see more of your research and analyses, and I hope you won't let temporary setbacks discourage you.

GeneH
made a comment:

@cmeinel thank you for that analysis. I had NO IDEA I was doing that well. Felt like I was very average, if that.

Doudtful
made a comment:

I thought about your comments, specifically the comparison of new-entrant forecasters to experienced forecasters, so if I'm wrong I accept full responsibility. I think the answer lies in evaluating forecasts in an environment of higher uncertainty.

In your mind, does this question capture that? Given a 1% change in the median Brier score from .5, what % increase in a forecaster's Brier score is accounted for by the change in the median? I think forecasting in an environment with a median Brier around .5 is ideal. Forecasting in an environment with a median Brier near 0, or beyond 1, simply signals a problem with the forecasting structure. The best mix of beta and alpha is at about a .5 median Brier, in my humble opinion. Beyond a .75 median Brier, forecasting is simply manipulating the system to generate beta (which I guess is a skill). But a .25 median Brier environment is simply manipulating the system to generate alpha (again a skill, but not a forecasting skill). I'd prefer to have skills that slow the rate of increase in Brier given the rate of increase in median Brier at the .5 level, rather than guessing the implied volatility on a binomial option. The former is a difficult but very useful skill; the latter is a survey question.
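
To sketch what I mean, here is one way to estimate that sensitivity, assuming you have paired (median Brier, forecaster Brier) observations across challenges. All the numbers are made up; the slope of the log-log fit is the elasticity, i.e., the % change in a forecaster's Brier per 1% change in the median.

```python
# Sketch: estimate the elasticity of one forecaster's Brier score with
# respect to the median Brier of the environment. Data are hypothetical.
import numpy as np

median_brier = np.array([0.30, 0.35, 0.40, 0.45, 0.50])       # environment
forecaster_brier = np.array([0.22, 0.25, 0.30, 0.33, 0.36])   # one player

# Fit ln(forecaster) = a + b*ln(median); the slope b is the elasticity.
b, a = np.polyfit(np.log(median_brier), np.log(forecaster_brier), 1)
print(f"elasticity ~ {b:.2f}")  # b near 1 means the score tracks the environment
```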

Given the open GJ environment, what changes would one anticipate? Say the prior superforecasters were forecasting in a .2 median Brier environment and now it's about a .35 median Brier (i.e., IFP probabilities being .10 worse). How much more difficult is forecasting in an environment with a .35 median Brier than in one with a .2 median Brier? Similarly, are the skills of forecasting in a .25 median Brier environment the same as in a .5 median Brier environment? I think the open-environment format may be informative in evaluating a DIFFERENT, not better, type of forecasting skill, because it allows forecasters to self-select into IFPs with greater perceived uncertainty. Making accurate forecasts sooner on what looks like a 50/50 from a long way away (the foxy skill of a day trader, generating alpha) may not be the same as making accurate forecasts under greater uncertainty only 24 hours away (the hedgehogy skill of a portfolio manager, generating beta). As the open GJ environment gets closer to a .5 median Brier score, the opportunity to manage uncertainty across multiple IFPs in a cluster (i.e., envisioning coherence and the like, or portfolio management with derivatives, or however you think of it) will be more like actual decision-making, requiring both fox- and hedgehog-style skills.

I wonder whether the vagueness of the open questions doesn't cancel itself out. Yes, it contributes in some way to the uncertainty, but that may be an inherent part of refining and getting to the essence of something uncertain. In this way, knowing the exact (perfect) question to ask in a .25 median Brier environment may be a stricture on asking a good (imperfect) question that captures a particularly uncertain event in a .5 median Brier environment. (I concede there are simply cases of bad questions in either.)

What are top forecasters' beta and alpha to variations in uncertain environments? I think the minimum meaningful metric given this question is Brier/median * ln(median). In this way, I think superforecasters will do extremely well in clustering at the top by creating alpha, with the exception of those who self-select away from boring questions and skew toward creating beta. If we get the median Brier up from a .35 to a .5 environment, then I think we'd have a real contest on our hands between two adversarial skill sets.
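
To make the metric concrete, here it is computed exactly as written, on hypothetical inputs. Note that ln(median) is negative for medians below 1, so how you rank players by this quantity depends on the sign convention you adopt; this just evaluates the expression.

```python
# Sketch of the proposed metric: (Brier / median Brier) * ln(median Brier).
# Inputs are hypothetical; ln(median) is negative for medians below 1.
import math

def uncertainty_weighted_score(brier, median_brier):
    return (brier / median_brier) * math.log(median_brier)

# Same relative Brier (0.56) in two environments of different difficulty:
print(uncertainty_weighted_score(0.28, 0.50))  # ~ -0.39 (median .5 environment)
print(uncertainty_weighted_score(0.14, 0.25))  # ~ -0.78 (median .25 environment)
```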

Inactive-43
made a comment:

@Doudtful I love your analysis. Would you be willing to delve into this further with my collaborators and me offline? Please contact me at [email protected]

Doudtful
made a comment:

happy to : )
