
Official April 2007 NPD Prediction Thread

donny2112

Member
Deku said:
Is Pachter ranked?

He was #25 in April, so he has some points. If he keeps predicting, we'll keep adding him to the prediction list. :)

starship said:
I am at #13 by units and LOCK is four spots behind me but in the points system ranking he is way higher than me (I'm at #16 and he is at #4). What causes such a big difference in the rankings?

JoshuaJSlone explained the concept, so here's the actual data.

Jan:
starship - #24 - 421K - 82 points
LOCK - #18 - 386K - 110 points

Feb:
starship - #11 - 317K - 134 points
LOCK - #18 - 338K - 106 points

Mar:
starship - #129 - 472K - 0 points
LOCK - #5 - 186K - 163 points

Sub-total: (through March)
starship - 1,210K - 216 points
LOCK - 910K - 379 points

Apr:
starship - #75 - 307K - 0 points
LOCK - #157 - 838K - 0 points (He predicted 1 million for the DS. ;) )


Essentially, if you aren't in the top 50 for a month, it doesn't matter how badly you do when it comes to the points.

Edit:
starship said:
1. Constant difference between spots at any level.

That comes back to November and December being way skewed when it comes to how far out the top 10+ could go.

starship said:
2. The last person gets the lowest points, for example zero, and #1 gets the highest points, for example 100. I think if, for example, 130 users participate in the prediction, it's not fair that from #130 to #50 all get zero points.

If 30 people participate (in May 2006, we had 17 predictions :eek: ), #15 would get 50 points. If 200 people participate, #15 would get 92.5 points.

I think a points system is the fairest way to allow latecomers to still have some fun, but it doesn't have to be this points system. Maybe extend it out so that the top 75/100 users get points? What about months with fewer than 75/100 predictors? The lowest number we've had in a month this year is 65, so using a top 50 hasn't run into that issue, yet.

I'm open to other ideas, though.
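The "#15 of 30 gets 50 points, #15 of 200 gets 92.5" examples above work out to a simple share-of-predictors-beaten formula. A minimal sketch (`points_by_share` is my name for it, not anything from the thread):

```python
def points_by_share(rank, num_predictors):
    """Score a monthly rank as the percentage of predictors you beat.

    With 30 predictors, #15 beats 15 of 30 -> 50.0 points;
    with 200 predictors, #15 beats 185 of 200 -> 92.5 points.
    """
    return 100.0 * (num_predictors - rank) / num_predictors
```

This is the scaling starship argues for: the same rank is worth more in a bigger field.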
 

starship

psycho_snake's and The Black Brad Pitt's B*TCH
donny2112 said:
That comes back to November and December being way skewed when it comes to how far out the top 10+ could go.
I don't get your point here. :) I just said that instead of using this:

#2-#10 get 5 points less per position, so #2 gets 178, #3 gets 173, etc.
#11-#25 get 4 points less per position, so #11 gets 134, #12 gets 130, etc.
#26-#50 get 3 points less per position, so #26 gets 75, #27 gets 72, ..., #50 gets 3.

From the last position to the first, each user gets [insert a number here, for example 5] more points per position.

If 30 people participate (in May 2006, we had 17 predictions :eek: ), #15 would get 50 points. If 200 people participate, #15 would get 92.5 points.
But being #15 amongst 200 people is way better than being #15 amongst 30 people, so you get more points because you deserve it. This way you get what you deserve relative to all participants.
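Taken literally, the tier table quoted above amounts to the following (a sketch; `nascar_points` is a name I made up, and the 183 for #1 comes from donny2112's later "183 + 15 point bonus" remark, i.e. one more 5-point step above #2):

```python
def nascar_points(rank):
    """Tiered points as quoted: #2 starts at 178 and drops 5 per
    position through #10, then 4 per position for #11-#25, then
    3 per position for #26-#50 (so #50 gets 3). Ranks past 50
    score nothing; #1 sits one 5-point step above #2."""
    if rank == 1:
        return 183                       # 178 + one more 5-point step
    if 2 <= rank <= 10:
        return 178 - 5 * (rank - 2)      # #2 -> 178, #3 -> 173, ...
    if 11 <= rank <= 25:
        return 134 - 4 * (rank - 11)     # #11 -> 134, #12 -> 130, ...
    if 26 <= rank <= 50:
        return 75 - 3 * (rank - 26)      # #26 -> 75, ..., #50 -> 3
    return 0
```

starship's alternative would replace the three tiers with one constant per-position step.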
 

donny2112

Member
starship said:
I don't get your point here. :)

I misunderstood you. :)

starship said:
From the last one to the first one, any user get [insert a number here, for example 5] more per position.

The NASCAR-like system puts more emphasis on the higher rankings. Finishing 3rd instead of 4th is a bigger bonus than finishing 42nd instead of 43rd. It also keeps the points of the #1 position from getting very big, very fast. A top 50 with a 5-point separation puts #1 at 250 + 15-point bonus (I like the idea of finishing first getting an extra bonus) instead of 183 + 15-point bonus.

I'm fine with either way, though. Anyone else have a preference?

starship said:
But being #15 amongst 200 people is way better than being #15 amongst 30 people, so you get more points because you deserve it. This way you get what you deserve relative to all participants.

One of my goals here is to make every month worth the same. That's why I didn't really like the idea of using total sales for overall rankings. I can understand what you're saying with "more participants" = "bigger win." However, I prefer to keep things consistent from month-to-month.
 

Zynx

Member
One suggestion - have everyone posting in the next prediction thread post in the same format, so that a parser can interpret what people have predicted, rather than requiring someone to hand-copy-paste into a spreadsheet.


One thing I don't like about ranking by closest to the actual value is that it encourages a sneaky person to 'clip' another's prediction by predicting right next to it.

Let's say currently there's predictions of 100k, 120k, 130k, 135k, and I was thinking of predicting 108k. But under this system I would actually "predict" 101k, to predict as near to another user's prediction as I can, because if the true value was 108k, I'd still get maximum points (being closest). But I'd also get maximum points if it was 101k.

I think giving credit based on absolute difference from the actual amount, or % difference, encourages a person to forgo considering how others have predicted when making their prediction.
 

starship

psycho_snake's and The Black Brad Pitt's B*TCH
donny2112 said:
One of my goals here is to make every month worth the same. That's why I didn't really like the idea of using total sales for overall rankings. I can understand what you're saying with "more participants" = "bigger win." However, I prefer to keep things consistent from month-to-month.
The problem is, I don't think every month is worth the same.
The chance that you rank higher is much greater when there are far fewer participants.
Let's say only 20 people participate in a month's prediction and I'm one of them. I'll rank #20 even if I'm off by 20 gazillion units. Now do you think I deserve 98 points in your NASCAR-ish system?
 

donny2112

Member
Zynx said:
One suggestion - have everyone posting in the next prediction thread post in the same format, so that a parser can interpret what people have predicted, rather than requiring someone to hand-copy-paste into a spreadsheet.

I haven't thought of a good way to implement that, so it's hand entered into an Excel spreadsheet, for now. Any practical suggestions on that?

Zynx said:
One thing I don't like about ranking by closest to the actual value is that it encourages a sneaky person to 'clip' another's prediction by predicting right next to it.

The problem is picking the right prediction to 'clip.' ;)

Zynx said:
I think giving credit based on absolute difference from the actual amount, or % difference, encourages a person to forgo considering how others have predicted when making their prediction.

The total is based on the sum of absolute differences for each of the seven main consoles.

starship said:
Let's say only 20 people participate in a month prediction and I'm one of them. I'll rank #20 even if I'm off by 20 gazillion units. Now do you think I deserve 98 points in your Nascar-ish system?

If only 20 people participate in a month, I would be so happy since I do the spreadsheet by hand. :D :lol

You're correct in that from a relative sense, #20 wasn't "worthy" of 98 points. However, the months this year have gone 65 -> 97 -> 171 -> 160. The chances of it going below 50 probably aren't that high.

We could limit it to the minimum(50,0.5*# of predictors) that get points. In the theoretical 20 month, that means only the top 10 would get points. In January, only the top 33 would get points. They'd still get full points, though.

Personally, I'm more toward just keeping to a top 50, and if there's less than 50 predictors that month, BONUS! :D
 

starship

psycho_snake's and The Black Brad Pitt's B*TCH
donny2112 said:
If only 20 people participate in a month, I would be so happy since I do the spreadsheet by hand. :D :lol

You're correct in that from a relative sense, #20 wasn't "worthy" of 98 points. However, the months this year have gone 65 -> 97 -> 171 -> 160. The chances of it going below 50 probably aren't that high.
Yes, I know that the chances of it going below 50 are low, but it doesn't matter. In January, being in the top 50 wasn't difficult at all because only 15 people were outside of the top 50, so you would be in the top 50 even with a modest prediction. On the other hand, in March there were 121 people outside of the top 50 and the competition was much stronger, so you needed a much better prediction to put you in the top 50. For example, being #40 amongst 65 people (in Jan.) is not the same as being #40 amongst 171 people (in March), while in your method both would get the same points.

We could limit it to the minimum(50,0.5*# of predictors) that get points. In the theoretical 20 month, that means only the top 10 would get points. In January, only the top 33 would get points. They'd still get full points, though.
I like this better. :)
 

donny2112

Member
starship said:
donny2112 said:
We could limit it to the minimum(50,0.5*# of predictors) that get points. In the theoretical 20 month, that means only the top 10 would get points. In January, only the top 33 would get points. They'd still get full points, though.
I like this better. :)

OK. How's this?

1. 25 point bonus for finishing first.
2. #1 gets 150 points, so 175 with the bonus.
3. #2-#75 drop in 2 point increments, so #2 gets 148 and #75 gets 2. The exception to that is if there are < 150 predictors, in which case only the top half (rounded up) will get points.

Examples:

January had 65 predictors, so only the top 33 (CEIL(65/2)) get points.
#1 gets 175 points.
#33 gets 86 points.
#34 and higher get nothing.

April had 160 predictors, so a full 75 get points.
#1 gets 175 points.
#33 gets 86 points.
#75 gets 2 points.
#76 and above get nothing.


Comments? Is this one fine with most everyone?
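The three rules above can be sketched as one function (a sketch of the proposal as written; `proposed_points` is my name, not the thread's):

```python
import math

def proposed_points(rank, num_predictors):
    """The proposal above: #1 gets 150 base points plus a 25-point
    bonus, each later place drops 2 points, and only the top
    min(75, ceil(num_predictors / 2)) places score at all."""
    cutoff = min(75, math.ceil(num_predictors / 2))
    if rank > cutoff:
        return 0
    points = 150 - 2 * (rank - 1)   # #1 -> 150, #2 -> 148, ..., #75 -> 2
    if rank == 1:
        points += 25                # first-place bonus
    return points
```

With January's 65 predictors the cutoff is 33, so #33 gets 86 and #34 gets nothing; with April's 160 predictors the full top 75 score, down to 2 points for #75.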
 

apujanata

Member
donny2112 said:
OK. How's this?

1. 25 point bonus for finishing first.
2. #1 gets 150 points, so 175 with the bonus.
3. #2-#75 drop in 2 point increments, so #2 gets 148 and #75 gets 2. The exception to that is if there are < 150 predictors, in which case only the top half (rounded up) will get points.

Examples:

January had 65 predictors, so only the top 33 (CEIL(65/2)) get points.
#1 gets 175 points.
#33 gets 86 points.
#34 and higher get nothing.

April had 160 predictors, so a full 75 get points.
#1 gets 175 points.
#33 gets 86 points.
#75 gets 2 points.
#76 and above get nothing.


Comments? Is this one fine with most everyone?

The 25 bonus is too big (percentage-wise, it is 10%+).

I prefer to keep the competition heated, and a very big bonus for #1 does not help achieve that.
Let's change the 25 bonus to 5 or 10 (max).

Your calculation will be very difficult, donny2112, since you need to calculate a variable value for every position for every month. Are you sure it is not too much trouble to calculate?

Your new calculation means there is no longer a bonus for getting very close to #1, which I understand is the main reason for the NASCAR point values you propose.

How about these general rules:
If there are fewer than 100 predictors,
round down the # of predictors to the nearest ten (to make it easier to calculate the points)
50% of predictors get points
50% get 0 points

If there are 68 predictors, it is rounded down to 60. If there are 72 predictors, it is rounded down to 70.

If there are more than 100 predictors, only the top 50 predictors will get points.

Of the predictors that get points,
#1 gets 5 bonus points
20% share 45 points (from 185 to 140)
30% share 60 points (from 140 to 80)
50% share 75 points (from 80 to 5)

April case: 160 predictors. Since the # of predictors is 100+, only 50 predictors will get points.
#1 gets a 5-point bonus.
20% of 50 is 10, so #1 to #10 will share 45 points between them (5 points per position).
#1 gets 190 (185+5), #2 gets 180, #3 175, #4 170, #5 165, #6 160, #7 155, #8 150, #9 145, #10 140.
30% of 50 is 15, so #11 to #25 will share 60 points between them (4 points per position). #11 gets 136, #12 gets 132, #13 gets 128, ..., #25 gets 80 points.
50% of 50 is 25, so #26 to #50 will share 75 points between them (3 points per position). #26 is 77, #27 is 74, ..., #50 is 5.

January case: 65 predictors, rounded down to 60. Only the top 30 predictors will get points.
20% of 30 is 6, so #1 to #6 will share 45 points (7.5 points per position). #1 gets 190 (185+5), #2 gets 177.5 (rounded to 178), #3 gets 170, #4 gets 162.5 (rounded to 163), #5 gets 155, #6 gets 147.5 (rounded to 148).
30% of 30 is 9, so #7 to #15 will share 60 points (6.67 points per position). #7 gets 140 - 6.67 (rounded to 133), etc.
50% of 30 is 15, so #16 to #30 will share 75 points (5 points per position).
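The two worked examples above don't apply quite the same step rule, so the following sketch takes one consistent reading of the proposal: #1 is worth 185 (plus the 5-point bonus), the 20% boundary is worth 140, the 50% boundary 80, and the last scorer 5, with linear interpolation in between. That reading reproduces the April example exactly; `tiered_points` is a name I made up, and very small fields (fewer than ~10 scorers) aren't handled.

```python
def tiered_points(rank, num_predictors):
    """apujanata's tier proposal, anchor reading: with >100 predictors
    only the top 50 score; otherwise half of the predictor count
    (rounded down to tens) scores. Points interpolate linearly
    185 -> 140 over the top 20%, 140 -> 80 over the next 30%,
    and 80 -> 5 over the bottom 50%; #1 adds a 5-point bonus."""
    scorers = 50 if num_predictors > 100 else (num_predictors // 10 * 10) // 2
    if rank > scorers:
        return 0.0
    b1 = round(scorers * 0.2)            # last rank of the 20% tier
    b2 = round(scorers * 0.5)            # last rank of the 30% tier
    if rank <= b1:
        pts = 185 - (185 - 140) * (rank - 1) / max(b1 - 1, 1)
    elif rank <= b2:
        pts = 140 - (140 - 80) * (rank - b1) / (b2 - b1)
    else:
        pts = 80 - (80 - 5) * (rank - b2) / (scorers - b2)
    if rank == 1:
        pts += 5                         # first-place bonus
    return pts
```

For April (160 predictors) this gives 190, 180, ..., 140 for #1-#10, then 136 down to 80 for #11-#25, then 77 down to 5 for #26-#50, matching the example.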
 

apujanata

Member
donny2112 said:
Okay, 10 bonus. Any other suggestions or comments?



May NPD runs from May 6 - June 2. NPD should be released Thursday, June 14.

Please see my revised post above.
You can change my +5 points suggestion to +10 points if you want to.
 
sonycowboy

Member
donny2112 said:
OK. How's this?

1. 25 point bonus for finishing first.
2. #1 gets 150 points, so 175 with the bonus.
3. #2-#75 drop in 2 point increments, so #2 gets 148 and #75 gets 2. The exception to that is if there are < 150 predictors, in which case only the top half (rounded up) will get points.

Examples:

January had 65 predictors, so only the top 33 (CEIL(65/2)) get points.
#1 gets 175 points.
#33 gets 86 points.
#34 and higher get nothing.

April had 160 predictors, so a full 75 get points.
#1 gets 175 points.
#33 gets 86 points.
#75 gets 2 points.
#76 and above get nothing.


Comments? Is this one fine with most everyone?

I think you shouldn't be ranking people against each other based on their position relative to each other. It should be based on how good their predictions are relative to the actuals.

Ex:
Person A was #1 for a given month but was off by 200k units, and was 10th another month, off by 200k units.

Person B was off by 250k units, good for 14th, and then only off by 30k units, good for 2nd.

IMO, person B should be ranked higher, not lower.

If you remember, back in the day, I had a little scoring mechanism for a single month and it was based on the relationship to the actual.

Some of the considerations I think should be present:

1) A person gets more points for being close to the actual on a percentage basis per platform, so larger-selling platforms don't have a monopolistic effect on the totals; otherwise, Wii + DS could account for half of all points in a given month.

2) Each platform could have a weight, likely relative to its importance, or a weighting based on categorization. What is most important? Next-gen, handhelds, or last gen? Personally, I'd give all systems the same weighting except for the GameCube, but I'd be willing to give some leeway here.

3) The ordinal position of the platforms should matter. If person A predicts all HW positions correctly but is off by a slightly larger percentage than person B, who got two HW platforms out of position, person A should get a better score.

4) Getting an exact hit on a specific platform (within 2% or so) should give a bonus for excellence.

5) Predictions might need to be kept hidden so we don't have jokers who just add 1 to all of my predictions ;). I could accept them via PM and take myself out of the prediction thread (or PM donny with my predictions before I take any predictions in, so I don't change them afterwards). We could close the predictions a day early to allow folks to see what's out there.
 

donny2112

Member
apujanata said:
Your calculation will be very difficult, donny2112, since you need to calculate variable value for every position for every month. Are you sure it is not too much trouble to calculate it ?

The only way a position's point value would vary month-to-month is if there were fewer than 2*position predictors. e.g. #30 will always get 92 points unless there are 58 or fewer predictors in a month, in which case they would get 0 points.

Also, I'm putting this all in a script, so that it is automatic. :)

apujanata said:
Your new calculation means there are no longer bonus for getting very close to #1, which I understand is the main reason for the NASCAR point value you propose.

I liked that about it, but I don't have a problem with a constant reduction. If we wanted to keep a reward for finishing near the top, maybe a stacked bonus system for #1-#5? #1 - 25 bonus, #2 - 15 bonus, #3 - 10 bonus, #4 - 7 bonus, #5 - 3 bonus, and otherwise a constant reduction?

apujanata said:
How about this general rules :

Thanks for working out those examples ( :) ), but that goes back to the idea that I would like each month to be basically equal to the one before it. e.g. I'd like #5 to be worth the same in January, February, August, and December, regardless of the number of predictors.


Thanks for the suggestions so far, everyone! :)
 

apujanata

Member
sonycowboy said:
I think you shouldn't be ranking people against each other based on their position relative to each other. It should be based on how good their predictions are relative to the actuals.

Ex:
Person A was #1 for a given month but was off by 200k units, and was 10th another month, off by 200k units.

Person B was off by 250k units, good for 14th, and then only off by 30k units, good for 2nd.

IMO, person B should be ranked higher, not lower.

If you remember, back in the day, I had a little scoring mechanism for a single month and it was based on the relationship to the actual.

Some of the considerations I think should be present:

1) A person gets more points for being close to the actual on a percentage basis per platform, so larger-selling platforms don't have a monopolistic effect on the totals; otherwise, Wii + DS could account for half of all points in a given month.

2) Each platform could have a weight, likely relative to its importance, or a weighting based on categorization. What is most important? Next-gen, handhelds, or last gen? Personally, I'd give all systems the same weighting except for the GameCube, but I'd be willing to give some leeway here.

3) The ordinal position of the platforms should matter. If person A predicts all HW positions correctly but is off by a slightly larger percentage than person B, who got two HW platforms out of position, person A should get a better score.

4) Getting an exact hit on a specific platform (within 2% or so) should give a bonus for excellence.

5) Predictions might need to be kept hidden so we don't have jokers who just add 1 to all of my predictions ;). I could accept them via PM and take myself out of the prediction thread (or PM donny with my predictions before I take any predictions in, so I don't change them afterwards). We could close the predictions a day early to allow folks to see what's out there.

Your suggestions all have merit, sonycowboy. However, I see a lot of difficulty in implementing them, especially #3 and #4, so I don't agree with them.

I believe #2 is already taken into consideration by donny2112, since he did not take GC sales into account in the April 2007 prediction calculation. I also agree with all systems having the same weight, since it is the simplest (and the most reasonable).

About #1, I do not know how much easier or harder calculating the percentage is, so no comment there. As I understand it, you were suggesting calculating the difference for each platform in %, instead of the usual number, and just adding up all the percentages to arrive at the final percentage.

However, if we go by percentage, donny2112 will need to recalculate all of the predictions, which is quite burdensome for him. The way I see it, percentages have their own plus and minus points. For small-number consoles (such as GBA and PS3), even a small difference will make your ranking go way down.

I think the current system (actual numbers instead of percentages) is better (we are not penalized for guessing wrong on the small #s). Currently, there are two big #s (300K+: DS and Wii), three medium #s (100-300K: PS2, PSP, and X360), and 2-3 small #s (PS3, GBA, GC). Since many people forget to submit GBA, while none forget to submit DS and Wii, the effect of a small-# error is bigger than the effect of a big-# error (and that's the reason why I feel the amount system is better).
A lot of people forget to submit GC, but since we can decide to ignore GC (like what was done for April '07), it might not be relevant anymore.

I don't agree with #5, since it is too troublesome, and there are not so many jokers (as of now), anyway. Even when some joker wants to copy someone, he has to decide who he wants to copy from. Since the top 5 predictors for each month can vary, the joker might not be able to choose the correct person to copy from each month. I think we should ignore this point until it is proven that there are a lot of jokers. We can change the rules later; it does not have to be done right away (except for the point calculation).


donny2112 said:
Thanks for working out those examples ( :) ), but that goes back to the idea that I would like each month to be basically equal to the one before it. e.g. I'd like #5 to be worth the same in January, February, August, and December, regardless of the number of predictors.

Not really. For 100+ predictors, your conclusion is correct, but for fewer than 100 predictors, it is incorrect. I think my suggestion might cater to starship's points and to yours, while not making it overcomplicated. Using your original suggestion (CEIL(X/2)), there are so many variations. Using my suggestion, there are only 11 variations (1-10, 11-20, 21-30, 31-40, 41-50, 51-60, 61-70, 71-80, 81-90, 91-100, 100+).
You could even simplify it into 6 variations (1-20, 21-40, 41-60, 61-80, 81-100, 100+).

donny2112 said:
If we wanted to keep a reward for finishing near the top, maybe a stacked bonus system for #1-#5? #1 - 25 bonus, #2 - 15 bonus, #3 - 10 bonus, #4 - 7 bonus, #5 - 3 bonus, and otherwise a constant reduction?

I am fine with your suggestion, but change it into something that covers the top 10. Say: #1: +17, #2: +12, #3: +11, #4: +9, #5: +7, #6: +5, #7: +4, #8: +3, #9: +2, #10: +1.
A constant reduction of 2 should be fine.
 

donny2112

Member
sonycowboy said:
If you remember, back in the day, I had a little scoring mechanism for a single month and it was based on the relationship to the actual.

I tallied the points for those, so I remember them well. :lol

November 2005 Prediction Thread

Just the hardware portion ...
Non-moderator_sonycowboy said:
Point System:
1) 25 points for getting Hardware positions correct. Maximum Points: 25
2) 20 points * (percentage of correct units for hardware system, anything over 100% difference = 0 points). Maximum points: 140

...

Required Predictions
You must make the following predictions to qualify for the points contest

1) Hardware (within 10k) each system

The software portion, unfortunately, isn't something we could replicate, but we could replicate the hardware portion. I still have all the individual hardware predictions by each user from January, so this is something we could do. It also satisfies my desire to have each month worth the same. :)

For users who didn't predict, they'd get zero points. For users who didn't predict all seven consoles, they would get 0/25 for 1) and whatever they did predict for 2).

sonycowboy said:
Personally, I'd give all systems the same weighting except for the Gamecube, but I'd be willing to give some leeway here.

I haven't been counting the GameCube, because I figure NPD will stop releasing its numbers publicly at some point this year.


How does sonycowboy's method sound? Everyone gets scored based on how well they did and not on how well they did compared to others. :)


Edit:
Spending 15-30 minutes preparing a reply means I keep missing posts. :)

apujanata said:
I see a lot of difficulty in implementing them, especially #3 and #4,

It will take some work on my part to automate it, but I'm pretty sure I can do it. I wouldn't want to do that determination by hand with the number of people predicting nowadays, though. :lol

apujanata said:
About #1, I do not know how easier or more difficult calculating the percentage is, so no comment there.

I'm pretty sure I can automate that, too.

apujanata said:
As I understand it, you were suggesting to calculate the difference for each platform in %, instead of the usual number, and just added all the % to arrive at the final percentage.

The accuracy percentage (100% minus the % you're off by) for each piece of hardware would be multiplied by 20 points. Then the points from each system would be added up. It isn't an overall percentage value.

apujanata said:
However, if we go by percentage, donny2112 will need to recalculate all of the prediction, which is quite burdensome to him.

Thanks for the consideration. :) I still have all the separate predictions for this year, though, so I don't think it will be too difficult. As with most things, the time to initially set it up will be much larger than the time to execute it in subsequent months.

apujanata said:
I don't agree with #5, since it is too troublesome, and there not so many joker there (as of now), anyway.

Yeah, I don't think we need to go to the trouble of hiding predictions. One month, one guy "predicted" one unit above each of sonycowboy's numbers and another guy "predicted" one unit below each of sonycowboy's numbers. They were banned from doing predictions after that. ;) We could just do something similar.


I may just be being nostalgic, but I like sonycowboy's previous method.

1) Each month is worth the same (165 points)
2) It's not too complicated.
3) Everybody can get points regardless of how many predictors there are in a month.
4) If you do well in a month and lots of other people do well, too, you still get a high score. Conversely, if you do poorly, you'll get a low score even if you were in the top 5 for a month.
 
sonycowboy

Member
donny2112 said:
The software portion, unfortunately, isn't something we could replicate, but we could replicate the hardware portion.

Don't give up on the software just yet ;) There are plans in the works to try and allow us to rate our software predictions ;) It may be a little different than before, but not too much.

donny2112 said:
I may just be being nostalgic, but I like sonycowboy's previous method.

1) Each month is worth the same (165 points)
2) It's not too complicated.
3) Everybody can get points regardless of how many predictors there are in a month.
4) If you do well in a month and lots of other people do well, too, you still get a high score. Conversely, if you do poorly, you'll get a low score even if you were in the top 5 for a month.

Don't be overly concerned about it being "too complicated". All people have to worry about is predicting the hardware well, and we'll take care of the scoring. It just needs to be fair and transparent.

I do get a little concerned about the public predictions, and I felt I had to wait until the final day so I didn't affect people's predictions. Of course, people got much better towards the end as they started to understand comparisons to previous years, retail calendars, weekly sell through, effect of price drops and big titles, impact of the holidays, etc, etc.

Finally, I do like the idea of some "bonus", but we can work it out.

I do plan on expanding the NPD threads to have more analysis and more content in the coming months, as well as really looking at the predictive parts and seeing if we can come up with some models that work based on previous sales as well as the "wisdom of the crowd".
 

donny2112

Member
sonycowboy said:
I do plan on expanding the NPD threads to have more analysis and more content in the coming months, as well as really looking at the predictive parts and seeing if we can come up with some models that work based on previous sales as well as the "wisdom of the crowd".

"The GAF analyst group has released their predictions for hardware growth in the coming year..."
 

F#A#Oo

Banned
donny2112 said:
May NPD runs from May 6 - June 2. NPD should be released Thursday, June 14.

Awesome...that means I'll be home just in time for the thread to be posted...without having to catch up on reading the thread with GIFs and JPEGs frogged! :D
 

donny2112

Member
sonycowboy's Point System:
1) 25 points for getting Hardware positions correct. Maximum Points: 25
2) 20 points * (percentage of correct units for hardware system, anything over 100% difference = 0 points). Maximum points: 140
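sonycowboy's hardware scoring can be sketched as one function. This is a sketch of the two rules as quoted, not his original spreadsheet; the function and dictionary names are mine, and ties in predicted values are resolved by insertion order:

```python
def sonycowboy_score(prediction, actual):
    """Hardware-only scoring per the quoted rules: 25 points if the
    predicted sales ordering of all systems matches the actual
    ordering, plus up to 20 points per system scaled by percentage
    error (a miss of 100% or more scores 0 for that system).

    `prediction` and `actual` map system name -> units sold and must
    cover the same systems."""
    def sales_order(units):
        # System names sorted from best- to worst-selling.
        return sorted(units, key=units.get, reverse=True)

    score = 0.0
    if sales_order(prediction) == sales_order(actual):
        score += 25                       # positions bonus
    for system, true_units in actual.items():
        error = abs(prediction[system] - true_units) / true_units
        score += 20 * max(0.0, 1 - error)  # e.g. 10% off -> 18 points
    return score
```

With seven systems the per-system points max out at 140, so a month is worth 165 points in total, regardless of how many people predict.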

Here are the rankings for the year and the last three months using that system.

NPD Prediction Points Leaders (2007)

1. GhaleonEB - 475.65
2. sonycowboy - 470.3
3. BuzzJive - 466.71
4. argon - 466.09
5. donny2112 - 463.37
6. Clever Pun - 462.58
7. starship - 459.25
8. JoshuaJSlone - 450.1
9. duderon - 448.77
10. Orgen - 448.24


NPD Prediction Points Leaders (3 months)

1. Jokeropia - 363.8
2. GhaleonEB - 352.8
3. starship - 351.71
4. Clever Pun - 351.55
5. BuzzJive - 351.48
6. donny2112 - 350.13
7. Heidir - 348.91
8. argon - 348.33
9. sonycowboy - 343.28
10. DayShallCome - 340.52
 

donny2112

Member
How about we decide between the following two systems?

1) Point system based on your rank for the month (relative performance).

Rules

1. Bonus Points for finishing in the top 10: #1 - 17, #2 - 12, #3 - 11, #4 - 9, #5 - 7, #6 - 5, #7 - 4, #8 - 3, #9 - 2, #10 - 1

2. #1 gets 150 points, so 167 with the bonus.

3. #2-#75 drop in 2 point increments, so #2 gets 148 (+12) and #75 gets 2. The exception to that is if there are < 150 predictors, in which case only the top half (rounded up) will get points.


2) Point system based on how close you were to the "truth" (absolute performance)

Rules

1. 25 points for getting Hardware positions correct.
This means that you got the order (1-7) correct regardless of the actual values you put in for each hardware piece.

2. 20 points * (percentage of correct units for hardware system, anything over 100% difference = 0 points). Maximum points: 140
This means that if you missed the 360 value by 10%, you'd get 0.9*20 = 18 points for the 360 prediction. Repeat for each of the other six main systems.

*3. Possibly some bonus for place of finish in the normal unit ranking for each month.



I can do either without a lot of trouble, so don't let perceived complexity on my part push you in either direction.
The longest part will still be entering all the predictions to begin with. ;)
Also to repeat, this is in addition to the normal unit ranking that is done each month. There will also be an overall 2007 ranking and a moving 3-month window ranking done for either point system.


Which do you prefer? General comments?

Thanks! :)
 

apujanata

Member
donny2112 said:
How about we decide between the following two systems?

1) Point system based on your rank for the month (relative performance).

Rules

1. Bonus Points for finishing in the top 10: #1 - 17, #2 - 12, #3 - 11, #4 - 9, #5 - 7, #6 - 5, #7 - 4, #8 - 3, #9 - 2, #10 - 1

2. #1 gets 150 points, so 167 with the bonus.

3. #2-#75 drop in 2 point increments, so #2 gets 148 (+12) and #75 gets 2. The exception to that is if there are < 150 predictors, in which case only the top half (rounded up) will get points.


2) Point system based on how close you were to the "truth" (absolute performance)

Rules

1. 25 points for getting Hardware positions correct.
This means that you got the order (1-7) correct regardless of the actual values you put in for each hardware piece.

2. 20 points * (percentage of correct units for hardware system, anything over 100% difference = 0 points). Maximum points: 140
This means that if you missed the 360 value by 10%, you'd get 0.9*20 = 18 points for the 360 prediction. Repeat for each of the other six main systems.

*3. Possibly some bonus for place of finish in the normal unit ranking for each month.



I can do either without a lot of trouble, so don't let perceived complexity on my part push you in either direction.
The longest part will still be entering all the predictions to begin with. ;)
Also to repeat, this is in addition to the normal unit ranking that is done each month. There will also be an overall 2007 ranking and a moving 3-month window ranking done for either point system.


Which do you prefer? General comments?

Thanks! :)

Question about sonycowboy's rules (point #1):
If I predicted that X360 is 200K and PS2 is also 200K, but the actual result is X360 174K and PS2 194K, does that mean I do not get the 25-point bonus?
I don't understand why it is so important to get the hardware positions correct (since it gets a 25-point bonus, which is very high).

If we wanted to get a competitive leaderboard, I choose option #1 (relative)
If we wanted to get a non-competitive leaderboard, I choose option 2 (absolute)

Maybe you should make a new topic just to discuss this (since I don't think most predictors are aware of this choice). We should be able to finalize the decision before May NPD.
 

donny2112

Member
apujanata said:
Question about sonycowboy's rules (point #1):
If I predicted that X360 is 200K and PS2 is also 200K, but the actual result is X360 174K and PS2 194K, does that mean I do not get the 25-point bonus?

That's the way I interpret it, yes. Even if you number them in your prediction, the algorithm I'm using just goes off the value, so the rankings won't correctly match. I do keep track of the full prediction, though, so you could make one of them 200,001 to avoid that.

apujanata said:
I don't understand why it is so important to get the hardware positions correct (since it gets a 25-point bonus, which is very high)?

January: 0/65
February: 0/97
March: 4/171
April: 12/160

That's how many got the ordering correct each month. From the top 10 for 2007, only GhaleonEB and starship have hit it. From that, I think it's clear that getting them all in order is a pretty significant feat.

As for the amount of bonus, I just used what sonycowboy had previously specified.

apujanata said:
Maybe you should make a new topic just to discuss this (since I don't think most predictors are aware of this choice). We should be able to finalize the decision before May NPD.

Good idea. :) I'll try to get one up tomorrow.
 