althusser wrote: Wed Mar 02, 2022 12:35 pm
Just to pass the time, I am only thinking out loud here: things seem to be different this time around, so let's forget previous years.
This year, many people report moving from evaluation to ranking within 5 minutes, so let's say that this is the time the EC IT staff need to enter each applicant's results into the system. 5 minutes per applicant for the total of 8,356 proposals requires 41,780 minutes. Every working day (8 hours) has 480 minutes. Dividing 41,780 by 480 gives about 87 days if it were done by one person. Let's say now that the EC has 20 IT staff working exclusively on that. Each will need 4.35 working days to finish their pile; let's round that up to 5 days. If yesterday was the first day, the 20 IT staff will finish uploading the results on Monday.
Now, as far as priority is concerned, they may start at the top of the scores (descending) or at the bottom (ascending), but they may also enter the results per panel (as was suggested), or even per country, or some combination of the above.
I think that at this point we can only assume the following: people with a score over 70% will be put into "RANKING" (last year these were more or less 70% of applicants); people with a score below 70% will be put into "EVALUATION" and stay there (last year roughly 30% of applicants); and those who currently appear in "SUBMISSION" will simply have to wait until their information is entered into the system.
According to this line of thinking, we shouldn't expect results before the end of next week, which is 2-3 days prior to the official deadline.
P.S. Correct me if I'm wrong. After all, I am in the SOC panel.
cool analysis!!
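The timeline arithmetic in the quoted post checks out. Here is a quick Python sketch, assuming its figures (5 minutes per proposal, 8,356 proposals, 480 working minutes per day, 20 staff) are roughly right; none of these numbers are official:

```python
# Back-of-the-envelope check of the quoted timeline estimate.
# All inputs are assumptions taken from the post, not official EC figures.

MINUTES_PER_PROPOSAL = 5
N_PROPOSALS = 8356
MINUTES_PER_DAY = 8 * 60   # 480 working minutes per day
N_STAFF = 20

total_minutes = MINUTES_PER_PROPOSAL * N_PROPOSALS        # 41,780 minutes
days_one_person = total_minutes / MINUTES_PER_DAY         # ~87 working days
days_in_parallel = days_one_person / N_STAFF              # ~4.35 working days

print(f"total minutes:         {total_minutes}")
print(f"one person:            {days_one_person:.1f} working days")
print(f"{N_STAFF} people in parallel: {days_in_parallel:.2f} working days")
```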
Some inspiration from yours (also SOC, EF):
Step 1: build a logistic regression model.
The full model would be: DV = funded or not (1 = yes, 0 = no); IV1 = ranking or not (1 = yes, 0 = no); IV2 = absolute timing of the switch to ranking (taking the first post reporting a phase change as time 0); IV3 = time taken from evaluation to ranking; IV4 = panel risk (based on the past cut-off values of the panels); IV5 = number of submissions; IV6 = xxx, etc., plus all the interactions.
Step 2: collect data from past posts and organize them.
Step 3: run the models, combined with cross-validation / model comparison (i.e. compare the full model with a simpler model with fewer IVs), etc.
Expected output: the model that best explains the data, and the IVs that (might) significantly contribute to the predictions (rough sketch at the end of this post).
:mrgreen:
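A minimal sketch of what steps 1-3 could look like in Python with pandas and statsmodels, assuming the forum data have already been scraped into a CSV; the file name, all column names, and the interaction term are hypothetical illustrations of the modelling idea, not a tested pipeline:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset collected from past forum posts (column names are
# illustrative and would need to match whatever is actually scraped):
#   funded         - 1 if the proposal was funded, 0 otherwise (DV)
#   ranked         - 1 if the status moved to RANKING, 0 otherwise (IV1)
#   t_ranking      - timing of the status change, relative to the first
#                    reported phase change (IV2)
#   t_eval_to_rank - time from EVALUATION to RANKING (IV3)
#   panel_risk     - risk proxy from past panel cut-off scores (IV4)
#   n_submissions  - how many times the applicant has submitted (IV5)
df = pd.read_csv("msca_posts.csv")

# Full model: all predictors plus one two-way interaction as an example.
full = smf.logit(
    "funded ~ ranked + t_ranking + t_eval_to_rank + panel_risk"
    " + n_submissions + ranked:panel_risk",
    data=df,
).fit()

# Simpler nested model with fewer IVs, for comparison (step 3).
simple = smf.logit("funded ~ ranked + panel_risk", data=df).fit()

print(full.summary())
# Likelihood-ratio statistic between the nested models; a large value
# suggests the extra IVs add explanatory power.
lr_stat = 2 * (full.llf - simple.llf)
print("LR statistic (full vs simple):", lr_stat)
```

Cross-validation could be layered on top (e.g. with scikit-learn), but the nested-model comparison above already captures the basic step-3 idea.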