2019 Marie Curie Individual Fellowship (H2020-MSCA-IF-2019)

Locked
PetetheCat
Posts: 323
Joined: Wed Feb 06, 2019 2:04 am

Re: 2019 Marie Curie Individual Fellowship (H2020-MSCA-IF-2019)

Post by PetetheCat » Thu Jan 23, 2020 3:48 pm

Kitten wrote:
Thu Jan 23, 2020 2:24 pm
good luck PetetheCat - hope it is successful this year - quite horrible to see the REJECT in one's inbox :cry: :cry:
Thanks! Same to you. At least I know that it made my proposed supervisor sad. So he is hopeful that I will get the funding and would like to work with me for sure :lol:

MSCA_CHEM_2019
Posts: 256
Joined: Fri Jan 10, 2020 5:58 pm

Re: 2019 Marie Curie Individual Fellowship (H2020-MSCA-IF-2019)

Post by MSCA_CHEM_2019 » Thu Jan 23, 2020 3:51 pm

MSCA_CHEM_2019 wrote:
Thu Jan 23, 2020 3:45 pm
SimpaLif wrote:
Thu Jan 23, 2020 3:39 pm
Slower refreshing of the link?
Submission for me
Now it is only showing a white page with no text :o :cry:

Kenniz
Posts: 172
Joined: Mon Jan 20, 2020 12:03 pm

Re: 2019 Marie Curie Individual Fellowship (H2020-MSCA-IF-2019)

Post by Kenniz » Thu Jan 23, 2020 3:51 pm

Shapovalov wrote:
Thu Jan 23, 2020 3:47 pm
(Similar point to what LIF just made, but from a different perspective)

I'm going to go against the grain and say that huge discrepancies between the scores of re-submissions are not surprising. I'm not saying it is correct, just that it is understandable why this happens. What's worse is that there is absolutely no reasonable solution to it.

The main point here is not about harsh vs lenient reviewers. It's about getting reviewers who understand the broad topic of your proposal.

Let's start with the case of reviewers for a paper. The choice of reviewer already plays a huge role there. Even though it is just one paper, and you have provided loads of keywords and suggested a whole bunch of experts in the field as potential reviewers, journals often struggle to find a suitable reviewer. I've had a paper where the journal took months over their usual response time because they could not find enough reviewers. And, in theory, it should be a lot easier for papers than for proposals.

To hold proposals submitted to such a big call to that strict a standard is not practical at all. You would need maybe 10k+ reviewers for the 10k proposals. I don't remember the number of experts from previous years, but I would expect it to be significantly lower than that. If so, it is inevitable for such a big call that reviewers will get proposals in fields outside their specialty. And then all bets are off. They may not understand the whole point of the proposal and end up scoring it very badly.
IMHO the best practicable solution would be to have two teams of reviewers (three each, as is already done) that do not know who the others are.
Then the council, or whatever body oversees it, can check whether there are huge discrepancies between the two groups or whether they arrive at similar scores. Because, let's be honest, as already mentioned, the three reviewers will influence each other and converge on a similar score; nobody wants to be the one who gave 70 when the other two gave 90.
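The two-panel check described above could be sketched roughly like this. Everything here is illustrative: the scores, the panel sizes, and the 10-point discrepancy threshold are made-up numbers, not anything from the actual MSCA evaluation process.

```python
# Hypothetical sketch of the "two independent panels" idea: each panel of
# reviewers produces a consensus score, and a proposal is flagged for a
# closer look if the two panels diverge by more than some threshold.

def panel_score(scores):
    """Consensus score of one panel: the mean of its reviewers' scores."""
    return sum(scores) / len(scores)

def needs_rereview(panel_a, panel_b, threshold=10.0):
    """Flag a proposal when the two panels' consensus scores diverge too much."""
    return abs(panel_score(panel_a) - panel_score(panel_b)) > threshold

# Example: panel A converges high, panel B converges low (gap of 20 points).
print(needs_rereview([90, 88, 92], [70, 72, 68]))  # flagged
# Example: both panels land close together (gap of 4 points).
print(needs_rereview([85, 87, 86], [82, 80, 84]))  # not flagged
```

Within-panel convergence (the "nobody wants to give 70 when the others give 90" effect) is exactly what this scheme cannot see, which is why the comparison is made between panels that do not know about each other.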

User avatar
chemcps
Posts: 55
Joined: Wed Jan 22, 2020 9:21 am

Re: 2019 Marie Curie Individual Fellowship (H2020-MSCA-IF-2019)

Post by chemcps » Thu Jan 23, 2020 3:52 pm

Hi, Could you please share the link to check the updated score?

Kenniz
Posts: 172
Joined: Mon Jan 20, 2020 12:03 pm

Re: 2019 Marie Curie Individual Fellowship (H2020-MSCA-IF-2019)

Post by Kenniz » Thu Jan 23, 2020 3:52 pm

MSCA_CHEM_2019 wrote:
Thu Jan 23, 2020 3:51 pm
MSCA_CHEM_2019 wrote:
Thu Jan 23, 2020 3:45 pm
SimpaLif wrote:
Thu Jan 23, 2020 3:39 pm
Slower refreshing of the link?
Submission for me
Now it is only showing a white page with no text :o :cry:
everything is as normal for me. no lags, no long loading, no changes

PetetheCat
Posts: 323
Joined: Wed Feb 06, 2019 2:04 am

Re: 2019 Marie Curie Individual Fellowship (H2020-MSCA-IF-2019)

Post by PetetheCat » Thu Jan 23, 2020 3:52 pm

LIF wrote:
Thu Jan 23, 2020 3:29 pm
If I may pitch in, I totally get the frustration about the seemingly random scores. But I believe that the vast majority of the reviewers do their best to evaluate the proposals they get. And if you've ever submitted papers, had a proposal of yours read by different people, or discussed your research with people from outside your immediate field, you probably have noticed that different people pick up on different things, simply because they may know more about this specific part, have practical experience with something similar, or just find it more interesting than other parts.

It doesn't mean that their opinion isn't valid; they are simply judging the science you are serving them from different viewpoints. So, one year you might have reviewers who can fully understand and judge the quality of your proposal and give you a high score. The next year, you get different reviewers who come from more distantly related fields, have less background knowledge, cannot understand the implications of your research that are maybe not written out explicitly, and therefore give you a lower score.

So, even though the evaluation feels really random, I think everyone can try and make their proposal as accessible as possible and spell out things that seem super-obvious to them. That's also one of the most recurring tips I've heard and read regarding such proposals: be really really clear, concrete and help your reviewers understand why your research is important and relevant. They might not have the time to read between your lines, so better serve them your core message on a silver plate.
I would agree with this. I think there are always frustrations about reviews, as the reviewers cannot see what you were thinking in your head, and sometimes they may just not like your perspective on the field. Last year I had one review comment that said I did not mention something that was actually written about right there in my proposal. But that aside, the things that brought my score down were real weaknesses in my proposal. I am sure we could find some genuinely unfair reviewers, but I am also sure that there are real things that they find to focus on. In my case, they liked my proposal but were clearly looking for the minor weaknesses. I ended up on the reserve list, and I knew it was a good proposal, but there were weaknesses, so it felt fair.

This process is taking something that is inherently subjective and trying hard to make it fair to everyone. I don't think it is possible to make it so that it really feels fair, but they are trying.

megasphaera
Posts: 226
Joined: Mon Jan 14, 2019 2:55 pm

Re: 2019 Marie Curie Individual Fellowship (H2020-MSCA-IF-2019)

Post by megasphaera » Thu Jan 23, 2020 3:56 pm

Kenniz wrote:
Thu Jan 23, 2020 3:51 pm
Shapovalov wrote:
Thu Jan 23, 2020 3:47 pm
(Similar point to what LIF just made, but from a different perspective)

I'm going to go against the grain and say that huge discrepancies between the scores of re-submissions are not surprising. I'm not saying it is correct, just that it is understandable why this happens. What's worse is that there is absolutely no reasonable solution to it.

The main point here is not about harsh vs lenient reviewers. It's about getting reviewers who understand the broad topic of your proposal.

Let's start with the case of reviewers for a paper. The choice of reviewer already plays a huge role there. Even though it is just one paper, and you have provided loads of keywords and suggested a whole bunch of experts in the field as potential reviewers, journals often struggle to find a suitable reviewer. I've had a paper where the journal took months over their usual response time because they could not find enough reviewers. And, in theory, it should be a lot easier for papers than for proposals.

To hold proposals submitted to such a big call to that strict a standard is not practical at all. You would need maybe 10k+ reviewers for the 10k proposals. I don't remember the number of experts from previous years, but I would expect it to be significantly lower than that. If so, it is inevitable for such a big call that reviewers will get proposals in fields outside their specialty. And then all bets are off. They may not understand the whole point of the proposal and end up scoring it very badly.
IMHO the best practicable solution would be to have two teams of reviewers (three each, as is already done) that do not know who the others are.
Then the council, or whatever body oversees it, can check whether there are huge discrepancies between the two groups or whether they arrive at similar scores. Because, let's be honest, as already mentioned, the three reviewers will influence each other and converge on a similar score; nobody wants to be the one who gave 70 when the other two gave 90.
That seems a very smart solution, but it is not practical, since there are lots of proposals.
Anyway, in my case I know my project was shit, and I am pissed off about other comments I received. As I said, it is only my fault for not highlighting it enough.

Kenniz
Posts: 172
Joined: Mon Jan 20, 2020 12:03 pm

Re: 2019 Marie Curie Individual Fellowship (H2020-MSCA-IF-2019)

Post by Kenniz » Thu Jan 23, 2020 4:01 pm

megasphaera wrote:
Thu Jan 23, 2020 3:56 pm
Kenniz wrote:
Thu Jan 23, 2020 3:51 pm
Shapovalov wrote:
Thu Jan 23, 2020 3:47 pm
(Similar point to what LIF just made, but from a different perspective)

I'm going to go against the grain and say that huge discrepancies between the scores of re-submissions are not surprising. I'm not saying it is correct, just that it is understandable why this happens. What's worse is that there is absolutely no reasonable solution to it.

The main point here is not about harsh vs lenient reviewers. It's about getting reviewers who understand the broad topic of your proposal.

Let's start with the case of reviewers for a paper. The choice of reviewer already plays a huge role there. Even though it is just one paper, and you have provided loads of keywords and suggested a whole bunch of experts in the field as potential reviewers, journals often struggle to find a suitable reviewer. I've had a paper where the journal took months over their usual response time because they could not find enough reviewers. And, in theory, it should be a lot easier for papers than for proposals.

To hold proposals submitted to such a big call to that strict a standard is not practical at all. You would need maybe 10k+ reviewers for the 10k proposals. I don't remember the number of experts from previous years, but I would expect it to be significantly lower than that. If so, it is inevitable for such a big call that reviewers will get proposals in fields outside their specialty. And then all bets are off. They may not understand the whole point of the proposal and end up scoring it very badly.
IMHO the best practicable solution would be to have two teams of reviewers (three each, as is already done) that do not know who the others are.
Then the council, or whatever body oversees it, can check whether there are huge discrepancies between the two groups or whether they arrive at similar scores. Because, let's be honest, as already mentioned, the three reviewers will influence each other and converge on a similar score; nobody wants to be the one who gave 70 when the other two gave 90.
That seems a very smart solution, but it is not practical, since there are lots of proposals.
Anyway, in my case I know my project was shit.
Yeah, I mean, it would double the number of reviewers. You could shrink it down by, for example, using two reviewers per group instead of three, or in general making the reviews blind among the three reviewers so they don't influence each other. In my opinion, that influencing leads to similar scores even when the reviewers would normally see things differently. Just look at same-stage reviews for papers: the reviewers don't know what the others think, at least at the same stage.

Amar
Posts: 298
Joined: Fri Dec 29, 2017 5:33 pm

Re: 2019 Marie Curie Individual Fellowship (H2020-MSCA-IF-2019)

Post by Amar » Thu Jan 23, 2020 4:03 pm

Question to those who received a Reject today (reserve list), regarding last year's proposal: do you have the same project number for the re-submitted proposal?

NKx
Posts: 4
Joined: Thu Jan 16, 2020 6:04 pm

Re: 2019 Marie Curie Individual Fellowship (H2020-MSCA-IF-2019)

Post by NKx » Thu Jan 23, 2020 4:18 pm

The participant portal is not working for me! It has been loading for 20+ mins; I have refreshed and closed the browser and started again, but to no avail :shock:

Locked