How old is your supervisor? Just curious to know what "young" meansGiadina_1988 wrote: ↑Fri Feb 05, 2021 2:18 pmIt was the case for me last year. I had a relatively young supervisor, but excellent in his qualifications. He had an excellent track record of successful supervisions, a great list of publications, a host that already has an MSCA fellow... Nothing bad on paper! In the reviewers' comments we could totally see they had something against the supervisor (who, in fairness, has a bit of a reputation as a 'challenger' in his field). They questioned his qualifications, even stating he was too young to supervise an MSCA. Result: 30 points LESS than the previous year (when the supervisor was exactly the same and not questioned at all!).
PetetheCat wrote: ↑Fri Feb 05, 2021 2:10 pmYes, but remember that this also means that you can get a reviewer who dislikes your well-known supervisor. Mostly it is good, but it can cut two ways.
2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)
Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)
I applied with a very famous supervisor from a top-ranking UK university. The proposal was expected to be strong, with a new topic. But it is still under evaluation.
It should be a lottery
Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)
My proposed supervisor is one of the foremost authorities in their field. Several books, over a hundred articles, has supervised directly and indirectly dozens of graduate students over the decades. They fully endorsed my project and helped me write it. When we discussed the submission, they seemed fairly confident we would succeed.
They asked, "So when are you going to start learning Italian?"
Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)
People are putting too much emphasis on a single evaluator randomly hating you. Recall that your proposal is evaluated by three experts, without communication between them. Then a fourth person, the rapporteur, collects the reports and identifies agreements and disagreements, chairs a meeting between everyone, and tries to build consensus. The rapporteur doesn't just average all the scores; any significant divergence between the evaluators must be resolved. If consensus can't be built, then the panel Vice-Chair decides.
Furthermore, if the proposal is a re-submission but the score is lower, the rapporteur must check for divergences in opinion. For example if in 2019 the evaluators said "the proposed dissemination activities are innovative" but in 2020 they said "the proposed dissemination activities are unoriginal," the rapporteur has to investigate the issue to make sure the change in evaluation is justified (e.g., maybe sharing your research through interpretative dance on TikTok was innovative in 2019, but now in 2020 everyone is doing it).
Lastly, the score you get is somewhat relative to everyone else who submitted in the same year. It's not meant to be: the scoring scheme is supposed to be absolute and unchanging. But in practice it is relative, because evaluators are advised to read all the proposals they will score beforehand, in order to get an overview of the level of that year's proposals. A drop in score when you resubmit typically means the competition got better.
Look, I won't deny luck matters. It absolutely does, especially when competition is this fierce. Even if luck amounts to a fluctuation of just plus or minus 1 point out of 100, that can easily be the difference between getting the grant or not. But it's not a lottery. If your proposal scored low, it's significantly more likely that it just wasn't that good, rather than an evaluator randomly hating you and convincing three other people to share the same bias.
Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)
Fantastic post, thank you.GuyFromSpace wrote: ↑Fri Feb 05, 2021 4:20 pmPeople are putting too much emphasis on a single evaluator randomly hating you. Recall that your proposal is evaluated by three experts, without communication between them. Then a fourth person, the rapporteur, collects the reports and identifies agreements and disagreements, chairs a meeting between everyone, and tries to build consensus. The rapporteur doesn't just average all the scores; any significant divergence between the evaluators must be resolved. If consensus can't be built, then the panel Vice-Chair decides.
Furthermore, if the proposal is a re-submission but the score is lower, the rapporteur must check for divergences in opinion. For example if in 2019 the evaluators said "the proposed dissemination activities are innovative" but in 2020 they said "the proposed dissemination activities are unoriginal," the rapporteur has to investigate the issue to make sure the change in evaluation is justified (e.g., maybe sharing your research through interpretative dance on TikTok was innovative in 2019, but now in 2020 everyone is doing it).
Lastly, the score you get is somewhat relative to everyone else that submitted in the same year. It's not meant to; the scoring scheme is supposed to be absolute and unchanging, but in practice it is, because a recommendation to evaluators is to read all proposals they will score beforehand, in order to get an overview of the level of proposals this year. A drop in score when you resubmit typically means the competition got better.
Look, I won't deny luck matters. It absolutely does, specially when competition is this fierce. Even if luck amounts to a fluctuation of just plus or minus 1 point out of 100, that can easily be the difference between you getting the grant or not. But it's not a lottery. If your proposal scored low, it's significantly more likely it just wasn't that good, rather than an evaluator randomly hating you and convincing three other people of the same bias.
Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)
GuyFromSpace wrote: ↑Fri Feb 05, 2021 4:20 pmLook, I won't deny luck matters. It absolutely does, specially when competition is this fierce. Even if luck amounts to a fluctuation of just plus or minus 1 point out of 100, that can easily be the difference between you getting the grant or not. But it's not a lottery. If your proposal scored low, it's significantly more likely it just wasn't that good, rather than an evaluator randomly hating you and convincing three other people of the same bias.
My funding applications in the Netherlands and Belgium were met with extremely divergent evaluations; the final comments even said so. Some evaluators ranked me very low, others very high. My application for Canadian government funding was evaluated by people who weren't even in the same field. It was unfair, but the system is unfair.
MSCA appears to be no different. It really depends on who evaluates your proposal, provided your proposal is coherent and readable in the first place.
Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)
I have been unsuccessful twice, with a world-leading supervisor and one of the three best universities in the world.
What failed? The proposal. You need to address all the criteria the reviewers are looking for, and of course you need a bit of luck.
Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)
This sounds perfect on paper, but I personally, over the years, have seen dozens of examples of random, inconsistent, and self-contradictory reviewer comments. They will literally say that the project is innovative in the first comment and that the project lacks innovative aspects in the last one, with all kinds of inconsistencies in between. I have also seen dozens of examples of people scoring significantly lower on a resubmission, having allegedly improved the proposal. Sadly, the reviews I have seen strongly suggest that, at least in those particular cases, little of what you describe below is actually followed in practice.
GuyFromSpace wrote: ↑Fri Feb 05, 2021 4:20 pmPeople are putting too much emphasis on a single evaluator randomly hating you. Recall that your proposal is evaluated by three experts, without communication between them. Then a fourth person, the rapporteur, collects the reports and identifies agreements and disagreements, chairs a meeting between everyone, and tries to build consensus. The rapporteur doesn't just average all the scores; any significant divergence between the evaluators must be resolved. If consensus can't be built, then the panel Vice-Chair decides.
Furthermore, if the proposal is a re-submission but the score is lower, the rapporteur must check for divergences in opinion. For example if in 2019 the evaluators said "the proposed dissemination activities are innovative" but in 2020 they said "the proposed dissemination activities are unoriginal," the rapporteur has to investigate the issue to make sure the change in evaluation is justified (e.g., maybe sharing your research through interpretative dance on TikTok was innovative in 2019, but now in 2020 everyone is doing it).
Lastly, the score you get is somewhat relative to everyone else that submitted in the same year. It's not meant to; the scoring scheme is supposed to be absolute and unchanging, but in practice it is, because a recommendation to evaluators is to read all proposals they will score beforehand, in order to get an overview of the level of proposals this year. A drop in score when you resubmit typically means the competition got better.
Look, I won't deny luck matters. It absolutely does, specially when competition is this fierce. Even if luck amounts to a fluctuation of just plus or minus 1 point out of 100, that can easily be the difference between you getting the grant or not. But it's not a lottery. If your proposal scored low, it's significantly more likely it just wasn't that good, rather than an evaluator randomly hating you and convincing three other people of the same bias.
Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)
A bit of luck you need. However, the latest information is that the NCPs have not received the results, so it will be a long alcoholic weekend until Monday.