2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)

lifmc2020
Posts: 314
Joined: Wed Jan 06, 2021 11:24 am

Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)

Post by lifmc2020 » Fri Feb 05, 2021 4:53 pm

Sunday night panic party right here, everyone? :lol: :lol: :lol:

Surrreal
Posts: 37
Joined: Wed Feb 03, 2021 2:44 pm

Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)

Post by Surrreal » Fri Feb 05, 2021 4:59 pm

Me on Sunday night:
Attachment: download.jpg

Moskito
Posts: 3
Joined: Fri Feb 05, 2021 4:53 pm

Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)

Post by Moskito » Fri Feb 05, 2021 4:59 pm

Hi everybody,
I am a new member, although I have been reading the forum for some days. Enjoying it. I do not feel so alone waiting for the results.
I applied for a global fellowship in the chemistry panel.
I wish good luck to all of you.

Giadina_1988
Posts: 44
Joined: Thu Feb 07, 2019 1:05 pm

Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)

Post by Giadina_1988 » Fri Feb 05, 2021 5:04 pm

Last year's supervisor (I changed host and supervisor this year because, in the meantime, I got a job at that same institution) was/is 43/45 :D
Fu Manchu wrote:
Fri Feb 05, 2021 3:35 pm
Giadina_1988 wrote:
Fri Feb 05, 2021 2:18 pm
It was the case for me last year. I had a relatively young supervisor, but excellent in his qualifications. He had an excellent track record of successful supervisions, a great record of publications, a host that already had an MSCA fellow... Nothing bad on paper! In the reviewers' comments we could totally see they had something against the supervisor (who, in fairness, has a bit of a reputation for being a 'challenger' in his field). They questioned his qualifications, even stating he was too young to supervise an MSCA. Result: 30 points LESS than the previous year (when the supervisor was exactly the same and not questioned at all!).
PetetheCat wrote:
Fri Feb 05, 2021 2:10 pm


Yes, but remember that this also means that you can get a reviewer who dislikes your well-known supervisor. Mostly it is good, but it can cut both ways. :lol:
How old is your supervisor? Just curious to know what "young" means :| :|

Steminist
Posts: 193
Joined: Thu Jan 14, 2021 3:50 pm

Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)

Post by Steminist » Fri Feb 05, 2021 5:10 pm

I will buy at least 6 beers to survive Sunday night, and I think I will sleep all of Sunday to be alive for the Sunday-to-Monday craziness =D

tahir
Posts: 66
Joined: Wed Sep 11, 2019 6:54 pm
Location: Pakistan

Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)

Post by tahir » Fri Feb 05, 2021 5:10 pm

Hi everyone, do you guys think this MSCA fellowship would help one secure an academic job in Europe?

entropy123
Posts: 12
Joined: Wed Jan 27, 2021 6:04 pm

Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)

Post by entropy123 » Fri Feb 05, 2021 5:13 pm

Do we get the emails on Monday or Tuesday?

Bluestar
Posts: 9
Joined: Sat Jan 30, 2021 9:12 am

Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)

Post by Bluestar » Fri Feb 05, 2021 5:14 pm

Little_Venice wrote:
Fri Feb 05, 2021 4:35 pm
This sounds perfect on paper, but I personally, over the years, have seen dozens of examples of random, inconsistent and self-contradictory reviewer comments. They will literally say that the project is innovative in the first comment and that the project lacks innovative aspects in the last one, with all kinds of inconsistencies in between. Also, dozens of examples of people scoring significantly lower on a resubmission, having allegedly improved the proposal. Sadly, the reviews I have seen strongly suggest that, at least in those particular cases, little of what you describe below is actually followed in practice.

GuyFromSpace wrote:
Fri Feb 05, 2021 4:20 pm
People are putting too much emphasis on a single evaluator randomly hating you. Recall that your proposal is evaluated by three experts, without communication between them. Then a fourth person, the rapporteur, collects the reports and identifies agreements and disagreements, chairs a meeting between everyone, and tries to build consensus. The rapporteur doesn't just average all the scores; any significant divergence between the evaluators must be resolved. If consensus can't be built, then the panel Vice-Chair decides.

Furthermore, if the proposal is a re-submission but the score is lower, the rapporteur must check for divergences in opinion. For example, if in 2019 the evaluators said "the proposed dissemination activities are innovative" but in 2020 they said "the proposed dissemination activities are unoriginal," the rapporteur has to investigate the issue to make sure the change in evaluation is justified (e.g., maybe sharing your research through interpretative dance on TikTok was innovative in 2019, but now in 2020 everyone is doing it).

Lastly, the score you get is somewhat relative to everyone else who submitted in the same year. It's not meant to be; the scoring scheme is supposed to be absolute and unchanging, but in practice it is relative, because evaluators are advised to read all the proposals they will score beforehand, in order to get an overview of the level of this year's proposals. A drop in score when you resubmit typically means the competition got better.

Look, I won't deny luck matters. It absolutely does, especially when competition is this fierce. Even if luck amounts to a fluctuation of just plus or minus 1 point out of 100, that can easily be the difference between you getting the grant or not. But it's not a lottery. If your proposal scored low, it's significantly more likely that it just wasn't that good than that an evaluator randomly hated you and convinced three other people of the same bias.
I agree with Little_Venice. I would add that with 11,000 applications (or 9,000, it doesn't really make a difference) there is a need to weed out applications heavily. Therefore, if an evaluator wants to find shortcomings/limitations in your application, she/he will find them, and then your application may already be compromised in any case. It's not the first time I have heard that from colleagues who evaluate.

PetetheCat
Posts: 323
Joined: Wed Feb 06, 2019 2:04 am

Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)

Post by PetetheCat » Fri Feb 05, 2021 5:22 pm

Bluestar wrote:
Fri Feb 05, 2021 5:14 pm
Little_Venice wrote:
Fri Feb 05, 2021 4:35 pm
This sounds perfect on paper, but I personally, over the years, have seen dozens of examples of random, inconsistent and self-contradictory reviewer comments. They will literally say that the project is innovative in the first comment and that the project lacks innovative aspects in the last one, with all kinds of inconsistencies in between. Also, dozens of examples of people scoring significantly lower on a resubmission, having allegedly improved the proposal. Sadly, the reviews I have seen strongly suggest that, at least in those particular cases, little of what you describe below is actually followed in practice.

GuyFromSpace wrote:
Fri Feb 05, 2021 4:20 pm
People are putting too much emphasis on a single evaluator randomly hating you. Recall that your proposal is evaluated by three experts, without communication between them. Then a fourth person, the rapporteur, collects the reports and identifies agreements and disagreements, chairs a meeting between everyone, and tries to build consensus. The rapporteur doesn't just average all the scores; any significant divergence between the evaluators must be resolved. If consensus can't be built, then the panel Vice-Chair decides.

Furthermore, if the proposal is a re-submission but the score is lower, the rapporteur must check for divergences in opinion. For example, if in 2019 the evaluators said "the proposed dissemination activities are innovative" but in 2020 they said "the proposed dissemination activities are unoriginal," the rapporteur has to investigate the issue to make sure the change in evaluation is justified (e.g., maybe sharing your research through interpretative dance on TikTok was innovative in 2019, but now in 2020 everyone is doing it).

Lastly, the score you get is somewhat relative to everyone else who submitted in the same year. It's not meant to be; the scoring scheme is supposed to be absolute and unchanging, but in practice it is relative, because evaluators are advised to read all the proposals they will score beforehand, in order to get an overview of the level of this year's proposals. A drop in score when you resubmit typically means the competition got better.

Look, I won't deny luck matters. It absolutely does, especially when competition is this fierce. Even if luck amounts to a fluctuation of just plus or minus 1 point out of 100, that can easily be the difference between you getting the grant or not. But it's not a lottery. If your proposal scored low, it's significantly more likely that it just wasn't that good than that an evaluator randomly hated you and convinced three other people of the same bias.
I agree with Little_Venice. I would add that with 11,000 applications (or 9,000, it doesn't really make a difference) there is a need to weed out applications heavily. Therefore, if an evaluator wants to find shortcomings/limitations in your application, she/he will find them, and then your application may already be compromised in any case. It's not the first time I have heard that from colleagues who evaluate.
It's not random, so perhaps the word "lottery" is unfair. But it is about the evaluator looking at your proposal and seeing value in it. The nature of debate in academic fields means that there are different views on this. And last year, the comments I got that lowered my "excellence" score by over one point (even with improvements in clarity from the year before, when this section scored highly) were clearly about a political position in relation to the work I was carrying out. They were about a critique of the project that I could understand someone having from that perspective -- and have tried to preempt this year -- but they weren't actually about not seeing merit in the proposal. And then there were other comments that were clearly in there to find justifications to lower the score.

So "randomly hating" is not the correct term, but evaluators have positions, or they are given a proposal in a field they don't fully know, so they are unable to evaluate it against the latest work in that field. And when you are on the receiving end of this and your improved proposal drops in score from one year to the next, it feels very unfair and arbitrary.

For what it's worth, the first year that I didn't get it, I was waitlisted and received minor technical critiques about my project and the criteria, which actually made me feel that it was very fair. But the reviewers were clear that I had a great idea. And then the next year the reviewers disagreed on this core point of whether the research topic I was addressing was a good one. It is a lottery as to whether the reviewers I get this year hold the first position or the second.

Giadina_1988
Posts: 44
Joined: Thu Feb 07, 2019 1:05 pm

Re: 2020 Marie Curie Individual Fellowship (H2020-MSCA-IF-2020)

Post by Giadina_1988 » Fri Feb 05, 2021 5:23 pm

That's not my story, unfortunately. I do totally share your perspective, but I might have been the exception. We are not talking about 15 points of difference between one year and the other ... We are talking about being two points away from being awarded one year and 20 points below the threshold the following year. And sure: if my work had been that good, I would have got the funding, or at least some funding. I have presented it to four other national and international competitions since, and the scores and comments were always consistent with the first MSCA: "you're almost there ... just improve this ..."!

The point being: it happens that subjectivity, personal relations and simply diverging opinions make it impossible even for a group of four people to reach agreement. In my case I could have appealed, but what for? To argue with someone clearly stating that I have no basic preparation in EU integration and EU politics ... when I even have a book published on it? With someone saying that there is no interdisciplinarity because 'History, Political Science and Law are basically the same thing'?

Better to just know in my mind and my heart that that score is not a reflection on me as a scholar, but just the opinion of someone (or several someones) who really did not like my idea at all. It happens! I don't like many things :-) I took what little was constructive in that report and moved on to improve what I have. And if it does not work ... at least I tried to follow my dream, without letting that person/those people win! :D I got rejected 85 times before getting a job ... but the best thing I did was to never give up! :-)

GuyFromSpace wrote:
Fri Feb 05, 2021 4:20 pm
People are putting too much emphasis on a single evaluator randomly hating you. Recall that your proposal is evaluated by three experts, without communication between them. Then a fourth person, the rapporteur, collects the reports and identifies agreements and disagreements, chairs a meeting between everyone, and tries to build consensus. The rapporteur doesn't just average all the scores; any significant divergence between the evaluators must be resolved. If consensus can't be built, then the panel Vice-Chair decides.

Furthermore, if the proposal is a re-submission but the score is lower, the rapporteur must check for divergences in opinion. For example, if in 2019 the evaluators said "the proposed dissemination activities are innovative" but in 2020 they said "the proposed dissemination activities are unoriginal," the rapporteur has to investigate the issue to make sure the change in evaluation is justified (e.g., maybe sharing your research through interpretative dance on TikTok was innovative in 2019, but now in 2020 everyone is doing it).

Lastly, the score you get is somewhat relative to everyone else who submitted in the same year. It's not meant to be; the scoring scheme is supposed to be absolute and unchanging, but in practice it is relative, because evaluators are advised to read all the proposals they will score beforehand, in order to get an overview of the level of this year's proposals. A drop in score when you resubmit typically means the competition got better.

Look, I won't deny luck matters. It absolutely does, especially when competition is this fierce. Even if luck amounts to a fluctuation of just plus or minus 1 point out of 100, that can easily be the difference between you getting the grant or not. But it's not a lottery. If your proposal scored low, it's significantly more likely that it just wasn't that good than that an evaluator randomly hated you and convinced three other people of the same bias.
