Agile on the Beach – update on review process

294 submissions
54 volunteer reviewers
2,400 votes
2 rounds of review
40 accepted submissions

Over the weekend I did the hardest thing I do all year: I sent the "sorry, your submission to Agile on the Beach has not been chosen" e-mails. The declines.

I have explained how we review sessions for AOTB before, but things have changed a bit, so I owe submitters an explanation. So here goes…

This year, as usual, we had nearly 300 submissions to speak at Agile on the Beach. There are only 40 speaking slots – the exact number varies a little because we make minor changes to the schedule.

We have two review rounds. In round one reviewers score submissions from 1 to 5, with 5 being the best. From this we create a shortlist for each track. The rule of thumb is that the shortlist is twice the number of available slots (5 slots for Thursday tracks, 4 for Friday and 9 for two-day tracks, so shortlists of 10, 8 and 18).

The first change for 2020: round 1 was done blind. That is, reviews were anonymous – speaker biography details were not shown to reviewers.

Now on the one hand this is fairer: it gives new speakers more of a chance, it stops the same old names dominating the conference and hopefully it promotes diversity.

On the other hand, reviewers have often seen speakers previously and know who is good and who is not so good. Blind reviewing also carries a commercial cost, because a programme with more known names is likely to be more attractive to ticket buyers. So 2020 was an experiment.

I think blind reviewing worked. It does mean a few regulars did not make it, and I feel bad about that – they are my friends. I also think we were right to look at speaker bios in round 2.

While we set no rules about speaker self-identification (some speakers used the synopsis to give their name or even a mini-bio), a few reviewers didn't appreciate this and noted in their comments that they scored such submissions lower.

Back to shortlisting… when shortlisting we look at the mean and the median review scores. Usually the first few, highest-scoring, sessions are easy to pick. It is the borderline ones which are hard to decide.
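As a rough illustration of how that shortlisting step works – this is not Mimas's actual code, and the function, field names and numbers are made up – the idea is simply "sort by mean score, break ties on the median, keep twice the slot count":

```python
from statistics import mean, median

def shortlist(submissions, slots):
    """Shortlist roughly twice as many sessions as there are track slots.

    `submissions` is a list of (title, scores) pairs, where scores are the
    1-5 round-one votes. Illustrative sketch only, not the real system.
    """
    ranked = sorted(
        submissions,
        key=lambda s: (mean(s[1]), median(s[1])),  # mean first, median breaks ties
        reverse=True,
    )
    return ranked[: 2 * slots]

# Toy example: a track with 1 slot shortlists the top 2 of 3 submissions.
submissions = [("Talk A", [5, 4, 4, 5]), ("Talk B", [3, 3, 4, 2]), ("Talk C", [4, 4, 5, 4])]
print([title for title, _ in shortlist(submissions, slots=1)])  # ['Talk A', 'Talk C']
```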

In round two reviewers are asked to rank the shortlist. There can be only one number 1. The lowest rank depends on how long the shortlist is, which varies by track. (So, oddly, round 2 reverses the scoring: 1 is top.)

In both rounds reviewers are encouraged – but not compelled – to write an explanation of their score. We use these comments when deciding borderline submissions, and they are provided to the submitter as feedback.

In the past this process has required each reviewer to look at 70 or more submissions. A few reviewers look at everything, all 300. I stopped doing this myself about two years back. Our submission system (Mimas) tries to ensure that all submissions get an equal number of reviews.
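I won't claim this is exactly how Mimas allocates reviews, but the principle of spreading submissions evenly can be sketched like this (the function and parameter names are my own invention):

```python
from collections import defaultdict
from itertools import cycle

def allocate_reviews(submissions, reviewers, reviews_per_submission=8):
    """Deal each submission out to reviewers in round-robin order so that
    every submission receives the same number of reviews. Assumes there are
    at least `reviews_per_submission` reviewers, so nobody reviews the same
    submission twice. Illustrative only - a real allocation may also need
    to handle conflicts of interest and reviewer workload limits."""
    workload = defaultdict(list)      # reviewer -> submissions to review
    next_reviewer = cycle(reviewers)
    for submission in submissions:
        for _ in range(reviews_per_submission):
            workload[next(next_reviewer)].append(submission)
    return workload
```

Dealing assignments out in a cycle like this also keeps reviewer workloads within one review of each other.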

Because reviewers were reviewing so many sessions, feedback declined. I suspect attention to individual submissions declined too, but I don't know for sure.

The other big change for 2020: we recruited more reviewers. Actually, we invited everyone who had already bought a ticket, and everyone who attended in 2019, to review for round 1. 54 people volunteered and were allocated sessions to review. About a dozen people didn't do their reviews but we still had plenty of scores.

This created more work for me than expected. While Mimas was updated to cope with the extra reviewers, it wasn't completely smooth. What's more, some reviewers left it to the last minute to review, which created problems and meant I had to chase people.

Round 2 reviewing was more limited. Only about 12 reviewers were picked, including some AOTB committee members.

Over the years, and especially since I wrote Mimas, AOTB selection has become more score-dependent. This has its good and bad sides, but it does mean we get it done faster.

Once round 2 is done we look at the average ranks (mean and median again) and count off from the top to fill the track. Then we apply some judgement: people who submit in two tracks normally only speak in one. We would like a male/female balance but we don't have any hard and fast rules; plus, we don't know if we are being fair on other diversity criteria. Are dyslexics fairly represented? Ethnic minorities? And so on.
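To make the "count off from the top" step concrete, here is a minimal sketch – again my own illustrative code, not the real system – of ordering a shortlist by average round-two rank, where lower is better:

```python
from statistics import mean, median

def select(rankings, slots):
    """Order shortlisted sessions by mean round-two rank (median as tie-break),
    lower being better, and take the top `slots`. The real process then layers
    human judgement - track clashes, diversity, budget - on top of this."""
    ordered = sorted(rankings.items(), key=lambda item: (mean(item[1]), median(item[1])))
    return [title for title, _ in ordered[:slots]]

# Toy example: three shortlisted talks ranked by three reviewers, two slots.
print(select({"Talk A": [1, 2, 1], "Talk B": [3, 1, 2], "Talk C": [2, 3, 3]}, slots=2))
# ['Talk A', 'Talk B']
```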

We also have a financial consideration. AOTB pays a travel allowance that depends on how the speaker travels. This year when we completed selection and looked at the costs we were within budget. That doesn't always happen: some years we have to cut back on long-haul speakers, some years we argue and get more money.

Finally, we send acceptances. We ask everyone who is accepted to confirm they will attend and speak. Hopefully everyone says yes, and says so quickly. However, we often lose people for one reason or another, and then we make quick substitutions – that is why we hold off sending the declines. Once the lineup is settled we send the declines.

Every year we decline some amazing sessions that we would really like to host. We only have so much space. Every session we accept means another we can't accept. We have enough good material for two conferences but only have one.

Whether you have been declined or accepted for 2020, whether you are thinking of submitting in 2021, or you are just trying to find out how conferences operate, I hope that helps.


Finally, you can see and try Mimas, our conference review system – it is now open source on GitHub. I'll blog more about this soon.
(Now reviewing is over I have a bunch of updates to do, so the system may be up and down randomly this week.)


