I hope this doesn’t come across as being ungrateful to the team. GitHub Classroom is a very useful tool and has been very beneficial for us. However, I have been carefully following this forum for about 18 months. And there is a recurring theme. Someone brings up the issue of invitation acceptance taking “forever” (figuratively). At various stages, someone from the team may have replied to indicate that a fix was in the works or had been applied. Some time after the fix has been applied, someone else reports a similar long delay. I have kept records of the various postings over that time, but given that I have experienced the problem myself, hopefully we can just take it as read that it exists.
This creates a particular problem in the context of supervised exams, because of the time sensitivity. Particularly so if the supervisor in the room happens not to be technically skilled and is unable to intervene. In any case, students taking exams are already anxious, without the software contributing to that by appearing to have hung.
Where I would hope to advance this is by trying to characterise the problem more accurately. It’s not about slowness. A typical pattern would be that everyone in a group has their invitation acceptance processed promptly, say in 5-10 seconds, while for one person in the group the processing could take 45 minutes. That’s if you let it run, which I have (on occasion) to see what would happen. Notably, it did actually complete and work fine after the 45 minutes, with no intervention. So there’s no distribution of times, just a freak outlier. If I were to hazard a guess, it looks like that one particular acceptance finds itself in some kind of job queue that somehow gets starved, and that something else eventually identifies starved queues and gets them going again. That’s complete conjecture on my part, but it’s one way of explaining the behaviour I’ve seen, where there’s a big gulf between 10 seconds and 45 minutes. Perhaps I’m totally wrong.
So my question to some member of the team would be, does this ring a bell? Is there still a risk that this might occur? (The last time I experienced it was last summer.) I think it’s probably better to be fully transparent about any such risks, given how critically important it is that supervised exams run without a hitch.
Did you read this post on the use of template repositories for assignments?
The introduction of template repositories in GH Classroom represented a significant step forward in terms of reliability.
So, how did you configure your assignment?
Thanks for that, @pattacini. The last time I ran an exam was before the template repository feature was available. My most recent exam was last summer. I started using template repositories around October. I have an upcoming exam I need to run and will use a template repository. I noted that the little explanation under the template repository option claims that it would increase code import speed. But the point I was trying to get across in the above is that I don’t think the problem I was seeing was down to speed. I had not experienced any problems with speed for the source importer with the fairly trivial starter repositories I use. The problem I had experienced seemed to me to be that one job got kind of lost (semi-indefinitely).
I hadn’t spotted that post. It indicates that template repositories are not just about speed, but also about reliability. That sounds like what I want, but it’s not really expanded on anywhere. Is anyone aware of how exactly using template repositories is more reliable? Is this reliability relating to the previous problem of things taking ages?
Essentially, when you spawn an assignment from a template repository you don’t deal with the whole git history, which makes the process very fast and much more reliable.
I once came across a nice comparative study the GitHub folks made to validate this approach, but I can’t find it now. Perhaps @d12 can provide some hints here.
Thanks again. I’m probably coming across as a nit-picker, but the context is the most important thing in my professional life, namely running exams. So when I see a term like “more reliable”, I’m reassured to some extent, but I have very little insight into why that is, exactly. I obviously like to be really on top of things like reliability in the context of exams.
You’re perfectly right to be concerned about reliability during exams, and I can tell you that in the past I faced quite similar problems with my assignments. Nonetheless, with this new mechanism based on templates, I started seeing all the benefits and no longer had to do troubleshooting in front of the students.
I’m not privy to how GH Classroom has been modified under the hood to increase reliability, hence my only suggestion is to run tests yourself.
The only point I can highlight here is that there’s a big difference between doing
`git clone repo` and
`git clone --depth 1 repo` (used with templates, I can imagine) across lots of repos, where the latter discards the repo’s history outright, thus saving a huge amount of bandwidth and time spent by the jobs running on GH machines.
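To make the difference concrete, here’s a small self-contained sketch you can run locally (no network needed): it builds a throwaway repo with three commits, then compares a full clone against a shallow `--depth 1` clone. Note that git only honours `--depth` for local sources when you use a `file://` URL; a plain local path would just copy everything. This is only an illustration of the shallow-clone mechanism, not of what Classroom actually does internally.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Build a source repository with three commits.
git init -q src
cd src
for i in 1 2 3; do
  echo "revision $i" > file.txt
  git add file.txt
  git -c user.email=demo@example.com -c user.name=demo commit -qm "commit $i"
done
cd ..

# Full clone: the entire history comes across.
git clone -q src full

# Shallow clone: only the latest commit is transferred
# (file:// is required for --depth to take effect locally).
git clone -q --depth 1 "file://$tmp/src" shallow

git -C full rev-list --count HEAD      # prints 3
git -C shallow rev-list --count HEAD   # prints 1
```

With starter repos that have long histories, that per-clone saving is multiplied by the number of students accepting the assignment, which is presumably where the speed-up comes from.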