Hi! I recently created some activities for my students; each activity has several exercises (mainly functions) that are tested via the autograding tool. Here is one recent example of mine. Note that the repo is a template, so that student assignments can be created faster, per GitHub's recommendation. One feature I think is missing: we may have a solution branch in our repo, but if it exists it gets cloned into the students' repos too, so the current workaround is to keep the solution in a separate private repo.
Hi @juhahinkula we will be offering webinars that will go over GitHub Classroom and its features (including autograding). You are welcome to join us:
Thanks @ericdrosado . I enrolled in the session.
The part that I really need is assigning scores (points) to tests and then exporting those points to my LMS (Moodle) somehow. The documentation doesn't give me the impression that's possible.
I don't think what you're looking for is doable as of now. However, this doesn't mean that autograding won't be able to provide this feature in the near future.
Autograding is built on top of GitHub Actions, a rich framework for doing sophisticated continuous integration. From this perspective, autograding merely automates the provisioning of template workflows in GH Actions terms. In any case, you're free to bypass autograding as it is and craft your own GH Action to get what you want. Of course, this comes at the cost of learning the tool, although there are plenty of examples and snippets, so the learning curve is not that steep.
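As a starting point, a hand-rolled workflow can be quite small. The sketch below is a minimal, hypothetical example (the file would live under `.github/workflows/`; the job name, Python version, and test command are illustrative assumptions, not anything autograding generates):

```yaml
# .github/workflows/grade.yml (hypothetical)
# Runs the test suite on every push; extend the last step to
# compute and publish scores however your LMS needs them.
name: grade
on: [push]
jobs:
  grade:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - name: Run tests
        run: python -m pytest --tb=short
```

From there you can add steps that post results wherever you like, which is exactly the flexibility plain autograding doesn't expose yet.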
Anyway, with regards to the risk of cheating, take a look at this other post.
Very likely, autograding is currently meant as a tool for students to put their solutions to the test automatically (and that's great), not as an "actual" academic grading system.
@pattacini Thanks for the response.
Yes, real-time constructive feedback to students is very worthwhile. Though IMO the name "autograding" over-promises as long as there is no notion of scores. When students and faculty hear "grades," they really expect numerical assessments.
Maybe "autochecking" or "autofeedback" would be a better way to frame this. Thanks!
Yes, "autograding" probably doesn't reflect exactly what faculty would expect from the service, although the service as it is now might correspond to the first step in the roadmap. This is just speculation on my part.
You actually get scores for each test that you run using the autograding setup. It may not be straightforward for now, but exporting them to an LMS can be done too.
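For context, GitHub Classroom's autograding config assigns a `points` value to each test. A small script can aggregate those points into a total for export; the snippet below is a sketch under that assumption (the file name, config shape beyond the `tests`/`points` fields, and any Moodle-side format are not official, just illustrative):

```python
import json

# Sketch: sum the per-test "points" from an autograding-style
# config (e.g. .github/classroom/autograding.json) into a total
# score you could then write into a CSV for a Moodle grade import.
def total_points(config_text: str) -> int:
    config = json.loads(config_text)
    return sum(test.get("points", 0) for test in config["tests"])

if __name__ == "__main__":
    sample = """
    {
      "tests": [
        {"name": "exercise 1", "run": "pytest tests/test_ex1.py", "points": 5},
        {"name": "exercise 2", "run": "pytest tests/test_ex2.py", "points": 10}
      ]
    }
    """
    print(total_points(sample))  # 15
```

Hooking something like this into a workflow step would get you from per-test results to an LMS-importable number.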
Interesting, could you elaborate on where/how?
Can you check this out?