Using autograding results

Hello everyone!

I have been searching for this for a while, and I haven’t found any solutions. However, I can’t believe that such trivial basic functionality does not exist, so I’ll try asking here.

Is there any way to just get a list of all students' autograding results? Manually copying all results into a text file is just ridiculous, especially with many students. I suspect I could use the Classroom Assistant (https://github.com/education/classroom-assistant/releases/tag/v2.0.3) to clone all student repos, and the results of the tests are probably stored somewhere in there, but this seems like total overkill and somewhat hacky. Are there any other solutions?

Coming kinda hot, fresh out the box, huh?

If it is such a “trivial basic” chore then perhaps you should write a tool to do this job, and share it with the community. Everybody wins.

Perhaps I would better understand if you clarified: How is “just a list” of results different than “Manually copying all results into a text file”?

Text files are extremely useful. Simple, clean, reliable, parse-able, etc…

If it is such a “trivial basic” chore then perhaps you should write a tool to do this job, and share it with the community. Everybody wins.

This is the basic idea of open source, yes. However, the reason why I’m using an existing tool is that I do not have the time to build everything on my own. And accessing grading results is trivial basic functionality that I would expect from a classroom solution.

Perhaps I would better understand if you clarified: How is “just a list” of results different than “Manually copying all results into a text file”?

I have two classrooms with ~30 and ~60 students, and next semester I’d like to expand the use of classroom to more courses, since I like the idea of teaching my students to routinely use git. However, manually copying 90 autograding results to whatever destination every week (I have weekly assignments) is both error prone and massively inefficient. I’d expect that, since autograding results are visible in the UI, there’d be some more efficient way of accessing them - an API call, a “download as CSV” button, something. However, I have not found anything like that.

Text files are extremely useful. Simple, clean, reliable, parse-able, etc…

I totally agree. This is not about the format, it is about accessing them in the first place in any format whatsoever (other than seeing them in the web interface).

Hi - I had the same problem. I wrote a quick tool to download the “grades” from a unit test suite: GitHub - aronwc/ghc-utils: utilities for interacting with GitHub Classroom. This assumes the tests have all been run via the GHC workflow and returns output like “Points 10/20”. It grabs the output for the latest commit and writes a CSV file with the results for each student.
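In case it helps to see the general shape, here is a minimal sketch of the same idea (not the actual ghc-utils code). It walks an organization’s assignment repos, reads the check-run output on each repo’s default branch, and writes a CSV. The organization name, assignment prefix, and token environment variable are placeholders, and the “Points x/y” pattern just follows the description above.

```python
# Sketch only: list autograding check-run results for every student repo
# in an assignment and write them to a CSV.  ORG, ASSIGNMENT_PREFIX and the
# GITHUB_TOKEN environment variable are placeholders.
import csv
import os
import re
import requests

API = "https://api.github.com"
ORG = "my-classroom-org"            # placeholder organization name
ASSIGNMENT_PREFIX = "assignment1-"  # Classroom prefixes student repos like this
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
POINTS_RE = re.compile(r"Points\s+(\d+)/(\d+)")  # matches e.g. "Points 10/20"

def org_repos(org):
    """Yield every repo in the organization, following pagination."""
    url = f"{API}/orgs/{org}/repos?per_page=100"
    while url:
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        yield from resp.json()
        url = resp.links.get("next", {}).get("url")

def latest_points(repo):
    """Return 'earned/total' from the check runs on the default branch, or ''."""
    url = (f"{API}/repos/{ORG}/{repo['name']}"
           f"/commits/{repo['default_branch']}/check-runs")
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    for run in resp.json().get("check_runs", []):
        output = run.get("output") or {}
        text = " ".join(filter(None, [output.get("title"),
                                      output.get("summary"),
                                      output.get("text")]))
        match = POINTS_RE.search(text)
        if match:
            return f"{match.group(1)}/{match.group(2)}"
    return ""

with open("grades.csv", "w", newline="") as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["repo", "points"])
    for repo in org_repos(ORG):
        if repo["name"].startswith(ASSIGNMENT_PREFIX):
            writer.writerow([repo["name"], latest_points(repo)])
```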

This is, of course, an unsatisfying solution…(imho GHC needs more developers, but it’s not open source, so I’m not sure how to contribute…)

Hi!
Awesome, thanks! I will definitely take a look at that. Unfortunately, many students are currently complaining that tests fail in Classroom while they run perfectly locally, which I was able to confirm (apparently, files from the repo that I use as input in the tests sometimes appear to be empty, i.e. BufferedReader.readLine() returns exactly one empty line - which of course leads to hard-to-debug failures, especially since test logs are not published in the default grading workflows). So I will have to automatically download all repos and re-run the tests locally anyway, but it will be very useful to be able to look at your code to find out what goes where in the actions files!

I have a solution similar to this (using the very nice ‘ghapi’ library). I don’t actually use the GitHub autograding interface - instead, I use specifically named workflows so that I can have “mid-lab checkpoints” or deadlines that students need to hit before they complete the overall assignment.

I use the “artifacts” interface to record a grading artifact and the grades are then extracted from that using a specified regex.
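In rough outline (a sketch of the idea, not my actual script), that looks something like the following: list the repo’s artifacts, download the newest one with the grading name, and regex out the score. The artifact name, the grade pattern, and the token handling below are placeholders.

```python
# Sketch only: fetch the newest grading artifact for a repo and regex out
# the score.  The artifact name and GRADE_RE pattern are placeholders.
import io
import os
import re
import zipfile
import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
GRADE_RE = re.compile(r"GRADE:\s*(\d+)")  # whatever pattern the grading step writes

def latest_grade(owner, repo, artifact_name="grade-report"):
    """Download the newest artifact with the given name and extract the grade."""
    resp = requests.get(f"{API}/repos/{owner}/{repo}/actions/artifacts",
                        headers=HEADERS)
    resp.raise_for_status()
    artifacts = [a for a in resp.json()["artifacts"] if a["name"] == artifact_name]
    if not artifacts:
        return None
    newest = max(artifacts, key=lambda a: a["created_at"])
    # The archive URL returns a zip of the artifact's files.
    archive = requests.get(newest["archive_download_url"], headers=HEADERS)
    archive.raise_for_status()
    with zipfile.ZipFile(io.BytesIO(archive.content)) as zf:
        for name in zf.namelist():
            match = GRADE_RE.search(zf.read(name).decode(errors="ignore"))
            if match:
                return int(match.group(1))
    return None
```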

@aronwc - looking at your code, you’re parsing “check runs” from each commit. Does this use the same mechanism as GitHub Actions? I’m currently using Actions and have a private “runner” set up since we have so many students in the class. I’ll have to look into the check runs mechanism.

My particular goal has been interfacing with an LMS (Moodle in my case), so I combine the GitHub Classroom roster with the roster from the LMS. This is for a ~350-person class, and there were some missing entries in the GitHub Classroom roster which I could have cleaned up if I had pursued “unlinked accounts” earlier.

This then gives me the appropriate keys to produce a CSV of when people completed the checkpoints, the grade, the number of days before the deadline that they accepted the assignment, and the number of days before the deadline of their last commit.
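A minimal sketch of what that roster join might look like (not my actual code; it assumes the Classroom roster export has “identifier” and “github_username” columns and that the LMS export shares the identifier, and all file and column names are placeholders):

```python
# Sketch only: join the GitHub Classroom roster with an LMS (Moodle) export.
# File names and column names below are placeholders.
import csv

def load_keyed(path, key):
    """Read a CSV into a dict keyed on the given column."""
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}

classroom = load_keyed("classroom_roster.csv", "identifier")
lms = load_keyed("moodle_roster.csv", "idnumber")  # placeholder LMS key column

with open("merged.csv", "w", newline="") as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["identifier", "github_username", "lms_email"])
    for ident, row in classroom.items():
        lms_row = lms.get(ident, {})
        # Students with no LMS match show up as "UNLINKED" for manual cleanup.
        writer.writerow([ident,
                         row.get("github_username", ""),
                         lms_row.get("email", "UNLINKED")])
```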

I need to clean up the code a bit (our semester is just ending) but I can post it when that’s done.

That sounds really awesome, I’d be very interested! If I can contribute, I’ll also be happy to do so (as time permits…).
Just one question: Does that mean that you provide the entire test suite to your students from the very beginning, and they just hit checkpoints week by week? If so, do you give them the full class layout of the project from the very beginning? I have them refactoring from time to time (this is an intro to programming, so they learn the more advanced techniques as they go along and refactor accordingly), so providing a full “final” test suite would in my case mean that most tests would not compile until the very end.

Yes, they run all the tests all the time. I’m still figuring out what the best setup is. This is for a class using the textbook and modified versions of the labs from “Computer Systems: A Programmer’s Perspective”. Since I already had well-developed testing & autograding code, I wanted to exploit what I had while also using a single “golden reference” environment to avoid platform-dependent issues. In addition, some labs have a “performance score” and I wanted everyone evaluated on the same hardware.

For some labs, I break the individual tests out into a single workflow and action with multiple ‘run’ statements, and the students see each individual test passing or not.

That strategy assumes the tests build on one another – i.e. that you’d have to pass test 1 in order to pass test 2, etc.

Another strategy (again, still figuring out what makes the most sense…) is to use, e.g., two workflows with multiple jobs. For example, in this “attacklab” (buffer-overflow and ROP) assignment, there are different tasks the student is to do (various forms of code injection, etc.).

There are two “workflows”. The first, named “grade-first”, is the “checkpoint” that is due by a specific date. The second is “grade-rest”, which needs to be finished by the due date. In this case, I used two workflows (grade-first, grade-rest) and each had multiple jobs (e.g. run-ctarget-l3, run-rtarget-l2, run-rtarget-l3). The jobs can be run in parallel by default (Workflow syntax for GitHub Actions - GitHub Docs).

For simplicity, the grading script looks through the list of workflows and keys on the label – i.e. a deadline is expressed for each workflow name:
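In rough outline, that part of the script might look something like this (the workflow names, deadlines, and token handling below are placeholder assumptions, and it uses the standard “list workflow runs for a repository” endpoint):

```python
# Sketch only: key each graded workflow on its name and check whether a
# successful run finished before that workflow's deadline.  Names, dates
# and the GITHUB_TOKEN environment variable are placeholders.
import os
from datetime import datetime, timezone
import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
# A deadline is expressed for each workflow name.
DEADLINES = {
    "grade-first": datetime(2021, 11, 1, tzinfo=timezone.utc),
    "grade-rest": datetime(2021, 11, 15, tzinfo=timezone.utc),
}

def on_time(owner, repo):
    """Map each graded workflow name to True if a successful run met its deadline."""
    resp = requests.get(f"{API}/repos/{owner}/{repo}/actions/runs?per_page=100",
                        headers=HEADERS)
    resp.raise_for_status()
    status = {name: False for name in DEADLINES}
    for run in resp.json()["workflow_runs"]:
        deadline = DEADLINES.get(run["name"])
        if deadline and run["conclusion"] == "success":
            finished = datetime.fromisoformat(run["updated_at"].replace("Z", "+00:00"))
            status[run["name"]] = status[run["name"]] or finished <= deadline
    return status
```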

I only used artifacts to record a grade for one such lab (shown above) since the others all use an oral-grading interview.

The things I still need to figure out are the difference between Actions and the “check_run” interface, how to get the correct artifact for a specific commit (I just use them in time order), and options such as “best” vs. “most recent” grade, etc.
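On the “correct artifact for a specific commit” point, one possible approach (just a sketch, with placeholder credentials) would be to match each workflow run’s head_sha to the commit instead of relying on time order:

```python
# Sketch only: instead of taking artifacts in time order, match the workflow
# run whose head commit is the one being graded, then take its artifacts.
import os
import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def artifacts_for_commit(owner, repo, sha):
    """Return artifacts produced by workflow runs triggered on the given commit."""
    resp = requests.get(f"{API}/repos/{owner}/{repo}/actions/runs?per_page=100",
                        headers=HEADERS)
    resp.raise_for_status()
    found = []
    for run in resp.json()["workflow_runs"]:
        if run["head_sha"] == sha:
            arts = requests.get(
                f"{API}/repos/{owner}/{repo}/actions/runs/{run['id']}/artifacts",
                headers=HEADERS)
            arts.raise_for_status()
            found.extend(arts.json()["artifacts"])
    return found
```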

For another class, we’re going to combine GitHub Classroom with the INGInious autograder (INGInious’ documentation — INGInious 0.6 documentation). Students will still develop assignments using GitHub, but the final grading (as opposed to incremental checks) will be done using INGInious. There are some benefits, including direct score reporting to the LMS. It’s also slightly more secure, since the grading script does not need to be contained in the student repository - this allows us to replace portions of the student’s assignment with standard elements (pulled from a repo). While I can do that using GitHub Actions, it means that we either need to pull from a public repo or expose a PAT (personal access token) via a secret – while it looks like GitHub takes steps to mask or hide the PAT, it’s not really designed to withstand an adversarial attack.

Cool, that looks really awesome! I am totally swamped with the ongoing semester right now, but I will definitely look into your approach (and maybe setting up a local INGInious instance, that also looks very useful) to update my materials for the next semester.
