Autograding catches any errors thrown during each test; if no error is thrown, the test passes.
assert() throws when its condition is not met. If a test makes no assert() call, something else needs to throw an error, or the test will pass in JUnit and therefore pass in autograding.
You could try including an autograding test to check if the JUnit test has been modified. If you have the Feedback PR enabled in the assignment, something like the following should work:
In setup: git fetch --unshallow --update-head-ok origin '+refs/heads/*:refs/heads/*'
In run: [ -z "$(git log feedback..main <relative-path-to-junit/test>)" ]
The setup command fetches the rest of the repo on the workflow runner (Actions only does a shallow clone by default). The run command then checks that there are no log entries for the specified file (use the path relative to the base of the repo).
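For reference, here is a minimal sketch of how those two commands might sit in a classic autograding.json test entry. The test name, timeout, points, and the src/test/java/AppTest.java path are placeholders, so substitute your own relative path:

```json
{
  "tests": [
    {
      "name": "JUnit test file unmodified",
      "setup": "git fetch --unshallow --update-head-ok origin '+refs/heads/*:refs/heads/*'",
      "run": "[ -z \"$(git log feedback..main src/test/java/AppTest.java)\" ]",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 10,
      "points": 0
    }
  ]
}
```

With points at 0 the entry is purely a tripwire; you could attach a weight instead so a modified test file actually costs marks.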
Of course, students could just edit .github/classroom/autograding.json. The current solution to this is to have a talk with your students about what academic misconduct is. If the Classroom assignment is just for lab exercises, highlight how ‘cheating’ will just mean that they miss out on the additional teaching that they need.
Definitely agree about having the discussion about academic dishonesty. I'm having those discussions today.
Thanks for the insight into how a point is and is not awarded.
I like the idea of adding an autograding test that checks for changes to the JUnit files; however, I haven't been enabling the feedback pull request. Maybe that is something I should start doing. I am thinking about using the setup command that you have and then doing a checksum check in the run command. Do you think that would be feasible?
Yes, a checksum could work if you pre-compute it and compare against it in the autograding test. It would save having to check out the whole repo on the runner and finding an early commit to check against (a feedback branch is created when the Feedback PR is enabled). You'd have to re-compute it and update the autograding test whenever you change the JUnit test, though.
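A rough sketch of the checksum approach, assuming sha256sum is available on the runner. The file name and contents below are stand-ins so the idea can be run anywhere; in practice you would pre-compute the hash of your real JUnit file once and paste it into the autograding run command:

```shell
#!/bin/sh
# Work in a scratch directory; AppTest.java here is a stand-in file.
cd "$(mktemp -d)"
printf 'original test contents\n' > AppTest.java

# Pre-compute once (instructor side) and hard-code into autograding.json:
EXPECTED=$(sha256sum AppTest.java | cut -d' ' -f1)

# The autograding "run" command: exits 0 only if the file is untouched.
echo "$EXPECTED  AppTest.java" | sha256sum -c -

# If a student edits the file, the same check exits non-zero:
printf 'student edit\n' >> AppTest.java
echo "$EXPECTED  AppTest.java" | sha256sum -c - || echo "file was modified"
```

Note the two spaces between the hash and the file name; that is the format sha256sum -c expects.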
Depending on how you are running the tests you can check the output against the expected. In my C++ classes, I verify that the output from the framework (I use Catch2) aligns with the expected number of tests and assertions, for example:
All tests passed (6 assertions in 3 test cases)
And in my Java/JUnit tests run from Gradle, I get output such as
AsciiArtImageTest > testInvalidFile() PASSED
So I just put that as the output in autograding.json
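Roughly, that entry looks like the following sketch. The run command, test name, points, and timeout are from my setup and are placeholders for whatever your build uses; note you may need Gradle's --info flag or a testLogging block to get the per-test PASSED lines in the output:

```json
{
  "name": "AsciiArtImageTest passes",
  "setup": "",
  "run": "gradle test --console=plain --info",
  "input": "",
  "output": "AsciiArtImageTest > testInvalidFile() PASSED",
  "comparison": "included",
  "timeout": 10,
  "points": 5
}
```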
Obviously it doesn't cover the case where a student is trying to cheat by changing the assertions, but it does cover students who commented things out during development and accidentally committed the changes.
As for students trying to cheat, I’ve seen several different approaches, but changing the tests has been less common than students changing the weights in the autograding.json file, or otherwise manipulating the build system.