Autograder gives points when there is no assertion failure

Is it possible to change how the autograder awards points? That is, only award points when a positive assertion is made?

The backstory is that I just found three students who had commented out the assertXXX methods in the JUnit test cases, and the autograder is giving them credit for passing the tests.

I don’t think so.

Autograding catches any error thrown during each test; if no error is thrown, the test passes.

The assertXXX() methods throw when their condition is not met. If there is no assert call, then something else needs to throw an error, or the test will pass in JUnit and it will pass in autograding.

You could try including an autograding test to check if the JUnit test has been modified. If you have the Feedback PR enabled in the assignment, something like the following should work:

In setup: git fetch --unshallow --update-head-ok origin '+refs/heads/*:refs/heads/*'
In run: [ -z "$(git log feedback..main <relative-path-to-junit/test>)" ]

Setup fetches the rest of the repo on the workflow runner (Actions only does a shallow clone by default). The run command then checks that there are no log entries for the specified file (use the path relative to the root of the repo).
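Putting the two commands together, a test entry in .github/classroom/autograding.json might look something like this. This is just a sketch: the path src/test/java/MyTest.java is a placeholder for your real JUnit file, and the points/timeout values are arbitrary.

```json
{
  "tests": [
    {
      "name": "JUnit tests unmodified",
      "setup": "git fetch --unshallow --update-head-ok origin '+refs/heads/*:refs/heads/*'",
      "run": "[ -z \"$(git log feedback..main src/test/java/MyTest.java)\" ]",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 1,
      "points": 1
    }
  ]
}
```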

Of course, students could just edit .github/classroom/autograding.json. The current solution to this is to have a talk with your students about what academic misconduct is. If the Classroom assignment is just for lab exercises, highlight how ‘cheating’ will just mean that they miss out on the additional teaching that they need.

Definitely agree about having the discussion about academic dishonesty. Having those discussions today :frowning:

Thanks for the insight into how a point is and is not awarded.

I like the idea of adding an autograding test that checks for changes to the JUnit files; however, I haven’t been enabling the feedback pull request. Maybe that is something I should start doing. I am thinking about using the setup command that you have and then doing a checksum check in the run command. Do you think that would be feasible?

Yes, a checksum could work, if you pre-compute it and compare against it in the autograding test. It would save having to check out the whole repo on the runner and find an early commit to check against (a feedback branch is created if the Feedback PR is enabled). You’d have to re-compute it and update the autograding test whenever you change the JUnit test, though.

Just trying out your solution using the Feedback PR and it is not working.

setup: git fetch --unshallow --update-head-ok origin '+refs/heads/*:refs/heads/*'
run: [[ ! $(git log feedback..main ./src/) ]]

When I make a change to the repo, I get the following error.

Are the square brackets necessary? I took them out and got the same error.

My “main” branch is called main, as in your example.

I should add that I configured this in the Add Test dialog within GitHub Classroom as a Run command.

Sorry, my bad. [[ is bash, not sh. Try

[ -z "$(git log feedback..main <relative-path-to-junit/test>)" ]

The -z option tests for an empty string. Remember to put double quotes " " around the command substitution.

[Edited above comment with fix]

That’s great and it works for me. Thank you very much!

Now you’ve opened up other things to me, as I didn’t really know when or how to use the setup command, or that I could use sh commands there.

Once again - a hearty thanks to you!

Depending on how you are running the tests, you can check the output against the expected output. In my C++ classes, I verify that the output from the framework (I use Catch2) reports the expected number of tests and assertions, for example:

All tests passed (6 assertions in 3 test cases)

And in my Java/JUnit tests run from Gradle, I get output such as

AsciiArtImageTest > testInvalidFile() PASSED

So I just put that as the expected output in autograding.json.

Obviously it doesn’t cover the case where a student is trying to cheat, and is changing the assertions, but it covers students who have commented things out for development purposes and accidentally committed the changes.

As for students trying to cheat, I’ve seen several different approaches, but changing the tests has been less common than students changing the weights in the autograding.json file, or otherwise manipulating the build system.

Thanks for the info!