GitHub Classroom autograding / testing

GitHub Classroom now has this nice feature that allows us to run tests automatically and use autograding. The documentation is still quite sparse, and it was hard to find any practical case studies. Has anyone in this forum tried the feature in practice yet? I am interested in running JUnit tests in Java and Jest tests in JavaScript. It would be nice to hear about your experiences and get some practical guides.

2 Likes

Hi! I just created some activities for my students, where each activity has some exercises (mainly functions) that are tested via the autograding tool. Here is one recent example of mine. Notice that the repo is a template, so that the student assignments can be created faster, according to GitHub. One feature I think is missing: we may want a solution branch in our repo, but if it exists it will be cloned to the students’ repos too, so the current workaround is to keep the solution in a separate private repo.

2 Likes

:wave: Hi @juhahinkula, we will be offering webinars that go over GitHub Classroom and its features (including autograding). You are welcome to join us:

6 Likes

Thanks @ericdrosado. I enrolled in a session.

1 Like

The part that I really need is assigning scores (points) to tests and then somehow exporting those points to my LMS (Moodle). The documentation doesn’t give me the impression that’s possible.

Hi @kevinwortman

I don’t think what you’re looking for is doable as of now. However, this doesn’t mean that autograding won’t be able to provide this feature in the near future.

Autograding is built on GitHub Actions, which is a huge, very rich framework for sophisticated continuous integration. From this perspective, autograding merely automates the provisioning of template workflows in GitHub Actions terms. At any rate, you’re free to bypass autograding as it is and craft your own GitHub Actions workflow to get what you want. Of course, this comes at the cost of learning the tool, although there are plenty of examples and snippets, so the learning curve is not that steep.
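For instance, a hand-rolled workflow that just runs a Gradle test suite on every push might look roughly like this (the file path, Java version, and build command below are assumptions to adapt to your own project, not something Classroom generates for you):

# .github/workflows/run-tests.yml: a minimal hand-written alternative to the
# workflow that autograding generates (a sketch only; adjust the Java version
# and the gradle command to your own project)
name: run-tests
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # check out the student repository
      - uses: actions/checkout@v2
      # install a JDK so the JUnit tests can compile and run
      - uses: actions/setup-java@v1
        with:
          java-version: '11'
      # run the test suite; the job fails if any test fails
      - name: Run tests
        run: gradle test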

Anyway, with regard to the risk of cheating, take a look at this other post.

Very likely, autograding is currently meant as a tool for students to test their solutions automatically (and that’s great), not as an “actual” academic grading system.

2 Likes

@pattacini Thanks for the response.

Yes, real-time constructive feedback to students is very worthwhile. Though IMO the name “autograding” is over-promising as long as there is no notion of scores. When students and faculty hear “grades”, they really expect numerical assessments.

Maybe “autochecking” or “autofeedback” might be a better way to frame this. Thanks!

Yes, probably “autograding” doesn’t reflect exactly what faculty would expect from the service, although the service as it is now might correspond to the first step in a roadmap. This is just my speculation.

You actually get scores for each test that you run using the autograding setup. It may not be straightforward for now, but exporting them to an LMS can be done too.

Interesting, could you elaborate on where/how?


Can you check this out?

1 Like

Hi folks!

Every test you define in our UI can have a point value. At the moment, points per test are not very granular: a passing test gets 100% of its points, and a failing test gets 0 points.

E.g., if you have a testing suite ./test.sh, we award 100% of the points if it exits 0 and none if it exits non-zero. Translated to standard testing frameworks, they only exit 0 if all tests pass.

So to give more granular “grades”, you’ll need to break your testing suite into different GitHub Classroom “tests” and assign each one a point value.
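For example, splitting a suite into two Classroom tests worth 50 points each would produce an autograding.json along these lines (the names and the test_part1.sh / test_part2.sh scripts here are just placeholders):

{
  "tests": [
    {
      "name": "Unit tests part 1",
      "setup": "",
      "run": "./test_part1.sh",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 10,
      "points": 50
    },
    {
      "name": "Unit tests part 2",
      "setup": "",
      "run": "./test_part2.sh",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 10,
      "points": 50
    }
  ]
}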

We’re aware that this isn’t an optimal solution, and we’re planning on improving it soon! In the future, we’d like to be able to interpret from the testing framework’s output how many tests passed and failed, and award a percentage of the points based on that. We’re not done building this feature, so please keep the feedback coming and we’ll do our best to address it in the future!

Cheers,

Nathaniel
Program Manager - GitHub Classroom

1 Like

Hi Nathaniel,

I broke up my JUnit tests into 4 files and assigned each of them 10 points. However, the autograder is giving the students either a 0 or 40 and nothing in between. That is, I have students that pass all of the tests in 2 of the files, but they are still getting a 0. What am I doing wrong?

Here is the pertinent part of the action log:

BUILD FAILED in 17s
Task :test FAILED
TrailTest > testIsLevelTrailSegment1() PASSED
TrailTest > testIsLevelTrailSegment2() PASSED
TrailTest > testIsDifficult1() PASSED
TrailTest > testIsDifficult2() PASSED
NumberCubeTest > testGetCubeTossesCtor1() PASSED
NumberCubeTest > testGetCubeTossesCtor2() PASSED
NumberCubeTest > testGetLongestRun1() PASSED
NumberCubeTest > testGetLongestRun2() PASSED
TokenPassTest > testDistributeCurrentPlayerTokens() PASSED
TokenPassTest > testTokenPassCtor1() PASSED
TokenPassTest > testTokenPassCtor2() PASSED
TokenPassTest > testTokenPassCtor3() PASSED
TokenPassTest > testDistributeCurrentPlayerTokens2() PASSED
HorseBarnTest > testFindHorseSpace1() PASSED
HorseBarnTest > testFindHorseSpace2() FAILED
    org.opentest4j.AssertionFailedError at HorseBarnTest.java:32
HorseBarnTest > testFindHorseSpace3() FAILED
    org.opentest4j.AssertionFailedError at HorseBarnTest.java:44
HorseBarnTest > testConsolidate() PASSED
3 actionable tasks: 3 executed
NumberCubeTest.java
Error: Exit with code: 1 and signal: null
TokenPassTest.java

I will note that NumberCubeTest is listed first in the autograding.json, even though it is not the test that has any failures in this case. Here is my autograding.json:

{
  "tests": [
    {
      "name": "NumberCubeTest.java",
      "setup": "",
      "run": "gradle test",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 10,
      "points": 10
    },
    {
      "name": "TokenPassTest.java",
      "setup": "",
      "run": "gradle test",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 10,
      "points": 10
    },
    {
      "name": "TrailTest.java",
      "setup": "",
      "run": "gradle test",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 10,
      "points": 10
    },
    {
      "name": "HorseBarnTest.java",
      "setup": "",
      "run": "gradle test",
      "input": "",
      "output": "",
      "comparison": "included",
      "timeout": 10,
      "points": 10
    }
  ]
}

OK - I figured out how to get the individual tests to run. You need to specify the test class in the Run command for each test. I added gradle test --tests followed by the test class name, for example gradle test --tests NumberCubeTest. See the attached photo for an example Java test case.
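For reference, each entry of my autograding.json now looks roughly like this (only the run command changed from what I posted above):

{
  "name": "NumberCubeTest.java",
  "setup": "",
  "run": "gradle test --tests NumberCubeTest",
  "input": "",
  "output": "",
  "comparison": "included",
  "timeout": 10,
  "points": 10
}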

Ah! Glad you’ve figured it out, thanks for sharing your process!

It’s worth noting that you can specify a test within the test class you want to run:

gradle test --tests HelloWorldTest.TestOne

See: https://docs.gradle.org/current/userguide/java_testing.html#simple_name_pattern

2 Likes

Oh - that’s great!

That would allow for an even more granular level of testing.

It would be nice if there were a file picker for this. I had a typo in a couple of my test names, which led to some angst.

Overall, the autograder has been great, especially during this period of remote teaching.

Hi all,

I have to set up autograding without using Gradle, just executing some JUnit 4 tests. I read somewhere that the GitHub Actions machines have all the necessary libraries on the classpath for running Java tests, but I can’t figure out how to run the test.

This is my autograding setup:

but I got

Error: Could not find or load main class org.junit.runner.JUnitCore

when executing the test.

Any help will be much appreciated!

I solved this problem by adding junit4.jar to my repository and then using:

setup: javac -cp junit4.jar:. *.java
run: java -cp junit4.jar:. CoordinateTest
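For anyone else hitting this, the same commands dropped into an autograding.json entry would look roughly like this sketch (the timeout and points values are placeholders, and junit4.jar is assumed to sit at the repo root):

{
  "name": "CoordinateTest",
  "setup": "javac -cp junit4.jar:. *.java",
  "run": "java -cp junit4.jar:. CoordinateTest",
  "input": "",
  "output": "",
  "comparison": "included",
  "timeout": 10,
  "points": 10
}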
