Unit Tests for student feedback: state of the art?


(Matt Price) #1

I’m getting ready for next semester and took a little trip down memory lane via this old thread.

In it, I asked what strategies people use for automating feedback to students via unit tests that they can run themselves (locally, or perhaps via Travis-CI on submission).
I’m wondering what tools people are currently using? I see @thomasjbradley is still developing markbot and @smattingly’s client-side-js-tests repo is still up. What else is out there and what other ideas do people have? Thank you all in advance!

Looks like there are a couple of similar threads, like Giving feedback in the middle of assignments and Student feedback templates, but they are also fairly old, so I'd love to hear what people are doing at present!


(Tcowan) #2

This semester I am teaching an introductory iPhone programming class online and have provided my students with a set of user interface tests they can run themselves. I stood up a Jenkins server that pulls their latest commit, compiles their app, runs the same tests, and automatically posts a grade to our Canvas system based on a grading rubric.

I also teach a Scripting (Bash, Python, Perl) class online where the tests are provided to the students, but I run a script manually to grade them. That class is not on GitHub yet. I use something called Cucumber (in a corrupted, totally industry-unacceptable way) to exercise my students' submissions and provide them with a test results report. Next semester, the Scripting class will move to GitHub for near-total automation.
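
In essence, the grade-posting step is a single Canvas API call. Here is a minimal sketch of just that step, not my production script; the URL, token, and ID variables are placeholders:

    #!/usr/bin/env bash
    # Post a grade to Canvas after a test run (sketch).
    # CANVAS_URL, CANVAS_TOKEN, COURSE_ID, ASSIGNMENT_ID, and STUDENT_ID
    # are placeholders for the real values.
    set -euo pipefail

    GRADE="$1"   # e.g. 85, as computed from the grading rubric

    curl -s -X PUT \
      -H "Authorization: Bearer ${CANVAS_TOKEN}" \
      -F "submission[posted_grade]=${GRADE}" \
      "${CANVAS_URL}/api/v1/courses/${COURSE_ID}/assignments/${ASSIGNMENT_ID}/submissions/${STUDENT_ID}"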

I might add that students absolutely love knowing what their grade will be in advance.


(Matt Price) #3

@tcowan that all sounds really great. Do you have links to any of these tools? I would love to steal from you!


(Tcowan) #4

I would love to provide them. I do need to "clean them up" a bit, to remove secrets like passwords and user IDs, and to document them. In a sense, these tools are like a stew made from the contents of your refrigerator, without a recipe: I have to reverse engineer them to tell you how they work and how to use them. Right now, I am preparing for the Spring semester and winding up the Fall semester. I will create a project in my to-do list and get started on it. My colleagues especially would like to use them.

Meanwhile, I can answer questions about them if you have any.


(Łukasz Łaniewski-Wołłk) #5

I use a custom script built around an extended version of ok.sh to access the GitHub API. The script does the following (a rough sketch follows the list):

  • Check which repos have been modified since the last run (ok.sh)
  • Clone/pull those repos (git)
  • Check whether any commits were made on top of the base assignment (git rev-list upstream/master... --count)
  • If there is something new, check the status of the commit on GitHub (ok.sh), to be certain we didn't test it earlier
  • Run the tests
  • Add a comment to the commit with the test log
  • Add a status (error/success) to the commit.
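
A rough sketch of that loop, using plain curl against the GitHub API in place of my extended ok.sh (which I haven't published). GITHUB_TOKEN, ORG, and run_tests.sh are placeholders; it requires jq and assumes each clone has an upstream remote pointing at the base assignment repo:

    #!/usr/bin/env bash
    # Test-and-report loop over all student repos (sketch).
    set -euo pipefail

    api="https://api.github.com"
    auth=(-H "Authorization: token ${GITHUB_TOKEN}")
    json=(-H "Content-Type: application/json")

    for repo in $(curl -s "${auth[@]}" "${api}/orgs/${ORG}/repos?per_page=100" | jq -r '.[].name'); do
      [ -d "$repo" ] || git clone --quiet "https://github.com/${ORG}/${repo}.git"
      git -C "$repo" pull --quiet
      sha=$(git -C "$repo" rev-parse HEAD)

      # Anything on top of the base assignment? (assumes an 'upstream' remote)
      new=$(git -C "$repo" rev-list upstream/master... --count 2>/dev/null || echo 0)
      [ "$new" -eq 0 ] && continue

      # Skip commits we already tested: an untested commit reports "pending"
      state=$(curl -s "${auth[@]}" "${api}/repos/${ORG}/${repo}/commits/${sha}/status" | jq -r '.state')
      [ "$state" != "pending" ] && continue

      # Run the tests, capturing the log and the outcome
      if log=$(./run_tests.sh "$repo" 2>&1); then result=success; else result=failure; fi

      # Comment on the commit with the test log
      jq -n --arg body "$log" '{body: $body}' |
        curl -s "${auth[@]}" "${json[@]}" -d @- "${api}/repos/${ORG}/${repo}/commits/${sha}/comments" > /dev/null

      # Set the commit status
      jq -n --arg s "$result" '{state: $s, context: "autograder"}' |
        curl -s "${auth[@]}" "${json[@]}" -d @- "${api}/repos/${ORG}/${repo}/statuses/${sha}" > /dev/null
    done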

I would love to do this with Jenkins, but I don't have enough virtual machine resources (virtualization is crucial, as students can submit pretty much anything).

One thing I can recommend: be very aware of what students can submit. Even without malicious intent, students can sometimes produce a solution that "works on their computer" but, when run against the tests, eats up all the memory or runs indefinitely.
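
The simplest mitigation, short of full virtualization, is to run each submission under resource limits. A minimal sketch using coreutils timeout and a ulimit in a subshell; the limits and file names are only illustrative:

    # Run an untrusted submission with a memory cap and a time limit (sketch).
    (
      ulimit -v 524288        # cap virtual memory at ~512 MB (value is in KB)
      timeout 30s ./student_solution < test_input.txt > actual_output.txt
    )
    echo "exit status: $?"    # 124 means the 30-second time limit was hit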


(Dawid Zalewski) #6

Unit tests are integrated into the assignments. Students can run them locally to see whether they are going in the right direction. The build also invokes a static analysis tool; generally speaking, students' code shouldn't trigger any warnings, points are subtracted for any major ones, and students are aware of this. When students push to GitHub, a CI run is triggered on Travis with the same tests; this is just to have a record of their work.

For grading, I clone students' repositories and run the basic unit tests plus additional ones. The additional tests cover corner cases that are not tested explicitly on students' computers. The feedback on failing tests is generated automatically and I send it to students together with the grade.
I hope to automate it even further and trigger automated tests with grading + feedback on an AWS instance.
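
For the curious, the grading pass is conceptually just a loop over repositories. A minimal sketch, where roster.txt, run_basic_tests.sh, and run_hidden_tests.sh are hypothetical names standing in for the actual setup:

    #!/usr/bin/env bash
    # Clone each student's repo, run public and hidden tests, and collect
    # the output as a per-student feedback file (sketch).
    # No -e: a failing test run should not abort the whole loop.
    set -uo pipefail

    while read -r student repo_url; do
      git clone --quiet "$repo_url" "$student"
      {
        echo "== Basic tests (the same ones you can run locally) =="
        ./run_basic_tests.sh "$student"
        echo "== Additional corner-case tests =="
        ./run_hidden_tests.sh "$student"
      } > "feedback_${student}.txt" 2>&1
    done < roster.txt   # lines of the form: <student-id> <repo-url>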