I write unit tests for my C code all the time. It's not difficult if you use a good build system and are willing to stomach some boilerplate. Here is one test from my "test suite" for my npy library:
Your function looks like it's doing I/O, which won't work in a compile-time test. Here's an example of a unittest for the ImportC compiler:
struct S22079
{
int a, b, c;
};
_Static_assert(sizeof(struct S22079){1,2,3} == sizeof(int)*3, "ok");
_Static_assert(sizeof(struct S22079){1,2,3}.a == sizeof(int), "ok");
The semantics are checked at compile time, so no need to link & run. With the large volume of tests, this speeds things up considerably. The faster the test suite runs, the more productive I am.
Hey Walter, importC is great but on Mac it doesn't work right now because Apple seems to have added the type Float16 to math.h (probably due to this: https://developer.apple.com/documentation/swift/float16) and DMD breaks on that.
It is difficult to imagine that compile-time interpretation of tests is faster than compiling and running them for anything more complex. And for trivial stuff it should not matter. Not being able to do I/O is a limitation, not a feature.
For one of my C projects (ca. 450 .c files), a full rebuild on my (not super fast) laptop takes just under 10 seconds (incremental builds < 1s). Compiling and linking all unit tests takes a second, and running all unit tests takes 6-7 seconds. So even running the optimized test code almost doubles the time for a full rebuild. Still, I like that the machine code being tested is the code that is actually used. (BTW: a single C++ file with CUDA templates that someone added to the project, when activated, almost doubles the build time.)
Probably yes. Macros are about as bad as templates in my experience (although C++ people will disagree, I don't see much difference). But mostly I plan to just let the compiler specialize the functions during optimization. I haven't looked at this specific problem, though.
For one, templates can be stepped through in the debugger without additional effort, offer type checking, don't evaluate their parameters multiple times, and don't require extra parentheses and curly braces to protect against misuse.
Debuggers can expand macros, and you can also look at the preprocessed output or even compile it (you can't do that with the expanded form of templates). I agree that once things get more complicated this stops being very good. But that doesn't matter much to me, as I will use neither complicated macros nor templates. And simple macros are just fine.
You will be pleased to know that you are not the only one who does this.
I previously went down the rabbit hole of fancy unit test frameworks, and after a while I realised that they didn't really win much, so I settled on something almost identical to what you have (my PRINT_RUN macro has a different name and requires the () to be passed in; I only ever write it if the time to run all the tests exceeds a second or so, just to make it really convenient to point the finger of blame).
The things I do which are potentially looked upon poorly by other people are:
1) I will happily #include a .c file that is being unit tested so I can call static functions in it (I will only #include a single .c file)
2) I do a tiny preprocessor dance before I #include <assert.h> to make sure NDEBUG is not defined (in case someone builds in a "release mode" which disables asserts)
This test/src separation always felt like html/css to me. When I was still using C, I wrote tests right after a function, as a static function with “test_” in the name, plus one big run-all-tests function at the end. All you have to do then is include the .c file and call that function. Why I would ever want to separate a test from its subject is a puzzling thought. It would be nice to have “testing {}” sections in other languages too, but in C you can get away with relying on DCE, or worst case #ifdef TESTING.
Because tests also serve as api validations. If you can't write a test for functionality without fiddling with internal details the api is probably flawed. Separation forces access via the api.
I don’t need anything to be “forced” on me, especially when the forcing is nominal and I can #include implementation details anyway. You may need that in teams with absurdly inattentive or stubborn members, but for you personally it’s enough to understand the principle and decide when to follow it. The idea is simply to keep tests close to the definitions, because that’s where the breaking changes happen.
If you can't write a test for functionality without fiddling with internal details the api is probably flawed
This logic is flawed. If you have an isolated implementation for some procedure that your api invokes in multiple places (or simply abstracted it out for clarity and SoC), it’s perfectly reasonable to test it separately even if it isn’t officially public.
I agree with all of the above. The only fancy thing which I added is a work queue with multiple threads. There really isn't any pressing need for it since natively compiled tests are very fast anyway, but I'm addicted to optimizing build times.
Borrowed heavily from boost.test.minimal; it used to be a single header, but over the years I've had to add a single translation unit!
My takeaway is that if you keep your code base in a condition where tests are always passing, you need far fewer complications in your testing tools: less error reporting, fault tolerance, and so on!