Question Details

No question body available.

Tags

version-control continuous-integration code-reviews conditions

Answers (7)

Accepted Answer
March 17, 2025 Score: 33 Rep: 221,438 Quality: Expert Completeness: 80%

Yes, you should commit this code, for the reasons you already mentioned. It is a form of test code, and like other test code that checks only correctness, it has value and will probably be reused at a later point in time. However, I would recommend not mixing the timing code directly into your production code; instead, try to implement a separate performance test you can run individually, as part of your test suite.

This should be easy when you have to optimize a single function f1(): write a performance test that exercises f1() in isolation, perhaps with a loop (plus the time measurements) around it. For this, you can use the same well-known techniques used for other kinds of unit tests and automated tests, such as

  • getting access to f1() even when it is private (by making it public, making it public only from the point of view of the test code, or testing it indirectly through some other public function)

  • structuring the test in the form "arrange - act - assert"

  • placing the test code in a separate project that will not be deployed

You will probably not run this test each time your automated test suite runs (so even if you place the timing code in a unit-testing project, don't mark it for automatic execution). You might also replace the "assert" step with printing the measured timings, so you can assess certain speed improvements manually/interactively.

Sometimes restructuring the code to make f1() testable in isolation may seem like too much effort, or it may change the timing behaviour too much. If that is really the case, you may go the "conditional compilation guard" route; this is sometimes a sensible approach. However, it would normally not be my first choice, because it can have a pretty negative effect on the readability of the code. Moreover, if you use conditional compilation guards as the only measure, you lose the possibility of deciding at runtime whether you want a certain performance test to run: you will always have to recompile the code to enable or disable the performance-testing feature.

The "command line flag" route, however, requires implementing the timing code directly as a feature of the production code. This is something I would consider when the goal is not just to test an isolated function in my development environment, but to get performance data from the real production environment. It is also a useful approach for a test environment where one wants to run different tests, sometimes performance tests, sometimes others, using the same binary release.

Whether you need individual switches for individual functions, or a single switch that activates all benchmarks at once, is nothing we can decide for you; choosing the correct granularity should be part of your requirements analysis. In general, a separate switch for each performance measurement gives you the flexibility to decide later whether to benchmark all functions, some functions, or none. Whether you need that flexibility, though, is something only you can tell; it is 100% case dependent.

March 17, 2025 Score: 8 Rep: 85,842 Quality: Medium Completeness: 20%

Consider adding metrics to your code.

Record the timings of various critical paths, how many threads are in use, how many API calls are made, etc. Output these to some performance-monitoring tool, e.g. Windows Perfmon.

This can be done without adding much overhead and will provide valuable data for investigating and diagnosing problems, e.g.: does it run slowly on my computer, when the virus checker runs, as you open more tabs, or with certain config values?

I wouldn't go as far as measuring the timing of every individual function; that's what debug tools are for. But some higher-level measurements should serve as an alert that prompts you to investigate more closely.

March 17, 2025 Score: 6 Rep: 49,694 Quality: High Completeness: 30%

Do you expect to use this timing code ever again? I have written code to find a bug, found the bug, and thrown the code away because it would never be used again. Timing code, however, you might check in together with other timing code, plus documentation on how to use it.

This kind of code you want to run only sometimes, which means that to run it you change something in your code (like turning the timing code on). I'd probably have a header file "timingcode.h" with a big #define that turns timing off completely, plus individual #defines for each item that has its own timing code. So if someone wants timing information for feature XYZ, they check out the latest version, go into "timingcode.h", #define the global switch enabling timing, #define (for example) timing_XYZ, and then run the code.

You probably want to build a version that is as close as possible to the released code. Unless, that is, your problem is debug code that takes so long it interferes with development work, and you want to get timing information for that.

Note: If you have a clueless manager who judges your performance and pays bonuses according to commits, then you obviously commit the code, even if you know 100% that the commit is pointless and stupid and gives no benefits whatsoever. If management rewards you for doing stupid things then you do stupid things and nobody will blame you for it.

March 17, 2025 Score: 3 Rep: 1,189 Quality: Medium Completeness: 60%

In the general case, I typically wrap test code like this in an #ifdef so that it's there if you need it, but it's not actually part of the build unless you go out of your way to turn it on.

That's only for test code of meaningful complexity, though. If this is something as simple as wrapping a function call with a timer start/stop, then I would argue that it's so trivial and easy to recreate that it's not worth the overhead and hassle of keeping (I would rank that alongside adding a printf call to see the value of a variable). If the test code requires extensive setup or has to be measured in a very specific way, then it might be significant enough to be worth keeping.

For this type of performance measurement in particular, I recommend not writing this sort of code at all and instead using a code-profiling tool (Valgrind, Intel VTune, etc.) as part of your standard test routine. These tools automatically insert timing and tracking code into every function call and generate a much more comprehensive performance analysis. It's also completely automatic, so you don't have to add anything to your code at all.

March 18, 2025 Score: 1 Rep: 238 Quality: Low Completeness: 80%

Now the question is, how do I deal with the added code that measures the execution time of the function? Do I commit it?

Yes, do commit stuff like this as temporary commits, where "temporary" means something you (or your colleagues) want to have present while working on the branch but eventually want to remove before merging the result in the end.

The key to temporary commits is to create them with commit messages that clearly differentiate them from normal commits, so they are trivial to remove. My approach is to add a "====" prefix and suffix, e.g.

git add -p src/f1.cpp
git commit -m "==== f1 timing debug ===="

With such commit messages they stand out like sore thumbs, and they are trivial (even automatable) to remove before making a release.

If I don't commit it, other developers who review my code have to add back the performance measuring code to their local copy to test my revision.

With temporary commits you only need to agree on a naming convention. Then you can freely share your debug code in a "safe" way: your colleagues intuitively understand that this timing debug is temporary, and they still benefit from the functionality.

And if you want to keep some of these temporary commits for use after finishing the current branch, there is no problem keeping them on a different branch (e.g. debug) which you can rebase wherever you want; they are, after all, just normal commits that you simply treat differently.
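The convention above can be demonstrated end to end in a throwaway repository (the branch and commit messages here are invented for the demo):

```shell
set -e
# Throwaway repository just to demonstrate the naming convention.
demo=$(mktemp -d) && cd "$demo"
git init -q
git config user.email demo@example.com && git config user.name demo

git commit -q --allow-empty -m "implement f1 speedup"        # normal commit
git commit -q --allow-empty -m "==== f1 timing debug ===="   # temporary commit

# The convention makes temporary commits trivial to spot (and to drop,
# e.g. during an interactive rebase before merging):
git log --oneline | grep '===='
```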

March 19, 2025 Score: 0 Rep: 11 Quality: Low Completeness: 50%

If you have existing unit tests for your codebase, consider adding this as a unit test for the function f1(). I've typically seen unit-test files categorised as either func (functional) or perf (performance), with functional tests focussed on the function always returning valid information, even in corner cases, and performance tests focussed on timings (in whatever metric is important to you, i.e. throughput, worst case, etc.). You can then link in a more mature performance tool such as Google Benchmark, which does a bit of data shaping to avoid issues with caching, etc.

You could just leave it at that (the perf unit tests would simply spit out a bunch of timing information to stdout), or you could add assertions to prove that your timing isn't regressing, and have them run in CI.

March 20, 2025 Score: 0 Rep: 199 Quality: Low Completeness: 80%

Here’s one way to commit code without releasing it, but be prepared to explain yourself:

  1. Commit the timing code. You can do this on a branch based on your current branch tip, or, if you’re comfortable with Git, do the commit on a detached HEAD (or on your current branch and then git reset --hard HEAD~).
  2. Merge the new branch (or commit hash) into your branch with -s ours (the ours strategy). This creates a multi-parent (merge) commit whose tree reflects only the original branch; the second parent still carries the added code.

Notes:

  • This is similar to an "evil merge", where the result contains code from neither side, except that in this case we drop code that appeared on one side.
  • This is entirely different from -X ours, which is a strategy option that instructs the default merge strategy to resolve conflicts using our side.
  • It’s worth explaining what you did and why in the merge commit message.
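The two steps can be demonstrated in a throwaway repository (file and branch names invented for the demo):

```shell
set -e
# Throwaway repository to demonstrate the "-s ours" technique.
demo=$(mktemp -d) && cd "$demo"
git init -q
git checkout -q -b main
git config user.email demo@example.com && git config user.name demo

echo "production code" > f1.cpp
git add f1.cpp && git commit -q -m "base"

# 1. Commit the timing code on a side branch.
git checkout -q -b timing
echo "production code + timing code" > f1.cpp
git commit -q -am "add timing code for f1"

# 2. Merge it back with the 'ours' strategy: the merge commit's tree is
#    identical to main's, but its second parent still carries the timing code.
git checkout -q main
git merge -q -s ours timing -m "record timing code without releasing it"

cat f1.cpp              # working tree: unchanged production code
git show HEAD^2:f1.cpp  # second parent: the timing code is preserved
```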