Test Infra: Always fail afterEach if expected errors don't match logged errors, even if the test isn't otherwise passing. #21803
Conversation
Even if the test itself failed. Not logging the errors can be confusing while debugging
I don't think this is really a testing issue so much as an error-reporting issue. I've had non-test users hit similar problems, spending a lot of effort debugging an error that was not the first error, after the first error had put things into a bad state. I'm not saying we shouldn't make this change, just that I think it treats a symptom of a larger problem. It may be a good idea, but there is also a larger systemic issue around exceptions failing to be fatal, leaving things in bad states and causing confusing errors. An initial step toward treating a similar problem closer to its cause is in #21770. Anyway, for this specific change, how does it impact test output in the CI/pipeline/ADO GUI? As long as it doesn't break the running of subsequent tests or their results collection, it seems fine to me, but this is not an area I know well.
There are cases where failing with the test's error message is more useful for diagnostics. I think a better fix here would be to use AggregateError to report both issues. Last I checked, AggregateError wasn't well supported by the matrix we want to support clients on, so it isn't usable in production code, but it should be fine for tests. This does mean you may need to change some targeted compile/lib settings for test-version-utils, though.
Not sure we need AggregateError; both failures will appear without it. After this change, the output will show the core test failure, and then also show a failure under "after each" for the test. cc @Abe27342. I'll make an example and paste the results.
Oh, I had assumed that wasn't the case based on the previous code. But the behavior makes sense based on other experience with mocha.
I would echo this concern. It's a bit of a pain, but we should validate that all errors show up in all interfaces: cmd, vscode, ado, kusto.
In any case, if we do end up going that route, this commit should compile and be roughly correct. I didn't test the behavior.
I see the intent as providing more information to the dev, so that relevant failures/errors don't get buried deep. Would just logging those as warnings help? I've seen warnings be very helpful while debugging, without being too in-your-face or failing the test altogether.
Interesting idea. We disable console logging by default when running our mocha tests, so we could actually use that for this kind of feedback. The downside is that it will show up inline in the console with the list of tests, rather than at the end with the detailed failure info. So I'm not sure it's preferable after all.
@CraigMacomber I totally agree with this. In my particular case, the initial error caused a container to close with that error, which is about as strong an outcome as you can hope for. This happened in the
And the GitHub summary of failures: @anthony-murphy - I'll check Kusto too; was there any other UX you wanted tested? I didn't know what you meant by "VS Code".
Description
Reversal of #11915
Always fail afterEach if expected errors don't match logged errors, even if the test itself failed.
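As an illustration of the intended behavior, the afterEach-time check might look roughly like this (a hypothetical simplification comparing arrays of error-message strings; the function name and shape are illustrative, not the real ItExpects implementation):

```typescript
// Hypothetical sketch: compare expected vs. actually-logged error messages
// and throw on any mismatch. Because this runs unconditionally in afterEach,
// throwing here fails the hook even when the test body has already failed,
// so mocha reports both failures instead of silently skipping validation.
function validateLoggedErrors(expected: string[], logged: string[]): void {
    const missing = expected.filter((e) => !logged.includes(e));
    const unexpected = logged.filter((e) => !expected.includes(e));
    if (missing.length > 0 || unexpected.length > 0) {
        throw new Error(
            `Logged errors did not match expected errors. ` +
                `Missing: [${missing.join(", ")}] Unexpected: [${unexpected.join(", ")}]`,
        );
    }
}
```

The key change this PR proposes is running such a check regardless of whether the test body passed, rather than only on passing tests.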
My story why I'm proposing this
tl;dr - Extra noise in initial test results is better than unexpected outcomes while debugging.
I just faced a mass of e2e test failures on another PR, and the apparent reason for failure was opaque, seemingly completely unrelated to any change I made. It finally occurred to me that some assert might be firing and throwing off the system.
Sure enough, one of the containers was closing due to an assert, but the accompanying error log indicating the closure was not reported in the test failure, due to the code I'm changing here. At first I found myself assuming the container hadn't even closed due to the assert, because I was so sure the test would fail with the ContainerClose log! When I saw that wasn't true, I was left stepping through the Test Object Provider and ItExpects code trying to figure out why, thinking there was a bug.
Reviewer Guidance
I'm open to alternatives here; I do understand that in many cases this will be gratuitous (I approved the original PR adding this!).