Test Infra: Always fail afterEach if expected errors don't match logged errors, even if the test isn't otherwise passing. #21803

Open · wants to merge 3 commits into main
Conversation

@markfields (Member)

Description

Reversal of #11915

Always fail afterEach if expected errors don't match logged errors, even if the test itself failed.
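To make the proposed behavior concrete, here's a minimal sketch; the `compareExpectedVsLoggedErrors` helper is a hypothetical stand-in for the test infra's expected-error bookkeeping (e.g. what ItExpects tracks), not the actual PR code:

```typescript
// Hypothetical helper: compares the errors the test declared it expects
// against the errors actually logged, returning an Error describing any
// mismatch, or undefined when everything matched.
declare function compareExpectedVsLoggedErrors(): Error | undefined;

afterEach(function (this: Mocha.Context) {
	// Previously, the check was effectively skipped when the test had
	// already failed, e.g.:
	//   if (this.currentTest?.state === "failed") { return; }
	const mismatch = compareExpectedVsLoggedErrors();
	if (mismatch !== undefined) {
		// Always surface the mismatch, even if the test itself failed.
		throw mismatch;
	}
});
```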

My story: why I'm proposing this

tl;dr - Extra noise in initial test results is better than unexpected outcomes while debugging.

I just faced a mass of e2e test failures on another PR, and the apparent reason for failure was opaque and seemingly completely unrelated to any change I made. It finally occurred to me that some assert might be firing and throwing off the system.

Sure enough - one of the containers was closing due to an assert, but the accompanying error log indicating the closure was not reported in the test failure due to this code I'm changing here. At first I found myself assuming the container didn't even close due to the assert because I was so sure the test would fail with the ContainerClose log! When I saw that wasn't true, I was left stepping through the Test Object Provider and ItExpects code trying to figure out why, thinking there was a bug.

Reviewer Guidance

I'm open to alternatives here, I do understand that in many cases this will be gratuitous (I approved the original PR adding this!).

Commits:
- Even if the test itself failed.
- Not logging the errors can be confusing while debugging
@github-actions bot added labels `area: tests` (Tests to add, test infrastructure improvements, etc.) and `base: main` (PRs targeted against main branch) on Jul 9, 2024
@CraigMacomber (Contributor) commented Jul 9, 2024

I don't think this is really fully a testing issue, but an error reporting issue. I've had non-test users have similar issues where they spend a lot of effort debugging some error that was not the first error when the first error put things into a bad state.

I'm not saying we shouldn't do this change, just that I think this is treating a symptom of a larger problem. It may be a good idea, but there is also a larger systemic issue around exceptions failing to be fatal, leaving things in bad states and causing confusing errors. An initial step toward treating a similar problem closer to its cause is in #21770.

Anyway, for this specific change, how does this impact test output in the CI/pipeline/ADO GUI? As long as this change doesn't break the running of subsequent tests or the results collection, it seems fine to me, but this is not an area I know well.

@Abe27342 (Contributor) commented Jul 9, 2024

There are cases where failing with the test's error message is more useful for diagnostics.

I think a better fix here would be to use AggregateError to report both issues. It seems like this.currentTest.err is exposed in mocha's typing (though it's typed as any, so hopefully that is actually the error that the test threw).

[screenshot of mocha's typings for currentTest.err]

Last I checked, AggregateError wasn't well-supported by the matrix we want to support clients on, so it's not usable in production code, but it should be fine for tests. This does mean you may need to change some targeted compile/lib settings for test-version-utils though.
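For illustration, a rough sketch of that AggregateError approach (the mismatch helper is hypothetical, as before; AggregateError needs lib "ES2021" or later in tsconfig, per the note above):

```typescript
// Hypothetical helper: returns an Error describing any mismatch between
// expected and logged errors, or undefined when everything matched.
declare function compareExpectedVsLoggedErrors(): Error | undefined;

afterEach(function (this: Mocha.Context) {
	const mismatch = compareExpectedVsLoggedErrors();
	if (mismatch === undefined) {
		return;
	}
	// Mocha exposes the error the test threw (typed as any in its typings).
	const testError: unknown = this.currentTest?.err;
	if (testError !== undefined) {
		// Surface both failures rather than letting one mask the other.
		throw new AggregateError(
			[testError, mismatch],
			"Test failed AND expected errors did not match logged errors",
		);
	}
	throw mismatch;
});
```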

@markfields (Member, Author)

Not sure we need AggregateError; both failures will appear without it. After this change, the output will show the core test failure, and then also show a failure under "after each" for the test. cc @Abe27342. I'll make an example and paste the results.

@Abe27342 (Contributor) commented Jul 9, 2024

> Not sure we need AggregateError; both failures will appear without it. After this change, the output will show the core test failure, and then also show a failure under "after each" for the test. cc @Abe27342. I'll make an example and paste the results.

Oh, I had assumed that wasn't the case based on the previous code. But the behavior makes sense based on other experience with mocha.

@anthony-murphy (Contributor)

> Oh, I had assumed that wasn't the case based on the previous code. But the behavior makes sense based on other experience with mocha.

I would echo this concern. It's a bit of a pain, but we should validate that all errors show up in all interfaces: cmd, VS Code, ADO, Kusto.

@Abe27342 (Contributor) commented Jul 9, 2024

In any case, if we do end up going that route, this commit should compile and be roughly correct. I didn't test the behavior.

@rajatch-ff self-requested a review on July 9, 2024 19:05
@rajatch-ff (Contributor) left a comment


I see the intent is to provide more information to the dev, so that relevant failures/errors don't get buried deep. Would just logging those as warnings help? I've seen warnings be very helpful while debugging, without being too in-your-face or failing the test altogether.

@markfields (Member, Author)

> I see the intent is to provide more information to the dev, so that relevant failures/errors don't get buried deep. Would just logging those as warnings help? I've seen warnings be very helpful while debugging, without being too in-your-face or failing the test altogether.

Interesting idea. We disable console logging by default when running our mocha tests, so we could actually use that for this kind of feedback.

Downside is it'll show up inline in the console with the list of tests, rather than at the end with the detailed failure info. So I'm not sure it's preferable after all.
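For concreteness, the warning variant under discussion might look something like this (mismatch helper hypothetical, as before); it only downgrades to a warning when the test already failed, so a mismatch in an otherwise-passing test still fails the hook:

```typescript
declare function compareExpectedVsLoggedErrors(): Error | undefined;

afterEach(function (this: Mocha.Context) {
	const mismatch = compareExpectedVsLoggedErrors();
	if (mismatch === undefined) {
		return;
	}
	if (this.currentTest?.state === "failed") {
		// The test already failed; warn instead of piling on a hook failure.
		// Caveat from above: this prints inline with the test list, not with
		// the detailed failure info at the end of the run.
		console.warn(`Expected/logged error mismatch: ${mismatch.message}`);
		return;
	}
	throw mismatch;
});
```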

@markfields (Member, Author)

> I don't think this is really fully a testing issue, but an error reporting issue. I've had non-test users have similar issues where they spend a lot of effort debugging some error that was not the first error when the first error put things into a bad state.

@CraigMacomber I totally agree with this. In my particular case, the initial error caused a container to close with the error, which is about as strong an outcome as you can hope for. This happened in the beforeEach block. And yet, somehow it subtly left the document in a state that led to the subtle/opaque assert when it was loaded/edited. So, yeah - something should probably have failed harder (document corruption error?). Not sure I'll have time to look into this particular flow.

@markfields (Member, Author)

Here's what the results look like in the console after this change:

[screenshots of the console test output]

Before the change, you see a strict subset of that (the "after each" stuff is missing, the other failure info is there just the same).

I'll push a change like that to see how it appears in CI.

@markfields (Member, Author) commented Jul 9, 2024

Tests Tab:

[screenshot of the Tests tab]

And GitHub summary of failures:
[screenshot of the GitHub failure summary]

@anthony-murphy - I'll check Kusto too; was there any other UX you wanted tested? I didn't know what you meant by "VS Code".
