
Query throws a "OSError: [Errno 107] Transport endpoint is not connected" #538

Open
wizardassassin opened this issue May 10, 2023 · 24 comments · Fixed by #540
Labels
area: protocol Related to underlying networking protocol type: bug Something isn't working

Comments

@wizardassassin

When I try to query my Minecraft server running on localhost, I get a "Transport endpoint is not connected" error. The Minecraft server has enable-query=true and query.port=25565.
[screenshot]
The status option works.
[screenshot]
The query option works when I ping my Minecraft server from my local machine. I assume that's because my local machine is running an older version of mcstatus.
[screenshot]

@PerchunPak
Member

Are the first two screenshots from WSL? If not, how do you have Windows and Linux on the same machine?

@wizardassassin
Author

I have a Linux server running on my local network, and the first two screenshots are from my SSH session. I run Windows on my computer.

@PerchunPak
Member

Is your server public? If not, could you try the 11.0.0rc1 and 10.0.3 versions on the server?

@wizardassassin
Author

Assuming I typed the pip install commands correctly, neither version works.
[screenshot]
[screenshot]
I ran pip install mcstatus==10.0.3

@wizardassassin
Author

Oops, I think I clicked the wrong button...
I also ran pip install mcstatus==11.0.0rc1

@PerchunPak
Member

Try this workaround:

pip install git+https://github.com/py-mine/mcstatus.git@try-to-fix-538

@wizardassassin
Author

That works!
[screenshot]

@PerchunPak
Member

Okay, I will open the PR. Do not close this for now.

@ItsDrike ItsDrike added type: bug Something isn't working state: WIP Work In Progress area: protocol Related to underlying networking protocol labels May 12, 2023
@PerchunPak
Member

PerchunPak commented May 13, 2023

Could you please install v11, run this script, and send the result?

import asyncio

import mcstatus


async def get_status():
    server = await mcstatus.JavaServer.async_lookup("__IP__")
    status = await server.async_status()
    print(status)


if __name__ == "__main__":
    asyncio.run(get_status())

Just curious if this error is reproducible with asyncio.

@wizardassassin
Author

Works fine for me.
[screenshot]
[screenshot]

@vaughnw128

Hi! I encountered this error on Ubuntu Server 22.04.5 LTS, and found that Ubuntu (and maybe UNIX-like systems in general) doesn't like having its UDP sockets shut down; they should just be closed. Windows skips over the shutdown, but Ubuntu throws an error.

I commented out line 515, self.socket.shutdown(socket.SHUT_RDWR), in /protocol/connection.py, and that completely fixed the issue for me. I tested removing that line on Windows as well, and it threw no errors.
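The platform difference reported here is easy to reproduce outside of mcstatus. The following is a minimal sketch (the helper name is made up, and this is not mcstatus code) that performs the same shutdown call on a fresh, unconnected UDP socket:

```python
import errno
import socket


def try_udp_shutdown():
    """Attempt shutdown() on an unconnected UDP socket; return the errno raised, or None."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.shutdown(socket.SHUT_RDWR)
    except OSError as exc:
        # On Linux this is ENOTCONN (errno 107), the exact error from the issue title.
        return exc.errno
    finally:
        sock.close()
    return None
```

On Linux this returns errno.ENOTCONN (107); the behavior is platform-dependent, which matches the Windows/Ubuntu difference described above.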

@kevinkjt2000
Contributor

@vaughnw128 would you mind sharing the stacktrace/exception that accompanied that error?

@kevinkjt2000 kevinkjt2000 reopened this Jun 5, 2023
@kevinkjt2000 kevinkjt2000 removed the state: WIP Work In Progress label Jun 5, 2023
@ItsDrike
Member

ItsDrike commented Jun 5, 2023

found that Ubuntu (and maybe UNIX-like systems in general) doesn't like having its UDP sockets shut down; they should just be closed

Well, yeah: UDP sockets don't need to be "shut down". They don't involve any kind of handshake, nor is there a defined way to end the connection. It is interesting that this error doesn't happen on Windows, but sockets are indeed very different between these operating systems.

We could just remove the shutdown logic for UDP socket connections and skip shutdown there, leaving the logic introduced in #540 for regular TCP, as it can still happen that the user shuts down the socket manually, or that the server closes the connection, at which point we do want to retain the error-suppressing logic for OSError 107 (not connected) that the PR introduced.

That said, you shouldn't actually be seeing any errors with the new code anymore, as this logic is currently in both the UDP and TCP connection classes. While it may be unnecessary in the UDP one, where shutdown shouldn't even be performed, when the exception does come up it gets suppressed and everything keeps going normally.

Perhaps you're still using an older version that doesn't yet have this fix? In the latest master this is fixed, but we should indeed still get rid of the unnecessary shutdown call in UDP sockets.
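The error-suppressing shape described here could be sketched like this (a hypothetical helper for illustration, not the actual #540 code):

```python
import errno
import socket


def shutdown_and_close(sock):
    """Shut a socket down gracefully, tolerating an already-dead connection."""
    try:
        sock.shutdown(socket.SHUT_RDWR)
    except OSError as exc:
        # The peer (or the user) may already have torn the connection down,
        # in which case shutdown fails with ENOTCONN; suppress only that.
        if exc.errno != errno.ENOTCONN:
            raise
    sock.close()
```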

@vaughnw128

@vaughnw128 would you mind sharing the stacktrace/exception that accompanied that error?

Here's the stack trace from the error:
[screenshot]

Perhaps you're still using an older version that doesn't yet have this fix? In the latest master this is fixed, but we should indeed still get rid of the unnecessary shutdown call in UDP sockets.

I'm currently on the latest release from PyPI. From what you're saying, it seems it was fixed after that release. My mistake opening this issue back up!
[screenshot]

@ItsDrike
Member

ItsDrike commented Jun 5, 2023

I'm currently on the latest release from PyPI. From what you're saying, it seems it was fixed after that release. My mistake opening this issue back up!

Indeed, the fix is only available in the latest master and hasn't been pushed out to PyPI yet (it's not even in the v11 release candidates), though it's probably worth backporting to the v10 releases too.

That said, the fix that's currently in place will always call shutdown, even on UDP sockets that don't need it, which is just a waste of time. We could probably handle this a bit better by keeping the new error-suppressing logic for TCP and skipping shutdown entirely for UDP, which is why I won't (re)close the issue just yet.

If you need this fix immediately, the best you can do is install the version from the latest master: python3 -m pip install 'mcstatus @ git+https://github.com/py-mine/mcstatus@master', but do note that this version has a lot of pretty major changes from the latest stable PyPI release (v10).

@vaughnw128

If you need this fix immediately, the best you can do is install the version from the latest master: python3 -m pip install 'mcstatus @ git+https://github.com/py-mine/mcstatus@master', but do note that this version has a lot of pretty major changes from the latest stable PyPI release (v10).

I actually just ended up "fixing" it for the time being by commenting out the shutdown line. I'll probably stick with that hacky fix until the next PyPI release. Thanks :)

@ItsDrike
Member

ItsDrike commented Jun 5, 2023

I actually just ended up "fixing" it for the time being by commenting out the shutdown line. I'll probably stick with that hacky fix until the next PyPI release. Thanks :)

Just know that removing shutdown entirely means that even TCP communication won't be shut down. For any TCP traffic (so, status requests to servers), once you get all of the information from the server, you'll end up forcibly closing the socket without a proper shutdown. This just means that to the server, it might look like your program crashed. Probably not a big deal, although some very aggressive firewalls might not like it if you do that too often, and it could get you blocked.

@kevinkjt2000
Contributor

@vaughnw128 So, this is fixed by the upcoming v11 release candidate?

@PerchunPak
Member

Not by the release candidate, but by #540 (which is currently only in master).

@ItsDrike
Member

ItsDrike commented Jun 6, 2023

The reason I didn't re-close this issue immediately was vaughnw128's comment that UDP connections shouldn't really be making a shutdown call at all. Indeed, the UDP protocol doesn't establish any kind of concrete connection to a specific address that would need graceful closing, so calling shutdown doesn't make much sense there.

What's interesting is that vaughnw128 suggested this shutdown call actually causes an exception every time on all Linux systems, while passing quietly on Windows. If that really is the case, I wonder why it wasn't noticed much sooner, as it would mean query wouldn't work at all from the point the previous blanket error-handling logic was removed from __del__. However, after a quick test with a netcat UDP listening server, calling shutdown here really did cause the OSError:
[screenshot]

So yes, the current #540 logic will handle this. Is it ideal? No. Why make a call that we shouldn't be making in the first place, only to catch the exception that arises from it every time? In fact, this call very likely means making a kernel syscall, and while that's not terribly expensive computationally, it's also not completely insignificant. Clearly this isn't a great solution; instead, we should simply remove the shutdown call entirely for UDP.

That said, the call should stay in the TCP socket connection classes, as the server might actually close the connection gracefully before we do, or the user might access the socket directly and do something unexpected (close the connection themselves), causing the shutdown to fail. We do want a shutdown call in TCP, as it's generally good practice to gracefully close TCP connections, informing the other side that we're disconnecting. But for UDP, we should remove it.

This should be pretty simple, as we have base classes for these (TCPSocketConnection and UDPSocketConnection), so instead of inheriting the same close method from the plain SocketConnection, we could make it abstract and implement it individually in the TCP/UDP-specific classes. Alternatively, we could have SocketConnection.close do only the basic socket.close, and override it in the TCP class, adding the shutdown call (with handling) and then making a super call.

CC @PerchunPak
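The second alternative described above, a plain close in the base class with the TCP class layering shutdown on top via a super call, might look roughly like this. The class names are borrowed from the thread, but the sketch is simplified and illustrative, not mcstatus's real code:

```python
import errno
import socket


class SocketConnection:
    def __init__(self, sock):
        self.socket = sock

    def close(self):
        # Base behavior: just release the file descriptor. Sufficient for
        # UDP, which is connectionless and has nothing to "shut down".
        self.socket.close()


class UDPSocketConnection(SocketConnection):
    pass  # inherits the plain close(); no shutdown call


class TCPSocketConnection(SocketConnection):
    def close(self):
        # Politely signal end-of-stream first; the peer (or the user) may
        # already have closed the connection, so tolerate ENOTCONN.
        try:
            self.socket.shutdown(socket.SHUT_RDWR)
        except OSError as exc:
            if exc.errno != errno.ENOTCONN:
                raise
        super().close()
```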

@ItsDrike ItsDrike reopened this Jun 6, 2023
@PerchunPak
Member

I started working on #446, as we've gotten many bug reports (like this one) that could easily be prevented if we had integration tests.

@kevinkjt2000
Contributor

There's a fake TCP server for testing status, but I don't recall any similar testing for UDP queries.

@PerchunPak
Member

So, to summarize, the fix for this issue would be removing the shutdown logic for UDP connections and adding a fake UDP server to the tests (in separate PRs)?

@ItsDrike
Member

ItsDrike commented Jun 6, 2023

So, to summarize, the fix for this issue would be removing the shutdown logic for UDP connections and adding a fake UDP server to the tests (in separate PRs)?

Yeah, the integration tests are a separate thing; they aren't needed to resolve this issue, though they would be nice to have.

There's a fake TCP server for testing status, but I don't recall any similar testing for UDP queries.

It wouldn't be too hard to write a script that launches a simple UDP server, sends predefined responses, and perhaps checks what was sent. We could probably even just pipe a file containing the binary data that's supposed to be sent into netcat, and pipe the received data out into another file, which then gets compared to make sure everything we expected was there.

I'd probably prefer the simple socket server, though, as we could use the connection classes to make it clear what is being sent (as opposed to having a bunch of "gibberish" binary data in a file).
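A minimal version of the socket-server approach might look like this (hypothetical helper names, not existing mcstatus test code): a throwaway UDP server answers one datagram with a canned response, and the test can then assert on both what the server received and what the client got back.

```python
import socket
import threading


def serve_one_datagram(server, response, received):
    """Fake query server: record one incoming datagram, then answer with a canned response."""
    data, addr = server.recvfrom(4096)
    received.append(data)
    server.sendto(response, addr)


def fake_udp_exchange(request, response):
    """Run one request/response round trip against a throwaway fake UDP server."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))  # let the OS pick a free port
    received = []
    worker = threading.Thread(target=serve_one_datagram, args=(server, response, received))
    worker.start()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(5)
    client.sendto(request, server.getsockname())
    reply, _ = client.recvfrom(4096)

    worker.join()
    client.close()
    server.close()
    return received[0], reply
```

A real test would swap the canned bytes for actual query-protocol payloads (ideally built with the connection classes, as suggested above) and assert that the client parsed the reply correctly.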
