PROTON-2748: Raw connection async close fix and tests. #402

Open: wants to merge 6 commits into base: main
Changes from 2 commits
105 changes: 64 additions & 41 deletions c/src/proactor/epoll_raw_connection.c
@@ -50,7 +50,8 @@ struct praw_connection_t {
struct addrinfo *ai; /* Current connect address */
bool connected;
bool disconnected;
bool batch_empty;
bool hup_detected;
bool read_check;
};

static void psocket_error(praw_connection_t *rc, int err, const char* msg) {
@@ -304,7 +305,7 @@ void pn_raw_connection_write_close(pn_raw_connection_t *rc) {
pni_raw_write_close(rc);
}

static pn_event_t *pni_raw_batch_next(pn_event_batch_t *batch) {
static pn_event_t *pni_epoll_raw_batch_next(pn_event_batch_t *batch, bool peek_only) {
Member:
Rename this to pni_raw_batch_next_common() or pni_raw_batch_next_or_peek(): adding epoll to the name doesn't communicate anything useful about the purpose of this function.

Contributor Author:
pni_raw_batch_has_events() was used, as suggested elsewhere.

praw_connection_t *rc = containerof(batch, praw_connection_t, batch);
pn_raw_connection_t *raw = &rc->raw_connection;

@@ -318,12 +319,18 @@ static pn_event_t *pni_raw_batch_next(pn_event_batch_t *batch) {
unlock(&rc->task.mutex);
if (waking) pni_raw_wake(raw);

pn_event_t *e = pni_raw_event_next(raw);
if (!e || pn_event_type(e) == PN_RAW_CONNECTION_DISCONNECTED)
rc->batch_empty = true;
Member (on lines -322 to -323):
Where did this logic go? It seems to be gone entirely. Is it no longer needed? Specifically the check against the disconnected event.

Contributor Author:
This logic was solely there to identify edge cases where the epoll-specific code could not be sure the state machine was up to date, in which case a non-application call to pni_raw_batch_next() would update the state machine, but a resulting event would have to be put back into the collector.

The new pni_raw_batch_has_events() updates the state machine and has no side effects on the event stream, so the check for batch_empty is no longer needed.

pn_event_t *e = peek_only ? pni_raw_event_peek(raw) : pni_raw_event_next(raw);
return e;
}

static pn_event_t *pni_raw_batch_next(pn_event_batch_t *batch) {
return pni_epoll_raw_batch_next(batch, false);
}

static pn_event_t *pni_raw_batch_peek(pn_event_batch_t *batch) {
return pni_epoll_raw_batch_next(batch, true);
}
Member:
Are you sure you want to introduce a peek operation? This is only ever used to find out whether there is an outstanding event, so why not just introduce an operation bool pni_raw_batch_has_events()?

Contributor Author:
Done.


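For reference, a minimal sketch of the kind of side-effect-free check the reviewer asks for. The name pni_raw_batch_has_events() comes from the discussion above, but this exact shape is an editorial assumption; the implementation added later in this PR may differ.

/* Editorial sketch only (not the PR's actual implementation): a has-events
 * check built on the peek path, so the state machine is brought up to date
 * without consuming an event from the collector. */
static bool pni_raw_batch_has_events(pn_event_batch_t *batch) {
  praw_connection_t *rc = containerof(batch, praw_connection_t, batch);
  return pni_raw_event_peek(&rc->raw_connection) != NULL;
}
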
task_t *pni_psocket_raw_task(psocket_t* ps) {
return &containerof(ps, praw_connection_t, psocket)->task;
}
@@ -393,10 +400,10 @@ pn_event_batch_t *pni_raw_connection_process(task_t *t, uint32_t io_events, bool
if (rc->disconnected) {
pni_raw_connect_failed(&rc->raw_connection);
unlock(&rc->task.mutex);
rc->batch_empty = false;
return &rc->batch;
}
if (events & (EPOLLHUP | EPOLLERR)) {
// Continuation of praw_connection_maybe_connect_lh() logic.
// A wake can be the first event. Otherwise, wait for connection to complete.
bool event_pending = task_wake || pni_raw_wake_is_pending(&rc->raw_connection) || pn_collector_peek(rc->raw_connection.collector);
t->working = event_pending;
@@ -408,32 +415,41 @@
}
unlock(&rc->task.mutex);

if (events & EPOLLIN) pni_raw_read(&rc->raw_connection, fd, rcv, set_error);
if (events & EPOLLOUT) pni_raw_write(&rc->raw_connection, fd, snd, set_error);
rc->batch_empty = false;
if (rc->connected) {
Member:
I don't think you need this condition (which is why it wasn't present previously): if you get here without returning already, you must be connected.

Contributor Author:
Done.

if (events & EPOLLERR) {
// Read and write sides closed via RST. Tear down immediately.
int soerr;
socklen_t soerrlen = sizeof(soerr);
int ec = getsockopt(fd, SOL_SOCKET, SO_ERROR, &soerr, &soerrlen);
if (ec == 0 && soerr) {
psocket_error(rc, soerr, "async disconnect");
}
pni_raw_async_disconnect(&rc->raw_connection);
} else if (events & EPOLLHUP) {
Member:
The style of this function is early return, so please change this to a return with no 'else if' if that is possible. I don't understand enough of this logic to see whether you could get EPOLLERR together with EPOLLIN, EPOLLOUT or EPOLLRDHUP and still have a useful read or write to do, but it seems unlikely to me. If this change is possible it lowers the cognitive load (and indentation) of the function.

Contributor Author:
Done.

rc->hup_detected = true;
}

if (events & (EPOLLIN | EPOLLRDHUP) || rc->read_check) {
pni_raw_read(&rc->raw_connection, fd, rcv, set_error);
rc->read_check = false;
}
if (events & EPOLLOUT) pni_raw_write(&rc->raw_connection, fd, snd, set_error);
}
return &rc->batch;
}

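One possible shape of the early-return restructuring requested above; an editorial illustration based only on the code in this commit, not the change that was actually made in the follow-up commit.

/* Editorial sketch only: handle the RST tear-down first and return early,
 * so the read/write handling below needs no extra nesting. */
if (events & EPOLLERR) {
  int soerr;
  socklen_t soerrlen = sizeof(soerr);
  if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &soerr, &soerrlen) == 0 && soerr)
    psocket_error(rc, soerr, "async disconnect");
  pni_raw_async_disconnect(&rc->raw_connection);
  return &rc->batch;
}
if (events & EPOLLHUP) rc->hup_detected = true;
/* ... EPOLLIN/EPOLLRDHUP read and EPOLLOUT write handling as above ... */
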
void pni_raw_connection_done(praw_connection_t *rc) {
bool notify = false;
bool ready = false;
bool have_event = false;

// If !batch_empty, can't be sure state machine up to date, so reschedule task if necessary.
if (!rc->batch_empty) {
if (pn_collector_peek(rc->raw_connection.collector))
have_event = true;
else {
pn_event_t *e = pni_raw_batch_next(&rc->batch);
// State machine up to date.
if (e) {
have_event = true;
// Sole event. Can put back without order issues.
// Edge case, performance not important.
pn_collector_put(rc->raw_connection.collector, pn_event_class(e), pn_event_context(e), pn_event_type(e));
}
}
}
pn_raw_connection_t *raw = &rc->raw_connection;
int fd = rc->psocket.epoll_io.fd;

// Try write
if (pni_raw_can_write(raw)) pni_raw_write(raw, fd, snd, set_error);
pni_raw_process_shutdown(raw, fd, shutr, shutw);

// Update state machine and check for possible pending event.
bool have_event = pni_raw_batch_peek(&rc->batch);

lock(&rc->task.mutex);
pn_proactor_t *p = rc->task.proactor;
@@ -442,24 +458,31 @@
// The task may be in the ready state even if we've got no raw connection
// wakes outstanding because we dealt with it already in pni_raw_batch_next()
notify = (pni_task_wake_pending(&rc->task) || have_event) && schedule(&rc->task);
ready = rc->task.ready;
ready = rc->task.ready; // No need to poll. Already scheduled.
unlock(&rc->task.mutex);

pn_raw_connection_t *raw = &rc->raw_connection;
int fd = rc->psocket.epoll_io.fd;
pni_raw_process_shutdown(raw, fd, shutr, shutw);
int wanted =
(pni_raw_can_read(raw) ? EPOLLIN : 0) |
(pni_raw_can_write(raw) ? EPOLLOUT : 0);
if (wanted) {
rc->psocket.epoll_io.wanted = wanted;
rearm_polling(&rc->psocket.epoll_io, p->epollfd); // TODO: check for error
bool finished_disconnect = raw->state==conn_fini && !ready && !raw->disconnectpending;
if (finished_disconnect) {
// If we're closed and we've sent the disconnect then close
pni_raw_finalize(raw);
praw_connection_cleanup(rc);
} else if (ready) {
// Already scheduled to run. Skip poll. Remember if we want a read.
rc->read_check = pni_raw_can_read(raw);
} else if (!rc->connected) {
// Connect logic has already armed the socket.
} else {
bool finished_disconnect = raw->state==conn_fini && !ready && !raw->disconnectpending;
if (finished_disconnect) {
// If we're closed and we've sent the disconnect then close
pni_raw_finalize(raw);
praw_connection_cleanup(rc);
// Must poll for iO.
Member:
Typo.

Contributor Author:
Fixed.

int wanted =
(pni_raw_can_read(raw) ? (EPOLLIN | EPOLLRDHUP) : 0) |
(pni_raw_can_write(raw) ? EPOLLOUT : 0);

// wanted == 0 implies we block until either application wake() or EPOLLHUP | EPOLLERR.
// If wanted == 0 and hup_detected, blocking not possible, so skip arming until
// application provides read buffers.
if (wanted || !rc->hup_detected) {
rc->psocket.epoll_io.wanted = wanted;
rearm_polling(&rc->psocket.epoll_io, p->epollfd); // TODO: check for error
}
}

2 changes: 2 additions & 0 deletions c/src/proactor/raw_connection-internal.h
@@ -134,9 +134,11 @@ void pni_raw_write_close(pn_raw_connection_t *conn);
void pni_raw_read(pn_raw_connection_t *conn, int sock, long (*recv)(int, void*, size_t), void (*set_error)(pn_raw_connection_t *, const char *, int));
void pni_raw_write(pn_raw_connection_t *conn, int sock, long (*send)(int, const void*, size_t), void (*set_error)(pn_raw_connection_t *, const char *, int));
void pni_raw_process_shutdown(pn_raw_connection_t *conn, int sock, int (*shutdown_rd)(int), int (*shutdown_wr)(int));
void pni_raw_async_disconnect(pn_raw_connection_t *conn);
Member:
As this is a new event going to the raw connection state machine, maybe there should be a new test for this event in raw_connection_test.

bool pni_raw_can_read(pn_raw_connection_t *conn);
bool pni_raw_can_write(pn_raw_connection_t *conn);
pn_event_t *pni_raw_event_next(pn_raw_connection_t *conn);
pn_event_t *pni_raw_event_peek(pn_raw_connection_t *conn);
void pni_raw_initialize(pn_raw_connection_t *conn);
void pni_raw_finalize(pn_raw_connection_t *conn);

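A rough outline of the kind of test the reviewer asks for above. The harness details (how raw_connection_test constructs and initializes the connection, and the test framework it uses) are assumptions; only the pni_* calls and event types come from this PR and the existing internal API.

#include <assert.h>
#include "raw_connection-internal.h"   /* assumed include path */

/* Editorial sketch only: after an async disconnect the state machine should
 * report both sides closed and then a disconnect. Construction and setup of
 * 'conn' is assumed to be done by the existing test harness. */
static void check_async_disconnect_events(pn_raw_connection_t *conn) {
  bool closed_read = false, closed_write = false, disconnected = false;
  pni_raw_async_disconnect(conn);
  pn_event_t *e;
  while ((e = pni_raw_event_next(conn)) != NULL) {
    switch (pn_event_type(e)) {
      case PN_RAW_CONNECTION_CLOSED_READ:  closed_read = true; break;
      case PN_RAW_CONNECTION_CLOSED_WRITE: closed_write = true; break;
      case PN_RAW_CONNECTION_DISCONNECTED: disconnected = true; break;
      default: break;
    }
  }
  assert(closed_read && closed_write && disconnected);
}
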
32 changes: 29 additions & 3 deletions c/src/proactor/raw_connection.c
@@ -669,12 +669,14 @@ bool pni_raw_can_write(pn_raw_connection_t *conn) {
return !pni_raw_wdrained(conn) && conn->wbuffer_first_towrite;
}

pn_event_t *pni_raw_event_next(pn_raw_connection_t *conn) {
static pn_event_t *pni_get_next_raw_event(pn_raw_connection_t *conn, bool peek_only) {
// Return event if available or advance state machine event stream.
// Note that pn_collector_next increments event refcount but peek does not.
assert(conn);
do {
pn_event_t *event = pn_collector_next(conn->collector);
pn_event_t *event = peek_only ? pn_collector_peek(conn->collector) : pn_collector_next(conn->collector);
if (event) {
return pni_log_event(conn, event);
return peek_only ? event : pni_log_event(conn, event);
} else if (conn->connectpending) {
pni_raw_put_event(conn, PN_RAW_CONNECTION_CONNECTED);
conn->connectpending = false;
@@ -721,6 +723,14 @@ pn_event_t *pni_raw_event_next(pn_raw_connection_t *conn) {
} while (true);
}

pn_event_t *pni_raw_event_next(pn_raw_connection_t *conn) {
return pni_get_next_raw_event(conn, false);
}

pn_event_t *pni_raw_event_peek(pn_raw_connection_t *conn) {
return pni_get_next_raw_event(conn, true);
}

Member:
I think I would refactor pni_raw_event_next differently:

  • Refactor the event-generating if chain into a separate function that is called by ...next() and ...peek() (or, if you follow my earlier suggestion, ...has_events()).
  • Then wrap that with the pn_collector_next/peek logic.

Just in case it's not clear, the do ... while (true) is a bit of a red herring and somewhat bad coding on my part, in that there is never more than one effective pass through the loop. There can be two, but only if there was no initial event. So it makes more sense to separate out the event-generating logic.

Hope that is clear enough.

Contributor Author:
Done, with has_events().

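An editorial illustration of the split the reviewer describes; the helper name pni_raw_generate_pending_events() and the exact factoring are assumptions, not the code that was eventually committed.

/* Editorial sketch only: generate any pending state-machine events into the
 * collector without reading from it. */
static void pni_raw_generate_pending_events(pn_raw_connection_t *conn) {
  if (conn->connectpending) {
    pni_raw_put_event(conn, PN_RAW_CONNECTION_CONNECTED);
    conn->connectpending = false;
  }
  /* ... the remaining pending-flag checks from the original if chain ... */
}

pn_event_t *pni_raw_event_next(pn_raw_connection_t *conn) {
  pn_event_t *e = pn_collector_next(conn->collector);
  if (!e) {
    pni_raw_generate_pending_events(conn);
    e = pn_collector_next(conn->collector);
  }
  return e ? pni_log_event(conn, e) : NULL;
}

pn_event_t *pni_raw_event_peek(pn_raw_connection_t *conn) {
  pn_event_t *e = pn_collector_peek(conn->collector);
  if (!e) {
    pni_raw_generate_pending_events(conn);
    e = pn_collector_peek(conn->collector);
  }
  return e;
}
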
void pni_raw_read_close(pn_raw_connection_t *conn) {
// If already fully closed nothing to do
if (pni_raw_rwclosed(conn)) {
@@ -781,6 +791,22 @@ void pni_raw_close(pn_raw_connection_t *conn) {
}
}

void pni_raw_async_disconnect(pn_raw_connection_t *conn) {
if (pni_raw_rwclosed(conn))
return;

if (!pni_raw_rclosed(conn)) {
conn->state = pni_raw_new_state(conn, conn_read_closed);
conn->rclosedpending = true;
}
if (!pni_raw_wclosed(conn)) {
pni_raw_release_buffers(conn);
conn->state = pni_raw_new_state(conn, conn_write_closed);
conn->wclosedpending = true;
}
pni_raw_disconnect(conn);
}

bool pn_raw_connection_is_read_closed(pn_raw_connection_t *conn) {
assert(conn);
return pni_raw_rclosed(conn);