/*-------------------------------------------------------------------------
 *
 * shm_mq.c
 *    single-reader, single-writer shared memory message queue
 *
 * Both the sender and the receiver must have a PGPROC; their respective
 * process latches are used for synchronization. Only the sender may send,
 * and only the receiver may receive. This is intended to allow a user
 * backend to communicate with worker backends that it has registered.
 *
 * Portions Copyright (c) 1996-2023, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * src/backend/storage/ipc/shm_mq.c
 *
 *-------------------------------------------------------------------------
 */

#include "postgres.h"

#include "miscadmin.h"
#include "pgstat.h"
#include "port/pg_bitutils.h"
#include "postmaster/bgworker.h"
#include "storage/procsignal.h"
#include "storage/shm_mq.h"
#include "storage/spin.h"
#include "utils/memutils.h"

/*
 * This structure represents the actual queue, stored in shared memory.
 *
 * Some notes on synchronization:
 *
 * mq_receiver and mq_bytes_read can only be changed by the receiver; and
 * mq_sender and mq_bytes_written can only be changed by the sender.
 * mq_receiver and mq_sender are protected by mq_mutex, although, importantly,
 * they cannot change once set, and thus may be read without a lock once this
 * is known to be the case.
 *
 * mq_bytes_read and mq_bytes_written are not protected by the mutex. Instead,
 * they are written atomically using 8 byte loads and stores. Memory barriers
 * must be carefully used to synchronize reads and writes of these values with
 * reads and writes of the actual data in mq_ring.
 *
 * mq_detached needs no locking. It can be set by either the sender or the
 * receiver, but only ever from false to true, so redundant writes don't
 * matter. It is important that if we set mq_detached and then set the
 * counterparty's latch, the counterparty must be certain to see the change
 * after waking up. Since SetLatch begins with a memory barrier and ResetLatch
 * ends with one, this should be OK.
 *
 * mq_ring_size and mq_ring_offset never change after initialization, and
 * can therefore be read without the lock.
 *
 * Importantly, mq_ring can be safely read and written without a lock.
 * At any given time, the difference between mq_bytes_read and
 * mq_bytes_written defines the number of bytes within mq_ring that contain
 * unread data, and mq_bytes_read defines the position where those bytes
 * begin. The sender can increase the number of unread bytes at any time,
 * but only the receiver can give license to overwrite those bytes, by
 * incrementing mq_bytes_read. Therefore, it's safe for the receiver to read
 * the unread bytes it knows to be present without the lock. Conversely,
 * the sender can write to the unused portion of the ring buffer without
 * the lock, because nobody else can be reading or writing those bytes. The
 * receiver could be making more bytes unused by incrementing mq_bytes_read,
 * but that's OK. Note that it would be unsafe for the receiver to read any
 * data it's already marked as read, or to write any data; and it would be
 * unsafe for the sender to reread any data after incrementing
 * mq_bytes_written, but fortunately there's no need for any of that.
 */
struct shm_mq
{
    slock_t     mq_mutex;
    PGPROC     *mq_receiver;
    PGPROC     *mq_sender;
    pg_atomic_uint64 mq_bytes_read;
    pg_atomic_uint64 mq_bytes_written;
    Size        mq_ring_size;
    bool        mq_detached;
    uint8       mq_ring_offset;
    char        mq_ring[FLEXIBLE_ARRAY_MEMBER];
};
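
/*
 * To make the accounting above concrete: both counters only ever grow, so
 * either side can derive the amount of unread data and the ring position
 * where it starts roughly as in the sketch below. This is only an
 * illustration; the local names (read, written, used, start) are
 * hypothetical, and the real logic lives in shm_mq_send_bytes and
 * shm_mq_receive_bytes.
 *
 *      uint64      read = pg_atomic_read_u64(&mq->mq_bytes_read);
 *      uint64      written = pg_atomic_read_u64(&mq->mq_bytes_written);
 *      uint64      used = written - read;
 *      Size        start = (Size) (read % (uint64) mq->mq_ring_size);
 *
 * "used" is what the receiver may consume starting at mq_ring[start], and
 * "mq_ring_size - used" is the free space the sender may fill, with the
 * appropriate memory barriers around the accesses to the data itself.
 */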

/*
 * This structure is a backend-private handle for access to a queue.
 *
 * mqh_queue is a pointer to the queue we've attached, and mqh_segment is
 * an optional pointer to the dynamic shared memory segment that contains it.
 * (If mqh_segment is provided, we register an on_dsm_detach callback to
 * make sure we detach from the queue before detaching from DSM.)
 *
 * If this queue is intended to connect the current process with a background
 * worker that started it, the user can pass a pointer to the worker handle
 * to shm_mq_attach(), and we'll store it in mqh_handle. The point of this
 * is to allow us to begin sending to or receiving from that queue before the
 * process we'll be communicating with has even been started. If it fails
 * to start, the handle will allow us to notice that and fail cleanly, rather
 * than waiting forever; see shm_mq_wait_internal. This is mostly useful in
 * simple cases - e.g. where there are just 2 processes communicating; in
 * more complex scenarios, every process may not have a BackgroundWorkerHandle
 * available, or may need to watch for the failure of more than one other
 * process at a time.
 *
 * When a message exists as a contiguous chunk of bytes in the queue - that is,
 * it is smaller than the size of the ring buffer and does not wrap around
 * the end - we return the message to the caller as a pointer into the buffer.
 * For messages that are larger or happen to wrap, we reassemble the message
 * locally by copying the chunks into a backend-local buffer. mqh_buffer is
 * the buffer, and mqh_buflen is the number of bytes allocated for it.
 *
 * mqh_send_pending is the number of bytes that have been written to the queue
 * but not yet reflected in shared memory. We do not update the shared counter
 * until the pending data amounts to at least 1/4th of the ring size or the
 * queue is full. This prevents frequent CPU cache misses, and it also avoids
 * frequent SetLatch() calls, which are quite expensive.
 *
 * mqh_partial_bytes, mqh_expected_bytes, and mqh_length_word_complete
 * are used to track the state of non-blocking operations. When the caller
 * attempts a non-blocking operation that returns SHM_MQ_WOULD_BLOCK, they
 * are expected to retry the call at a later time with the same argument;
 * we need to retain enough state to pick up where we left off.
 * mqh_length_word_complete tracks whether we are done sending or receiving
 * (whichever we're doing) the entire length word. mqh_partial_bytes tracks
 * the number of bytes read or written for either the length word or the
 * message itself, and mqh_expected_bytes - which is used only for reads -
 * tracks the expected total size of the payload.
 *
 * mqh_counterparty_attached tracks whether we know the counterparty to have
 * attached to the queue at some previous point. This lets us avoid some
 * mutex acquisitions.
 *
 * mqh_context is the memory context in effect at the time we attached to
 * the shm_mq. The shm_mq_handle itself is allocated in this context, and
 * we make sure any other allocations we do happen in this context as well,
 * to avoid nasty surprises.
 */
struct shm_mq_handle
{
    shm_mq     *mqh_queue;
    dsm_segment *mqh_segment;
    BackgroundWorkerHandle *mqh_handle;
    char       *mqh_buffer;
    Size        mqh_buflen;
    Size        mqh_consume_pending;
    Size        mqh_send_pending;
    Size        mqh_partial_bytes;
    Size        mqh_expected_bytes;
    bool        mqh_length_word_complete;
    bool        mqh_counterparty_attached;
    MemoryContext mqh_context;
};
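
/*
 * The non-blocking protocol above is easiest to see as a caller-side retry
 * loop. The sketch below is only an illustration (the "len" and "msg"
 * variables and the surrounding loop are hypothetical); real callers
 * typically fold this into their own latch-driven event loops.
 *
 *      for (;;)
 *      {
 *          res = shm_mq_receive(mqh, &len, &msg, true);
 *          if (res != SHM_MQ_WOULD_BLOCK)
 *              break;
 *          (void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, 0,
 *                           WAIT_EVENT_MQ_RECEIVE);
 *          ResetLatch(MyLatch);
 *          CHECK_FOR_INTERRUPTS();
 *      }
 *
 * The same pattern applies to shm_mq_send with nowait = true; the handle
 * fields above preserve the partial state between retries.
 */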

static void shm_mq_detach_internal(shm_mq *mq);
static shm_mq_result shm_mq_send_bytes(shm_mq_handle *mqh, Size nbytes,
                                       const void *data, bool nowait, Size *bytes_written);
static shm_mq_result shm_mq_receive_bytes(shm_mq_handle *mqh,
                                          Size bytes_needed, bool nowait, Size *nbytesp,
                                          void **datap);
static bool shm_mq_counterparty_gone(shm_mq *mq,
                                     BackgroundWorkerHandle *handle);
static bool shm_mq_wait_internal(shm_mq *mq, PGPROC **ptr,
                                 BackgroundWorkerHandle *handle);
static void shm_mq_inc_bytes_read(shm_mq *mq, Size n);
static void shm_mq_inc_bytes_written(shm_mq *mq, Size n);
static void shm_mq_detach_callback(dsm_segment *seg, Datum arg);

/* Minimum queue size is enough for header and at least one chunk of data. */
const Size  shm_mq_minimum_size =
MAXALIGN(offsetof(shm_mq, mq_ring)) + MAXIMUM_ALIGNOF;

#define MQH_INITIAL_BUFSIZE             8192

/*
 * Initialize a new shared message queue.
 */
shm_mq *
shm_mq_create(void *address, Size size)
{
    shm_mq     *mq = address;
    Size        data_offset = MAXALIGN(offsetof(shm_mq, mq_ring));

    /* If the size isn't MAXALIGN'd, just discard the odd bytes. */
    size = MAXALIGN_DOWN(size);

    /* Queue size must be large enough to hold some data. */
    Assert(size > data_offset);

    /* Initialize queue header. */
    SpinLockInit(&mq->mq_mutex);
    mq->mq_receiver = NULL;
    mq->mq_sender = NULL;
    pg_atomic_init_u64(&mq->mq_bytes_read, 0);
    pg_atomic_init_u64(&mq->mq_bytes_written, 0);
    mq->mq_ring_size = size - data_offset;
    mq->mq_detached = false;
    mq->mq_ring_offset = data_offset - offsetof(shm_mq, mq_ring);

    return mq;
}
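
/*
 * For illustration, the usual way to carve a queue out of a dynamic shared
 * memory segment looks roughly like this (a sketch; "queue_size" and "seg"
 * are hypothetical, and error handling is omitted):
 *
 *      dsm_segment *seg = dsm_create(queue_size, 0);
 *      shm_mq     *mq = shm_mq_create(dsm_segment_address(seg), queue_size);
 *
 *      shm_mq_set_receiver(mq, MyProc);
 *
 * The counterparty later calls shm_mq_set_sender() on the same queue, and
 * both sides then call shm_mq_attach() below.
 */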

/*
 * Set the identity of the process that will receive from a shared message
 * queue.
 */
void
shm_mq_set_receiver(shm_mq *mq, PGPROC *proc)
{
    PGPROC     *sender;

    SpinLockAcquire(&mq->mq_mutex);
    Assert(mq->mq_receiver == NULL);
    mq->mq_receiver = proc;
    sender = mq->mq_sender;
    SpinLockRelease(&mq->mq_mutex);

    if (sender != NULL)
        SetLatch(&sender->procLatch);
}

/*
 * Set the identity of the process that will send to a shared message queue.
 */
void
shm_mq_set_sender(shm_mq *mq, PGPROC *proc)
{
    PGPROC     *receiver;

    SpinLockAcquire(&mq->mq_mutex);
    Assert(mq->mq_sender == NULL);
    mq->mq_sender = proc;
    receiver = mq->mq_receiver;
    SpinLockRelease(&mq->mq_mutex);

    if (receiver != NULL)
        SetLatch(&receiver->procLatch);
}

/*
 * Get the configured receiver.
 */
PGPROC *
shm_mq_get_receiver(shm_mq *mq)
{
    PGPROC     *receiver;

    SpinLockAcquire(&mq->mq_mutex);
    receiver = mq->mq_receiver;
    SpinLockRelease(&mq->mq_mutex);

    return receiver;
}

/*
 * Get the configured sender.
 */
PGPROC *
shm_mq_get_sender(shm_mq *mq)
{
    PGPROC     *sender;

    SpinLockAcquire(&mq->mq_mutex);
    sender = mq->mq_sender;
    SpinLockRelease(&mq->mq_mutex);

    return sender;
}

/*
 * Attach to a shared message queue so we can send or receive messages.
 *
 * The memory context in effect at the time this function is called should
 * be one which will last for at least as long as the message queue itself.
 * We'll allocate the handle in that context, and future allocations that
 * are needed to buffer incoming data will happen in that context as well.
 *
 * If seg != NULL, the queue will be automatically detached when that dynamic
 * shared memory segment is detached.
 *
 * If handle != NULL, the queue can be read or written even before the
 * other process has attached. We'll wait for it to do so if needed. The
 * handle must be for a background worker initialized with bgw_notify_pid
 * equal to our PID.
 *
 * shm_mq_detach() should be called when done. This will free the
 * shm_mq_handle and mark the queue itself as detached, so that our
 * counterpart won't get stuck waiting for us to fill or drain the queue
 * after we've already lost interest.
 */
shm_mq_handle *
shm_mq_attach(shm_mq *mq, dsm_segment *seg, BackgroundWorkerHandle *handle)
{
    shm_mq_handle *mqh = palloc(sizeof(shm_mq_handle));

    Assert(mq->mq_receiver == MyProc || mq->mq_sender == MyProc);
    mqh->mqh_queue = mq;
    mqh->mqh_segment = seg;
    mqh->mqh_handle = handle;
    mqh->mqh_buffer = NULL;
    mqh->mqh_buflen = 0;
    mqh->mqh_consume_pending = 0;
    mqh->mqh_send_pending = 0;
    mqh->mqh_partial_bytes = 0;
    mqh->mqh_expected_bytes = 0;
    mqh->mqh_length_word_complete = false;
    mqh->mqh_counterparty_attached = false;
    mqh->mqh_context = CurrentMemoryContext;

    if (seg != NULL)
        on_dsm_detach(seg, shm_mq_detach_callback, PointerGetDatum(mq));

    return mqh;
}
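
/*
 * Continuing the sketch above: after setting itself up as receiver, a user
 * backend that has registered a worker with bgw_notify_pid = MyProcPid can
 * attach like this ("wh" being the hypothetical BackgroundWorkerHandle
 * obtained from RegisterDynamicBackgroundWorker):
 *
 *      shm_mq_handle *mqh = shm_mq_attach(mq, seg, wh);
 *
 * Passing seg ties the queue's lifetime to the DSM segment; passing wh lets
 * shm_mq_receive() report SHM_MQ_DETACHED instead of hanging if the worker
 * never manages to start.
 */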

/*
 * Associate a BackgroundWorkerHandle with a shm_mq_handle just as if it had
 * been passed to shm_mq_attach.
 */
void
shm_mq_set_handle(shm_mq_handle *mqh, BackgroundWorkerHandle *handle)
{
    Assert(mqh->mqh_handle == NULL);
    mqh->mqh_handle = handle;
}

/*
 * Write a message into a shared message queue.
 */
shm_mq_result
shm_mq_send(shm_mq_handle *mqh, Size nbytes, const void *data, bool nowait,
            bool force_flush)
{
    shm_mq_iovec iov;

    iov.data = data;
    iov.len = nbytes;

    return shm_mq_sendv(mqh, &iov, 1, nowait, force_flush);
}
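
/*
 * For callers that already have a message in two pieces (say, a fixed header
 * followed by a payload), shm_mq_sendv() below avoids assembling them into
 * one buffer first. A sketch (the "hdr", "payload", and "payload_len"
 * variables are hypothetical):
 *
 *      shm_mq_iovec iov[2];
 *
 *      iov[0].data = (char *) &hdr;
 *      iov[0].len = sizeof(hdr);
 *      iov[1].data = payload;
 *      iov[1].len = payload_len;
 *      res = shm_mq_sendv(mqh, iov, 2, false, true);
 */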

/*
 * Write a message into a shared message queue, gathered from multiple
 * addresses.
 *
 * When nowait = false, we'll wait on our process latch when the ring buffer
 * fills up, and then continue writing once the receiver has drained some data.
 * The process latch is reset after each wait.
 *
 * When nowait = true, we do not manipulate the state of the process latch;
 * instead, if the buffer becomes full, we return SHM_MQ_WOULD_BLOCK. In
 * this case, the caller should call this function again, with the same
 * arguments, each time the process latch is set. (Once begun, the sending
 * of a message cannot be aborted except by detaching from the queue; changing
 * the length or payload will corrupt the queue.)
 *
 * When force_flush = true, we immediately update the shm_mq's mq_bytes_written
 * and notify the receiver (if it is already attached). Otherwise, we don't
 * update it until we have written an amount of data greater than 1/4th of the
 * ring size.
 */
shm_mq_result
shm_mq_sendv(shm_mq_handle *mqh, shm_mq_iovec *iov, int iovcnt, bool nowait,
             bool force_flush)
{
    shm_mq_result res;
    shm_mq     *mq = mqh->mqh_queue;
    PGPROC     *receiver;
    Size        nbytes = 0;
    Size        bytes_written;
    int         i;
    int         which_iov = 0;
    Size        offset;

    Assert(mq->mq_sender == MyProc);

    /* Compute total size of write. */
    for (i = 0; i < iovcnt; ++i)
        nbytes += iov[i].len;

    /* Prevent writing messages overwhelming the receiver. */
    if (nbytes > MaxAllocSize)
        ereport(ERROR,
                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                 errmsg("cannot send a message of size %zu via shared memory queue",
                        nbytes)));

    /* Try to write, or finish writing, the length word into the buffer. */
    while (!mqh->mqh_length_word_complete)
    {
        Assert(mqh->mqh_partial_bytes < sizeof(Size));
        res = shm_mq_send_bytes(mqh, sizeof(Size) - mqh->mqh_partial_bytes,
                                ((char *) &nbytes) + mqh->mqh_partial_bytes,
                                nowait, &bytes_written);

        if (res == SHM_MQ_DETACHED)
        {
            /* Reset state in case caller tries to send another message. */
            mqh->mqh_partial_bytes = 0;
            mqh->mqh_length_word_complete = false;
            return res;
        }
        mqh->mqh_partial_bytes += bytes_written;

        if (mqh->mqh_partial_bytes >= sizeof(Size))
        {
            Assert(mqh->mqh_partial_bytes == sizeof(Size));

            mqh->mqh_partial_bytes = 0;
            mqh->mqh_length_word_complete = true;
        }

        if (res != SHM_MQ_SUCCESS)
            return res;

        /* Length word can't be split unless bigger than required alignment. */
        Assert(mqh->mqh_length_word_complete || sizeof(Size) > MAXIMUM_ALIGNOF);
    }

    /* Write the actual data bytes into the buffer. */
    Assert(mqh->mqh_partial_bytes <= nbytes);
    offset = mqh->mqh_partial_bytes;
    do
    {
        Size        chunksize;

        /* Figure out which bytes need to be sent next. */
        if (offset >= iov[which_iov].len)
        {
            offset -= iov[which_iov].len;
            ++which_iov;
            if (which_iov >= iovcnt)
                break;
            continue;
        }

        /*
         * We want to avoid copying the data if at all possible, but every
         * chunk of bytes we write into the queue has to be MAXALIGN'd, except
         * the last. Thus, if a chunk other than the last one ends on a
         * non-MAXALIGN'd boundary, we have to combine the tail end of its
         * data with data from one or more following chunks until we either
         * reach the last chunk or accumulate a number of bytes which is
         * MAXALIGN'd.
         */
        if (which_iov + 1 < iovcnt &&
            offset + MAXIMUM_ALIGNOF > iov[which_iov].len)
        {
            char        tmpbuf[MAXIMUM_ALIGNOF];
            int         j = 0;

            for (;;)
            {
                if (offset < iov[which_iov].len)
                {
                    tmpbuf[j] = iov[which_iov].data[offset];
                    j++;
                    offset++;
                    if (j == MAXIMUM_ALIGNOF)
                        break;
                }
                else
                {
                    offset -= iov[which_iov].len;
                    which_iov++;
                    if (which_iov >= iovcnt)
                        break;
                }
            }

            res = shm_mq_send_bytes(mqh, j, tmpbuf, nowait, &bytes_written);

            if (res == SHM_MQ_DETACHED)
            {
                /* Reset state in case caller tries to send another message. */
                mqh->mqh_partial_bytes = 0;
                mqh->mqh_length_word_complete = false;
                return res;
            }

            mqh->mqh_partial_bytes += bytes_written;
            if (res != SHM_MQ_SUCCESS)
                return res;
            continue;
        }

        /*
         * If this is the last chunk, we can write all the data, even if it
         * isn't a multiple of MAXIMUM_ALIGNOF. Otherwise, we need to
         * MAXALIGN_DOWN the write size.
         */
        chunksize = iov[which_iov].len - offset;
        if (which_iov + 1 < iovcnt)
            chunksize = MAXALIGN_DOWN(chunksize);
        res = shm_mq_send_bytes(mqh, chunksize, &iov[which_iov].data[offset],
                                nowait, &bytes_written);

        if (res == SHM_MQ_DETACHED)
        {
            /* Reset state in case caller tries to send another message. */
            mqh->mqh_length_word_complete = false;
            mqh->mqh_partial_bytes = 0;
            return res;
        }

        mqh->mqh_partial_bytes += bytes_written;
        offset += bytes_written;
        if (res != SHM_MQ_SUCCESS)
            return res;
    } while (mqh->mqh_partial_bytes < nbytes);

    /* Reset for next message. */
    mqh->mqh_partial_bytes = 0;
    mqh->mqh_length_word_complete = false;

    /* If queue has been detached, let caller know. */
    if (mq->mq_detached)
        return SHM_MQ_DETACHED;

    /*
     * If the counterparty is known to have attached, we can read mq_receiver
     * without acquiring the spinlock. Otherwise, more caution is needed.
     */
    if (mqh->mqh_counterparty_attached)
        receiver = mq->mq_receiver;
    else
    {
        SpinLockAcquire(&mq->mq_mutex);
        receiver = mq->mq_receiver;
        SpinLockRelease(&mq->mq_mutex);
        if (receiver != NULL)
            mqh->mqh_counterparty_attached = true;
    }

    /*
     * If the caller has requested force flush or we have written more than
     * 1/4 of the ring size, mark it as written in shared memory and notify
     * the receiver.
     */
    if (force_flush || mqh->mqh_send_pending > (mq->mq_ring_size >> 2))
    {
        shm_mq_inc_bytes_written(mq, mqh->mqh_send_pending);
        if (receiver != NULL)
            SetLatch(&receiver->procLatch);
        mqh->mqh_send_pending = 0;
    }

    return SHM_MQ_SUCCESS;
}
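
/*
 * As an illustration of the flushing behavior: a sender streaming many small
 * messages can defer the shared-memory update and the receiver wakeup until
 * the end of a batch (a sketch; "nrows", "rows", and "rowlen" are
 * hypothetical, and return values should really be checked):
 *
 *      for (i = 0; i < nrows - 1; ++i)
 *          (void) shm_mq_send(mqh, rowlen[i], rows[i], false, false);
 *      (void) shm_mq_send(mqh, rowlen[i], rows[i], false, true);
 *
 * With force_flush = false the bytes may linger in mqh_send_pending until
 * they exceed 1/4 of the ring size; passing true on the last message updates
 * mq_bytes_written and sets the receiver's latch immediately.
 */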

/*
 * Receive a message from a shared message queue.
 *
 * We set *nbytesp to the message length and *datap to point to the message
 * payload. If the entire message exists in the queue as a single,
 * contiguous chunk, *datap will point directly into shared memory; otherwise,
 * it will point to a temporary buffer. This mostly avoids data copying in
 * the hoped-for case where messages are short compared to the buffer size,
 * while still allowing longer messages. In either case, the return value
 * remains valid until the next receive operation is performed on the queue.
 *
 * When nowait = false, we'll wait on our process latch when the ring buffer
 * is empty and we have not yet received a full message. The sender will
 * set our process latch after more data has been written, and we'll resume
 * processing. Each call will therefore return a complete message
 * (unless the sender detaches the queue).
 *
 * When nowait = true, we do not manipulate the state of the process latch;
 * instead, whenever the buffer is empty and we need to read from it, we
 * return SHM_MQ_WOULD_BLOCK. In this case, the caller should call this
 * function again after the process latch has been set.
 */
shm_mq_result
shm_mq_receive(shm_mq_handle *mqh, Size *nbytesp, void **datap, bool nowait)
{
    shm_mq     *mq = mqh->mqh_queue;
    shm_mq_result res;
    Size        rb = 0;
    Size        nbytes;
    void       *rawdata;

    Assert(mq->mq_receiver == MyProc);

    /* We can't receive data until the sender has attached. */
    if (!mqh->mqh_counterparty_attached)
    {
        if (nowait)
        {
            int         counterparty_gone;

            /*
             * We shouldn't return at this point at all unless the sender
             * hasn't attached yet. However, the correct return value depends
             * on whether the sender is still attached. If we first test
             * whether the sender has ever attached and then test whether the
             * sender has detached, there's a race condition: a sender that
             * attaches and detaches very quickly might fool us into thinking
             * the sender never attached at all. So, test whether our
             * counterparty is definitively gone first, and only afterwards
             * check whether the sender ever attached in the first place.
             */
            counterparty_gone = shm_mq_counterparty_gone(mq, mqh->mqh_handle);
            if (shm_mq_get_sender(mq) == NULL)
            {
                if (counterparty_gone)
                    return SHM_MQ_DETACHED;
                else
                    return SHM_MQ_WOULD_BLOCK;
            }
        }
        else if (!shm_mq_wait_internal(mq, &mq->mq_sender, mqh->mqh_handle)
                 && shm_mq_get_sender(mq) == NULL)
        {
            mq->mq_detached = true;
            return SHM_MQ_DETACHED;
        }
        mqh->mqh_counterparty_attached = true;
    }

    /*
     * If we've consumed an amount of data greater than 1/4th of the ring
     * size, mark it consumed in shared memory. We try to avoid doing this
     * unnecessarily when only a small amount of data has been consumed,
     * because SetLatch() is fairly expensive and we don't want to do it too
     * often.
     */
    if (mqh->mqh_consume_pending > mq->mq_ring_size / 4)
    {
        shm_mq_inc_bytes_read(mq, mqh->mqh_consume_pending);
        mqh->mqh_consume_pending = 0;
    }

    /* Try to read, or finish reading, the length word from the buffer. */
    while (!mqh->mqh_length_word_complete)
    {
        /* Try to receive the message length word. */
        Assert(mqh->mqh_partial_bytes < sizeof(Size));
        res = shm_mq_receive_bytes(mqh, sizeof(Size) - mqh->mqh_partial_bytes,
                                   nowait, &rb, &rawdata);
        if (res != SHM_MQ_SUCCESS)
            return res;

        /*
         * Hopefully, we'll receive the entire message length word at once.
         * But if sizeof(Size) > MAXIMUM_ALIGNOF, then it might be split over
         * multiple reads.
         */
        if (mqh->mqh_partial_bytes == 0 && rb >= sizeof(Size))
        {
            Size        needed;

            nbytes = *(Size *) rawdata;

            /* If we've already got the whole message, we're done. */
            needed = MAXALIGN(sizeof(Size)) + MAXALIGN(nbytes);
            if (rb >= needed)
            {
                mqh->mqh_consume_pending += needed;
                *nbytesp = nbytes;
                *datap = ((char *) rawdata) + MAXALIGN(sizeof(Size));
                return SHM_MQ_SUCCESS;
            }

            /*
             * We don't have the whole message, but we at least have the whole
             * length word.
             */
            mqh->mqh_expected_bytes = nbytes;
            mqh->mqh_length_word_complete = true;
            mqh->mqh_consume_pending += MAXALIGN(sizeof(Size));
            rb -= MAXALIGN(sizeof(Size));
        }
        else
        {
            Size        lengthbytes;

            /* Can't be split unless bigger than required alignment. */
            Assert(sizeof(Size) > MAXIMUM_ALIGNOF);

            /* Message word is split; need buffer to reassemble. */
            if (mqh->mqh_buffer == NULL)
            {
                mqh->mqh_buffer = MemoryContextAlloc(mqh->mqh_context,
                                                     MQH_INITIAL_BUFSIZE);
                mqh->mqh_buflen = MQH_INITIAL_BUFSIZE;
            }
            Assert(mqh->mqh_buflen >= sizeof(Size));

            /* Copy partial length word; remember to consume it. */
            if (mqh->mqh_partial_bytes + rb > sizeof(Size))
                lengthbytes = sizeof(Size) - mqh->mqh_partial_bytes;
            else
                lengthbytes = rb;
            memcpy(&mqh->mqh_buffer[mqh->mqh_partial_bytes], rawdata,
                   lengthbytes);
            mqh->mqh_partial_bytes += lengthbytes;
            mqh->mqh_consume_pending += MAXALIGN(lengthbytes);
            rb -= lengthbytes;

            /* If we now have the whole word, we're ready to read payload. */
            if (mqh->mqh_partial_bytes >= sizeof(Size))
            {
                Assert(mqh->mqh_partial_bytes == sizeof(Size));
                mqh->mqh_expected_bytes = *(Size *) mqh->mqh_buffer;
                mqh->mqh_length_word_complete = true;
                mqh->mqh_partial_bytes = 0;
            }
        }
    }
    nbytes = mqh->mqh_expected_bytes;

    /*
     * Should be disallowed on the sending side already, but better check and
     * error out on the receiver side as well rather than trying to read a
     * prohibitively large message.
     */
    if (nbytes > MaxAllocSize)
        ereport(ERROR,
                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                 errmsg("invalid message size %zu in shared memory queue",
                        nbytes)));

    if (mqh->mqh_partial_bytes == 0)
    {
        /*
         * Try to obtain the whole message in a single chunk. If this works,
         * we need not copy the data and can return a pointer directly into
         * shared memory.
         */
        res = shm_mq_receive_bytes(mqh, nbytes, nowait, &rb, &rawdata);
        if (res != SHM_MQ_SUCCESS)
            return res;
        if (rb >= nbytes)
        {
            mqh->mqh_length_word_complete = false;
            mqh->mqh_consume_pending += MAXALIGN(nbytes);
            *nbytesp = nbytes;
            *datap = rawdata;
            return SHM_MQ_SUCCESS;
        }

        /*
         * The message has wrapped the buffer. We'll need to copy it in order
         * to return it to the client in one chunk. First, make sure we have
         * a large enough buffer available.
         */
        if (mqh->mqh_buflen < nbytes)
        {
            Size        newbuflen;

            /*
             * Increase size to the next power of 2 that's >= nbytes, but
             * limit to MaxAllocSize.
             */
            newbuflen = pg_nextpower2_size_t(nbytes);
            newbuflen = Min(newbuflen, MaxAllocSize);

            if (mqh->mqh_buffer != NULL)
            {
                pfree(mqh->mqh_buffer);
                mqh->mqh_buffer = NULL;
                mqh->mqh_buflen = 0;
            }
            mqh->mqh_buffer = MemoryContextAlloc(mqh->mqh_context, newbuflen);
            mqh->mqh_buflen = newbuflen;
        }
    }

    /* Loop until we've copied the entire message. */
    for (;;)
    {
        Size        still_needed;

        /* Copy as much as we can. */
        Assert(mqh->mqh_partial_bytes + rb <= nbytes);
        if (rb > 0)
        {
            memcpy(&mqh->mqh_buffer[mqh->mqh_partial_bytes], rawdata, rb);
            mqh->mqh_partial_bytes += rb;
        }

        /*
         * Update count of bytes that can be consumed, accounting for
         * alignment padding. Note that this will never actually insert any
         * padding except at the end of a message, because the buffer size is
         * a multiple of MAXIMUM_ALIGNOF, and each read and write is as well.
         */
        Assert(mqh->mqh_partial_bytes == nbytes || rb == MAXALIGN(rb));
        mqh->mqh_consume_pending += MAXALIGN(rb);

        /* If we got all the data, exit the loop. */
        if (mqh->mqh_partial_bytes >= nbytes)
            break;

        /* Wait for some more data. */
        still_needed = nbytes - mqh->mqh_partial_bytes;
        res = shm_mq_receive_bytes(mqh, still_needed, nowait, &rb, &rawdata);
        if (res != SHM_MQ_SUCCESS)
            return res;
        if (rb > still_needed)
            rb = still_needed;
    }

    /* Return the complete message, and reset for next message. */
    *nbytesp = nbytes;
    *datap = mqh->mqh_buffer;
    mqh->mqh_length_word_complete = false;
    mqh->mqh_partial_bytes = 0;
    return SHM_MQ_SUCCESS;
}
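
/*
 * Note that the pointer handed back through *datap is only good until the
 * next receive (or detach) on the same queue: it may point into the shared
 * ring or into mqh_buffer, both of which are reused. A caller that needs
 * the payload to survive longer must copy it, roughly as sketched below
 * ("copy" is a hypothetical local variable):
 *
 *      res = shm_mq_receive(mqh, &len, &data, false);
 *      if (res == SHM_MQ_SUCCESS)
 *      {
 *          char   *copy = palloc(len);
 *
 *          memcpy(copy, data, len);
 *      }
 */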

/*
 * Wait for the other process that's supposed to use this queue to attach
 * to it.
 *
 * The return value is SHM_MQ_DETACHED if the worker has already detached or
 * if it dies; it is SHM_MQ_SUCCESS if we detect that the worker has attached.
 * Note that we will only be able to detect that the worker has died before
 * attaching if a background worker handle was passed to shm_mq_attach().
 */
shm_mq_result
shm_mq_wait_for_attach(shm_mq_handle *mqh)
{
    shm_mq     *mq = mqh->mqh_queue;
    PGPROC    **victim;

    if (shm_mq_get_receiver(mq) == MyProc)
        victim = &mq->mq_sender;
    else
    {
        Assert(shm_mq_get_sender(mq) == MyProc);
        victim = &mq->mq_receiver;
    }

    if (shm_mq_wait_internal(mq, victim, mqh->mqh_handle))
        return SHM_MQ_SUCCESS;
    else
        return SHM_MQ_DETACHED;
}

/*
 * Detach from a shared message queue, and destroy the shm_mq_handle.
 */
void
shm_mq_detach(shm_mq_handle *mqh)
{
	/* Before detaching, notify the receiver about any already-written data. */
	if (mqh->mqh_send_pending > 0)
	{
		shm_mq_inc_bytes_written(mqh->mqh_queue, mqh->mqh_send_pending);
		mqh->mqh_send_pending = 0;
	}

	/* Notify counterparty that we're outta here. */
	shm_mq_detach_internal(mqh->mqh_queue);

	/* Cancel on_dsm_detach callback, if any. */
	if (mqh->mqh_segment)
		cancel_on_dsm_detach(mqh->mqh_segment,
							 shm_mq_detach_callback,
							 PointerGetDatum(mqh->mqh_queue));

	/* Release local memory associated with handle. */
	if (mqh->mqh_buffer != NULL)
		pfree(mqh->mqh_buffer);
	pfree(mqh);
}
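
/*
 * Illustrative sketch only (added for documentation; not part of the queue
 * implementation, and compiled out by default): a receiver draining a queue
 * until the sender detaches, then calling shm_mq_detach() to release the
 * handle.  The helper name is hypothetical; message processing and error
 * handling are elided.
 */
#ifdef SHM_MQ_USAGE_EXAMPLE
static void
example_drain_and_detach(shm_mq_handle *mqh)
{
	for (;;)
	{
		shm_mq_result res;
		Size		nbytes;
		void	   *data;

		/* Block until a complete message arrives or the sender detaches. */
		res = shm_mq_receive(mqh, &nbytes, &data, false);
		if (res != SHM_MQ_SUCCESS)
			break;

		/*
		 * ... process the nbytes of data here; the returned pointer is only
		 * guaranteed valid until the next operation on the queue ...
		 */
	}

	/* Notify the counterparty and free the handle and its buffer. */
	shm_mq_detach(mqh);
}
#endif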

/*
 * Notify counterparty that we're detaching from shared message queue.
 *
 * The purpose of this function is to make sure that the process
 * with which we're communicating doesn't block forever waiting for us to
 * fill or drain the queue once we've lost interest.  When the sender
 * detaches, the receiver can read any messages remaining in the queue;
 * further reads will return SHM_MQ_DETACHED.  If the receiver detaches,
 * further attempts to send messages will likewise return SHM_MQ_DETACHED.
 *
 * This is separated out from shm_mq_detach() because if the on_dsm_detach
 * callback fires, we only want to do this much.  We do not try to touch
 * the local shm_mq_handle, as it may have been pfree'd already.
 */
static void
shm_mq_detach_internal(shm_mq *mq)
{
	PGPROC	   *victim;

	SpinLockAcquire(&mq->mq_mutex);
	if (mq->mq_sender == MyProc)
		victim = mq->mq_receiver;
	else
	{
		Assert(mq->mq_receiver == MyProc);
		victim = mq->mq_sender;
	}
	mq->mq_detached = true;
	SpinLockRelease(&mq->mq_mutex);

	if (victim != NULL)
		SetLatch(&victim->procLatch);
}

/*
 * Get the shm_mq from handle.
 */
shm_mq *
shm_mq_get_queue(shm_mq_handle *mqh)
{
	return mqh->mqh_queue;
}

/*
 * Write bytes into a shared message queue.
 */
static shm_mq_result
shm_mq_send_bytes(shm_mq_handle *mqh, Size nbytes, const void *data,
				  bool nowait, Size *bytes_written)
{
	shm_mq	   *mq = mqh->mqh_queue;
	Size		sent = 0;
	uint64		used;
	Size		ringsize = mq->mq_ring_size;
	Size		available;

	while (sent < nbytes)
	{
		uint64		rb;
		uint64		wb;

		/* Compute number of ring buffer bytes used and available. */
		rb = pg_atomic_read_u64(&mq->mq_bytes_read);
		wb = pg_atomic_read_u64(&mq->mq_bytes_written) + mqh->mqh_send_pending;
		Assert(wb >= rb);
		used = wb - rb;
		Assert(used <= ringsize);
		available = Min(ringsize - used, nbytes - sent);

		/*
		 * Bail out if the queue has been detached.  Note that we would be in
		 * trouble if the compiler decided to cache the value of
		 * mq->mq_detached in a register or on the stack across loop
		 * iterations.  It probably shouldn't do that anyway since we'll
		 * always return, call an external function that performs a system
		 * call, or reach a memory barrier at some point later in the loop,
		 * but just to be sure, insert a compiler barrier here.
		 */
		pg_compiler_barrier();
		if (mq->mq_detached)
		{
			*bytes_written = sent;
			return SHM_MQ_DETACHED;
		}

		if (available == 0 && !mqh->mqh_counterparty_attached)
		{
			/*
			 * The queue is full, so if the receiver isn't yet known to be
			 * attached, we must wait for that to happen.
			 */
			if (nowait)
			{
				if (shm_mq_counterparty_gone(mq, mqh->mqh_handle))
				{
					*bytes_written = sent;
					return SHM_MQ_DETACHED;
				}
				if (shm_mq_get_receiver(mq) == NULL)
				{
					*bytes_written = sent;
					return SHM_MQ_WOULD_BLOCK;
				}
			}
			else if (!shm_mq_wait_internal(mq, &mq->mq_receiver,
										   mqh->mqh_handle))
			{
				mq->mq_detached = true;
				*bytes_written = sent;
				return SHM_MQ_DETACHED;
			}
			mqh->mqh_counterparty_attached = true;

			/*
			 * The receiver may have read some data after attaching, so we
			 * must not wait without rechecking the queue state.
			 */
		}
		else if (available == 0)
		{
			/* Update the pending send bytes in the shared memory. */
			shm_mq_inc_bytes_written(mq, mqh->mqh_send_pending);

			/*
			 * Since mqh->mqh_counterparty_attached is known to be true at
			 * this point, mq_receiver has been set, and it can't change once
			 * set.  Therefore, we can read it without acquiring the spinlock.
			 */
			Assert(mqh->mqh_counterparty_attached);
			SetLatch(&mq->mq_receiver->procLatch);

			/*
			 * We have just updated the mqh_send_pending bytes in the shared
			 * memory so reset it.
			 */
			mqh->mqh_send_pending = 0;

			/* Skip manipulation of our latch if nowait = true. */
			if (nowait)
			{
				*bytes_written = sent;
				return SHM_MQ_WOULD_BLOCK;
			}

			/*
			 * Wait for our latch to be set.  It might already be set for
			 * some unrelated reason, but that'll just result in one extra
			 * trip through the loop.  It's worth it to avoid resetting the
			 * latch at top of loop, because setting an already-set latch is
			 * much cheaper than setting one that has been reset.
			 */
			(void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, 0,
							 WAIT_EVENT_MESSAGE_QUEUE_SEND);

			/* Reset the latch so we don't spin. */
			ResetLatch(MyLatch);

			/* An interrupt may have occurred while we were waiting. */
			CHECK_FOR_INTERRUPTS();
		}
		else
		{
			Size		offset;
			Size		sendnow;

			offset = wb % (uint64) ringsize;
			sendnow = Min(available, ringsize - offset);

			/*
			 * Write as much data as we can via a single memcpy(). Make sure
			 * these writes happen after the read of mq_bytes_read, above.
			 * This barrier pairs with the one in shm_mq_inc_bytes_read.
			 * (Since we're separating the read of mq_bytes_read from a
			 * subsequent write to mq_ring, we need a full barrier here.)
			 */
			pg_memory_barrier();
			memcpy(&mq->mq_ring[mq->mq_ring_offset + offset],
				   (char *) data + sent, sendnow);
			sent += sendnow;

			/*
			 * Update count of bytes written, with alignment padding.  Note
			 * that this will never actually insert any padding except at the
			 * end of a run of bytes, because the buffer size is a multiple
			 * of MAXIMUM_ALIGNOF, and each read is as well.
			 */
			Assert(sent == nbytes || sendnow == MAXALIGN(sendnow));

			/*
			 * For efficiency, we don't update the bytes written in the
			 * shared memory and also don't set the reader's latch here.
			 * Refer to the comments atop the shm_mq_handle structure for
			 * more information.
			 */
			mqh->mqh_send_pending += MAXALIGN(sendnow);
		}
	}

	*bytes_written = sent;
	return SHM_MQ_SUCCESS;
}
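
/*
 * Worked example (illustrative only, with made-up numbers) of the ring
 * arithmetic in shm_mq_send_bytes(): suppose mq_ring_size is 1024,
 * mq_bytes_read is 3600, and mq_bytes_written + mqh_send_pending is 3904.
 * Then used = 3904 - 3600 = 304 and, for a large remaining message,
 * available = 1024 - 304 = 720.  The write position is
 * offset = 3904 % 1024 = 832, so a single memcpy() can cover at most
 * ringsize - offset = 192 bytes before the ring wraps, and
 * sendnow = Min(720, 192) = 192.  After that write the sender's view of
 * bytes written is 4096, so the next loop iteration continues at
 * offset = 4096 % 1024 = 0, the start of the ring.
 */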

/*
 * Wait until at least bytes_needed bytes are available to be read from the
 * shared message queue, or until the buffer wraps around.  If the queue is
 * detached, returns SHM_MQ_DETACHED.  If nowait is specified and a wait
 * would be required, returns SHM_MQ_WOULD_BLOCK.  Otherwise, *datap is set
 * to the location at which data bytes can be read, *nbytesp is set to the
 * number of bytes which can be read at that address, and the return value
 * is SHM_MQ_SUCCESS.
 */
static shm_mq_result
shm_mq_receive_bytes(shm_mq_handle *mqh, Size bytes_needed, bool nowait,
					 Size *nbytesp, void **datap)
{
	shm_mq	   *mq = mqh->mqh_queue;
	Size		ringsize = mq->mq_ring_size;
	uint64		used;
	uint64		written;

	for (;;)
	{
		Size		offset;
		uint64		read;

		/* Get bytes written, so we can compute what's available to read. */
		written = pg_atomic_read_u64(&mq->mq_bytes_written);

		/*
		 * Get bytes read.  Include bytes we could consume but have not yet
		 * consumed.
		 */
		read = pg_atomic_read_u64(&mq->mq_bytes_read) +
			mqh->mqh_consume_pending;
		used = written - read;
		Assert(used <= ringsize);
		offset = read % (uint64) ringsize;

		/* If we have enough data or buffer has wrapped, we're done. */
		if (used >= bytes_needed || offset + used >= ringsize)
		{
			*nbytesp = Min(used, ringsize - offset);
			*datap = &mq->mq_ring[mq->mq_ring_offset + offset];

			/*
			 * Separate the read of mq_bytes_written, above, from caller's
			 * attempt to read the data itself.  Pairs with the barrier in
			 * shm_mq_inc_bytes_written.
			 */
			pg_read_barrier();
			return SHM_MQ_SUCCESS;
		}

		/*
		 * Fall out before waiting if the queue has been detached.
		 *
		 * Note that we don't check for this until *after* considering
		 * whether the data already available is enough, since the receiver
		 * can finish receiving a message stored in the buffer even after the
		 * sender has detached.
		 */
		if (mq->mq_detached)
		{
			/*
			 * If the writer advanced mq_bytes_written and then set
			 * mq_detached, we might not have read the final value of
			 * mq_bytes_written above.  Insert a read barrier and then check
			 * again if mq_bytes_written has advanced.
			 */
			pg_read_barrier();
			if (written != pg_atomic_read_u64(&mq->mq_bytes_written))
				continue;

			return SHM_MQ_DETACHED;
		}

		/*
		 * We didn't get enough data to satisfy the request, so mark any data
		 * previously-consumed as read to make more buffer space.
		 */
		if (mqh->mqh_consume_pending > 0)
		{
			shm_mq_inc_bytes_read(mq, mqh->mqh_consume_pending);
			mqh->mqh_consume_pending = 0;
		}

		/* Skip manipulation of our latch if nowait = true. */
		if (nowait)
			return SHM_MQ_WOULD_BLOCK;

		/*
		 * Wait for our latch to be set.  It might already be set for some
		 * unrelated reason, but that'll just result in one extra trip
		 * through the loop.  It's worth it to avoid resetting the latch at
		 * top of loop, because setting an already-set latch is much cheaper
		 * than setting one that has been reset.
		 */
		(void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, 0,
						 WAIT_EVENT_MESSAGE_QUEUE_RECEIVE);

		/* Reset the latch so we don't spin. */
		ResetLatch(MyLatch);

		/* An interrupt may have occurred while we were waiting. */
		CHECK_FOR_INTERRUPTS();
	}
}
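
/*
 * Worked example (illustrative only, with made-up numbers) of the wraparound
 * case in shm_mq_receive_bytes(): suppose mq_ring_size is 1024, the read
 * position (mq_bytes_read + mqh_consume_pending) is 960, and mq_bytes_written
 * is 1056.  Then used = 96 and offset = 960, so even if the caller asked for
 * bytes_needed = 200, offset + used = 1056 >= 1024 and we return immediately
 * with *nbytesp = Min(96, 1024 - 960) = 64: the caller copies those 64 bytes
 * into its local buffer and calls back here for the rest, which will be
 * found starting at offset 0.
 */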

/*
 * Test whether a counterparty who may not even be alive yet is definitely gone.
 */
static bool
shm_mq_counterparty_gone(shm_mq *mq, BackgroundWorkerHandle *handle)
{
	pid_t		pid;

	/* If the queue has been detached, counterparty is definitely gone. */
	if (mq->mq_detached)
		return true;

	/* If there's a handle, check worker status. */
	if (handle != NULL)
	{
		BgwHandleStatus status;

		/* Check for unexpected worker death. */
		status = GetBackgroundWorkerPid(handle, &pid);
		if (status != BGWH_STARTED && status != BGWH_NOT_YET_STARTED)
		{
			/* Mark it detached, just to make it official. */
			mq->mq_detached = true;
			return true;
		}
	}

	/* Counterparty is not definitively gone. */
	return false;
}

/*
 * This is used when a process is waiting for its counterpart to attach to the
 * queue.  We exit when the other process attaches as expected, or, if
 * handle != NULL, when the referenced background process or the postmaster
 * dies.  Note that if handle == NULL, and the process fails to attach, we'll
 * potentially get stuck here forever waiting for a process that may never
 * start.  We do check for interrupts, though.
 *
 * ptr is a pointer to the memory address that we're expecting to become
 * non-NULL when our counterpart attaches to the queue.
 */
static bool
shm_mq_wait_internal(shm_mq *mq, PGPROC **ptr, BackgroundWorkerHandle *handle)
{
	bool		result = false;

	for (;;)
	{
		BgwHandleStatus status;
		pid_t		pid;

		/* Acquire the lock just long enough to check the pointer. */
		SpinLockAcquire(&mq->mq_mutex);
		result = (*ptr != NULL);
		SpinLockRelease(&mq->mq_mutex);

		/* Fail if detached; else succeed if initialized. */
		if (mq->mq_detached)
		{
			result = false;
			break;
		}
		if (result)
			break;

		if (handle != NULL)
		{
			/* Check for unexpected worker death. */
			status = GetBackgroundWorkerPid(handle, &pid);
			if (status != BGWH_STARTED && status != BGWH_NOT_YET_STARTED)
			{
				result = false;
				break;
			}
		}

		/* Wait to be signaled. */
		(void) WaitLatch(MyLatch, WL_LATCH_SET | WL_EXIT_ON_PM_DEATH, 0,
						 WAIT_EVENT_MESSAGE_QUEUE_INTERNAL);

		/* Reset the latch so we don't spin. */
		ResetLatch(MyLatch);

		/* An interrupt may have occurred while we were waiting. */
		CHECK_FOR_INTERRUPTS();
	}

	return result;
}

/*
 * Increment the number of bytes read.
 */
static void
shm_mq_inc_bytes_read(shm_mq *mq, Size n)
{
	PGPROC	   *sender;

	/*
	 * Separate prior reads of mq_ring from the increment of mq_bytes_read
	 * which follows.  This pairs with the full barrier in
	 * shm_mq_send_bytes().  We only need a read barrier here because the
	 * increment of mq_bytes_read is actually a read followed by a dependent
	 * write.
	 */
	pg_read_barrier();

	/*
	 * There's no need to use pg_atomic_fetch_add_u64 here, because nobody
	 * else can be changing this value.  This method should be cheaper.
	 */
	pg_atomic_write_u64(&mq->mq_bytes_read,
						pg_atomic_read_u64(&mq->mq_bytes_read) + n);

	/*
	 * We shouldn't have any bytes to read without a sender, so we can read
	 * mq_sender here without a lock.  Once it's initialized, it can't
	 * change.
	 */
	sender = mq->mq_sender;
	Assert(sender != NULL);
	SetLatch(&sender->procLatch);
}

/*
 * Increment the number of bytes written.
 */
static void
shm_mq_inc_bytes_written(shm_mq *mq, Size n)
{
	/*
	 * Separate prior writes to mq_ring from the write of mq_bytes_written
	 * which we're about to do.  Pairs with the read barrier found in
	 * shm_mq_receive_bytes.
	 */
	pg_write_barrier();

	/*
	 * There's no need to use pg_atomic_fetch_add_u64 here, because nobody
	 * else can be changing this value.  This method avoids taking the bus
	 * lock unnecessarily.
	 */
	pg_atomic_write_u64(&mq->mq_bytes_written,
						pg_atomic_read_u64(&mq->mq_bytes_written) + n);
}

/* Shim for on_dsm_detach callback. */
static void
shm_mq_detach_callback(dsm_segment *seg, Datum arg)
{
	shm_mq	   *mq = (shm_mq *) DatumGetPointer(arg);

	shm_mq_detach_internal(mq);
}