postgresql/src/interfaces/libpq/fe-misc.c


/*-------------------------------------------------------------------------
*
* FILE
* fe-misc.c
*
* DESCRIPTION
* miscellaneous useful functions
*
* The communication routines here are analogous to the ones in
* backend/libpq/pqcomm.c and backend/libpq/pqcomprim.c, but operate
* in the considerably different environment of the frontend libpq.
* In particular, we work with a bare nonblock-mode socket, rather than
* a stdio stream, so that we can avoid unwanted blocking of the application.
*
* XXX: MOVE DEBUG PRINTOUT TO HIGHER LEVEL. As is, block and restart
* will cause repeat printouts.
*
* We must speak the same transmitted data representations as the backend
* routines. Note that this module supports *only* network byte order
* for transmitted ints, whereas the backend modules (as of this writing)
* still handle either network or little-endian byte order.
*
* Portions Copyright (c) 1996-2001, PostgreSQL Global Development Group
* Portions Copyright (c) 1994, Regents of the University of California
*
*
* IDENTIFICATION
* $Header: /cvsroot/pgsql/src/interfaces/libpq/fe-misc.c,v 1.73 2002/06/14 04:23:17 momjian Exp $
*
*-------------------------------------------------------------------------
*/
#include "postgres_fe.h"
#include <errno.h>
#include <signal.h>
#include <time.h>
#ifdef WIN32
#include "win32.h"
#else
#include <unistd.h>
#include <sys/time.h>
#endif
#ifdef HAVE_SYS_SELECT_H
#include <sys/select.h>
#endif
#include "libpq-fe.h"
#include "libpq-int.h"
#include "pqsignal.h"
#ifdef MULTIBYTE
#include "mb/pg_wchar.h"
#endif
extern void secure_close(PGconn *);
extern ssize_t secure_read(PGconn *, void *, size_t);
extern ssize_t secure_write(PGconn *, const void *, size_t);
#define DONOTICE(conn,message) \
((*(conn)->noticeHook) ((conn)->noticeArg, (message)))
static int pqPutBytes(const char *s, size_t nbytes, PGconn *conn);
/*
* pqGetc:
* get a character from the connection
*
* All these routines return 0 on success, EOF on error.
* Note that for the Get routines, EOF only means there is not enough
* data in the buffer, not that there is necessarily a hard error.
*/
int
pqGetc(char *result, PGconn *conn)
{
if (conn->inCursor >= conn->inEnd)
return EOF;
*result = conn->inBuffer[conn->inCursor++];
if (conn->Pfdebug)
fprintf(conn->Pfdebug, "From backend> %c\n", *result);
return 0;
}
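/*
 * Illustrative sketch (not part of the original file): the "EOF means not
 * enough data yet" convention above lets higher-level parsing code read
 * speculatively and retry once more input arrives.  The surrounding logic
 * here is hypothetical, but the inCursor/inStart handling follows the
 * convention these routines assume:
 *
 *		char	id;
 *
 *		conn->inCursor = conn->inStart;		(start of unconsumed input)
 *		if (pqGetc(&id, conn) == EOF)
 *			return;							(incomplete: call pqReadData and retry)
 *		...parse the rest of the message with pqGetInt, pqGetnchar, etc...
 *		conn->inStart = conn->inCursor;		(message fully consumed)
 */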
/*
* write 1 char to the connection
*/
int
pqPutc(char c, PGconn *conn)
{
if (pqPutBytes(&c, 1, conn) == EOF)
return EOF;
if (conn->Pfdebug)
fprintf(conn->Pfdebug, "To backend> %c\n", c);
return 0;
}
/*
* pqPutBytes: local routine to write N bytes to the connection,
* with buffering
*/
static int
pqPutBytes(const char *s, size_t nbytes, PGconn *conn)
{
/* Strategy to handle blocking and non-blocking connections: Fill
* the output buffer and flush it repeatedly until either all data
* has been sent or is at least queued in the buffer.
*
* For non-blocking connections, grow the buffer if not all data
* fits into it and the buffer can't be sent because the socket
* would block.
*/
while (nbytes)
{
size_t avail, remaining;
/* fill the output buffer */
avail = Max(conn->outBufSize - conn->outCount, 0);
remaining = Min(avail, nbytes);
memcpy(conn->outBuffer + conn->outCount, s, remaining);
conn->outCount += remaining;
s += remaining;
nbytes -= remaining;
/* if the data didn't fit completely into the buffer, try to
* flush the buffer */
if (nbytes)
{
int send_result = pqSendSome(conn);
/* if there were errors, report them */
if (send_result < 0)
return EOF;
/* if not all data could be sent, increase the output
* buffer, put the rest of s into it and return
* successfully. This case will only happen in a
* non-blocking connection
*/
if (send_result > 0)
{
/* try to grow the buffer.
* FIXME: The new size could be chosen more
* intelligently.
*/
size_t buflen = conn->outCount + nbytes;
if (buflen > conn->outBufSize)
{
char * newbuf = realloc(conn->outBuffer, buflen);
if (!newbuf)
{
/* realloc failed. Probably out of memory */
printfPQExpBuffer(&conn->errorMessage,
"cannot allocate memory for output buffer\n");
return EOF;
}
conn->outBuffer = newbuf;
conn->outBufSize = buflen;
}
/* put the data into it */
memcpy(conn->outBuffer + conn->outCount, s, nbytes);
conn->outCount += nbytes;
/* report success. */
return 0;
}
}
/* pqSendSome was able to send all data. Continue with the next
* chunk of s. */
} /* while */
return 0;
}
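/*
 * Illustrative sketch (not part of the original file): callers do not invoke
 * pqPutBytes directly; they compose a message with the exported Put routines
 * and then flush.  Roughly what submitting a query looks like in this
 * generation of the library (error handling abbreviated; pqFlush is defined
 * later in this file):
 *
 *		if (pqPutnchar("Q", 1, conn) ||
 *			pqPuts(query, conn) ||
 *			pqFlush(conn))
 *			return 0;						(could not queue or send the message)
 *
 * On a non-blocking connection pqPutBytes grows outBuffer rather than block,
 * so the Put routines either queue everything or fail outright.
 */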
/*
* pqGets:
* get a null-terminated string from the connection,
* and store it in an expansible PQExpBuffer.
* If we run out of memory, all of the string is still read,
* but the excess characters are silently discarded.
*/
int
pqGets(PQExpBuffer buf, PGconn *conn)
{
/* Copy conn data to locals for faster search loop */
char *inBuffer = conn->inBuffer;
int inCursor = conn->inCursor;
int inEnd = conn->inEnd;
int slen;
while (inCursor < inEnd && inBuffer[inCursor])
inCursor++;
if (inCursor >= inEnd)
return EOF;
slen = inCursor - conn->inCursor;
resetPQExpBuffer(buf);
appendBinaryPQExpBuffer(buf, inBuffer + conn->inCursor, slen);
conn->inCursor = ++inCursor;
if (conn->Pfdebug)
fprintf(conn->Pfdebug, "From backend> \"%s\"\n",
buf->data);
return 0;
}
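/*
 * pqPuts:
 * write a null-terminated string to the connection
 */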
int
pqPuts(const char *s, PGconn *conn)
{
if (pqPutBytes(s, strlen(s) + 1, conn))
return EOF;
if (conn->Pfdebug)
fprintf(conn->Pfdebug, "To backend> %s\n", s);
return 0;
}
/*
* pqGetnchar:
* get a string of exactly len bytes in buffer s, no null termination
*/
int
pqGetnchar(char *s, size_t len, PGconn *conn)
{
if (len < 0 || len > conn->inEnd - conn->inCursor)
return EOF;
memcpy(s, conn->inBuffer + conn->inCursor, len);
/* no terminating null */
conn->inCursor += len;
if (conn->Pfdebug)
fprintf(conn->Pfdebug, "From backend (%lu)> %.*s\n", (unsigned long) len, (int) len, s);
return 0;
}
/*
* pqPutnchar:
* send a string of exactly len bytes, no null termination needed
*/
int
pqPutnchar(const char *s, size_t len, PGconn *conn)
{
if (pqPutBytes(s, len, conn))
return EOF;
if (conn->Pfdebug)
fprintf(conn->Pfdebug, "To backend> %.*s\n", (int) len, s);
return 0;
}
/*
* pqGetInt
* read a 2 or 4 byte integer and convert from network byte order
* to local byte order
*/
int
pqGetInt(int *result, size_t bytes, PGconn *conn)
{
uint16 tmp2;
uint32 tmp4;
char noticeBuf[64];
switch (bytes)
{
case 2:
if (conn->inCursor + 2 > conn->inEnd)
return EOF;
memcpy(&tmp2, conn->inBuffer + conn->inCursor, 2);
conn->inCursor += 2;
*result = (int) ntohs(tmp2);
break;
case 4:
if (conn->inCursor + 4 > conn->inEnd)
return EOF;
memcpy(&tmp4, conn->inBuffer + conn->inCursor, 4);
conn->inCursor += 4;
*result = (int) ntohl(tmp4);
break;
default:
snprintf(noticeBuf, sizeof(noticeBuf),
libpq_gettext("integer of size %lu not supported by pqGetInt\n"),
(unsigned long) bytes);
DONOTICE(conn, noticeBuf);
return EOF;
}
if (conn->Pfdebug)
fprintf(conn->Pfdebug, "From backend (#%lu)> %d\n", (unsigned long) bytes, *result);
return 0;
}
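/*
 * Illustrative sketch (not part of the original file): a caller pulling a
 * 2-byte count and a 4-byte value out of the current message, treating EOF
 * as "not enough data yet" rather than as a hard error:
 *
 *		int		nfields;
 *		int		typid;
 *
 *		if (pqGetInt(&nfields, 2, conn) == EOF)
 *			return EOF;
 *		if (pqGetInt(&typid, 4, conn) == EOF)
 *			return EOF;
 */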
/*
* pqPutInt
* send an integer of 2 or 4 bytes, converting from host byte order
* to network byte order.
*/
int
pqPutInt(int value, size_t bytes, PGconn *conn)
{
uint16 tmp2;
uint32 tmp4;
char noticeBuf[64];
switch (bytes)
{
case 2:
tmp2 = htons((uint16) value);
if (pqPutBytes((const char *) &tmp2, 2, conn))
return EOF;
break;
case 4:
tmp4 = htonl((uint32) value);
if (pqPutBytes((const char *) &tmp4, 4, conn))
return EOF;
break;
default:
snprintf(noticeBuf, sizeof(noticeBuf),
libpq_gettext("integer of size %lu not supported by pqPutInt\n"),
(unsigned long) bytes);
DONOTICE(conn, noticeBuf);
return EOF;
}
if (conn->Pfdebug)
fprintf(conn->Pfdebug, "To backend (%lu#)> %d\n", (unsigned long) bytes, value);
return 0;
}
/*
* pqReadReady: is select() saying the file is ready to read?
* Returns -1 on failure, 0 if not ready, 1 if ready.
*/
int
pqReadReady(PGconn *conn)
{
fd_set input_mask;
struct timeval timeout;
if (!conn || conn->sock < 0)
return -1;
retry1:
FD_ZERO(&input_mask);
FD_SET(conn->sock, &input_mask);
timeout.tv_sec = 0;
timeout.tv_usec = 0;
if (select(conn->sock + 1, &input_mask, (fd_set *) NULL, (fd_set *) NULL,
&timeout) < 0)
{
if (SOCK_ERRNO == EINTR)
/* Interrupted system call - we'll just try again */
goto retry1;
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("select() failed: %s\n"),
SOCK_STRERROR(SOCK_ERRNO));
return -1;
}
return FD_ISSET(conn->sock, &input_mask) ? 1 : 0;
}
/*
* pqWriteReady: is select() saying the file is ready to write?
* Returns -1 on failure, 0 if not ready, 1 if ready.
*/
int
pqWriteReady(PGconn *conn)
{
fd_set input_mask;
struct timeval timeout;
if (!conn || conn->sock < 0)
return -1;
retry2:
FD_ZERO(&input_mask);
FD_SET(conn->sock, &input_mask);
timeout.tv_sec = 0;
timeout.tv_usec = 0;
if (select(conn->sock + 1, (fd_set *) NULL, &input_mask, (fd_set *) NULL,
&timeout) < 0)
{
if (SOCK_ERRNO == EINTR)
/* Interrupted system call - we'll just try again */
goto retry2;
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("select() failed: %s\n"),
SOCK_STRERROR(SOCK_ERRNO));
return -1;
}
return FD_ISSET(conn->sock, &input_mask) ? 1 : 0;
}
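/*
 * Illustrative sketch (not part of the original file): pqWriteReady is a
 * zero-timeout poll, so a caller on a non-blocking connection typically uses
 * it to decide whether another flush attempt is worthwhile rather than to
 * wait (pqSendSome is defined below):
 *
 *		if (pqWriteReady(conn) == 1)
 *			(void) pqSendSome(conn);		(socket can accept more data now)
 */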
/* ----------
* pqReadData: read more data, if any is available
* Possible return values:
* 1: successfully loaded at least one more byte
* 0: no data is presently available, but no error detected
* -1: error detected (including EOF = connection closure);
* conn->errorMessage set
* NOTE: callers must not assume that pointers or indexes into conn->inBuffer
* remain valid across this call!
* ----------
*/
int
pqReadData(PGconn *conn)
{
int someread = 0;
int nread;
if (conn->sock < 0)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("connection not open\n"));
return -1;
}
/* Left-justify any data in the buffer to make room */
if (conn->inStart < conn->inEnd)
{
if (conn->inStart > 0)
{
memmove(conn->inBuffer, conn->inBuffer + conn->inStart,
conn->inEnd - conn->inStart);
conn->inEnd -= conn->inStart;
conn->inCursor -= conn->inStart;
conn->inStart = 0;
}
}
else
{
/* buffer is logically empty, reset it */
conn->inStart = conn->inCursor = conn->inEnd = 0;
}
/*
* If the buffer is fairly full, enlarge it. We need to be able to
* enlarge the buffer in case a single message exceeds the initial
* buffer size. We enlarge before filling the buffer entirely so as
* to avoid asking the kernel for a partial packet. The magic constant
* here should be large enough for a TCP packet or Unix pipe
* bufferload. 8K is the usual pipe buffer size, so...
*/
if (conn->inBufSize - conn->inEnd < 8192)
{
int newSize = conn->inBufSize * 2;
char *newBuf = (char *) realloc(conn->inBuffer, newSize);
if (newBuf)
{
conn->inBuffer = newBuf;
conn->inBufSize = newSize;
}
}
/* OK, try to read some data */
retry3:
nread = secure_read(conn, conn->inBuffer + conn->inEnd,
conn->inBufSize - conn->inEnd);
if (nread < 0)
{
if (SOCK_ERRNO == EINTR)
goto retry3;
/* Some systems return EAGAIN/EWOULDBLOCK for no data */
#ifdef EAGAIN
if (SOCK_ERRNO == EAGAIN)
return someread;
#endif
#if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))
if (SOCK_ERRNO == EWOULDBLOCK)
return someread;
#endif
/* We might get ECONNRESET here if using TCP and backend died */
#ifdef ECONNRESET
if (SOCK_ERRNO == ECONNRESET)
goto definitelyFailed;
#endif
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("could not receive data from server: %s\n"),
SOCK_STRERROR(SOCK_ERRNO));
return -1;
}
if (nread > 0)
{
conn->inEnd += nread;
/*
* Hack to deal with the fact that some kernels will only give us
* back 1 packet per recv() call, even if we asked for more and
* there is more available. If it looks like we are reading a
* long message, loop back to recv() again immediately, until we
* run out of data or buffer space. Without this, the
* block-and-restart behavior of libpq's higher levels leads to
* O(N^2) performance on long messages.
*
* Since we left-justified the data above, conn->inEnd gives the
* amount of data already read in the current message. We
* consider the message "long" once we have acquired 32k ...
*/
if (conn->inEnd > 32768 &&
(conn->inBufSize - conn->inEnd) >= 8192)
{
someread = 1;
goto retry3;
}
return 1;
}
if (someread)
return 1; /* got a zero read after successful tries */
/*
* A return value of 0 could mean just that no data is now available,
* or it could mean EOF --- that is, the server has closed the
* connection. Since we have the socket in nonblock mode, the only way
* to tell the difference is to see if select() is saying that the
* file is ready. Grumble. Fortunately, we don't expect this path to
* be taken much, since in normal practice we should not be trying to
* read data unless the file selected for reading already.
*/
switch (pqReadReady(conn))
{
case 0:
/* definitely no data available */
return 0;
case 1:
/* ready for read */
break;
default:
goto definitelyFailed;
}
/*
* Still not sure that it's EOF, because some data could have just
* arrived.
*/
retry4:
nread = secure_read(conn, conn->inBuffer + conn->inEnd,
conn->inBufSize - conn->inEnd);
if (nread < 0)
{
if (SOCK_ERRNO == EINTR)
goto retry4;
/* Some systems return EAGAIN/EWOULDBLOCK for no data */
#ifdef EAGAIN
if (SOCK_ERRNO == EAGAIN)
return 0;
#endif
#if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))
if (SOCK_ERRNO == EWOULDBLOCK)
return 0;
#endif
/* We might get ECONNRESET here if using TCP and backend died */
#ifdef ECONNRESET
if (SOCK_ERRNO == ECONNRESET)
goto definitelyFailed;
#endif
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("could not receive data from server: %s\n"),
SOCK_STRERROR(SOCK_ERRNO));
return -1;
}
if (nread > 0)
{
conn->inEnd += nread;
return 1;
}
/*
* OK, we are getting a zero read even though select() says ready.
* This means the connection has been closed. Cope.
*/
definitelyFailed:
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext(
"server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.\n"));
conn->status = CONNECTION_BAD; /* No more connection to backend */
secure_close(conn);
#ifdef WIN32
closesocket(conn->sock);
#else
close(conn->sock);
#endif
conn->sock = -1;
return -1;
}
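/*
 * Illustrative sketch (not part of the original file): a typical caller,
 * such as the code behind PQconsumeInput, maps the three return values
 * above roughly like this:
 *
 *		switch (pqReadData(conn))
 *		{
 *			case 1:					(got at least one more byte)
 *			case 0:					(nothing available right now)
 *				return 1;			(either way, no error)
 *			default:				(-1: error, errorMessage already set)
 *				return 0;
 *		}
 */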
/*
* pqSendSome: send any data waiting in the output buffer.
*
* Return 0 on success, -1 on failure, and 1 when data remains because the
* socket would block and the connection is non-blocking.
*/
int
pqSendSome(PGconn *conn)
{
char *ptr = conn->outBuffer;
int len = conn->outCount;
if (conn->sock < 0)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("connection not open\n"));
return -1;
}
/*
* Don't try to send zero data; this allows callers to use this function
* without worrying about call overhead.
*/
if (len == 0)
return (0);
/* while there's still data to send */
while (len > 0)
{
int sent;
sent = secure_write(conn, ptr, len);
if (sent < 0)
{
/*
* Anything except EAGAIN, EWOULDBLOCK or EINTR is trouble. If it's
* EPIPE or ECONNRESET, assume we've lost the backend
* connection permanently.
*/
switch (SOCK_ERRNO)
{
#ifdef EAGAIN
case EAGAIN:
break;
#endif
#if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))
case EWOULDBLOCK:
break;
#endif
case EINTR:
continue;
case EPIPE:
#ifdef ECONNRESET
case ECONNRESET:
#endif
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext(
"server closed the connection unexpectedly\n"
"\tThis probably means the server terminated abnormally\n"
"\tbefore or while processing the request.\n"));
/*
* We used to close the socket here, but that's a bad
* idea since there might be unread data waiting
* (typically, a WARNING message from the backend
* telling us it's committing hara-kiri...). Leave
* the socket open until pqReadData finds no more data
* can be read.
*/
return -1;
default:
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("could not send data to server: %s\n"),
SOCK_STRERROR(SOCK_ERRNO));
/* We don't assume it's a fatal error... */
return -1;
}
}
else
{
ptr += sent;
len -= sent;
}
if (len > 0)
{
/* We didn't send it all, wait till we can send more */
/*
* if the socket is in non-blocking mode we may need to abort
* here and return 1 to indicate that data is still pending.
*/
#ifdef USE_SSL
/* can't do anything for our SSL users yet */
if (conn->ssl == NULL)
{
#endif
if (pqIsnonblocking(conn))
{
/* shift the contents of the buffer */
memmove(conn->outBuffer, ptr, len);
conn->outCount = len;
return 1;
}
#ifdef USE_SSL
}
#endif
if (pqWait(FALSE, TRUE, conn))
return -1;
}
}
conn->outCount = 0;
if (conn->Pfdebug)
fflush(conn->Pfdebug);
return 0;
}
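/*
* Illustrative usage sketch (not part of libpq): one way an application
* driving a non-blocking connection might push queued output.  The helper
* name "drain_output" is invented for this example; the return-value
* handling follows the contract documented above (0 = all sent, -1 = error,
* 1 = data still pending because the socket would block).
*
*	static int
*	drain_output(PGconn *conn)
*	{
*		int		r;
*
*		while ((r = pqSendSome(conn)) == 1)
*		{
*			// Socket would block: wait until it is writable, then retry.
*			if (pqWait(FALSE, TRUE, conn))
*				return -1;
*		}
*		return r;			// 0 on success, -1 on error
*	}
*/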
/*
* pqFlush: send any data waiting in the output buffer
*
* Implemented in terms of pqSendSome to recreate the old behavior, which
* returned 0 if all data was sent and EOF otherwise.  EOF was returned
* regardless of whether the cause was a real error or merely that not all
* data could be sent on a non-blocking socket.
*/
int
pqFlush(PGconn *conn)
{
if (pqSendSome(conn))
{
return EOF;
}
return 0;
}
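/*
* Illustrative usage sketch (not part of libpq): with the old-style
* contract a blocking caller simply tests for EOF, e.g.
*
*	if (pqFlush(conn) == EOF)
*		return 0;			// error text is in conn->errorMessage
*
* A non-blocking caller that must distinguish "would block" from a real
* error should call pqSendSome() instead and check for a return of 1.
*/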
/*
* pqWait: wait until we can read or write the connection socket
*
* We also stop waiting and return if the kernel flags an exception condition
* on the socket. The actual error condition will be detected and reported
* when the caller tries to read or write the socket.
*/
int
pqWait(int forRead, int forWrite, PGconn *conn)
{
fd_set input_mask;
fd_set output_mask;
fd_set except_mask;
if (conn->sock < 0)
{
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("connection not open\n"));
return EOF;
}
if (forRead || forWrite)
{
retry5:
FD_ZERO(&input_mask);
FD_ZERO(&output_mask);
FD_ZERO(&except_mask);
if (forRead)
FD_SET(conn->sock, &input_mask);
if (forWrite)
FD_SET(conn->sock, &output_mask);
FD_SET(conn->sock, &except_mask);
if (select(conn->sock + 1, &input_mask, &output_mask, &except_mask,
(struct timeval *) NULL) < 0)
{
if (SOCK_ERRNO == EINTR)
goto retry5;
printfPQExpBuffer(&conn->errorMessage,
libpq_gettext("select() failed: %s\n"),
SOCK_STRERROR(SOCK_ERRNO));
return EOF;
}
}
return 0;
}
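/*
* Illustrative usage sketch (not part of the code shown here): a caller
* that needs to block until the backend sends something might write
*
*	// block until there is data to read, then retry the read
*	if (pqWait(TRUE, FALSE, conn))
*		return EOF;
*
* pqSendSome() above uses the complementary form, pqWait(FALSE, TRUE, conn),
* to wait for the socket to become writable.
*/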
/*
* A couple of "miscellaneous" multibyte related functions. They used
* to be in fe-print.c but that file is doomed.
*/
#ifdef MULTIBYTE
/*
* returns the byte length of the character beginning at s, using the
* specified encoding.
*/
int
PQmblen(const unsigned char *s, int encoding)
{
return (pg_encoding_mblen(encoding, s));
}
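/*
* Illustrative usage sketch (not part of libpq): PQmblen lets a client walk
* a string one (possibly multibyte) character at a time.  The encoding id
* is assumed here to come from PQenv2encoding() below.
*
*	const unsigned char *p = s;
*	int		encoding = PQenv2encoding();
*
*	while (*p != '\0')
*		p += PQmblen(p, encoding);	// advance one character
*/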
/*
* Get encoding id from environment variable PGCLIENTENCODING.
*/
int
PQenv2encoding(void)
{
char *str;
int encoding = PG_SQL_ASCII;
str = getenv("PGCLIENTENCODING");
if (str && *str != '\0')
encoding = pg_char_to_encoding(str);
return (encoding);
}
#else
/* Provide default definitions in case someone calls these anyway */
int
PQmblen(const unsigned char *s, int encoding)
{
(void) s;
(void) encoding;
return 1;
}
int
PQenv2encoding(void)
{
return 0;
}
#endif /* MULTIBYTE */
#ifdef ENABLE_NLS
char *
libpq_gettext(const char *msgid)
{
static int already_bound = 0;
if (!already_bound)
{
already_bound = 1;
bindtextdomain("libpq", LOCALEDIR);
}
return dgettext("libpq", msgid);
}
#endif /* ENABLE_NLS */