Fix error handling of vacuumdb when running out of fds

When using a high number of jobs, vacuumdb checked only that the
requested number of jobs did not exceed a maximum, which caused
confusing failures when the jobs' connections to Postgres ran the
process out of file descriptors.  This commit changes the error
handling so that the option value is no longer compared with
FD_SETSIZE when parsing it; instead, each file descriptor is checked
against the supported range when the connections for the jobs are
opened, so that the problem is detected at the earliest time possible.
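
For background (an illustrative sketch, not part of the patch):
vacuumdb multiplexes its job connections with select(2), and FD_SET()
has undefined behavior for descriptors at or above FD_SETSIZE, so it
is the socket number, not the job count, that must be range-checked
before a connection can be used.  The pattern, with a hypothetical
helper name:

	#include <sys/select.h>

	/*
	 * Hypothetical helper: add a socket to an fd_set only if select(2)
	 * can handle it; FD_SET() is undefined for fd >= FD_SETSIZE.
	 */
	static int
	add_to_set(int sock, fd_set *set)
	{
		if (sock < 0 || sock >= FD_SETSIZE)
			return -1;			/* caller reports the error and bails out */
		FD_SET(sock, set);
		return 0;
	}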

Also improve the error message to hint at the recommended number of
jobs, using wording suggested by the reviewers of the patch.
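
The hint works because POSIX requires functions that allocate file
descriptors to return the lowest unused one, so by the time the socket
of job number i reaches FD_SETSIZE, i approximates the highest -j
value that can work on the platform.  A small standalone sketch
(illustrative only, not from the patch) of that guarantee:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/select.h>

	int
	main(void)
	{
		/* With 0-2 taken by stdin/stdout/stderr, this prints 3, 4, 5:
		 * each open(2) returns the lowest unused descriptor. */
		for (int i = 0; i < 3; i++)
			printf("fd = %d (FD_SETSIZE = %d)\n",
				   open("/dev/null", O_RDONLY), FD_SETSIZE);
		return 0;
	}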

Reported-by: Andres Freund
Author: Michael Paquier
Reviewed-by: Andres Freund, Álvaro Herrera, Tom Lane
Discussion: https://postgr.es/m/20190818001858.ho3ev4z57fqhs7a5@alap3.anarazel.de
Backpatch-through: 9.5
Michael Paquier 2019-08-26 11:14:28 +09:00
parent 5fc7b1e939
commit 5d76c80373

@@ -200,12 +200,6 @@ main(int argc, char *argv[])
 							progname);
 					exit(1);
 				}
-				if (concurrentCons > FD_SETSIZE - 1)
-				{
-					fprintf(stderr, _("%s: too many parallel jobs requested (maximum: %d)\n"),
-							progname, FD_SETSIZE - 1);
-					exit(1);
-				}
 				break;
 			case 2:
 				maintenance_db = pg_strdup(optarg);
@@ -442,6 +436,20 @@ vacuum_one_database(const char *dbname, vacuumingOptions *vacopts,
 		{
 			conn = connectDatabase(dbname, host, port, username, prompt_password,
 								   progname, echo, false, true);
+
+			/*
+			 * Fail and exit immediately if trying to use a socket in an
+			 * unsupported range.  POSIX requires open(2) to use the lowest
+			 * unused file descriptor and the hint given relies on that.
+			 */
+			if (PQsocket(conn) >= FD_SETSIZE)
+			{
+				fprintf(stderr,
+						_("%s: too many jobs for this platform -- try %d\n"),
+						progname, i);
+				exit(1);
+			}
+
 			init_slot(slots + i, conn);
 		}
 	}