on Linux and would have who knows what unpleasant effects on other platforms.
If you need another include file for Windows, then add it; don't go
messing with the semantics of every other port's include files.
Win32 port is now called 'win32' rather than 'win'
add -lwsock32 on Win32
make gethostname() be only used when kerberos4 is enabled
use /port/getopt.c
new /port/opendir.c routines
disable GUC unix_socket_group on Win32
convert some keywords.c symbols to KEYWORD_P to prevent conflict
create new FCNTL_NONBLOCK macro to turn off socket blocking
create new /include/port.h file that has /port prototypes, move
out of c.h
new /include/port/win32_include dir to hold missing include files
work around ERROR being defined in Win32 includes
only remnant of this failed experiment is that the server will take
SET AUTOCOMMIT TO ON. Still TODO: provide some client-side autocommit
logic in libpq.
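For illustration, a minimal sketch of the remaining server-side behavior
(the exact error text for the rejected form is not quoted here):
SET AUTOCOMMIT TO ON;    -- still accepted; a no-op, since ON is now the only server-side mode
SET AUTOCOMMIT TO OFF;   -- presumably rejected, since server-side autocommit is gone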
the type OID and typmod of the underlying base type. Per discussions
a few weeks ago with Andreas Pflug and others. Note that this behavioral
change affects both old- and new-protocol clients.
This makes no difference for existing uses, but allows SelectSortFunction()
and pred_test_simple_clause() to use indexscans instead of seqscans to
locate entries for a particular operator in pg_amop. Better yet, they can
use the SearchSysCacheList() API to cache the search results.
dropped. The simplest fix for INSERT/UPDATE cases turns out to be for
preptlist.c to insert NULLs of a known-good type (I used INT4) rather
than making them match the deleted column's type. Since the representation
of NULL is actually datatype-independent, this should work fine.
I also re-reverted the patch to disable the use_physical_tlist optimization
in the presence of dropped columns. It still doesn't look worth the
trouble to be smarter, if there are no other bugs to fix.
Added a regression test to catch future problems in this area.
where the table contains dropped columns. If the columns are dropped,
then their types may be gone as well, which causes ExecTypeFromTL() to
fail if the dropped columns appear in a plan node's tlist. This could
be worked around but I don't think the optimization is valuable enough
to be worth the trouble.
detected during buffer dump to be labeled with the buffer location.
For example, if a page LSN is clobbered, we now produce something like
ERROR: XLogFlush: request 2C000000/8468EC8 is not satisfied --- flushed only
to 0/8468EF0
CONTEXT: writing block 0 of relation 428946/566240
whereas before there was no convenient way to find out which page had
been trashed.
dead xlog segments are not considered part of a critical section. It is
not necessary to force a database-wide panic if we get a failure in these
operations. Per recent trouble reports.
handle multiple 'formats' for data I/O. Restructure CommandDest and
DestReceiver stuff one more time (it's finally starting to look a bit
clean though). Code now matches latest 3.0 protocol document as far
as message formats go --- but there is no support for binary I/O yet.
the connection when appropriate.
This checkin also adds the type map for jdbc3; however, it is currently
identical to the jdbc2 mapping.
Modified Files:
jdbc/org/postgresql/core/BaseStatement.java
jdbc/org/postgresql/core/QueryExecutor.java
jdbc/org/postgresql/jdbc3/AbstractJdbc3Connection.java
of Describe on a prepared statement. This was in the original 3.0
protocol proposal, but I took it out for reasons that seemed good at
the time. Put it back per yesterday's pghackers discussion.
Describe would claim that no tuples will be returned. Only affects
SELECTs added to non-SELECT base queries by rewrite rules. If you
want to see the output of such a select, you gotta use 'simple Query'
protocol.
DestReceiver pointers instead of just CommandDest values. The DestReceiver
is made at the point where the destination is selected, rather than
deep inside the executor. This cleans up the original kluge implementation
of tstoreReceiver.c, and makes it easy to support retrieving results
from utility statements inside portals. Thus, you can now do fun things
like Bind and Execute a FETCH or EXPLAIN command, and it'll all work
as expected (e.g., you can Describe the portal, or use Execute's count
parameter to suspend the output partway through). Implementation involves
stuffing the utility command's output into a Tuplestore, which would be
kind of annoying for huge output sets, but should be quite acceptable
for typical uses of utility commands.
the column by table OID and column number, if it's a simple column
reference. Along the way, get rid of reskey/reskeyop fields in Resdoms.
Turns out that representation was not convenient for either the planner
or the executor; we can make the planner deliver exactly what the
executor wants with no more effort.
initdb forced due to change in stored rule representation.
which does the same thing. Perhaps at one time there was a reason to
allow plan nodes to store their result types in different places, but
AFAICT that's been unnecessary for a good while.
avoids 'input buffer overflow' failure on long literals, improves
performance, gives the right answer for line position in functions
containing multiline literals, suppresses annoying compiler warnings,
and generally is so much better I wonder why we didn't do it before.
Per recent discussion on pgsql-general, this is appropriate for spec
compliance, and has the nice side-effect of easing porting from old
pg_dump files that exhibit the 59.999=>60.000 roundoff problem.
implementation limits, do not issue an ERROR; instead issue a NOTICE and use
the max supported value. Per pgsql-general discussion of 28-Apr, this is
needed to allow easy porting from pre-7.3 releases where the limits were
higher.
Unrelated change in same area: accept GLOBAL TEMP/TEMPORARY as a synonym
for TEMPORARY, as per pgsql-hackers discussion of 15-Apr. We previously
rejected it, but that was based on a misreading of the spec --- SQL92's
GLOBAL temp tables are really closer to what we have than their LOCAL ones.
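For example, both of these now create an ordinary temporary table (the
table definitions here are arbitrary):
CREATE TEMPORARY TABLE scratch1 (id int);
CREATE GLOBAL TEMPORARY TABLE scratch2 (id int);  -- GLOBAL now accepted as a synonym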
per report from Olivier Prenant. Also fix off-by-one space calculation
in ReadToc; this would not have hurt us until we had more than 100
dependencies for a single object, but wrong is wrong.
(agollapudi@demandsolutions.com).
Also applied the RefCursor support patch by Nic Ferrier. This patch allows
you to get a result set from a function that returns a refcursor.
For example:
call.registerOutParameter(1, Types.OTHER);
call.execute();
ResultSet rs = (ResultSet) call.getObject(1);
Modified Files:
jdbc/org/postgresql/core/BaseStatement.java
jdbc/org/postgresql/jdbc1/AbstractJdbc1ResultSet.java
jdbc/org/postgresql/jdbc1/AbstractJdbc1Statement.java
jdbc/org/postgresql/jdbc1/Jdbc1CallableStatement.java
jdbc/org/postgresql/jdbc1/Jdbc1PreparedStatement.java
jdbc/org/postgresql/jdbc1/Jdbc1Statement.java
jdbc/org/postgresql/jdbc2/AbstractJdbc2ResultSet.java
jdbc/org/postgresql/jdbc2/Jdbc2CallableStatement.java
jdbc/org/postgresql/jdbc2/Jdbc2PreparedStatement.java
jdbc/org/postgresql/jdbc2/Jdbc2Statement.java
jdbc/org/postgresql/jdbc3/Jdbc3CallableStatement.java
jdbc/org/postgresql/jdbc3/Jdbc3PreparedStatement.java
jdbc/org/postgresql/jdbc3/Jdbc3Statement.java
Added Files:
jdbc/org/postgresql/PGRefCursorResultSet.java
jdbc/org/postgresql/jdbc1/Jdbc1RefCursorResultSet.java
jdbc/org/postgresql/jdbc2/Jdbc2RefCursorResultSet.java
jdbc/org/postgresql/jdbc3/Jdbc3RefCursorResultSet.java
jdbc/org/postgresql/test/jdbc2/RefCursorTest.java
Both plannable queries and utility commands are now always executed
within Portals, which have been revamped so that they can handle the
load (they used to be good only for single SELECT queries). Restructure
code to push command-completion-tag selection logic out of postgres.c,
so that it won't have to be duplicated between simple and extended queries.
initdb forced due to addition of a field to Query nodes.
Without this fix, CVS tip dumps core when running the regression tests
with geqo_threshold = 2. I would think that a similar patch might be
needed in 7.3, but cannot duplicate the failure in that branch --- so
for now, leave well enough alone.
of extended query features in new FE/BE protocol. TransactionCommandContext
is gone (PortalContext replaces it for some purposes), and QueryContext
has taken on a new meaning (MessageContext plays its old role).
that the types of untyped string-literal constants are deduced (ie,
when coerce_type is applied to 'em, that's what the type must be).
Remove the ancient hack of storing the input Param-types array as a
global variable, and put the info into ParseState instead. This touches
a lot of files because of adjustment of routine parameter lists, but
it's really not a large patch. Note: PREPARE statement still insists on
exact specification of parameter types, but that could easily be relaxed
now, if we wanted to do so.
I had inadvertently omitted it while rearranging things to support
length-counted incoming messages. Also, change the parser's API back
to accepting a 'char *' query string instead of 'StringInfo', as the
latter wasn't buying us anything except overhead. (I think when I put
it in I had some notion of making the parser API 8-bit-clean, but
seeing that flex depends on null-terminated input, that's not really
ever gonna happen.)
in all cases, leaked TopMemoryContext memory in others. Make the
interaction between SetClientEncoding and InitializeClientEncoding
cleaner and better documented. I suspect these changes should be
back-patched into 7.3, but will wait on Tatsuo's verification.
for tableID/columnID in RowDescription. (The latter isn't really
implemented yet though --- the backend always sends zeroes, and libpq
just throws away the data.)
initial values and runtime changes in selected parameters. This gets
rid of the need for an initial 'select pg_client_encoding()' query in
libpq, bringing us back to one message transmitted in each direction
for a standard connection startup. To allow server version to be sent
using the same GUC mechanism that handles other parameters, invent the
concept of a never-settable GUC parameter: you can 'show server_version'
but it's not settable by any GUC input source. Create 'lc_collate' and
'lc_ctype' never-settable parameters so that people can find out these
settings without need for pg_controldata. (These side ideas were all
discussed some time ago in pgsql-hackers, but not yet implemented.)
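For example (a sketch of the never-settable behavior described above; the
exact error wording is not quoted here):
SHOW server_version;   -- readable via SHOW like any other parameter
SHOW lc_collate;       -- no need to run pg_controldata for this
SET lc_collate = 'C';  -- rejected: the parameter cannot be changed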
into a UNION that has some type coercions applied to the component
queries, so long as the qual itself does not reference any columns that
have such coercions. Per example from Jonathan Bartlett 24-Apr-03.
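A hedged illustration with hypothetical tables: a qual on a column that is
not coerced can now be pushed into both arms of the UNION, while a qual on
the coerced column still cannot be:
CREATE VIEW v AS
    SELECT id, val::text AS val FROM t1
    UNION ALL
    SELECT id, val FROM t2;
SELECT * FROM v WHERE id = 42;   -- "id = 42" is now pushed down into each arm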
rewritten and the protocol is changed, but most elog calls are still
elog calls. Also, we need to contemplate mechanisms for controlling
all this functionality --- eg, how much stuff should appear in the
postmaster log? And what API should libpq expose for it?
have length words. COPY OUT reimplemented per new protocol: it doesn't
need \. anymore, thank goodness. COPY BINARY to/from frontend works,
at least as far as the backend is concerned --- libpq's PQgetline API
is not up to snuff, and will have to be replaced with something that is
null-safe. libpq uses message length words for performance improvement
(no cycles wasted rescanning long messages), but not yet for error
recovery.
Output \r\n termination on Win32.
Disallow literal carriage return as a data value,
backslash-carriage-return and \r still allowed.
Doc changes already committed.
with variable-width fields. No more truncation of long user names.
Also, libpq can now send its environment-variable-driven SET commands
as part of the startup packet, saving round trips to server.
fixing an order by problem for index metadata results.
Also includes removing some unused code as well as a fix to the toString
method on statement.
Modified Files:
jdbc/org/postgresql/jdbc1/AbstractJdbc1DatabaseMetaData.java
jdbc/org/postgresql/jdbc1/AbstractJdbc1Statement.java
account for NULLs; in hindsight this is obvious since the code for
the MCV-lists case would reduce to this when there are zero entries
in both lists. Per example from Alec Mitchell.
expressions, ARRAY(sub-SELECT) expressions, some array functions.
Polymorphic functions using ANYARRAY/ANYELEMENT argument and return
types. Some regression tests in place, documentation is lacking.
Joe Conway, with some kibitzing from Tom Lane.
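For instance (a hypothetical example, not taken from the patch itself), a
single SQL function can now be written to work with any element type:
CREATE FUNCTION make_array(anyelement, anyelement) RETURNS anyarray
    AS 'SELECT ARRAY[$1, $2]' LANGUAGE sql;
SELECT make_array(1, 2);            -- yields an integer[]
SELECT make_array('a'::text, 'b');  -- yields a text[]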
on UPDATE.
This gets rid of the long-standing annoyance that updating a row
that has foreign keys locks all the referenced rows even if the
foreign key values do not change.
The trick is to perform a check identical to NO ACTION after the UPDATE,
if any, done in the SET DEFAULT case. Since a SET DEFAULT
operation should have moved the referencing rows to a new "home", a
subsequent NO ACTION check can only fail if the column defaults of the
referencing table resulted in the key we actually deleted. Thanks to Stephan.
Jan
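A hedged sketch of the case in question (hypothetical tables): with
ON UPDATE SET DEFAULT, updating the referenced key now sets the referencing
rows to their column defaults and then re-checks, NO ACTION style, that
those defaults still point at an existing key:
CREATE TABLE parent (b int PRIMARY KEY);
CREATE TABLE child (
    a int DEFAULT 1 REFERENCES parent(b) ON UPDATE SET DEFAULT
);
-- After UPDATE parent SET b = 2 WHERE b = 1, child rows referencing 1 are
-- set to their default (1); the follow-up check fails only if no parent
-- row with b = 1 remains.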
find out about it is to read the documentation that tells you how
dangerous it is. Add default_transaction_read_only to documentation;
seems to have been overlooked in patch that added read-only transactions.
Clean up check_guc comparison script, which has been suffering bit rot.
an unneeded memory context and some variables that are not used anymore.
It's pretty trivial and the regression tests pass fine. There's no
change in functionality, only deletion of unused code. I left an empty
function because maybe I'll need it for nested transactions.
Alvaro Herrera
harmless on signed-char machines but would lead to core dump in the
deadlock detection code if char is unsigned. Amazingly, this bug has
been here since 7.1 and yet wasn't reported till now. Thanks to Robert
Bruccoleri for providing the opportunity to track it down.
Possible typo in src/backend/libpq/be-secure.c?
Long Description
In src/backend/libpq/be-secure.c, secure_write():
on SSL_ERROR_WANT_WRITE it calls secure_read() instead of
retrying secure_write(). Maybe this is a typo?
Sergey N. Yatskevich (syatskevich@n21lab.gosniias.msk.ru)
page when it's read in, per pghackers discussion around 17-Feb. Add a
GUC variable zero_damaged_pages that causes the response to be a WARNING
followed by zeroing the page, rather than the normal ERROR; this is per
Hiroshi's suggestion that there needs to be a way to get at the data
in the rest of the table.
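For example, to salvage what remains of a table with a corrupted page
(a sketch; 'damaged_table' is a placeholder, and zeroing destroys the bad
page's contents, so this is a last resort):
SET zero_damaged_pages = on;   -- invalid pages now draw a WARNING and are zeroed
SELECT * FROM damaged_table;   -- reads the rest of the table
SET zero_damaged_pages = off;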
include output that varies depending on the Python build one is
running. Basically, the order of keys in a dictionary is
non-deterministic, and that part of the test fails for me regularly.
I rewrote the test to work around this problem, and include a patch
file with that change and the change to the expected output as well.
Mike Meyer
Example:
test=# \d test
Table "public.test"
Column | Type | Modifiers
--------+---------+-----------
a | integer | not null
Indexes:
"test_pkey" PRIMARY KEY btree (a)
Check Constraints:
"$2" CHECK (a > 1)
Foreign Key Constraints:
"$1" FOREIGN KEY (a) REFERENCES parent(b)
Rules:
myrule AS ON INSERT TO test DO INSTEAD NOTHING
Triggers:
"asdf asdf" AFTER INSERT OR DELETE ON test FOR EACH STATEMENT EXECUTE
PROCEDURE update_pg_pwd_and_pg_group(),
mytrigger AFTER INSERT OR DELETE ON test FOR EACH ROW EXECUTE PROCEDURE
update_pg_pwd_and_pg_group()
I have minimised the double quoting of identifiers as much as I could
easily, and I will submit another patch when I have time to work on it that
will use a 'fmtId' function to determine it exactly.
I think it's a significant improvement in legibility...
Obviously the table example above is slightly degenerate in that not many
tables in production have heaps of (non-constraint) triggers and rules.
Christopher Kings-Lynne
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior. Is changing the tuple store API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To work around this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to lookup the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code with the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to go. In the mean time,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers (a usage sketch follows these notes).
- (unrelated) Cleaned up some SGML markup in sql.sgml, copy.sgml
Neil Conway
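Regarding the NO SCROLL addition noted above, a minimal usage sketch
(hypothetical cursor and table names):
BEGIN;
DECLARE c NO SCROLL CURSOR FOR SELECT * FROM mytable;
FETCH FORWARD 10 FROM c;   -- forward fetches only; backward fetching is not supported
CLOSE c;
COMMIT;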
The first cleans up a couple of minor errors and omissions
and adds tab completion support to more slash commands, e.g.
\dv.
The second is an attempt to add tab completion for schemas
and fully qualified relation names (e.g. public.mytable ).
I think this covers the TODO-item:
"Allow psql to do table completion for SELECT * FROM schema_part and table
completion for SELECT * FROM schema_name."
This happens via union selects querying:
- relation_name in current search path;
- schema_name;
- schema.relation_name
matching the current input string.
E.g.:
SELECT p[TAB]
will produce a list of all appropriate relation names in the current search
path which begin with 'p', and also all schema names which begin with 'p';
\d pub[TAB]
will produce any relation names in the current search path and also
any schema names beginning with 'pub';
\d public.[TAB]
will produce a list of all relations in the schema 'public';
\d public.my[TAB]
produces all relation names beginning with 'my' in schema 'public'.
It seems to work for me; comments, suggestions, particularly regarding
the coding and queries, are very welcome.
Note that table, index, view and sequence relations in the
'pg_catalog' namespace are excluded even though they are in
the current search path. I found not doing this produced annoying behaviour
when expanding names beginning with 'p'. People who work with system
tables a lot may not like this though; I can look for another solution
if necessary.
Ian Barwick