I am working with the Netezza Emulator on Server A and am having issues running queries on external tables.
I have a text file named test.txt on Server B, and the Netezza ODBC connector with the following version parameters:
Driver version : 'Release 7.2.0.0 [Build 40845]'
NPS version : '07.02.0001 Release 7.2.1.0 [Build 46322]'
Database : <sanitized>
When I attempt to run this query on Server B:
CREATE EXTERNAL TABLE testtable ( COL1 INTEGER )
USING (
    DATAOBJECT('/var/tmp/test.txt')
    DELIMITER 30
    NULLVALUE 'N'
    ESCAPECHAR '\'
    TIMESTYLE '24HOUR'
    BOOLSTYLE 'T_F'
    CTRLCHARS TRUE
    LOGDIR '/data/data/HAGDEMO/temp/'
    Y2BASE 2000
    ENCODING 'INTERNAL'
    REMOTESOURCE 'ODBC'
);
The CREATE statement completes successfully every time.
However, if I perform the query:
SELECT * FROM testtable;
It works about 50% of the time. When it works, the results come back normally. The other 50% of the time it hangs: no error, no response, not even a returned cursor. Just a hang.
While tailing the pg.log file, I see no errors or anything else that would indicate a problem. It acknowledges the query and continues on its day as if it's time for a beer.
Is there anything else I should be checking? This is with the initial admin login, so I know all the permissions are there.
What am I missing?
Thanks
UPDATE #1:
When I run the query, it appears in the session manager as normal, then hangs. If I upgrade the query to critical priority, it executes immediately. What is the reason for this? I don't want to have to manually bump priorities over ODBC every time. Thanks.
Your problem is that, by default, external tables are expected to be visible from the Host (Server A).
Unless /var/tmp/test.txt is visible from the Host, it won't work without specifying the REMOTESOURCE 'ODBC' option, and even then, the client submitting the SELECT must also be running on Server B (i.e. it has to be able to see /var/tmp/test.txt too).
To test/prove this, move the file to /var/tmp on Server A and try again. Ta-da! You're welcome.
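A minimal sketch of that host-visible test, reusing the column layout and options from the question; the assumption under test is that once the file lives on the Host, REMOTESOURCE 'ODBC' can be dropped:

DROP TABLE testtable;
CREATE EXTERNAL TABLE testtable ( COL1 INTEGER )
USING (
    DATAOBJECT('/var/tmp/test.txt')   -- now a path on Server A (the Host)
    DELIMITER 30
    NULLVALUE 'N'
    ESCAPECHAR '\'
    TIMESTYLE '24HOUR'
    BOOLSTYLE 'T_F'
    CTRLCHARS TRUE
    LOGDIR '/data/data/HAGDEMO/temp/'
    Y2BASE 2000
    ENCODING 'INTERNAL'
);
SELECT * FROM testtable;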
I have a website, and certain reports on that website are run using SQL via the PSQL ODBC source. Any report where the SQL takes longer than 30 seconds to run seems to time out.
I have tried amending the statement_timeout settings within PostgreSQL and this hasn't helped. I have also attempted to set the statement timeout within the ODBC Data Source Administrator and that did not help either.
Can someone please advise as to why the statement timeouts would only affect the web sessions and nothing else?
The error shown in the web logs and PostgreSQL database logs is below:
16:33:16.06 System.Data.Odbc.OdbcException (0x80131937): ERROR [57014] ERROR: canceling statement due to statement timeout;
I don't think the issue is related to the statement_timeout setting in the PostgreSQL config file itself. The reason I say this is that I don't get the 30-second statement timeout when running queries in PGAdmin; I only get it when running reports on my website. The website connects to the PostgreSQL database via the PSQLODBC driver.
Nonetheless, I did try setting the statement timeout in the PostgreSQL config file to 90 seconds, and this change had no impact on the reports; they still timed out after 30 seconds. (The change was applied, as show statement_timeout; did display the updated value.) Secondly, I tried editing the Connect Settings via the ODBC Data Source Administrator's Data Source Options, setting the statement timeout there with the following command: SET STATEMENT_TIMEOUT TO 90000
The change was applied but again had no impact; the statement still kept timing out.
I have also tried altering the user's statement timeout by running a query similar to the one below:
Alter User "MM" set statement_timeout = 90000;
This again had no impact. The web session logs show the following:
21:36:32.46 Report SQL Failed: ERROR [57014] ERROR: canceling statement due to statement timeout;
Error while executing the query
21:36:32.46 System.Data.Odbc.OdbcException (0x80131937): ERROR [57014] ERROR: canceling statement due to statement timeout;
Error while executing the query
at System.Data.Odbc.OdbcConnection.HandleError(OdbcHandle hrHandle, RetCode retcode)
at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader, Object[] methodArguments, SQL_API odbcApiMethod)
at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader)
at System.Data.Odbc.OdbcCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.Odbc.OdbcCommand.ExecuteReader()
at Path.PathWebPage.SQLtoJSON(String strSQL, Boolean fColumns, Boolean fFields, String strODBC, Boolean fHasX, Boolean fHasY, Boolean fIsChart, String strUser, String strPass)
"I have tried amending the statement_timeout settings within Postgres and this hasn't helped"
Please show what you tried, and what error message (if any) you got when you tried it. Also, after changing the setting, did show statement_timeout; reflect the change you tried to make? What does select source from pg_settings where name ='statement_timeout' show?
In the "stock" versions of PostgreSQL, a user is always able to change their session's setting for statement_timeout. Some custom compilations like those run by hosted providers might block such changes though (hopefully with a suitable error message).
There are several ways this could be getting set to a value that differs from the one in postgresql.conf. It could be set on a per-user or per-database basis using something like alter user "webuser" set statement_timeout=30000 (which takes effect the next time that user logs on, or, for the per-database variant, the next time someone connects to that database). It could be set in the connection string (although I don't know how that works in ODBC). Or, if your app uses a centralized subroutine to establish the connection, that code could simply execute a set statement_timeout=30000 SQL command on the connection before handing it back to the caller. A sketch of these options follows.
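A minimal sketch of checking and setting these, assuming psql access; the role "webuser" and database "reports" are placeholder names:

-- Where is the effective value coming from?
SHOW statement_timeout;
SELECT name, setting, source FROM pg_settings WHERE name = 'statement_timeout';

-- Per-user and per-database overrides
ALTER ROLE "webuser" SET statement_timeout = '90s';
ALTER DATABASE reports SET statement_timeout = '90s';

-- Per-session override, e.g. issued by a connection-setup subroutine
SET statement_timeout = '90s';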
When I run the following query I get 1 row:
SELECT * FROM servers WHERE Node='abc_deeh32q6610007'
However, when I run the following query, 0 rows are selected:
SELECT * FROM servers WHERE Node LIKE '%_deeh32q6610007'
I thought it might be because of the _ wildcard, but the same pattern is seen when I use the following queries:
SELECT * FROM alerts WHERE TicketNumber like '%979415' --> returns 0 rows
SELECT * FROM alerts WHERE TicketNumber='IN979415' --> returns 1 row
I am using Sybase DB.
This kind of error should not appear in a healthy database.
First check that the characters are correct and that you are using the correct character code for %. Write the query in a plain-text script and check it with isql, using the -i option to run it directly from the command line, as in the sketch below.
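A minimal sketch of such a plain-text test script (the login and file name are placeholders; in Sybase ASE the bracketed [_] matches a literal underscore):

-- like_test.sql: run with  isql -U sa -P <password> -i like_test.sql
SELECT * FROM servers WHERE Node LIKE '%deeh32q6610007'
go
SELECT * FROM servers WHERE Node LIKE '%[_]deeh32q6610007'
go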
If that doesn't help and the problem persists, then you probably have a problem with the physical structures of the database:
Check that the sort order is properly configured in the database: you can reload the character set and sort order using the charset utility.
Check that there are no errors in the database structure: run dbcc checkdb and dbcc checkalloc to see whether there are physical errors in the data (see the sketch after this list).
Check that there are no errors in the database error log; all physical errors observed by the database should be logged there.
If that doesn't help, try to reproduce the problem in another table with a copy of the data, and then on another server with the same configuration. Try to narrow the problem down.
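A minimal sketch of the dbcc checks from the second bullet, run from isql (mydb is a placeholder database name):

dbcc checkdb(mydb)
go
dbcc checkalloc(mydb)
go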
I have a small C# application which uses Entity Framework 6 to parse text files into a database structure.
In general, the file content is parsed into 3 tables:
Table1 --(1-n)-- Table2 --(1-n)-- Table3
The application worked for months without any issues in the Dev, Stage and Production environments.
Last week it stopped working on Stage, and now I am trying to figure out why.
One file contains ~100 entries for Table1, ~2,000 entries for Table2 and ~2,000 entries for Table3.
.SaveChanges() is called after each file.
I get the following timeout exception:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated.
AutoDetectChangesEnabled is set to false.
Because there is a 4th table where I execute one update statement after each file, there were transactions around the whole thing; I removed the 4th table and the transaction handling, but the problem persists.
To test whether it's just a performance issue, I set Database.CommandTimeout = 120, without any effect; it still runs into the timeout, now after 2 minutes.
(Before the issue, one file was stored in about 5 seconds, which is absolutely fine.)
If I watch the SQL Server with SQL Server Profiler, I can see the following after .SaveChanges() is called:
[screenshot: SQL Server Profiler trace]
Only the first few INSERT statements for Table3 are shown (always the first 4-15 statements, all of them shortly after .SaveChanges()).
After that: no new entries until the timeout occurs.
I have absolutely no idea what to check, because there is no error or anything like that in the code.
If I look at the SQL Server itself, there is absolutely no reason for it to delay the queries (CPU, memory and disk space are all fine).
I would be glad for any comment on this; if you want more info, please let me know.
Best Regards
Fixed it by rebuilding the fragmented indexes on Table1.
The following article was helpful to understand how to take care of fragmented indexes:
https://solutioncenter.apexsql.com/why-when-and-how-to-rebuild-and-reorganize-sql-server-indexes/
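For anyone landing here later, a minimal sketch of checking fragmentation and then maintaining the indexes; dbo.Table1 stands in for the table from the question, and the 5%/30% thresholds are the commonly cited rules of thumb, not hard limits:

-- Report fragmentation per index on Table1
SELECT i.name, ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Table1'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

-- Reorganize for light fragmentation (5-30%), rebuild for heavy (>30%)
ALTER INDEX ALL ON dbo.Table1 REORGANIZE;
ALTER INDEX ALL ON dbo.Table1 REBUILD;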
(If some mod still thinks this is not a valid answer, an explanation would be great.)
I have a Delphi program (D2010) that accesses a local SQL Server 2005 Express database using the ADO components (TADOConnection and TADOQuery). At program startup I use a correlated sub-query to identify the maximum of a specific field for a range of values. This works well in all our testing.
However, on some customer systems we have seen that if our program is shut down and restarted immediately, it fails when running this subquery with an EOleException, 'The requested properties cannot be supported'. Subsequent restarts of the program repeat the error until the PC is rebooted. In this state, all other database access in the program seems OK; this is the only use of a correlated sub-query in the program.
The correlated sub-query is:
SELECT p1.*
FROM Packs p1
WHERE p1.MachID = :MachID
AND p1.BuildID <= :MaxPosID
AND p1.PackID =
(
SELECT MAX(p2.PackID)
FROM packs p2
WHERE p2.BuildID = p1.BuildID
and p2.MachID = p1.MachID
)
ORDER BY BuildID
The MachID and MaxPosID fields do not change on an individual system, so the query is the same in any run of the program. The only difference with the customer systems is that they may be running with larger databases (typ. 1GB).
I have added some code to iterate over the database connection properties, and seen that on our working systems the 'Subquery Support' property has a value of 31H, which according to
http://msdn.microsoft.com/en-us/library/office/aa140022(v=office.10).aspx means that correlated subqueries are supported.
I assume that when the problem occurs on customer sites the property does not have the same value set for some reason.
One workaround was to open a command prompt, and use sqlcmd to just 'USE (our database name)'. If this command prompt is left open, then our program starts normally. I have no idea how this would affect the running of our program, or the value of the properties returned by the connection object.
Any ideas about why the supported properties change, or why program shutdown/startup should see this change?
I can rewrite the code to replace the correlated subquery with a slower search through the table until I find all the required values, and that would probably not be affected by the problem, but I would like to understand what is happening. Alternatively, an uncorrelated rewrite might sidestep the property check entirely; see the sketch below.
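A sketch of such a rewrite using a derived table instead of the correlated subquery, with the same table and parameters as above (untested against this schema):

SELECT p1.*
FROM Packs p1
JOIN (
    SELECT BuildID, MachID, MAX(PackID) AS MaxPackID
    FROM Packs
    GROUP BY BuildID, MachID
) m
  ON  m.BuildID = p1.BuildID
  AND m.MachID = p1.MachID
  AND m.MaxPackID = p1.PackID
WHERE p1.MachID = :MachID
  AND p1.BuildID <= :MaxPosID
ORDER BY p1.BuildID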
Edit: the connection string is:
Provider=SQLNCLI.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=TQSquality
The connection string is modified at runtime to add 'OLE DB Services = -2', to switch off connection pooling.
The query is executed by:
SetCnx(LastPackIDQry, CnxQuality);
LastPackIDQry.Parameters[0].Value := GetMachNo;
LastPackIDQry.Parameters[1].Value := TQS.PosRange.Last;
QryOpen(LastPackIDQry);
try
  while not LastPackIDQry.Eof do
  begin
    ...process the data
QryOpen is a utility that just calls .Open on the input query, but provides some logging on errors. As I mentioned, the two parameters are fixed for a specific machine, so I cannot believe the problem is with the query; it has to be something to do with the connection or the database.
TIA Ian
I'm getting this error:
There is insufficient system memory in resource pool 'default' to run this query.
I'm just running 100,000 simple insert statements, as shown below. I got the error at approximately the 85,000th insert.
This is a demo for a class I'm taking...
use sampleautogrow
INSERT INTO SampleData VALUES ('fazgypvlhl2svnh1t5di','8l8hzn95y5v20nlmoyzpq17v68chfjh9tbj496t4',1)
INSERT INTO SampleData VALUES ('31t7phmjs7rcwi7d3ctg','852wm0l8zvd7k5vuemo16e67ydk9cq6rzp0f0sbs',2)
INSERT INTO SampleData VALUES ('w3dtv4wsm3ho9l3073o1','udn28w25dogxb9ttwyqeieuz6almxg53a1ki72dq',1)
INSERT INTO SampleData VALUES ('23u5uod07zilskyuhd7d','dopw0c76z7h1mu4p1hrfe8d7ei1z2rpwsffvk3pi',3)
Thanks In Advance,
Jim M
Update: I just noticed something very interesting. I created another database but forgot to create the SampleData table. I ran the query to add the 100,000 rows, and it hit the out-of-memory error before it even complained that the table didn't exist. Thus, I'm guessing it is running out of memory just trying to "read in" my 100,000 lines?
You have 100,000 insert statements in one single batch request? Your server needs more RAM just to parse the request. Buy more RAM, upgrade to x64, or reduce the size of the individual batches sent to the server, i.e. sprinkle a GO every now and then through the .sql file, as sketched below.
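A minimal sketch of the batching, reusing the inserts from the question; the batch size of ~1,000 is an arbitrary choice, since GO ends a batch and lets the server parse and free each one separately:

use sampleautogrow
GO
INSERT INTO SampleData VALUES ('fazgypvlhl2svnh1t5di','8l8hzn95y5v20nlmoyzpq17v68chfjh9tbj496t4',1)
INSERT INTO SampleData VALUES ('31t7phmjs7rcwi7d3ctg','852wm0l8zvd7k5vuemo16e67ydk9cq6rzp0f0sbs',2)
-- ... up to ~1,000 INSERTs per batch ...
GO
INSERT INTO SampleData VALUES ('w3dtv4wsm3ho9l3073o1','udn28w25dogxb9ttwyqeieuz6almxg53a1ki72dq',1)
INSERT INTO SampleData VALUES ('23u5uod07zilskyuhd7d','dopw0c76z7h1mu4p1hrfe8d7ei1z2rpwsffvk3pi',3)
-- ... next batch ...
GO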
You can try the SQL Server Connection Tools application. It has a feature called Massive SQL Runner which executes every command one by one. With this feature very little memory is used to execute the script commands, and you will no longer have the problem.
SQL Server Connection Tools