Using regex with like clause with Sybase DB - sybase

When I run the following query I get 1 row:
SELECT * FROM servers WHERE Node='abc_deeh32q6610007'
However, when I run the following query, 0 rows are selected:
SELECT * FROM servers WHERE Node LIKE '%_deeh32q6610007'
I thought it may be because of the _ but the same pattern is seen when I use the following queries:
SELECT * FROM alerts WHERE TicketNumber like '%979415' --> returns 0 rows
SELECT * FROM alerts WHERE TicketNumber='IN979415' --> returns 1 row
I am using Sybase DB.

This kind of error should not appear in a healthy database.
First check whether the characters are correct and you are using the correct % character code. Write a script in plain text and check it with isql using the -i option, run directly from the command line.
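For example, a minimal check script, saved as plain text and run with something like isql -Uyour_login -SYOUR_SERVER -i check_like.sql (login, server and file names are placeholders), could simply repeat the exact match and the LIKE variants so the results can be compared:
-- check_like.sql: compare the exact match against LIKE patterns
SELECT * FROM servers WHERE Node = 'abc_deeh32q6610007'
go
-- _ is a single-character wildcard inside a LIKE pattern
SELECT * FROM servers WHERE Node LIKE '%_deeh32q6610007'
go
-- [_] should match a literal underscore instead
SELECT * FROM servers WHERE Node LIKE '%[_]deeh32q6610007'
go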
If that doesn't help and your problem persists, then you probably have problems with the physical structures of the database:
Check if you have properly configured the sort order in the database: you can reload the character set order using the charset tool.
Check that there are no errors in the database structure: run dbcc checkdb and dbcc checkalloc to look for physical errors in the data (a minimal example follows this list).
Check that there are no errors in the database error log. All physical errors observed by the database should be logged there.
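A minimal sketch of those consistency checks (the database name is a placeholder; run from isql with sufficient privileges):
use master
go
dbcc checkdb(your_database)
go
dbcc checkalloc(your_database)
go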
If that doesn't help, try to reproduce the same problem in another table with a copy of the data, then on another server with the same configuration. Try to narrow down the problem.

Related

SSIS Deadlock with a Slowly Changing Dimension

I am running an SSIS package that contains many (7) reads from a single flat file uploaded from an external source. There is consistently a deadlock in every environment (Test, Pre-Production, and Production) on one of the data flows that uses a Slowly Changing Dimension to update an existing SQL table with both new and changed rows.
I have three groups coming off the SCD:
-Inferred Member Updates Output goes directly to an OLE DB Update command.
-Historical Attribute goes to a derived column box that sets a delete date and then goes to an OLE DB update command, then to a union box where it unions with the last group, New Output.
-New Output goes into a union box along with the Historical output then to a derived column box that adds an update/create date, then inserts the values into the same SQL table as the Inferred Member Output DB Command.
The only error I am getting in my log looks like this:
"Transaction (Process ID 170) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."
I could put the (NOLOCK) statement into the OLE db commands, but I have read that this isn't the way to go.
I am using SQL Server 2012 Data Tools to investigate and edit the Package, but I am unsure where to go from here to find the issue.
I want to put out there that I am a novice in terms of SSIS programming... with that out of the way... Any help would be greatly appreciated, even if it is just pointing me to a place I haven't looked for help.
Adding an index on the column used in the WHERE condition may resolve your issue. After adding the index, transactions will execute faster, which reduces the chance of deadlock.
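For instance (a sketch only, since the actual table is not shown in the question; the dimension table and key column names below are hypothetical):
-- replace with your dimension table and the column referenced in the WHERE clause
CREATE NONCLUSTERED INDEX IX_YourDimension_BusinessKey
    ON dbo.YourDimension (BusinessKey);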

Finding the total number of tables in a union query

I am writing code which supports different versions of Sybase ASE. I am using union queries, and the problem is that different versions of Sybase ASE support different numbers of tables in a union query. The union query is dynamic and is formed depending on the number of databases present on the server.
Is there any way in which I can find the max number of tables supported by a particular Sybase ASE? The only solution that I know of right now is to fetch the version using a query, pick the version number out of the result, and set the number accordingly in the code. But this is not a very good solution. I tried checking whether there are any tables which store this value, but nothing came up. Can anyone suggest a solution for this?
Since that's my SAP response you've re-posted here, I'll add some more notes ...
that was a proof of concept that answered the basic question of how to get the info via T-SQL; it was assumed anyone actually looking to implement the solution would (eventually) get around to addressing the various issues re: overhead/maintenance, eg ...
setting a tracefile is going to require permissions to do it; which permissions depends on whether or not you've got granular permissions enabled (see the notes for the 'set tracefile' command in the Reference manual); you'll need to decide if/how you want to grant the permissions to other users
while it's true you cannot re-use the tracefile, you can create a proxy table for the directory where the tracefile exists, then 'delete' the tracefile from the directory, eg:
create proxy_table tracedir external directory at '/tmp'
go
delete tracedir where filename = 'my_serverlimits'
go
if you could have multiple copies of the proxy table solution running at the same time then you'll obviously (?) need to make sure you generate a unique tracefile name for each session; while you could do this by appending ##spid to the file name, you could also add the login name (suser_name()), the kpid (select KPID from master..monProcess where SPID = ##spid), etc; you'll also want to make sure such a file doesn't exist before trying to create it (eg, delete tracedir where filename = '.....'; set tracefile ...); a sketch of this follows the list
your error (when selecting from the proxy table) appears to be related to your client application running in transaction isolation level 0 (which, by default, requires a unique index on the table ... not something you're going to accomplish against a proxy table pointing to an OS file); try setting your isolation level to 1, or use a client application that doesn't default to isolation level 0 (eg, that example runs fine with the basic isql command line tool)
if this solution were to be productionalized then you'll probably want to get a separate filesystem allocated so that any 'run away' tracing sessions don't fill up an important filesystem (eg, /var, /tmp, $SYBASE, etc)
also from a production/security perspective, I'd probably want to investigate the possibility of encapsulating a lot of the details in a DBA/system proc (created to execute under the permissions of the creator) so as to ensure developers can't create tracefiles in the 'wrong' directories ... and on and on and on re: control/security ...
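Following up on the unique-tracefile-name note, a minimal sketch (assuming the MDA table master..monProcess is enabled; the /tmp path and name prefix are placeholders):
-- build a per-session tracefile name from the login, spid and kpid
declare @fname varchar(255)
select @fname = '/tmp/serverlimits_' + suser_name() + '_'
              + convert(varchar(10), @@spid) + '_'
              + convert(varchar(10), KPID)
  from master..monProcess
 where SPID = @@spid
-- delete any leftover file of that name via the proxy directory table, then
-- splice @fname into the 'set tracefile' command (eg, via execute())
select @fname
go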
Then again ...
If you're going to be doing this a LOT ... and you're only interested in the max number of tables in a (union) query, then it'd probably be much easier to just build a static if/then/else (or case) expression that matches your ASE version with the few possible numbers (see RobV's post).
Let's face it, how often are you really, Really, REALLY going to be building a query with more than, say, 100 tables, let alone 500, 1000, or more? [You really don't want to deal with trying to tune such a monster!! YIKES] Realistically speaking, I can't see any reason why you'd want to productionalize the proxy table solution just to access a single row from dbcc serverlimits when you could just implement a hard limit (eg, max of 100 tables).
And the more I think about it, as a DBA I'm going to do whatever I can to make sure your application can't create some monster, multi-hundred table query that ends up bogging down my dataserver simply because the developer couldn't come up with a more efficient solution. [And heaven forbid this type of application gets rolled out to the general user community, ie, I'd have to deal with dozens/hundreds of copies of this monster running in my dataserver?!?!?!]
You can get such limits by running 'dbcc serverlimits' (enable traceflag 3604 first).
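For example (a minimal sketch; with traceflag 3604 the dbcc output comes back to your client session):
dbcc traceon(3604)
go
dbcc serverlimits
go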
Up until version 15.7, the maximum was 256.
In 16.0, this was raised to 512.
In 16.0 SP01, this was raised again to 1023.
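If you do hard-code the mapping as suggested above, a minimal sketch (assuming the release can be picked out of @@version with LIKE; the patterns are illustrative and later SPs would need their own branches) could be:
declare @max_union_tables int
select @max_union_tables =
       case
           when @@version like '%16.0 SP01%' then 1023
           when @@version like '%16.0%'      then 512
           else 256   -- 15.7 and earlier
       end
select @max_union_tables
go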
I suggest you open a case/ticket with SAP support to find out whether there are any system tables that store this information. If there are none, I would implement the tedious solution you mentioned and monitor the ASE 15.7 logs for the following error:
CR 805525 -- If you exceed the number of tables in a UNION query you can get a signal 11 in ord_getrowbounds instead of an error message.
This is the answer that I got from the SAP community
-- enable trace file for your spid
set tracefile '/tmp/my_serverlimits' for ##spid
go
-- dump dbcc serverlimits output to your tracefile
dbcc serverlimits
go
-- turn off tracing
set tracefile off for ##spid
go
-- enable external file access:
sp_configure 'enable file access',1
go
-- create proxy table pointing at the trace file
create proxy_table dbcc_serverlimits external file at '/tmp/my_serverlimits'
go
-- find our column name ('record' of type varchar(255) in this case)
sp_help dbcc_serverlimits
go
-- extract the desired row; store the 'record' value in a #variable
-- and parse for the desired info ...
select * from dbcc_serverlimits where lower(record) like '%union%'
go
record
------------------------------------------------------------------------
Max number of user tables overall in a statement using UNIONs : 512
There are some problems with this approach though. The first issue is setting the trace file. I am going to use this code mostly daily, and in Sybase I think we can't delete or overwrite a trace file. The second is regarding the proxy table. The proxy table will have to be deleted, but this can be taken care of with the following code:
if exists (select 1 from sysobjects
           where type = 'U' and name = 'dbcc_serverlimits')
begin
    drop table dbcc_serverlimits
end
go
The final problem comes when a select query is made against the dbcc_serverlimits table. It throws the following error:
Could not execute statement. The optimizer could not find a unique
index which it could use to scan table 'dbo.dbcc_serverlimits' for
cursor 'jconnect_implicit_26'. SQLCODE=311 Server=************,
Severity Level=16, State=2, Transaction State=1, Line=1 Line 24
select * from dbcc_serverlimits
All of these commands will have to be written into a procedure (that is what I am thinking). Is there a more elegant solution?
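A rough skeleton of such a wrapper procedure, just to show the shape (the name and the commented steps are placeholders; the permission and unique-tracefile issues from the notes above still apply):
create procedure sp_union_table_limit
as
begin
    -- 1. set tracefile '<unique per-session name>' for <this spid>
    -- 2. dbcc serverlimits            -- output lands in the tracefile
    -- 3. set tracefile off for <this spid>
    -- 4. create (or re-point) the proxy table over the tracefile
    -- 5. select record from dbcc_serverlimits where lower(record) like '%union%'
    -- 6. parse the number out of 'record' and return it
    return 0
end
go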

Why do external tables not work in Netezza?

I am working with the Netezza Emulator on Server A. I am having issues running queries on external tables.
I have a text file named test.txt on Server B. I have the Netezza ODBC connector with the following version parameters:
Driver version : 'Release 7.2.0.0 [Build 40845]'
NPS version : '07.02.0001 Release 7.2.1.0 [Build 46322]'
Database : <sanitized>
When I attempt to run this query on Server B:
CREATE EXTERNAL TABLE testtable ( COL1 INTEGER ) USING ( DATAOBJECT('/var/tmp/test.txt') DELIMITER 30 NULLVALUE 'N' ESCAPECHAR '\' TIMESTYLE '24HOUR' BOOLSTYLE 'T_F' CTRLCHARS TRUE LOGDIR '/data/data/HAGDEMO/temp/' Y2BASE 2000 ENCODING 'INTERNAL' REMOTESOURCE 'ODBC' );
The response comes back every time.
However, if I perform the query:
SELECT * FROM testtable;
It works 50% of the time. The first 50% is normal. The other 50% results in a hang. No error, no response, not even a return cursor. Just a hang.
While tracking the pg.log file, I see no errors or anything that would show a problem. It acknowledges the query and continues on its day as if it's time for a beer.
Is there anything I should be working on? This is with the initial admin login, so I know all permissions are there.
What am I missing?
Thanks
UPDATE #1:
When running the queries, the query does appear in the session manager as normal, then hangs. When I upgrade the query to critical status, it executes immediately. What is the reason for this? I don't want to have to manually update priorities over odbc every time. Thanks.
Your problem is that by default External Tables are expected to be visible from the Host (Server A).
Unless /var/tmp/test.txt is visible from the Host, it won't work without specifying the RemoteSource ODBC option, and even then, the client submitting the select SQL must also be running on Server B (i.e. it has to be able to see /var/tmp/test.txt too).
To test/prove this, move the file to /var/tmp on Server A and try again. Ta-da! You're welcome.....
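In other words, once the file lives on the Host, a stripped-down version of the DDL from the question (REMOTESOURCE omitted, the other options as in the original) should be enough; this is a sketch, not something tested against your emulator:
CREATE EXTERNAL TABLE testtable ( COL1 INTEGER )
USING ( DATAOBJECT('/var/tmp/test.txt') DELIMITER 30 NULLVALUE 'N' ESCAPECHAR '\' LOGDIR '/data/data/HAGDEMO/temp/' );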

How do I capture the data passed in SqlBulkCopy using the Sql Profiler?

I am using Sql Profiler all the time to capture the SQL statements and rerun problematic ones. Very useful.
However, some code uses the SqlBulkCopy API and I have no idea how to capture those. I see creation of temp tables, but nothing that populates them. Seems like SqlBulkCopy bypasses Sql Profiler or I do not capture the right events.
Capturing event info for bulk insert operations (BCP.EXE, SqlBulkCopy, and I assume BULK INSERT and OPENROWSET(BULK ...)) is possible, but you won't be able to see the individual rows and columns.
Bulk Insert operations show up as a single (well, one per batch, and the default is to do all rows in a single batch) DML statement of:
INSERT BULK <destination_table_name> (
<column1_name> <column1_datatype> [ COLLATE <column1_collation> ], ...
) [ WITH (<1 or more hints>) ]
<hints> := KEEP_NULLS, TABLOCK, ORDER(...), ROWS_PER_BATCH=, etc
You can find the full list of "hints" on the MSDN page for the BCP Utility. Please note that SqlBulkCopy only supports a subset of those hints (e.g. KEEP_NULLS, TABLOCK, and a few others) but does not support ORDER(...) or ROWS_PER_BATCH= ** (which is quite unfortunate, actually, as the ORDER() hint is needed to avoid a sort that happens in tempdb so that the operation can be minimally logged, assuming the other conditions for such an operation have also been satisfied).
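For instance, a captured statement from a SqlBulkCopy load might look something like this (the table, columns, and collation are hypothetical):
INSERT BULK dbo.TargetTable (
    Id INT, Name VARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AS, CreatedOn DATETIME
) WITH (TABLOCK, KEEP_NULLS)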
In order to see this statement, you need to capture any of the following events in SQL Server Profiler:
SQL:BatchStarting
SQL:BatchCompleted
SQL:StmtStarting
SQL:StmtCompleted
You will also want to select, at least, the following columns (in SQL Server Profiler):
TextData
CPU
Reads
Writes
Duration
SPID
StartTime
EndTime
RowCounts
And, since a user cannot submit an INSERT BULK statement directly, you can probably filter on that in Column Filters if you merely want to see these events and nothing else.
If you want to see the official notification that a BULK INSERT operation is beginning and/or ending, then you need to capture the following event:
SQLTransaction
and then add the following Profiler columns:
EventSubClass
ObjectName
For ObjectName you will always get events showing "BULK INSERT" and whether that is beginning or ending is determined by the value in EventSubClass, which is either "0 - Begin" or "1 - Commit" (and I suppose if it fails you should see "2 - Rollback").
If the ORDER() hint was not specified (and again, it cannot be specified when using SqlBulkCopy), then you will also get a "SQLTransaction" event showing "sort_init" in the ObjectName column. This event also has "0 - Begin" and "1 - Commit" events (as shown in the EventSubClass column).
Finally, even though you cannot see the specific rows, you can still see operations against the Transaction Log (e.g. insert row, modify IAM row, modify PFS row, etc) if you capture the following event:
TransactionLog
and add the following Profiler column:
ObjectID
The main info of interest will be in the EventSubClass column, but unfortunately it is just ID values and I could not find any translation of those values in MSDN documentation. However, I did find the following blog post by Jonathan Kehayias: Using Extended Events in SQL Server Denali CTP1 to Map out the TransactionLog SQL Trace Event EventSubClass Values.
#RBarryYoung pointed out that EventSubClass values and names can be found in the sys.trace_subclass_values catalog view, but querying that view shows that it has no rows for the TransactionLog event:
SELECT * FROM sys.trace_categories -- 12 = Transactions
SELECT * FROM sys.trace_events WHERE category_id = 12 -- 54 = TransactionLog
SELECT * FROM sys.trace_subclass_values WHERE trace_event_id = 54 -- nothing :(
** Please note that the SqlBulkCopy.BatchSize property is equivalent to setting the -b option for BCP.EXE, which is an operational setting that controls how each command will break up the rows into sets. This is not the same as the ROWS_PER_BATCH= hint, which does not physically control how the rows are broken up into sets, but instead allows SQL Server to better plan how it will allocate pages, and hence reduces the number of entries in the Transaction Log (sometimes by quite a bit). Still, my testing showed that:
specifying -b for BCP.EXE did set the ROWS_PER_BATCH= hint to that same value.
specifying the SqlBulkCopy.BatchSize property did not set the ROWS_PER_BATCH= hint, BUT, the benefit of reduced Transaction Log activity was somehow definitely there (magic?). The fact that the net effect is to still gain the benefit is why I did not mention it towards the top when I said that it was unfortunate that the ORDER() hint was not supported by SqlBulkCopy.
You can't capture SqlBulkCopy in SQL Profiler because SqlBulkCopy doesn't generate SQL statements at all when it inserts data into a SQL Server table. SqlBulkCopy works similarly to the bcp utility and loads data through the bulk-load interface rather than by sending individual INSERT statements. It can even ignore FKs and triggers when inserting the rows!

There is insufficient system memory in resource pool 'default' to run this query

I'm getting this error:
There is insufficient system memory in resource pool 'default' to run this query.
I'm just running 100,000 simple insert statements as shown below. I got the error at approximately the 85,000th insert.
This is a demo for a class I'm taking...
use sampleautogrow
INSERT INTO SampleData VALUES ('fazgypvlhl2svnh1t5di','8l8hzn95y5v20nlmoyzpq17v68chfjh9tbj496t4',1)
INSERT INTO SampleData VALUES ('31t7phmjs7rcwi7d3ctg','852wm0l8zvd7k5vuemo16e67ydk9cq6rzp0f0sbs',2)
INSERT INTO SampleData VALUES ('w3dtv4wsm3ho9l3073o1','udn28w25dogxb9ttwyqeieuz6almxg53a1ki72dq',1)
INSERT INTO SampleData VALUES ('23u5uod07zilskyuhd7d','dopw0c76z7h1mu4p1hrfe8d7ei1z2rpwsffvk3pi',3)
Thanks In Advance,
Jim M
Update: I just noticed something very interesting. I created another database and forgot to create the SampleData table. I ran the query to add the 100,000 rows, and it got the out-of-memory error before it even complained that the table didn't exist. Thus, I'm guessing it is running out of memory just trying to "read in" my 100,000 lines?
You have 100,000 insert statements in a single batch request? Your server needs more RAM just to parse the request. Buy more RAM, upgrade to x64, or reduce the size of the single batches sent to the server, i.e. sprinkle a GO every now and then in the .sql file.
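For example, reusing the inserts from the question (here a GO after every two rows just to illustrate; in practice every few thousand rows is plenty):
INSERT INTO SampleData VALUES ('fazgypvlhl2svnh1t5di','8l8hzn95y5v20nlmoyzpq17v68chfjh9tbj496t4',1)
INSERT INTO SampleData VALUES ('31t7phmjs7rcwi7d3ctg','852wm0l8zvd7k5vuemo16e67ydk9cq6rzp0f0sbs',2)
GO
INSERT INTO SampleData VALUES ('w3dtv4wsm3ho9l3073o1','udn28w25dogxb9ttwyqeieuz6almxg53a1ki72dq',1)
INSERT INTO SampleData VALUES ('23u5uod07zilskyuhd7d','dopw0c76z7h1mu4p1hrfe8d7ei1z2rpwsffvk3pi',3)
GO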
You can try the SQLServer Connection Tools application. It has a feature called Massive Sql Runner which executes every command one by one. With this feature very little memory will be used to execute the script commands and you will no longer have the problem.
SQL Server Connection Tools
