Different Behavior between SQL Anywhere 9 and 16 - sybase

I'm having trouble figuring out SQL Anywhere 16 behavior compared to SQL Anywhere 9. Both have an identical database set to dirty read:
SET TRANSACTION ISOLATION LEVEL 0
Whether from a Delphi application (using TAsaSession) or through SQL_anywhere_XX, I get the same behavior.
On version 9, from two connections I can run
UPDATE associate SET nm_associate = nm_associate + ' Test' WHERE id_associate = 620
without a COMMIT at the end.
On version 16, the first connection locks the associate table and holds up the second one until a COMMIT runs and frees its way.
I'm not sure whether the isolation level has anything to do with it, or whether there is something else I need to set so I can migrate from 9 to 16 without this problem.
Can anyone help me with that?

Isolation level 0 only affects read operations: it allows you to read data that is part of a not-yet-committed transaction of a different user.
When running an UPDATE, at least level 1 is used for the update statement, irrespective of the isolation level you have set for the session.

Problem solved after setting chained='On' on each connection:
AsaConnection.Connected := True;
vQry.Session := AsaConnection;
// chained mode: statements run in a transaction that must be committed explicitly
vQry.SQL.Text := 'SET OPTION chained=''On''';
vQry.ExecSQL;

Related

AWS RDS MYSQL import db Access Denied

I cannot import a database into AWS RDS because of these commands in my SQL file:
SET @@SESSION.SQL_LOG_BIN= 0;
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';
SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
Are they important? Without them there is no error.
The log_bin_trust_function_creators parameter is set to 1 in a custom parameter group.
FYI: MySQL 5.7 and 8, same error:
ERROR 1227 (42000) at line 20: Access denied; you need (at least one of) the SUPER, SYSTEM_VARIABLES_ADMIN or SESSION_VARIABLES_ADMIN privilege(s) for this operation
SET @@SESSION.SQL_LOG_BIN=0;
This is telling MySQL not to put these INSERT statements into the binary log. If you do not have binary logs enabled (not replicating), then it's not an issue to remove it. As far as I know, there is no way to enable this in RDS; I'm actually trying to figure out a way, which is how I found your question.
SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';
Did you execute a RESET MASTER on the database where the dump originated from? Check here for an explanation of this value: gtid_purged
SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;
This sets the @@SESSION.SQL_LOG_BIN variable back to its original value; earlier in the dump you should see the matching line: SET @MYSQLDUMP_TEMP_LOG_BIN = @@SESSION.SQL_LOG_BIN;
If you're simply recovering this table into a new database that isn't writing to a binary log (for replication), it's safe to remove these lines. Hope this helps!
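If you'd rather not edit the dump by hand, the offending statements can be filtered out programmatically. A minimal sketch in Python (the matched prefixes are exactly the statements from the question; also note that when you control the export, re-running mysqldump with --set-gtid-purged=OFF avoids emitting the GTID line in the first place):

```python
# Statements that RDS users typically cannot execute (no SUPER privilege).
REPLICATION_PREFIXES = (
    "SET @@SESSION.SQL_LOG_BIN",
    "SET @@GLOBAL.GTID_PURGED",
)

def strip_replication_statements(dump_lines):
    """Return the dump with binlog/GTID session statements removed."""
    return [line for line in dump_lines
            if not line.lstrip().startswith(REPLICATION_PREFIXES)]

dump = [
    "SET @@SESSION.SQL_LOG_BIN= 0;",
    "CREATE TABLE t (id INT);",
    "SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';",
    "INSERT INTO t VALUES (1);",
    "SET @@SESSION.SQL_LOG_BIN = @MYSQLDUMP_TEMP_LOG_BIN;",
]
print(strip_replication_statements(dump))
# ['CREATE TABLE t (id INT);', 'INSERT INTO t VALUES (1);']
```

For multi-gigabyte dumps you would stream line by line instead of holding the whole file in a list, but the filter itself is the same.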

What does 'infinite' mean in MS Access query

I am trying to re-make some Access queries in SSMS. I ran across the statement below and I am a bit confused: it uses the term 'Infinite' and I have never seen that in Access. Any help?
Code:
Bal: IIf([Infinite]<1,0,[infinite])
It's referring to a field in the database named "Infinite" (or "infinite").
What it's trying to do is return 0 if [Infinite] is less than 1, and return the actual value if it's not less than 1.
However, if your SQL Server database uses a case-sensitive collation, the statement will fail because Infinite <> infinite.
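For the SSMS re-make, the Access IIf expression maps to a CASE expression (or, on SQL Server 2012+, to the IIF function directly). A quick illustration of the CASE form, using Python's sqlite3 as a stand-in engine with made-up sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Infinite REAL)")
conn.executemany("INSERT INTO t VALUES (?)", [(0.5,), (3.0,)])

# Access:  Bal: IIf([Infinite] < 1, 0, [Infinite])
# Portable SQL equivalent:
rows = conn.execute(
    "SELECT CASE WHEN Infinite < 1 THEN 0 ELSE Infinite END AS Bal FROM t"
).fetchall()
print(rows)  # [(0,), (3.0,)]
```

On SQL Server the same column would be written as IIF([Infinite] < 1, 0, [Infinite]) AS Bal.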

Multi-Threaded VB.net: Update then return row id?

I'm trying to write a multi-threaded program in VB.NET. It's a simple program: go to a database, get a row, do some back-end processing, then update the row and mark it as processed = true. Because there is so much data, I'm planning to multi-thread it.
SELECT ... FOR UPDATE doesn't seem to work in a transaction for some odd reason, so I've decided to pre-emptively mark the row as being read = TRUE and process it from there.
Is it possible to update the row, then retrieve the row ID from the same SQL statement?
I've tried using these SQL statements together:
Dim sqlUpdateStatement As String = "SET @uid := 0;UPDATE process_data SET reading = TRUE, idprocess_data = (SELECT @uid := idcrawl_data) WHERE reading IS NOT TRUE AND processed IS NOT TRUE LIMIT 1;SELECT @uid;"
but it tells me that there was a fatal error encountered during command execution.
Any ideas?
EDIT
After some testing, I've come to the conclusion that you can't use MySQL user variables when performing updates from VB.NET. Is this true? And if so, is there a workaround?
I eventually took the time to debug the SELECT ... FOR UPDATE portion of my code and got it working on a transaction basis. Thanks, everyone, for your time!
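For reference, the claim-then-process pattern the asker was attempting can be sketched without user variables: select one unclaimed row and mark it inside a single transaction, then return its id. This is an illustrative Python/sqlite3 sketch, not the asker's MySQL statement; for real multi-process MySQL use you would still want SELECT ... FOR UPDATE or an equivalent atomic claim:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE process_data ("
             "id INTEGER PRIMARY KEY, "
             "reading INTEGER DEFAULT 0, "
             "processed INTEGER DEFAULT 0)")
conn.execute("INSERT INTO process_data (reading) VALUES (0), (0), (0)")
conn.commit()

def claim_row(conn):
    """Mark one unclaimed row as being read and return its id (or None)."""
    with conn:  # SELECT + UPDATE commit together as one transaction
        row = conn.execute(
            "SELECT id FROM process_data "
            "WHERE reading = 0 AND processed = 0 LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE process_data SET reading = 1 WHERE id = ?",
                     (row[0],))
        return row[0]

first, second = claim_row(conn), claim_row(conn)
print(first, second)  # 1 2
```

Each call hands out a different row, so worker threads never process the same record twice, provided the select-and-mark step really is atomic on the database you target.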

SQL Server 2016 R Services: sp_execute_external_script returns 0x80004005 error

I run some R code after querying 100M records and get the following error after the process runs for over 6 hours:
Msg 39004, Level 16, State 19, Line 300
A 'R' script error occurred during execution of 'sp_execute_external_script'
with HRESULT 0x80004005.
HRESULT 0x80004005 appears to be associated in Windows with Connectivity, Permissions or an "Unspecified" error.
I know from logging in my R code that the process never reaches the R script at all. I also know that the entire procedure completes after 4 minutes on a smaller number of records, for example, 1M. This leads me to believe that this is a scaling problem or some issue with the data, rather than a bug in my R code. I have not included the R code or the full query for proprietary reasons.
However, I would expect a disk or memory error to display a 0x80004004 Out of memory error if that were the case.
One clue I noticed in the SQL ERRORLOG is the following:
SQL Server received abort message and abort execution for major error : 18
and minor error : 42
However the time of this log line does not coincide with the interruption of the process, although it does occur after it started. Unfortunately, there is precious little on the web about "major error 18".
A SQL Trace when running from SSMS shows the client logging in and logging out every 6 minutes or so, but I can only assume this is normal keepalive behaviour.
The sanitized sp_execute_external_script call:
EXEC sp_execute_external_script
    @language = N'R'
    , @script = N'#We never get here
#returns name of output data file'
    , @input_data_1 = N'SELECT TOP 100000000 * FROM DATA'
    , @input_data_1_name = N'x'
    , @output_data_1_name = N'output_file_df'
WITH RESULT SETS ((output_file varchar(100) NOT NULL))
Server Specs:
8 cores
256 GB RAM
SQL Server 2016 CTP 3
Any ideas, suggestions or debugging hints would be greatly appreciated!
UPDATE:
I set TRACE_LEVEL=3 in rlauncher.config to turn on a higher level of logging and re-ran the process. The log reveals a cleanup process that ran, removing session files, at the very time the whole process failed after 6.5 hours.
[2016-05-30 01:35:34.419][00002070][00001EC4][Info] SQLSatellite_LaunchSatellite(1, A187BC64-C349-410B-861E-BFDC714C8017, 1, 49232, nullptr) completed: 00000000
[2016-05-30 01:35:34.420][00002070][00001EC4][Info] < SQLSatellite_LaunchSatellite, dllmain.cpp, 223
[2016-05-30 08:04:02.443][00002070][00001EC4][Info] > SQLSatellite_LauncherCleanUp, dllmain.cpp, 309
[2016-05-30 08:04:07.443][00002070][00001EC4][Warning] Session A187BC64-C349-410B-861E-BFDC714C8017 cleanup wait failed with 258 and error 0
[2016-05-30 08:04:07.444][00002070][00001EC4][Info] Session(A187BC64-C349-410B-861E-BFDC714C8017) logged 2 output files
[2016-05-30 08:04:07.444][00002070][00001EC4][Warning] TryDeleteSingleFile(C:\PROGRA~1\MICROS~1\MSSQL1~1.MSS\MSSQL\EXTENS~1\MSSQLSERVER06\A187BC64-C349-410B-861E-BFDC714C8017\Rscript1878455a2528) failed with 32
[2016-05-30 08:04:07.445][00002070][00001EC4][Warning] TryDeleteSingleDirectory(C:\PROGRA~1\MICROS~1\MSSQL1~1.MSS\MSSQL\EXTENS~1\MSSQLSERVER06\A187BC64-C349-410B-861E-BFDC714C8017) failed with 32
[2016-05-30 08:04:08.446][00002070][00001EC4][Info] Session A187BC64-C349-410B-861E-BFDC714C8017 removed from MSSQLSERVER06 user
[2016-05-30 08:04:08.447][00002070][00001EC4][Info] SQLSatellite_LauncherCleanUp(A187BC64-C349-410B-861E-BFDC714C8017) completed: 00000000
It appears the only way to allow my long-running process to continue is to:
a) Extend the Job Cleanup wait time to allow the job to finish
b) Disable the Job Cleanup process
I have thus far been unable to find the value that sets the Job Cleanup wait time in the MSSQLLaunchpad service.
While a JOB_CLEANUP_ON_EXIT flag exists in rlauncher.config, setting it to 0 has no effect. The service seems to reset it to 1 when it is restarted.
Again, any suggestions or assistance would be much appreciated!
By default, SQL Server reads all the data into R memory as a data frame before starting execution of the R script. Given that the script works with 1M rows and fails to start with 100M rows, this could well be an out-of-memory error. To resolve memory issues (other than adding memory to the machine or reducing the data size), you can try one of these solutions:
Increase the memory allocation for R process execution via the external resource pool's MAX_MEMORY_PERCENT setting (see sys.resource_governor_external_resource_pools). By default, SQL Server limits R process execution to 20% of memory.
Use streaming execution for the R script instead of loading all the data into memory (the @r_rowsPerRead parameter). Note that streaming can only be used when the output of the R script doesn't depend on reading or looking at the entire set of rows.
The warnings in RLauncher.log about data cleanup happened after the R script execution; they can safely be ignored and are probably not the root cause of the failures you are seeing.
Unable to resolve this issue in SQL, I simply avoided the SQL Server Launchpad service that was interrupting the processing and pulled the data from SQL Server using the R RODBC library. The pull took just over 3 hours (instead of 6+ using sp_execute_external_script).
This might implicate the SQL Launchpad service, and suggests that memory was not the issue.
Please try your scenario in SQL Server 2016 RTM. There have been many functional and performance fixes made since CTP3.
For more information on how to get SQL Server 2016 RTM, check out the "SQL Server 2016 is generally available today" blog post.
I had almost the same issue with SQL Server 2016 RTM-CU1. My query failed with error 0x80004004 instead of 0x80004005, and it failed beginning with 10,000,000 records, but that could be down to my having only 16 GB of memory and/or different data.
I got around it by using a field list instead of "*". Even if the field list contains all the fields from the data source (a rather complicated view in my case), a query with an explicit field list always succeeds, while "SELECT TOP x * FROM ..." always fails for some large x.
I've had a similar error (0x80004004), and the problem was that one of the rows in one of the columns contained a "very" special character (I say "very" because other special characters did not cause this error).
So when I replaced 'Folkelånet Telefinans' with 'Folkelanet Telefinans', the problem went away.
In your case, maybe at least one of the values in the last 99M rows contains something like that character, and you just have to replace it. I hope that Microsoft will resolve this issue at some point.
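If you suspect a "very" special character, one practical first pass is to scan the suspect text columns for non-ASCII values before handing the data to R. A hypothetical helper (the sample values are assumptions, borrowed from the answer above; a real check would run over the column data you pull from the server):

```python
def find_non_ascii(values):
    """Return (index, value) pairs for strings containing non-ASCII characters."""
    return [(i, v) for i, v in enumerate(values) if not v.isascii()]

names = ["Folkelånet Telefinans", "Plain Name"]
print(find_non_ascii(names))  # [(0, 'Folkelånet Telefinans')]
```

Not every non-ASCII character triggers the failure, so this only narrows down candidate rows to test by replacement, as the answerer did.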

"The size associated with an extended property cannot be more than 7,500 bytes" running `sp_addextendedproperty`

I've scripted a database and ran the script to create a new database, but it throws an error with the message:
The size associated with an extended property cannot be more than 7,500 bytes.
How can I get rid of this problem?
I had the same issue - generated from a VS2015 schema compare. It turns out all that extended property does is persist the design of the view that you can see in SQL Server Management Studio, so you don't really need it.
Delete the extended property from the script and retry.
In my case, I noticed that a view was scripted with three extended properties, their names being:
MS_DiagramPane1
MS_DiagramPane2
MS_DiagramPaneCount
I had also modified the script code for the view to reflect several tables that have been renamed, and I renamed those tables where they occurred in the (large) @value parameter passed to the system procedure sp_addextendedproperty for the first property defined in the script. The new table names are longer than the old ones, so I suspect the parameter value now exceeds the 7,500-byte (character) limit.
But based on the property names, and after examining the @value values, it appears that whatever uses those properties (i.e., SQL Server Management Studio or other Microsoft tools) combines any number of 'diagram pane' properties. Here are the last sixteen lines of the MS_DiagramPane1 property:
Begin Table = "ROLES_1"
Begin Extent =
Top = 292
Left = 869
Bottom = 387
Right = 1059
End
DisplayFlags = 280
TopColumn = 0
End
Begin Table = "GL_Journal_Code"
Begin Extent =
Top = 6
Left = 1078
Bottom = 118
Right = 12
and the first ten lines of the MS_DiagramPane2 property:
65
End
DisplayFlags = 280
TopColumn = 0
End
End
End
Begin SQLPane =
PaneHidden =
End
By simply moving some of the content at the end of the parameter value for the first property to the beginning of the parameter value for the second property I was able to run the script without observing the error.
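The workaround of moving content from MS_DiagramPane1 into MS_DiagramPane2 amounts to splitting one long value into chunks that each fit under the limit. A hypothetical Python sketch of that idea (note the limit is in bytes; with non-ASCII text, character counts and byte counts differ, so a byte-aware split would be needed there):

```python
def split_for_extended_properties(text, limit=7500):
    """Split a long property value into chunks that each fit the size limit."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

# e.g. an oversized diagram-pane definition split across Pane1, Pane2, ...
panes = split_for_extended_properties("x" * 16000)
print([len(p) for p in panes])  # [7500, 7500, 1000]
```

Each chunk would then be passed as the @value of its own numbered property (MS_DiagramPane1, MS_DiagramPane2, ...), matching how the tooling apparently reassembles them.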
How can I get rid of this problem?
You do not define an extended property that needs to be larger than 7,500 bytes. Is that not obvious?
This is one of those "a page is 8 KB, no more and no less" limitations which, minus overhead, comes down to a limit of 7,500 bytes per extended property. Need to store more? Well, not possible in an extended property.
