Migrate PB7 to 10.5 using SQL server DB - sql-server

I migrated PB7 to PB10.5 on SQL server DB. The system gives me this message:
"DBMS MSS Microsoft SQL Server 6.x is not supported in your current
installation"
I changed the database connection settings.
Old connection used in PB7:
DBMS = MSS Microsoft SQL Server 6.x
Database = databaseName
ServerName = serverName
LogId = LogId
AutoCommit = 1
DBParm = ""
UserId = userid
DatabasePassword =
LogPassword=password
Lock=
Prompt=0
To this in PB10.5:
DBMS =SNC SQL Native Client(OLE DB)
Database =databaseName
ServerName =serverName
LogId =LogId
AutoCommit = 0
DBParm = "
Database='databaseName'
TrimSpaces=1"
UserId=userid
DatabasePassword=
LogPassword=password
Lock=
Prompt=0
The system now runs without the previous error message, but when any old stored Arabic data is retrieved in a DataWindow it appears unreadable, like:
ÚãáíÇÊ ÇÎÑì

I can't believe this question got overlooked -- sorry about that. It is a common question when migrating from older versions of PowerBuilder to version 10 and higher. Good news: it is very easy to fix, it just can be time consuming depending on how many places you need to fix.
I've already written a blog article on the subject, or just duckduckgo "migrating PowerBuilder Unicode issues":
Converting ANSI and Unicode Strings for PowerBuilder Migrations to Version 10 and Higher
Here is a summary of the conversion process:
Convert string data to ANSI:
Blob lbl_data
String ls_data
lbl_data = Blob("PowerBuilder is cool!", EncodingANSI!)
ls_data = String(lbl_data, EncodingANSI!)
Convert data read from a file to ANSI (a sketch; the file name is illustrative, and FileReadEx requires PB10 or higher):
Integer li_file
Blob lbl_data
String ls_data
li_file = FileOpen("arabic_data.txt", StreamMode!, Read!)
FileReadEx(li_file, lbl_data)
FileClose(li_file)
ls_data = String(lbl_data, EncodingANSI!)

Related

TDengine jdbc timezone

I used the code below to create a timezone-reconfigured connection:
Properties properties = new Properties();
properties.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC+8");
Connection connDefault = DriverManager.getConnection("jdbc:TAOS://" + host + ":0/", properties);
Then I used connDefault to create a statement, and used this statement to insert and select data:
insert into meters values('2021-07-29 00:00:00', 'taosdata');
select * from meters ;
But the Java query result is
ts=2021-07-29 16:00:00.0
There is a 16-hour difference, while I expected the query result to be the same as the inserted time '2021-07-29 00:00:00'.
What's more, the time stored in the DB is like this:
ts | name
1627545600000 | taosdata
Query OK, 1 row(s) in set (0.008616s)
Could someone help figure out why?
In addition, I found that the official website of TDengine always sets the timezone to "UTC-8" -- does anyone know which timezone that is?
Properties connProps = new Properties();
connProps.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
connProps.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
connProps.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
Like other database software, for example PostgreSQL, TDengine uses the POSIX timezone spec, in which the sign of the offset is the opposite of the ISO 8601 convention: POSIX "UTC-8" denotes the zone 8 hours east of Greenwich, i.e. ISO +08:00.
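To see the sign flip concretely, here is a standalone sketch (no TDengine driver involved) using the epoch value the question shows as stored, 1627545600000: rendered at ISO -08:00 (which is what the POSIX spec "UTC+8" actually denotes) it recovers the inserted wall-clock time, while ISO +08:00 yields the 16:00 the query returned.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class PosixTzDemo {
    public static void main(String[] args) {
        // Epoch milliseconds as stored by TDengine in the question
        Instant stored = Instant.ofEpochMilli(1627545600000L);

        // POSIX "UTC+8" means 8 hours WEST of Greenwich, i.e. ISO -08:00:
        // this recovers the wall-clock time that was inserted
        System.out.println(LocalDateTime.ofInstant(stored, ZoneOffset.of("-08:00"))); // 2021-07-29T00:00

        // Reading "+08:00" the ISO way instead yields the 16:00 the query showed
        System.out.println(LocalDateTime.ofInstant(stored, ZoneOffset.of("+08:00"))); // 2021-07-29T16:00
    }
}
```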

Migrating data from DateTime2(4) fields to Timestamp(4) With Time Zone fields truncates milliseconds

Example:
DateTime2(4): SQL Server
-----------------------------
2017-03-30 15:10:15.1234
-----------------------------
Timestamp(4) With Time Zone: PostgreSQL
-----------------------------
2017-03-30 15:10:15.123
-----------------------------
I used the FireDAC component TFDDataMove, configured to migrate the data as follows:
FDTable1.Active := False;
FDDataMove1.CommitCount := 100;
FDDataMove1.StatisticsInterval := 100;
FDDataMove1.TextDataDef.StrEmpty2Null := False;
FDDataMove1.Mode := dmAlwaysInsert;
FDDataMove1.Options := [poOptimiseDest, poOptimiseSrc, poClearDest, poAbortOnExc, poIdentityInsert];
FDDataMove1.Source := FDQuery2;
FDDataMove1.Destination := FDTable2;
Version SQL SERVER: 2014
Version PostgreSQL: 9.6.3
Version Firedac: 11.0.1(Build 73709)
Version Rad Studio: RAD Studio XE7
=================================================
Client info:
=================================================
Loading Driver PG...
Client brand = PostgreSQL regular
Client version = 906040000
Client DLL name: C\...\Proyect_Migration\bin\libpg.dll
=================================================
Session info
=================================================
Current catalog =
Current schema = public
Server version = 9.6.3
Server Encoding = UTF8
Client Encoding = UTF8
Is Superuser = on
Session Authorization = postgres
date Style = ISO,DMY
Integer date/time = on
Time zone = America/Mexico_City
Standard conforming string = on

Cloning a Standard:S0 database to a Basic edition (test development)

A really simple doubt -- I guess it is a bug, or something I got wrong.
I have a database in Azure on the Standard:S0 tier, currently 178 MB, and I want to make a copy (in a procedure run on master) with the resulting database in the Basic pricing tier.
I thought of:
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( EDITION = 'Basic')
With an unhappy result:
the database is created in pricing tier Standard:S0.
Then tried:
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( SERVICE_OBJECTIVE = 'Basic' )
or
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic' )
With an even unhappier result:
ERROR:: Msg 40808, Level 16, State 1, The edition 'Standard' does not support the service objective 'Basic'.
tried also:
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( MAXSIZE = 500 MB, EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic' )
with the unhappiest result:
ERROR:: Msg 102, Level 15, State 1, Incorrect syntax near 'MAXSIZE'.
Am I doing something not allowed?
But if you copy your database via the portal, you'd notice that the Basic tier is not available, with the message 'A database can only be copied within the same tier as the original database.' The behavior is documented here: 'You can select the same server or a different server, its service tier and performance level, or a different performance level within the same service tier (edition). After the copy is complete, the copy becomes a fully functional, independent database. At this point, you can upgrade or downgrade it to any edition. The logins, users, and permissions can be managed independently.'
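In line with that documentation, a workaround sketch (database names taken from the question; the downgrade only succeeds after the copy finishes and only if the database fits within Basic's 2 GB size cap):

```sql
-- Copy within the same tier (run on master)
CREATE DATABASE MyDB_2 AS COPY OF MyDB;

-- Once the copy completes, downgrade the now-independent copy
ALTER DATABASE MyDB_2 MODIFY (EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic');
```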

How do I specify a specific database in a SQL server when creating an ODBC connection on Windows?

I am working off of a server housing various SQL databases (accessed via Microsoft SQL Server Management Studio) and am going to use R to perform analyses and explore a specific database within the server. I have network security that permits communication between machines, drivers installed on the R server, and RODBC installed.
When I attempt to establish a Windows ODBC connection in Control Panel > Administrative Tools > Data Sources, I can only add a data source for the entirety of the SQL server, not just for the specific database I want to look at. I pasted the code I have been experimenting with below.
library(RODBC)
channel <- odbcConnect("Example", uid = "xxx", pwd = "****")
sqlTables(channel)
sqlTables(channel, tableType = "TABLE")
res <- sqlFetch(channel, "samp.le", max = 15) # not recognized as a table
library(RODBC)
ch <- odbcDriverConnect('driver={"SQL Server"}; server=Example; database=dbasesample; uid="xxxx", pwd = "****"')
Response: Warning messages:
1: In odbcDriverConnect("driver={\"SQL Server\"}; server=sample; database=dbasesample; uid=\"xxxx", pwd = \"xxxx\"") :
[RODBC] ERROR: state IM002, code 0, message [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
2: In odbcDriverConnect("driver={\"SQL Server\"}; server=sample; database=dbasesample; uid=\"xxxx\", pwd = \"xxxx!\"") :
ODBC connection failed
Any insight into this issue would be much appreciated.
While querying with the sqlQuery() function you can specify the database, schema and table, e.g.
library(RODBC)
con = odbcConnect(dsn = 'local')
sample_query = sqlQuery(con, 'select * from db.dbo.table')
I have not found a way to define the database from within the function parameters while using sqlFetch() or sqlSave(). An indirect way would be to define the default database in the DSN (as written in the comments), but then you would need a different DSN for each database you would like to use.
A better solution would be to use the odbc and DBI packages instead of RODBC, and define the database in the connection statement, e.g.
library(dplyr)
library(DBI)
library(odbc)
con <- dbConnect(odbc::odbc(), dsn = 'local', database = 'db')
copy_to(con, rr2, temporary = F)
By the way, I found copy_to to be much faster than the equivalent sqlSave of RODBC.

How to update conflict resolver when upgrading from SQL-Server 2005 to SQL-Server 2008

We have recently upgraded from SQL Server 2005 to SQL Server 2008 (R2, SP1). This upgrade included some publications, where all tables are published with a default conflict resolver based on the "later wins" principle. Its smart name is 'Microsoft SQL Server DATETIME (Later Wins) Conflict Resolver', and the corresponding dll file is ssrmax.dll.
As you all know, once a table is published with a conflict resolver, the same conflict resolver must be used in all later publications using this table. Fair enough, but when adding previously published tables to new publications, and specifying the very same conflict resolver for this table, we get an error message:
use [myDb]
exec sp_addmergearticle
@publication = N'myDb_Pub',
@article = N'Tbl_blablabla',
@source_owner = N'dbo',
@source_object = N'Tbl_blablabla',
@type = N'table',
@description = N'',
@creation_script = N'',
@pre_creation_cmd = N'drop',
@schema_option = 0x000000000C034FD1,
@identityrangemanagementoption = N'none',
@destination_owner = N'dbo',
@force_reinit_subscription = 1,
@column_tracking = N'false',
@article_resolver = N'Microsoft SQL Server DATETIME (Later Wins) Conflict Resolver',
@subset_filterclause = N'',
@resolver_info = N'ddmaj',
@vertical_partition = N'false',
@verify_resolver_signature = 0,
@allow_interactive_resolver = N'false',
@fast_multicol_updateproc = N'true',
@check_permissions = 0,
@subscriber_upload_options = 0,
@delete_tracking = N'true',
@compensate_for_errors = N'false',
@stream_blob_columns = N'false',
@partition_options = 0
GO
And this is the error we get:
The article '...' already exists in another publication with a different article resolver.
While trying to understand why the same conflict resolver was not considered by the machine to be 'the same conflict resolver', I discovered that there were two conflict resolvers with the same name but different versions in the registry:
the 2005 version:
file ssrmax.dll,
version 2005.90.4035.0,
cls_id D604B4B5-686B-4304-9613-C4F82B527B10
the 2008 version:
file ssrmax.dll,
version 2009.100.2500.0,
cls_id 77209412-47CF-49AF-A347-DCF7EE481277
And I checked that our 2008 server considers the second one the 'available custom resolver' (I got this by running sp_enumcustomresolvers). The problem is that both references are available in the registry, so I guess that old publications refer to the 2005 version, while new publications try to refer to the 2008 version, which is indeed different from the previous one.
So the question is: how can I have the server consider only one of these two versions, and this (of course) without having to drop and recreate the existing publications (which would turn our life into hell for the next two weeks)?
Well... so nobody got an answer. But I think I (finally) got it. Guess what... it is somewhere in the metamodel (as usual)!
When adding an item to a subscription, the new conflict resolver references to be used by the stored procedure come from the [distribution].[MSmerge_articleresolver] table.
But for existing subscriptions, previous conflict resolver references are stored in the system tables of the publishing database, i.e. [sysmergearticles], [sysmergeextendedarticlesview], and [sysmergepartitioninfoview].
So we have on one side an item initially published with SQL Server 2005, where the publication references the 2005 conflict resolver, as per the publishing database metamodel. On the other side, the machine will attempt to add the same item to a new publication, this time with a default reference to the conflict resolver available in the distribution database, which is indeed different from the 2005 one.
To illustrate this, you can check the following:
USE distribution
go
SELECT article_resolver, resolver_clsid
FROM [MSmerge_articleresolver] WHERE article_resolver like '%Later Wins%'
GO
Then,
USE myPublicationDatabase
go
SELECT article_resolver, resolver_clsid
FROM [sysmergearticles] WHERE article_resolver like '%Later Wins%'
GO
SELECT article_resolver, resolver_clsid
FROM [sysmergeextendedarticlesview] WHERE article_resolver like '%Later Wins%'
GO
SELECT article_resolver, resolver_clsid
FROM [sysmergepartitioninfoview] WHERE article_resolver like '%Later Wins%'
GO
So it seems that I should update either the references in the distribution database or the references in the publication database. Let's give it a try!
Thanks -- I had something similar on a republisher, where the subscriber article had a CLSID that made no sense on the server (I looked with Regedit), but trying to add the article to a publication would produce said error.
I updated the resolver_clsid field of the sysmergearticles table for the subscribed article with the clsid it was trying to get:
declare @resolver_clsid nvarchar(50)
exec sys.sp_lookupcustomresolver N'Microsoft SQL Server DATETIME (Earlier Wins) Conflict Resolver', @resolver_clsid OUTPUT
select @resolver_clsid
and could then add the article
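Spelled out, that fix might look like the following sketch (the resolver and article names are taken from the original question's sp_addmergearticle call; directly editing merge system tables is unsupported, so back them up first):

```sql
-- Look up the CLSID the server currently associates with this resolver
DECLARE @resolver_clsid nvarchar(50)
EXEC sys.sp_lookupcustomresolver
    N'Microsoft SQL Server DATETIME (Later Wins) Conflict Resolver',
    @resolver_clsid OUTPUT

-- Point the previously published article at that CLSID
-- (unsupported direct edit of a merge system table -- back up first)
UPDATE sysmergearticles
SET resolver_clsid = @resolver_clsid
WHERE name = N'Tbl_blablabla'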
