When you use a default value for a column in the SQL Server Management Studio table designer, SSMS changes your default and wraps it in parentheses (in all editions and all versions of SQL Server). For example, if you set 0 as the default value, this default is changed to (0). I don't know why SQL Server uses parentheses, and whether there is a practical reason for it.
Thanks in advance.
There are certain types of objects, such as DEFAULT and CHECK constraints, whose original textual form SQL Server doesn't store - in contrast with, say, stored procedures, where you can (absent encryption) always retrieve the text in the exact form you gave it to the server, complete with any white space, formatting, comments, etc.
Because SQL Server doesn't store the textual form, it always has to regenerate one to show to users when they ask for such objects. When and where it chooses to insert parentheses can be a bit of a mystery (it's not documented), but since they don't change the meaning of the expression, they shouldn't be a concern.
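For instance, here is a minimal sketch of how to see that regenerated text (the table and constraint names are made up for illustration):

-- Define a plain default of 0 ...
CREATE TABLE dbo.ParenDemo
(
    Qty INT NOT NULL CONSTRAINT DF_ParenDemo_Qty DEFAULT 0
);

-- ... then ask SQL Server for the definition it regenerates.
-- The text typically comes back as ((0)), not the 0 you typed.
SELECT name, definition
FROM sys.default_constraints
WHERE parent_object_id = OBJECT_ID('dbo.ParenDemo');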
Actually, 0 and (0) evaluate to the same result, so there is nothing to really overcome. Ignore it. It's just how SQL Server stores them internally; it is purely a display issue.
It does that if you do it interactively as well. I'm not sure why it does this, but it's nothing to worry about.
When I run the following query in SQL Server 2019, the result is 1, whereas it should be 0.
select CHARINDEX('αρ', 'αυρ')
What could be the problem?
As was mentioned in the comments, it may be because you have not declared your string literals as Unicode strings while using Unicode characters in them. SQL Server will be converting the strings to another code page and doing a bad job of it. Try running this query to see the difference.
SELECT 'αρ', 'αυρ', N'αρ', N'αυρ'
On my server, this gives the following output:
a? a?? αρ αυρ
Another issue is that CHARINDEX uses the collation of the input, which I think is probably not set correctly in this instance. You can force a collation by setting it on one of the inputs. It is also possible to set it at the instance, database, and column level.
There are different collations that may be applicable. These have different features; for example, some are case sensitive and some are not. Also, not all collations are installed with every SQL Server instance. It would be worth running SELECT * FROM sys.fn_helpcollations() to see the descriptions of all the installed ones.
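As a quick sketch of checking that, you could look at the current database collation and narrow the installed list down to the Greek collations:

-- Current database collation
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS CurrentDatabaseCollation;

-- Installed collations whose names start with 'Greek'
SELECT name, description
FROM sys.fn_helpcollations()
WHERE name LIKE 'Greek%';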
If you change your query to this you should get the result you are looking for.
SELECT CHARINDEX(N'αρ' COLLATE Greek_BIN, N'αυρ')
I need to change the DataSource for my SSRS reports. Some field names and DIM/FACT table names have changed in the SQL Server 2008 database used to create the SSRS reports. How can I change the DataSource without losing all of the work I have done? Some field names are not the same or have been removed.
The reports were already uploaded/deployed from Visual Studio and copied to SharePoint 2010. Is there a way to modify the original datasource without having to rewrite the whole drill-down report?
I am new to SSRS and I hope what I am asking makes sense.
The Solution Explorer and Properties windows in Visual Studio were modified, but the Report Data section (on the left) is still the same. Can someone please help me?
In your example, you have your report splendidly broken out into 3 parts - an RDL, which is your actual Report definition; an RSD, which is your dataset, which houses a reference to a sproc or just your entire query, and maintains information about the field names, data types, etc; and an RDS, which is your datasource, and merely contains a connection string.
As long as the metadata shared between them remains the same, you can alter any of these files independently of the others - you can completely gut and rewrite your RSD, and as long as the field names, data types, and parameters are the same, the RDL will continue to work with no modifications needed. Similarly, you can change your data source's (RDS) connection string, and as long as the new connection has access to the same objects, your RSD, and thus your RDL, will work fine.
So, if you merely need to change the data source, simply modify that file, and you're done.
It sounds, however, like you need to change your dataset. This can be as simple or as complicated as you'd like it to be. You could simply update your query and alias all of the new field names back to what they were before your change, as sketched below. This would require no modifications to your RDL, though it could be argued that this is a bad practice.
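A minimal sketch of that aliasing approach - every object name below is hypothetical; the point is just that the dataset query maps the new names back to the field names the RDL already expects:

SELECT
    f.SalesAmountNew AS TotalSales,   -- renamed fact column, aliased back to the old field name
    d.CalendarDate   AS ReportDate    -- renamed dimension column, same trick
FROM dbo.FactSales_v2 AS f
JOIN dbo.DimDate_v2   AS d
    ON d.DateKey = f.DateKey;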
Lastly, if this really is a simple change of replacing one value with another, know that all 3 files - RDS, RSD, RDL - are simply XML. Open them up using the Notepad clone of your choice, and do a find/replace for everything (you can also use "Code View" in Visual Studio).
I have the following query in MS Access that needs to be converted to SQL Server. I am trying to understand what the Format function does here and what the "0" is for.
SELECT Format([SumOfTotalPopulation],"0") AS Expr1, SumOfTotalPopulation
FROM qry_ASSET_STREAM_DS_POP_PROP;
Can anyone help me understand this? The above code is in MS Access 2003 and I'm working on SQL Server 2008 R2.
In your case, Format([SumOfTotalPopulation],"0") just removes the decimals from the number.
In SQL Server, you could use something like Str(sumOfTotalPopulation,12,0)
Or you could use Round()
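A rough sketch of those two options side by side, assuming SumOfTotalPopulation is a numeric column and the Access query has been recreated as a view of the same name in SQL Server:

SELECT
    STR(SumOfTotalPopulation, 12, 0) AS Expr1,    -- right-aligned string, no decimal places
    ROUND(SumOfTotalPopulation, 0)   AS Rounded,  -- stays numeric, rounded to 0 decimals
    SumOfTotalPopulation
FROM dbo.qry_ASSET_STREAM_DS_POP_PROP;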
The Access Format() function behaves differently based on the data type passed into it. In this example, I'm assuming that your SumOfTotalPopulation field is a number, which means that the formatting will be done as described here - basically, it will be formatted as an integer - no decimal point, no thousands separator.
The good news for you is that if the field in SQL Server is already defined as an integer type, you won't have to do this formatting. Otherwise, you should be doing this formatting at the presentation layer (user interface, web page, report, etc.) and not in the query itself.
Access teaches a lot of bad habits. Rather than translate what you have in Access one-for-one, take this opportunity to update things to be done "the SQL Server way" wherever possible.
I am currently moving a product from SQL Server to Oracle. I am semi-familiar with SQL Server and know nothing about Oracle, so I apologize if the mere presence of this question offends anyone.
Inferring from this page, http://download.oracle.com/docs/cd/E12151_01/doc.150/e12156/ss_oracle_compared.htm, it would seem that the data type conversion from SQL Server to Oracle should be:
REAL = FLOAT(24) -> FLOAT(63)
FLOAT(p) -> FLOAT(p)
TIMESTAMP -> NUMBER
NVARCHAR(n) -> VARCHAR(n*2)
NCHAR(n) -> CHAR(n*2)
Here are my questions regarding them:
For FLOAT, considering that FLOAT(p) -> FLOAT(p), wouldn't it also mean that FLOAT -> FLOAT(24)?
For TIMESTAMP, since Oracle also has its own version of it, wouldn't it be better that TIMESTAMP -> TIMESTAMP?
Finally, for NVARCHAR(n) and NCHAR(n), I thought the issue would be regarding Unicode. Then, again, since Oracle provides its own version of both, wouldn't it make more sense that NVARCHAR(n) -> NVARCHAR(n) and NCHAR(n) -> NCHAR(n)?
It would be much appreciated if someone were to elaborate on the previous 3 matters.
Thanks in advance.
It appears that Oracle's CHAR and VARCHAR2 (always use VARCHAR2 instead of VARCHAR) already support Unicode - the document you've linked to advises converting to those from the SQL Server NCHAR and NVARCHAR datatypes.
The SQL Server TIMESTAMP isn't actually a timestamp at all - it's a row-version value that's just used to indicate that a row has changed - and it can't be converted back into any kind of DATETIME (at least not in any way that I know of).
For FLOAT, a precision of 126 binary digits (Oracle's default for FLOAT) would be enormous - since the developer tools automatically map SQL Server's FLOAT to Oracle's FLOAT(53), why not use that precision?
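To pull those points together, here is a rough sketch of one possible mapping (table and column names are invented; whether plain VARCHAR2 is enough for Unicode depends on the database character set, so NVARCHAR2 is shown as the cautious alternative):

-- SQL Server original:
--   CREATE TABLE Person (Id INT, Name NVARCHAR(50), Score FLOAT);
-- One possible Oracle counterpart:
CREATE TABLE Person
(
    Id    NUMBER(10),
    Name  NVARCHAR2(50),  -- or VARCHAR2(50) if the database character set is AL32UTF8
    Score FLOAT(53)       -- matches what the developer tools map SQL Server FLOAT to
);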
This is more FYI than an answer to your question, but you're potentially going to run into a particularly painful difference between SQL Server and Oracle. In SQL Server, you can define a string column (of whatever flavor) to not allow NULL values, and then insert zero-length (aka "blank") strings into that column, because SQL Server does not consider a blank string to be the same as a NULL.
Oracle does consider a blank string to be the same as a NULL, so Oracle will not let you insert blank values into a NOT NULL column. This obviously causes problems when copying data from a table in SQL Server into its counterpart table in Oracle (a minimal illustration follows the list below). Your choices for dealing with this are:
Set the offending string column in Oracle to allow NULL values (so not a good idea)
When copying the data, replace the blank strings with something else (I have no idea what you should use here)
Skip the offending rows and pretend you never saw them
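A minimal illustration of the difference (hypothetical table name; ORA-01400 is Oracle's "cannot insert NULL" error):

-- SQL Server: succeeds, because '' is an empty string, not NULL
CREATE TABLE BlankDemo (Name VARCHAR(10) NOT NULL);
INSERT INTO BlankDemo (Name) VALUES ('');

-- Oracle: the equivalent insert fails with ORA-01400,
-- because Oracle treats '' as NULL
CREATE TABLE BlankDemo (Name VARCHAR2(10) NOT NULL);
INSERT INTO BlankDemo (Name) VALUES ('');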
I'd love to think that Oracle's choice to consider blank strings to be NULL (it is alone among the major DBs in this) was meant to lock customers into its platform, but this one actually works in the opposite direction: you can move a database from Oracle to something else without the blank = NULL difference causing any problems.
See this earlier question: Oracle considers empty strings to be NULL while SQL Server does not - how is this best handled?
The following is not a direct answer to your question, but it is worth taking a look at this SQLTeam blog post:
sqlteam - Datatypes translation between Oracle and SQL Server part 2: number
It has detailed explanation about how to handle numbers, etc.
I'm trying to export some tables from SQL Server 2005 and then create those tables and populate them in Oracle.
I have about 10 tables, varying from 4 columns up to 25. I'm not using any constraints/keys, so this should be reasonably straightforward.
First I generated scripts to get the table structure, then modified them to conform to Oracle syntax (i.e. changed NVARCHAR to VARCHAR2).
Next I exported the data using SQL Server's export wizard, which created a CSV flat file. However, my main issue is that I can't find a way to force SQL Server to double-quote column names. One of my columns contains commas, so unless I can find a method for SQL Server to quote column names, I will have trouble when it comes to importing this.
Also, am I going the difficult route, or is there an easier way to do this?
Thanks
EDIT: By quoting I'm referring to quoting the column values in the CSV. For example, I have a column which contains addresses like
101 High Street, Sometown, Some
county, PO5TC053
Without changing it to the following, it would cause issues when loading the CSV
"101 High Street, Sometown, Some
county, PO5TC053"
After looking at some options with SQL Developer, and at manually trying to export/import, I found a utility in SQL Server Management Studio that gets the desired results and is easy to use. Do the following:
Go to the source schema on SQL Server
Right click > Export data
Select source as current schema
Select destination as "Oracle OLE provider"
Select properties, then add the service name into the first box, then username and password, be sure to click "remember password"
Enter query to get desired results to be migrated
Enter table name, then click the "Edit" button
Alter mappings, change nvarchars to varchar2, and INTEGER to NUMBER
Run
Repeat process for remaining tables, save as jobs if you need to do this again in the future
Use the SQL Developer migration tools
I think quoting column names in Oracle is something you should avoid. It causes all sorts of problems.
As Robert has said, I'd strongly advise against quoting column names. The result is that you'd have to quote them not only when importing the data, but also whenever you want to reference that column in a SQL statement - and yes, that probably means in your program code as well. Building SQL statements becomes a total hassle!
From what you're writing, I'm not sure if you are referring to the column names or the data in those columns. (Can SQL Server really have a comma in a column name? I'd be really surprised if there were a good reason for that!) Quoting the column content should be done for any string-like columns (although I've found that other delimiter characters often work better, since needing to "escape" quotes becomes another issue). If you're exporting to CSV, that should be an option... but then I'm not familiar with the export wizard.
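If it is the column content you need to quote, one hedged option (table and column names below are hypothetical) is to build the quoting into the export query itself, doubling any embedded quotes so the CSV still parses:

SELECT
    '"' + REPLACE(Address, '"', '""') + '"' AS Address,  -- wrap in quotes, escape embedded quotes
    '"' + REPLACE(Name,    '"', '""') + '"' AS Name
FROM dbo.Customers;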
Another idea for moving the data (depending on the scale of your project) would be to use an ETL/EAI tool. I've been playing around a bit with the Pentaho suite and their Kettle component. It offered a good range of options to move data from one place to another. It may be a bit oversized for a simple transfer, but if it's a big "migration" with the corresponding volume, it may be a good option.