Using `varchar` and `nvarchar` columns in the same table and database? [closed] - sql-server

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Are there any possible implications if I have varchar and nvarchar columns in the same table or database?
--Additional Details--
I have a database with `varchar` columns and now I want to convert everything to the Unicode data type `nvarchar`. But someone on the team suggested that we shouldn't touch one specific column, because it will only ever hold characters that fit in `varchar` under the inherited collation. Could we run into any problems in the future if we adopt his suggestion? We are not going to compare the varchar column with an nvarchar one.

No, feel free to have one of the types, both of them or neither.
Just remember that if you want to store Unicode, or think that at any time in the future you'll store Unicode, choose nvarchar. It takes more space per character, but that usually doesn't really matter (and when it does, enabling compression on the table also gives you Unicode compression, which helps reduce the space).
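As a minimal T-SQL sketch (table and column names are illustrative, and it assumes the varchar column uses a Latin collation), the two types coexist fine in one table; the only surprise is that non-Latin text stored in the varchar column is silently turned into question marks:

CREATE TABLE dbo.MixedDemo (
    AsciiOnly varchar(50),   -- non-Unicode, limited to the column's code page
    AnyText   nvarchar(50)   -- Unicode
);

INSERT INTO dbo.MixedDemo (AsciiOnly, AnyText)
VALUES (N'Грузия', N'Грузия');   -- Cyrillic sample text

SELECT AsciiOnly, AnyText FROM dbo.MixedDemo;
-- AsciiOnly comes back as '??????', AnyText keeps 'Грузия'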

Related

SQL Server identity or a self-calculated sequence [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I am designing a database. I want to define an automatic sequence on a table's primary key field. What is the best solution for it?
I know I can enable the identity property for a field, but it has some problems (for example, its seed jumps on restarts and failed inserts).
I could also use a calculated sequence: for example, I can take the maximum of the existing field values, increment it, and use that as the key for the newly inserted record.
Which one is better? Is there another solution?
To my mind there are three options (a short sketch of the first two follows the list):
Identity - the simplest, but can have gaps when the server is restarted, etc.
Sequence - a separate object; you will still have gaps in case of a rollback.
A separate table for the numbers - you won't have gaps, but it can become a hotspot that causes blocking.
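A minimal T-SQL sketch of the first two options (object names are illustrative, not from the question); as noted above, both can still leave gaps:

-- Option 1: IDENTITY on the key column itself.
CREATE TABLE dbo.OrdersIdentity (
    OrderId int IDENTITY(1,1) PRIMARY KEY,
    Payload nvarchar(100)
);

-- Option 2: a standalone SEQUENCE object used as the column default.
CREATE SEQUENCE dbo.OrderSeq AS int START WITH 1 INCREMENT BY 1;

CREATE TABLE dbo.OrdersSequence (
    OrderId int NOT NULL DEFAULT (NEXT VALUE FOR dbo.OrderSeq) PRIMARY KEY,
    Payload nvarchar(100)
);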

How to generate a database table from a .csv file automatically? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I have a CSV file with 110 columns.
Preparing a table with 110 columns manually will take me forever. Is there any other way to do it?
I tried to create a table, but I was wondering whether, when I create a table in a PuTTY session, there is any way for it to pick up the column names and the number of columns by itself.
Before you jump the gun and just make one huge table, you should sit down and think about whether having one massive table really is useful in the long run. Normalization can be a wonderful thing, and depending on the size of your input it will be much less of a hassle to structure everything now rather than later.
As far as deciding what to import, toss it into Excel or MySQL and drop the fields you don't want/need. MySQL's import tooling (for example the Workbench Table Data Import Wizard) can actually build the structure of your table from the CSV file, as long as you give it the right delimiter (comma, semicolon, whatever separates your fields).
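If you are working from a plain command-line (PuTTY) session instead, a minimal MySQL sketch looks like this (the file path, table and column names are illustrative); note that the table has to be created first, and LOAD DATA only fills it:

-- Create the target table up front (abbreviated to two columns here).
CREATE TABLE account_import (
  col1 VARCHAR(255),
  col2 VARCHAR(255)
);

-- Load the CSV, skipping the header row.
LOAD DATA LOCAL INFILE '/tmp/data.csv'
INTO TABLE account_import
FIELDS TERMINATED BY ','      -- use ';' if the file is semicolon-delimited
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;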

What is the reason to use a suffix on database table and/or column names? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
This question is out of pure curiosity.
Why do some systems and frameworks adopt a suffix on table and/or column names?
Example: the Activiti framework for business processes uses the _ character at the end of every column name (ID_, VERSION_, NAME_, ...). I have noticed that in some other systems as well.
I'm sure there is a good reason for that.
This is a convention for avoiding names that may be reserved words. When you want to call a column "from", you can use "from_" instead, because "from" is a reserved word in SQL.
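A minimal SQL sketch of the idea (table and column names are illustrative): the trailing underscore keeps the identifiers off the reserved-word list, so nothing has to be quoted:

CREATE TABLE transfer (
  id_   INT PRIMARY KEY,
  from_ INT,
  to_   INT
);

-- Without the suffix, the reserved words would need quoting everywhere,
-- e.g. [from] and [to] in SQL Server, or "from" and "to" in standard SQL.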

Database design for similar data across multiple points in time [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
I am attempting to come up with a Database Design that works well for a specific division of my company.
Basically, I have a list of Account Numbers with a ton of fields associated with them. My division needs to compare how these fields change over time (What was in that field for this account number a year ago?).
I am currently thinking of a very linear approach where each time-stamped snapshot gets its own large table, so one table would be named AccountInfo04012013 and the next month's data would go into a new table called AccountInfo05012013. This way we can make comparisons between any two months.
What are the drawbacks of this plan, and what should I be doing instead?
You are going to have to use timestamps, i.e. keep a single table and date-stamp each row rather than creating a table per month. All database managers have date/time types built in.
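A minimal T-SQL sketch of that approach (table and column names are illustrative): one history table keyed by account number and an as-of date, so any two months can be compared with a self-join:

CREATE TABLE AccountInfoHistory (
    AccountNumber varchar(20)   NOT NULL,
    AsOfDate      date          NOT NULL,
    Balance       decimal(18,2) NULL,   -- one example of a tracked field
    CONSTRAINT PK_AccountInfoHistory PRIMARY KEY (AccountNumber, AsOfDate)
);

-- What changed between April and May 2013?
SELECT a.AccountNumber,
       a.Balance AS BalanceApril,
       b.Balance AS BalanceMay
FROM   AccountInfoHistory a
JOIN   AccountInfoHistory b
       ON b.AccountNumber = a.AccountNumber
WHERE  a.AsOfDate = '2013-04-01'
  AND  b.AsOfDate = '2013-05-01';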

Need help with indexing an XMLType column in Oracle for a specific XPath [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I am using the structured storage type for an XMLType column (i.e. an XML schema is defined).
And I need to run a huge number of WHERE-clause filters on the values at a specific XPath in the XML.
Which XMLType indexing should I go for?
Thanks for the help in advance!
There are a lot of subtleties when it comes to indexing XML, and it's not possible for us to give you a definitive answer on such scant information. You will have to experiment a bit.
However, if you have XPath expressions which you know will constitute the bulk of your querying, then you should start by creating an XDB.XMLINDEX index, specifying those paths in the PARAMETERS clause. Something like this example from the documentation:
CREATE INDEX po_xmlindex_ix ON po_clob (OBJECT_VALUE)
  INDEXTYPE IS XDB.XMLINDEX
  PARAMETERS ('PATHS (INCLUDE (/PurchaseOrder/LineItems//*
                               /PurchaseOrder/Reference))');
But you really need to read the Oracle XML DB documentation on XML indexing.
