SSMS add comma delimiter - shortcut - sql-server

Is there a shortcut for adding commas to values for an IN clause? For example, my query is SELECT * FROM Persons WHERE Id IN (.....)
and I've copied
356
9874
5975
9771
4166
....
Person Id values, let's say from some Excel file. How can I quickly add a comma to the end of each line, so I can paste them into the IN (....) clause?

Here's what you need to do:
Put your cursor one space after the 356 (you'll see why you need that extra space in step 2).
Hold down ALT + SHIFT and press the down arrow key once for each number in the list. You should see a blue line just to the right of your numbers. This is why you needed the extra space after 356 - so that you can arrow down the entire list without having to move left or right.
Release the keys, and press the comma key. A comma should appear in all of the lines.
You can use this method to add quotes or other characters to your SQL queries as well.

I use this in SSMS 2014; I'm not sure whether it works in previous versions.

Yeah, that's always a pain. There are a few things you can do:
insert commas in the cells to the right of the numbers in Excel and copy them along with the list into SSMS
in Notepad++, paste the list of values and press CTRL+H (Find and Replace). Under Search Mode select Extended, type \n in the Find what box, and a comma in the Replace with box
Hope this helps!
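If you'd rather script it, the same newline-to-comma replacement is a couple of lines of Python (a hypothetical helper, not part of SSMS or Notepad++):

```python
def to_in_list(text):
    """Join a newline-separated column of values into one comma-separated list."""
    values = [line.strip() for line in text.splitlines() if line.strip()]
    return ",".join(values)

ids = "356\n9874\n5975\n9771\n4166\n"
print(to_in_list(ids))  # 356,9874,5975,9771,4166
```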

Since SSMS 2012 you can "draw" a vertical selection at the end of the lines with the mouse while holding the ALT key.
After that, just press the comma key and that's it.

I resolved this issue by running this query:
SELECT CONCAT(Id, ',') FROM user
If you want to concatenate all rows into one, you can use the query below:
SELECT SUBSTRING(
    (
        SELECT ',' + CAST(id AS varchar) AS 'data()'
        FROM users FOR XML PATH('')
    ), 2, 9999) AS users
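The trick in the query above is that every value gets a comma prefixed, and SUBSTRING(..., 2, 9999) then drops the leading comma. The same logic, sketched in Python for illustration:

```python
def concat_ids(ids):
    # Mirror of the FOR XML PATH trick: prefix every value with a comma,
    # then drop the leading one (what SUBSTRING(..., 2, 9999) does in the SQL).
    joined = "".join("," + str(i) for i in ids)
    return joined[1:]

print(concat_ids([356, 9874, 5975]))  # 356,9874,5975
```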

Write a little program, like the one below, and fire it off from Launchy.
I wrote mine in C# - called it Commander... probably both the best name and best program ever written.
using System;
using System.Windows.Forms;

namespace Commander
{
    internal class Program
    {
        [STAThread]
        private static void Main()
        {
            // Read the clipboard, then either split a comma-separated list
            // back into lines, or join lines into a comma-separated list.
            var clipboardText = Clipboard.GetText(TextDataFormat.Text);
            Clipboard.SetText(clipboardText.Contains(",")
                ? clipboardText.Replace(",", Environment.NewLine)
                : clipboardText.Replace(Environment.NewLine, ",").TrimEnd(','));
        }
    }
}
Compile the above and reference the resulting Commander.exe from Launchy.
Once referenced:
Highlight a column of characters, and cut or copy the text
Summon Launchy (Alt-Enter, if you're using the default shortcut)
Type Commander
Paste
Enjoy your comma-separated list; use it in an IN statement somewhere, for example
Type Commander again from Launchy with a comma-separated list, and it will reverse the operation. Read the code... it's kind of obvious :)
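For reference, the toggle logic of the Commander program above is easy to mimic in Python (clipboard handling omitted, and newlines simplified to \n rather than Environment.NewLine):

```python
def commander(text):
    """Toggle between a newline-separated column and a comma-separated list,
    mimicking the C# Commander program (clipboard I/O left out)."""
    if "," in text:
        return text.replace(",", "\n")
    return text.replace("\n", ",").rstrip(",")

print(commander("356\n9874\n5975"))  # 356,9874,5975
print(commander("356,9874,5975"))    # back to one value per line
```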

Some good answers here already but here's some more:
... Person's Id values let's say from some Excel file ...
If you're copying from Excel, it's sometimes easier to add the commas (or quote marks) or whatever in Excel before copying.
e.g. in the cell to the right do
=A1 & ","
Then copy that formula all the way down the list.
Also, Notepad++ is great for this sort of thing: you can record a macro that handles one line, then run it N times:
In Notepad++ go to the start of the first line
Select Macro - Start Recording
Do the right keypresses - in this case: End, Comma, Down, Home
Select Macro - Stop Recording
Select 'Run a Macro Multiple Times ...'
It will by default show 'current recorded macro' (the one you just recorded)
Tell it how many times you want it, then off you go

Related

Turn off thousands separator in Snowflake Snowsight

I really like Snowflake's new Snowsight web console. One minor issue is that all the numeric columns have commas as thousands separators rather than just outputting the raw number.
For example I have a bunch of UNIX epochs stored in a column called created_time. For debugging purposes I'd like to quickly copy and paste them into a WHERE clause, but I have to manually remove the commas from 1,666,719,883,332 to be 1666719883332.
Sure it's a minor thing, but doing it several dozen times a day is really starting to add up to minutes.
I realize I could cast the column to a VARCHAR, but I'd rather find a setting that I can turn off for this auto-thousand-separator default behavior.
Does anyone know a way to turn it off?
Here is an example:
create TABLE log (
CREATED_TIME NUMBER(38,0),
MSG VARCHAR(20000)
);
insert into log values (1666719883332, 'example');
select * From log;
which outputs
CREATED_TIME        MSG
1,666,719,883,332   example
Prepare to be amazed! The option to show/hide the thousands separator is in the left corner.
I'd like to quickly copy and paste them into a WHERE clause, but I have to manually remove the commas from 1,666,719,883,332 to be 1666719883332.
The way I use it is the preview pane and its Copy button.
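As a stopgap, stripping the separators from a copied value is trivial to script; a hypothetical Python helper:

```python
def strip_separators(s):
    # Remove thousands separators from a copied Snowsight value
    # before pasting it into a WHERE clause.
    return int(s.replace(",", ""))

print(strip_separators("1,666,719,883,332"))  # 1666719883332
```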

How do I format numbers in excel to use in an IN statement

I'm trying to select a long row of numbers using an IN statement.
How do I format this in excel to be able to drop into my SSMS?
Use a CONCAT to add commas and string delimiters if needed.
=CONCAT("'",A1,"',")
OR
=CONCAT(A1,",")
You don't need to use excel to add the commas. Paste your list of numbers vertically in your IN statement then set your cursor at the end of the top number, hold 'alt' + 'shift' and arrow all the way down to the second last number then press ',' and it'll insert them all the way down.

Properly Using String Functions in Stored Procedures

I have an SSIS package that imports data into SQL Server. I have a field from which I need to cut everything after and including the "-". The following is my script:
SELECT LEFT(PartNumber, CHARINDEX('-', PartNumber) - 1)
FROM ExtensionBase
My question is where in my stored procedure should I use this script so that it cuts before entering data into the ExtensionBase. Can I do this in a Scalar_Value Function?
You have two routes available to you. You can use Derived Columns and the Expressions to generate this value or use a Script Transformation. Generally speaking, reaching for a script first is not a good habit for maintainability and performance in SSIS but the other rule of thumb is that if you can't see the entire expression without scrolling, it's too much.
Dataflow
Here's a sample data flow illustrating both approaches.
OLE_SRC
I used a simple query to simulate your source data. I checked for empty strings, part numbers with no dashes, multiple dashes and NULL.
SELECT
D.PartNumber
FROM
(
VALUES
('ABC')
, ('def-jkl')
, ('mn-opq-rst')
, ('')
, ('Previous line missing')
, (NULL)
) D(PartNumber);
DER Find dash position
I'm going to use FINDSTRING to determine the starting point. FINDSTRING is going to return a zero if the searched item doesn't exist or the input is NULL. I use the ternary operator to either return the position of the first dash, less one to account for the dash itself, or the length of the source string.
(FINDSTRING(PartNumber,"-",1)) > 0
? (FINDSTRING(PartNumber,"-",1)) - 1
: LEN(PartNumber)
I find it helpful in these situations to first compute the positions before trying to use them later. That way if you make an error, you don't have to fix multiple formulas.
DER Trim out dash plus
The 2012 release of SSIS provided us with the LEFT function, while users of previous editions had to make do with SUBSTRING calls.
The LEFT expression would be
LEFT(PartNumber,StartOrdinal)
whilst the SUBSTRING is simply
SUBSTRING(PartNumber,1,StartOrdinal)
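For anyone who finds the SSIS expression syntax opaque, here is the same dash-position-then-LEFT logic sketched in Python (an illustration only, not SSIS code):

```python
def trimmed_part(part_number):
    """Keep everything before the first dash, or the whole string if no dash,
    mirroring the FINDSTRING/ternary expression plus LEFT(PartNumber, StartOrdinal)."""
    if part_number is None:
        return None                        # NULL stays NULL
    pos = part_number.find("-")            # -1 when absent (FINDSTRING gives 0)
    start = pos if pos != -1 else len(part_number)
    return part_number[:start]             # LEFT(PartNumber, StartOrdinal)

print(trimmed_part("mn-opq-rst"))  # mn
print(trimmed_part("ABC"))         # ABC
```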
SCR use net libraries
This approach is to use the basic string capabilities of .NET to make life easier. The String.Split method is going to return an array of strings, less what we split upon. Since you only want the first thing, the zero-eth element, we assign that to our SCRPartNumber column that is created in this task. Note that we check whether the PartNumber is null and set the null flag on our new column when it is.
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
if (!Row.PartNumber_IsNull)
{
string[] partSplit = Row.PartNumber.Split('-');
Row.SCRPartNumber = partSplit[0];
}
else
{
Row.SCRPartNumber_IsNull = true;
}
}
Results
You can see the results are the same however you compute them.

Fix CSV file with new lines

I ran a query on a MS SQL database using SQL Server Management Studio, and some of the fields contained new lines. I chose to save the result as a CSV, and apparently MS SQL isn't smart enough to give me a correctly formatted CSV file.
Some of these fields with new lines are wrapped in quotes, but some aren't, I'm not sure why (it seems to quote fields if they contain more than one new line, but not if they only contain one new line, thanks Microsoft, that's useful).
When I try to open this CSV in Excel, some of the rows are wrong because of the new lines, it thinks that one row is two rows.
How can I fix this?
I was thinking I could use a regex. Maybe something like:
/,[^,]*\n[^,]*,/
Problem with this is it matches the last element of one line and the 1st of the next line.
Here is an example csv that demonstrates the issue:
field a,field b,field c,field d,field e
1,2,3,4,5
test,computer,I like
pie,4,8
123,456,"7
8
9",10,11
a,b,c,d,e
A simple regex replacement won't work, but here's a solution based on preg_replace_callback:
function add_quotes($matches) {
return preg_replace('~(?<=^|,)(?>[^,"\r\n]+\r?\n[^,]*)(?=,|$)~',
'"$0"',
$matches[0]);
}
$row_regex = '~^(?:(?:(?:"[^"]*")+|[^,]*)(?:,|$)){5}$~m';
$result = preg_replace_callback($row_regex, 'add_quotes', $source);
The secret to $row_regex is knowing ahead of time how many columns there are. It starts at the beginning of a line (^ in multiline mode) and consumes the next five things that look like fields. It's not as efficient as I'd like, because it always overshoots on the last column, consuming the "real" line separator and the first field of the next row before backtracking to the end of the line. If your documents are very large, that might be a problem.
If you don't know in advance how many columns there are, you can discover that by matching just the first row and counting the matches. Of course, that assumes the row doesn't contain any of the funky fields that caused the problem. If the first row contains column headers you shouldn't have to worry about that, or about legitimate quoted fields either. Here's how I did it:
preg_match_all('~\G,?[^,\r\n]++~', $source, $cols);
$row_regex = '~^(?:(?:(?:"[^"]*")+|[^,]*)(?:,|$)){' . count($cols[0]) . '}$~m';
Your sample data contains only linefeeds (\n), but I've allowed for DOS-style \r\n as well. (Since the file is generated by a Microsoft product, I won't worry about the older-Mac style CR-only separator.)
See an online demo
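If a scripting pass is acceptable, a simpler (if cruder) alternative to the regex is to merge physical lines until each logical row has the expected number of fields. A rough Python sketch, assuming, as in the sample above, that fields themselves contain no commas:

```python
def merge_rows(text, ncols):
    """Rejoin physical lines that belong to one logical CSV row by counting
    commas; only valid when no field contains a comma of its own."""
    rows, buf = [], ""
    for line in text.splitlines():
        buf = buf + "\n" + line if buf else line
        if buf.count(",") >= ncols - 1:    # row is complete
            rows.append(buf)
            buf = ""
    return rows

sample = 'test,computer,I like\npie,4,8'
print(merge_rows(sample, 5))  # one logical row, newline preserved inside
```

Once the rows are rejoined, a standard CSV parser can take over for the quoting.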
If you want a Java programmatic solution, open the file using the OpenCSV library. If it is a manual operation, open the file in a text editor such as Vim and run a replace command. If it is a batch operation, you can use a Perl command to clean up the CRLFs.

Commas within CSV Data

I have a CSV file which I am directly importing to a SQL Server table. In the CSV file each column is separated by a comma. But my problem is that I have a column "address", and the data in this column contains commas. So what is happening is that some of the data of the address column is going to the other columns while importing to SQL Server.
What should I do?
For this problem the solution is very simple:
first select => flat file source => browse your file =>
then go to the "Text qualifier" box (by default it's none), enter a double quote (") there, and follow the rest of the wizard.
Good luck
If there is a comma in a column then that column should be surrounded by a single quote or double quote. Then if inside that column there is a single or double quote it should have an escape character before it, usually a \
Example format of CSV
ID - address - name
1, "Some Address, Some Street, 10452", 'David O\'Brian'
Newer versions of SQL Server (2017 and later) support the CSV format fully in BULK INSERT, including mixed use of " and , .
BULK INSERT Sales.Orders
FROM '\\SystemX\DiskZ\Sales\data\orders.csv'
WITH ( FORMAT='CSV');
I'd suggest either using another format than CSV or trying other characters as field separator and/or text delimiter. Look for a character that isn't used in your data, e.g. |, #, or ^. The format of a single row would become
|foo|,|bar|,|baz, qux|
A well-behaved parser then won't interpret 'baz, qux' as two columns.
Alternatively, you could write your own import voodoo that fixes any problems. For the latter, you might find this Groovy skeleton useful (not sure what languages you're fluent in though)
Most systems, including Excel, will allow for the column data to be enclosed in single quotes...
col1,col2,col3
'test1','my test2, with comma',test3
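Most CSV libraries can parse this if you tell them about the quote character. For example, with Python's csv module the default quotechar of " can be switched to ':

```python
import csv
import io

# Parse the single-quoted sample above; commas inside quoted fields survive.
data = "col1,col2,col3\n'test1','my test2, with comma',test3\n"
rows = list(csv.reader(io.StringIO(data), quotechar="'"))
print(rows[1])  # ['test1', 'my test2, with comma', 'test3']
```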
Another alternative is to use the Macintosh version of CSV, which uses tabs as delimiters.
The best, quickest and easiest way to resolve the comma in data issue is to use Excel to save a comma separated file after having set Windows' list separator setting to something other than a comma (such as a pipe). This will then generate a pipe (or whatever) separated file for you that you can then import. This is described here.
I don't think adding quotes helps here. The simplest workaround I can suggest is replacing the commas in the content with another character, such as a space:
replace(COLUMN,',',' ') as COLUMN
Appending a quote mark to both sides of the selected column works. You must also cast the column as NVARCHAR(MAX) to turn it into a string if the column is TEXT.
SQLCMD -S DB-SERVER -E -Q "set nocount on; set ansi_warnings off; SELECT '""' + cast ([Column1] as nvarchar(max)) + '""' As TextHere, [Column2] As NormalColumn FROM [Database].[dbo].[Table]" /o output.tmp /s "," -W
