SQL Server - How to insert into Varbinary(Max) column?

I have a table that looks like the one below. I would rather not create a C# application just to insert rows into this table, if I can avoid it, because of the varbinary column. My intent is to store a Crystal Reports .RPT file in this column. Is there a T-SQL statement I can execute to insert/update rows into this table, including an .RPT file?
CREATE TABLE [Report].[MesReport](
[MesReportID] [int] IDENTITY(1,1) NOT NULL,
[ParentID] [int] NOT NULL,
[ReportTitle] [nvarchar](80) NOT NULL,
[ReportName] [nvarchar](80) NOT NULL,
[DatabaseServer] [nvarchar](80) NOT NULL,
[DatabaseName] [nvarchar](50) NOT NULL,
[Login] [nvarchar](80) NOT NULL,
[ReportFile] [varbinary](max) NULL
)

You can read the file into a variable like this:
DECLARE @VB varbinary(max);
SELECT @VB = BulkColumn
FROM OPENROWSET(BULK N'C:\YourReport.rpt', SINGLE_BLOB) AS Document;
which you can then use in an INSERT statement.
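A minimal sketch of such an INSERT (the table and file path come from the question; the literal values for the other columns are placeholders):
INSERT INTO [Report].[MesReport]
    (ParentID, ReportTitle, ReportName, DatabaseServer, DatabaseName, [Login], ReportFile)
SELECT
    1, N'My Report', N'MyReport.rpt', N'MyServer', N'MyDb', N'MyLogin',
    BulkColumn   -- the .RPT file as varbinary(max)
FROM OPENROWSET(BULK N'C:\YourReport.rpt', SINGLE_BLOB) AS Document;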

Related

SSMS :: Copy with Headers and paste to Excel results in more rows in destination

Peculiar issue: I have a table of 140,588 rows and 246,313 MB that looks like this:
CREATE TABLE [dbo].[DMA_assessment](
[InstanceName] [varchar](128) NULL,
[DatabaseName] [varchar](128) NULL,
[SizeMB] [varchar](30) NULL,
[ImpactedObjectName] [varchar](128) NULL,
[ImpactDetail] [varchar](max) NULL,
[AssessmentName] [varchar](128) NULL,
[AssessmentNumber] [int] NULL,
[SourceCompatibilityLevel] [varchar](15) NULL,
[TargetCompatibilityLevel] [varchar](15) NULL,
[TargetSQLServerEdition] [varchar](15) NULL,
[Category] [varchar](50) NULL,
[Severity] [varchar](15) NULL,
[ChangeCategory] [varchar](30) NULL,
[Title] [varchar](500) NULL,
[Impact] [varchar](max) NULL,
[Recommendation] [varchar](max) NULL,
[MoreInfo] [varchar](max) NULL,
[ObjectType] [varchar](40) NULL,
[DBOwnerKey] [int] NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
This table was created by the DMA tool, and I want to use the output to visualize the results in Power BI. Querying such a big table directly is out of the question, so I need to export it to a .csv or .xlsx file.
If I do the notorious right-click and "Copy with Headers" and then paste into an Excel file, the result has 141,186 rows (minus 1, because the first row holds the column names).
So here we are:
141,186 - 140,588 = 598
Where do those 598 extra rows come from?
I tried multiple times; the result is still the same.
I suspect your varchar(max) columns cause the problem: the size of a text cell in Excel is limited to 32,767 characters. Here is an example:
DECLARE @x VARCHAR(MAX) = '***************************************************************************';
-- each SELECT doubles the string; after 12 doublings it is far longer than Excel's 32,767-character cell limit
SELECT @x=CONCAT(@x,@x)
SELECT @x=CONCAT(@x,@x)
SELECT @x=CONCAT(@x,@x)
SELECT @x=CONCAT(@x,@x)
SELECT @x=CONCAT(@x,@x)
SELECT @x=CONCAT(@x,@x)
SELECT @x=CONCAT(@x,@x)
SELECT @x=CONCAT(@x,@x)
SELECT @x=CONCAT(@x,@x)
SELECT @x=CONCAT(@x,@x)
SELECT @x=CONCAT(@x,@x)
SELECT @x=CONCAT(@x,@x)
SELECT @x AS Test, 'Test' AS Test2
Copy the result into an Excel file and you will get two rows, where the second column (Test2) is only populated in the second row.
You can try using a .dqy query in Excel.
I develop the SSMSBoost add-in, and we have covered this in our article (you can create a .dqy query directly in Excel, without our add-in):
https://www.ssmsboost.com/Features/ssms-add-in-run-query-in-excel
There is also a video that explains 3 different ways of exporting the data into Excel without data loss (data type information is preserved):
(copy-paste in native Excel format, XML export, .dqy query)
https://youtu.be/waDCukeXeLU

Problems using Bulk insert

I have a project in which I need to do a massive data load from a CSV file generated by an application. The structure of the CSV file is as follows:
indice;hora;puerta;num;nombre;departamento;departamento;id usuario;estado;tarjeta
"0001";"05:51:56";"Parqueadero";"0046";"Rafael Iglesia";"ADMINISTRATIVOS";"Dep2_00";"9229926977";"(M11)Acceso Normal";"04756:22242"
"0002";"05:53:19";"Parqueadero";"0036";"Orlinda Torres";"ADMINISTRATIVOS";"Dep2_00";"4326087729";"(M11)Acceso Normal";"04246:24075"
As you can see, the first row doesn't use quoted identifiers (""), whereas the following rows do. So I tried to do a bulk insert (using SQL Server 2017) with FIRSTROW=2, and the result is:
(0 rows affected)
Conversely, if I run it without FIRSTROW, SQL Server obviously can't process the file. Is there a way to process this data while skipping the first row?
This is the table I'm inserting into:
CREATE TABLE [dbo].[TEMPORAL_IV] (
[id_temp_dia] int NULL,
[hora_temp] [nvarchar](20) NULL,
[puerta_temp] [nvarchar](20) NULL,
[id_tar_temp] int NULL,
[usuario_temp] [nvarchar](100) NULL,
[desc_dep_temp] [nvarchar](30) NULL,
[id_dep_temp] [nvarchar](20) NULL,
[doc_usu_temp] [nvarchar](20) NULL,
[mensaje_temp] [nvarchar](200) NULL,
[num_tar_us_temp] [nvarchar](15) NULL
)
And here's the bulk insert command:
BULK INSERT TEMPORAL_IV
FROM '\\argos\informatica$\temp\datos.csv'
WITH (fieldterminator=';',FIRSTROW=2);
Is there an alternative?
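One option to try (a sketch, not tested against this exact file): SQL Server 2017 supports FORMAT = 'CSV' and FIELDQUOTE in BULK INSERT, which strips the double quotes from the data rows while FIRSTROW = 2 skips the unquoted header:
BULK INSERT TEMPORAL_IV
FROM '\\argos\informatica$\temp\datos.csv'
WITH (
    FORMAT = 'CSV',          -- parse the file as quoted CSV (SQL Server 2017+)
    FIELDQUOTE = '"',        -- remove the double quotes around each value
    FIELDTERMINATOR = ';',
    ROWTERMINATOR = '0x0a',  -- adjust if the file uses \r\n line endings
    FIRSTROW = 2
);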

Inserting into a joined view SQL Server

This is a question more about design than about solving a problem.
I created three tables as follows:
CREATE TABLE [CapInvUser](
[UserId] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](150) NOT NULL,
[AreaId] [int] NULL,
[Account] [varchar](150) NULL,
[mail] [varchar](150) NULL,
[UserLevelId] [int] NOT NULL
)
CREATE TABLE [CapInvUserLevel](
[UserLevelId] [int] IDENTITY(1,1) NOT NULL,
[Level] [varchar](50) NOT NULL
)
CREATE TABLE [CapInvUserRegistry](
[UserRegistryId] [int] IDENTITY(1,1) NOT NULL,
[UserLevelId] int NOT NULL,
[DateRegistry] DATE NOT NULL,
[RegistryStatus] VARCHAR(50) NOT NULL
)
There is a view that shows all the data from the first table, with AreaId resolved to that table's varchar identifier, UserLevel resolved to that table's varchar value, and a join to the registry status from the last table.
Right now when I want to register a new user, I insert into all three tables using separate queries, but I feel like I should have a way to insert into all of them at the same time.
I thought about using a stored procedure to insert, but I still don't know if that would be appropriate.
My questions are:
"Is there a more appropriate way of doing this?"
"Is there a way to create a view that will let me insert over it (without passing the int values manually)?"
-- These are just representations of the tables, not the real ones.
-- I'm still learning how to work with SQL Server properly.
Thank you for your answers and/or guidance.
The most common way of doing this, in my experience, is to write a stored procedure that does all three inserts in the necessary order to create the FK relationships.
This would be my unequivocal recommendation.
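A minimal sketch of such a procedure, assuming the columns above (the procedure name and the GETDATE()/status parameters are illustrative, not from the question):
CREATE PROCEDURE dbo.usp_RegisterUser
    @Name           varchar(150),
    @AreaId         int,
    @Account        varchar(150),
    @Mail           varchar(150),
    @Level          varchar(50),
    @RegistryStatus varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- parent row first, capturing the identity so it never has to be passed in manually
    INSERT INTO CapInvUserLevel ([Level]) VALUES (@Level);
    DECLARE @UserLevelId int = SCOPE_IDENTITY();

    INSERT INTO CapInvUser ([Name], AreaId, Account, mail, UserLevelId)
    VALUES (@Name, @AreaId, @Account, @Mail, @UserLevelId);

    INSERT INTO CapInvUserRegistry (UserLevelId, DateRegistry, RegistryStatus)
    VALUES (@UserLevelId, GETDATE(), @RegistryStatus);

    COMMIT TRANSACTION;
END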

sql server update table

The table:
CREATE TABLE [dbo].[User] (
[UserName] NVARCHAR (100) NOT NULL,
[Pasword] NVARCHAR (100) NOT NULL,
[Name] TEXT NOT NULL,
[LastName] TEXT NOT NULL,
[Location] TEXT NOT NULL,
[profesion] TEXT NOT NULL,
[Email] NVARCHAR (50) NOT NULL,
[Gender] TEXT NOT NULL,
PRIMARY KEY CLUSTERED ([UserName] ASC)
);
I want to update it to:
CREATE TABLE [dbo].[User] (
[UserName] NVARCHAR (100) NOT NULL,
[Pasword] NVARCHAR (100) NOT NULL,
[Name] TEXT NOT NULL,
[LastName] TEXT NOT NULL,
[Location] TEXT NOT NULL,
[profesion] TEXT NOT NULL,
[Email] NVARCHAR (50) NOT NULL,
[Gender] TEXT NOT NULL,
[moneyinmillions] INT NOT NULL,
PRIMARY KEY CLUSTERED ([UserName] ASC)
);
The problem: I get "an error occurred while the batch was being executed".
Thanks for the help.
In the interest of answering your question, here is the code you would use to add the moneyinmillions column to the User table:
ALTER TABLE [User]
ADD [moneyinmillions] INT NOT NULL;
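One caveat (an assumption about the batch error, not something the question confirms): if [User] already contains rows, adding a NOT NULL column fails unless a default is supplied as well, for example:
ALTER TABLE [dbo].[User]
ADD [moneyinmillions] INT NOT NULL
    CONSTRAINT [DF_User_moneyinmillions] DEFAULT 0;  -- constraint name is illustrative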
Ways to Insert a column in your existing Table
Use the ALTER TABLE Statement
Do the following:
ALTER TABLE [dbo].[User]
ADD [moneyinmillions] INT NOT NULL
Using the Table Designer
In Object Explorer, right-click the table to which you want to add columns (here, the User table) and choose Design.
Click in the first blank cell in the Column Name column and type moneyinmillions.
Press the TAB key to go to the Data Type cell and select a data type from the dropdown.
When you are finished adding columns, choose Save table name (User) from the File menu.
Using DROP TABLE and Re-Creating the Table
DROP TABLE [dbo].[User]
and then execute the statement below:
CREATE TABLE [dbo].[User] (
[UserName] NVARCHAR (100) NOT NULL,
[Pasword] NVARCHAR (100) NOT NULL,
[Name] TEXT NOT NULL,
[LastName] TEXT NOT NULL,
[Location] TEXT NOT NULL,
[profesion] TEXT NOT NULL,
[Email] NVARCHAR (50) NOT NULL,
[Gender] TEXT NOT NULL,
[moneyinmillions] INT NOT NULL,
PRIMARY KEY CLUSTERED ([UserName] ASC));
(Note: the DROP TABLE statement removes the table definition and all the data, indexes, triggers, constraints, and permission specifications for that table. So, if some fields/columns already contain data, do not use DROP TABLE, because you'll lose all the data.)
Did you know that you can right-click the table and open the Design view to add or remove columns?

Row update if row exists. Insert it if row doesn't exist

I'm developing a solution for SQL Server 2012 Express and Developer editions.
I will receive an XML document in a stored procedure. In the stored procedure I will parse the XML and insert its data into a table.
My problem is that this XML could contain data that already exists in the table, and I need to update the table with the new data.
I don't want to check whether each row of the XML already exists in the table.
I think I can use IGNORE_DUP_KEY, but I'm not sure.
How can I update or insert the new data without checking it row by row?
This is the table where I want to insert (or update) the new data:
CREATE TABLE [dbo].[CODES]
(
[ID_CODE] [bigint] IDENTITY(1,1) NOT NULL,
[CODE_LEVEL] [tinyint] NOT NULL,
[CODE] [nvarchar](20) NOT NULL,
[COMMISIONING_FLAG] [tinyint] NOT NULL,
[IS_TRANSMITTED] [bit] NOT NULL,
[TIMESPAN] [datetime] NULL,
[USERNAME] [nvarchar](50) NULL,
[SOURCE] [nvarchar](50) NULL,
[REASON] [nvarchar](200) NULL,
CONSTRAINT [PK_CODES] PRIMARY KEY CLUSTERED
(
[CODE_LEVEL] ASC,
[CODE] ASC
)
)
The "IGNORE_DUP_KEY" parameter ,is ignore inserting new row, if he is already exists, but it is not dealing with update in case it exists.
the solution to your request is by MERGE or DML operation (INSERT/UPDATE/DELETE) .
BTW,
The parameter "IGNORE_DUP_KEY" is covering existsnce for the index key only (index column).
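A minimal MERGE sketch, assuming the parsed XML rows have already been loaded into a table variable @Rows keyed the same way (the variable and the trimmed column list are illustrative):
DECLARE @Rows TABLE (CODE_LEVEL tinyint, CODE nvarchar(20), COMMISIONING_FLAG tinyint, IS_TRANSMITTED bit);

MERGE dbo.CODES AS target
USING @Rows AS source
    ON  target.CODE_LEVEL = source.CODE_LEVEL
    AND target.CODE       = source.CODE
WHEN MATCHED THEN
    UPDATE SET COMMISIONING_FLAG = source.COMMISIONING_FLAG,
               IS_TRANSMITTED    = source.IS_TRANSMITTED
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CODE_LEVEL, CODE, COMMISIONING_FLAG, IS_TRANSMITTED)
    VALUES (source.CODE_LEVEL, source.CODE, source.COMMISIONING_FLAG, source.IS_TRANSMITTED);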
