Nightly Excel spreadsheet import into MSSQL database - sql-server

I am working on a nightly import that I need to create, but am not sure of the best route for updating/inserting into the current table. This is all done in MS SQL Server 2012, pulling the Excel file from another server. I am trying to figure out how to loop through the columns and pull out the data I need. If I could rearrange the data I would, but I am currently stuck with what I have.
In my current table tblHW I have columns such as PmpCount, NumberStages, Pmpmodel_pmp1, serialnum_Pmp1, Pmpmodel_pmp2, serialnum_pmp2, partnum_motor1, serialnumberMotor1, etc. I apologize in advance for not being able to post a real table or a picture.
Example:
| Name       | PmpCount | numstages | pmpmodel_pmp1 | stages_pmp1 | Sn_pmp1 |
| AN 91-23G  | 4        | 500       | FX2347        | 250         | 354197  |
| BR DN 895R | 5        | 521       | D2442         | 45          | 875164  |
| ALN 1-60J  | 5        | 521       | H21342        | 95          | 594126  |
(same three rows, remaining columns:)
| pmpmodel_pmp2 | stages_pmp2 | sn_pmp2 | Partnum_mtr1 | sn_mtr1 |
| FX2347        | 250         | 354198  | NULL         | NULL    |
| FX17500       | 143         | 102547  | M7544        | 4512241 |
| FX17500       | 143         | 458790  | M7544        | 4512364 |
The information I want to move into tblHW comes from the table Pull_Down. Here is the setup:
| Name       | Run_ID                | Part1 | SN1      | Attribute1_7 |
| AN 21-919G | Oct 08, 2013 / 100845 | BOD   | NA       | 3RD U        |
| FR 55-013A | Oct 17, 2013 / 100853 | Pmp   | 2EA3A022 | 78           |
| FR 55-013A | Oct 01, 2014 / 101383 | Cbl   | N/A      | REDALEAD     |
| FR 43-223J | Apr 03, 2013 / 100594 | BOD   | NA       | 3RD U        |
| VH 204     | May 17, 2014 / 101145 | BOD   |          | 3RD U        |
(same five rows, remaining columns:)
| Part2 | SN2       | Attribute2_7 | Part3 | SN3      | Attribute3_7 |
| Pmp   | 2EA3F379  | 78           | Pmp   | 2EA3N380 | 117          |
| Pmp   | 2EA3C020  | 117          | Pmp   | 2EA3Y021 | 117          |
| MLE   | J14312161 | 120          | BOD   | N/A      | 3RD U        |
| Other | NA        |              | Pmp   | 2EA2X774 | 78           |
| BOD   | NULL      |              | Pmp   | 2EA4F075 | 38           |
A bit more information: I am receiving this data in the form of five Excel spreadsheets, each with over 400 columns. The columns giving me the biggest headache are the 20 part columns that I need to place into the SQL table.
I need to somehow move each row into tblHW, but need to do something like this:
The first row, AN 21-919G, needs SN1 inserted into sn_mtr1 since it is a BOD, SN2 into SN_pmp1 since it is a PMP, and SN3 into sn_pmp2 since it is the second PMP here. I also need to get the pump count (in this case 2) and then add the attribute1_7 and attribute2_7 to put into numstages when the parts are PMP.

Situations like this are the whole reason SSIS exists: Integration Services!
First, one would question why the data you need is in Excel, and whether there is a more direct route you could exploit, such as a linked server (if the source is another RDBMS).
Based on the information you provide, we make the following assumptions:
A) We have no control on the source output and we must import the data from Excel.
B) The files always have consistent columns (probably created by an automated process).
In SSIS you can easily create a source connection for the Excel file. If the Excel file name is dynamic, you can create a script to modify the connection string for that connection before importing data. Then set the destination connection to the SQL Server. The last step is creating a Data Flow Task where you can map the source to the destination columns.
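Once the Excel data has landed in a staging table, the hardest part of this particular import - turning the 20 repeating part/serial/attribute columns into the pump and motor columns of tblHW - can be handled in T-SQL. Below is only a rough sketch of that mapping (it assumes a staging table shaped like Pull_Down with the column names from the question, shows just the first three part groups, and treats 'Pmp' and 'BOD' as the part codes): it unpivots the part columns with CROSS APPLY (VALUES ...), numbers the pumps, and re-aggregates per Name.

;WITH Parts AS (
    SELECT pd.Name,
           p.PartType,
           p.SerialNum,
           p.Attrib,
           -- number each part type within a row so we know which
           -- pump is pump 1, which is pump 2, and so on
           ROW_NUMBER() OVER (PARTITION BY pd.Name, p.PartType
                              ORDER BY p.Ordinal) AS PartSeq
    FROM Pull_Down AS pd
    CROSS APPLY (VALUES
        (1, pd.Part1, pd.SN1, pd.Attribute1_7),
        (2, pd.Part2, pd.SN2, pd.Attribute2_7),
        (3, pd.Part3, pd.SN3, pd.Attribute3_7)
        -- ...repeat for the remaining part/SN/attribute column groups
    ) AS p (Ordinal, PartType, SerialNum, Attrib)
    WHERE p.PartType IS NOT NULL
)
SELECT Name,
       SUM(CASE WHEN PartType = 'Pmp' THEN 1 ELSE 0 END)                  AS PmpCount,
       SUM(CASE WHEN PartType = 'Pmp' THEN TRY_CAST(Attrib AS int) END)   AS numstages,
       MAX(CASE WHEN PartType = 'Pmp' AND PartSeq = 1 THEN SerialNum END) AS Sn_pmp1,
       MAX(CASE WHEN PartType = 'Pmp' AND PartSeq = 2 THEN SerialNum END) AS sn_pmp2,
       MAX(CASE WHEN PartType = 'BOD' AND PartSeq = 1 THEN SerialNum END) AS sn_mtr1
FROM Parts
GROUP BY Name;

A result set like this can then drive a MERGE (or an UPDATE followed by an INSERT) against tblHW for the nightly update/insert; if a Name can appear under several Run_IDs, first restrict the staging rows to the run you care about.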

Related

Convert Excel Exponential Format back to its text in SQL Server 2008 R2

First off, I have a constraint that my solution must work on SQL Server 2008 R2.
The problem that I'm trying to solve is that Excel converts the text value '002E9' to 2.00E+09. The task is to pass the original value '002E9' as text into a CSV file.
I have been passed an SSIS solution by a developer that has the conversion as a SQL function. They have used:
SELECT FORMAT(CAST(2.00E+09 AS FLOAT),'0E0');
This is fine in 2012 and above but does not work in SQL Server 2008 R2.
Is there a simple alternative? I'm happy to abandon SQL for a SSIS script if that's the best advice.
FORMAT doesn't exist in SQL Server 2008, but its use is best avoided anyway; it's an awfully slow function.
You can use CONVERT and the style 0 though:
SELECT REPLACE(CONVERT(varchar(10),CAST(2.00E+09 AS float),0),'+','');
This won't, however, give exactly the same format, and would return '2e009'. Based on the fact that you use the value '0E0' for the FORMAT function though (which would return '2E9' for your example value), I assume this is permissible.
Based upon the post Larnu made I arrived at this (note the REPLICATE function for getting the correct format from the stripped down string):
DECLARE @INPUTS AS table (input_val varchar(100));

INSERT INTO @INPUTS (input_val)
VALUES ('00923'), ('00234'), ('00568'), ('00123'),
       ('2.00E+09'), ('2.00E+34'), ('00RT1'), ('001TL');

SELECT input_val,
       -- strip the '+', every '0' and the '.' that Excel's exponential form introduces
       REPLACE(REPLACE(REPLACE(input_val, '+', ''), '0', ''), '.', '') AS paired_value,
       -- left-pad the stripped string back out to five characters with zeros
       REPLICATE('0', 5 - LEN(REPLACE(REPLACE(REPLACE(input_val, '+', ''), '0', ''), '.', '')))
           + REPLACE(REPLACE(REPLACE(input_val, '+', ''), '0', ''), '.', '') + ';' AS Converted_value
FROM @INPUTS;
The results:
+-----------+--------------+-----------------+
| input_val | paired_value | Converted_value |
+-----------+--------------+-----------------+
| 00923 | 923 | 00923; |
| 00234 | 234 | 00234; |
| 00568 | 568 | 00568; |
| 00123 | 123 | 00123; |
| 2.00E+09 | 2E9 | 002E9; |
| 2.00E+34 | 2E34 | 02E34; |
| 00RT1 | RT1 | 00RT1; |
| 001TL | 1TL | 001TL; |
+-----------+--------------+-----------------+
Confirms the approach.
Thanks Larnu.

SSRS System.InvalidCastException - at OracleDataReader.GetDecimal(Int32 i)

I have an SSRS report that was pointed to SQL Server views, which pointed to Oracle tables. I edited the SSRS report dataset so as to query directly from the Oracle db. It seemed like a very simple change, until I got this error message:
System.InvalidCastException: Specified cast is not valid.
With the following details: it references the field ‘UOM_QTY’, and it says the error occurred at
Oracle.ManagedDataAccess.Client.OracleDataReader.GetDecimal(Int32 i).
The SELECT statement on that field is pretty simple:
, (DELV_RECEIPT.INV_LBS/ITEM_UOM_XREF.CONV_TO_LBS) AS UOM_QTY
Does anyone know what would cause this message, and how to resolve the error? My objective is to use the Oracle data source instead of SQL Server.
Error 1
Severity Code Description Project File Line Suppression State
Warning [rsErrorReadingDataSetField] The dataset ‘dsIngredientCosts’ contains a definition for the Field ‘UOM_QTY’. The data extension returned an error during reading the field. System.InvalidCastException: Specified cast is not valid.
at Oracle.ManagedDataAccess.Client.OracleDataReader.GetDecimal(Int32 i)
at Oracle.ManagedDataAccess.Client.OracleDataReader.GetValue(Int32 i)
at Microsoft.ReportingServices.DataExtensions.DataReaderWrapper.GetValue(Int32 fieldIndex)
at Microsoft.ReportingServices.DataExtensions.MappingDataReader.GetFieldValue(Int32 aliasIndex) C:\Users\bl0040\Documents\Visual Studio 2015\Projects\SSRS\Project_ssrs2016\Subscription Reports\Feed Ingredient Weekly Price Avg.rdl 0
Error 2
Severity Code Description Project File Line Suppression State
Warning [rsMissingFieldInDataSet] The dataset ‘dsIngredientCosts’ contains a definition for the Field ‘UOM_QTY’. This field is missing from the returned result set from the data source. C:\Users\bl0040\Documents\Visual Studio 2015\Projects\SSRS\Project_ssrs2016\Subscription Reports\Feed Ingredient Weekly Price Avg.rdl 0
Source Tables:
+------------+---------------+-------------+---------------+-----------+
| Source | TABLE_NAME | COLUMN_NAME | DataSize | COLUMN_ID |
+------------+---------------+-------------+---------------+-----------+
| ORACLE | DELV_RECEIPT | INV_LBS | NUMBER (7,0) | 66 |
+------------+---------------+-------------+---------------+-----------+
| ORACLE | ITEM_UOM_XREF | CONV_TO_LBS | NUMBER (9,4) | 3 |
+------------+---------------+-------------+---------------+-----------+
| SQL SERVER | DELV_RECEIPT | INV_LBS | numeric (7,0) | 66 |
+------------+---------------+-------------+---------------+-----------+
| SQL SERVER | ITEM_UOM_XREF | CONV_TO_LBS | numeric (9,4) | 3 |
+------------+---------------+-------------+---------------+-----------+
The error went away after adding a datatype conversion statement to the data selection.
, CAST(DELV_RECEIPT.INV_LBS/ITEM_UOM_XREF.CONV_TO_LBS AS NUMERIC(9,4)) AS UOM_QTY
Can anyone provide some information on why the original query would be a problem and why the CAST would fix these errors? I tried casting the results because someone on a Code Project forum said:
"why don't you use typed datasets? You get such headaches just because of not coding in a type-safe manner. You have a dataset designer in the IDE which makes life better, safer, easier and you don't use it. I really can't understand."
Here is an approach to fix this error with an extension method instead of modifying the SQL query.
public static Decimal MyGetDecimal(this OracleDataReader reader, int i)
{
    try
    {
        return reader.GetDecimal(i);
    }
    catch (System.InvalidCastException)
    {
        // An Oracle NUMBER can carry more significant digits than .NET's
        // System.Decimal, which makes GetDecimal throw. Re-read the value as an
        // OracleDecimal, reduce its precision so it fits into a .NET decimal
        // (possibly losing some precision), and return that instead.
        Oracle.ManagedDataAccess.Types.OracleDecimal hlp = reader.GetOracleDecimal(i);
        Oracle.ManagedDataAccess.Types.OracleDecimal hlp2 = Oracle.ManagedDataAccess.Types.OracleDecimal.SetPrecision(hlp, 27);
        return hlp2.Value;
    }
}
Thank you for this, but what happens if your query looks like:
SELECT x.* FROM x
and .GetDecimal appears nowhere?
Any suggestions in that case? I have created a function in Oracle itself that rounds all values in a result set to avoid this for basic SELECT statements, but that seems wrong for loading updateable datasets...
Obviously this is an old-school approach to getting data.

Sqoop & Hadoop - How to join/merge old data and new data imported by Sqoop in lastmodified mode?

Background:
I have a table with the following schema on a SQL Server. Updates to existing rows are possible, and new rows are also added to this table.
unique_id | user_id | last_login_date | count
123-111 | 111 | 2016-06-18 19:07:00.0 | 180
124-100 | 100 | 2016-06-02 10:27:00.0 | 50
I am using Sqoop to add incremental updates in lastmodified mode. My --check-column parameter is the last_login_date column. In my first run, I got the above two records into Hadoop; let's call this the current data. I noted that the last value (the max value of the check column from this first import) is 2016-06-18 19:07:00.0.
Assuming there is a change on the SQL Server side, I now have the following:
unique_id | user_id | last_login_date | count
123-111 | 111 | 2016-06-25 20:10:00.0 | 200
124-100 | 100 | 2016-06-02 10:27:00.0 | 50
125-500 | 500 | 2016-06-28 19:54:00.0 | 1
I have the row 123-111 updated with a more recent last_login_date value and the count column has also been updated. I also have a new row 125-500 added.
On my second run, Sqoop looks at all rows with a last_login_date greater than my known last value from the previous import, 2016-06-18 19:07:00.0.
This gives me only the changed data, i.e. the 123-111 and 125-500 records. Let's call this the new data.
Question
How do I do a merge join in Hadoop/Hive using the current data and the new data so that I end up with the updated version of 123-111, 124-100, and the newly added 125-500?
Changed-data load using Sqoop is a two-phase process.
1st phase - load the changed data into some temp (staging) table using the sqoop import utility.
2nd phase - merge the changed data with the old data using the sqoop-merge utility.
If the table is small (say a few million records) then use a full load with sqoop import.
Sometimes it's possible to load only the latest partition. In that case, use the sqoop import utility to load the partition with a custom query; then, instead of a merge, simply INSERT OVERWRITE the loaded partition into the target table, or copy the files. This will work faster than sqoop-merge.
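If you prefer to do the reconciliation in Hive rather than with sqoop-merge, a query along these lines keeps the newest version of each row. This is only a sketch: the table names hw_current (the data already in Hadoop), hw_new (the incremental import) and hw_merged are illustrative, and both inputs are assumed to have the schema from the question.

-- union the existing data with the incremental import,
-- then keep only the most recent row per unique_id
INSERT OVERWRITE TABLE hw_merged
SELECT unique_id, user_id, last_login_date, `count`
FROM (
    SELECT unique_id, user_id, last_login_date, `count`,
           ROW_NUMBER() OVER (PARTITION BY unique_id
                              ORDER BY last_login_date DESC) AS rn
    FROM (
        SELECT unique_id, user_id, last_login_date, `count` FROM hw_current
        UNION ALL
        SELECT unique_id, user_id, last_login_date, `count` FROM hw_new
    ) all_rows
) ranked
WHERE rn = 1;

After the merge you can point hw_current at the merged data (or overwrite it from hw_merged) before the next incremental import.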
You can change the existing Sqoop query (by specifying a new custom query) to get ALL the data from the source table instead of getting only the changed data. Refer to using_sqoop_to_move_data_into_hive. This would be the simplest way to accomplish this, i.e. doing a full data refresh instead of applying deltas.

Execute stored procedure with DbSlim with FitNesse (Selenium, Xebium)

https://github.com/markfink/dbslim
I'd like to execute stored procedures with DbSlim using FitNesse (Selenium, Xebium).
What I tried to do is:
!define dbQuerySelectCustomerbalance (
execute dbo.uspLogError
)
| script | Db Slim Select Query | !-${dbQuerySelectCustomerbalance}-! |
This gives a green indicator; however, SQL Server Profiler shows no actions/logging...
So what I'd like to know is: is it possible to use DbSlim to execute stored procedures, and if so, what is the correct way to do it?
By the way, I have the connection to the database on one page, and on the query page I included the connection to the database. (Is that OK?)
Take out the !- ... -!. It is used to escape wikified words. But in this case you want it to be translated to the actual query.
!define dbQuerySelectCustomerbalance ( execute dbo.uspLogError )
| script | Db Slim Select Query | ${dbQuerySelectCustomerbalance} |
| show | data by column index | 1 | and row index | 1 |
You can add the last line, which outputs the first column of the first row, for testing purposes if your SP returns a result (or you can create one simple SP just to test this out, as sketched below).
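For example, a minimal test procedure (the name dbo.uspDbSlimTest is purely illustrative) that returns one row the show line can display:

CREATE PROCEDURE dbo.uspDbSlimTest
AS
BEGIN
    SET NOCOUNT ON;
    -- return a single row so DbSlim has something to show
    SELECT 'hello from dbslim' AS col1;
END

You would then point a define at it in the same way as above, e.g. !define dbQueryTest ( execute dbo.uspDbSlimTest ).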
Specifying the connection anywhere before this block will be fine, be it on the same page or in a SetUp/SuiteSetUp/normal page included or executed before it.

Update Access database (multiple rows)

I am a total newbie to Access; I used Excel to handle my needs for a while.
But by now Excel has become too slow to handle such a big set of data, so I decided to migrate to Access.
Here is my problem.
My columns are:
Number | Link | Name | Status
1899 | htto://example.com/code1 | code1 | Done
2 | htto://example.com/code23455 | code23455 | Done
3 | htto://example.com/code2343 | code2343 | Done
13500 | htto://example.com/code234cv | code234cv | Deleted
220 | htto://example.com/code234cv | code234cv | Null
400 | htto://example.com/code234cv | code234cv | Null
So I want a way to update the Status of my rows according to a list of numbers.
For example, I want to update the Status column for multiple numbers to become Done.
Simply, I want to update rows whose Status is Null to become Done according to this number list:
13544
17
13546
12
13548
13549
16000
13551
13552
13553
13554
13555
12500
13557
13558
13559
13560
30
13562
13563
Something like this.
I tried an update query, but I don't know how to use criteria to solve this problem.
In Excel I did this by conditional formatting of duplicates (with the number list I wanted to update), then sorting by highlighted color, then fill-copying the status with the value.
I know that Access is different, but I hope there is a way to do this task as Excel did.
Thanks in advance.
From my understanding, you can try:
Update TblA
Set TblA.Status="Done"
where Number in (13544,17,13546,....)
Alternatively, an easier method is to pull the numbers in the IN clause into their own table and use it like this:
Update TblA
Set TblA.Status="Done"
where Number in (select NumCol from NumTable)
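If you go that route, using the NumTable and NumCol from the query above to hold the numbers to update, an equivalent join form of the same update (just a sketch) is:

UPDATE TblA
INNER JOIN NumTable ON TblA.Number = NumTable.NumCol
SET TblA.Status = "Done";

Access accepts joined updates like this in SQL view, and it saves typing the whole list into an IN (...) clause.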
or this solution may help you Here
