I'm an SSIS noob (less than a week of experience), so please bear with me.
I am running a stored procedure to export its result to an Excel file.
From my research I have found that SSIS's Excel Destination does not play nicely with .xlsx files (it can't be .xls since my result has more than the ~65K row limit), but I found that I can use an OLE DB Destination to write to an Excel file.
The issue I am seeing is an error message that occurs on run that says:
[OLE DB Destination [212]] Error:
An error occurred while setting up a binding for the "Main Job Notes" column.
The binding status was "DT_NTEXT".
The fields that are erroring come in as text streams (DT_TEXT). Since I was also getting an error about not being able to convert between Unicode and non-Unicode, I used a Data Conversion transformation to convert them to Unicode text streams (DT_NTEXT).
If it helps at all, my setup is as follows:
Any help would be amazing. Thank you.
You should consider doing this with a script component. Keep in mind that inside a data flow task you cannot debug directly, but you can use a MessageBox to check intermediate results. Also keep in mind that Excel will always try to guess your column data types automatically: for example, if you import a file from Excel and one of its columns starts with numbers but row 3455 contains a character value, Excel will type the column as a number and you will lose the character value; it will show up as null in your database.
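For example, a quick way to inspect values while the package runs is a MessageBox inside ProcessInputRow. This is a minimal sketch, assuming the default input name Input 0 and a DT_WSTR column named SomeColumn (both placeholders); it needs a reference to System.Windows.Forms and should be removed before deployment:
using System.Windows.Forms;
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // Pop up the current value so you can see exactly what the component receives.
    MessageBox.Show(Row.SomeColumn);
}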
I will give you some code to construct the file you need programmatically; maybe it can give you an idea. (This example reads a file as one column, splits it as if you had chosen fixed-width values in Excel, and writes the output to a CSV file.)
/* Microsoft SQL Server Integration Services Script Component
* Write scripts using Microsoft Visual C# 2008.
* ScriptMain is the entry point class of the script.*/
using System;
using System.IO;
using System.Linq;
using System.Text;
[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
public class ScriptMain : UserComponent
{
#region Variables
private string _jumexDailyData;
private string[] _jumexValues;
private string[] _jumexWidthValues;
#endregion
/// <summary>
/// Default constructor
/// </summary>
public ScriptMain()
{
this._jumexValues = new string[22];
}
public override void PreExecute()
{
base.PreExecute();
/*
Add your code here for preprocessing or remove if not needed
*/
}
public override void PostExecute()
{
base.PostExecute();
/*
Add your code here for postprocessing or remove if not needed
You can set read/write variables here, for example:
Variables.MyIntVar = 100
*/
}
public override void JumexDailyData_ProcessInput(JumexDailyDataBuffer Buffer)
{
while (Buffer.NextRow())
JumexDailyData_ProcessInputRow(Buffer);
}
public override void JumexDailyData_ProcessInputRow(JumexDailyDataBuffer Row)
{
this._jumexDailyData = Row.JumexDailyData;
if (this._jumexDailyData != null)
{
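// NOTE (assumption): the JUMEXLOADSALESATTACHMENTFILEWIDTHVALUES package variable is expected to hold
// comma-separated column widths, e.g. "10,12,8", which are parsed below to slice each fixed-width line.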
this._jumexWidthValues = this.Variables.JUMEXLOADSALESATTACHMENTFILEWIDTHVALUES.Split(new string[] { "," }, StringSplitOptions.RemoveEmptyEntries);
if (this._jumexWidthValues != null && this._jumexWidthValues.Count() > 0)
for (int i = 0; i < this._jumexWidthValues.Count(); i++)
{
this._jumexValues[i] = this._jumexDailyData.Substring(0, int.Parse(this._jumexWidthValues[i])).Trim();
this._jumexDailyData = this._jumexDailyData.Substring(int.Parse(this._jumexWidthValues[i]), (this._jumexDailyData.Length - int.Parse(this._jumexWidthValues[i])));
}
if (string.IsNullOrEmpty(this._jumexValues[3].Trim()) == false &&
string.IsNullOrEmpty(this._jumexValues[17].Trim()) == false &&
!this._jumexValues[3].Contains("---") &&
!this._jumexValues[17].Contains("---") &&
!this._jumexValues[3].Trim().ToUpper().Contains("FACTURA") &&
!this._jumexValues[17].Trim().ToUpper().Contains("PEDIDO"))
using (StreamWriter streamWriter = new StreamWriter(this.Variables.JUMEXFULLQUALIFIEDLOADSALESATTACHMENTFILENAME.Replace(".TXT", ".CSV"), true, Encoding.Default))
{
streamWriter.WriteLine(string.Join("|", this._jumexValues));
}
}
}
}
I have to load 2 flat files into a SQL Server table. The flat files are dropped into a folder that also contains thousands of other files. If it were the same file with different dates I would have used a Foreach Loop and been done with it, but here is the scenario.
File names I want to load are as follows:
Non_Payment_Stat_Data1_12_2017.txt
Payment_Stat_Data1_12_2017.txt
Files are loaded daily
I need to load just the above file types into the table (i.e., pick up the day's load)
There are many other files, some of which are named Payment_Stat_Data or Non_Payment_Stat_Data without the date part at the end. We don't want to load those into the table.
I tried the C# Script Task code below, but it gave me the latest file rather than the ones we wanted to load.
using System;
using System.Data;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;
using System.IO;
namespace ST_2650e9fc7f2347b2826459c2dce1b5be.csproj
{
[System.AddIn.AddIn("ScriptMain", Version = "1.0", Publisher = "", Description = "")]
public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
{
#region VSTA generated code
enum ScriptResults
{
Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
};
#endregion
public void Main()
{
// TODO: Add your code here
var directory = new DirectoryInfo(Dts.Variables["User::VarFolderPath"].Value.ToString());
FileInfo[] files = directory.GetFiles();
DateTime lastModified = DateTime.MinValue;
foreach (FileInfo file in files)
{
if (file.LastWriteTime > lastModified)
{
lastModified = file.LastWriteTime;
Dts.Variables["User::VarFileName"].Value = file.ToString();
}
}
MessageBox.Show(Dts.Variables["User::VarFileName"].Value.ToString());
Dts.TaskResult = (int)ScriptResults.Success;
}
}
}
Source: http://www.techbrothersit.com/2013/12/ssis-how-to-get-most-recent-file-from.html
The code works, but it returns whatever flat file happens to be the latest. I only want to pull the Non_Payment_Stat_Data1_12_2017.txt and Payment_Stat_Data1_12_2017.txt files, and the date part changes every day.
If the file is named in a predictable fashion based on today's date (which you seem to be saying it is), then just use an expression for the file name(s) in the Flat File Connection Manager.
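If you want to stay with the Script Task instead, another option is to build the two expected file names for today's date and pick only those. This is a minimal sketch of what the body of Main() could look like, assuming the date part is formatted month_day_year with no zero padding (as in Payment_Stat_Data1_12_2017.txt) and reusing the System and System.IO namespaces already referenced in the script above:
// Build today's expected file names instead of taking the newest file in the folder.
string folder = Dts.Variables["User::VarFolderPath"].Value.ToString();
string datePart = DateTime.Now.ToString("M_d_yyyy");   // e.g. "1_12_2017"
string[] wantedFiles =
{
    "Payment_Stat_Data" + datePart + ".txt",
    "Non_Payment_Stat_Data" + datePart + ".txt"
};
foreach (string fileName in wantedFiles)
{
    if (File.Exists(Path.Combine(folder, fileName)))
    {
        // Hand the matching file to the rest of the package, e.g. via a package variable.
        Dts.Variables["User::VarFileName"].Value = fileName;
    }
}
Dts.TaskResult = (int)ScriptResults.Success;
Note that this only sets one variable; if both files must be processed in the same run, you would loop over the two names in a Foreach Loop container instead.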
I used mongoexport to export the collection to a CSV file along with the fields. However, when I tried to import the .csv into SQL Server using SSIS, I got errors and the data in the preview section (before executing the package) was wrong. Can anyone please guide me on how to export the data so it can be easily imported into SQL Server? I am open to minor tuning like adding an id column or changing data types.
So it looks like you are getting a JSON-formatted object for each row in the file. It may be different from that, and if so, how you iterate through each record might change a little. But here is something to get you started:
1 - There is no out-of-the-box JSON parser in .NET, but this seems to be a popular utility: http://www.newtonsoft.com/json
2 - If you download the parser above, you'll have to go through a little pain to get it into a usable state with SSIS:
Go to the source folder and open the solution for net40
Sign the assembly
Comment out the lines that cause build errors:
//[assembly: InternalsVisibleTo("Newtonsoft.Json.Schema")]
//[assembly: InternalsVisibleTo("Newtonsoft.Json.Tests")]
Install the assembly to the GAC
3 - Once all that is out of the way, add a file connection manager to your package and point it to the MongoDB export file
4 - Add a dataflow and then add a script component source to the dataflow
5 - In the script component, configure the connection manager that you created in step 3; I gave mine the friendly name "MongoDbOutput"
6 - In the Inputs and Outputs section, go to Output0 and add a column for each field in the JSON object, setting the data types to string, as they will be int by default
7 - Open the script and add a reference to Newtonsoft.Json and System.IO
8 - The script below shows how to get the file path from the connection manager, read the file with a StreamReader one line at a time, and then parse the JSON object on each line. One output row is added for every address, and the name and ssn are repeated on each row.
Note also that I added a Person and an address class. Json.NET is pretty cool: it will take that JSON and push it into the object, provided all the fields match up. Good luck!
using System.IO;
using Newtonsoft.Json;
public override void CreateNewOutputRows()
{
object MongoDbPath = Connections.MongoDbOutput.AcquireConnection(null);
string filePath = MongoDbPath.ToString();
using (StreamReader fileContents = new StreamReader(filePath))
{
while (fileContents.Peek() >= 0)
{
var contents = fileContents.ReadLine();
Person person = JsonConvert.DeserializeObject<Person>(contents);
foreach (address address in person.addresses)
{
Output0Buffer.AddRow();
Output0Buffer.name = person.name;
Output0Buffer.ssn = person.ssn;
Output0Buffer.City = address.city;
Output0Buffer.Street = address.street;
Output0Buffer.country = address.cc;
}
}
}
}
public class Person
{
public string name { get; set; }
public string ssn { get; set; }
public address[] addresses
{ get; set; }
}
public class address
{
public string street { get; set; }
public string city { get; set; }
public string cc { get; set; }
}
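As a quick sanity check outside SSIS, you can verify that a line from your export deserializes into these classes. This is a minimal sketch; the JSON string is only an illustration of the shape the Person/address classes above expect, not your actual data:
using System;
using Newtonsoft.Json;

class DeserializeDemo
{
    static void Main()
    {
        // Illustrative line only - the field names must match the Person/address classes above.
        string line = "{\"name\":\"Jane Doe\",\"ssn\":\"123-45-6789\"," +
                      "\"addresses\":[{\"street\":\"1 Main St\",\"city\":\"Springfield\",\"cc\":\"US\"}]}";
        Person person = JsonConvert.DeserializeObject<Person>(line);
        foreach (address a in person.addresses)
            Console.WriteLine(person.name + " | " + a.street + " | " + a.city + " | " + a.cc);
    }
}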
I have a WPF application that relies heavily on manipulating documents; I want to know if there is a library that works independently of Microsoft Office Word and provides the following features:
Reading Word documents (*.doc or RTF will be sufficient, *.docx would be perfect)
Enables me to edit the document from my WPF app
Enables me to export the document again to other formats (Word, Excel, PDF)
Free :)
Thanks in advance.
I will try to answer in order:
Reading: This article is good for you.
Edit & export: Maybe this library works for you.
Free: The most difficult part of your question. You can do it for free using the Interop Assemblies for Office, but free controls are hard to come by; most of the controls around the net are not free.
Hope it helps.
I was faced with a similar question some years ago. I had a Windows Forms application with some 20 reports and about 100 users, and I needed to generate Word documents from the application. The application was installed on a server. My first attempt used Office interop, but it caused performance problems and all kinds of unpredictable exceptions. So I started to look for alternatives and soon landed on OpenXML.
The first idea was that our team would use the OpenXML SDK to generate and manipulate documents. It soon turned out that the learning curve was way too steep, and our management wasn't willing to pay for the extra work.
So we started to look for alternatives. We didn't find any useful free library and so we tried some commercial ones (Aspose, Docentric). Aspose gave great results, but it was too expensive. Docentric's license is cheaper and the product performed well in Word document generation, so we finally decided to purchase it.
WHAT IT TAKES TO GENERATE A DOCUMENT FROM A TEMPLATE
Install Docentric Toolkit (you can get a 30-day trial version for free)
In your Visual Studio project, add references to the 4 Docentric DLLs, which you can find in the installation folder C:\Program Files (x86)\Docentric\Toolkit\Bin
Include Entity Framework via its NuGet package if you will fill the Word document with data from a SQL database
Prepare the Word template, where you define the layout and include fields that will be filled with data at document generation (see the online documentation for how to do this).
It doesn't take much code to prepare the data to be merged with the template. In my example I prepare an order for customer "BONAP" from the Northwind database. Orders include customer data, order details and product data. The data model also includes header and footer data.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Docentric.Word;
using System.Diagnostics;
namespace WordReporting
{
// Report data model
public class ReportData
{
public ReportData()
{ }
public string headerReportTemplatetName { get; set; }
public string footerDateCreated { get; set; }
public string footerUserName { get; set; }
public List<Order> reportDetails { get; set; }
}
// model extensions
public partial class Order
{
public decimal TotalAmount { get; set; }
}
public partial class Order_Detail
{
public decimal Amount { get; set; }
}
// Main
class Program
{
static void Main(string[] args)
{
// variable declaration
List<Order> orderList = new List<Order>();
string templateName = @"c:\temp\Orders_template1.docx";
string generatedDocument = @"c:\temp\Orders_result.docx";
// reading data from database
using (var ctx = new NorthwindEntities1())
{
orderList = ctx.Orders
.Include("Customer")
.Include("Order_Details")
.Include("Order_Details.Product")
.Where(q => q.CustomerID == "BONAP").ToList();
}
// collecting data for the report
ReportData repData = new ReportData();
repData.headerReportTemplatetName = templateName;
repData.footerUserName = "<user name comes here>";
repData.footerDateCreated = DateTime.Now.ToString();
repData.reportDetails = new List<Order>();
foreach (var o in orderList)
{
Order tempOrder = new Order();
tempOrder.Customer = new Customer();
tempOrder.OrderID = o.OrderID;
tempOrder.Customer.CompanyName = o.Customer.CompanyName;
tempOrder.Customer.Address = o.Customer.Address;
tempOrder.Customer.City = o.Customer.City;
tempOrder.Customer.Country = o.Customer.Country;
tempOrder.OrderDate = o.OrderDate;
tempOrder.ShippedDate = o.ShippedDate;
foreach (Order_Detail od in o.Order_Details)
{
Order_Detail tempOrderDetail = new Order_Detail();
tempOrderDetail.Product = new Product();
tempOrderDetail.OrderID = od.OrderID;
tempOrderDetail.ProductID = od.ProductID;
tempOrderDetail.Product.ProductName = od.Product.ProductName;
tempOrderDetail.UnitPrice = od.UnitPrice;
tempOrderDetail.Quantity = od.Quantity;
tempOrderDetail.Amount = od.UnitPrice * od.Quantity;
tempOrder.TotalAmount = tempOrder.TotalAmount + tempOrderDetail.Amount;
tempOrder.Order_Details.Add(tempOrderDetail);
}
repData.reportDetails.Add(tempOrder);
}
try
{
// Word document generation
DocumentGenerator dg = new DocumentGenerator(repData);
DocumentGenerationResult result = dg.GenerateDocument(templateName, generatedDocument);
// start MS Word and show generated document
ProcessStartInfo startInfo = new ProcessStartInfo();
startInfo.FileName = "WINWORD.EXE";
startInfo.Arguments = "\"" + generatedDocument + "\"";
Process.Start(startInfo);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
// wait for the input to terminate the application
Console.WriteLine("Press Enter to exit...");
Console.ReadLine();
}
}
}
}
From the Schema Compare Options, I deselected all Object Types.
It still shows me differences in schema objects.
I scrolled through the big list of General options, and none of them appeared to do this.
I hacked it. If you save the compare, you can add this to the file:
<PropertyElementName>
<Name>Microsoft.Data.Tools.Schema.Sql.SchemaModel.SqlSchema</Name>
<Value>ExcludedType</Value>
</PropertyElementName>
You'll see where it goes when you open the saved file. This setting is not exposed in the UI, but it is apparently supported.
You can exclude schemas in code by running the program below as an exe before doing the schema merge. The code needs the Microsoft.SqlServer.DacFx NuGet package added to your project. It takes 2 parameters: the first is the .scmp file path and the second is a comma-separated string of schemas to exclude. It will overwrite the supplied .scmp and exclude the schema names you provided.
It essentially adds XML sections to the .scmp file that are equivalent to un-checking objects in the UI and saving the file (a persisted preference).
This exe execution can be a task in your VSTS (VSO) release pipeline if you want to exclude one schema from being merged during deployment.
using System;
using System.Linq;
using System.Collections.Generic;
using Microsoft.SqlServer.Dac.Compare;
namespace DatabaseSchemaMergeHelper
{
/// <summary>
/// Iterates through a supplied schema compare file and excludes objects belonging to a supplied list of schema
/// </summary>
class Program
{
/// <summary>
/// first argument is the scmp file to update, second argument is comma separated list of schemas to exclude
/// </summary>
/// <param name="args"></param>
static void Main(string[] args)
{
if (args.Length < 2) return; // need both the .scmp path and the schema list
var scmpFilePath = args[0];
var listOfSchemasToExclude = args[1].Split(',').ToList();
// load comparison from Schema Compare (.scmp) file
var comparison = new SchemaComparison(scmpFilePath);
var comparisonResult = comparison.Compare();
// find changes pertaining to objects belonging to the supplied schema exclusion list
var listOfDifferencesToExclude = new List<SchemaDifference>();
// add those objects to a list
foreach (SchemaDifference difference in comparisonResult.Differences)
{
if (difference.TargetObject != null &&
difference.TargetObject.Name != null &&
difference.TargetObject.Name.HasName &&
listOfSchemasToExclude.Contains(difference.TargetObject.Name.Parts[0], StringComparer.OrdinalIgnoreCase))
{
listOfDifferencesToExclude.Add(difference);
}
}
// add the needed exclusions to the .scmp file
foreach (var diff in listOfDifferencesToExclude)
{
if (diff.SourceObject != null)
{
var SourceExclusionObject = new SchemaComparisonExcludedObjectId(diff.SourceObject.ObjectType, diff.SourceObject.Name,
diff.Parent?.SourceObject.ObjectType, diff.Parent?.SourceObject.Name);
comparison.ExcludedSourceObjects.Add(SourceExclusionObject);
}
var TargetExclusionObject = new SchemaComparisonExcludedObjectId(diff.TargetObject.ObjectType, diff.TargetObject.Name,
diff.Parent?.TargetObject.ObjectType, diff.Parent?.TargetObject.Name);
comparison.ExcludedTargetObjects.Add(TargetExclusionObject);
}
// save the file, overwrites the existing scmp.
comparison.SaveToFile(scmpFilePath, true);
}
}
}
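For example, if the compiled program were named DatabaseSchemaMergeHelper.exe (a hypothetical name based on the namespace above), a release step could invoke it before the merge like this, where "staging,audit" is an illustrative list of schemas to exclude:
DatabaseSchemaMergeHelper.exe "C:\Build\MyDatabase.scmp" "staging,audit"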
Right-click on the top-level nodes (Add, Change, Delete) and you can choose "Exclude All" to uncheck all elements of that type. This will at least quickly get you to a state where everything is unchecked.
I'm using NHibernate to update 2 columns in a table that has 3 encrypted triggers on it. The triggers are not owned by me and I cannot make changes to them, so unfortunately I can't SET NOCOUNT ON inside them.
Is there another way to get around the TooManyRowsAffectedException that is thrown on commit?
Update 1
So far the only way I've gotten around the issue is to step around the .Save routine with:
var query = session.CreateSQLQuery("update Orders set Notes = :Notes, Status = :Status where OrderId = :Order");
query.SetString("Notes", orderHeader.Notes);
query.SetString("Status", orderHeader.OrderStatus);
query.SetInt32("Order", orderHeader.OrderHeaderId);
query.ExecuteUpdate();
It feels dirty and is not easy to extend, but it doesn't crater.
We had the same problem with a 3rd-party Sybase database. Fortunately, after some digging into the NHibernate code and a brief discussion with the developers, it turned out there is a straightforward solution that doesn't require changes to NHibernate itself. The solution is given by Fabio Maulo in this thread in the NHibernate developer group.
To implement this for Sybase, we created our own implementation of IBatcherFactory, inherited from NonBatchingBatcher, and overrode the AddToBatch() method to remove the call to VerifyOutcomeNonBatched() on the provided IExpectation object:
public class NonVerifyingBatcherFactory : IBatcherFactory
{
public virtual IBatcher CreateBatcher(ConnectionManager connectionManager, IInterceptor interceptor)
{
return new NonBatchingBatcherWithoutVerification(connectionManager, interceptor);
}
}
public class NonBatchingBatcherWithoutVerification : NonBatchingBatcher
{
public NonBatchingBatcherWithoutVerification(ConnectionManager connectionManager, IInterceptor interceptor) : base(connectionManager, interceptor)
{}
public override void AddToBatch(IExpectation expectation)
{
IDbCommand cmd = CurrentCommand;
ExecuteNonQuery(cmd);
// Removed the following line
//expectation.VerifyOutcomeNonBatched(rowCount, cmd);
}
}
To do the same for SQL Server you would need to inherit from SqlClientBatchingBatcher, override DoExecuteBatch() and remove the call to VerifyOutcomeBatched() on the Expectations object:
public class NonBatchingBatcherWithoutVerification : SqlClientBatchingBatcher
{
public NonBatchingBatcherWithoutVerification(ConnectionManager connectionManager, IInterceptor interceptor) : base(connectionManager, interceptor)
{}
protected override void DoExecuteBatch(IDbCommand ps)
{
log.DebugFormat("Executing batch");
CheckReaders();
Prepare(currentBatch.BatchCommand);
if (Factory.Settings.SqlStatementLogger.IsDebugEnabled)
{
Factory.Settings.SqlStatementLogger.LogBatchCommand(currentBatchCommandsLog.ToString());
currentBatchCommandsLog = new StringBuilder().AppendLine("Batch commands:");
}
int rowsAffected = currentBatch.ExecuteNonQuery();
// Removed the following line
//Expectations.VerifyOutcomeBatched(totalExpectedRowsAffected, rowsAffected);
currentBatch.Dispose();
totalExpectedRowsAffected = 0;
currentBatch = new SqlClientSqlCommandSet();
}
}
Now you need to inject your new classes into NHibernate. There are at least two ways to do this that I am aware of:
Provide the name of your IBatcherFactory implementation in the adonet.factory_class configuration property (a sketch of this is shown right after this list)
Create a custom driver that implements the IEmbeddedBatcherFactoryProvider interface
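The first option can be wired up when building the NHibernate configuration. This is a minimal sketch, assuming the NonVerifyingBatcherFactory class shown earlier is in the current assembly; the property name is the adonet.factory_class setting mentioned above:
// Point NHibernate at the custom batcher factory via the configuration property.
var cfg = new NHibernate.Cfg.Configuration();
cfg.SetProperty("adonet.factory_class",
    typeof(NonVerifyingBatcherFactory).AssemblyQualifiedName);
// ...add mappings and the rest of your configuration, then build the session factory.
var sessionFactory = cfg.BuildSessionFactory();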
Given that we already had a custom driver in our project to work around Sybase 12 ANSI string problems, it was a straightforward change to implement the interface as follows:
public class DriverWithCustomBatcherFactory : SybaseAdoNet12ClientDriver, IEmbeddedBatcherFactoryProvider
{
public Type BatcherFactoryClass
{
get { return typeof(NonVerifyingBatcherFactory); }
}
//...other driver code for our project...
}
The driver can be configured by providing the driver name in the connection.driver_class configuration property. We wanted to use Fluent NHibernate, and it can be done with Fluent as follows:
public class SybaseConfiguration : PersistenceConfiguration<SybaseConfiguration, SybaseConnectionStringBuilder>
{
SybaseConfiguration()
{
Driver<DriverWithCustomBatcherFactory>();
AdoNetBatchSize(1); // This is required to use our new batcher
}
/// <summary>
/// The dialect to use
/// </summary>
public static SybaseConfiguration SybaseDialect
{
get
{
return new SybaseConfiguration()
.Dialect<SybaseAdoNet12Dialect>();
}
}
}
and when creating the session factory we use this new class as follows:
var sf = Fluently.Configure()
.Database(SybaseConfiguration.SybaseDialect.ConnectionString(_connectionString))
.Mappings(m => m.FluentMappings.AddFromAssemblyOf<MyEntity>())
.BuildSessionFactory();
Finally you need to set the adonet.batch_size property to 1 to ensure that your new batcher class is used. In Fluent NHibernate this is done using the AdoNetBatchSize() method in a class that inherits from PersistenceConfiguration (see the SybaseConfiguration class constructor above for an example of this).
er... you might be able to decrypt them...
Edit: if you can't change the code, decrypt, or disable the triggers, then you have no code options on the SQL Server side.
However, you could try the "disallow results from triggers" option, which is OK for SQL 2005 and SQL 2008 but will be removed in later versions. I don't know whether it suppresses rowcount messages, though.
Setting the "Disallow Results from Triggers" option to 1 worked for us (the default is 0).
Note that this option will not be available in a future release of Microsoft SQL Server, but once it is removed the engine will behave as if it were set to 1. So setting it to 1 now fixes the problem and also gives you the same behavior as future releases.
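For reference, this is a server-level setting changed through sp_configure; since it is an advanced option, 'show advanced options' has to be enabled first. A minimal sketch of doing it from C# (the connection string is a placeholder, and the account needs permission to run sp_configure):
using System.Data.SqlClient;

class EnableDisallowResultsFromTriggers
{
    static void Main()
    {
        // Placeholder connection string - point it at the server you are configuring.
        using (var conn = new SqlConnection(@"Server=.;Database=master;Integrated Security=true"))
        {
            conn.Open();
            string sql = "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; " +
                         "EXEC sp_configure 'disallow results from triggers', 1; RECONFIGURE;";
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
    }
}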