Create PDF automatically after new entry in database

I am not sure if this is even the right site for this, but I'll ask anyway.
Does anybody know of a program that can create a PDF with data from a database using a complex SQL statement? When an employee finishes a request for a customer, I want the program to be triggered by the new entry in a database table and to fill out a pre-built PDF with data it pulls from the database.
It needs to be able to handle large, complex SQL statements.

The only way to run custom code in SQL Server for your specific use case is to create a CLR stored procedure and a trigger that starts the PDF processing. You can write a class library that connects back to the database it is triggered from via the context connection (a specific keyword in the connection string). You can pass the key you need to select the whole dataset that fills the PDF as a stored procedure parameter.
https://msdn.microsoft.com/en-us/library/ms131094.aspx
In the class library you can reference third-party libraries to interact with an editable PDF and fill the fields you need with the data retrieved from the database. I suggest you have a look at the security concerns related to CLR use in SQL Server: your code runs within the sqlservr.exe process, sharing its resources and access privileges.
https://msdn.microsoft.com/en-us/library/ms131071.aspx
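
A minimal sketch of the trigger side, assuming a dbo.Requests table with a RequestId key and a CLR stored procedure named dbo.FillRequestPdf (both names hypothetical):

-- Hypothetical names throughout: dbo.Requests, RequestId, dbo.FillRequestPdf.
CREATE TRIGGER trg_Requests_FillPdf
ON dbo.Requests
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @RequestId INT;
    -- An INSERT can affect several rows, so walk the inserted pseudo-table.
    DECLARE req CURSOR LOCAL FAST_FORWARD FOR
        SELECT RequestId FROM inserted;
    OPEN req;
    FETCH NEXT FROM req INTO @RequestId;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- The CLR procedure selects the full dataset for this key
        -- and fills the PDF fields.
        EXEC dbo.FillRequestPdf @RequestId = @RequestId;
        FETCH NEXT FROM req INTO @RequestId;
    END;
    CLOSE req;
    DEALLOCATE req;
END;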

SSRS can produce PDF output. See if that table's insert trigger can call a CLR stored procedure that in turn executes the report, which can even be attached to an email.
The report can be as complex as you need, as long as the report's stored procedure can populate the data based on the newly inserted row.
For stability, avoid a trigger: do the insert itself in a stored procedure, and have that insertion procedure call the CLR.
Further, instead of using CLR, you can use an SSRS data-driven subscription. When creating the subscription, set it up as a one-time scheduled job. Your stored procedure can then invoke this 'expired' job from SQL Server Agent using sp_start_job.
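
A sketch of that insert procedure, with the table and job names assumed:

-- dbo.Requests and the job name are hypothetical.
CREATE PROCEDURE dbo.InsertRequest
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Requests (CustomerId) VALUES (@CustomerId);
    -- Kick off the one-time subscription job; sp_start_job returns
    -- as soon as the job is queued.
    EXEC msdb.dbo.sp_start_job @job_name = N'Request_PDF_Subscription';
END;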


Options for executing SQL commands in parallel

Scenario
Note: I am using SQL Server 2017 Enterprise
I am looping through a list of databases and copying data into them from one source database. That source database will only be accessed by this process (no other transactions will be made against it from anything else). The copying ranges from straight table-to-table transfers to more complex, longer-running queries and stored procedures. All of this is done with SQL Server Agent jobs calling procedures; I'm not using anything like SSIS.
Question
Instead of looping through all the databases and running the statements one at a time, I want to be able to run them in parallel. Is there an easy way to do this?
Options I've thought of:
Run each data transfer as its own job and then start all the jobs at once. From my understanding they would execute asynchronously, but I'm not 100% sure (see the sketch after this list).
Generate the SQL statements, write a script outside of SQL Server (e.g. PowerShell or Python), and run all the commands in parallel
Leverage SSIS
I prefer not to do this, since it would take too much work and I'm not very familiar with SSIS. It may be used down the road, though.
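
Regarding the first option: sp_start_job returns as soon as the job is queued, so jobs started back to back do run concurrently. A minimal sketch, with hypothetical job names:

-- Each call returns immediately; the three jobs run in parallel.
EXEC msdb.dbo.sp_start_job @job_name = N'Copy_To_Db1';
EXEC msdb.dbo.sp_start_job @job_name = N'Copy_To_Db2';
EXEC msdb.dbo.sp_start_job @job_name = N'Copy_To_Db3';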
Use PowerShell...
Create a table on the central database to house instance / connection string details. (Remember to obfuscate these for security.)
Create another table to house the queries.
Create a third table to map instance to query (see the table sketch after this list).
In PowerShell, create a collection/list object, deserialized from your data entries. Each element is made up of three properties: {Source / Destination / Query}.
Write a function to carry out the ETL work: connect to the database, read from the source, write to the destination.
Iterate over the collection using the Foreach-Parallel construct with your function nested within. This opens one connection (SPID) per element in the collection and passes those values into your function, where the work is carried out.
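
A sketch of the three supporting tables, with all names assumed:

-- Hypothetical schema for the instance / query / mapping tables.
CREATE TABLE dbo.Instances (
    InstanceId INT IDENTITY PRIMARY KEY,
    ConnString NVARCHAR(512) NOT NULL   -- obfuscate before storing
);
CREATE TABLE dbo.Queries (
    QueryId INT IDENTITY PRIMARY KEY,
    Query   NVARCHAR(MAX) NOT NULL
);
CREATE TABLE dbo.InstanceQueryMap (
    InstanceId INT NOT NULL REFERENCES dbo.Instances (InstanceId),
    QueryId    INT NOT NULL REFERENCES dbo.Queries (QueryId),
    PRIMARY KEY (InstanceId, QueryId)
);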

Pass connection information and parameters to server based SSIS job from vb.net

After reviewing all the different options I am still confused.
Here is the scenario. We have multiple databases on the same server, and we would like a single SSIS job to handle imports into (or exports from) a table from a file. We are calling this from VB.NET, and the job runs under SSIS on the server. We don't have xp_cmdshell available.
We need to pass the job unique run information (it is possible that two people could be running the same job on the same DB, or on different DBs on the same server), the database connection information (this cannot be stored and selected in the job, as DBs may be added or removed as needed and we don't want to reconfigure the job), and the file name/path (on the server, or a permitted UNC path available to SSIS).
We have looked at the option of declaring the job/job steps and then directly executing the job. We like this idea: the jobs would be unique, and the SQL procedure each job calls could report issues back to a common log table by job ID, which would then be available for review.
What I don't really follow is how to pass the information that this job needs.
In http://code.msdn.microsoft.com/Calling-a-SSIS-Package-a35afefb I see parameters being passed using the SET command, but I get confused by the explanation that things are processed twice. Also, in that example, would I change the master DB reference to my DB in the Add Job Step?
My issue is that no example cleanly shows both passing parameters and changing DBs. Many use different approaches, like reading a list of DBs to process from a data source, and none clearly shows what to do with a variable that is passed down to a called stored procedure.
I don't have time to delve deep and experiment; I need to see how it is done. I am trying to understand it one level back, so I know how we can utilize it and what information we need (i.e., what I need for connection information in order to assign it dynamically), because that tells me where in the grand scheme I get that information. (We don't store it in the DB doing the job; we have a repository in a central DB for that, but I don't know exactly what I need to store!)
Brian
Parameters that are dynamic to a single run of a job can be passed in to the SSIS package through a config table. The process that starts the job sets any necessary parameters in the config table before starting the job. The job kicks off the SSIS package, which has a connection manager to read the values out of the config table and into parameter values within the SSIS package.
You mentioned that you have database connection information; if you choose to pass parameters through a table, keep in mind that storing SQL login information in a database is bad practice. The connection manager in the SSIS package should use Windows authentication, and any permissions the package needs can be granted to the SQL Server Agent service account.
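
A sketch of such a per-run config table, all names assumed; the caller inserts a row, starts the job, and the package reads the values back by run ID:

-- Hypothetical config table for per-run SSIS parameters.
CREATE TABLE dbo.JobRunConfig (
    RunId     UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
    TargetDb  SYSNAME       NOT NULL,  -- database to import into / export from
    FilePath  NVARCHAR(260) NOT NULL,  -- server or UNC path of the file
    CreatedAt DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
);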
From what I understand, you want to run a package (or packages) via a SQL Agent job, and the database it will run against is subject to change.
As supergrady says, you can pass specific parameters to the package through a config table.
What I did was create a config table and add a status column (a bit that indicates on/off, true/false). This lets me run a SQL script that turns on the specific databases I want and turns off those I don't. For me this is easier than opening the job and fiddling with the command-line values, which is another way of getting what you want. I hope this helps.
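
For example, a run might be configured with a script like this (table and column names assumed):

-- Turn target databases on or off for the next package run.
UPDATE dbo.PackageConfig SET IsActive = 1 WHERE DatabaseName = N'SalesDb';
UPDATE dbo.PackageConfig SET IsActive = 0 WHERE DatabaseName = N'ArchiveDb';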

Is there a free GUI tool for data sync between DB in which it is possible to script rules?

What I need to do is sync some data between 2 databases. The source can be anything (comma-separated file, XLS file, any database, ...); the destination is MS SQL Server.
I do not need to sync all data, I just need to sync particular tables.
Example:
I need to sync the accounting software's CUSTOMERS table (the software runs on PostgreSQL) with the CRM's (which runs on SQL Server).
Some problems this tool should be able to handle:
1) The accounting software's customers table has one field that is not mapped in the CRM customers table. (I want to map this extra field to CUSTOMERS_CUSTOM_DATA.EXTRA_FIELD.)
2) Support rules (like syncing only customers whose code is between 10000 and 99999).
3) Allow post-insert tasks (for example, I am using manually managed sequences for the table IDs, so after inserting 10 records I need to add 10 to the sequence).
4) Have an exception-handling mechanism, so if something goes wrong it can either call a SQL Server stored procedure (which I already have; it sends an e-mail to me) or simply send a message notifying that something went wrong in the nightly sync.
5) Be easy to schedule (hourly, daily, including manual runs).
6) Perform data conversion: if the Surname field is varchar(20) in the source table and varchar(15) in the destination, I want to explicitly say "perform a truncation".
7) Have different rules for insert and update (see the MERGE sketch after this question). For example, the e-mail field is not present in the source, but I want to populate it in the destination on insert only, not on update: when I insert a new customer I populate the e-mail field by concatenating name and surname, but then I let the users modify it. This first insertion just simplifies data entry; afterwards this particular field is handled manually. So: on insert, populate the e-mail field; on update, don't touch it.
8) On a delete in the source DB, don't delete in the destination; just change the varchar(10) STATUS to DELETED.
Note: I know Integration Services would be perfect for this, but I must support the Express Edition, so SSIS is not an option.
I created a bunch of scripts and scheduled stored procedures that currently do what I need, but they are very hard to maintain, and the total lack of a GUI makes the work much slower. I remember seeing Talend a while ago; maybe that tool is the answer I need. Anyway, I need to give management a quick answer, so I have no time to investigate every tool on the market, and I would prefer a suggestion from an expert.
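
For points 7 and 8, the per-column insert/update rules and the soft delete map naturally onto a T-SQL MERGE. A minimal sketch, with all table and column names assumed:

-- Sketch only: dbo.CrmCustomers, staging.SourceCustomers and their
-- columns are hypothetical stand-ins for the real tables.
MERGE dbo.CrmCustomers AS dst
USING staging.SourceCustomers AS src
    ON dst.Code = src.Code
WHEN MATCHED THEN
    UPDATE SET dst.Name    = src.Name,
               dst.Surname = src.Surname  -- e-mail deliberately untouched
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Code, Name, Surname, Email)   -- e-mail populated on insert only
    VALUES (src.Code, src.Name, src.Surname, src.Name + '.' + src.Surname)
WHEN NOT MATCHED BY SOURCE THEN
    UPDATE SET dst.Status = 'DELETED';    -- soft delete instead of DELETE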
I believe SQL Server Integration Services does all that, and I believe SQL Server Management Studio allows you to create and package your SSIS jobs so that they can be deployed elsewhere.
Finally I went with Talend. I never really used SSIS; I only saw a live demo of it at a SQL Server conference. Anyway, Talend is a free (and quite feature-rich) alternative to SSIS, so it will suit the needs of all customers, including the 95% that have SQL Server Express.

How to create reports in Access via ADO when data is in SQL Server?

I have an Access 2003 project in which all data is stored in SQL Server 2008. I am using ADO to view/update data via forms that are completely unbound. For example, a form has several textboxes and combo boxes on it. When the form is loaded, I use ADO to call a stored procedure on SQL Server; it returns a recordset, and I populate the controls via VBA with data from that recordset. I like this approach because only the VBA is stored within Access. No data (well, actually connection strings are stored in Access, but that is it!).
My problem is what to do when it comes to reports. I want to create reports based on views created within SQL Server, but I would like to avoid, if possible, statically linking to the views from within Access. Is it possible to set the recordsource of a report dynamically at run time to the results of a SQL Server view? If it is, how does one go about designing the report if Access does not contain any data?
More info: the reason I want to avoid linking to the view in Access is that the environment the application runs in changes (production, development, test). Currently, whenever I call a database stored procedure, I look up the connection string (Active Directory based, so no passwords are stored) in the only table that is stored in Access.
Thanks for any assistance.
First of all, let's be clear: you don't have an Access 2003 "project"; you have an Access 2003 database.
An actual Access Data Project (ADP) cannot have local tables and uses SQL Server as its back end. When you view Tables you see the ones that exist on the server, and under Queries you see the views, functions, and stored procedures that exist on the server. You can use the Upsize Wizard to turn an Access database into an Access Data Project (or, probably better, just create a new ADP and import all the forms, reports, macros, and modules).
Here are my ideas:
Convert the database to an actual Access Data Project and then just use regular old queries as if they were addressed to a local database. You can even bind forms to stored procedures, and they can be updatable. To deal with production, development, and test, you just change the connection string in the GUI, or you change it through code like so:
Application.CurrentProject.CloseConnection
Application.CurrentProject.OpenConnection NewConnString
Whether you read the connection string from a centralized database, from a text file on a share, or from a common table you load in each environment (holding the connection information for every other environment) is up to you. I have one Access Data Project with a toolbar containing an Environment dropdown. When the environment is switched, a child database dropdown is repopulated, and finally all open forms are notified by an event (though bound forms close when this occurs).
There's nothing wrong with using linked tables. Just write a procedure that loops through all the tables and updates them to point to the correct server when you want to change environments. The difference between "static" linking and "dynamic" linking is just a single VBA procedure that rips through all the tables and relinks them (easy peasy).
Setting a report recordset dynamically at runtime is problematic. It MIGHT be possible in actual Access Data Projects, but definitely not in regular MDBs.
You CAN create pass-through queries in an Access MDB, but I'm not sure about passing parameters in. You'd probably have to set the query text dynamically, with the parameters hard-coded, and then run the report. This would be a problem for a multi-user database unless each person runs his own copy of the front end.
I recommend that you go with option 1 or 2. Option 1 seems simplest, but there is some learning to do before you'll become facile with ADPs over MDBs. Let me know if you think you'll go down that route and I'll share some of the gotchas with you. It's probably easier than what you're doing now, which is doing everything manually. (Ouch!) The second option would be fastest to implement right away without throwing any wrenches into your current skill with MDBs.
UPDATE
So if you want to link tables, here's some code to get you started:
Sub TableRelink(MdbPath As String)
    Dim Table As DAO.TableDef
    Dim Tables As DAO.TableDefs
    Set Tables = CurrentDb.TableDefs
    For Each Table In Tables
        If Table.SourceTableName <> "" Then 'Only linked tables have a source table name
            Table.Connect = ";DATABASE=" & MdbPath 'Point the link at the new back end
            Table.RefreshLink 'Re-establish the link with the new connect string
        End If
    Next
End Sub
This code is for MDB files, but some digging will quickly give you the correct Connect properties and values to use for SQL Server linked tables (which use an ODBC-style connect string).
Another Thought
I just thought of another possible way to handle just the problem you're experiencing: use a session-keyed "temp" table in Access. Create a local table that has all the columns the view returns, plus a GUID column. When the report is run, insert the contents of the view into the local table, keyed by a new GUID value. Set the recordsource of the report to SELECT * FROM MyViewTempTable WHERE GUID = '{GUID}'. Simple problem solved. On Report_Close, delete the rows from the table. Perhaps also store a date and delete rows older than 10 days, in case any get left behind.
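
The pattern, sketched in SQL with the names above (in practice the temp table is a local Access table and the statements are run from VBA):

-- Stage this session's rows under a fresh GUID...
INSERT INTO MyViewTempTable (SessionGUID, Col1, Col2)
SELECT '{GUID}', Col1, Col2 FROM MyServerView;

-- ...bind the report to just those rows...
SELECT * FROM MyViewTempTable WHERE SessionGUID = '{GUID}';

-- ...and clean up on Report_Close (plus anything older than 10 days).
DELETE FROM MyViewTempTable WHERE SessionGUID = '{GUID}';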

Creating a New Database from Within a Stored Procedure

Due to an employee quitting, I've been given a project that is outside my area of expertise.
I have a product where each customer will have their own copy of a database. The UI for creating the database (licensing, basic info collection, etc.) is being outsourced, so I was hoping to have a single stored procedure they can call, providing a few parameters, and have the SP create the database. I have a script for creating the database, but I'm not sure of the best way to actually execute it.
From what I've found, this seems to be outside the scope of what an SP can easily do. Is there any sort of "best practice" for handling this kind of program flow?
Generally speaking, SQL scripts - both DML and DDL - are what you use for database creation and population. SQL Server has a command line interface called SQLCMD that these scripts can be run through - here's a link to the MSDN tutorial.
Assuming there's no per-customer customization of the tables or columns involved, you could get away with using either detach/attach or backup/restore. Both require that a baseline database exist, with no customer data; you then use either method to capture the database as-is. Backup/restore is preferable because detach/attach takes the source database offline. Either way, logins need to be synced with the database users before they can access the new database.
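
A restore of the baseline into a new customer database might look like this (file paths and logical file names assumed):

-- Restore the no-customer-data baseline under a new database name.
RESTORE DATABASE CustomerX
FROM DISK = N'C:\Backups\Baseline.bak'
WITH MOVE N'Baseline_Data' TO N'C:\Data\CustomerX.mdf',
     MOVE N'Baseline_Log'  TO N'C:\Data\CustomerX_log.ldf';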
If you have the script to create the database, it is easy for them to run it from within their program. If you have any specific prerequisites for creating the database and setting permissions accordingly, you can wrap all the scripts into one script file to execute.
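
A minimal sketch of such a wrapper procedure, with the procedure and parameter names hypothetical; the statement is built as dynamic SQL because CREATE DATABASE cannot take its name from a variable directly:

CREATE PROCEDURE dbo.CreateCustomerDatabase
    @DbName SYSNAME
AS
BEGIN
    -- CREATE DATABASE also cannot run inside an explicit transaction,
    -- so call this procedure outside of one.
    DECLARE @sql NVARCHAR(MAX) = N'CREATE DATABASE ' + QUOTENAME(@DbName);
    EXEC (@sql);
    -- The schema and permission scripts would follow here, also executed
    -- as dynamic SQL in the context of the new database.
END;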
