Search multiple SDF files - SQL Server

I have a process that backs up my remote SQL Server database to local SDF (SQL CE) database files daily.
What I need is an easy way to search across multiple SDF files. For example, suppose I want to find all occurrences of a name within all of the backups. Instead of opening each individual SDF file -- one by one -- with Enterprise Manager, I'd like to be able to do a search across all the files and show the results in one centralized place. Maybe like a plug-in for Windows Search, etc.
If you're familiar with Notepad++, think about how its File Search feature works.
Is there any way to accomplish this, or am I just dreaming?

There is no direct way to accomplish this, unless you write your own search filter for SQL Server Compact. But you could use a tool like my SQL Server Compact command line utility http://sqlcecmd.codeplex.com together with some clever batch/PowerShell code to query across multiple .sdf files and collect the results in a single file.

You could write a script or program that opens/connects to a set of .SDF files (all files in a folder, from a hardcoded list, or from a table in a dedicated "master" .SDF), executes one or more statements like SELECT ... FROM table WHERE field = 'needle', and writes the result sets to the console, to an .HTML file, or to yet another .SDF.
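For example, a minimal PowerShell sketch of that approach might look like the following. It assumes SQL Server Compact 4.0 is installed in the default location, and the table and column names (Customers, Name) are placeholders for your own schema:

    # Search every .sdf backup in a folder and collect the hits in one CSV.
    Add-Type -Path "C:\Program Files\Microsoft SQL Server Compact Edition\v4.0\Desktop\System.Data.SqlServerCe.dll"

    $results = foreach ($file in Get-ChildItem "D:\Backups" -Filter *.sdf) {
        $conn = New-Object System.Data.SqlServerCe.SqlCeConnection("Data Source=$($file.FullName)")
        $conn.Open()
        $cmd = $conn.CreateCommand()
        $cmd.CommandText = "SELECT Name FROM Customers WHERE Name = 'needle'"
        $reader = $cmd.ExecuteReader()
        while ($reader.Read()) {
            # Record which backup file each match came from.
            [pscustomobject]@{ File = $file.Name; Name = $reader["Name"] }
        }
        $conn.Close()
    }
    $results | Export-Csv "D:\Backups\search-results.csv" -NoTypeInformation

The same loop could just as easily shell out to a command-line tool such as sqlcecmd instead of using the SqlServerCe provider directly.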

Related

Need advice on SSIS solution with variable-based connection managers

I am working on an SSIS solution to pick up 20 different .txt files from a specific folder and load them into different SQL Server database tables. I added a mapping table with table_name, file_name, file_path, full_connection_string. How do I tell the connection manager which connection to use for a given file?
Which variables/parameters should I use, and where?
I do not want to have 20 txt connections (one for each known filename) and 20 database connections.
All the online tutorials I can find are old and don't match the Visual Studio 2019 UI.
Any help is highly appreciated!
You can use a parameterized connection string.
Define a variable: _Server (string)
Select the connection manager
In the Properties window, select Expressions
Use your variable for the ServerName property
Since you have a loop, you can load the proper server name into the variable on each iteration, so the connection will point to that specific server.
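For example, the expression on the connection manager's ServerName property is just a reference to the variable defined in the first step:

    @[User::_Server]

Each iteration of the loop assigns a new server name to the variable before the connection is used.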
You will need one database connection manager for each target database as this is scoped at the database level.
You will need a flat file connection manager for each unique file metadata. You can have twenty Sales-date.txt files that use a single flat file connection manager and then expressions will take care of consuming the different files.
However, if you have Sales.txt and Customers.txt, the metadata, aka the columns inside the file, are going to be different and that's fine but you'll have to create a flat file connection manager for each of those types. This is the contract you're making with the SSIS engine - I promise all of the files this FFCM will touch conform to this standard. You will also need to have a Data Flow Task for each of these FFCMs as the engine computes how many rows of data it can operate on at a time based on the type and column constraints in the source.
If it were me, I'd spend a few days looking at Biml. Biml is the Business Intelligence Markup Language and what it allows you to do is describe your problem in a repeatable way. i.e. For each of these file types, I need an SSIS package that is a for each file enumerator to pick up the current file. Inside that, a data flow task to ingest the file. Then a File System Task to archive the file out of the working folder.
You've already started down this path by identifying your metadata (table_name, file_name, file_path, full_connection_string); the only thing remaining is to describe the contents of your files. If you look through my SO answers, you'll find plenty that use Biml to create a reproducible solution.

Execute an .rss script on several servers using SSIS, storing results in a table

I found a wonderful script that collects all the (shared) data sources used on a report server:
LINK
I simply love this script.
However, I am looking for a way to execute this script on several report servers and add the results to a centralised table. That way my colleagues and I would be able to see pretty quickly which data sources are used.
I could place this script on each report server, collect the CSVs on a central server and then use SSIS to insert them into an MSSQL table. That way I would have a nice central overview of all the data sources in use.
However, I would prefer to have the script in one location and then execute that script on a list of servers.
Something like:
Loop through table with servers
execute script (see link)
insert resulting csv into central table (preferably skip this step, have script insert data in table directly)
next server
Any suggestions as to what the best approach would be? Should it be a Web Service Task? A Script Task?
Something else completely?
The level of scripting in the script I mentioned is right at the edge of what I understand, so if someone knows how to adapt it so that I could use it as input to a data flow in SSIS, I would be very happy.
Thanks for thinking with me,
Henro
This script is run using a utility called rs.exe, so you would use an Execute Process Task to call it. To avoid writing to a file, you could modify the script and have it insert the results into a table. The package could be set up as follows:
Create a foreach loop which iterates over a list or ADO.NET recordset of your servers
Put the server name in a variable
Create a variable for the arguments for the process task, referencing the server variable from step 2
Add a process task which uses the above argument and calls rs.exe
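If you want to prototype the loop outside SSIS first, a rough PowerShell sketch of the same idea might look like this (the server names and the .rss script path are placeholders; rs.exe takes the script via -i and the report server URL via -s):

    # Run the same .rss script against every report server in the list.
    $servers = "rsserver1", "rsserver2", "rsserver3"
    foreach ($server in $servers) {
        & rs.exe -i "C:\Scripts\GetDataSources.rss" -s "http://$server/ReportServer"
    }

In the SSIS version, the argument variable from step 3 would hold the -i and -s portion, with the server name spliced in from the variable in step 2.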

SSMS Query to Text File

I have a complicated query that marshals data into a temporary table, which I then marshal into a further output temporary table before finally selecting from it to display to screen. The result gets saved out from grid view to text, and I get the file I need for processing off site.
What I want is for this query to be runnable and to create that file on the local disk without any need for the operator to change the "Results to" option or fiddle with anything.
What command or functionality might be available to me to do this?
I cannot install any stored procedures or similar on the server involved.
Since you can't do anything on the server, I would suggest writing an SSIS package. Create a data flow, and in your source object put your script. Your destination object will then point to the file you want. You have a fair number of options for output.
The SSIS package can then be run by
A SQL Job (assuming you are allowed even that)
A non-SQL job running a .bat file with a DTEXEC command
The DTEXECUI GUI.
Also you can store your SSIS package in the instance or on any fileshare you choose.
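For example, the .bat file route is just a one-line DTEXEC call (the package path here is a placeholder):

    dtexec /F "C:\Packages\ExportQueryToFile.dtsx"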

SQL Server, execute batch T-SQL script on multiple databases

Our SQL Server 2000 instance hosts several databases which are all similar, one for each of our clients. When the time comes to update them all, we use Red Gate SQL Compare to generate a migration script between the development database and a copy representing the current state of the client databases.
SQL Compare generates a script which is transactional: if one step fails, the script rolls back everything. But currently our system uses a method that splits the script on batch separators (the GO statement) and then runs each command separately, which ruins all the transactional behaviour. The GO statement is not supported when querying the database programmatically (in classic ASP).
I want to know how I could run that script (keeping the transactions) on all those databases (around 250), programmatically or manually in a tool. In Query Analyzer, we need to select each DB and press Run, which takes a long time for the number of databases we have.
If you can use SSMS from SQL 2005 or 2008, then I'd recommend the free SSMS Tools Pack.
I use the external sqlcmd command-line tool. I have the same situation on the server I work on.
I have the script in a *.sql file and the list of databases in a second file. I have a small *.bat script which iterates through all the databases and executes the script using the sqlcmd command.
In more detail, I have it set up like this:
DB.ini file with all the databases on which I want to deploy my script
sql/ directory where I store all scripts
runIt.bat - script which deploys scripts
The command line looks more or less like this:
sqlcmd -S <ComputerName>\<InstanceName> -i <MyScript.sql> -d <database_name> -E
In SQL Server 2000 it was the osql utility.
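A PowerShell equivalent of that runIt.bat loop could be as simple as the sketch below. It assumes DB.ini lists one database name per line and that every script in the sql\ folder should run against each database:

    # Run every script in sql\ against every database listed in DB.ini.
    foreach ($db in Get-Content "DB.ini") {
        foreach ($script in Get-ChildItem "sql" -Filter *.sql) {
            sqlcmd -S "MYSERVER\MYINSTANCE" -d $db -E -i $script.FullName
        }
    }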
UPDATE
Red Gate now has a tool called SQL Multi Script, which basically does exactly what you want. It supports SQL 2000 to 2008 R2 and can run queries on multiple databases in parallel, which improves performance.
Seven years later, I had run into the same issue so many times that I built a tool and published the project:
TAKODEPLOY
Here are some features:
Get all databases from a single instance and apply a name filter. Or just a single direct connection.
Mix database sources as much as you want. For example, two direct connections and one full instance, with or without a filter.
Script editor (AvalonEdit, the same component MonoDevelop uses)
Scripts are parsed and errors are detected before executing.
Scripts are split on GO statements.
Save your deployment into a file
Get a list of all databases before deploying.
See in real time what is happening (PRINT statements are recommended here!).
Automatic, independent rollback of each database if any error occurs.
Transparent Updates via Squirrel.
You can get it at: https://github.com/andreujuanc/TakoDeploy
Not sure if this will work, but try replacing the GO statements with semicolons, and running the entire statement in one batch.
If I recall, you can also create a script in SQL Compare to change everything back to the state it started in. You might want to generate both.
When I did this sort of deployment (it's been a while), I first loaded to a staging server that was set up exactly like prod, to make sure the scripts would work on prod before I started. Anything that failed usually did so because of the order the scripts were run in (you can't set a foreign key to a table that doesn't exist yet, for instance). I also scripted all table changes first, then all view changes, then all UDF changes, then all stored proc changes. This cut down greatly on the failures due to objects not yet existing, but I still usually had a few that needed to be adjusted.

Save Access Report as PDF/Binary

I am using an Access 2007 (VBA, .adp) front end with a SQL Server 2005 back end. I have a report that I want to save as a PDF and store as a binary file in SQL Server.
Report Opened
Report Closed - Closed Event Triggered
Report Saved as PDF and uploaded into SQL Server table as Binary File
Is this possible and how would I achieve this?
There are different opinions on whether it's a good idea to store binary files in database tables or not. Some say it's ok, some prefer to save the files in the file system and only store the location of the file in the DB.
I'm one of those who say it's ok - we have a >440 GB SQL Server 2005 database in which we store PDF files and images. It runs perfectly well and we don't have any problems with it (for example with speed...that's usually one main argument of the "file system" people).
If you don't know how to save the files in the database, google "GetChunk" and "AppendChunk" and you will find examples like this one.
Concerning database design:
It's best if you make two tables: one with only an ID and the blob field (where the PDF files are stored), and one with the ID and additional fields for searching and filtering.
If you do it this way, all the searching and filtering happens on the small table, and only once you know the ID of the file you want to load do you hit the big table, exactly one time, to load the file.
We do it like this and like I said before - the database contains nearly 450 GB of files, and we have no speed problems at all.
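To illustrate the two-table idea (the table and column names below are placeholders, and this PowerShell sketch simply stands in for whatever client code you use - from Access it would be the ADO GetChunk/AppendChunk route mentioned above):

    # Assumed layout:
    #   FileBlob(ID int primary key, Content varbinary(max))       -- the big table, blob only
    #   FileInfo(ID int primary key, FileName nvarchar(260), ...)  -- the small, searchable table
    $bytes = [System.IO.File]::ReadAllBytes("C:\Reports\Invoice.pdf")
    $conn = New-Object System.Data.SqlClient.SqlConnection("Server=MYSERVER;Database=MyDb;Integrated Security=SSPI")
    $conn.Open()
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = "INSERT INTO FileBlob (ID, Content) VALUES (@id, @content)"
    [void]$cmd.Parameters.AddWithValue("@id", 1)
    [void]$cmd.Parameters.AddWithValue("@content", $bytes)
    [void]$cmd.ExecuteNonQuery()
    $conn.Close()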
The easiest way to do this is to save the report out to disk as a PDF (if you don't know how to do that, I recommend this thread on the MSDN forums). After that, you'll need to use ADO to import the file using OLE embedding into a binary type of field. I'm rusty on that, so I can't give specifics, but Google searching has been iffy so far.
I'd recommend against storing PDF files in Access databases -- Jet has a strict limit to database size, and PDFs can fill up that limit if you're not careful. A better bet is to use OLE linking to the file, and retrieving it from disk each time the user asks for it.
The last bit of advice is to use an ObjectFrame to show the PDF on disk, which MSDN covers very well here.
