Anyone else heard of this ColdFusion T-SQL "USE database" bug?

In the admin area of our company's production site, we have a little query-dumping tool, and while trying to get data from a database other than the main one, I unknowingly used the USE database command.
And here's the kicker: it then made every ColdFusion page with a query instantly fail, since it somehow caches that USE database command.
Has anyone else heard of this weird bug?
How can we stop this behavior?
If I use a "USE database" command, I want it to apply only to the query I am currently running; after I am done, the connection should go back to the normal database.
This is a weird and potentially damaging problem.
Any thoughts?

I imagine this has something to do with connection pooling. When you call close, it doesn't close the connection; it just puts it back into the pool. When you call open, it doesn't have to open a new connection; it just grabs an existing one from the pool. If you change the database that the connection is pointing to, ColdFusion may be unaware of this. This is why some platforms (MySQL on .NET, for instance) reset the connection each time you retrieve it from the pool, to ensure that you are querying the correct database and that you don't have temporary tables or other session info hanging around. The downside of this behaviour is that it requires a round trip to the database even when using pooled connections, which may not really be necessary.
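If it helps to see it in plain T-SQL: USE changes the session's database context, and that context sticks to the pooled connection until something changes it again. The database and table names below are made up for illustration:

    SELECT DB_NAME();                    -- e.g. MainDB
    USE OtherDB;
    SELECT DB_NAME();                    -- now OtherDB, and it stays that way for
                                         -- every later query on this connection
    -- Safer alternative: leave the context alone and fully qualify the object
    SELECT * FROM OtherDB.dbo.SomeTable;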

Kibbee is on the right track, but to extend that a little further with three possible workarounds:
Create a different DSN for use by that one query so the "USE DATABASE" statement would only persist for any queries using that DSN.
Uncheck "Maintain connections across client requests" in the CF admin
Always remember to reset the database to the one you intend to use at the end of the request. It kinda goes without saying that this is a very dangerous utility to have on your production server!
It's not a bug nor is it really unexpected behavior - if the query is cached, then everything inside the cfquery block is going along for the ride. Which database platform are you using?


Calling a PowerShell script from a SQL Server trigger

Is it possible to use PowerShell's "Send-MailMessage -SMTPServer" command from a SQL Server trigger?
I am trying to send emails when rows in the database are updated or a new row is created. I am not able to use Database Mail due to security restrictions. However, I can send emails through PowerShell's Send-MailMessage command.
First off, this is almost certainly a very bad idea. Keep in mind that triggers can cause unexpected issues in terms of transaction escalation and holding locks longer than necessary while they're processing. Also keep in mind that people will probably not expect there to be triggers of this sort on your table, and that they'll try to do CRUD operations on it like it's a normal table and not understand why their applications are timing out.
That said, you could do this in at least three ways:
Enable xp_cmdshell and use that to shell out to PowerShell, as explained here: Running Powershell scripts through SQL - but don't do this, because xp_cmdshell is a security risk and it is very likely to cause problems for you one way or another (whether because someone uses it in a damaging manner or because PowerShell just fails and you don't even know why). If you can't use Database Mail due to security restrictions, you should definitely not be using xp_cmdshell, which has even more security concerns!
Instead of using PowerShell, configure Database Mail and have your trigger call sp_send_dbmail - but don't do this either, because it could easily fail or cause problems for your updates (e.g. the SMTP server goes down and your table can't be updated anymore). (I wrote this part before I saw that you can't use it because of security restrictions.)
One other option that comes to mind may be more secure, but is still not ideal: create a SQL CLR library that actually sends the mail using the .NET SmtpClient. This could be loaded into your instance and exposed as a regular SQL function that your trigger could call. It can be done more securely than just enabling xp_cmdshell, but if you don't have the ability to configure Database Mail, it probably violates the same policy.
Instead of those options, I'd recommend one of these:
Instead of sending an email every time there's an update, have your trigger write to a table (or perhaps to a Service Broker queue), and create a job that periodically emails the latest data from that table, or build some kind of report off of it (see the sketch after this list). This is preferable because writing to a table or Service Broker queue should be faster and less error-prone than trying to send an email from inside a trigger.
Configure and use Change Data Capture, if your version supports it. You could then write some Agent jobs to regularly email users when there are updates. This may be a bit more powerful and configurable for you, and it sidesteps some of the problems that triggers can cause.
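A minimal sketch of that queue-table idea, with made-up table and column names; the trigger only records the change, and a scheduled SQL Agent job (for example a PowerShell step calling Send-MailMessage) drains the table later:

    -- Hypothetical notification queue populated by the trigger
    CREATE TABLE dbo.PendingNotifications (
        Id        int IDENTITY(1,1) PRIMARY KEY,
        SourceId  int       NOT NULL,                 -- key of the changed row
        ChangedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        Sent      bit       NOT NULL DEFAULT 0
    );
    GO

    CREATE TRIGGER trg_Orders_QueueNotification
    ON dbo.Orders                                     -- hypothetical source table
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Just an INSERT: fast, transactional, no SMTP call inside the trigger
        INSERT INTO dbo.PendingNotifications (SourceId)
        SELECT Id FROM inserted;
    END;
    GO

    -- The periodic job emails whatever it finds, then marks it as handled:
    -- UPDATE dbo.PendingNotifications SET Sent = 1 WHERE Sent = 0;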

Point Connection String to custom utility

Currently we have our ASP.NET MVC LOB web application talking to a SQL Server database. This is set up through a connection string in the web.config as usual.
We are having performance issues with some of our bigger customers, who run some really large reports and KPIs on the database; these choke it up and cause performance problems for the rest of the users.
Our solution so far is to set up replication on the database, pass all the report and KPI data calls off to the replicated server, and leave the main server for the common critical use.
Without having to add another connection string to the config for the replicated server and go through the application directing the report, KPI and other read-only calls to the secondary DB, is there a way I can point the web.config connection string at an intermediary node that will analyse each data request and shuffle it off to the appropriate DB? i.e. if the call is a standard update on the DB it goes to the main DB, and if a report is being loaded it is passed off to the secondary replicated server.
We will only need to add this node for the bigger customers with larger DBs, so if we can get away with adding a node outside the current application setup, it will save us a lot of code changes and testing.
Thanks in advance
I would say it may be easier for you to add a second connection string for reports, etc. instead of trying to analyse the request.
The reasons are as follows:
You probably have a fairly good idea which areas of your system need to go to the second database. Once you identify them, you can just point them at the second database and not worry about switching them back and forth.
You can just create two connection strings in your config file (see the sketch after this list). If you have only one database for smaller customers, you can point both connections at the same database; for bigger customers, you can use two different connection strings. This way the system stays flexible and configurable.
Analysing requests usually turns out to be complex, and adding that extra complexity seems unwarranted in this case.
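For illustration, a minimal web.config sketch with purely made-up connection names, server names and database name:

    <connectionStrings>
      <!-- Primary OLTP database: all standard reads and writes -->
      <add name="MainDb"
           connectionString="Server=sql-primary;Database=LobDb;Integrated Security=True"
           providerName="System.Data.SqlClient" />
      <!-- Replica: report and KPI queries only; for smaller customers this can
           simply point at the same server as MainDb -->
      <add name="ReportDb"
           connectionString="Server=sql-replica;Database=LobDb;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>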
All my comments are based on what you wrote above and may not be absolutely valid - you know the system better; use them if you want.

Failover strategy for database application

I've got a database application that reads and writes and holds a local cache. In case of an application server fault, a backup server should take over.
The primary and backup applications can only run mutually exclusively, because of the local cache and a low isolation level on the database.
As far as my knowledge of communication goes, it is impossible for the two servers on their own to always agree on which one is allowed to run exclusively.
Can I somehow solve this conflict by using the database as a third entity? I think this is a fairly typical problem and there might not be a 100% safe method, but I would be happy to know how other people recommend solving such issues, or whether there is a best practice for this.
It's okay if both applications are down for 30 minutes or so, but that is not enough time to get people out of bed and let them figure out what the problem is.
Can you set up a third server which is monitoring both application servers for health? This server could then decide appropriately in case one of the servers appears to be gone: Instruct the hot standby to start processing.
If I get the picture right, your backup server constantly polls the primary server for data updates. It wouldn't be hard to detect when a poll fails, retry it three times at 30-second intervals, and on the third failure dynamically update the DNS entry to reflect the change in active server. Both Windows DNS and BIND accept dynamic updates, signed and unsigned.
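To illustrate the "database as a third entity" idea from the question: one rough sketch (not a complete solution, and the resource name is made up) is to let each application server try to take a SQL Server application lock on startup, so only the holder of the lock runs actively:

    DECLARE @result int;

    -- Only one session can hold this exclusive lock; the other server stays standby
    EXEC @result = sp_getapplock
         @Resource    = N'MyApp.ActiveInstance',   -- hypothetical resource name
         @LockMode    = N'Exclusive',
         @LockOwner   = N'Session',
         @LockTimeout = 0;                         -- fail immediately if already held

    IF @result >= 0
        PRINT 'This instance is active';           -- start processing, warm the cache
    ELSE
        PRINT 'Another instance is active; staying in standby';

    -- If the active server dies, its session ends, SQL Server releases the lock,
    -- and the standby can acquire it on its next attempt.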

Replicating / Cloning data from one MS SQL Server to another

I am trying to get the contents of one MS SQL database into a second MS SQL database. There is no conflict management required and no schema updating; it is just a plain copy-and-replace of the data. The data in the destination database would be overwritten, in case somebody had changed something there.
Obviously, there are many ways to do that:
SQL Server Replication: Well established, but uses old protocols. Besides that, a lot of developers keep telling me that the devil is in the details, that replication might not always work as expected, and that it is the best choice for an administrator but not for a developer.
MS Sync Framework: MSF is said to be the cool new technology - the new stuff you love to get because it sounds so innovative. Its generic approach to synchronisation sounds like: learn one technology and how to integrate a data source, and you will never have to learn how to develop syncing again. On the other hand, you can read that the main usage scenario seems to be synchronizing MSSQL Compact databases with MSSQL.
SQL Server Integration Services: This sounds like a plannable emergency solution. In case the firewall is not working, we have a package that can be executed again and again... until the firewall is opened or the authentication is fixed.
Brute-force copy and replace of the database files: probably not the best choice.
Of course, looking at the Microsoft websites, I read that every technology (apart from brute force, of course) is said to be a solid solution that can be applied in many scenarios. But that is, of course, not the stuff I wanted to hear.
So what is your opinion on this? Which technology would you suggest?
Thank you!
Stefan
The easiest mechanism is log shipping. The primary server can put the log backups on any UNC path, and then you can use any file-sync tool to manage getting the logs from one server to the other. The secondary just automatically restores any transaction log backups it finds in its local folder. This automatically handles not just data, but schema changes too.
The secondary will be read-only, but that's exactly what you want - otherwise, if someone can update records on it, you're going to be in a world of hurt.
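A minimal sketch of the restore side of that idea (database name and paths are made up; the secondary must first be seeded with a full backup restored WITH NORECOVERY or STANDBY):

    -- On the primary: periodic log backups to a share
    BACKUP LOG SalesDb
        TO DISK = N'\\backupshare\logs\SalesDb_2230.trn';

    -- On the secondary: restore each log backup as it arrives.
    -- STANDBY leaves the database readable between restores.
    RESTORE LOG SalesDb
        FROM DISK = N'D:\incoming\SalesDb_2230.trn'
        WITH STANDBY = N'D:\incoming\SalesDb_undo.dat';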
I'd add two techniques to your list.
Write T-SQL scripts to INSERT...SELECT the data directly
Create a full backup of the database and restore it onto the new server
If it's a big database and you're not going to be doing this too often, then I'd go for the backup and restore option. It does most of the work for you and is guaranteed to copy all the objects.
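For example, the backup-and-restore route boils down to something like this; the database name, logical file names and paths are all made up:

    -- On the source server
    BACKUP DATABASE SalesDb
        TO DISK = N'\\backupshare\SalesDb_full.bak'
        WITH INIT, COMPRESSION;

    -- On the destination server: overwrite the existing copy and relocate the files
    RESTORE DATABASE SalesDb
        FROM DISK = N'\\backupshare\SalesDb_full.bak'
        WITH REPLACE,
             MOVE N'SalesDb'     TO N'D:\Data\SalesDb.mdf',
             MOVE N'SalesDb_log' TO N'E:\Logs\SalesDb_log.ldf';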
I've not heard of anyone using Sync Framework, so I'd be interested to hear if anyone has used it successfully.

Is there a way to make transactions or connections read only in SQL Server?

I need a quick "no" for DELETE/UPDATE/INSERT, since a third-party (3P) reporting tool allows users to write their own SQL.
I know that I should probably add a new user, set permissions on tables/SPs/views/etc., and then create a new connection as that restricted user.
Is there a quicker way to force a transaction or connection in SQL Server into read-only mode?
I don't know. If the 3P tool is that crazy, I would be completely paranoid about what I exposed to it. I think that setting up a new user is the best thing. Maybe even just give them certain views and/or stored procs and call it a day.
Why are you worried about your users' ability to put arbitrary SQL in their reporting queries? If they have the rights to change data in your database, surely they can just connect to it with any ODBC client and execute the SQL directly.
I'm not sure it's the 3P tool that's the issue here; it sounds more like you need to restrict your users but haven't.
If you have a class of users who shouldn't be allowed to change your data, then set their accounts up that way. Relying on the fact that they'll only use a reporting tool that doesn't let them change data is a security hole I could drive a truck through.
If they are allowed to change the data, restricting sessions from 3P won't help secure your system.
Unless I've misunderstood your set-up. I've been wrong before, just ask my wife. In which case, feel free to educate me.
Does it have to be with named users? I have a "report" user and a "browser" user that just have SELECT rights on most tables. Anyone who needs data uses those accounts, and since they are SELECT-only I don't have to worry about them.
See Kern's link.
Change the permissions for the user (the one used in the connection string) on the SQL Server.
If you have control over when the connection is created and closed, then you could perform a BEGIN TRAN at the start and a ROLLBACK at the end. That way, anything the reporting tool does will be rolled back. However, if the tool can manage its own transactions or open new connections, or if the user base is unknown and potentially malicious, it is not foolproof. In addition, any large transaction may result in your database being locked up by your users' actions.
I have to say, though, that the real answer is that security is allocated to users. The "quicker" way you're after is a new user with just read-only permissions.
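A minimal sketch of that read-only user; the login name, database name and password are placeholders:

    CREATE LOGIN report_reader WITH PASSWORD = 'use-a-strong-password-here';
    GO
    USE SalesDb;                                    -- hypothetical database
    CREATE USER report_reader FOR LOGIN report_reader;
    -- db_datareader grants SELECT on every table and view, nothing else
    ALTER ROLE db_datareader ADD MEMBER report_reader;
    -- Belt and braces: explicitly deny writes at the database level
    DENY INSERT, UPDATE, DELETE TO report_reader;
    GO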
