Tune Database Settings - chef - sql-server

I am trying to install SQL Server 2012 using Chef; below is my recipe. My task is to:
- increase the buffer size (results of common queries are returned slowly), and
- decrease the transaction retry interval (transactions are failing frequently).
Could anyone give me some insight and tell me how to do this?
include_recipe 'chocolatey::default'

chocolatey 'mssqlserver2012expressadv --allow-empty-checksums' do
  action :install
end

You can use a Chef template to generate a Configuration_File.ini that fits your needs and then use it for your installation. So, assuming you have a valid configuration template under the templates directory:
template '/foo/bar/zaz/config_file.ini' do
  source 'your.template.ini.erb'
  variables(
    variable1: node['attribute1'],
    # ...
    variableN: node['attributeN']
  )
  action :create
end
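A minimal sketch of what your.template.ini.erb could contain; the variable-to-key mapping here is purely illustrative, though the INI keys shown are standard SQL Server unattended-setup options:

; your.template.ini.erb -- hypothetical contents
[OPTIONS]
ACTION="Install"
FEATURES=SQLENGINE
INSTANCENAME="<%= @variable1 %>"

Note that runtime tuning such as memory or retry settings is generally not part of the setup configuration file; those are usually applied after installation, for example by running sp_configure statements from a subsequent execute resource.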
However, I am afraid that your second request (dynamically setting parameters depending on SQL Server load) is not what Chef is intended for, since it goes against Chef's idempotent design; you can read more about idempotence in the Chef documentation.
If you are going to manage several SQL servers for different purposes, try looking at Chef Roles and Chef Environments, so you can override the configuration template to match the needs of each solution you deploy.
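For example, a role could pin different attribute values per class of server; a hypothetical sketch:

# roles/reporting_sql.rb -- hypothetical role overriding the template inputs
name 'reporting_sql'
description 'SQL Server nodes serving reporting workloads'
override_attributes(
  'attribute1' => 'ReportingInstance'
)
run_list 'recipe[your_cookbook::default]'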

Related

SSIS Package Error: single UPDATE in an Execute SQL Task

I am troubleshooting an error in a package.
Update MYTABLE for MYCOLUMN (REF to task name):Error: Executing the query "..." failed with the following error: "Invalid column name 'MYCOLUMN'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
I have verified that the table and column exist, and the field length is more than sufficient: the value needs 14 characters where the column is declared as varchar(250).
I have verified the script works on the server in SSMS outside of the context of the package.
I have verified the connection and database in the package is as I expect.
Is there a way to verify this on the server? I did try to look at the Connection Managers tab on the package configuration itself, i.e. in the Integration Services Catalogs->SSISDB->solutionfolder->..->package.dtsx->Configure context menu, but it is empty.
Any ideas on how to troubleshoot?
Just to add more context: the package contains 27 other tasks. Nine tasks in a row are linked to this one, but all are set to "on completion", and all seem to be doing work independent of the others. One task is a loop doing work, and the rest are single independent tasks. So I don't know at this stage whether it is perhaps a cascading connection issue; I am just reading what the log says.
I kicked off the package at 9:54am; the timestamp on the error log says 11:45am, so the error was reported nearly two hours into the run.
I would suggest the following to troubleshoot the issue.

Keep only this task enabled and disable all other tasks, so that you can focus on this issue specifically. That will tell you whether the connection is working without issues.

Edit the task and check whether the parameters are set properly. Different providers have different ways of setting parameters, so check again that the parameters are correct; see the Execute SQL Task documentation.

One more thing: you may be pointing the package to a different connection than the one you used in SSMS. It would then work in SSMS while the connection used by the package does not yet have the schema changes applied.
I finally figured it out before I read the previously offered suggestions, so I will give some credit if I can! FYI: we have a lot of dev servers. I clicked on the overview hyperlink in the All Executions log and it named another server. I also found the connection on the job calling the package, not on the package itself, so I have learnt something today. Anyhow, the job said one server but the overview said another, so again I was back to square one scratching my head.
Then I decided to open the connection manager on the job and select the field without making any change; rather than cancelling, I clicked OK without thinking about it and noticed the field changed to bold face. So I am assuming that if you make a manual change on the server in SSMS, it shows up in bold, which is kind of useful. I can only assume this is an SSMS, SSIS, or VS deployment bug: the deployment does not overwrite the previous connection, although the SSMS interface says otherwise. Perhaps somebody can shed some light. Having not checked the server before I made a change and deployed it, I have no idea whether the previous settings had been changed manually by someone or whether the connection in the package had been changed and deployed. Anyhow, the job history shows it had been failing for a while, so it wasn't me; whoever made that earlier change either didn't figure it out, didn't bother, didn't know how, or didn't notice. Anyhow, it is pointing to the correct server now!

Oracle 11g max login fail attempts workaround

My problem starts with a situation in which I can't really modify anything in the database, and my project specialist has limited time to help me. Here is the thing:

My user in the Oracle database has an older schema than the actual production one; my section works on a stable, older version. After every release we keep hitting this issue: something (maybe on Jenkins, maybe not) automatically tries to update our database to a version we don't want. We tried to resolve it by changing the user's password, but that produced a new issue: the automated job keeps trying to log in, and when it gets a wrong-password error, it simply tries again. Oracle 11g has a limit of 10 failed login attempts, after which it locks the whole account, the same account our application server uses to connect to the database.

We cannot investigate this by turning on auditing of failed logins, because auditing consumes database space and our DBA has not allowed it: if we exceed the space limit (about 11 GB), the whole database will be dead, and our project is not important enough to risk that. Another problem is that the person who probably set up the offending scripts no longer works here.

Our workaround was to manually unlock the account so that the application server could connect, and then wait a few seconds for it to get locked again (the app server's existing connection remained stable). It is stupid, you must admit, and the problem is that when the connection drops for any reason, the app server will not re-establish it automatically; we have to do it manually, which is not a solution. I have reconsidered it all again: my DBA has no time to help me, and I have no tools or access rights to investigate where this script (or whatever else is causing the problem) is being executed. So I started thinking: what if we set the limit on failed login attempts to unlimited? Would this decrease the performance of the database? Would it create any new problems? Or maybe the solution would be to change PASSWORD_LOCK_TIME to a small value? I am asking for arguments I could give my DBA to convince him to adopt one of these new workarounds, so that I can get back to working on code instead of on this database problem.
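For reference, both settings mentioned above are profile limits. A minimal sketch, assuming the application account gets a dedicated profile (creating one avoids changing the limit for every user on the instance; the profile and user names are hypothetical):

-- Option 1: never lock the account on failed logins
CREATE PROFILE app_profile LIMIT
  FAILED_LOGIN_ATTEMPTS UNLIMITED;

-- Option 2: keep the attempt limit but auto-unlock after one minute
-- (PASSWORD_LOCK_TIME is expressed in days; 1/1440 of a day = 1 minute)
ALTER PROFILE app_profile LIMIT PASSWORD_LOCK_TIME 1/1440;

-- Attach the profile to the account the app server uses
ALTER USER app_user PROFILE app_profile;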

cannot connect to database after putting tasks in sequence container

I had a package that worked perfectly until I decided to put some of its tasks inside a sequence container (more on why I wanted to do that: How to make a SSIS transaction in my case?).
Now I keep getting an error:
[Execute SQL Task] Error: Failed to acquire connection "MyDatabase". Connection may not be configured correctly or you may not have the right permissions on this connection.
Why could this be happening, and how do I fix it?
I started writing my own examples to reply to your question. Then I remembered that I met Matt Mason, the Microsoft Program Manager for SSIS, when I spoke at a SQL Saturday in New Hampshire.
Since I spent three years between 2009 and 2011 writing nothing but ETL code, I figured Matt had an article out there.
http://www.mattmasson.com/2011/12/design-pattern-avoiding-transactions/
Here is a high-level summary of the approaches and of the error you found.
[ERROR]
The error you found is related to MSDTC having issues. MSDTC must be configured and working correctly; a common culprit is a firewall. Check out this post.
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/3a5c847e-9c7e-4628-b857-4e6edaa7936c/sql-task-transaction-required?forum=sqlintegrationservices
[SOLUTION 1] - Use transactions at the package, task, or container level.
Some data providers do not support MSDTC, and some tasks do not support transactions. This approach may also hurt performance, since you are adding a new layer to support two-phase commits.
http://technet.microsoft.com/en-us/library/aa213066(v=sql.80).aspx
[SOLUTION 2] - Use the following tasks.
A - BEGIN TRAN (EXECUTE SQL)
B - YOUR DATA FLOW
C - TEST THE RETURN CODE
1 - GOOD = COMMIT (EXECUTE SQL)
2 - FAILURE = ROLLBACK (EXECUTE SQL)
You must have the RetainSameConnection property set to True on the connection.
This forces all calls through one session (SPID), so all transaction management happens on the server, as sketched below.
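A minimal sketch of the SQL behind each Execute SQL Task in this pattern (task layout per the list above; RetainSameConnection=True is what makes the statements share one session):

-- Task A (Execute SQL): open the transaction
BEGIN TRANSACTION;

-- Task B is the data flow, running on the same connection manager.

-- Task C, success path (Execute SQL):
COMMIT TRANSACTION;

-- Task C, failure path (Execute SQL):
ROLLBACK TRANSACTION;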
[SOLUTION 3] - Write all your code so that it is restartable. This does not mean you have to go out and use checkpoints.
One solution is to always use UPSERTs: insert new data, update old data, and implement deletes as just a flag in the table. This pattern allows a failed job to be executed many times while still achieving the same final state.
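For instance, a T-SQL upsert along these lines (table and column names are hypothetical):

-- Hypothetical staging-to-target upsert; re-running it yields the same end state
MERGE dbo.Customer AS tgt
USING dbo.CustomerStaging AS src
    ON tgt.CustomerId = src.CustomerId
WHEN MATCHED THEN
    UPDATE SET tgt.Name = src.Name,
               tgt.IsDeleted = src.IsDeleted   -- soft-delete flag instead of DELETE
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerId, Name, IsDeleted)
    VALUES (src.CustomerId, src.Name, src.IsDeleted);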
Another solution is to handle all error rows by placing them into a hospital table for manual inspection, correction, and insertion.
Why not use a database snapshot (it keeps track of just the changed pages)? Take a snapshot before the ETL job. If an error occurs, restore the database from the snapshot. The last step is to remove the snapshot from the system to clean house.
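A sketch of that lifecycle, with hypothetical database and file names (NAME must match the logical name of the source database's data file):

-- Before the ETL job: create the snapshot
CREATE DATABASE MyDb_Snapshot
ON (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_Snapshot.ss')
AS SNAPSHOT OF MyDb;

-- On failure: revert the database to the snapshot
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Snapshot';

-- Last step either way: clean up
DROP DATABASE MyDb_Snapshot;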
In short, I hope these ideas are enough to help you out.
While the transaction option is nice, it does have some downsides. If you need an example, just ping me.
Sincerely
J
What package protection level are you using? Don't Save Sensitive? Encrypt Sensitive with User Key? I'd recommend changing it to Encrypt Sensitive with Password and entering a password; that way the password won't disappear.
Have you tried testing the connection to the database in the connection manager?

Using nHibernate in a windows service

I want to use NHibernate in a Windows service. When the system boots, it might start my service before the database. In that case, the NHibernate configuration fails and the service crashes. So now I'm wondering how I can check whether the database service has already started; if it has not, my service should wait a bit and try again later.
If your service always runs on the same machine as SQL Server, you should use ServiceInstaller.ServicesDependedOn to tell the Windows Service Control Manager (SCM) that you depend on 'MSSQLSERVER' (the name of the service that runs SQL Server).
From MSDN:
A service can require other services to be running before it can start. The information from this property is written to a key in the registry. When the user (or the system, in the case of automatic startup) tries to run the service, the Service Control Manager (SCM) verifies that each of the services in the array has already been started.
ServiceInstaller is the class used by InstallUtil when it installs your service. Other installation packages, including InstallShield, also support this Windows functionality, as does the equivalent SC command.
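A minimal sketch of such an installer; the service name is hypothetical, and 'MSSQLSERVER' assumes the default SQL Server instance (named instances register as 'MSSQL$InstanceName'):

using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    public ProjectInstaller()
    {
        var process = new ServiceProcessInstaller { Account = ServiceAccount.LocalSystem };
        var service = new ServiceInstaller
        {
            ServiceName = "MyNHibernateService",   // hypothetical name
            StartType = ServiceStartMode.Automatic,
            // SCM will not start this service until SQL Server is running
            ServicesDependedOn = new[] { "MSSQLSERVER" }
        };
        Installers.Add(process);
        Installers.Add(service);
    }
}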
So your service will start only after SQL Server is already running. But even in this case, it might still be a good idea to offload any potentially long-running startup procedure to a background thread. Do as little as possible in the OnStart method; ideally you would just spawn a new initialization thread to take care of building the NHibernate session factory. If for some reason you still want to do this in OnStart, you should consider retrying the NHibernate initialization and calling ServiceBase.RequestAdditionalTime to avoid:
Error 1053: The service did not respond to the start or control request in a timely fashion.
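A rough sketch of the background-thread approach; the retry delay is arbitrary, and the factory-building call stands in for whatever your configuration actually does:

using System;
using System.Threading;
using NHibernate;
using NHibernate.Cfg;

public partial class MyService : System.ServiceProcess.ServiceBase
{
    private volatile ISessionFactory _sessionFactory;

    protected override void OnStart(string[] args)
    {
        // Return from OnStart quickly; build the session factory in the background.
        var init = new Thread(() =>
        {
            while (_sessionFactory == null)
            {
                try
                {
                    _sessionFactory = new Configuration().Configure().BuildSessionFactory();
                }
                catch (Exception)
                {
                    // Database not reachable yet; wait and retry.
                    Thread.Sleep(TimeSpan.FromSeconds(15));
                }
            }
        }) { IsBackground = true };
        init.Start();
    }
}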
Ideally your service should not depend on database availability at all, because the database might be running on a remote machine. The service is an 'always on' process that should tolerate intermittent database connectivity issues.
No clue whether there are better ways, but in your service startup, check the system uptime. If it is less than, say, 5 minutes, wait for (5 minutes - uptime) and after that start the rest of the service as you normally would.
See the following question: Calculating server uptime gives "The network path was not found".
However, this is not a solution for when your service tries to connect to a SQL Server that is down. If that happens, you want to handle the exception and actually be notified that SQL Server is down; it is very unlikely that you want the service to keep retrying without you being aware of it.
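A sketch of that startup delay, assuming Environment.TickCount is an acceptable uptime source (it wraps after roughly 25 days, so a production version would want a more robust counter):

using System;
using System.Threading;

static class StartupDelay
{
    // Block until the machine has been up for at least five minutes.
    public static void WaitForMinimumUptime()
    {
        var uptime = TimeSpan.FromMilliseconds(Environment.TickCount);
        var minimumUptime = TimeSpan.FromMinutes(5);
        if (uptime < minimumUptime)
            Thread.Sleep(minimumUptime - uptime);
    }
}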
You could use the ServiceController class and call its static GetServices() method to get the list of services. It returns an array of services; find the right one and check its status.
See ServiceController on MSDN
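For instance, a check along these lines (again assuming the default instance name 'MSSQLSERVER'):

using System.Linq;
using System.ServiceProcess;

// True if the SQL Server service exists and is currently running
bool sqlServerRunning = ServiceController.GetServices()
    .Any(s => s.ServiceName == "MSSQLSERVER"
           && s.Status == ServiceControllerStatus.Running);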
Currently I make sure I can establish a connection to the database and run a default query (configurable); if this succeeds, I proceed to start the service.
What I've found is that in some cases, even when the MSSQL service has started, that doesn't guarantee you can connect to it and execute queries against it.

Problem calling stored procedure from another stored procedure via classic ASP

We have a classic ASP application that simply works, and we have been loath to modify the code lest we invoke the wrath of some long-dead Greek gods.
We recently had the requirement to add a feature to the application. The feature implementation is really just a database operation and requires minimal change to the UI.
I changed the UI and made the minor modification to submit a new data value to the sproc call (sproc1).
In sproc1, which is called directly from ASP, we added a new call to another sproc, sproc2, which happens to be located on another server.
Somehow this does not work via our ASP app, but it works in SQL Management Studio.
Here's the technical details:
SQL 2005 on both database servers.
Sql Login is authenticating from the ASP application to SQL 2005 Server 1.
Linked server from Server 1 to Server 2 is working.
When executing sproc1 from SQL Management Studio - works fine. Even when credentialed as the same user our code uses (the application sql login).
sproc2 works when called independently of sproc1 from SQL Management Studio.
VBScript (ASP) captures an error, which is emitted in the XML back to the client. The error number is 0 and the error description is blank, both from the ADODB.Connection object and from whatever Err.Number/Err.Description yields in VBScript on the ASP side.
So with no error details, and no way to reproduce it (it works through SQL Mgmt Studio), does anyone know what the issue is?
Our current plan is to break down and dig into the code on the ASP side and make a completely separate call to Server 2.sproc2 directly from ASP rather than trying to piggy-back through sproc1.
Have you got SET NOCOUNT ON in both stored procedures? I had a similar issue once, and whilst I can't remember exactly how I solved it, I know that had something to do with it!
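A minimal illustration of the idea (the procedure names are the question's; the body is hypothetical):

ALTER PROCEDURE dbo.sproc1
AS
BEGIN
    SET NOCOUNT ON;  -- suppress "N rows affected" messages, which classic ADO
                     -- can interpret as extra (empty) resultsets
    EXEC Server2.MyDatabase.dbo.sproc2;  -- hypothetical linked-server call
END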
You could be suffering from the double-hop problem.
The double-hop issue occurs when an ASP/ASPX page tries to use resources located on a server other than the IIS server.
Windows NT Challenge/Response does not support double-hop impersonation: once the credentials are passed to the IIS server, those same credentials cannot be passed on to a back-end server for authentication.
You should verify the attempted second connection using SQL Profiler.
Note that with your manual testing you are not authenticating via IIS; it's only when you initiate the SQL via the ASP/X page that this problem manifests.
More resources:
http://support.microsoft.com/kb/910449
http://support.microsoft.com/kb/891031
http://support.microsoft.com/kb/810572
I had a similar problem and solved it by setting NOCOUNT ON and removing PRINT commands.
My first reaction is that this might not be an issue of calling cross-server, but one of calling a second proc from a first, and that this might be what's acting differently in the two different environments.
My first question is this: what happens if you remove the cross-server aspect from the equation? If you could set up a test system where your first proc calls your second proc, but the second proc is on the same server and/or in the same database, do you still get the same problem?
Along these same lines: In my experience, when the application and SSMS have gotten different results like that, it has often been an issue of the stored procedures' settings. It could be, as Luke says, NOCOUNT. I've had this sort of thing happen from extraneous PRINT statements in the code, although I seem to remember the PRINTed value becoming part of the error description (very counterintuitively).
If anything is returned in the Messages window when you run this in SSMS, find out where it is coming from and make it stop. I would have to look up the technical terms, but my recollection is that different querying environments have different sensitivities to "errors", and that a default connection via SSMS will not throw an error at certain times when an ADO connection from a scripting language will.
One final thought: in case it is an environment thing, try different settings in your ASP page's connection string. E.g., if you have an OLEDB connection, try ODBC. Try the native and non-native SQL Server drivers. Check which connection-string options your provider supports, and try any that seem worth trying.
Example code might help :) Are you trying to return two tables from the stored procedure? I don't think ADO 2.6 can handle multiple tables being returned.
I did consider that (double-hop), but what is the difference between a sproc-in-a-sproc call like I am describing and a typical cross-server join via INNER JOIN? Both would be executed on Server1, using the linked server credentials, and authenticating to Server2.
Can anyone confirm that calling a sproc cross-server is different from doing a join on data tables? And why?
If the linked server is configured with a SQL account, is that considered a double-hop (since what you refer to is an NTLM double-hop)?
In terms of whether multiple resultsets are coming back: no. Both Server1.Sproc1 and Server2.Sproc2 would be "ExecuteNonQuery()" calls in the .NET world and return nothing (no resultsets and no return values).
Try checking the database permissions for the user specified in the connection string.
Log in to the database in SQL Management Studio with the same user name used in the connection string.
Create some temporary tables to record intermediate values and exceptions, since that can be an effective way of debugging your application.
Can I just check: you made the addition of sproc2, and prior to that it was working fine for ages?
Could you not change where you call sproc2 from? Rather than calling it from inside sproc1, can you call it from the ASP page? That way you control the authentication to SQL Server in the code, and don't have to rely on setting up any trusts or shared remote authentication on the servers.
How is your linked server set up? You generally have some options as to how it authenticates to the remote server, including logging in as the currently logged-in user or always using a specified SQL login. Have you tried setting it to always use a specific account? That should eliminate any possible permission issues in calling the remote procedure.
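For reference, mapping every local login to one fixed remote SQL login looks roughly like this (server and credential names are hypothetical):

-- Map all local logins on Server1 to a fixed SQL login on the linked server
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'Server2',      -- linked server name
    @useself     = 'FALSE',         -- do not pass through the caller's credentials
    @locallogin  = NULL,            -- applies to all local logins
    @rmtuser     = N'app_login',    -- hypothetical remote SQL login
    @rmtpassword = N'app_password';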
