Need advice on Tomcat 6 and Oracle 11g app deployment

This is my very first question in this forum, so please bear with me for any mistakes or incompleteness.
We have a web application deployed under Tomcat 6.0.20 with Oracle 10g, and it ran perfectly fine without issues for the last year or so. This week we migrated to a new server environment. The ONLY things that changed were Tomcat (now 6.0.35) and Oracle (now 11g). I am using the same ojdbc14.jar for database connection pooling.
While the application seems to run fine, I am seeing full JVM thread dumps appearing in catalina.out roughly every 10 minutes (even when there is no apparent activity on the application side).
The application's performance doesn't seem to be impacted so far, but I wanted to know if I should be concerned about these thread dump messages.
Both Tomcat and Oracle are running under Solaris 10 (on separate physical boxes).
Any advice will be very helpful. Let me know if a thread dump snapshot would help with the analysis.

I believe that this is a known problem with the combination of 11g and ojdbc14.jar.
You should be using ojdbc6.jar - that may or may not solve your problem, but it's the first thing that I'd try before looking elsewhere.
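If it helps, here's a quick standalone check I'd run first (a sketch only; the host, SID and credentials are placeholders, not anything from your setup) to confirm the new jar connects before touching the Tomcat pool:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OracleDriverCheck {
    public static void main(String[] args) throws Exception {
        // Loading the driver class explicitly fails fast if the wrong jar is on the classpath
        Class.forName("oracle.jdbc.OracleDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "appuser", "secret"); // placeholders
        try {
            Statement st = conn.createStatement();
            ResultSet rs = st.executeQuery("SELECT banner FROM v$version");
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        } finally {
            conn.close();
        }
    }
}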
BTW, if you're upgrading tomcat, why 6.0.35 and not 7.x - 7.0.27 is out now?

Related

What's the best way for me to run SSIS on my Synology NAS?

I enjoy working with SQL Server and SSIS at my current job, and wanted to play around with it and learn some of the features I don't really use much at work in my own home environment. I have a Synology NAS with 32GB RAM, and it comes with support for Docker and a whole bunch of virtualization software. I was able to spin up an MS SQL Server 2019 container under a development license in no time.
The problem is that SSIS is not currently supported on Docker. I've seen a few workarounds online (https://andyleonard.blog/2019/04/ssis-docker-and-windows-containers-part-4-adding-an-ssis-catalog-attempt-2/), but nothing that seemed very clean. Looking through the documentation, I see that SSIS is supported on RedHat, but doesn't support having an SSIS Catalog database. My end goal would be to be able to create packages on my personal laptop, then deploy them to SQL Server to have them run on a schedule using SQL Server Agent (another thing not supported in the container).
It looks like I can spin up a VM of CentOS (which is more or less RedHat), but would lose a lot of the features I really like with the Windows version of SSIS. I can deal with that, but I'm curious how I could publish packages from my computer, if they need to be on the Linux file system. Yet at the same time, if I'm going this route, should I not just use CentOS as my main database and scrap the container I created?
What options are available to me if I want to work with SSIS? More specifically, how can I get a working version of it on Synology?
You are nearly there.
I'm setting up the same thing this weekend, and I'm going to use Virtual Machine Manager to spin up Windows 10 instead of CentOS.
That's not a problem: you don't need a Windows license if you use the Windows Insider ISO (there is also an Enterprise edition).
And, like you, I'm going to use Docker on the Synology for MS SQL Server.
As you have a good 32GB Synology, I wouldn't be afraid to allocate 16GB to the Windows VM (give it 12GB or 8GB if you think you are going to keep it up day and night).
The real constraint is the CPU cores you have to allocate. I assume you have a quad-core, so give 1 or at most 2 cores to that VM.
This way you can develop the SSIS packages on your laptop and push them to the VM or to Docker, or install Visual Studio and SSIS on the VM and make that your development environment over RDC.
If you want to go one step further, you can also use your Synology to set up AD for your home.
(I'm not going to do that)
One last thing: CPU cores and RAM are important, but if you want to see your VM fly, swap the HDD for an SSD.
As for the rest, I wrote an article about how useless it is to change the thermal paste on your Synology.

SQL server temporary connection failure from particular clients

We have a client deployment of our software that is showing intermittent SQL server connection failures, and we are struggling to understand them.
Our system consists of a SQL Server DB (2012) and 14 identical engines, each installed on a Windows 2012 VM. Each of these was created from the same template so they should be identical. The engines consist of a Windows service that connects to the DB on startup by reading a single row from a table. If the connection fails they will wait a few seconds and try again, until they get a connection.
In this particular case, the VMs were all rebooted due to a Windows Update. (The SQL server had the update/reboot about 12 hours before). They came online within a few minutes of each other. 12 of the engines started up without any problem. Two of them, however, failed to connect to the DB with:
"The underlying provider failed on Open."
Those two engines then started to poll, and continued to get this error for many hours. The rest of the engines had started up and were fine. We have a broker service too that was accessing the DB throughout and showed no connection issues.
When the client noticed this issue, they restarted the engine services on the two problem VMs, and the two engines connected to the DB just fine.
We are trying to understand what could have happened here. I guess my main questions are:
What could be an explanation of why 12 connections succeed and two fail? There's absolutely no difference as far as we know between the engines. The query itself is very simple.
Why did the connection continue to fail for those two engines until the service was restarted? This suggests to me that there is some process-level failed state that is only cleared when restarting the services. I've looked at the code to see if it was reusing the connections. It uses Entity Framework to read the single table row, and we create a fresh DbContext each time. I don't understand how this could go wrong.
We noted that there was a CheckDb operation in progress on the DB around the time the services were coming up, and we wondered if this could be related to the issue. However, the client says that this runs every night and hasn't caused problems in the past. And it wouldn't explain why the two engines never recovered until their services were restarted.
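If it helps, this is the sort of check we are planning to run against the SQL Server error log (a sketch; the exact message text may differ) to see whether the failed connections ever reached the server, and when CheckDb actually finished:
-- Failed logins in the current error log (if nothing shows up, the failures were probably network-side)
EXEC sp_readerrorlog 0, 1, N'Login failed';
-- When the nightly CHECKDB completed
EXEC sp_readerrorlog 0, 1, N'DBCC CHECKDB';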
Thanks in advance for any help.

Database Engine Tuning Advisor Crashes Constantly

Microsoft SQL Server Database Engine Tuning Advisor seems to crash constantly for me... on multiple different servers, for multiple different databases, and across multiple different versions of SQL Server (and DTA)...
I know this is probably a ridiculous question and not of the quality one would expect on Stack Overflow :( but has anyone else experienced this?
I was having the same problem, as recently as SQL Server 2014 with Service Pack 2. I had to use a two-step approach to get it working again:
1. Install both the latest service pack and the latest cumulative update for that service pack. This helped with Database Engine Tuning Advisor, but it was still crashing for me (see step 2).
2. I read that "hypothetical indexes" are added to your database while Database Engine Tuning Advisor is running. If it crashes and does not complete successfully, the hypothetical indexes are not dropped, and the recommendation is to drop them from your database yourself.
The combination of installing the latest service packs and cumulative updates, along with dropping the hypothetical indexes, seems to have worked for me.
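For reference, this is roughly how the leftover hypothetical indexes can be found and removed (a sketch; your index and table names will differ):
-- Hypothetical indexes left behind by the Tuning Advisor are flagged in sys.indexes
SELECT QUOTENAME(s.name) + '.' + QUOTENAME(t.name) AS table_name,
       i.name AS index_name
FROM sys.indexes i
JOIN sys.tables t ON t.object_id = i.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE i.is_hypothetical = 1;

-- Then drop each index returned above, for example:
-- DROP INDEX [_dta_index_name_here] ON [dbo].[SomeTable];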
I've experienced this behavior many times, and the way to fix it was to update my instances to the latest service packs.
Also, the first release of the SQL 2012 Tuning Advisor was crashing for various reasons, but updating to the latest SP2 fixed this issue.
Side note: the plan cache (a new feature in SQL 2012) may be helpful until you fix this problem permanently.
I was experiencing the same issue when running analysis on a database that contained encrypted stored procedures. I removed the encryption before I captured my profiler trace workload, then re-ran the analysis and the issue was resolved.
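In case it is useful to anyone else, a query along these lines (a sketch) lists which procedures are encrypted before you capture the workload:
SELECT name
FROM sys.procedures
WHERE OBJECTPROPERTY(object_id, 'IsEncrypted') = 1;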

Classic ASP pages using COM object are extremely slow

First of all, I'd like to say upfront that some of the details on this question may be a little vague as it relates to commercial code that I obviously cannot divulge here. But anyway, here goes.
I am currently dealing with a .NET-based web application that also has pages of classic ASP in certain areas; this application has a back-end MSSQL database. The data access for the classic ASP areas of the application is handled by a 32-bit DLL added into Component Services on the IIS server. The application has recently been changed so that all components (IIS authentication, the .NET application pool and the data access DLL added to Component Services) now run under a single Windows account.
The problem I have found recently is that whilst .NET data access to the database is running at a perfectly normal speed, the classic ASP data access that goes via the COM+ component is extremely slow. Unfortunately, this doesn't seem to be throwing any errors anywhere, not in the IIS logs and not in the Event Viewer.
The COM+ component is instantiated in a standard fashion in an include file that is referenced on any ASP page needing data access, e.g.:
var objDataProvider = Server.CreateObject("DataAccess.DataProvider");
The ASP pages then use methods on the COM+ component to execute queries, such as:
objDataProvider.Query(...)
objDataProvider.Exec(...)
I have enabled failed request tracing in IIS for any ASP pages that are taking longer than 20s to process and can see that the majority of the calls are dealing with record sets as seen in the examples below.
if(rsri.EOF)
and
theReturn = rsData(0);
The two examples above both take over 9s to run. Profiling the SQL that runs in the background shows it completes in a matter of milliseconds, which suggests the problem lies either with the COM+ component returning the data to the ASP, or with the ASP being slow to process it.
Windows Server 2008 R2 SP1 (64 bit)
Intel Xeon CPU
2GB RAM
Has anyone experienced a similar situation to this before, or is anyone able to point me in the direction of any further diagnostics that might help?
After days of messing around with this, I've finally found the answer and it's the simplest of issues solved by going back to basics.
The web server hosting the application was in a DMZ and required a hosts file entry for the SQL server. Once this was added, the application was flying along again.
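For anyone wondering, the entry is just a name-to-IP mapping in C:\Windows\System32\drivers\etc\hosts on the web server (the address and name here are made-up placeholders, not the real ones):
10.0.0.25    SQLSERVER01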
You'll have to dig deeper by yourself. I have used ANTS Performance Profiler before to improve the performance of .NET applications. It seems it can profile COM+ as well.
http://www.red-gate.com/products/dotnet-development/ants-performance-profiler/walkthrough
This, or some other similar profiling tool, is the only option I can see for solving your problem.
I'm having the same issue. I have migrated an IIS server from W2003/SQL Express 2005 to W2012R2/SQL Server 2014 Express, and the ASP scripts are twice as slow.
The SQL queries inside are executed at the same or better speed, and I tested the server locally to rule out internet bandwidth problems. I think it may be a DNS issue.
What change did you make in the hosts file? And what SQL connection string or connection type did you use?
Thanks

VB.NET and SQL Server Express application deployment

I have developed an application using VB.NET with SQL Server Express as the database back end.
The application has 5 user profiles (each user profile provides different services).
Deployment requirements:
The application is to be deployed on a LAN with 10-20 machines.
Any user profile can be accessed from any machine.
Any changes to the database entries should be reflected on all machines.
I am confused about how I should achieve this deployment. According to my research:
1. The database should be deployed on one machine. This machine will act as the database server.
My problem(s):
I am familiar with accessing databases on the local machine, but how do I access a remote database?
Is the connection string the only thing that needs to be addressed or are there any other issues too?
Do I need to install SQL Server on all machines or only on the server machine?
Do I have to deal with concurrency issues (multiple users accessing/modifying same data simultaneously) or is it handled by the database engine?
2. The application can be deployed in 2 ways:
i. Storing the executable on a shared network drive on the server and providing a shortcut on the desktop of each machine.
ii. Storing the executable itself on each machine.
My Problem(s):
How does approach 1 work? (One instance of the executable running on multiple machines?)
In approach 2, will the changes in database entries be reflected on all machines appropriately?
In approach 2, if there are changes to the application, is there any method to update it on all machines (other than redeploying it on each machine)?
Which approach is preferable?
Do I need to install the .NET Framework on all machines?
Will I have to make any other system changes (firewall, security, permissions)?
If given a choice to install the operating system on each machine, which version of Windows is preferable for such an application environment?
This is my first time deploying a multi-user database application on a network. I'll be very grateful for any suggestions, advice, references, etc.
Question 1: You will need to create SQL Server 'roles' for each of your 'profiles'. A given user will be assigned one or more of those 'roles'. Each of your tables, views, stored procedures, and triggers will need to be assigned one or more roles. This is a messy business; this is why DBAs get paid lots of money to lounge around most of the time (I'm kidding, don't vote me down).
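To give you a rough idea (a sketch only; the role, table and login names are placeholders, and your permissions will differ), the per-profile setup looks something like this:
-- One database role per application profile
CREATE ROLE profile_clerk;
GRANT SELECT, INSERT, UPDATE ON dbo.Orders TO profile_clerk;

-- Map a Windows login to a database user and add it to the role
CREATE LOGIN [DOMAIN\jsmith] FROM WINDOWS;
CREATE USER [DOMAIN\jsmith] FOR LOGIN [DOMAIN\jsmith];
EXEC sp_addrolemember 'profile_clerk', 'DOMAIN\jsmith';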
Question 2: If you 'remote in' to a server, you'll get the server screens, which are quite a bit duller than the workstation presentation. Read up on 'ClickOnce'; this gives you the ability to detect an updated application on a host and automatically deploy the update to the user's machine. It gets rid of the rather messy business of running around to 20 machines installing upgrades every time you fix something.
As you have hands-on access to all the machines, your task is comparatively simple.
Install SQL Express on your chosen db server. You should disable the 'hide advanced options' in the installer; this will allow you to enable TCP/IP and the SQL Browser service; also you may want mixed-mode authentication - depends on your app and whether the network is domain or peer-to-peer. The connection string will need to be modified as you are aware; also the default configuration of Windows firewall on the server will block access to the db engine - you will need to open exceptions in the firewall for the browser service and SQL server itself. Open these as exceptions for the exes, not as port numbers etc. Alternatively, if you have a firewall between your server and the outside world, you may decide to just turn off the firewall on the server, at least on a temporary basis while you get it working.
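Purely as an illustration of the connection string change (the server, instance and database names here are placeholders), it is usually just the Data Source part that moves from the local instance to the database server:
Local development:  Data Source=.\SQLEXPRESS;Initial Catalog=AppDb;Integrated Security=True
Workstations (LAN): Data Source=DBSERVER\SQLEXPRESS;Initial Catalog=AppDb;Integrated Security=True
With mixed-mode authentication and a SQL login, the Integrated Security=True part is replaced by User ID and Password keywords.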
No, you don't need to install any SQL Server components on the workstations.
Concurrency issues should be handled by your application. I don't want to be rude but if you are not aware of this maybe you are not yet ready for deploying your app to production. Exactly what needs to be done about concurrency depends on both the requirements of your application and the data access technology you are using. If your application will be used mostly to enter new records and then just read them later, you may get away without too much concurrency-handling code; it's the scenario where users are simultaneously editing existing records where the problems arise - but you need to have at least basic handling in place.
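To make that concrete, here is a sketch (placeholder table and column names, nothing from your app) of the most basic optimistic-concurrency approach: each row carries a rowversion column, and an update only succeeds if the row has not changed since it was read.
-- Add a version column that SQL Server updates automatically on every change
ALTER TABLE dbo.Customers ADD RowVer rowversion;
GO
DECLARE @CustomerId int, @NewPhone varchar(20), @RowVerReadEarlier binary(8);
SET @CustomerId = 42;            -- placeholder values
SET @NewPhone = '555-0100';

-- The client keeps the RowVer value it read along with the row...
SELECT @RowVerReadEarlier = RowVer FROM dbo.Customers WHERE CustomerId = @CustomerId;

-- ...and the update only succeeds if nobody changed the row in the meantime
UPDATE dbo.Customers
SET Phone = @NewPhone
WHERE CustomerId = @CustomerId AND RowVer = @RowVerReadEarlier;

IF @@ROWCOUNT = 0
    PRINT 'Row was changed by another user; reload it and let the user retry.';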
Re where to locate the client exe: either of your suggestions can work. Simplest is local installation on each machine using an .msi file; you can place a master copy of the .msi on the server. You can do stuff with login scripts, group policies, etc., or indeed ClickOnce. To keep it simple at this stage I would just install from an .msi onto each machine; it sounds like you have enough complexity to get your head around already.
One copy of the exe on the server can be handled in a more sophisticated manner by Terminal Server, Citrix, etc.
Either way, assuming your app works correctly, yes: all changes will be made against the same DB and visible to all workstations.
Yes, you will need the .NET Framework on all machines; however, it may very well already be there. Different versions of Windows came with different versions of the Framework built in and/or updated via Windows Update; of course it also depends which version you built your exe against.
Right, I hope there is something helpful in that lot. Good luck.
