We are using a dependency to start a Derby network server.
How can I start the Derby network server so that it keeps running? At the moment, the server stops after the build.
As described in my answer here, you can use Derby as your database via the derby-maven-plugin, which I wrote and which is available on GitHub and via Maven Central.
See here for details on how to use it. This basically removes the need for you to start Derby through your tests, and it keeps the database up and running while the tests are executing. In combination with the sql-maven-plugin, you can have a reasonably decent testing environment.
To clarify further: the server does not keep running after the build has finished. However, under target/derby you can find your database, which you can open if you need to investigate the produced data.
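As a sketch, the plugin can be bound to the integration-test phases in your POM roughly like this (the version number is an assumption; check Maven Central for the current one):

```xml
<plugin>
  <groupId>org.carlspring.maven</groupId>
  <artifactId>derby-maven-plugin</artifactId>
  <!-- Assumption: check Maven Central for the current version -->
  <version>1.10</version>
  <executions>
    <execution>
      <id>start-derby</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>start</goal>
      </goals>
    </execution>
    <execution>
      <id>stop-derby</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With a binding like this, the server is started before the integration tests run and stopped afterwards, which is what keeps it up for the whole test run rather than for a single test.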
I have packages deployed on a SQL Server 2008 R2 instance. Recently, we migrated to a new server machine running SQL Server 2012. I configured the packages for project deployment mode, and for 10 days all packages worked smoothly, with execution times in the same range as on the older server.
In the last two days, packages have started to fail. I checked in detail and found that they are taking longer than usual and fail with "Protocol error in TDS stream, communication link failure, and remote host forcibly closed the connection".
When I run a package through SSDT, it completes successfully, but I see the data transfer moving more slowly than I used to, so package execution takes much longer.
I am not sure what has changed. I have searched the internet for possible reasons, checked the server memory and packet size, and tried to match them with the older server, but that did not solve the problem. I suspect SSIS logging may have caused this, but I am not sure how to check it.
Please help me identify the cause of this problem.
**Edit: I enabled logging in SSDT and could see that the majority of the time is spent in the row-transfer steps. Since my package has lookups, I thought the lookups might be slowing it down somehow, so I copied the main query into SSMS and ran it as a normal query on this server.
About 13 lakh (1.3 million) rows were returned in 12 minutes. Then I ran the same query on the old server, where it returned the same 13 lakh rows in less than a minute. So this suggests the problem is related to data transfer and is not specific to the packages themselves.
Can someone help, please?**
Check the connection in your solution: its 'RetainSameConnection' property should be set to 'true'. This can be done both in the SSIS package, under the connection manager's properties, and in the job step properties (Configuration > Connection Managers).
Link: http://www.sqlerudition.com/what-is-the-retainsameconnection-property-of-oledb-connection-in-ssis/
I will be working as QA on a project which is upgrading Sybase ASE to version 16.
I have not worked on RDBMS upgrade projects before, and I need assistance in drafting a test strategy. Could anyone please provide me guidance on:
- What steps are involved in upgrading the Sybase ASE version?
- From a system-test point of view, what sorts of tests should we be running? (We are already running regression tests on the applications which are connected to the DBs, but apart from that, what else should we be validating?)
I believe you also need to check post-upgrade performance, i.e. run a set of load tests against the pre-upgrade configuration and re-run the same set after the upgrade to ensure that the upgrade didn't cause performance degradation.
You can use, for example, Apache JMeter, which can execute arbitrary SQL queries in a multithreaded manner via the jTDS JDBC driver. See "The Real Secret to Building a Database Test Plan With JMeter" article for details.
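As a small sketch, the JDBC URL you would put into JMeter's JDBC Connection Configuration element when using jTDS against Sybase ASE follows the pattern below (the host, port, and database names here are placeholders, not real servers):

```python
# Sketch: building the jTDS JDBC URL that a JMeter JDBC Connection
# Configuration element would use against Sybase ASE.
# The host, port, and database name below are placeholders.

def jtds_sybase_url(host: str, port: int, database: str) -> str:
    """Return a jTDS JDBC URL for a Sybase ASE server."""
    return f"jdbc:jtds:sybase://{host}:{port}/{database}"

url = jtds_sybase_url("ase-host.example.com", 5000, "testdb")
print(url)  # jdbc:jtds:sybase://ase-host.example.com:5000/testdb
```

The same jTDS driver also supports SQL Server by replacing `sybase` with `sqlserver` in the URL, so one test plan structure can cover both engines.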
I have developed an application using VB.NET and used SQL SERVER Express as the database back end.
The application has 5 user profiles (each user profile provides different services).
Deployment requirements:
The application is to be deployed on a LAN with 10-20 machines.
Any user profile can be accessed from any machine.
Any changes to the database entries should be reflected on all machines.
I am confused about how I should achieve this deployment. According to my research:
1. The database should be deployed on one machine. This machine will act as the database server.
My problem(s) :
I am familiar with accessing databases on a local machine, but how do I access a remote database?
Is the connection string the only thing that needs to change, or are there other issues too?
Do I need to install SQL Server on all machines or only on the server machine?
Do I have to deal with concurrency issues (multiple users accessing/modifying the same data simultaneously), or is that handled by the database engine?
2. The application can be deployed in 2 ways:
i. Storing the executable on a shared network drive on the server and providing a shortcut on the desktop of each machine.
ii. Storing the executable itself on each machine.
My Problem(s) :
How does approach i work? (One instance of an executable running on multiple machines?)
In approach ii, will changes to database entries be reflected on all machines appropriately?
In approach ii, if there are changes to the application, is there any method to update it on all machines (other than redeploying it on each machine)?
Which approach is preferable?
Do I need to install the .NET Framework on all machines?
Will I have to make any other system changes (firewall, security, permissions)?
If given a choice of operating system for each machine, which version of Windows is preferable for such an application environment?
This is my first time deploying a multi-user database application on a network. I'll be very grateful for any suggestions, advice, references, etc.
Question 1: You will need to create SQL Server 'roles' for each of your 'profiles'. A given user will be assigned one or more of those 'roles'. Each of your tables, views, stored procedures, and triggers will need to be assigned one or more roles. This is a messy business; this is why DBAs get paid lots of money to lounge around most of the time (I'm kidding, don't vote me down).
Question 2: If you 'remote in' to a server, you'll get the server screens, which are quite a bit duller than the workstation presentation. Read up on ClickOnce; it gives you the ability to detect an updated application on a host and automatically deploy the update to the user's machine. This gets rid of the rather messy business of running around to 20 machines installing upgrades every time you fix something.
As you have hands-on access to all the machines your task is comparatively simpler.
Install SQL Express on your chosen db server. You should disable the 'hide advanced options' setting in the installer; this will allow you to enable TCP/IP and the SQL Browser service. You may also want mixed-mode authentication, depending on your app and on whether the network is a domain or peer-to-peer.

The connection string will need to be modified, as you are aware. Also, the default configuration of the Windows firewall on the server will block access to the db engine, so you will need to open exceptions in the firewall for the browser service and for SQL Server itself. Open these as exceptions for the exes, not as port numbers etc. Alternatively, if you have a firewall between your server and the outside world, you may decide to just turn off the firewall on the server, at least on a temporary basis while you get it working.
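The connection string change itself might look something like the sketch below (the machine name, instance name, database, and credentials are all placeholders; locally you would typically have used something like `.\SQLEXPRESS` as the data source, and for a remote server the machine name replaces the dot):

```python
# Sketch: the kind of ADO.NET-style connection string change needed to
# reach a remote SQL Server Express named instance with a SQL login.
# Machine name, instance, database, and credentials are placeholders.

def remote_connection_string(server_machine: str, instance: str,
                             database: str, user: str, password: str) -> str:
    """Build a connection string for a remote SQL Server Express
    named instance, using mixed-mode (SQL) authentication."""
    return (f"Data Source={server_machine}\\{instance};"
            f"Initial Catalog={database};"
            f"User ID={user};Password={password};")

print(remote_connection_string("DBSERVER01", "SQLEXPRESS",
                               "AppDb", "appuser", "secret"))
```

If you use Windows authentication on a domain instead, the `User ID`/`Password` pair would be replaced by `Integrated Security=True`.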
No, you don't need to install any SQL Server components on the workstations.
Concurrency issues should be handled by your application. I don't want to be rude but if you are not aware of this maybe you are not yet ready for deploying your app to production. Exactly what needs to be done about concurrency depends on both the requirements of your application and the data access technology you are using. If your application will be used mostly to enter new records and then just read them later, you may get away without too much concurrency-handling code; it's the scenario where users are simultaneously editing existing records where the problems arise - but you need to have at least basic handling in place.
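One common way to get that basic handling in place is optimistic concurrency with a version column: each save only succeeds if nobody else has changed the row since you read it. Here is a minimal sketch, illustrated with Python's built-in sqlite3 rather than SQL Server (the same UPDATE-with-version-check pattern applies there, e.g. via a rowversion column; the table and names are made up for illustration):

```python
import sqlite3

# Minimal sketch of optimistic concurrency using a version column.
# sqlite3 stands in for SQL Server; the pattern is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
conn.execute("INSERT INTO customer VALUES (1, 'Alice', 1)")

def update_name(conn, cust_id, read_version, new_name):
    """Update only if nobody else changed the row since we read it."""
    cur = conn.execute(
        "UPDATE customer SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, cust_id, read_version))
    return cur.rowcount == 1  # False means someone else saved first

# User A and user B both read the row at version 1.
assert update_name(conn, 1, 1, "Alicia")      # A's save succeeds
assert not update_name(conn, 1, 1, "Alyssa")  # B's save is rejected (stale version)
```

When the save is rejected, the application re-reads the row and lets the user decide whether to overwrite or merge; that decision is the part the database engine cannot make for you.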
Re where to locate the client exe: either of your suggestions can work. The simplest is local installation on each machine using an .msi file; you can place a master copy of the .msi on the server. You can do stuff with login scripts, group policies, etc., or indeed ClickOnce. To keep it simple at this stage, I would just install from an .msi onto each machine; it sounds like you have enough complexity to get your head around already.
One copy of the exe on the server can be handled in a more sophisticated manner by Terminal Server, Citrix, etc.
Either way, assuming your app works correctly, yes, all changes will be made against the same db and visible to all workstations.
Yes, you will need the .NET Framework on all machines; however, it may very well already be there. Different versions of Windows came with different versions of the Framework built in and/or updated via Windows Update; it also depends on which version you built your exe against.
Right I hope there is something helpful in that lot. Good luck.
I have a few questions about memory usage in an SSIS package.
If I am loading data from server A to server B, and the SSIS package is on my desktop system running through BIDS, will the buffer creation (memory usage) happen on my desktop system? If so, will performance be slow, given that my desktop has less memory than the servers?
How can I make use of server resources while developing a package on my desktop system?
Also, if I have 3 SSIS developers all developing different packages at the same time, what is the best development method?
To expand on #3, the best way I have found to allow teams to work on a single SSIS solution is to decompose a problem (package) down into smaller and smaller chunks and control their invocation through a parent-child/master-slave type relationship.
For example, the solution concerns loading the data warehouse. I'd maybe have 2 Controller packages, FactController.dtsx and DimensionController.dtsx. Their responsibility is to call the various packages that solve the need (loading facts or dimensions). Perhaps my DimensionProductLoader package is dealing with a snowflake (it needs to update the Product and the SubProduct table) so that gets decomposed into 2 packages.
The goal of all of this is to break the development process down into manageable chunks to avoid concurrent access to a single package. Merging the XML will not be a productive use of your time.
The only shared resource for all of this is the SSIS project file (dtproj), which is just an XML document enumerating the packages that comprise the project. Create an upfront skeleton project with well-named, blank packages and you can probably skip some of the initial pain surrounding folks trying to merge the project back into your repository. I find that one-off type merges go much better, for TFS at least, than everyone checking their XML globs back in.
Yes. A package runs on the same computer as the program that launches it. Even when a program loads a package that is stored remotely on another server, the package runs on the local computer.
If by server resources you mean server CPU, you can't. It is like using the resources of any other computer on the network. Of course, if you have an OLE DB Source that runs a SELECT on SQL Server, the CPU that "runs" the SELECT will be the one on the SQL Server, obviously, but once the result set is retrieved, it is handled by the computer where the package is running.
Like any other development method. If you have a class in a C# project being developed by 3 developers, how do you do it? You could have each developer working on the same file and merge the changes (after all, a package is an XML file), but it is more complicated, and I wouldn't recommend it. I've been in situations where more than one developer worked on the same package, but not at the exact same time.
Expanding on Diego's and Bill's Answers:
1) Diego has this mostly correct. I would just add: the package runs on the computer that runs it, but even worse, running a package through BIDS is not even close to what you will see on a server, since the process BIDS uses to run the package is a 32-bit process running locally. You will be slower due to limits of the 32-bit subsystem, as well as copying all of your data across the network into the buffer in memory on your workstation, transforming it as your package flows, and then pushing it again across the network to your destination server. This is fine for testing small subsets of your data in a test environment, but should not be used to estimate performance on a server system.
2) Diego has this correct. If you want to see server performance, deploy it to a test server and run it there.
3) billinkc has this correct. One of the big drawbacks to SSIS in TFS is that there is not an elegant way to share work on a single package. If you want to use more than one developer in a single process break it into smaller chunks and let only one developer work on each piece. As long as they are not developing the same package at the same time, you should be fine.
I'm in the process of testing an application and its database, and for this I want to restart my testing each time completely clean. This application loads a large amount of data from Twitter. Therefore, before I start, I delete all data from the database and kill any processes from my web account associated with this application. When I then try to load my application, I get the following error:
[Macromedia][SequeLink JDBC Driver][ODBC Socket][Microsoft][SQL Native Client]Communication link failure
I assume this has something to do with my killing all the related processes in the DB. After some amount of time, I am able to run queries again.
Does this have something to do with the connections setup information in Coldfusion Administrator?
Does it just take some time to reset the connection? Is there any way to get around this?
Is there a better way to start fresh and clean when testing the loading?
By default, ColdFusion pools connection threads. I would guess, based on your comment to Stephen Moretti, that you are killing a connection that CF expects to still be live. That said, I've never had problems killing long DB threads, so this is pure speculation.
I'm not sure what killing these threads gets you, as far as testing goes. Once the page has stopped processing, open DB connections should not push or pull additional data.
I suspect that the error is actually related to how you are "cleaning up", particularly when you say "kill all related processes". By this I'm guessing you go into task manager and actually kill the processes.
I'm also guessing that if you're using SQL Server, you're on windows.
Rather than killing processes, cleanly stop the services associated with your application. Go into the Services control panel:
Stop your IIS or Apache Service.
Stop your ColdFusion Server instance service.
In terms of your database:
- Create a script for creating your database schema, tables, views, users and permissions, and any default data entries
- Drop your schema
- Restart the SQL Server services if you want to be sure you've cleared out any cached data
- Run the script to create a blank copy of your database
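The steps above can be sketched as follows, using Python's built-in sqlite3 in place of SQL Server to show the drop-and-recreate-from-script pattern (the table names and default data are hypothetical stand-ins for your Twitter-loading schema):

```python
import sqlite3

# Sketch of resetting a test database from a creation script.
# sqlite3 stands in for SQL Server here; the pattern is the same:
# keep a script that builds an empty schema, drop everything, rerun it.

SCHEMA_SCRIPT = """
CREATE TABLE tweet (id INTEGER PRIMARY KEY, body TEXT);
CREATE TABLE account (id INTEGER PRIMARY KEY, handle TEXT);
INSERT INTO account (handle) VALUES ('default_user');  -- default data entry
"""

def reset_database(conn):
    """Drop all known tables, then rebuild the blank schema from the script."""
    for table in ("tweet", "account"):
        conn.execute(f"DROP TABLE IF EXISTS {table}")
    conn.executescript(SCHEMA_SCRIPT)

conn = sqlite3.connect(":memory:")
reset_database(conn)
reset_database(conn)  # safe to run repeatedly before each test cycle
print(conn.execute("SELECT COUNT(*) FROM tweet").fetchone()[0])  # 0
```

Because the reset goes through the database engine itself rather than killing its processes, the connection pool on the ColdFusion side never sees a dead connection.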
You could at this point actually create a database backup and just restore that, but it's always handy to have the scripts to run on servers if you don't want to restore a backup.
After this, start your ColdFusion and IIS/Apache services.