I have a WPF application that users in our company's branches download from a single central server. Now we have around 15 branches with very low bandwidth, so we got a server in each branch; our plan was to install the application on these branch servers so that branch users can download it locally. However, at the step shown in the attached image, I think I have to repeat the publish process for all 15 branch servers.
Is there any way to publish to all servers in one attempt?
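There may be no built-in multi-server publish, but one workaround, as a rough sketch rather than a supported feature: publish once to a local folder, then script the copy to every branch server. This assumes each branch server exposes a writable share; all server names and paths below are placeholders.

```csharp
// Hypothetical one-shot distribution: publish locally once, then copy the
// output folder to a share on every branch server.
using System;
using System.IO;

class PublishToBranches
{
    static void Main()
    {
        string source = @"C:\Publish\MyWpfApp";           // local publish output
        string[] branchServers = { "BRANCH01", "BRANCH02" /* ... BRANCH15 */ };

        foreach (string server in branchServers)
        {
            string target = $@"\\{server}\Apps\MyWpfApp"; // assumed writable share
            CopyDirectory(source, target);
            Console.WriteLine($"Copied publish output to {target}");
        }
    }

    static void CopyDirectory(string from, string to)
    {
        Directory.CreateDirectory(to);
        foreach (string file in Directory.GetFiles(from, "*", SearchOption.AllDirectories))
        {
            string dest = Path.Combine(to, Path.GetRelativePath(from, file));
            Directory.CreateDirectory(Path.GetDirectoryName(dest)!);
            File.Copy(file, dest, overwrite: true);
        }
    }
}
```

A scheduled task or a post-publish step could run this after each publish so all 15 branches pick up the new version in one go.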
I have about 7 projects deployed to a SQL Server instance. Each one contains a MasterPackage which runs all the child packages of that project. The issue is that I want all 7 projects to run in parallel, starting at the same time, but as it is right now they get queued up and start one after another. Can I make all the projects start at the same time?
You can always schedule package executions by means of SQL Server Agent jobs. You will probably have to create a separate job for each project, but after that, whatever schedule you pick for them will be followed.
Just keep in mind that if the packages push a lot of data through, the server might not cope with the total workload, so parallel execution might end up slower than a serialised one.
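If the seven Agent jobs exist, they can also be kicked off simultaneously on demand: msdb.dbo.sp_start_job returns as soon as the job is queued rather than waiting for it to finish, so a simple loop starts them all in parallel. A hedged sketch, with job names and the connection string as placeholders:

```csharp
// Start one pre-created SQL Server Agent job per project. Because
// sp_start_job is asynchronous, the jobs run concurrently.
using System.Data.SqlClient;

class StartAllProjects
{
    static void Main()
    {
        string[] jobs = { "Project1_Master", "Project2_Master", "Project3_Master" }; // placeholder names

        using (var conn = new SqlConnection("Server=MYSERVER;Database=msdb;Integrated Security=true"))
        {
            conn.Open();
            foreach (string job in jobs)
            {
                using (var cmd = new SqlCommand("EXEC msdb.dbo.sp_start_job @job_name = @name", conn))
                {
                    cmd.Parameters.AddWithValue("@name", job);
                    cmd.ExecuteNonQuery(); // returns immediately; does not wait for the job
                }
            }
        }
    }
}
```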
OK… I’ve been tasked with figuring out why an intranet site is running slow for a small to medium sized company (fewer than 200 people). After three days of looking on the web, I’ve decided to post what I’m looking at. Here is what I know:
Server: HP DL380 Gen9 (new)
OS: MS Server 2012 – running hyper-v
RAM: 32GB
The Server 2012 host was built to run at most 2 to 3 VMs (only one VM is running at the moment)
16GB of RAM dedicated to the VM (not dynamic memory)
Volume was created to house the VHD
The volume has a fixed 400GB VHD inside it.
Inside that VHD is Server 2008 R2 running SQL Server 2008 R2 and hosting an IIS 7 intranet.
Here is what’s happening:
A page in the intranet runs a couple of stored procedures that do some checking against data in other tables and insert data (some sort of attendance db) after employee data is entered. The code looks like it creates and drops approximately 5 tables in the process of crunching the data. The page takes about 1 minute 50 seconds to run on the newer server. I was able to get hold of the old server and run a speed test: 14 seconds.
I’m at a loss… a lot of sites say to alter the code, but it was running quickly before.
The old server is a 32-bit Server 2003 machine running SQL Server 2000; the new one is obviously 64-bit.
Any ideas?
You should find out where the slowness is coming from. The bottleneck could be in SQL Server, in IIS, in the code, or on the network. To narrow it down:
Find the SQL statements that are executed and run them directly in SQL Server.
Run the code outside of the IIS web pages (a minimal timing harness for this is sketched below).
Run the code from a different server.
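A quick way to do the second check, as a sketch; the stored procedure name and connection string are placeholders for whatever the page actually calls:

```csharp
// Time the stored procedure directly, bypassing IIS entirely. If this also
// takes ~1:50, the problem is in SQL Server; if it is fast, look at IIS,
// the page code, or the network. Names below are hypothetical.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;

class TimeProc
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=NEWSERVER;Database=Intranet;Integrated Security=true"))
        {
            conn.Open();
            var sw = Stopwatch.StartNew();

            using (var cmd = new SqlCommand("dbo.usp_ProcessAttendance", conn)) // hypothetical name
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.CommandTimeout = 300; // allow for the slow case
                cmd.ExecuteNonQuery();
            }

            sw.Stop();
            Console.WriteLine($"Stored procedure took {sw.Elapsed}");
        }
    }
}
```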
Solved my own issue... it just took a while for me to get back to this. Hopefully this will help others.
Turned on the SQL Activity Monitor under Tools > Options => At startup => 'Open Object Explorer and Activity Monitor'.
Opened Recent Expensive Queries, right-clicked the top queries, and selected Show Execution Plan. This showed a missing index for the db. I added the index by clicking the missing-index info at the top of the plan.
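For anyone who prefers to query this rather than go through the execution plan UI, the same information is exposed by the missing-index DMVs. A sketch; the server and database names are placeholders:

```csharp
// List the top suggested missing indexes, highest estimated impact first,
// straight from SQL Server's missing-index DMVs.
using System;
using System.Data.SqlClient;

class MissingIndexes
{
    const string Query = @"
        SELECT TOP (10) d.statement, d.equality_columns,
               d.inequality_columns, d.included_columns, s.avg_user_impact
        FROM sys.dm_db_missing_index_details AS d
        JOIN sys.dm_db_missing_index_groups AS g
             ON g.index_handle = d.index_handle
        JOIN sys.dm_db_missing_index_group_stats AS s
             ON s.group_handle = g.index_group_handle
        ORDER BY s.avg_user_impact DESC;";

    static void Main()
    {
        using (var conn = new SqlConnection("Server=NEWSERVER;Database=Intranet;Integrated Security=true"))
        using (var cmd = new SqlCommand(Query, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine($"{reader["statement"]}: impact {reader["avg_user_impact"]}%");
        }
    }
}
```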
Hope this helps!
I have developed an application using VB.NET with SQL Server Express as the database back end.
The application has 5 user profiles (each user profile provides different services).
Deployment requirements:
The application is to be deployed on a LAN with 10-20 machines.
Any user profile can be accessed from any machine.
Any changes to the database entries should be reflected on all machines.
I am confused about how I should achieve this deployment. According to my research:
1. The database should be deployed on one machine. This machine will act as the database server.
My problem(s):
I am familiar with accessing databases on the local machine, but how do I access a remote database?
Is the connection string the only thing that needs to be addressed, or are there other issues too?
Do I need to install SQL Server on all machines or only on the server machine?
Do I have to deal with concurrency issues (multiple users accessing/modifying the same data simultaneously) or is that handled by the database engine?
2. The application can be deployed in 2 ways:
i. Storing the executable on a shared network drive on the server and providing a shortcut on the desktop of each machine.
ii. Storing the executable itself on each machine.
My problem(s):
How does approach (i) work? (One instance of an executable running on multiple machines?)
In approach (ii), will changes to database entries be reflected on all machines appropriately?
In approach (ii), if there are changes to the application, is there any method to update it on all machines (other than redeploying it on each machine)?
Which approach is preferable?
Do I need to install the .NET Framework on all machines?
Will I have to make any other system changes (firewall, security, permissions)?
If given a choice to install the operating system on each machine, which version of Windows is preferable for such an application environment?
This is my first time deploying a multi-user database application on a network. I'll be very grateful for any suggestions, advice, references, etc.
Question 1: You will need to create SQL Server 'roles' for each of your 'profiles'. A given user will be assigned one or more of those 'roles'. Each of your tables, views, stored procedures, and triggers will need to be assigned one or more roles. This is a messy business; this is why DBAs get paid lots of money to lounge around most of the time (I'm kidding, don't vote me down).
Question 2: If you 'remote in' to a server, you'll get the server screens, which are quite a bit duller than the workstation presentation. Read up on ClickOnce: it gives you the ability to detect an updated application on a host and automatically deploy the update to the user's machine. This gets rid of the rather messy business of running around to 20 machines installing upgrades every time you fix something.
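A minimal sketch of that ClickOnce self-update check, using System.Deployment.Application (requires a reference to System.Deployment, and only works when the app was launched via ClickOnce):

```csharp
// Check the deployment server for a newer version and apply it.
using System.Deployment.Application;
using System.Windows.Forms;

static class Updater
{
    public static void CheckForUpdate()
    {
        if (!ApplicationDeployment.IsNetworkDeployed)
            return; // running outside ClickOnce (e.g. from the IDE)

        ApplicationDeployment deployment = ApplicationDeployment.CurrentDeployment;
        if (deployment.CheckForUpdate())
        {
            deployment.Update();      // download the new version
            Application.Restart();    // relaunch with the update applied
        }
    }
}
```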
As you have hands-on access to all the machines your task is comparatively simpler.
Install SQL Express on your chosen db server. You should untick 'hide advanced options' in the installer; this will allow you to enable TCP/IP and the SQL Browser service. You may also want mixed-mode authentication, depending on your app and on whether the network is a domain or peer-to-peer. The connection string will need to be modified, as you are aware. The default configuration of Windows Firewall on the server will also block access to the db engine, so you will need to open exceptions for the Browser service and for SQL Server itself; open these as exceptions for the executables, not as port numbers. Alternatively, if you have a firewall between your server and the outside world, you may decide to just turn off the firewall on the server, at least temporarily while you get it working.
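For illustration, the connection string change is usually just pointing Server at the remote machine and instance; the machine, instance, and database names below are placeholders:

```csharp
// Connecting to a remote SQL Express instance instead of a local one.
using System.Data.SqlClient;

class RemoteDb
{
    // Local:  "Server=.\SQLEXPRESS;Database=MyAppDb;Integrated Security=true"
    // Remote: point Server at the db machine's name (or IP) and instance.
    const string ConnectionString =
        @"Server=DBSERVER\SQLEXPRESS;Database=MyAppDb;Integrated Security=true";

    static void Test()
    {
        using (var conn = new SqlConnection(ConnectionString))
            conn.Open(); // throws if TCP/IP, the Browser service, or the firewall are not set up
    }
}
```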
No, you don't need to install any SQL Server components on the workstations.
Concurrency issues should be handled by your application. I don't want to be rude, but if you are not aware of this, maybe you are not yet ready to deploy your app to production. Exactly what needs to be done about concurrency depends on both the requirements of your application and the data access technology you are using. If your application will be used mostly to enter new records and then just read them later, you may get away without much concurrency-handling code; it's the scenario where users simultaneously edit existing records that causes the problems, but you need to have at least basic handling in place.
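One common basic approach is optimistic concurrency with a rowversion column: read the version along with the record, then make the UPDATE conditional on it. A sketch with illustrative table and column names:

```csharp
// Optimistic concurrency: the UPDATE succeeds only if nobody changed the
// row since we read it. Table and column names are illustrative only.
using System.Data.SqlClient;

class OptimisticUpdate
{
    const string Sql = @"
        UPDATE dbo.Employees
        SET    Name = @name
        WHERE  Id = @id AND RowVer = @rowver;"; // RowVer is a rowversion column

    public static bool TryUpdate(SqlConnection conn, int id, string name, byte[] rowver)
    {
        using (var cmd = new SqlCommand(Sql, conn))
        {
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@rowver", rowver);
            // 0 rows affected => someone else edited the record first
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}
```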
Re where to locate the client exe: either of your suggestions can work. Simplest is local installation on each machine using an .msi file; you can place a master copy of the .msi on the server. You can do stuff with login scripts, group policies, etc., or indeed ClickOnce, but to keep it simple at this stage I would just install from an .msi onto each machine - it sounds like you have enough complexity to get your head around already.
One copy of the exe on the server can be handled in a more sophisticated manner by Terminal Services, Citrix, etc.
Either way, assuming your app works correctly, yes, all changes will be made against the same db and visible to all workstations.
Yes, you will need the .NET Framework on all machines - however, it may very well already be there. Different versions of Windows came with different versions of the framework built in and/or updated via Windows Update; it also depends, of course, on which version you built your exe against.
Right, I hope there is something helpful in that lot. Good luck.
Apologies for the noob question, I've never dealt with failover before.
Currently we have a single hardware server running Windows Server, SQL Server, ASP.NET and a single (very large) web application. We are considering migrating this to an Azure VM.
I see in the SLA that Microsoft will only guarantee 99.95% availability if I am running more than one instance of an Azure VM, to allow for failure and reboots etc.
Does this mean I therefore would have two servers to manage and maintain? For example, two versions of SQL with a database on each, and two sets of ASP.NET application files? If correct, this puts the price up dramatically.
I assume there is no way to 'mirror' one server across to the other to reduce this workload?
Also, our hardware server has 25,000 uploaded files on it. Would we need to put these on a VHD then 'link' them to whichever live server was running, or does Azure do this automatically? Or do they have to be mirrored from the live server to the failover server?
Any pointers would be appreciated. I've already read all the Azure documentation but it hasn't really made things much clearer...
It seems like you have several topics to look at here.
Let's start with the database. The easiest option would be to migrate your SQL Server database to SQL Azure. Then you would not need to maintain the database server yourself, nor the machines it runs on.
This has the advantage that this central component can be used by one or many applications.
Second, your uploaded files. I assume your application allows uploading files for sharing or something similar. The best approach would be to write these files into Windows Azure blob storage. This often means rewriting a storage connector, but it centralises another component.
As a first step you could make the blobs publicly available, so clients can download them via a direct link. If not, your application can fetch the files from storage and deliver them to the customer itself.
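For illustration, writing an upload to blob storage and handing out a direct link looks roughly like this with the current Azure.Storage.Blobs SDK (this answer predates that SDK; the connection string, container, and file names are placeholders):

```csharp
// Upload a file to blob storage and print a direct download URI.
using System;
using Azure.Storage.Blobs;

class BlobUploads
{
    static void Main()
    {
        var service = new BlobServiceClient("<storage-connection-string>");
        var container = service.GetBlobContainerClient("uploads");
        container.CreateIfNotExists();

        var blob = container.GetBlobClient("report.pdf");
        blob.Upload("report.pdf", overwrite: true);

        // If the container allows public read access, this URI is a direct link:
        Console.WriteLine(blob.Uri);
    }
}
```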
If you don't want to rewrite that component, you would have to use a VHD. A VHD can only have one lease, so only one instance can use it at a time. A common pattern I have seen is for the application to try to 'recover' the lease at startup, trial-and-error style.
Last but not least, your ASP.NET application. Here I would look into cloud service instances rather than VMs, because with VMs you have to do all the management yourself (VMs are IaaS). A .NET application should be easy to convert and deploy as instances.
Then you don't have to think about failover and so on: just deploy 2 instances and the load balancer will do the rest.
If you are able to 'outsource' the SQL Server, you can use smaller machines for the ASP.NET application. Try to scale out rather than scale up; that means more small nodes rather than one big one, if possible.
If you really do go the VM route, you have to manage all of this yourself, and yes, you then need 2 VMs. You may even need 3, because there is no automatic load balancer for VMs, and if you only have 2, just one machine can expose port 80.
HTH
I have a few questions about memory usage in an SSIS package.
If I am loading data from server A to server B and the SSIS package is on my desktop system, running through BIDS, will the buffer creation (memory usage) happen on my desktop system? If so, performance will be slow (less memory compared to the servers), right?
How can I make use of server resources while developing a package on my desktop system?
Also, if I have 3 SSIS developers all developing different packages at the same time, what is the best development method?
To expand on #3, the best way I have found to allow teams to work on a single SSIS solution is to decompose a problem (package) down into smaller and smaller chunks and control their invocation through a parent-child/master-slave type relationship.
For example, the solution concerns loading the data warehouse. I'd maybe have 2 Controller packages, FactController.dtsx and DimensionController.dtsx. Their responsibility is to call the various packages that solve the need (loading facts or dimensions). Perhaps my DimensionProductLoader package is dealing with a snowflake (it needs to update the Product and the SubProduct table) so that gets decomposed into 2 packages.
The goal of all of this is to break the development process down into manageable chunks to avoid concurrent access to a single package. Merging the XML will not be a productive use of your time.
The only shared resource for all of this is the SSIS project file (dtproj), which is just an XML document enumerating the packages that comprise the project. Create an upfront skeleton project with well-named, blank packages and you can probably skip some of the initial pain surrounding folks trying to merge the project back into your repository. I find that one-off type merges go much better, for TFS at least, than everyone checking their XML globs back in.
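For illustration only: the controller packages themselves would normally use Execute Package Tasks, but the same parent-child invocation can be expressed with the SSIS runtime API (Microsoft.SqlServer.ManagedDTS), which makes the decomposition idea concrete. Package paths are placeholders:

```csharp
// A "controller" that loads and runs its child packages in sequence,
// mirroring what the DimensionController package would do with
// Execute Package Tasks.
using System;
using Microsoft.SqlServer.Dts.Runtime;

class DimensionController
{
    static void Main()
    {
        var app = new Application();
        string[] children =
        {
            @"C:\SSIS\DimensionProductLoader_Product.dtsx",
            @"C:\SSIS\DimensionProductLoader_SubProduct.dtsx",
        };

        foreach (string path in children)
        {
            Package pkg = app.LoadPackage(path, null);
            DTSExecResult result = pkg.Execute();
            Console.WriteLine($"{path}: {result}");
        }
    }
}
```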
Yes. A package runs on the same computer as the program that launches it. Even when a program loads a package that is stored remotely on another server, the package runs on the local computer.
If by server resources you mean server CPU, you can't. It's like using the resources of any other computer on the network. Of course, if you have an OLE DB Source that runs a SELECT on SQL Server, the CPU that "runs" the SELECT will be the one on the SQL Server machine, obviously, but once the result set is retrieved, it is handled by the computer where the package is running.
Like any other development method. If you have a class in a C# project being developed by 3 developers, how do you do it? You could have each developer work on the same file and merge the changes (after all, a package is an XML file), but it is more complicated than that, and I wouldn't recommend it. I've been in situations where more than one developer worked on the same package, but not at the exact same time.
Expanding on Diego's and Bill's Answers:
1) Diego has this mostly correct; I would just add: the package runs on the computer that runs it, but even worse, running a package through BIDS is not even close to what you will see on a server, since the process BIDS uses to run the package is a 32-bit process running locally. You will be slower due to the limits of the 32-bit subsystem, as well as copying all of your data across the network into the buffer in memory on your workstation, transforming it as your package flows, and then pushing it across the network again to your destination server. This is fine for testing small subsets of your data in a test environment, but should not be used to estimate performance on a server system.
2) Diego has this correct. If you want to see server performance, deploy it to a test server and run it there.
3) billinkc has this correct. One of the big drawbacks to SSIS in TFS is that there is no elegant way to share work on a single package. If you want more than one developer on a single process, break it into smaller chunks and let only one developer work on each piece. As long as they are not developing the same package at the same time, you should be fine.