I have multiple C unit test programs that connect to a REST server and send/receive some data. I usually open a new terminal and start the server with mvn jetty:run, then run the unit tests in a different terminal with make test. The problem is that the code is tested by a Jenkins server, and there the unit tests fail because they can't connect to a REST server.

I tried starting the REST server with a CMake execute_process(COMMAND mvn jetty:run), but then the unit tests never run because the command blocks while the server waits for input. I also tried starting the server with mvn jetty:start, but the server doesn't seem to remain active after the command finishes: it just prints "Jetty Server started", yet I can't see any running REST server or connect to it, and mvn jetty:stop says there is no active server running. Starting the server in the background with & doesn't work either, because Maven blocks it, and wrapping the call in a shell script launched with & didn't help. On top of that, when I do manage to start the server in parallel, I don't know when it has finished starting up and is ready for the unit tests to begin.
The ideal solution would be to start the server, run the unit tests, and shut the server down afterwards, fully automated from CMake.
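One way to script this end to end is a small wrapper (sketched here in Python; it could be invoked from CMake via execute_process or a CTest fixture): start mvn jetty:run in the background, poll the server's TCP port until it accepts connections, run make test, and terminate the server afterwards. The port 8080, the host, and the make test command are assumptions; adjust them to your setup.

```python
import socket
import subprocess
import sys
import time

def wait_for_port(host, port, timeout=60.0):
    """Poll until a TCP connection to host:port succeeds, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False

def main():
    # Start Jetty in the background; we keep the process handle so we can
    # shut the server down once the tests have finished.
    server = subprocess.Popen(["mvn", "jetty:run"])
    try:
        # Assumed: Jetty serves on localhost:8080.
        if not wait_for_port("localhost", 8080):
            sys.exit("REST server never became ready")
        # Server is up: run the C unit tests.
        result = subprocess.run(["make", "test"])
        sys.exit(result.returncode)
    finally:
        server.terminate()
        server.wait()

if __name__ == "__main__":
    main()
```

Polling the port solves the "I don't know when the server is ready" problem, and holding the Popen handle solves the shutdown problem that mvn jetty:start/jetty:stop caused.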
Problem:
To run WebDriver tests, it appears that the UI (desktop) must be active for WebDriver to 'see' the browser and correctly run the tests.
When trying to automatically launch a set of tests on a Windows 2012 server using Scheduled Tasks, the tests fail because when the tests are launched there is no UI/desktop active.
Bad solution:
The only way we can get the WebDriver tests to run properly is to manually log in to each Windows 2012 server that is hosting the tests so that the UI/desktop is active. However, this is not practical if we are running tests on 20 servers at a time. (We would need to have a human log into each server for each test run!)
Question:
How can we:
a) launch a Windows 2012 executable that gets an active UI/desktop (logged in) without a manual login?
b) get WebDriver to not need the UI/desktop to be active to run the test?
My SSIS package consists of an Execute Process Task which executes some compiled Python code that downloads files from a Web API.
The code works.
The package executes successfully from Visual Studio.
Once deployed to SSISDB on my localhost SQL Server 2012 instance, I can right-click and execute it, and it works fine.
However, as soon as I try to invoke the package from a SQL Server Agent job, it fails. I do not understand this, but I suspect it is because I don't understand what's happening with the SSL handshake. Does SQL Server Agent reference its own certificate store? If so, does that mean my store is inactive (do I need to enable SSL for my SQL Server Agent?) or missing a certificate?
It feels like I have tried every possible workaround: I have ensured that my SQL Server Agent job is owned by my credential rather than a generic credential, and I've even created a proxy and explicitly instructed the SQL Server Agent job step to run under that proxy, but I guess this isn't the issue. The issue is the SSL certification step.
How can I fix this?
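One way to narrow this down is to run a small TLS probe under the same account the Agent job step uses (for example via a CmdExec step under your proxy) and compare the result with running it as yourself. This is a diagnostic sketch, not a fix; the URL is a placeholder for whatever endpoint your package actually calls.

```python
import ssl
import urllib.error
import urllib.request

def check_tls(url, timeout=10):
    """Return (True, None) if the TLS handshake to url succeeds, else (False, reason)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            pass
        return True, None
    except urllib.error.HTTPError:
        # An HTTP-level error (404, 500, ...) still means the handshake worked.
        return True, None
    except (ssl.SSLError, OSError) as exc:
        # Certificate-validation and connection failures land here.
        return False, str(exc)

if __name__ == "__main__":
    # Placeholder URL: substitute the Web API your package downloads from.
    ok, reason = check_tls("https://example.com/")
    print("TLS handshake ok" if ok else "TLS handshake failed: " + reason)
```

If the probe fails only under the proxy account, the problem is that account's certificate/trust configuration rather than the package or the job step itself.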
I created a load test repository in my local machine's SQL DB using the loadtestresultsrepository.sql script. I then changed the connection string from the default SQL Server Express LocalDB to the DB I wanted to store the test results in, using the Manage Test Controllers window. When I ran the test locally using Visual Studio, the MSTest command line, and PowerShell, the results were written to the DB successfully.
I wanted to achieve the same outcome on a TFS 2015 build agent so that I could run the load and performance tests on the CI server. When I ran the load test on the CI server using a build definition with a command-line MSTest task (/TestContainer:$(build.sourcesDirectory)\$(Component)\LoadTests\20_Users.loadtest), the load test ran successfully but did not write any data to the default (localdb)\v11.0 server.

I RDPed into the build agent, opened the solution from previously copied build artifacts, and again set the connection string to the SQL Server DB on the build agent using the Manage Test Controllers window. I also created the load test repository on the build agent. But the load test results are still not written to the DB when I run the tests through the build definition. If I execute the same MSTest command from PowerShell or a command prompt on the build agent, it writes the results. How can I get the build definition / test controller to write the data to the DB I have configured on the build agent?
Here is the short version of the problem: I have a discrete DTSX file that works fine on our Production server, but doesn't on our new Dev server.
Symptom: When run from a SQL-Server job, the job starts and nothing at all happens, and it never finishes... it just hangs, using very little system resources.
Some info: For Prod, the packages were developed on SQL-Server 2012 and run on an NT 2008 server. The new Dev server is also SQL-Server 2012, but runs on an NT 2012 server (in case that matters). I have duplicated the folder/file structure exactly, including drive name. The package uses an external dtsConfig file, but as I said - the folder/file structure is identical.
The SSIS service, SQL Server Agent, and my remote login all use the same account, which is a member of the Administrators group on the Dev box. If I copy the command-line text from the SQL job and run it in a CMD window using dtexec.exe, the package executes correctly. The job owner is my login, and the "run as" is the SQL Agent, which, as I mentioned, uses the same login. Since everything in the package uses integrated security, everything should run as the same login whether launched from the command line or via the SQL Agent, which should eliminate any permission/credential issues.
I tried adding SSIS logging to the package, logging everything I could. When I run the package from the command line, I get a ton of messages in the log. When I run the package via the SQL job, there are no messages at all in the log - nothing.
Whatever is going on, it's not getting far enough into the SSIS package to generate a single log entry. It's just stopping but not exiting or throwing an error. FWIW - I have the same problem with every other package I've tried.
Any ideas are appreciated...
I found the cause of the problem. The MS SQL Server service was using a different login than the SSIS service and the SQL Server Agent service (it was using a local service account).
Once I changed the MS-SQL Server login to match the others (and restarted the service), the job ran correctly.
I have some DTS packages that are failing occasionally. They have some pretty comprehensive logging in the various VBScript components and both SQL and text logging are enabled in each package. The logging works fine if I run the DTS packages from the server they live on.
In practice these packages are run using DTSRun from a remote machine that only has the SQL Server Client Tools installed. As DTS runs locally, where do the logs go in this case, if they are created at all? They're not on the server, there's no database on the client to do any SQL logging to and none of the text files were created on the client or server. Is it possible to debug DTS packages run this way?
Such as it is, the answer was to ensure that the full path for text logging is valid on the machine where DTSRun is being called. In this case I had it set to C:\DTSErrors; this folder existed on the server, but not on the client. Creating C:\DTSErrors on the client solved my problem: text logs are now being created.
I assume the SQL logging just fails silently as there's no SQL server on the client machine to log to.
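A cheap preflight for this class of problem is to verify (or create) the configured text-log folder on the client before invoking DTSRun. A minimal sketch, assuming the log path from the answer above (C:\DTSErrors); the function works on any path:

```python
import os

def ensure_log_dir(path):
    """Create the DTS text-log folder if it is missing; return True if it is usable."""
    os.makedirs(path, exist_ok=True)
    return os.path.isdir(path)

if __name__ == "__main__":
    # Assumed log path; matches the folder used in the answer above.
    print(ensure_log_dir(r"C:\DTSErrors"))
```

Running this on each client that invokes DTSRun avoids the silent "no log file anywhere" situation described above.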