I am testing the performance of an Oracle 12c database using JMeter. I am totally new to JMeter. For testing I have created a .jar file from a Java program. The Java program uses the JDBC driver to connect to the Oracle database.
In JMeter, I have added a Thread Group, and inside the thread group I have added a Java Request sampler. Am I following the right procedure?
If my procedure is right, why do I still get an error when I check the results in the Table and Tree listeners? I have attached snapshots of the table and tree from JMeter.
People normally use a JDBC Connection Configuration plus the JDBC Request sampler for database load testing. See Building a Database Test Plan for more information.
However JMeter is very flexible and your approach is also viable. In order to troubleshoot your problem:
First of all, every time you face a problem with JMeter, take a look at the jmeter.log file; in the absolute majority of cases it contains enough information to work out why the test failed.
If your JAR doesn't contain the Oracle JDBC driver, you need to put the driver jar into the JMeter classpath as well. A JMeter restart will be required for JMeter to pick up the driver jar.
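A quick way to confirm whether the driver is visible from inside the JVM that runs JMeter is a classpath probe; here is a small stdlib sketch (oracle.jdbc.OracleDriver is the standard Oracle driver class name):

```java
// Classpath probe: reports whether a class can be loaded by the current JVM.
// If this prints false when run from inside JMeter, the driver jar is not
// on JMeter's classpath yet.
public class ClasspathCheck {
    public static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isOnClasspath("oracle.jdbc.OracleDriver"));
    }
}
```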
You can run JMeter with the debugger enabled like:
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8888 -jar ApacheJMeter.jar -t your_testplan.jmx
and use your favourite IDE to connect to the machine running JMeter on port 8888, step through your code, and see where the errors live. See the How to Debug your Apache JMeter Script article for more tips on getting to the bottom of JMeter test failures.
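For reference, for a Java Request sampler to pick up your code at all, the class in your JAR must implement JMeter's JavaSamplerClient interface (usually by extending AbstractJavaSamplerClient). A minimal sketch, assuming JMeter's ApacheJMeter_java and ApacheJMeter_core jars on the compile classpath; the JDBC URL, credentials and query below are placeholders:

```java
import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal Java Request sampler: runs one query per sample and records
// its duration and success/failure in the SampleResult.
public class OracleQuerySampler extends AbstractJavaSamplerClient {

    @Override
    public Arguments getDefaultParameters() {
        Arguments args = new Arguments();
        args.addArgument("jdbcUrl", "jdbc:oracle:thin:@//dbhost:1521/ORCL");
        args.addArgument("user", "scott");
        args.addArgument("password", "tiger");
        args.addArgument("query", "SELECT 1 FROM dual");
        return args;
    }

    @Override
    public SampleResult runTest(JavaSamplerContext ctx) {
        SampleResult result = new SampleResult();
        result.sampleStart();
        try (Connection con = DriverManager.getConnection(
                     ctx.getParameter("jdbcUrl"),
                     ctx.getParameter("user"),
                     ctx.getParameter("password"));
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(ctx.getParameter("query"))) {
            while (rs.next()) { /* consume rows */ }
            result.setSuccessful(true);
        } catch (Exception e) {
            result.setSuccessful(false);
            result.setResponseMessage(e.toString());
        } finally {
            result.sampleEnd();
        }
        return result;
    }
}
```

If your existing class does not extend AbstractJavaSamplerClient, that alone would explain errors in the listeners.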
I am trying to read data from SQL Server to process using Spark, and I am using Zeppelin to write my Scala commands. I have never worked with Java, Spark or Zeppelin, so I am having a hard time figuring out the issues.
I installed Spark on my machine and everything seems to be working, as I can get into spark-shell successfully. I installed Zeppelin via Docker, and this also seems to be working, as I can create a new notebook, run "sc" and see the SparkContext type printed.
Now I want to read data from SQL Server. I am planning to use the azure-sqldb-spark connector, but I am not sure how to use it. I am trying to add it as an interpreter to Zeppelin, but I am not sure what the required properties are or how to use them.
This is what I did so far.
Downloaded the jar file from the GitHub repo. (I am not able to run it on my machine, as it complains that there is no manifest file.)
Copied this jar file to the container running Zeppelin.
Tried to create an interpreter in Zeppelin.
Here are the properties:
I am specifying the dependency on the jar file like this.
I tried playing with the properties a bit, but no luck. I am not even sure this is the right way to do it.
I am trying to run the following query, but I am running into a "No suitable driver" error.
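From what I understand, "No suitable driver" means no registered JDBC driver recognizes the connection URL, so either the URL prefix is wrong or the driver jar (class com.microsoft.sqlserver.jdbc.SQLServerDriver) is not on the interpreter's classpath. A minimal sketch of the URL shape the Microsoft driver expects; host and database names are placeholders:

```java
// Builds a SQL Server JDBC URL of the form the Microsoft driver expects:
//   jdbc:sqlserver://<host>:<port>;databaseName=<db>
public class SqlServerUrl {
    public static String build(String host, int port, String database) {
        return "jdbc:sqlserver://" + host + ":" + port
                + ";databaseName=" + database;
    }

    // A URL with any other prefix will never match the SQL Server driver
    // and produces exactly the "No suitable driver" error.
    public static boolean looksValid(String url) {
        return url.startsWith("jdbc:sqlserver://");
    }

    public static void main(String[] args) {
        String url = build("myserver.database.windows.net", 1433, "mydb");
        System.out.println(url + " valid=" + looksValid(url));
    }
}
```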
I have lots of files in a project on a remote host and I want to find out from which file another PHP file is called. Is it possible to use the Ctrl+Shift+F search on a remote host project?
Is it possible to use Ctrl+Shift+F search on a remote host project?
Currently it's not possible. (2022-06-09: now possible with remote development using JetBrains Gateway, see at the end)
In order to search file content in a locally running IDE, the file must be read first. For that the IDE must download it... which can be quite a time- and connection-consuming task over (S)FTP connections (depending on how far away the server is, how fast your connection is, bandwidth limits, etc.).
Even if the IDE could do it transparently for search, like it does with the Remote Edit functionality (where it downloads a remote file but, instead of placing it in the actual project, stores it in a temp location), it would still need to download it.
If you execute one search (one term) and then need to do another (a slightly modified term or a completely different search string), the IDE would need to re-download those files again (a waste of time and connection).
Therefore it makes much more sense to download your project (all or desired files only) locally and then execute such search(es) on local files.
If it has to be a purely remote search (where nothing gets downloaded locally)... then just establish an SSH/RDP/etc. connection to that remote host (BTW: PhpStorm has built-in SSH Console functionality) and execute the search directly on the remote server with native OS tools (find/grep and the like) or some remote software (e.g. mc or notepad++).
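For illustration, the grep-style "which files contain this term" search is straightforward to script as well; a rough stdlib Java sketch of the same idea (the search term and directory are placeholders — on the server itself, plain grep -r is simpler):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Recursively lists regular files under a root whose content contains
// the given term, e.g. the name of an included .php file.
public class ContentSearch {
    public static List<Path> filesContaining(Path root, String term) throws IOException {
        try (Stream<Path> paths = Files.walk(root)) {
            return paths.filter(Files::isRegularFile)
                    .filter(p -> {
                        try {
                            return new String(Files.readAllBytes(p)).contains(term);
                        } catch (IOException e) {
                            return false; // unreadable file: skip it
                        }
                    })
                    .collect(Collectors.toList());
        }
    }
}
```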
P.S. (on related note)
Some of the disadvantages when doing Remote Edit: https://stackoverflow.com/a/36850634/783119
EDIT 2022-06-09:
BTW, JetBrains now has JetBrains Gateway for remote development, where you run the IDE core on a remote server and connect to it via SSH using a dedicated local app or a plugin for your IDE (PhpStorm comes bundled with such a plugin since version 2021.3).
To read more about JetBrains Gateway:
https://www.jetbrains.com/remote-development/gateway/
https://blog.jetbrains.com/blog/2021/11/29/introducing-remote-development-for-jetbrains-ides/
I'm trying to automate sonarqube installation.
One (small) issue I'm running into is that after installation, during first access, while SonarQube is initializing the DB schema, we run into timeouts.
I'd like to know if there's a script/command to initialize the DB (create the tables and so on) from bash.
I've been digging on the internet and couldn't find an answer to that.
Thanks!
I'd like to complete answers from Seb and Jeroen.
The schema is indeed created programmatically by SonarQube during startup. This step can't be executed independently in a script; just start the server. Instead of parsing logs, I suggest calling the web service GET http://<server>/api/system/status (see the documentation at http://nemo.sonarqube.org/api_documentation/api/system/status) to know when the database is fully initialized.
A database upgrade when installing a new release can also be triggered through the web service POST http://<server>/api/system/migrate_db (see http://nemo.sonarqube.org/api_documentation/api/system/migrate_db).
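A minimal sketch of reading that status payload, assuming the documented JSON shape with a top-level "status" field ("UP" once fully initialized; other documented values include "STARTING", "DOWN", "DB_MIGRATION_NEEDED", "DB_MIGRATION_RUNNING"):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts the "status" field from the api/system/status JSON response
// without pulling in a JSON library; treat only "UP" as ready.
public class SonarStatus {
    private static final Pattern STATUS =
            Pattern.compile("\"status\"\\s*:\\s*\"([A-Z_]+)\"");

    public static String extractStatus(String json) {
        Matcher m = STATUS.matcher(json);
        return m.find() ? m.group(1) : "UNKNOWN";
    }

    public static boolean isUp(String json) {
        return "UP".equals(extractStatus(json));
    }
}
```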
Database initialization is built into SonarQube and cannot be run independently of starting SonarQube.
As suggested by @jeroen, you can indeed analyze the sonar.log file and wait for the "web[o.s.s.a.TomcatAccessLog] Web server is started" line.
You can build in a wait loop and/or analyze the SonarQube log file during startup.
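That wait loop could be sketched like this; the status supplier is a stand-in for whatever check you use (the HTTP status call or a log scan), and all names here are mine:

```java
import java.util.function.Supplier;

public class WaitForUp {
    // Polls statusFn until it returns "UP" or attempts run out;
    // returns true once the server reports it is up.
    public static boolean waitForUp(Supplier<String> statusFn,
                                    int maxAttempts, long sleepMillis)
            throws InterruptedException {
        for (int i = 0; i < maxAttempts; i++) {
            if ("UP".equals(statusFn.get())) {
                return true;
            }
            Thread.sleep(sleepMillis);
        }
        return false;
    }
}
```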
While configuring SQL Server 2012 Master Data Services, I am having the following problem:
The required .svc handler mappings are not installed in IIS.
What I want to do is query my database using a URL, so that I can retrieve data directly via the URL itself, just like we can pass query-string parameters to SQL Server.
How do I deal with this? I have followed several documents but have had no luck.
To fix this issue, open a command prompt and go to the .NET directory
(for example %windir%\Microsoft.NET\Framework64\v4.0.30319).
Run the command: aspnet_regiis -i
For further details, check: SVC Handler mapping error in MDS Configuration Manager
I've come across these types of errors a few times when installing MDS. The problem usually comes about because just having IIS installed is not enough: there are loads of other role services and features that you need to enable and install as well, which the setup program doesn't tell you about.
Thankfully they are all documented here:
Web Application Requirements (Master Data Services)
And, if you've missed any, you can go back, install them and then re-launch the configuration tool to complete the setup without having to re-install MDS from scratch.
What is the simplest way to schedule a batch file to run on a remote machine using Hudson (latest and greatest version)? I was exploring the master/slave setup. I created a dumb slave, but I am not sure what the parameters should be so that I can trigger the batch file on the remote slave machine.
Basically, I am trying to run 2 different batch files on two different remote machines sequentially, triggered from my machine (the master). The step-by-step guide on the Hudson website is a dead link. There are similar questions posted on SO, but they do not quite work for me when I use the parameters they mention.
If anyone has done something similar please suggest ways to make this work.
(I know how to set up jobs, add a step to run a batch file, etc.; what I am having trouble configuring is doing this on a remote machine using Hudson's built-in features.)
UPDATE
Thank you all for the suggestions. Quick update on this:
What I wanted to get done is partially working; below are the steps I followed to get there:
Created a new node from Manage Nodes -> New Node -> set # of Executors to 1, set Remote FS root to '/var/hudson', set Launch method to JNLP, set the slave name and saved.
Once the slave was set up (from the master machine), I logged into the slave's physical machine, downloaded slave.jar from http://masterserver:port/jnlpJars/slave.jar, and ran the following from the command line at the download location: java -jar slave.jar -jnlpUrl http://masterserver:port/computer/slavename/slave-agent.jnlp. The connection was made successfully.
Checked 'Restrict where this project can be run' in the master job configuration and set the parameter to the slave name.
Checked "Add Build Step" and added my batch job script.
What I am still missing now is a way to connect to 2 slaves from one job in sequence; is that possible?
It is fairly easy and straightforward. Let's assume you already have a slave running. Then you configure the job as if you were locally on the target box. The setting for 'Restrict where this project can be run' needs to be the node that you want to run on. This is all for the job configuration.
For the slave configuration read the following pages.
Installing Hudson as a Windows service
Distributed builds
On Windows I prefer to run the slave as a service and let the remote machine manage the start-up and shutdown of the slave. The only disadvantage with this is that you need to upgrade the client every time you update the server: just get the new client.jar from the server after the upgrade, put it on the slave, restart the slave and you are done.
I had trouble using the install-as-a-service option for the slave even though I did it as a local administrator, so I then used srvany to wrap the jar into a service. Here is a blog about it. You will get the command that you need to wrap from your Hudson server, on the slave page. For all of this to work, you should set up the slave management as JNLP.
If you have an SSH server on your target machine, you can use the SSH slave settings. These work for me like a charm; I use them with my Unix slaves. So far the SSH option with Unix is less of a hassle than the Windows service clients.
I had some similar trouble with slave setup and wrote up this blog post - I was running on Linux rather than Windows, but hopefully this will help.
I don't know how to use built-in Hudson features for this job, but in one of my project builds, I run a batch file that in turn uses PsTools to run the job on a remote server. I found PsTools extremely easy to use (download, unpack and run the command with the right parameters), hence I opted to use this.