Use database table as CSV dataset for JMeter

I am currently doing performance testing with JMeter for an application in my company. The application requires a login and I have to use multiple users. The way this is usually done in JMeter seems to be via CSV dataset, however I have access to the database and can read all the credentials from there directly.
Since it is a test environment, all users have the same password, so I can hardcode it, but I need the list of usernames. JMeter can already do JDBC requests, but I was wondering if there was any way to use the results from such a request as a dataset.
The ideal way would be for me to query the database for the usernames and use those as input for the login test.
Does anyone know if this is possible?

Sure, it is possible.
1. Follow the steps from The Real Secret to Building a Database Test Plan With JMeter to establish the JDBC connection and execute your query.
2. Define a variable in the "Variable Names" field of the JDBC Request sampler.
3. The query results will then be available as variables in the form of
actor_1=John
actor_2=Doe
etc.
If you need to deal with several columns, the approach is the same. See the official documentation on the JDBC Request sampler for an example.
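If you would rather keep the usual CSV Data Set Config, another option is to pre-generate the CSV file from the database before the test run. Below is a minimal sketch in Python, assuming a MySQL test database, the pymysql driver, and a hypothetical users table; adjust the connection details and query to your environment.

    # Pre-generate a CSV dataset for JMeter's CSV Data Set Config from the database.
    # The users table, column and connection details are placeholders.
    import csv
    import pymysql

    conn = pymysql.connect(host="test-db.example.com", user="jmeter",
                           password="secret", database="appdb")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT username FROM users")   # hypothetical table/column
            rows = cur.fetchall()
    finally:
        conn.close()

    with open("usernames.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for (username,) in rows:
            writer.writerow([username])   # one username per line

Point a CSV Data Set Config at usernames.csv with its Variable Names field set to username, and reference ${username} in the login sampler.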

Related

Validating request coming from the client side of a JDBC server

I have to implement a scenario where a user logs in to the application, and I have to validate the queries that were being executed during the load run of the login scenario. I know that the JDBC sampler allows running certain queries and returns their responses, but that is not what is needed here. I want to check exactly which queries were initiated when a number of users log in to the application. A road map or an associated tool would be very helpful in this regard.
First of all, check whether there is an APM tool in place, as well-behaved APM tools can show the SQL queries associated with the HTTP requests.
If there isn't one, you can only get the query log from the database. Depending on the database type, you can either use the aforementioned JDBC Request sampler, or, if the database doesn't expose its query log via SQL, you may need to go to the database server directly and fetch the query log from the command line using the OS Process Sampler.
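For instance, if the database is MySQL and the server has been configured with general_log=ON and log_output=TABLE, the query log can be read over SQL; here is a rough sketch (connection details are placeholders):

    # Read recent statements from MySQL's general query log. This assumes the
    # server runs with general_log=ON and log_output=TABLE; otherwise the log is
    # written to a file and needs OS-level access instead (e.g. via the OS Process Sampler).
    import pymysql

    conn = pymysql.connect(host="test-db.example.com", user="audit",
                           password="secret", database="mysql")
    with conn.cursor() as cur:
        cur.execute(
            "SELECT event_time, argument FROM mysql.general_log "
            "WHERE command_type = 'Query' ORDER BY event_time DESC LIMIT 50"
        )
        for event_time, statement in cur.fetchall():
            print(event_time, statement)
    conn.close()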

Calling API from Azure SQL Database (as opposed to SQL Server)

So I have an Azure SQL Database instance that I need to run a nightly data import on, and I was going to schedule a stored procedure to make a basic GET request against an API endpoint, but it seems like the OLE object isn't present in the Azure version of SQL Server. Is there any other way to make an API call available in Azure SQL Database, or do I need to put in place something outside of the database to accomplish this?
There are several options. I do not know whether a PowerShell job, as stated in the first comment to your question, can execute HTTP requests, but I do know of at least a couple of other options:
Azure Data Factory allows you to create scheduled pipelines to copy/transform data from a variety of sources (like HTTP endpoints) to a variety of destinations (like Azure SQL databases). This involves little or no scripting.
Azure Logic Apps allows you to do the same:
With Azure Logic Apps, you can integrate (cloud) data into (on-premises) data storage. For instance, a logic app can store HTTP request data in a SQL Server database.
Logic apps can be triggered on a schedule as well and involve little or no scripting.
You could also write an Azure Function that is executed on a schedule, calls the HTTP endpoint, and writes the result to the database. Multiple languages are supported for writing functions, such as C# and PowerShell.
All of these options also let you trigger an execution outside the schedule.
In my opinion, Azure Data Factory (no coding) or an Azure Function (code only) are the best options given the need to parse a lot of JSON data. But do keep in mind that Azure Functions on a Consumption Plan have a maximum allowed execution time of 10 minutes per invocation.
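As a rough sketch of the Azure Functions route (Python shown here; the endpoint, table and connection string are placeholders), a timer-triggered function could look something like this:

    # Timer-triggered Azure Function (Python): fetch JSON from an API endpoint
    # and insert it into Azure SQL Database. Endpoint, table and connection
    # string are placeholders - adapt them to your environment.
    import os

    import azure.functions as func
    import pyodbc
    import requests

    def main(mytimer: func.TimerRequest) -> None:
        # Call the external API for the nightly data import.
        items = requests.get("https://api.example.com/nightly-data", timeout=30).json()

        # Write the results into Azure SQL Database.
        conn = pyodbc.connect(os.environ["SQL_CONNECTION_STRING"])
        cursor = conn.cursor()
        for item in items:
            cursor.execute(
                "INSERT INTO dbo.ImportedData (ExternalId, Payload) VALUES (?, ?)",
                (item["id"], str(item)),
            )
        conn.commit()
        conn.close()

The schedule itself lives in the function's timer trigger binding (a CRON expression in function.json).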

Is it possible to execute a query on kiwi-tcms tables?

I need to access the test case information and use it in a different format so I can back it up in SharePoint as a plain file; i.e., I want to be able to extract some data, such as test cases or plans, via a query or something like that.
Is it possible to execute a query on kiwi-tcms tables?
Obviously yes. It's a database, so you connect to it and run your SQL queries as you wish.
For full backups see the official method at:
https://kiwitcms.org/blog/atodorov/2018/07/30/how-to-backup-docker-volumes-for-kiwi-tcms/
For more granular access you can use the existing API interface. See
https://kiwitcms.readthedocs.io/en/latest/api/index.html and in particular https://kiwitcms.readthedocs.io/en/latest/modules/tcms.rpc.api.html
For even more granular/flexible access you can interact with the ORM models directly. See https://docs.djangoproject.com/en/3.2/ref/django-admin/#shell, https://docs.djangoproject.com/en/3.2/ref/models/querysets/ and https://github.com/kiwitcms/api-scripts/blob/master/perf-script-orm for examples. The Kiwi TCMS database schema is documented at https://kiwitcms.readthedocs.io/en/latest/db.html.
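As an illustration of the API route, exporting test cases to a plain file from Python could look roughly like this. It assumes the tcms-api package is installed (pip install tcms-api) and configured via ~/.tcms.conf; the filter and field names are only examples, so check the RPC documentation linked above for what your version exposes.

    # Minimal sketch: export test cases from Kiwi TCMS via its RPC API into a CSV
    # file suitable for archiving. tcms-api reads the server URL and credentials
    # from ~/.tcms.conf. The filter and field names are examples only.
    import csv
    from tcms_api import TCMS

    rpc = TCMS().exec
    cases = rpc.TestCase.filter({"plan": 1})   # e.g. all cases in test plan #1

    with open("test_cases.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "summary", "text"])
        for case in cases:
            writer.writerow([case.get("id"), case.get("summary"), case.get("text")])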

Auditing SQL Server changes with triggers and a different user id from NodeJS

I'm building a 3-tier application with AngularJS, NodeJS+Express and SQL Server. I'm currently sending each individual query from the backend to the database to be executed. Though that's working well, I now have a legal requirement to audit many of the changes in the database.
For that, I thought about using triggers. However, the problem is that the user identified in the web application is different from the generic user logged into the database, so I don't know how to deal with this.
I thought about converting every query in the backend into a call to a stored procedure, passing the user id each time as an extra field from the frontend to the backend. However, the user id I get there is not the database user. I can't use a temporary table, as I have read in other posts, because this database is about to be accessed by thousands of users at the same time, so it's not safe in my case. I also saw other solutions, but they applied to Postgres (I need a generic solution, because this application has different versions working with SQL Server, Oracle and MySQL). Finally, I also can't write code or configuration on the server for each user, because potential clients will try the trial version without human intervention.
I need a solution that doesn't hurt performance, as that's important for the success of this application.
Thank you very much.
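For clarity, the stored-procedure idea described above amounts to something like the following, shown in Python/pyodbc for brevity rather than NodeJS; the procedure and parameter names are hypothetical. Every write goes through a procedure that also receives the application-level user id, which the procedure or an audit trigger can then record.

    # Hypothetical sketch of the approach described in the question: each write is
    # routed through a stored procedure that receives the *application* user id as
    # an extra parameter, so the audit logic can record who made the change.
    import pyodbc

    def update_customer_email(conn, app_user_id, customer_id, new_email):
        cursor = conn.cursor()
        cursor.execute(
            "EXEC dbo.usp_UpdateCustomerEmail @AppUserId = ?, @CustomerId = ?, @NewEmail = ?",
            (app_user_id, customer_id, new_email),
        )
        conn.commit()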

How should I access a SQL Server DB?

I have been reading that direct access to a SQL Server database over the Internet is insecure, so I am wondering what intermediary I can and should use between the client and the server. What are the best practices in terms of security and performance?
For direct access, you would have to use SSL on your connections, but generally, I wouldn't expose a database server to the internet. I would design my way around it, for example by creating web services in front of the db server.
Use an API (Application Programming Interface). This is the front door to the data you wish to expose. This means you will need to define what you expose and how.
For example, Stack Overflow does not allow anyone to access their database directly. But they have allowed people to access certain parts of their database via their Stack Apps API. What parts? They have exposed certain parts with their own API: web URLs that spit back data based upon what you request. The results are in JSON format only (at the time of posting this answer).
Here is a sample API method that exposes some of their database (EDIT: none of the API links work any more; the link I was trying to show was http://api.stackoverflow.com/0.8/help/method?method=answers/{id}/).
Now, if you don't want to think about which data to expose (e.g. which DB tables, if you're using a relational database like Microsoft SQL Server or Oracle) but want to expose the ENTIRE database via the web, then maybe you could look at putting OData in front of your DB to expose it.
Another edit: I was assuming you meant allowing the public to access your DB, not private access. Otherwise, this should be on ServerFault.
I'd written this lovely reply pertaining to web access to a SQL server, and then you went and updated the question, stating that you have a desktop app in place.
With that, as was said above, the best idea is not to expose a database server to the internet. If you absolutely have to, then there are a few possible solutions.
Implement some sort of VPN connection into the network. I had one instance where we had a large number of sites all connecting to a database server (and company network) via VPN. This kept the database server off of the internet while still allowing half-decent access times to the information. This was for a retail environment without a great deal of data throughput.
Properly set up your firewalls and permissions on the server. This should be done anyway. You could put the server behind a firewall, allowing access only on port 1433, and only from a specific IP range (which I assume would be possible). This way, you can at least lower the number of locations a possible attack could come from.
This could all be employed in addition to the APIs and services mentioned above.
You can use a config.php file. Put the DB name, DB user, DB password, and host in config.php. Then you can include it in your page with:

    <?php require("config.php"); ?>
You could just have a page in your web site's language (e.g. PHP, JSP, ASP, etc...) that queries the DB and returns the data you need in whatever format you need. For example:
If you're using jQuery:
from the client-side:
$.ajax({
    url: 'ajax/test.php',
    success: function(data) {
        $('.result').html(data);
        alert('Load was performed.');
    }
});
Here, test.php would connect to the DB and query it and the result of test.php would be returned in the 'data' variable.
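If the server side is not PHP, the same idea applies in any stack. As a rough sketch, a minimal Python/Flask version of such an endpoint (the table, columns and connection string are placeholders) could return the query results as JSON:

    # Minimal sketch of a server-side endpoint that queries the database and returns
    # JSON to the client. Flask and pyodbc are used as an example; the table,
    # columns and connection string are placeholders.
    import pyodbc
    from flask import Flask, jsonify

    app = Flask(__name__)
    CONNECTION_STRING = ("DRIVER={ODBC Driver 18 for SQL Server};"
                         "SERVER=db.example.com;DATABASE=appdb;UID=api;PWD=secret")

    @app.route("/ajax/test")
    def get_data():
        conn = pyodbc.connect(CONNECTION_STRING)
        cursor = conn.cursor()
        cursor.execute("SELECT id, name FROM dbo.Items")   # placeholder query
        rows = [{"id": row.id, "name": row.name} for row in cursor.fetchall()]
        conn.close()
        return jsonify(rows)   # this is what ends up in the jQuery 'data' variable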
