I need to access the test case information and export it in a different format so I can back it up to SharePoint as a plain file, i.e. I want to be able to extract data such as test cases or test plans via a query or something similar.
Is it possible to execute a query on the Kiwi TCMS tables?
Obviously yes. It's a database so you connect to it and run your SQL queries as you wish.
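For example, here is a minimal sketch of that direct-SQL route in Python, assuming the default Docker deployment with a MariaDB backend and the pymysql driver installed; the host, credentials and database name are placeholders, and the table/column names should be checked against the schema documentation linked further down:

# Minimal sketch: query the Kiwi TCMS database directly with plain SQL.
# Host, credentials and database name are placeholders for your own
# deployment; verify table/column names against the schema documentation.
import pymysql

conn = pymysql.connect(
    host="db.example.com",   # placeholder
    user="kiwi",             # placeholder
    password="changeme",     # placeholder
    database="kiwi",         # placeholder
)
cur = conn.cursor()
cur.execute("SELECT id, summary FROM testcases_testcase")
for case_id, summary in cur.fetchall():
    print(case_id, summary)
cur.close()
conn.close()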
For full backups see the official method at:
https://kiwitcms.org/blog/atodorov/2018/07/30/how-to-backup-docker-volumes-for-kiwi-tcms/
For more granular access you can use the existing API interface. See
https://kiwitcms.readthedocs.io/en/latest/api/index.html and in particular https://kiwitcms.readthedocs.io/en/latest/modules/tcms.rpc.api.html
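As a rough sketch of that RPC route, assuming the tcms-api Python client is installed and ~/.tcms.conf points at your instance with valid credentials as described in the API docs (the filter criteria and output file name below are placeholders):

# Rough sketch: pull test cases over the RPC API and dump them to a CSV
# file that can be uploaded to SharePoint. Requires "pip install tcms-api"
# and a configured ~/.tcms.conf; the plan ID is a placeholder.
import csv

from tcms_api import TCMS

rpc = TCMS().exec  # RPC proxy configured from ~/.tcms.conf

cases = rpc.TestCase.filter({"plan": 1})  # placeholder filter

with open("test_cases.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["id", "summary"])
    for case in cases:
        writer.writerow([case["id"], case["summary"]])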
For even more granular/flexible access you can interact with the ORM models directly. See https://docs.djangoproject.com/en/3.2/ref/django-admin/#shell, https://docs.djangoproject.com/en/3.2/ref/models/querysets/ and https://github.com/kiwitcms/api-scripts/blob/master/perf-script-orm for examples. The Kiwi TCMS database schema is documented at https://kiwitcms.readthedocs.io/en/latest/db.html.
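And a minimal sketch of the ORM route, in the spirit of the perf-script-orm example above, meant to be run from manage.py shell inside the Kiwi TCMS container; the model path and field names are my reading of the schema and should be double-checked against the documentation:

# Minimal ORM sketch, to be run via "manage.py shell" inside the container.
# The plan ID is a placeholder; verify field names against the schema docs.
from tcms.testcases.models import TestCase

for case in TestCase.objects.filter(plan__pk=1).only("pk", "summary"):
    print(f"{case.pk}\t{case.summary}")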
Is there a way to include a document from another RavenDB database instance so it can be loaded in our current store session?
The question arises from not being able to have categorized collections in RavenDB Studio, which makes it annoying to scroll around to find the desired collection.
In other words, keeping several bounded contexts in the same document store doesn't look good, so the best solution seems to be splitting them into separate stores to make things more efficient and readable.
I know it is not best practice to store different bounded contexts in the very same DB instance, but what if I need that?
Update:
It seems that cross-database functions are not available in RavenDB.
If you need to pass info/documents between two different RavenDB databases, you can always use the External Replication task or the RavenDB ETL task.
RavenDB ETL Task:
https://ravendb.net/docs/article-page/5.2/csharp/studio/database/tasks/ongoing-tasks/ravendb-etl-task
External Replication Task:
https://ravendb.net/docs/article-page/5.2/csharp/studio/database/tasks/ongoing-tasks/external-replication-task
With the ETL task option, you can use a script to define and/or filter what is sent to the other RavenDB database. Once a document reaches the target database you can use/load/include it as usual.
We have a large enterprise system which has many databases on a single server (Sybase).
Developers will make a change in one db, script it, then maybe make a change in another db, add that to the list of scripts and so on.
Our release then runs through these scripts making changes to the objects in different databases in the same order.
Reading the Liquibase documentation, it seems like it would work if you applied all the changes to one db, then another, then another. That wouldn't really work in our case, as a change in one db may rely on a change made earlier in another db and vice versa.
How could I use Liquibase to do the same?
You probably need to start looking at Datical DB (disclaimer: I work at Datical), which is a set of tools and extensions around Liquibase to handle these kinds of situations.
Alternatively, you could do something similar, writing your own tools to control Liquibase. Liquibase is controllable at several different levels - you could use a scripting tool to execute the Liquibase command line, or you could use Java or Groovy (or any other language that integrates with tools in the JVM) and use the Liquibase classes more directly.
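For illustration, here is a rough sketch of that "script around the Liquibase command line" idea in Python: keep one ordered list of (database, changelog) steps and apply them in release order, just like your existing scripts. The JDBC URLs, credentials and changelog paths are placeholders, and the exact CLI flag spelling (--changeLogFile vs. --changelog-file) depends on your Liquibase version:

# Rough sketch of driving the Liquibase CLI from a script, applying
# changelogs to several databases in a fixed release order. URLs,
# credentials and changelog paths are placeholders.
import subprocess

RELEASE_STEPS = [
    # (jdbc_url, changelog_file) -- order matters, exactly like your scripts
    ("jdbc:sybase:Tds:dbhost:5000/db_one", "changelogs/db_one/release-42.xml"),
    ("jdbc:sybase:Tds:dbhost:5000/db_two", "changelogs/db_two/release-42.xml"),
    ("jdbc:sybase:Tds:dbhost:5000/db_one", "changelogs/db_one/release-42b.xml"),
]

for url, changelog in RELEASE_STEPS:
    subprocess.run(
        [
            "liquibase",
            f"--url={url}",
            "--username=release_user",      # placeholder
            "--password=secret",            # placeholder
            f"--changeLogFile={changelog}",
            "update",
        ],
        check=True,  # stop the release if any step fails
    )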
Liquibase does not currently support connecting to multiple different databases. It is being considered for version 4.0.
If your databases are the same database instance but different schemas, you can use the schemaName attribute to target changeSets at different schemas from one changeSet to the next. You will need a single connection URL that has access to all of the schemas.
If your databases are different instances or not all accessible from a single connection URL, you can probably create custom change classes or extensions that allow you to run SQL against different connections, although that will not be as clean or easy as the schemaName option.
We have several SQL Server databases containing measurements from generators that we build. However, this useful data is only accessible to a few engineers, since most are unfamiliar with SQL (including me). Are there any tools that would allow an engineer to extract chosen subsets of the data in order to analyze it in Excel or another environment? The ideal tool would
protect the database from any accidental changes,
require no SQL knowledge to extract data,
be very easy to use, for example with a GUI to select fields and the chosen time range,
allow export of the data values into a file that could be read by Excel,
require no participation/input from the database manager for the extraction task to run, and
be easy for a newbie database manager to set up.
Thanks for any recommendations or suggestions.
First off, I would never let users run their own queries on a production machine. They could run table scans or some other performance killer all day.
We have a similar situation, and we generally create custom stored procedures for the users to "call", and only allow access to a backup server running "almost live" data.
Our users are familiar with Excel, so I create a stored procedure with ample parameters for filtering/customization, and they can easily call it by using something like:
EXEC YourProcedureName '01/01/2010','12/31/2010','Y',null,1234
I document exactly what the parameters do, and they generally are good to go from there.
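If you ever want to sanity-check the same call outside Excel, a quick sketch with pyodbc mirrors the EXEC line above (the DSN name, credentials, procedure name and parameter values are placeholders):

# Quick sketch: call the same parameterized stored procedure via pyodbc.
# DSN, credentials and parameter values are placeholders.
import pyodbc

conn = pyodbc.connect("DSN=ReportingServer;UID=report_user;PWD=secret")
cur = conn.cursor()
cur.execute(
    "EXEC YourProcedureName ?, ?, ?, ?, ?",
    ("01/01/2010", "12/31/2010", "Y", None, 1234),
)
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()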
To set up an Excel query you'll need to set up the data source on the user's PC (Control Panel - Data Sources - ODBC), which will vary slightly depending on your version of Windows.
From within Excel, you need to set up the "query", which is just the EXEC command from above. Depending on the version of Excel, it should be something like: menu - Data - Import External Data - New Database Query. Then choose the data source, connect, skip the table diagram maker and enter the above SQL. Also, don't try to make one procedure do everything; make different ones based on what they do.
Once the data is on the Excel sheet, our users pull it to other sheets and manipulate it at will.
Some users are a little advanced and "try" to write their own SQL, but that is a pain. I end up debugging and fixing their incorrect queries. Also, once you do correct the query, they always tinker with it and break it again. Using a stored procedure means that they can't change it, and I can put it with our other procedures in the source code repository.
I would recommend you build your own in Excel. Excel can make queries to your SQL Server Database through an ODBC connection. If you do it right, the end user has to do little more than click a "get data" button. Then they have access to all the GUI power of Excel to view the data.
Excel allows you to load the output of stored procedures directly into a tab. That, IMO, is the best way: users need no knowledge of SQL, they just invoke a procedure, and there are no extra moving parts besides Excel and your database.
Depending on your version of SQL Server, I would be looking at some of the excellent self-service BI tools that come with the later editions, such as Report Builder. This is like a stripped-down version of Visual Studio with all the complex bits taken out and just the simple reporting bits left in.
If you set up a shared data source that logs into the server with quite low access rights, then the users can build reports but not edit anything.
I would echo the comments by KM that letting the great unwashed run queries on a production system can lead to some interesting results, with either the wrong query being used, or massive table scans, or Cartesian joins, etc.
I have a client who owns a business with a handful of employees. He has a product website that has several hundred static product pages that are updated periodically via FTP.
We want to change this to a data-driven website, but the database (which will be hosted at an ISP) will have to be updated from data on my client's servers.
How best to do this on a shoestring? Can the database be hot-swapped via FTP, or do we need to build a web service we can push changes to?
Ask the ISP about the options. Some ISPs allow you to FTP-upload the .mdf (database file).
Some will allow you to connect with SQL Server Management Studio.
Some will allow both.
You've got to ask the ISP.
Last time I did this, we created XML documents that were FTP'd to the website. We had an admin page that would clear out the old data by running some stored procs to truncate the tables, then import the XML docs into the SQL tables.
Since we didn't have the whole server to ourselves, there was no access to SQL Server DTS to schedule this stuff.
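Purely as an illustration of that truncate-then-re-import flow, here is a small Python sketch using pyodbc and ElementTree; the connection string, stored proc, table and XML layout are all hypothetical:

# Illustrative sketch of "clear old data, then re-import from an uploaded
# XML file". Connection string, proc name, table and XML layout are
# hypothetical placeholders.
import pyodbc
import xml.etree.ElementTree as ET

conn = pyodbc.connect("DSN=ProductSite;UID=admin_user;PWD=secret")
cur = conn.cursor()

# Clear out the old data (the job the admin page's stored procs did).
cur.execute("EXEC dbo.TruncateProductTables")

# Load the uploaded XML and insert the fresh rows.
tree = ET.parse("products.xml")
for product in tree.getroot().findall("product"):
    cur.execute(
        "INSERT INTO dbo.Products (Sku, Name, Price) VALUES (?, ?, ?)",
        (
            product.findtext("sku"),
            product.findtext("name"),
            product.findtext("price"),
        ),
    )

conn.commit()
cur.close()
conn.close()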
There is a Database Publishing Wizard from MS which will take all your data and create a SQL file that can then be run on the ISP. It will also, though I've never tried it, go directly to an ISP database. There is an option button on one of the wizard screens that does it.
It does require the user to have a little training and it's still a manual process, so maybe not what you're after, but I think it will do the job.
Long-term, building a service to upload the data is probably the cleanest solution, as the app can then control its import procedures. You could go grossly simple with this and just have the local copy dump some sort of XML that the app could read, making it not much harder than uploading the file while still in the automatable category. Having this import procedure would also help with development, as you now have an automated and repeatable way to sync data.
This is what I usually do:
You could use a tool like Red Gate's SQL Data Compare to do this. The tool compares data between two catalogs (on the same or different servers) and generates a script for syncing them.
It seems like something like this should exist, but I have never heard of it and would find such a utility incredibly useful. Many times, I develop applications that have a database backend - SQL Server or Oracle. During development, end users of the app are encouraged to test the site - I can verify this by looking for entries in the database... if there are entries, they have been testing... if not, they haven't.
What I would like is a tool/utility that would do this checking for me. I would specify the database and connection parameters, and the tool would poll the database periodically (based on values that I specify) and alert me if there was any new activity in the database (perhaps it would pop up a notification in the system tray). I could also specify multiple database scenarios to monitor in the tool. If such an app existed, I wouldn't have to manually run queries against databases for new activity. I'm aware of SQL Profiler, but when I reviewed it, it seemed like overkill for what I wanted to do (and it also wouldn't do the Oracle DB monitoring). Also, to use SQL Profiler, you have to be an admin of the database. I would need to monitor databases where I only have a read-only account.
Does someone know if such a tool exists?
Sounds like something really easy to write yourself. Just query the database schema, then do a select count(*) or select max(lastUpdateTime) query on each table and save the result. If something is different, send yourself an email. JDBC in Java gives you access to the schema information in a cross-database manner. I don't know about ADO.
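A rough sketch of that idea in Python, using pyodbc instead of JDBC against a SQL Server read-only account; the connection string and polling interval are placeholders, and the "alert" is just a print to swap for email or a tray notification:

# Rough sketch: periodically snapshot row counts per table and report
# any change. Connection string is a placeholder; the bracket-quoted
# table names assume SQL Server.
import time
import pyodbc

CONN_STR = "DSN=DevDatabase;UID=readonly_user;PWD=secret"   # placeholder
POLL_SECONDS = 300

def snapshot(conn):
    """Return {table_name: row_count} for every user table."""
    cur = conn.cursor()
    tables = [
        f"[{row.table_schem}].[{row.table_name}]"
        for row in cur.tables(tableType="TABLE")   # ODBC schema metadata
    ]
    counts = {}
    for table in tables:
        counts[table] = cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    cur.close()
    return counts

conn = pyodbc.connect(CONN_STR)
previous = snapshot(conn)
while True:
    time.sleep(POLL_SECONDS)
    current = snapshot(conn)
    for table, count in current.items():
        if count != previous.get(table):
            print(f"New activity in {table}: {previous.get(table)} -> {count}")
    previous = current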