RavenDb Different Database Instance Include - data-modeling

Is there a way to include a document from another RavenDB database instance so it can be loaded in our current store session?
The question stems from not being able to have categorized collections in RavenDB Studio, so it's annoying to scroll around to find a desired collection!
In other words, keeping several bounded contexts in the same document store doesn't look good, so the best solution seems to be splitting the stores to make things more efficient and readable as well.
I know it's not best practice to store different bounded contexts in the very same db instance, but what if I need that!
Update:
It seems cross-database functions are not available in RavenDB.

If you need to pass info/documents between two different RavenDB databases, you can always use the External Replication task or the RavenDB ETL task.
RavenDB ETL Task:
https://ravendb.net/docs/article-page/5.2/csharp/studio/database/tasks/ongoing-tasks/ravendb-etl-task
External Replication Task:
https://ravendb.net/docs/article-page/5.2/csharp/studio/database/tasks/ongoing-tasks/external-replication-task
With the ETL task option, you can use a script to define and/or filter what is sent to the other RavenDB database. Once a document reaches the target database you can use/load/include it as usual.
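For illustration, here is a rough C# sketch of how such an ETL task could be defined through the client API (following the docs linked above). The URLs, database names and the filter in the transform script are placeholders, not part of the original answer:

```csharp
using Raven.Client.Documents;
using Raven.Client.Documents.Operations.ConnectionStrings;
using Raven.Client.Documents.Operations.ETL;

// Connect to the source database (URLs/names below are placeholders).
using var store = new DocumentStore
{
    Urls = new[] { "http://localhost:8080" },
    Database = "SourceDb"
};
store.Initialize();

// Register a connection string pointing at the target database.
store.Maintenance.Send(new PutConnectionStringOperation<RavenConnectionString>(
    new RavenConnectionString
    {
        Name = "TargetDbConnection",
        Database = "TargetDb",
        TopologyDiscoveryUrls = new[] { "http://localhost:8080" }
    }));

// Add an ETL task: only Orders documents that pass the filter are sent over.
store.Maintenance.Send(new AddEtlOperation<RavenConnectionString>(
    new RavenEtlConfiguration
    {
        Name = "Orders to TargetDb",
        ConnectionStringName = "TargetDbConnection",
        Transforms =
        {
            new Transformation
            {
                Name = "FilteredOrders",
                Collections = { "Orders" },
                Script = "if (this.Lines.length > 0) loadToOrders(this);"
            }
        }
    }));
```

Once the task is running, documents loaded into the target database can be queried, loaded and included there like any other documents.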

Related

Is it possible to execute a query on kiwi-tcms tables?

I need to access the test case information and use it in a different format so I can back it up to SharePoint as a plain file; i.e. I want to be able to extract some data like test cases or plans via a query or something like that.
Is it possible to execute a query on kiwi-tcms tables
Obviously yes. It's a database, so you connect to it and run your SQL queries as you wish.
For full backups see the official method at:
https://kiwitcms.org/blog/atodorov/2018/07/30/how-to-backup-docker-volumes-for-kiwi-tcms/
For more granular access you can use the existing API interface. See
https://kiwitcms.readthedocs.io/en/latest/api/index.html and in particular https://kiwitcms.readthedocs.io/en/latest/modules/tcms.rpc.api.html
For even more granular/flexible access you can interact with the ORM models directly. See https://docs.djangoproject.com/en/3.2/ref/django-admin/#shell, https://docs.djangoproject.com/en/3.2/ref/models/querysets/ and https://github.com/kiwitcms/api-scripts/blob/master/perf-script-orm for examples. The Kiwi TCMS database schema is documented at https://kiwitcms.readthedocs.io/en/latest/db.html.
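As a sketch only (not an official client), the API can also be reached from C# over the JSON-RPC endpoint described in those docs; the host, credentials, plan id and the exact request/session flow below are assumptions you would verify against your own instance:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Json;

// Assumption: the Kiwi TCMS instance exposes the JSON-RPC endpoint (/json-rpc/)
// and keeps the login session in a cookie. Host, credentials and the plan id
// are placeholders; otherwise use XML-RPC or the official Python tcms-api client.
var cookies = new CookieContainer();
using var http = new HttpClient(new HttpClientHandler { CookieContainer = cookies })
{
    BaseAddress = new Uri("https://kiwi.example.com")
};

// Log in; the session cookie set here authenticates the next call.
await http.PostAsJsonAsync("/json-rpc/", new
{
    jsonrpc = "2.0",
    method = "Auth.login",
    @params = new object[] { "username", "password" },
    id = 1
});

// Fetch test cases for one plan and dump the raw JSON, e.g. to a file that
// can then be uploaded to SharePoint.
var response = await http.PostAsJsonAsync("/json-rpc/", new
{
    jsonrpc = "2.0",
    method = "TestCase.filter",
    @params = new object[] { new { plan = 42 } },
    id = 2
});
Console.WriteLine(await response.Content.ReadAsStringAsync());
```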

Is it possible to store custom metadata for SSAS objects within SSAS? (for versioning)

I am trying to implement a form of versioning for our company's SQL Analysis Services Databases.
At the moment we do a very simple drop and recreate on each deployment, using the SQLPS PowerShell module and XMLA, but this causes hindrances because large measure groups have to be reprocessed after the database is recreated. Where possible we would like to reduce the deployment window, since it affects the backlog of transactions that must be reprocessed after deployment while the system catches up again.
I am therefore trying to implement a form of versioning so that objects only need to be dropped and recreated when there is actually a schema or model change. Our entire deployment process is automated, so in those cases we will simply book a longer deployment slot.
I am trying to find out whether there is any functionality within SSAS itself that lets you store some text value, perhaps a version number for the database, which could then be used to decide whether we need to drop and recreate the SSAS database.
So far I could not find anything, so my best bet is to manage the version number of the database via its related SQL database instance: I use a tracking table within the SQL instance to check whether the latest version of this release has already been deployed.
Does anyone know of any method by which such custom metadata could be added to the SSAS objects, other than modifying their names, which I would like to avoid?
Has anyone dealt with a similar scenario, and if so, how did you approach it?
I would use Annotations. Almost all components of an SSAS cube have an Annotations property, which is a collection of Annotation structures. Each Annotation has a Name property, which acts as the key, and a Value that stores an arbitrary string.
The good thing about annotations is that they are stored in the SSAS metadata and can be retrieved back from the server once the cube is deployed.
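For example, a rough AMO sketch of storing and checking a version number in a database-level annotation might look like this; the server, database and version values are placeholders, and exact AMO member signatures may differ slightly between versions:

```csharp
using Microsoft.AnalysisServices;   // AMO; tabular models have an equivalent TOM API

// Rough sketch only: member names/overloads may vary between AMO versions,
// and server/database names and the version string are placeholders.
var server = new Server();
server.Connect("Data Source=localhost");

Database db = server.Databases.FindByName("SalesCube");

// Look for a previously stored version annotation.
Annotation existing = db.Annotations.Find("DeployedVersion");
string deployedVersion = existing?.Value;

if (deployedVersion != "1.4.0")   // hypothetical target version
{
    // ... drop/recreate or alter objects here ...

    if (existing == null)
        db.Annotations.Add("DeployedVersion", "1.4.0");
    else
        existing.Value = "1.4.0";

    db.Update(UpdateOptions.ExpandFull);   // persist the annotation on the server
}

server.Disconnect();
```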

Solr for different accounts in a system

I'm working on a SaaS product which has a database for each account, with basically the same tables. What's the best way to index all the databases separately? I was thinking about setting up a different Solr instance (on a different port) for each database on the same server, but that could be hard on the server. So I'm stuck on what to do next. I haven't found any useful idea in the Solr documentation. Could you guys help out? Thanks in advance.
If you store all the data from all of your tenants in one collection, it will be easy in the beginning, because you will probably make several changes to your schema and it is easier to make them once for all your customers.
As a negative point in this scenario, you will have lots of unrelated data grouped together and you always have to use a filter query on the tenant (client) id.
What if you create, for starters, a collection for each tenant on the same Solr server? This way you don't mix your tenants' data and you achieve the functionality you basically need.
In this scenario, as with your relational database instances, you have to keep the schema changes in sync.
For relational databases there are tools like Flyway or Liquibase that can be used to version the changes applied to each tenant database.
For Solr there aren't such tools AFAIK, but you can apply your schema changes programmatically through the Solr Schema API, as shown in the sketch below. In case you have to make very detailed changes that can't be done via the Schema API, you can replace the schema.xml file of each collection with an updated version and restart the Solr server.
Keep backward compatibility in mind: whenever you make changes to any of the databases (relational DB or Solr), the old code must still work with the latest updates to the relational database / Solr schema structure.
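As a sketch of that Schema API approach (not a complete migration tool), the same field change could be pushed to every tenant collection like this; the host, collection names and field definition are placeholders:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;

// Sketch: push the same Schema API change to every tenant's collection.
// Host, collection names and the field definition are placeholders.
string[] tenantCollections = { "tenant_a", "tenant_b", "tenant_c" };
using var http = new HttpClient { BaseAddress = new Uri("http://localhost:8983") };

// One "add-field" command, applied to each collection's /schema endpoint.
var addField = new Dictionary<string, object>
{
    ["add-field"] = new { name = "created_at", type = "pdate", stored = true }
};

foreach (var collection in tenantCollections)
{
    var response = await http.PostAsJsonAsync($"/solr/{collection}/schema", addField);
    response.EnsureSuccessStatusCode();
}
```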

How to update all tenant schemas in a multi-tenant app?

I am developing a multi-tenant app. I chose the "Shared Database/Separate Schemas" approach.
My idea is to have a default schema (dbo) and, when deploying this schema, to update the tenants' schemas (tenantA, tenantB, tenantC); in other words, to keep the schemas synchronized.
How can I synchronize the schemas of tenants with the default schema?
I am using SQL Server 2008.
The first thing you will need is a table or other mechanism to store the version information of the schema, if nothing else so that you can bind your application and schema together. There is nothing more painful than running a version of the application against the wrong schema: failures, corrupted data, etc.
The application should reject the database or shut down if it is not the right version. You might get some blowback when it's not right, but it protects you from the really bad day when the database corrupts the valuable data.
You'll need a way to track changes, such as Subversion or something else. From SQL you can export the initial schema. From there you will need a mechanism to track changes, using a nice tool like SQL Compare, and then match each schema change to an update of the version number in the target database.
We keep each delta in a separate folder beneath the upgrade utility we built. This utility signs on to the server, reads the version info and then applies the transform scripts for each following version until it can find no more upgrade scripts in its subfolder. This gives us the ability to upgrade a database to the current version no matter how old it is. If there are data transforms unique to a tenant, those are going to get tricky.
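A rough sketch of such an upgrade loop might look like the following; the SchemaVersion table, folder layout and connection string are assumptions for illustration only:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

// Assumptions: a single-row dbo.SchemaVersion table (hypothetical name) holds the
// current version, and delta scripts live in folders named after the version they
// upgrade to (deltas\2, deltas\3, ...). GO batch separators are not handled here;
// real scripts usually need to be split on GO or run through sqlcmd instead.
const string connectionString = "Server=localhost;Database=TenantA;Integrated Security=true;";

using var connection = new SqlConnection(connectionString);
connection.Open();

while (true)
{
    // Read the version currently deployed in this database.
    int version;
    using (var read = new SqlCommand("SELECT TOP 1 Version FROM dbo.SchemaVersion", connection))
        version = (int)read.ExecuteScalar();

    // Stop when there is no folder for the next version.
    string nextFolder = Path.Combine("deltas", (version + 1).ToString());
    if (!Directory.Exists(nextFolder))
        break;

    // Apply every script for that version, in file-name order.
    foreach (var scriptFile in Directory.GetFiles(nextFolder, "*.sql").OrderBy(f => f))
    {
        using var apply = new SqlCommand(File.ReadAllText(scriptFile), connection);
        apply.ExecuteNonQuery();
    }

    using var bump = new SqlCommand("UPDATE dbo.SchemaVersion SET Version = Version + 1", connection);
    bump.ExecuteNonQuery();
}
```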
Of course you should always make a backup of the database to an external file, preferably with a human-identifiable version number, so you can find it and restore it when the script(s) go bad. And eventually they will, so just plan on figuring out how to recover and restore.
I saw there is some sort of schema upgrade tool in the new VS 2010, but I haven't used it. That might also be useful to you.
There is no magic command to synchronize the schemas as far as I know. You would need to use a tool, either built in-house or bought (check out Red Gate's SQL Compare and SQL Examiner; you need to tweak them to compare different schemas).
Just synchronizing can often be tricky business, though. If you added a column, do you also need to fill that column with data? If you split a column into two new columns, there has to be conversion code for something like that.
My suggestion would be to very carefully track any scripts that you run against the dbo schema and make sure that they also get run against the other schemas when appropriate. You can then use a tool like SQL Compare as an occasional sanity check to look for any unexpected differences.

How to have a "master-structure" database with "children-data" databases in SQL SERVER 2005?

I have been googling a lot and I couldn't find out whether this even exists or I'm asking for some magic =P
Ok, so here's the deal.
I need a way to create a "master-structure" database which will contain only the schemas, structures, tables, stored procedures, UDFs, etc., everything but real data, in SQL SERVER 2005 (if this is available in 2008, let me know, I could try to convince my client to pay for it =P)
Then I want to have several "children" of that master db which implement those schemas, tables, etc., but where each one has different data.
So when I need to create a new stored procedure or something like that, I just create it on the master database (and of course it becomes available on its children).
Actually I have several different databases with the same schema and different data. But the problem is maintaining consistency between them. Every time I create a script to create some SP or add some index or whatever, I have to execute it in every database, and sometimes I miss one =P
So let's say you have a UNIVERSE (that would be the master db) and the universe has SPACES (each one represented by a child db). The application I'm working on needs to dynamically "clone" SPACES. To do that, we have to create a new database. Nowadays I create a backup of the db being cloned, restore it as a new one and truncate the tables.
I want to be able to create a new "child" of the "master" db, which will maintain the schemas and everything, but will start with empty data.
Hope it's clear... my English is not perfect, sorry about that =P
Thanks to all!
What you really need is to version-control your database schema.
See do-you-source-control-your-databases
If you use SQL Server, I would recommend dbGhost - not expensive and does a great job of:
synchronizing 2 databases
diff-ing 2 databases
creating a database from a set of scripts (I would recommend this version).
batch support, so that you can upgrade all your databases using a single batch
You can use this infrastructure for both:
rolling development versions to test, integration and production systems
rolling your 'updated' system to multiple production deployments (especially in a hosted environment)
I would write my changes as a SQL file and use OSQL or SQLCMD via a batch file to ensure that I repeatedly execute it against all the databases without thinking about it.
As an alternative, I would use the Visual Studio Database Pro tools or the Red Gate SQL Compare tools to compare and propagate the changes.
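As an illustration of that idea, here is a small sketch that shells out to sqlcmd for each database from C# instead of a batch file; the instance, database names and script path are placeholders:

```csharp
using System;
using System.Diagnostics;

// Run one change script against every child database with sqlcmd.
// -b makes sqlcmd return a non-zero exit code when the script fails.
string[] databases = { "SpaceA", "SpaceB", "SpaceC" };   // placeholder child dbs

foreach (var db in databases)
{
    using var process = Process.Start(new ProcessStartInfo
    {
        FileName = "sqlcmd",
        Arguments = $"-S localhost -d {db} -i changes.sql -b",
        UseShellExecute = false
    });
    process.WaitForExit();
    if (process.ExitCode != 0)
        Console.WriteLine($"Script failed on {db}");
}
```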
There are kludges, but the mainstream way to handle this is still to use Source Code Control (with all its other attendant benefits.) And SQL Server is increasingly SCC friendly.
Also, for many (most robust) sites it's a per-server issue as much as a per-database issue.
You can put things like SPs in master and call them from anywhere. As for other objects like tables, you can put them in model, and new databases will get them when you create a new database.
However, there is nothing that will make new tables simply pop up in the child databases after they are added to the parent.
It would be possible to create something that looks through the databases and scripts them from a template database, and there are also commercial tools which can help discover differences between databases. You could also have a DDL trigger in the "master" database which goes out and does this when you create a new table.
If you kept a nice SPACES template, you could script it out (without data) and create the new database from that script, so there would be no need to TRUNCATE. You can script it out from SQL or with an external tool.
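For example, a rough SMO-based sketch of that "script the template and create an empty child" approach could look like this; the instance and database names are placeholders and the SMO calls should be verified against your SMO version:

```csharp
using System.Collections.Specialized;
using Microsoft.SqlServer.Management.Smo;

// Sketch using SMO (SQL Server Management Objects) to script a SPACES template
// without data and build a new child database from it, instead of
// backup/restore + TRUNCATE. Instance and database names are placeholders.
var server = new Server("localhost");
Database template = server.Databases["SPACES_Template"];

// Script every object from the template, but none of the data.
var transfer = new Transfer(template)
{
    CopyAllObjects = true,
    CopyData = false
};
StringCollection schemaScript = transfer.ScriptTransfer();

// Create the empty child database and apply the schema script to it.
var child = new Database(server, "SPACES_NewChild");
child.Create();
child.ExecuteNonQuery(schemaScript);
```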
A little bit of trivia here: the mssqlsystemresource database works the way you describe: it is defined once and 'appears' in every database as the special sys schema. Unfortunately, the special 'magic' needed to make this work is not available to user databases. You'll have to use deployment techniques to keep your schemas in sync, that is, apply the changes to every database as the other answers have already suggested.
In theory, you could put a trigger on your UNIVERSE.sysobjects table (assuming SQL Server), and then you could enumerate master.dbo.sysdatabases to find all the child databases. If you have a special table that indicates it's a child database, you can reference child.dbo.sysobjects to find it.
Make no mistake, it would be difficult to implement. But it's one way you could do it.
