Filtering a complex status column - sql-server

I work with a GraphQL (HotChocolate) API that uses EF Core to fetch data from my MS SQL Server.
A few entities have status columns that are currently computed on demand with a complex query, which EF Core can translate to SQL.
The status is generated by a query that joins 3-5 tables.
query {
  person(where: { status: { eq: Status1 } }) {
    # ...selected fields
  }
}
Now I want to make this column filterable through GraphQL, which is currently not possible because the column is generated in the entity's resolver.
Is there a good way to filter a computed column with GraphQL, or a good way to store such complex computed values in the database and keep them updated at short intervals?
I thought about storing the value in the database and creating a cron job/SQL Agent job that updates it every hour or so, but that solution just doesn't feel right.
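One alternative to a scheduled job, assuming the status can be expressed with inner joins and aggregates, is a SQL Server indexed view: once it has a unique clustered index, the engine materializes it and keeps it up to date on every write, and EF Core can map it like a read-only entity so HotChocolate's filtering still translates to SQL. This is only a sketch with made-up table and column names, and indexed views carry restrictions (SCHEMABINDING, inner joins only, COUNT_BIG(*) with GROUP BY):

-- Hypothetical: derive a per-person status value from related rows.
CREATE VIEW dbo.PersonStatus
WITH SCHEMABINDING
AS
SELECT
    p.PersonId,
    COUNT_BIG(*) AS OpenItemCount   -- required because the view uses GROUP BY
FROM dbo.Person AS p
JOIN dbo.WorkItem AS w ON w.PersonId = p.PersonId
WHERE w.State = N'Open'
GROUP BY p.PersonId;
GO

-- The unique clustered index is what makes SQL Server persist and maintain the view.
CREATE UNIQUE CLUSTERED INDEX IX_PersonStatus ON dbo.PersonStatus (PersonId);

If the real status logic can't meet those restrictions, a snapshot table refreshed by a SQL Agent job (the approach described above) remains the usual fallback.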

Related

Solr with SQL Server

I am working on a POC of using Solr with SQL Server.
I have a very complex data model in SQL Server that requires a lot of joins and scalar functions to strip markup, among other things.
This is turning out to be a performance bottleneck. To address this issue, we are considering NoSQL (MongoDB) or Solr alongside SQL Server as our options.
With MongoDB, we would attach replication events to all CRUD operations so that the data is carried over to MongoDB after each successful insert, update, or delete on SQL Server. When we have to perform a search, we search the Mongo collections directly.
This sounds very appealing, as the search joins 32 tables, which could be reduced to 2 collections in MongoDB.
On the other hand, we are also exploring Solr with SQL Server using the DataImportHandler.
My concern is that, based on this article http://www.codewrecks.com/blog/index.php/2013/04/29/loading-data-from-sql-server-to-solr-with-a-data-import-handler/, I have to do an import for each entity.
How does a joining search work with Solr? Should I import each table from SQL Server to Solr and then write join logic against the Solr APIs?
Can I import multiple entities at once? Should I create a (denormalized) view for the expected result set and import that view into Solr? (A sketch of such a view follows this question.)
Will these imports have to be done at regular intervals? If there is new data after an import, does the Solr API reflect that change, or do I have to run another import before searching against the Solr API?
Finally, can I compare Solr with MongoDB? If anyone has done this kind of evaluation, please share your thoughts.
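On the denormalized-view question above: a common pattern with the DataImportHandler is to flatten the joins into a single view and point one DIH entity at it, so each row becomes one Solr document. A minimal sketch with made-up table and column names:

-- Illustrative only: collapse the joins into one row per Solr document.
CREATE VIEW dbo.ProductSearchView AS
SELECT
    p.ProductId,
    p.Name,
    c.CategoryName,
    s.SupplierName
FROM dbo.Product  AS p
JOIN dbo.Category AS c ON c.CategoryId = p.CategoryId
JOIN dbo.Supplier AS s ON s.SupplierId = p.SupplierId;

The DIH query is then a plain SELECT against this view; new rows still only become searchable after a full or delta import runs.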

Database management: intervention from an administrator

Given the following question:
What are two possible classes of intervention that the administrator of a database system
could take to improve the performance of a query?
What is a possible answer?
First, consider that DB optimizations depend quite a bit on the DB type/vendor, but here's a short list:
Fetch the "explain plan" of the query (if your DB lets you do this): this is useful for understanding what the DB does (using indexes, for instance) to retrieve the query results.
If your query filters on attributes with small cardinality, create a bitmap index on them to speed up retrieval of the table rows (see the sketch after this list).
If your query uses joins and your DB supports it, use a join index.
If your query retrieves results from a portion of a table identified by a single attribute, you could use sharding.
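A minimal sketch of the first two points, using Oracle-flavored syntax as an assumption (other engines use EXPLAIN or SET SHOWPLAN, and not all of them offer bitmap indexes); the table and column names are illustrative:

-- Ask the optimizer how it would execute the query, then display the plan.
EXPLAIN PLAN FOR
SELECT * FROM orders WHERE status = 'SHIPPED';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Bitmap index on a low-cardinality attribute such as a status flag.
CREATE BITMAP INDEX idx_orders_status ON orders (status);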

Hibernate - automatic created_date and updated_date from the database

We need to add two columns to each table - created_date and updated_date.
The type of database can be any of the following: Oracle, MySQL, PostgreSQL, CouchDB.
The two columns should store the date as well as the time.
created_date should be filled only when a row is created.
updated_date should be filled every time a row is updated (during creation time too).
The solution is for a cloud environment where many Hibernate JVMs will be running.
Since there is no single JVM, their clocks may occasionally drift out of sync.
So we do NOT want the solution to write JVM time into these two columns.
Is there a DB-agnostic way to do this in Hibernate?
We would like to put the responsibility for creating/updating these dates on the DB itself.
Triggers would be the last option we want to try, as they would be cumbersome to generate for each table.
The ideal solution would be some kind of JPA/Hibernate annotation that tells the ORM to take the date from the DB during insert/update.
Found the solution, and it's very well explained here: Custom SQL for Columns - PrismoSkills.
This solution is DB-agnostic and Hibernate-friendly.
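The linked article covers the Hibernate-side mapping; purely on the database side, letting the DB own both columns looks roughly like the MySQL-flavored sketch below (an assumption - Oracle and PostgreSQL would need a default plus a trigger, or an ORM-managed expression, for updated_date), with Hibernate mapping both columns as non-insertable/non-updatable so it never writes JVM time into them:

-- MySQL-flavored sketch; the table name is illustrative and other engines differ.
CREATE TABLE example_entity (
    id           BIGINT PRIMARY KEY,
    -- set by the database when the row is inserted
    created_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    -- set on insert and refreshed by the database on every update
    updated_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);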

timestamp vs date column sql azure for data sync and optimistic concurrency

We are using SQL Azure with Entity Framework. Most of our tables have a date column where we store when the record was edited and by whom. Is there a benefit to turning those columns into timestamp columns, for the following reasons:
Does timestamp help if we want to synchronize this DB with another DB using SQL Data Sync, i.e. could a single timestamp column serve both our logging and the data sync, especially if Data Sync insists on all tables having a timestamp column?
Will having this column help with optimistic concurrency (via Entity Framework)?
To answer your first question: no. The SQL Data Sync service creates its own change-tracking mechanism, and you can't configure it to reuse your timestamp column.
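For the concurrency part, note that a SQL Server timestamp column is really rowversion - an opaque, automatically incremented counter, not a date - so it would sit alongside the edited-on/edited-by columns rather than replace them. A minimal sketch with an illustrative table name:

-- rowversion (the current name for the deprecated "timestamp" type) changes automatically on every update.
ALTER TABLE dbo.Orders ADD RowVer rowversion;

Entity Framework can then treat the mapped property as a row version ([Timestamp] in data annotations, or IsRowVersion() in the fluent API), which makes it a concurrency token: updates include it in the WHERE clause and conflicts surface as concurrency exceptions.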

Are there any in-memory databases that support computed columns?

We have a SQL Server 2005/2008 database that has a table with a computed column. We're using the computed column as a discriminator in NHibernate, so having it in the database is proving to be very useful.
To gain the benefit of faster integration tests, I'd like to be able to run them against an in-memory database such as SQLite or SQL CE, but I don't think either of those supports computed columns.
Are there any other solutions to my problem? I have complete access to the database and can modify it if there's a better solution available. I've seen this post that suggests using a view instead of a computed column; is this the best alternative?
What I did was add the computed column to the DataTable when loading the table from SQL CE. I stored the definition of the computed DataColumn in a "configuration" table in the database. I was able to do complex calculations that depended on a "chain" of tables, where each table performed a simpler piece of a more complex function. (The last table in the chain contained the results.) I used SQL CE because one of the five tables contained 15 million rows - too much data for ADO.NET's in-memory data sets. (I had a requirement to run the calculations locally on the client before posting to the server.)
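For reference, the view alternative mentioned in the question looks roughly like the sketch below on engines that support views (SQLite does, SQL CE does not): the derived discriminator is computed in the view and NHibernate is mapped to the view instead of the base table. The names and the CASE expression are illustrative.

-- Illustrative: expose the discriminator as a derived column in a view instead of a computed column.
CREATE VIEW PersonWithKind AS
SELECT
    p.*,
    CASE WHEN p.CompanyId IS NOT NULL THEN 'Employee' ELSE 'Contact' END AS PersonKind
FROM Person AS p;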
