How do I verify the Snowflake Time Travel setting?

I do not see the Time Travel setting in the SHOW PARAMETERS results. I have Enterprise Edition, which allows 90-day Time Travel. Does that mean it is automatically set to 90 days?

The default value is 1 (even for Enterprise Edition). As you know, you can set different retention values for databases, schemas and tables. To see the value of the parameter for your account, please use the following command:
SHOW PARAMETERS like '%DATA_RETENTION_TIME_IN_DAYS%' in account;
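If the account turns out to be at the default of 1 and you want a longer retention, something along these lines should work (a sketch; MYDB and MYTABLE are placeholder names, and ALTER ACCOUNT needs the ACCOUNTADMIN role):
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 90;
ALTER DATABASE MYDB SET DATA_RETENTION_TIME_IN_DAYS = 30;
ALTER TABLE MYDB.PUBLIC.MYTABLE SET DATA_RETENTION_TIME_IN_DAYS = 14;
-- Check the effective value at a given level, e.g. for the database:
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN DATABASE MYDB;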

Related

Enabling Change Logs in SQL Server 2016

I am very new to SQL Server databases. We have installed SQL Server 2016. We would like to enable change logs with a maximum retention of 14 days. The purpose is to track any changes made to the data.
Can someone please help me out with the steps to achieve this?
Welcome, Ayushi.
Based on your question, it appears that you want to have an easy ability to look at what has changed in the last 14 days. If this is the case, then you are probably looking at enabling Change Data Capture (CDC).
(Note: To ensure that you actually want CDC, and not just Change Tracking, you should read this Microsoft article)
Unfortunately, this is more than a simple "flip this switch to track changes" answer. To implement this, you can start with this article on implementing the different types of change tracking from Microsoft, and then go to the subarticles on actual implementation, space considerations, etc.
Note that you may only want to track changes on a few tables, and not the entire database, for space and performance reasons.
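As a rough sketch of what the CDC setup looks like in T-SQL (run in the target database; dbo.Orders is just a placeholder table name, and this omits the reading side entirely):
-- Enable CDC at the database level (requires sysadmin)
EXEC sys.sp_cdc_enable_db;
-- Enable CDC for one table; repeat only for the tables you actually need
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;
-- Keep change data for 14 days; retention is given in minutes (14 * 24 * 60 = 20160)
EXEC sys.sp_cdc_change_job @job_type = N'cleanup', @retention = 20160;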

Can the job history be extended?

For a job which runs every 15 minutes, I see only the last 30 entries.
I looked in SSMS, and I also tried:
select * from msdb.dbo.sysjobhistory;
I don't see a rule here. Some jobs are tracked for one month, others are tracked for 12 hours only.
Anyway, can the job history be extended so that I can see all jobs for e.g. one full month back?
In SSMS, right-click SQL Server Agent and select Properties; on the History page you can manage the retention of the job history logs.
You can also see this tip for more details.
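If you'd rather script it than click through the GUI, the same dialog settings can be changed with the (largely undocumented) msdb procedure below; the numbers are just example values:
-- Raise the overall job history cap and the per-job cap
EXEC msdb.dbo.sp_set_sqlagent_properties
    @jobhistory_max_rows = 50000,
    @jobhistory_max_rows_per_job = 5000;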

Azure SQL Query Editor vs Management Studio

I'm pretty new to Azure and cloud computing in general and would like to ask for your help in figuring out an issue.
The issue was first seen when a web page of ours timed out because the SQL timeout is set to 30 seconds.
The first thing I did was connect to the production database using SQL Server Management Studio 2014 (connected to the Azure prod DB).
I ran the stored procedure used by the slow page, but it returned in less than a second, which left me confused about what could be causing the issue.
By accident I also ran the same query in the Azure SQL query editor and was shocked that it took 29 seconds.
My main question is why there is a difference between running the query in the Azure SQL query editor vs. Management Studio. It is the exact same database.
DTU usage is at 98%, and I'm thinking there is a performance issue with the stored proc, but I first want to know why the query editor runs the SP slower than Management Studio.
The current Azure DB has 50 DTUs.
Two guesses (posting query plans will help get you an answer for situations like this):
SQL Server has various session-level settings. For example, there is one to determine if you should use ansi_nulls behavior (vs. the prior setting from very old versions of SQL Server). There are others for how identifiers are quoted and similar. Due to legacy reasons, some of the drivers have different default settings. These different settings can impact which query plans get chosen, in the limit. While they won't always impact performance, there is a chance that you get a scan instead of a seek on some query of interest to you.
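A quick way to check this is to compare the session options each client actually ends up with; run something like the following in both SSMS and the query editor and diff the output (ARITHABORT is a frequent offender, though that is only a guess for your case):
-- Dump the current session options
DBCC USEROPTIONS;
-- Or check individual settings
SELECT SESSIONPROPERTY('ANSI_NULLS')        AS ansi_nulls,
       SESSIONPROPERTY('QUOTED_IDENTIFIER') AS quoted_identifier,
       SESSIONPROPERTY('ARITHABORT')        AS arithabort;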
The other main possible path for explaining this kind of issue is that you have a parameter sniffing difference. SQL's optimizer will peek into the parameter values used to pick a better plan (hoping that the value will represent the average use case for future parameter values). Oracle calls this bind peeking - SQL calls it parameter sniffing. Here's the post I did on this some time ago that goes through some examples:
https://blogs.msdn.microsoft.com/queryoptteam/2006/03/31/i-smell-a-parameter/
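If sniffing does turn out to be the culprit, a quick experiment (sketched here with a made-up procedure and table) is to have the problem statement compile against its actual parameter values each time and see whether the slow client catches up:
-- Hypothetical procedure showing the usual mitigations
CREATE OR ALTER PROCEDURE dbo.GetOrders @CustomerId INT
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);   -- plan is built for the actual value on every execution
    -- Alternative: OPTION (OPTIMIZE FOR UNKNOWN) to plan for the average case
END;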
I recommend you do your experiments and then look at the query store to see if there are different queries or different plans being picked. You can learn about the query store and the SSMS UI here:
https://learn.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
For this specific case, please note that the query store exposes those different session-level settings using "context settings". Each unique combination of context settings will show up as a different context settings id, and this will inform how query texts are interpreted. In query store parlance, the same query text can be interpreted different ways under different context settings, so two different context settings for the same query text would imply two semantically different queries.
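To see that in practice, a query along these lines against the query store views will show whether your two clients are producing different context settings ids or different plans for the same text (assuming Query Store is enabled; the LIKE filter is a placeholder):
-- Compare executions per query text, context settings, and plan
SELECT qt.query_sql_text,
       q.context_settings_id,
       p.plan_id,
       SUM(rs.count_executions)      AS executions,
       AVG(rs.avg_duration) / 1000.0 AS avg_duration_ms
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
WHERE qt.query_sql_text LIKE '%YourSlowProc%'
GROUP BY qt.query_sql_text, q.context_settings_id, p.plan_id
ORDER BY avg_duration_ms DESC;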
Hope that helps - best of luck on your perf problem

Database Snapshot SQL Server 2000

We have a large database that receives records concerning several hundred thousand persons per year. For a multitude of reasons I won't get into, when information is entered into the system for a specific person, it is often the case that the individual entering the data will be unable to verify whether or not this person is already in the database. Due to legal requirements, we have to strive towards each individual in our database having a unique identifier (and no individual should have two or more). Because of data collection issues, it'll often be the case that one individual will be assigned many different unique identifiers.
We have various automated and manual processes that mostly clean up the database on a set schedule and merge unique identifiers for persons who have had multiple assigned to them.
Where we're having problems is that we are also legally required to generate reports at year end. We have a set of year-end reports we always generate, but it is also the case that every year several dozen ad hoc reports will be requested by decision makers. Where things get troublesome is that, because of the continuous merging of unique identifiers, our data is not static. Any report generated at year end is based on the data as it existed on the last day of the year; three weeks later, if a decision maker requests a report, whatever we give them can (and will) often conflict directly with our legally required year-end reports. Sometimes we'll merge up to 30,000 identifiers in a month, which can greatly change the results of any query.
It is understood/accepted that our database is not static, but we are being asked to come up with a method for generating ad hoc reports based off of a static snapshot of the database. So if a report is requested on 1/25 it will be based off the exact same dataset as our year end reports.
After doing some research I'm familiar with database snapshots, but we have a SQL Server 2000 database and we have little ability to get that changed in the short-to-medium term and database snapshots are a new feature in the 2005 edition. So my question would be what is the best way to create a queryable snapshot of a database in SQL Server 2000?
Can you simply take a backup of the database on 12/31 and restore it under a different name?
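Roughly, it would look like this (logical file names are made up; check yours with sp_helpfile first):
-- Take the year-end backup
BACKUP DATABASE MyDb
TO DISK = 'D:\Backups\MyDb_YearEnd.bak';
-- Restore it side by side under a different name for reporting
RESTORE DATABASE MyDb_YearEnd
FROM DISK = 'D:\Backups\MyDb_YearEnd.bak'
WITH MOVE 'MyDb_Data' TO 'D:\Data\MyDb_YearEnd.mdf',
     MOVE 'MyDb_Log'  TO 'D:\Data\MyDb_YearEnd_log.ldf';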
You either need to take a snapshot and work off it (to another db or external file-based system, like Access or Excel) or, if there's enough date information stored, work from your live copy using the date value to distinguish previously reported data from new.
You're better off working from a snapshot because the date approach won't always work. Ideally, you'd export your live database at the end of the year somewhere (anywhere, really) else.
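If you do try the date route, it only works when the merge history carries a date you can filter on; the idea (table and column names here are purely hypothetical) is to reproduce the year-end state by ignoring merges applied afterwards:
-- Hypothetical: report only identifiers as they stood at year end
DECLARE @YearEnd DATETIME;
SET @YearEnd = '20081231 23:59:59';  -- whatever your year end is
SELECT p.PersonId, p.UniqueIdentifier
FROM dbo.PersonIdentifiers AS p
WHERE p.MergedDate IS NULL      -- never merged away
   OR p.MergedDate > @YearEnd;  -- or merged only after year end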

What is the best solution for POS application?

I'm currently on a POS project. Users require that this application work both online and offline, which means they need a local database. I decided to use SQL Server replication between each shop and the head office. Each shop needs to install SQL Server Express, and the head office already has SQL Server Enterprise Edition. Replication will run every 30 minutes on a schedule, and I chose Merge Replication because data can change at both the shop and the head office.
While doing a POC, I found this solution does not work properly; sometimes the job errors out and I need to re-initialize it. The solution also takes a very long time, which is obviously unacceptable to the users.
I want to know: are there any solutions better than the one I'm using now?
Update 1:
Constraints of the system are:
Almost all transactions can occur at both the shop and the head office.
Some transactions need to work in real time; that is, after a user saves data at their local shop, that data should also be updated at the head office (if they're currently online).
Users can keep working even if their shop has been disconnected from the head office database.
Our estimate of the data volume is at most 2,000 rows per day.
Windows Server 2003 is the OS of the server at the head office, and Windows XP is the OS of all clients.
Update 2:
Currently there are about 15 clients, but this number will grow at a fairly slow rate.
The data size is about 100 to 200 rows per replication; I think it should be no more than 5 MB.
Clients connect to the server over a 128 kbps leased-line connection.
I'm in a situation where replication takes a very long time (about 55 minutes, while we have only 5 minutes or so), and most of the time I need to re-initialize the job to get it replicating again; if I don't re-initialize it, it can't replicate at all. In my POC, I found that it always takes a very long time to replicate after a re-initialize, and the amount of time doesn't depend on the amount of data. So far, re-initializing is the only workaround I've found.
Given the above, I conclude that replication may not be suitable for my problem, and I think there may be a better solution that can serve what I need in Update 1:
Sounds like you may need to roll your own bi-directional replication engine.
Part of the reason things take so long is that over such a narrow link (128 kbps), the two databases have to be consistent (so they need to check all rows) before replication can start. As you can imagine, this can (and does) take a long time. Even 5 MB is roughly 40 megabits, which takes over five minutes to transfer over this link.
When writing your own engine, decide what needs to be replicated (using timestamps for when items changed), figure out conflict resolution (what happens if the same record changed in both places between replication periods) and more. This is not easy.
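To make that concrete, here is a very rough sketch of the change-detection half only (table and column names are made up; conflict resolution and the actual transport are still entirely on you): each replicated table carries an automatic row version, and each node remembers the high-water mark it last pushed to its partner.
-- One row per replication partner, seeded at setup time
CREATE TABLE dbo.SyncState (
    PartnerName SYSNAME   NOT NULL PRIMARY KEY,
    LastSentRV  BINARY(8) NOT NULL
);
-- Every replicated table gets an auto-incrementing change stamp
ALTER TABLE dbo.Sales ADD RowVer TIMESTAMP;  -- a.k.a. rowversion
-- Pick up everything changed locally since the last sync with the head office
DECLARE @last BINARY(8);
SET @last = ISNULL((SELECT LastSentRV FROM dbo.SyncState
                    WHERE PartnerName = 'HeadOffice'), 0x0);
SELECT SaleId, ShopId, Amount, RowVer
FROM dbo.Sales
WHERE RowVer > @last;
-- After a successful push, advance the high-water mark
UPDATE dbo.SyncState
SET LastSentRV = (SELECT TOP 1 RowVer FROM dbo.Sales ORDER BY RowVer DESC)
WHERE PartnerName = 'HeadOffice';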
My suggestion is to use MS Access locally and keep pushing data to the server at a certain interval. Add an updated column to every table. When a record is added or updated, set the updated column. For deletions you need a separate table where you can put the primary key value and table name. When synchronizing, fetch all local records whose updated field is set, update (modify or insert) them on the central server, and then clear the flag. Delete the corresponding records on the central server using the local deleted table and you are done!
I assume that your central server is only for collecting data.
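A minimal sketch of the bookkeeping that approach needs (names are invented, and it's written as generic SQL; Access/Jet syntax will differ slightly):
-- Dirty flag on every replicated table; set it on insert/update, clear it after sync
ALTER TABLE Sales ADD Updated BIT DEFAULT 0;
-- Deletion log: remember what was deleted and from which table
CREATE TABLE DeletedRows (
    TableName VARCHAR(128) NOT NULL,
    PKValue   VARCHAR(64)  NOT NULL
);
-- During sync: push the flagged rows to the central server, then clear the flags
SELECT * FROM Sales WHERE Updated = 1;
UPDATE Sales SET Updated = 0 WHERE Updated = 1;
-- Replay DeletedRows against the central server, then empty it
DELETE FROM DeletedRows;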
I currently do exactly what you describe using SQL Server Merge Replication configured for Web Synchronization. I have my agents run on a 1-minute schedule and have had success.
What kind of error messages are you seeing?
