I have created some SSRS subscriptions, which generate jobs named with a unique ID (GUID) in msdb.dbo.sysjobs.
But now I want to rename these jobs to proper names that will be easier to maintain.
I tried renaming the job name and the job's steps, and that works properly.
The problem is that the auto-created job (with the GUID name) gets created again for every subscription.
As a result, two jobs fire for the same report: one from the properly named job and one from the GUID-named job.
You should not do this. Maintain all subscriptions through Report Manager, or SharePoint if your reports are hosted on SP. Sure, you can change the name of the job, but the minute you change the subscription, the job will be replaced.
If you want to be able to run subscriptions by hand, try to make sure that the schedules vary enough from one another that you can identify them by when they run.
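If you just need to tell which GUID-named job belongs to which report and subscription (without renaming anything), a read-only lookup along these lines can help. This is only a sketch, and it assumes the default ReportServer database name and the standard subscription tables:

    -- Map the GUID-named Agent jobs back to their report and subscription.
    -- Assumes the report server database is named ReportServer.
    SELECT
        j.name        AS JobName,                 -- the GUID you see in msdb
        c.[Path]      AS ReportPath,
        c.Name        AS ReportName,
        s.Description AS SubscriptionDescription
    FROM msdb.dbo.sysjobs AS j
    JOIN ReportServer.dbo.ReportSchedule AS rs
        ON j.name = CAST(rs.ScheduleID AS nvarchar(36))
    JOIN ReportServer.dbo.Subscriptions AS s
        ON rs.SubscriptionID = s.SubscriptionID
    JOIN ReportServer.dbo.[Catalog] AS c
        ON s.Report_OID = c.ItemID;

That way you can leave the GUID names alone and still see at a glance which job fires which report.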
Messing with these jobs will only cause you pain the likes of which you do not want to experience. Just come to terms with it. Life is better that way.
I am not sure whether there is a possibility in Snowflake to create separate web links for the QA and Dev regions.
Right now we have one common link to access Snowflake in our company, with the QA and Dev databases built inside it. I was just wondering if there is an option to create separate web links: one link for QA and one link for Dev.
You can have a "secondary" account set up on a new URL that is part of the same bill, but it really is "another" account.
So the question becomes: what value does this add?
With different URLs you can reuse the same SQL verbatim and not need to alter it per "region". You can also reuse the same user accounts. If you DDoS the endpoint (which used to happen with 100+ connections), you also lose access to the admin control surface, so you cannot make the instance bigger to handle "the increase in load" (this might have changed over the years; we last had this problem in 2017).
Reusing the same account but having prod-x/dev-x/qa-x users/databases/roles means you have just one instance to administer. You do have to use some region-aware software to run/rewrite your SQL.
We did both at my old job. We started with everything in one account and just handled it, but we did DDoS the endpoint and blocked ourselves from making it bigger until we found the tool that was just starting new sessions and running hard queries. So we got a second account (ignoring the already-existing extra accounts in different world regions) and planned to do all dev from that. But when we spun it up, we created some warehouses, and back then the SQL commands didn't set a default auto-suspend time like the UI did, and some features were missing in that region, so we walked away from that instance for a month or two and then got a bill with ~15K USD of server charges, which was unpleasant.
Anyway, the dev instance never really got used (the default for warehouse creation was changed, though). For our system, having a different account was really wasteful: to have data "always loading" and dashboards always loadable for test (and multi-region at that) means always having one extra-small warehouse running, whereas when QA and DEV were on the same instance, and given that the total data load was so tiny, one instance was more than enough.
Which is to say, more instances lead to a lot of waste. If you like waste and extra overhead, go for it. Many people come from a big-iron perspective, where to avoid the noisy-neighbor problem each thing needs to be its own box, but that is just not an issue here. Just use prefixes, and it's "all separate".
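If you stay on one account, a minimal sketch of the prefix approach might look like this (names, sizes, and grants are purely illustrative):

    -- One account, separated by prefixes: DEV_ and QA_ databases, warehouses, roles.
    CREATE DATABASE IF NOT EXISTS DEV_ANALYTICS;
    CREATE DATABASE IF NOT EXISTS QA_ANALYTICS;

    -- Separate warehouses so dev load cannot starve QA; AUTO_SUSPEND keeps
    -- an idle warehouse from quietly running up the bill.
    CREATE WAREHOUSE IF NOT EXISTS DEV_WH
      WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;
    CREATE WAREHOUSE IF NOT EXISTS QA_WH
      WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;

    -- Roles that only see "their" region's objects.
    CREATE ROLE IF NOT EXISTS DEV_ROLE;
    CREATE ROLE IF NOT EXISTS QA_ROLE;
    GRANT USAGE ON DATABASE DEV_ANALYTICS TO ROLE DEV_ROLE;
    GRANT USAGE ON WAREHOUSE DEV_WH TO ROLE DEV_ROLE;
    GRANT USAGE ON DATABASE QA_ANALYTICS TO ROLE QA_ROLE;
    GRANT USAGE ON WAREHOUSE QA_WH TO ROLE QA_ROLE;

The explicit AUTO_SUSPEND is also the guard against the kind of surprise bill described above.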
I'm creating a BigQuery table where I join and transform data from several other BigQuery tables. It's all written in SQL, the whole query takes about 20 minutes to run, and it consists of several SQL scripts. I'm also creating some intermediate tables before the end table is created.
Now I want to make the above query more robust and schedule it, and I can't decide on the tool. Alternatives I'm thinking about:
Make it into a Dataflow job and schedule it with Cloud Scheduler. This feels like it might be overkill because all the code is in SQL and goes from BQ --> BQ.
Create scheduled queries to load the data. I have no experience with this, but it seems quite nice.
Create a Python script that executes all the SQL using the BQ API, create a cron job, and schedule it to run somewhere in GCP.
Any suggestions on what would be a preferred solution?
If it's encapsulated in a single script (or even multiple), I'd schedule it through BQ. It will handle your query no differently than the other options would, so it doesn't make sense to set up extra services for it.
Are you able to run it as a single query?
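If the intermediate tables only exist to feed the final one, BigQuery's multi-statement scripting can usually fold the whole pipeline into one script, which can then be saved as a single scheduled query. A rough sketch with placeholder project/dataset/table names:

    -- Intermediate results as temp tables, final table built in the same run.
    CREATE TEMP TABLE staged_orders AS
    SELECT customer_id, SUM(amount) AS total_amount
    FROM `my_project.my_dataset.orders`
    GROUP BY customer_id;

    CREATE TEMP TABLE staged_customers AS
    SELECT customer_id, region
    FROM `my_project.my_dataset.customers`;

    CREATE OR REPLACE TABLE `my_project.my_dataset.customer_totals` AS
    SELECT c.customer_id, c.region, o.total_amount
    FROM staged_customers AS c
    JOIN staged_orders AS o USING (customer_id);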
In my experience with GCP, both Cloud Composer and Dataflow jobs would be, as you suggested, overkill. Neither of these products would be serverless, and both would probably imply a higher economic cost because of the instances running in the background.
On the other hand, you can create scheduled queries on a regular basis (daily, weekly, etc.) that are separated by a big enough time window to make sure the queries are carried out in the expected order. That way, the final table is constructed correctly from the intermediate ones.
From my point of view, both executing a Python Script and sending notifications to Pub/Sub triggering a Cloud Function (as apw-ub suggested) are also good options.
All in all, I guess the final decision should depend more on your personal preference. Please feel free to use the Google Cloud Pricing Calculator (1) to get an estimate of how costly each of the options would be.
Disclaimer: I wouldn't be at all surprised to find out this is a duplicate of something, but I have literally been searching for hours. All I can find is DBA information that I don't believe pertains to what I am trying to accomplish.
I am currently developing a small database for a friend's startup. He has jobs that he needs to track that are performed at regular intervals (e.g. semi-annually, annually, quarterly, weekly, etc.).
I am trying to wrap my mind around how I can implement a solution in SQL Server where a customer has 0 to many tasks, and a specific task can have 0 to many customers. Seems simple enough, but I also want to keep a history of each time a task was completed for each customer it is assigned to. This would be a record with a scheduled task and a completed date. Lastly, I need to be able to execute a query to retrieve upcoming scheduled tasks within a given period of time (e.g. all scheduled jobs coming up next month).
I know that there is a ton of software that does stuff like this, so it can't be that uncommon, but I cannot find any information that is getting me anywhere. If there are any resources anyone could recommend, it would be greatly appreciated.
You should just create a table to hold the list of all tasks that need to be done, with columns where you can later record when they have been done.
You can easily then just query it to find what needs to be done.
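As a rough illustration of that idea (table and column names are made up for the example), the task list and the "what's coming up" query could look like:

    -- One row per customer/task assignment, with when it is next due
    -- and when it was last completed.
    CREATE TABLE dbo.CustomerTask (
        CustomerTaskID int IDENTITY(1,1) PRIMARY KEY,
        CustomerID     int          NOT NULL,
        TaskName       varchar(100) NOT NULL,
        Frequency      varchar(20)  NOT NULL,  -- e.g. 'weekly', 'quarterly', 'annually'
        NextDueDate    date         NOT NULL,
        LastCompleted  date         NULL
    );

    -- Everything coming due in the next month.
    SELECT CustomerID, TaskName, NextDueDate
    FROM dbo.CustomerTask
    WHERE NextDueDate BETWEEN CAST(GETDATE() AS date)
                          AND DATEADD(MONTH, 1, CAST(GETDATE() AS date))
    ORDER BY NextDueDate;

If you need the full completion history the question mentions, a separate history table keyed on CustomerTaskID with a CompletedDate column can sit alongside this.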
Then create a stored procedure that does the work. Have the procedure do the right thing for the different periods, i.e. monthly, etc. Have it just work out what's needed and do it.
Create a single SQL Server Agent job that runs that procedure each month (or however often it's needed).
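Creating that Agent job can itself be scripted; a hedged sketch (the procedure, database, and job names are placeholders) would be something like:

    -- One Agent job that runs a hypothetical dbo.ProcessDueTasks procedure
    -- on the first day of every month at 06:00.
    USE msdb;
    GO
    EXEC dbo.sp_add_job         @job_name = N'Process scheduled customer tasks';
    EXEC dbo.sp_add_jobstep     @job_name = N'Process scheduled customer tasks',
                                @step_name = N'Run procedure',
                                @subsystem = N'TSQL',
                                @database_name = N'FriendStartupDb',
                                @command = N'EXEC dbo.ProcessDueTasks;';
    EXEC dbo.sp_add_jobschedule @job_name = N'Process scheduled customer tasks',
                                @name = N'Monthly',
                                @freq_type = 16,            -- monthly
                                @freq_interval = 1,         -- day 1 of the month
                                @freq_recurrence_factor = 1,
                                @active_start_time = 060000;
    EXEC dbo.sp_add_jobserver   @job_name = N'Process scheduled customer tasks';
    GO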
As you do each step, no one's going to mind if you ask for help with each part, i.e. get comments on your table when you first design it, etc. (Or at least I wouldn't mind; some might, but ignore them.)
We are having an issue where we need event relations for people, and we are having problems with a very large group of people that has almost 400 total event relations in the one week we are testing on... When trying to grab this large group's event relations, it takes forever and can time out. However, if you try again right after a timeout, it completes in a couple of seconds and is great. I was thinking this was Salesforce caching the SOQL query/information, which is why it could act very quickly the second time. I tried to sort of trick it into having this query cached and ready by having a batch job run regularly to query every member's event relations, so that when they tried to access our app the timeout issue would stop.
However, this does not even appear to work. Even though the batch is running correctly and querying all these event relations, when you go to the app after a while without using it, it will still time out or take very long the first time, then be very quick after that.
Is there a way to successfully keep this cached so that it will run very quickly when a user goes to see all the event relations of a large group of people? With the developer console we saw that the event relation query was the huge time sink in the code and the real issue. I have been looking into Salesforce's Platform Cache. Would storing this data there provide the solution I am looking for?
You should look into making your query selective by using indexed fields in the WHERE clause, and custom indexes if necessary.
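For example, EventRelation's lookup fields such as EventId are indexed by default, so filtering on a pre-gathered set of Event IDs is usually far more selective than scanning the whole object for a broad window. A sketch (the :weekEventIds bind variable is illustrative and would be built first from a narrow query against Event):

    SELECT Id, EventId, RelationId, Status
    FROM EventRelation
    WHERE EventId IN :weekEventIds

Whether Platform Cache helps on top of that depends on how fresh the data must be; the first fix is making the query itself selective.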
I'm looking for the easiest way to view what users are logging into my database. We have some old user accounts that might not be getting used anymore. Instead of just turning them off and seeing who complains, I thought there might be some way to monitor who logs in and runs some type of query over the next month or so. What would be the easiest way to monitor and track this kind of activity?
Edit:
I would like to do this for all databases on the server.
To see who's connecting, you can use Logon Triggers, which allow you to log access. Running a trace for a month or two to audit login events may simply not work if you fail over, restart SQL, etc.
However, to see what someone is doing after connecting, you'll really have to use Profiler, as Mitch said.
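A minimal logon-trigger sketch (table, trigger, and login names are illustrative; test carefully, because a failing logon trigger can block logins):

    -- Audit table to record who connects and from where.
    CREATE TABLE master.dbo.LoginAudit (
        LoginName sysname       NOT NULL,
        HostName  nvarchar(128) NULL,
        LoginTime datetime2     NOT NULL DEFAULT SYSDATETIME()
    );
    GO
    CREATE TRIGGER trg_LoginAudit
    ON ALL SERVER
    WITH EXECUTE AS 'sa'   -- so ordinary logins can write to the audit table
    FOR LOGON
    AS
    BEGIN
        INSERT INTO master.dbo.LoginAudit (LoginName, HostName)
        VALUES (ORIGINAL_LOGIN(), HOST_NAME());
    END;
    GO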
Run a Profiler trace with the Audit Login event selected, or just select the Standard Trace Template (and perhaps limit the trace size).
See Using SQL Server Profiler
The easiest way to do this would be with a third-party tool that's custom-written to do the work for you. Otherwise you have to fuss with (not SQL Profiler, but) traces, regularly loading the resulting data, and processing it, and for my money, that just is not an "easy" thing to do.
Not much help, but the reason I'm posting is that just because someone (or something) hasn't logged in for a day, a week, or a month does not mean that the account has gone derelict; I would only consider it an indication. I would recommend that once you've identified an account as potentially derelict, you disable it and see what happens. Give that a month, a quarter, or even a year (it depends on your system) before actually deleting it.
(Of course, tracking that information over a month/quarter/year is yet more fuss and bother. Ideally, all accounts get created with deactivation/deletion rules, and their users/owners are informed of the rules under which they get to access the system. This probably won't help you now, but keep it in mind for the next system you design.)