I'd like to deploy two Logic Apps, one with an HTTP trigger and one with an e-mail trigger.
For budgets, I've only seen the option to get an alert once a limit is reached, and I have a few questions:
Is there any option to shut a Logic App off once I reach a specified limit?
How do you protect your Logic Apps against unexpectedly high bills?
You can write a PowerShell script that stops a resource based on its cost and run it in an Automation runbook. Then you can schedule the runbook to execute periodically, based on your needs.
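A minimal sketch of such a runbook, assuming the Az.Billing and Az.LogicApp modules and a managed identity with the right permissions; the resource names and the cost threshold are placeholders:

    # Runs under the Automation account's managed identity (assumed to have
    # read access to consumption data and write access to the Logic App).
    Connect-AzAccount -Identity

    $resourceGroup = "my-rg"          # hypothetical resource group
    $logicAppName  = "my-logic-app"   # hypothetical Logic App name
    $limit         = 50.0             # threshold in your billing currency

    # Sum this month's consumption for the resource group.
    $start = Get-Date -Day 1
    $cost  = (Get-AzConsumptionUsageDetail -StartDate $start -EndDate (Get-Date) `
                  -ResourceGroup $resourceGroup |
              Measure-Object -Property PretaxCost -Sum).Sum

    if ($cost -ge $limit) {
        # Disabling the Logic App stops its triggers from firing.
        Set-AzLogicApp -ResourceGroupName $resourceGroup -Name $logicAppName `
            -State "Disabled" -Force
    }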
We are using Cloud Tasks to call an "on-prem" API gateway (via HTTP requests). This API gateway (IBM API Connect) sits in front of an on-prem system (Oracle). This back-end system can at times be very slow, >5 s.
We are desperately trying to increase the throughput by adjusting the Cloud Tasks queue settings (like --max-dispatches-per-second etc.):
gcloud tasks queues update queue-1 --max-dispatches-per-second=8 --max-concurrent-dispatches=16
But all we see when we crank up the Cloud Tasks settings is that yellow triangle telling us that we are "enforced" to a lower rate due to "system resources".
My understanding is that the yellow triangle shows up due to "errors" from the API gateway we call. Basically, GCP/Cloud Tasks reacts by itself based on the return codes/errors/timeouts/latency from the API endpoint we are calling, with the result being a very low rate/throughput. Is this understanding correct? Can someone verify?
The GUI also says "or because currently there is no instance available to execute a request". What instance are they talking about? To me that means it is possible that GCP-internal resources come into the picture here and have an effect on the "enforced rate". Or?
Anyway, any help/insight would be appreciated.
Thanks
The error message you are seeing can be prompted by either of the two things you mention: "enforced rates" or "lack of GCP resources at the time of the request".
The "enforced rates" that Cloud Tasks is referring to are the ones mentioned here. As you mention, this is due to the server being overloaded and returning too many errors. When this happens, Cloud Tasks acts by itself and slows down execution until the errors stop.
The "currently there is no instance available to execute a request" message means that GCP does not have the resources to create the request. Remember that Cloud Tasks is a managed service, so requests are dispatched by fully managed compute instances on GCP's side. This is fairly rare, although it does happen from time to time.
To determine which of these two issues you are running into, I would recommend checking your Stackdriver logs: if you see a high number of errors under the Cloud Tasks filter, you are most likely in "enforced rates" territory.
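For instance, a query like the following lists recent Cloud Tasks errors (this assumes you have enabled Stackdriver logging for the queue; adjust the filter to your setup):

    gcloud logging read \
      'resource.type="cloud_tasks_queue" AND severity>=ERROR' \
      --limit=50 --format=json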
Hope you find this useful!
I have a desktop application which should be notified of any table change. So far I have found only two solutions which fit my case: SqlDependency and SQLCLR. (I would like to know if there is something better in the .NET stack.) I have built both structures and made them work. I was only able to compare the duration of a single response from SQL Server to the client.
SqlDependency
Duration: from 100ms to 4 secs
SQLCLR
Duration: from 10ms to 150ms
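For context, the SqlDependency side I measured follows the standard Query Notifications pattern, roughly like this (connection string and table are placeholders):

    using System;
    using System.Data.SqlClient;

    class TaskWatcher
    {
        private readonly string _cs; // placeholder connection string

        public TaskWatcher(string connectionString)
        {
            _cs = connectionString;
            SqlDependency.Start(_cs); // one-time listener setup per connection string
            Subscribe();
        }

        private void Subscribe()
        {
            using (var conn = new SqlConnection(_cs))
            // Query Notifications require an explicit column list and two-part names.
            using (var cmd = new SqlCommand("SELECT TaskId, Status FROM dbo.Tasks", conn))
            {
                var dep = new SqlDependency(cmd);
                dep.OnChange += (s, e) =>
                {
                    Console.WriteLine("Change: " + e.Info);
                    Subscribe(); // notifications are one-shot, so re-subscribe
                };
                conn.Open();
                using (var r = cmd.ExecuteReader()) { } // executing registers the subscription
            }
        }
    }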
I would like this structure to be able to deal with a high rate of notifications*. I have read a few SO and blog posts (e.g. here) and was also warned by a colleague that SqlDependency may go wrong under mass requests. Here, MS offers something which I didn't fully understand, but which may be another solution to my problem.
*: Not all the time, but for a season: 50-200 requests per second on 1-2 servers.
Given the high rate of notifications, and with performance in mind, which of these two should I go with, or is there another option?
Neither SqlDependency (i.e. Query Notifications) nor SQLCLR (i.e. calling a web service via a trigger) is going to work for that volume of traffic (50-200 requests per second). In fact, both options are quite dangerous at those volumes.
The advice given in both linked pages (the one on SoftwareEngineering.StackExchange.com and the TechNet article) describes much better options. The advice in Best way to get push notifications to server from ms sql database (i.e. a custom queue table that is polled every few seconds) is very similar to option #1 of the Planning for Notifications TechNet article (which uses Service Broker to handle the processing of the queue).
I like the queuing idea (fully custom or using Service Broker) best, and I have used fully custom queues on highly transactional systems (easily the volume you are anticipating) with much success. The pros and cons between these two options (as I see them, of course) are:
Service Broker
Pro: Existing (and proven) framework (can scale and tied into Transactions)
Con: not always easy to configure or administer / debug, can't easily aggregate 200 individual events in 1 second into a single message (will still be 1 message per each Trigger event)
Fully custom queue
Pro: can aggregate many simultaneous trigger events into single "message" to client (i.e. polling service picks up whatever changes happened since last polling), can make use of Change Tracking / Change Data Capture as the source of "what changed" so you might not need to build a queue table.
Con: Is only as scalable as you are able to make it (might be as good, or better, than Service Broker, but highly dependent on your skill and experience to achieve this), needs thorough testing of edge cases to make sure the queue processing doesn't miss, or double-count, events.
You might be able to combine Service Broker with Change Tracking / Change Data Capture. If there is an easy enough way to determine the last change processed (as noted in the Change Tracking / Change Data Capture tables), then you can set up a SQL Server Agent job to poll every few seconds, and if you find that new changes have come in, grab all of them into a single message to send to Service Broker. A rough sketch of that polling step follows.
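All object names here (dbo.SyncState, dbo.Tasks, the services, contract, and message type) are hypothetical, and Change Tracking is assumed to be enabled on the table:

    -- Runs from a SQL Server Agent job every few seconds.
    DECLARE @last    bigint = (SELECT LastVersion FROM dbo.SyncState);
    DECLARE @current bigint = CHANGE_TRACKING_CURRENT_VERSION();

    IF @current > @last
    BEGIN
        -- Aggregate everything since the last poll into one XML payload.
        DECLARE @payload xml = (
            SELECT ct.TaskId, ct.SYS_CHANGE_OPERATION
            FROM CHANGETABLE(CHANGES dbo.Tasks, @last) AS ct
            FOR XML PATH('change'), ROOT('changes'), TYPE);

        DECLARE @h uniqueidentifier;
        BEGIN DIALOG CONVERSATION @h
            FROM SERVICE [//Notify/Source]
            TO SERVICE '//Notify/Target'
            ON CONTRACT [//Notify/Contract]
            WITH ENCRYPTION = OFF;

        SEND ON CONVERSATION @h
            MESSAGE TYPE [//Notify/Changes] (@payload);

        UPDATE dbo.SyncState SET LastVersion = @current;
    END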
Some documentation to get you started:
Track Data Changes (covers both Change Tracking and Change Data Capture)
SQL Server Service Broker
Question/Environment
The goal of my web application is to be a handy interface to the database at our company.
I'm using:
Scalatra (as minimal web framework)
Jetty (as servlet container)
SBT (Simple Build Tool)
JDBC (to interface with the database)
One of the requirements is that each user can manage a number of concurrent queries, and that even when he/she logs off, the queries keep running and can be retrieved later on (or their completion status checked if they stopped for any reason).
I suppose the queries will likely have to run in their own separate threads.
I'm not even sure whether this issue is orthogonal to connection pooling (which I'm definitely going to use; BoneCP and C3P0 seem nice).
Summary
In short: I need very fine-grained control over the lifetime of database queries, and they cannot be bound to the servlet lifetime.
What ways are there to fulfill my requirements? I've searched quite a bit on Google and Stack Overflow and haven't found anything that addresses my problem. Is it even possible?
What is missing from your stack is a scheduler, e.g. http://www.quartz-scheduler.org/
A rough explanation:
Your connection pool (eg C3P0) will be bound to the application's lifecycle.
Your servlets will send query requests to the scheduler (these will be associated with the user requesting the query).
The scheduler will execute queries as soon as possible, using connections from the connection pool. It may also do so in a synchronized/serialized order (per user).
The user will be able to see all query requests associated with him, probably with a status (pending, completed with results, etc.).
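A rough Scala sketch of that flow with Quartz; the names are placeholders, and in practice you would persist job status and results to a table so they survive logoffs and restarts:

    import org.quartz.{Job, JobBuilder, JobExecutionContext, TriggerBuilder}
    import org.quartz.impl.StdSchedulerFactory

    // Executed by a Quartz worker thread, independent of any servlet request.
    class QueryJob extends Job {
      override def execute(ctx: JobExecutionContext): Unit = {
        val data = ctx.getMergedJobDataMap
        val user = data.getString("user")
        val sql  = data.getString("sql")
        // Borrow a connection from the pool (e.g. C3P0), run the query,
        // then write the status/result to a table keyed by (user, job id)
        // so it can be retrieved after the user logs back in.
      }
    }

    object QueryScheduler {
      private val scheduler = StdSchedulerFactory.getDefaultScheduler
      scheduler.start()

      // Called from a servlet: queue the query and return immediately.
      def submit(user: String, sql: String): Unit = {
        val job = JobBuilder.newJob(classOf[QueryJob])
          .usingJobData("user", user)
          .usingJobData("sql", sql)
          .build()
        scheduler.scheduleJob(job, TriggerBuilder.newTrigger().startNow().build())
      }
    }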
I have a couple of applications which run long queries against an OLTP database. They have a significant impact on database server load, however.
Is it possible to run them with low priority? I still intend to allow users to make ad-hoc queries, but response time is not critical.
Please advise solutions for Oracle and/or SQL Server.
If you're using 11g, then perhaps the Database Resource Manager will help you out. The resource manager allows you to change consumer groups based on I/O consumption, something that was unavailable in prior releases. If not, the best you can do is lower priority based on CPU use.
Place resource limits on their accounts via profiles. Here is a link:
http://psoug.org/reference/profiles.html
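For example (names and limits are illustrative; CPU_PER_CALL is measured in hundredths of a second, and profile resource limits are only enforced while RESOURCE_LIMIT is TRUE):

    -- Cap per-call CPU and logical reads for ad-hoc users.
    CREATE PROFILE adhoc_limited LIMIT
      CPU_PER_CALL           30000      -- ~5 minutes of CPU per call
      LOGICAL_READS_PER_CALL 1000000;

    ALTER USER reporting_user PROFILE adhoc_limited;

    ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;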
For this you can use Oracle Resource Manager.
Most important here is that Resource Manager needs a way to pick out which sessions to throttle. You can use many criteria to assign a user to a resource consumer group. Often the username is used, but it can also be a few other things like machine, module, etc. See Creating Consumer Group Mapping Rules (http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/dbrm004.htm#CHDEDAIB).
Specifying Automatic Switching by Setting Resource Limits (http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/dbrm004.htm#CHDDCGGG) could be very useful for you, since all users start in the same OLTP group. Some start long-running ad-hoc queries, and you want those sessions to switch to a lower-priority group for the duration of that call. A sketch of such a setup is below.
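All group and plan names here are made up; switch_time is seconds of CPU, and switch_for_call (11g) switches the session back once the long call ends:

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('OLTP_GROUP', 'normal OLTP sessions');
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('LOW_GROUP',  'throttled ad-hoc work');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN('DAYTIME_PLAN', 'OLTP with ad-hoc demotion');

      -- OLTP sessions get most of the CPU; a call burning more than 60 s of
      -- CPU is demoted to LOW_GROUP for the remainder of that call only.
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'DAYTIME_PLAN',
        group_or_subplan => 'OLTP_GROUP',
        comment          => 'normal OLTP work',
        mgmt_p1          => 80,
        switch_group     => 'LOW_GROUP',
        switch_time      => 60,
        switch_for_call  => TRUE);

      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'DAYTIME_PLAN',
        group_or_subplan => 'LOW_GROUP',
        comment          => 'demoted long-running calls',
        mgmt_p2          => 100);

      -- Every plan must also cover OTHER_GROUPS.
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'DAYTIME_PLAN',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'everyone else',
        mgmt_p3          => 100);

      -- Map an application account into the OLTP group by username.
      DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
        DBMS_RESOURCE_MANAGER.ORACLE_USER, 'APP_USER', 'OLTP_GROUP');

      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /
    -- Activate it with: ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DAYTIME_PLAN';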
There could be one little snag: if a throttled session holds locks, those locks will be held longer and might cause problems elsewhere.
I have been building a multi-user database application (in C#/WPF 4.0) that manages tasks for all employees of a company. I now need to add some functionality, such as sending an e-mail reminder to someone when a critical task is due. How should this be done? Obviously I don't want every instance of the program performing this function (each user would get 10+ e-mails).
Should I add the capability to the application as a "mode" and run one copy on the database server in that mode, or would it be better to create a new app altogether to perform "global" tasks? Is there a better way?
You could create a Windows service (or WCF service) that polls the database at regular intervals for any pending tasks and sends mails accordingly.
Some flag would be needed to indicate whether the e-mail has been sent for a particular task.
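A minimal sketch of that polling loop; the table, columns, connection string, and SMTP host are all placeholders, and in a real Windows service the timer would be started in OnStart:

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Net.Mail;
    using System.Timers;

    class ReminderService
    {
        private const string ConnStr = "..."; // placeholder connection string
        private readonly Timer _timer = new Timer(60000); // poll once a minute

        public ReminderService()
        {
            _timer.Elapsed += (s, e) => SendDueReminders();
            _timer.Start();
        }

        private void SendDueReminders()
        {
            var due = new List<Tuple<int, string>>();
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();
                // ReminderSent is the flag that prevents duplicate mails.
                using (var cmd = new SqlCommand(
                    @"SELECT TaskId, AssigneeEmail FROM dbo.Tasks
                      WHERE DueDate <= GETDATE() AND ReminderSent = 0", conn))
                using (var r = cmd.ExecuteReader())
                    while (r.Read())
                        due.Add(Tuple.Create(r.GetInt32(0), r.GetString(1)));

                using (var smtp = new SmtpClient("smtp.example.local"))
                    foreach (var t in due)
                    {
                        smtp.Send("tasks@example.local", t.Item2,
                                  "Task due", "Task " + t.Item1 + " is due.");
                        var upd = new SqlCommand(
                            "UPDATE dbo.Tasks SET ReminderSent = 1 WHERE TaskId = @id",
                            conn);
                        upd.Parameters.AddWithValue("@id", t.Item1);
                        upd.ExecuteNonQuery();
                    }
            }
        }
    }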