What will happen if more alarms occur than the license limit in ThingsBoard? - licensing

If I have a Cloud Maker license, which has an alarm-creation limit of 200 per month, what is the TB behavior once more than 200 alarms have occurred? Does it mean that TB will stop generating alarms until the next monthly subscription period starts? Thanks in advance.

ThingsBoard will stop generating new alarms.

Related

How to recover a deleted VM after the end of credits?

I had some VMs in Google Cloud that were deleted when my credits ran out. I know that there is a 30-day retention period for storage, but for personal reasons I could not act within that window.
These VMs held my master's project (I am a student), and I really want to retrieve their contents. Thanks in advance.
As specified in the documentation, you must manually export any data you want to keep from your Compute Engine VMs before the trial ends; data and resources remain available for only 30 days after the free trial ends.

Does Application Insights Enterprise with a WPF application treat each user PC as a node?

We are using Application Insights to track some basic telemetry for a WPF application. While developing the app we have just been using the Basic tier, but we would like to make use of the Continuous Export feature, which requires Enterprise.
But according to the pricing page, Enterprise is charged at $15/node/month. Will it treat each user's PC as a node? It is not really clear, as AI is really aimed at web servers.
I am happy to pay for one node and whatever extra data charges are incurred, but unsurprisingly $15 per user machine per month is not affordable.
It is based on the role instance, so basically the machine name. You should just stick with the Basic plan unless you need the OMS Connector or Continuous Export. If you have this deployed to hundreds of machines and need these features, ping @DaleKoetke on Twitter. I think he might even have his e-mail there.
This is a response I got from Microsoft:
There are a few things we have to understand.
For this discussion, assume Node == Client.
Continuous feed to node – we count the number of distinct nodes sending telemetry data each hour. If a node does not send any telemetry during a particular hour, it is not counted. The monthly per-node pricing for Enterprise ($15) assumes a node is sending telemetry every hour of the month, so if there are periods of inactivity for your application during the month, the actual charge will be lower.
So the short answer is yes, we charge per node; however, the charge is proportional to the "activity feed". No activity, no charge, and the Enterprise charge per node declines accordingly.
Calculation:
Say there are 744 hours in a month and a continuous feed for one node priced at $15. So $15 / 744 ≈ $0.020 is charged per hour of continuous feed per node.
You will need to work out the activity feed on your clients' machines to get an idea of the charges.
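As a rough sketch of that proration (this is a hypothetical calculation in Python, not official billing code; the function name is made up):

MONTHLY_RATE = 15.0    # Enterprise price per node per month, in dollars
HOURS_IN_MONTH = 744   # 31-day month, as in the example above

def node_charge(active_hours):
    # A node is counted only for the hours in which it actually sent telemetry
    return MONTHLY_RATE / HOURS_IN_MONTH * active_hours

print(MONTHLY_RATE / HOURS_IN_MONTH)  # ~0.0202 dollars per node-hour
print(node_charge(744))               # active every hour of the month -> 15.0
print(node_charge(40))                # a lightly used client PC -> ~0.81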

The application says the database call took zero ms, the database says it took 3 ms to do its work; who to believe?

There's a time-critical application that handles messages from a trading server, where we get around 10K messages per second... At times the application seems to take a long time doing the inserts into the database... After several days of going back and forth with the dev team about which side was taking the time, our DB team decided to build a simple C# app that resides on a server in the same rack and on the same network as the database server. The database in question is SQL Server 2012 Standard.
The times were taken from ADO.NET this way:
// Note: DateTime.Now advances at the system timer's resolution (~15 ms by default),
// so durations shorter than that can read as zero.
var currT1 = DateTime.Now;
sqlComm.ExecuteNonQuery();   // the timed database call
var loadT = DateTime.Now;    // elapsed time = loadT - currT1
The times from SQL Server were taken from the StartTime and EndTime columns of a server-side trace... The two servers are time-synced, but there's a difference of 10-15 ms...
Now, what's making me want to bang my head against something: it's understandable that the application takes longer than the DB (since it has processing and other work to do as well), but in some cases the DB reports it took 4 ms while the app says it took zero ms!
I definitely think the bug is in the test app... but there's nothing separating the DB call and the two timestamps... The log reads like this: app times (start, end, diff, method) followed by DB times (StartTime, EndTime, diff)
10:46:06.716
10:46:06.716
0:00:00.000
DataAdapter
10:46:06.697
10:46:06.700
0:00:00.003
Is there something else I should provide?
Based on the observations from you helpful lot, we switched to the Stopwatch class... and then hit an even weirder issue. We used the Stopwatch.ElapsedTicks property, thinking that dividing it by 10 would give us microseconds (the Duration column in the server-side trace is in microseconds because it's saved to a file). Still, the time from the application came out lower than from the database... As it turned out, the property to use was Elapsed.Ticks, not ElapsedTicks: Elapsed.Ticks are TimeSpan ticks of 100 ns each, so dividing by 10 gives microseconds, whereas ElapsedTicks counts in raw Stopwatch-frequency units. Dividing Elapsed.Ticks by 10 gave us the microseconds...
So there it is... got both the application and the DB to give us very close to accurate (can't ever be sure :) ) times...
The conclusion I've drawn is to trust neither the .NET DateTime.Now property nor the StartTime and EndTime server trace columns... calculating durations with dedicated high-resolution timers is what's required...
Thanks for the heads up guys...

When is Enforced Rate reduced below queue.xml rate?

Our "Enforced Rate" dropped to 0.10/s even though the queue is defined to be 20/s for a Push Task Queue. We had a 2 hour backlog of tasks built up.
The documentation says:
The enforced rate may be decreased when your application returns a 503 HTTP response code, or if there are no instances able to execute a request for an extended period of time.
There were no 503s (all 200s), yet we saw 0 (zero) instances running for the version processing that queue, which may explain the Enforced Rate drop. The target version of the application processing the queue is not the primary version, and it has no source of requests other than the push task queue workload.
If the Enforced Rate had been dropped due to lack of instances, why did GAE not spin up new instances?
There were no budget limits or rate limits that explain the drop to 0 instances.
I'm answering this by pointing to the bug we're starring: http://code.google.com/p/googleappengine/issues/detail?id=7338

Queuing Emails on App Engine

I need to send out emails at a rate exceeding App Engine's free email quota (8 emails/minute). I'm planning to use a TaskQueue to queue the emails, but I wondered: is there already a library or Python module I could use to automate this? It seems like the kind of problem someone might have run into before.
If it's an option, why not just enable billing? It'll jump the max rate from 8 recipients/minute to 5,100 recipients/minute.
The first 2,000 recipients are free each day; as long as you aren't going over the daily free quotas, my understanding is that it will not cost you anything (and if you need to email more than 2,000 people per day, you're going to have to enable billing anyway).
The deferred library is designed for exactly this sort of thing. Simply use deferred.defer(message.send), and make sure the queue you're using has the appropriate execution rate.
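For example, a minimal sketch of that approach (the sender, recipient, and "emails" queue name are placeholders; the queue's rate would be configured in queue.yaml):

from google.appengine.api import mail
from google.appengine.ext import deferred

message = mail.EmailMessage(sender="app@example.com",   # placeholder sender
                            to="user@example.com",
                            subject="Hello",
                            body="...")
# Enqueue message.send to run later; the "emails" queue's configured
# execution rate throttles delivery so you stay inside the mail quota.
deferred.defer(message.send, _queue="emails")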
It's cheaper to just pay for it for a year than to engineer a workaround.
The easiest way, in my opinion, would be to use a queue such as Amazon SQS and pull 8 records per minute from a cron job running every minute.
Considering each message is pushed into the queue and then taken out, the math works out to an extremely cheap service.
See below: $0.000002 is the rate for 2 requests (add and view).
At 8 requests per minute, 60 minutes in an hour, 24 hours in a day, and 30 days in an average month, you are still under $1:
0.000002 * 8 * 60 * 24 * 30 = $0.6912
This might not be exactly what you were looking for, but it should be a pretty simple solution.
EDIT:
See here for a Python SQS & S3 lib (SQS is all you should be looking for): http://pypi.python.org/pypi/Python-Amazon/0.5
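As a rough sketch of the cron side, shown here with the modern boto3 client rather than the lib linked above (the queue URL and the send_email helper are hypothetical):

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/emails"  # placeholder

def send_pending_batch():
    # Pull up to 8 queued emails, send each one, and delete it once handled
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=8)
    for msg in resp.get("Messages", []):
        send_email(msg["Body"])  # hypothetical helper that does the actual send
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])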
I'm not familiar with any canned solutions to this problem, but it should be very easy to solve. Write the emails to a datastore model with an auto_now_add date field to record the order in which they arrived. A cron job that runs every minute pulls the eight oldest records off, mails them, and deletes them; a rough sketch follows.
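A minimal sketch of that design, assuming the Python runtime with the ndb API (the model name, handler name, and sender address are all placeholders):

from google.appengine.api import mail
from google.appengine.ext import ndb

class QueuedEmail(ndb.Model):
    created = ndb.DateTimeProperty(auto_now_add=True)  # records arrival order
    to = ndb.StringProperty()
    subject = ndb.StringProperty()
    body = ndb.TextProperty()

def cron_send_batch():
    # Runs once a minute: mail the eight oldest queued records, then delete them
    batch = QueuedEmail.query().order(QueuedEmail.created).fetch(8)
    for item in batch:
        mail.send_mail(sender="app@example.com",  # placeholder sender
                       to=item.to, subject=item.subject, body=item.body)
    ndb.delete_multi([item.key for item in batch])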
Certainly, if you can solve this in a reasonably generic manner, you can be the person who solves this problem for everyone with a nice open-source module.
