I have an Excel spreadsheet with a name and a description column, and I want to create those records in Salesforce. The sheet has 500 rows.
Through my app I send this to Salesforce, but it seems to always stop at about 98%.
Is that because some maximum-concurrent-requests limit has been reached?
We'd like to pick up a file in this structure: LogicApp->APIM->SFTP. We use APIM because we want to define a static IP for the SFTP server to whitelist. I understand how we can restrict/monitor calls to the LogicApp via APIM, but I need help in the other direction.
Thanks
You can restrict and monitor the call rate to a specified number of calls per time period using the rate-limit policy, which prevents API usage spikes on a per-subscription basis.
Set the number of calls and the renewal period (in seconds) in the Inbound processing options.
The caller will receive a 429 Too Many Requests response status code if the call rate is exceeded.
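If you would rather edit the policy XML directly instead of using the portal form, the same setting looks roughly like this (the calls and renewal-period values below are placeholders):

<policies>
    <inbound>
        <base />
        <!-- Reject further calls with 429 once 500 calls have been made
             within a 60-second window, tracked per subscription. -->
        <rate-limit calls="500" renewal-period="60" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>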
I'm running a parallel job to copy data from Heroku into Google Cloud Storage and eventually into BigQuery. Right now I split the job of querying IDs in the range [61500000, 62000000) into, say, 40 taskqueue tasks, and each task handler is responsible for a subrange, say [61500000, 61512500). Inside each taskqueue task handler, I spawn 3 goroutines to query our Heroku API in parallel, plus an additional goroutine doing an insert to Google Cloud Storage. The 3 HTTP API input goroutines pump data to the GCS insert goroutine through io.Pipe().
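To make the fan-in concrete, here is a minimal, self-contained sketch of that io.Pipe() pattern; fetchShard and the record contents are placeholders rather than the real handler code, and in the real job the read end of the pipe would feed the GCS insert call instead of stdout:

package main

import (
    "fmt"
    "io"
    "os"
    "sync"
)

// fetchShard stands in for the urlfetch-based Heroku query in the real
// handler; here it just fabricates a few JSON lines.
func fetchShard(shard int) [][]byte {
    var recs [][]byte
    for i := 0; i < 3; i++ {
        recs = append(recs, []byte(fmt.Sprintf("{\"shard\":%d,\"row\":%d}\n", shard, i)))
    }
    return recs
}

func main() {
    pr, pw := io.Pipe()

    // Three producer goroutines share the same pipe writer; io.Pipe
    // serializes concurrent Write calls, so each record stays intact.
    var wg sync.WaitGroup
    for s := 0; s < 3; s++ {
        wg.Add(1)
        go func(shard int) {
            defer wg.Done()
            for _, rec := range fetchShard(shard) {
                if _, err := pw.Write(rec); err != nil {
                    return // reader closed early; stop producing
                }
            }
        }(s)
    }

    // Close the write end once all producers are done so the reader sees EOF.
    go func() {
        wg.Wait()
        pw.Close()
    }()

    // In the real job the read end is handed to the GCS insert
    // (the storage.ObjectsInsertCall mentioned below); here we just
    // drain the pipe to stdout.
    if _, err := io.Copy(os.Stdout, pr); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}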
However, I can't get this to work except for toy workloads. Virtually every time, some shards fail with the error:
"Post https://www.googleapis.com/upload/storage/v1beta2/b/ethereal-fort-637.appspot.com/o?alt=json&uploadType=multipart: Over quota: "
returned from the storage.ObjectsInsertCall.Do().
I checked all the places where we might be hitting a quota for billed apps:
* urlfetch total limits developers.google.com/appengine/docs/quotas#UrlFetch
* instance memory developers.google.com/appengine/docs/go/modules/#Go_Instance_scaling_and_class
but still couldn't find the cause.
Below I explain why I ruled out the above possibilities:
urlfetch total limits
urlfetch is used in the 3 goroutines to query our API server for JSON data. These 3 goroutines then process the data and send it to the GCS goroutine through io.Pipe(). The code looks something like this:
cl := urlfetch.Client(c) // c is the request's appengine.Context
resp, err := cl.Get("pic-collage.com/...")
if err != nil {
    // Log explicitly when the failure is a quota error.
    if appengine.IsOverQuota(err) {
        c.Errorf("collageJSONByID over quota: %v", err)
    }
    return err
}
// resp is then decoded as JSON (elided).
However, while we see numerous "POST www.googleapis.com/upload/storage/v1beta2...: Over quota: " errors, we never see the "collageJSONByID ..." log lines tied to the urlfetch calls to our Heroku server.
instance memory
We are using the B1 instance class for our jobs, which has 128 MB of RAM. Throughout our runs, the App Engine console shows memory usage consistently well below 30 MB for every instance.
I also applied the service-account caching fix described in "'Over quota' when using GCS json-api from App Engine", but the problem persists.
Is it possible to get more information about the specific App Engine quota we are exceeding? Or are there other, undocumented quotas for Google Cloud Storage?
The "Over quota:" message can happen when you've reached your daily billing limit. If you've enabled billing for your application, make sure the daily budget is high enough to cover your usage.
I have a free Java application running on GAE that needs to send 3 emails per day. It used to send 2 per day and that worked fine, but when I increased it to 3 it started throwing an OverQuotaException. All 3 calls to the Mail API are executed in the same method at 00:00, but I understand that you can send up to 8 emails in the same minute within the free quota.
This is the exception:
com.google.apphosting.api.ApiProxy$OverQuotaException: The API call mail.Send() required more quota than is available.
What could be the reason behind this?
Are you sending the emails to multiple recipients? If so, each recipient is counted as a separate email; for example, 3 messages sent to 3 recipients each count as 9 recipients, which already exceeds the 8-per-minute free quota you mention.
Or you might be hitting the 340 KB/minute limit if your emails are long.
If your emails have several attachments, you might also be hitting the 8 attachments/minute or 10 MB attachments/minute limit.
I have the following requirement:
A large text file of around 10 MB to 25 MB (with 50,000 to 100,000 lines of data) is uploaded to the web application. I have to validate the file line by line, write the output to another location, and then display a message to the user.
The app server is WebLogic, and it is accessed through a web server via the Apache Bridge. The Apache Bridge times out pretty quickly during the upload + processing activity. Is there any way to solve this without changing the Apache Bridge timeout?
What is the best possible solution? Below are my current thoughts.
Solution 1: Upload the file and return to the page immediately. Then trigger an Ajax request to run the validation in a separate thread and check its status through further Ajax requests.
Solution 2: Use the SC_PARTIAL_CONTENT (206) HTTP status code to keep the connection alive.
I'm attempting to do some stress testing on my GAE application to see how its performance holds up with a large number of simultaneous users. I tried having 100 threads each send an HTTPS request within 1 second, but half of them failed with a 503 status code and the following message:
"Error: Connection not allowed: reached maximum number of connections."
This is a paid app, so I tried upgrading the instance class and setting up some idle instances, but it doesn't seem to make any difference.
Is there a limit on the number of simultaneous connections? Or is this because all the requests are generated from the same host?
Thanks
EDIT: Response to Kyle: I'm using JMeter, and sending 100 simultaneous requests to google.com doesn't cause any issues.
Response to Nick: I'm not expecting individual clients to send lots of simultaneous requests; I was trying to simulate 100 users sending 1 request each.
Unbeknownst to me, a colleague had added a custom throttling filter to our application :) I removed it from web.xml and that solved the problem.
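For anyone checking their own deployment for the same thing, a servlet filter like that is typically registered in web.xml roughly as follows (the filter and class names here are made up for illustration):

<filter>
    <filter-name>throttlingFilter</filter-name>
    <filter-class>com.example.ThrottlingFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>throttlingFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>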