In my GAE app I add rows to a Google Spreadsheet.
taskqueue.add(url='/tabletask?u=%s' % (user_id),
              retry_options=taskqueue.TaskRetryOptions(task_retry_limit=0),
              method='GET')
class TableTaskHandler(webapp2.RequestHandler):
    def get(self):
        user_id = self.request.get('u')
        if user_id:
            try:
                tables.add_row(
                    user_id
                )
            except Exception as error_message:
                # any failure is silently swallowed here
                pass
# (imports added for completeness -- assumed from the standard GAE + oauth2client setup)
import logging

import httplib2
from google.appengine.api import memcache
from googleapiclient.discovery import build
from oauth2client.contrib.appengine import AppAssertionCredentials

def get_google_api_service(scope='https://www.googleapis.com/auth/spreadsheets', api='sheets', version='v4'):
    ''' Log in to the Google API with a service account and get the service
    '''
    service = None
    try:
        credentials = AppAssertionCredentials(scope=scope)
        http = credentials.authorize(httplib2.Http(memcache))
        service = build(api, version, http=http)
    except Exception as error_message:
        logging.exception('Failed to get Google API service, exception happened - %s' % error_message)
    return service
def add_row(user_id, user_name, project_id, question, answer, ss_id=SPREADSHEET_ID):
    service = get_google_api_service()
    if service:
        values = [
            [
                user_id, user_name, project_id, question, answer  # 'test1', 'test2'
            ],
            # Additional rows ...
        ]
        body = {
            'values': values
        }
        # https://developers.google.com/sheets/api/guides/values#appending_values
        response = service.spreadsheets().values().append(
            spreadsheetId=ss_id,
            range='A1:E1000',
            valueInputOption='RAW',
            body=body).execute()
I add many tasks with different row values.
As a result I get critical errors 'Exceeded soft private memory limit of 128 MB with 158 MB' after servicing 5 requests in total.
What could be wrong here?
At first glance there's nothing special in your code that might lead to a memory leak, and I don't think anybody can locate it unless they're deeply familiar with the third-party libraries involved and their known bugs. So I'd approach the problem as follows:
1. First, let's find out where exactly the memory is leaking, and whether it's leaking at all. Refer to tracemalloc, memory_profiler, heapy or whatever else you're familiar with; most available profilers are listed in Which Python memory profiler is recommended? (A minimal tracemalloc sketch follows this list.) Expected outcome: you know exactly where the memory is leaking, down to a code line / Python expression.
2. If the problem is in third-party code, try to dig deeper into its code and figure out what's going on there.
3. Depending on the outcome of step 2:
a. Post another SO question like "why does this Python code excerpt lead to a memory leak?" Ideally it should be a standalone snippet that shows the weird behaviour, is free of any third-party libraries and is reproducible locally. An environment specification, at least the Python version, is appreciated.
b. If the problem is in a third-party library and you've located it, open a bug report on GitHub or wherever the target project is hosted.
c. If the problem is clearly in a third-party library and you're unable to find the cause, open a ticket describing the case with the profiler's report attached.
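For step 1, here's a minimal tracemalloc sketch (tracemalloc needs Python 3.4+; on the Python 2.7 GAE standard runtime you'd reach for memory_profiler or heapy instead, but the idea is the same): start tracing, exercise the suspect code, then print the top allocation sites.
import tracemalloc

tracemalloc.start()

# ... exercise the suspect code here, e.g. call add_row() a few times ...

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics('lineno')[:10]:
    # each entry shows a file/line and how much memory it currently holds
    print(stat)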
It seems that you are running instance class B1 or F1, which have a memory limit of 128 MB.
A possible solution would be to use a higher instance class. But please keep in mind that choosing a different instance class will have an impact on your pricing and quotas.
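For example, in app.yaml (just a sketch; only the instance_class line matters here, the rest of your configuration stays as it is):
runtime: python27
instance_class: F2  # F2/B2 double the memory available to the default F1/B1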
I am using the following code to get an access token using the AzureAuth package in R:
library (AzureAuth)
AuthToken <- get_azure_token("120d688d-1518-4cf7-bd38-182f158850b6",tenant="72f988bf-86f1-41af-91ab-2d7cd011db47", app="1950a258-227b-4e31-a9cf-717495945fc2");
However, I don't see any examples of how to use the obtained AuthToken to query data from an API.
Appreciate any help!
Please point out my mistake if I've misunderstood your question.
=======================Update=======================
Yes, I found some documents and followed the sample. I found that if I want to call the Graph API, I just need to install.packages("AzureGraph"), and with this package I can reach my goal. And if I need to use AzureR to do some other operations on Azure, the document above offers an example illustrating how to create a resource group and storage account with AzureRMR, and a registered app with AzureGraph.
===================================================
Getting started with httr
I use this code to make an httr GET request; a POST request is similar, see the document above for more details:
a <- GET("https://graph.microsoft.com/v1.0/me", add_headers(Authorization = "Bearer <accesstoken>"))
I just figured out the syntax. I found it difficult on two counts:
1. Syntax for the POST command. There are a lot of examples for the GET command but not many for POST.
2. Getting to the access_token. However, once I started using RStudio, I was able to inspect the object and find the right field.
Here is the syntax that worked for me:
res <- POST(EnvironmentFqdnUrl,add_headers(Authorization = paste("Bearer", AuthToken$credentials$access_token)), body = upload_file("body.json"), verbose())
print(res)
I successfully installed and ran a couple of circuits on a backend the other day (essex).
Everything was OK and the results came up, but the next day, when I wanted to do more QC, I could not manage to get a provider.
I have checked my account (active), checked the package (up to date), and tried a new file in the project. I also disabled and re-enabled the account without problems, but I keep getting this error.
Code
from qiskit import IBMQ
IBMQ.active_account()
IBMQ.providers()
provider = IBMQ.get_provider(hub='ibm-q', group='open', project='main')
and I get:
>~/my_environment_name/lib/python3.7/site-packages/qiskit/providers/ibmq/ibmqfactory.py in get_provider(self, hub, group, project)
425 raise IBMQProviderError('No provider matches the specified criteria: '
426 'hub = {}, group = {}, project = {}'
--> 427 .format(hub, group, project))
428 if len(providers) > 1:
429 raise IBMQProviderError('More than one provider matches the specified criteria.'
IBMQProviderError: 'No provider matches the specified criteria: hub = ibm-q, group = open, project = main'
I would like to know where I am going wrong; I look forward to continuing to learn how to use the backends efficiently.
Thank you in advance
This means that there is no provider that matches all the criteria you specified, i.e. that hub, group and project. This could be because your account hasn't loaded correctly, so check whether anything is returned from IBMQ.providers(). If nothing is returned, load your account using IBMQ.load_account(). The other possibility is that there are genuinely no backends that meet those criteria, so try running IBMQ.get_provider() without arguments instead.
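For example (a minimal sketch; it assumes you have already run IBMQ.save_account() with your API token at some point):
from qiskit import IBMQ

IBMQ.load_account()        # load the stored credentials
print(IBMQ.providers())    # should list at least one provider

# with no arguments, get_provider() returns the default provider
provider = IBMQ.get_provider()
print(provider.backends())  # backends you can submit jobs to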
Try using your API token to enable your IBMQ account:
from qiskit import IBMQ

provider = IBMQ.enable_account("your-api-key")  # We load our account
provider.backends()  # We retrieve the backends to check their status

for b in provider.backends():
    print(b.status().to_dict())
Create an IBM Quantum account if you don't have one, then use the API token that is available in the dashboard as the enable_account() method argument to resolve this issue.
For More: https://quantum-computing.ibm.com/lab/docs/iql/manage/account/ibmq
https://quantum-computing.ibm.com/
https://www.ibm.com/account/reg/us-en/signup?formid=urx-19776&target=https%3A%2F%2Flogin.ibm.com%2Foidc%2Fendpoint%2Fdefault%2Fauthorize%3FqsId%3D70b061b4-7c64-4545-a504-a8871f2d414f%26client_id%3DN2UwMWNkYmMtZjc3YS00
I have a simple GetStaff function that should retrieve all users from Active Directory. We have over 1,000 users, so the directory searcher is using paging because the default for the AD MaxPageSize is 1000.
Currently the search works 'sometimes' when I build, sending back all 1054 users, and other times it only sends back 1000. If it works once, it works every time; if it fails once, it fails every time. I have put everything in using statements to make sure the objects are destroyed, but it still doesn't always seem to respect the PageSize attribute. By default, if the PageSize attribute is set, the searcher should use a SizeLimit of 0. I have tried leaving the size limit out, setting it to 0, and setting it to 100000, and the unstable result is the same. I have also tried lowering the PageSize to 250 and get the same unstable results.
Currently I am trying to change the LDAP policy on the server to have a MaxPageSize of 10000, and I am still receiving 1000 users even with the search PageSize set to 10000. Not sure what I am missing here, but any help or direction would be appreciated.
public IEnumerable<StaffInfo> GetStaff(string userId)
{
    try
    {
        var userList = new List<StaffInfo>();
        using (var directoryEntry = new DirectoryEntry("LDAP://" + _adPath + _adContainer, _quarcAdminUserName, _quarcAdminPassword))
        {
            using (var de = new DirectorySearcher(directoryEntry)
            {
                Filter = GetDirectorySearcherFilter(LdapFilterOptions.AllUsers),
                PageSize = 1000,
                SizeLimit = 0
            })
            {
                foreach (SearchResult sr in de.FindAll())
                {
                    try
                    {
                        var userObj = sr.GetDirectoryEntry();
                        var staffInfo = new StaffInfo(userObj);
                        userList.Add(staffInfo);
                    }
                    catch (Exception ex)
                    {
                        Log.Error("AD Search result loop Error", ex);
                    }
                }
            }
        }
        return userList;
    }
    catch (Exception ex)
    {
        Log.Error("AD get staff try Error", ex);
        return Enumerable.Empty<StaffInfo>();
    }
}
A friend got back to me with the below response that helped me out, so I thought I would share it and hope it helps anyone else with the same issue.
The first thing I think of is "Are you using the domain name, e.g. foo.com as the _adpath?"
If so, then I have a pretty good idea. A DNS query for foo.com will return a random list of up to 25 DCs in the domain. If the first DC in that random list is unresponsive or firewalled off and you get that DC from DNS, then you will experience the behavior you describe. Since DNS is cached on the local machine, you will see it happen consistently one day and then not the next. That's infuriating behavior. :/
You can verify this with a network trace to see if this is happening.
So how do you work around it? A couple of options.
Query DNS -> create a list of the hosts returned -> try the first one. If it fails, try the next one. If you hit the bottom of the list, fail. If you do this, log each independent failure noisily so the admins don't blame you. (A rough sketch of this approach follows the response below.)
Even better would be to ask the AD administrators for a list of LDAP servers and use that list with the approach described above.
80% of administrators will tell you just to use the domain name. This is good because deploying a new domain will "just work" with no reconfiguration required.
15% of administrators will want to specify a couple of DCs that are network closest to the application. This is good for performance, but bad if they forget about this application when the time comes for them to upgrade their domain.
The other 5% doesn't really matter. :)
The next point I see is that you are using LDAP, not LDAPS. That is fine, but there is a risk that you will use "Basic" binds. With "Basic" binds, Joe Hacker can steal your account credentials using a network sniffer. There are a couple of possible workarounds.
1. There is another DirectoryEntry constructor that will let you specify "Secure" as the auth method.
2. Ask your admins if you can use LDAPS. (More portable, in case you need to talk to an LDAP server other than Active Directory.)
The last piece is regarding PageSize. 1,000 should be fine universally. Don't use any value > 5,000 or you can expect some fidgety behavior, i.e. that is higher than the default limit under Windows 2003, and in Windows 2008 the page size is hard-coded to a limit of 5,000 unless it's been overridden using a rather obscure bit in AD called dsHeuristics. http://support.microsoft.com/kb/2009267
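For what it's worth, the host-probing workaround in option 1 of that response is language-agnostic; here is a rough Python sketch of the idea (the domain name, port and function name are placeholders, and the .NET equivalent would resolve the name with Dns.GetHostAddresses and probe each address the same way):
import logging
import socket

def pick_ldap_host(domain, port=389, timeout=5):
    """Resolve the domain to its DC addresses and return the first one
    that accepts a TCP connection on the LDAP port."""
    _, _, addresses = socket.gethostbyname_ex(domain)
    for addr in addresses:
        try:
            socket.create_connection((addr, port), timeout=timeout).close()
            return addr
        except OSError as exc:
            # log each failure noisily so the admins don't blame the app
            logging.warning('DC %s unreachable: %s', addr, exc)
    raise RuntimeError('no reachable domain controller for %s' % domain)

# usage: server = pick_ldap_host('foo.com')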
LDAP is configured, by default, to return a maximum of 1000 results. You can change this setting on the domain you're requesting from.
While running some tests, I started to get an IntegrityError in my setUp function. Here is my code:
def setUp(self):
    self.client = Client()
    self.emplUser = User.objects.create_user('employee@email.com', 'employee@email.com', 'nothing')
    self.servUser1 = User.objects.create_user('thebestcompany@email.com', 'thebestcompany@email.com', 'nothing')
    self.servUser2 = User.objects.create_user('theothercompany@email.com', 'theothercompany@email.com', 'nothing')
    self.custUser1 = User.objects.create_user('john@email.com', 'john@email.com', 'nothing')
    self.custUser2 = User.objects.create_user('marcus@email.com', 'marcus@email.com', 'nothing')
    ... save users here ...
I'm wondering how this IntegrityError keeps getting raised. I delete all the users in the tearDown function and am using sqlite3 as my DB backend. I see no conflicting usernames, and in production I have no issues with using emails as usernames.
This started happening only half an hour ago, out of the blue. Has anyone run into a solution to this problem?
I'm sure you're not suffering from this problem anymore since you wrote 18 months ago, but I had this problem too, and finally figured out what was happening. When using Postgres for test cases, DB changes are done in a transaction and simply rolled back, so it is not necessary to explicitly clear tables in tearDown(); in SQLite, however, it is necessary.
A late but more appropriate answer, for the people who land here after a Google search:
When your tests interact with the database (typically by creating model instances), you should subclass your test class from django.test.TestCase, which flushes the database after each test is run.
Then you don't need to write a tedious tearDown method in all your test classes; a minimal sketch follows below.
See https://docs.djangoproject.com/en/dev/topics/testing/overview/#writing-tests
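A minimal sketch, reusing the setUp from the question (the test class name here is made up): because django.test.TestCase wraps every test in a transaction that gets rolled back, re-creating the same users in each setUp no longer raises IntegrityError.
from django.contrib.auth.models import User
from django.test import Client, TestCase

class StaffTests(TestCase):  # django.test.TestCase, not unittest.TestCase
    def setUp(self):
        self.client = Client()
        self.emplUser = User.objects.create_user(
            'employee@email.com', 'employee@email.com', 'nothing')

    def test_employee_exists(self):
        self.assertTrue(User.objects.filter(username='employee@email.com').exists())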
Greetings,
Well, I am bewildered. I have been tasked with updating a PHP script that uses the Bulk API to upsert some data into the Opportunity entity.
This is all going well except that the Bulk API is returning this error for some clearly defined custom fields:
InvalidBatch : Field name not found : cv__Acknowledged__c
And similar.
I thought I had finally found the problem when I discovered that the WSDL version I was using was quite old (Partner WSDL), so I promptly regenerated the WSDL. The only problem? Enterprise, Partner, etc.: none of them include these fields. They're all coming from the Common Ground package and start with cv__.
I even tried to find them in the object explorer in Workbench as well as the schema explorer in the Force.com IDE.
So, please...lend me your experience. How can I update these values?
Thanks in advance!
Clif
Screenshots to prove I have the correct access:
EDIT -- Here is my code:
require_once 'soapclient/SforcePartnerClient.php';
require_once 'BulkApiClient.php';
$mySforceConnection = new SforcePartnerClient();
$mySoapClient = $mySforceConnection->createConnection(APP.'plugins'.DS.'salesforce_bulk_api_client'.DS.'vendors'.DS.'soapclient'.DS.'partner.wsdl.xml');
$mylogin = $mySforceConnection->login('redacted#redacted.com', 'redactedSessionredactedPassword');
$myBulkApiConnection = new BulkApiClient($mylogin->serverUrl, $mylogin->sessionId);
$job = new JobInfo();
$job->setObject('Opportunity');
$job->setOpertion('upsert');
$job->setContentType('CSV');
$job->setConcurrencyMode('Parallel');
$job->setExternalIdFieldName('Id');
$job = $myBulkApiConnection->createJob($job);
$batch = $myBulkApiConnection->createBatch($job, $insert);  // $insert holds the CSV data (example below)
$myBulkApiConnection->updateJobState($job->getId(), 'Closed');
$times = 1;
while($batch->getState() == 'Queued' || $batch->getState() == 'InProgress')
{
    $batch = $myBulkApiConnection->getBatchInfo($job->getId(), $batch->getId());
    sleep(pow(1.5, $times++));
}
$batchResults = $myBulkApiConnection->getBatchResults($job->getId(), $batch->getId());
echo "Number of records processed: " . $batch->getNumberRecordsProcessed() . "\n";
echo "Number of records failed: " . $batch->getNumberRecordsFailed() . "\n";
echo "stateMessage: " . $batch->getStateMessage() . "\n";
if($batch->getNumberRecordsFailed() > 0 || $batch->getNumberRecordsFailed() == $batch->getNumberRecordsProcessed())
{
    echo "Failures detected. Batch results:\n".$batchResults."\nEnd batch.\n";
}
And lastly, an example of the CSV data being sent:
"Id","AccountId","Amount","CampaignId","CloseDate","Name","OwnerId","RecordTypeId","StageName","Type","cv__Acknowledged__c","cv__Payment_Type__c","ER_Acknowledgment_Type__c"
"#N/A","0018000000nH16fAAC","100.00","70180000000nktJ","2010-10-29","Gary Smith $100.00 Single Donation 10/29/2010","00580000001jWnq","01280000000F7c7AAC","Received","Individual Gift","Not Acknowledged","Credit Card","Email"
"#N/A","0018000000nH1JtAAK","30.00","70180000000nktJ","2010-12-20","Lisa Smith $30.00 Single Donation 12/20/2010","00580000001jWnq","01280000000F7c7AAC","Received","Individual Gift","Not Acknowledged","Credit Card","Email"
After 2 weeks, 4 cases, dozens of e-mails and phone calls, 3 bulletin board posts, and 1 Stackoverflow question, I finally got a solution.
The problem was quite simple in the end (which makes all of that all the more frustrating).
As stated, the custom fields I was trying to update live in the Convio Common Ground package. Apparently our install has 2 licenses for this package. None of the licenses were assigned to my user account.
It isn't clear what is really gained/lost by not having the license other than API access. As the rest of this thread demonstrates, I was able to see and update the fields in every other way.
If you run into this, you can view the licenses on the Manage Packages page in Setup. Drill through to the package in question and it should list the users who are licensed to use it.
Thanks to SimonF's professional and timely assistance on the Developer Force bulletin boards:
http://boards.developerforce.com/t5/Perl-PHP-Python-Ruby-Development/Bulk-API-So-frustrated/m-p/232473/highlight/false#M4713
I really think this is a field level security issue. Is the field included in the opportunity layout for that user profile? Field level security picks the most restrictive option, so if you seem to have access from the setup screen but it's not included in the layout, I don't think the system will give you access.
If you're certain that your user's profile has FLS access to the fields and the assigned layouts include the fields, then I'd suggest looking into the definition of the package in question. I know the bulk API allows use of fields in managed packages normally (I've done this).
My best guess at this point is that your org has installed multiple versions of this package over time. Through component deprecation, it's possible the package author deprecated these custom fields. Take a look at two places once you've logged into salesforce:
1.) The package definition page. It should have details about what package version was used when the package was first installed and what package version you're at now.
2.) The page that has WSDL generation links. If you choose to generate the enterprise WSDL, you should be taken to a page that has dropdown elements that let you select which package version to use. Try fiddling with those to see if you can get the fields to show up.
These are just guesses. If you find more info, let me know, and I can try to provide additional guidance.