I would expect ChromeDriver's performance log to grow with new requests over time. However, the contents of driver.get_log('performance') change with almost every call.
Are old requests cleared once they have been retrieved using this method?
And if the log isn't polled often enough, can requests be dropped before they are ever retrieved?
import seleniumwire.undetected_chromedriver as uc

driver = uc.Chrome()
driver.get("{URL}")
while True:
    logs = driver.get_log('performance')
    print(len(logs))
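For what it's worth, Selenium's log API returns each entry only once and then discards it, and the buffer between calls is not unlimited, so entries can plausibly be lost if you poll too rarely. A minimal sketch of a workaround that accumulates everything client-side (the all_logs name and the one-second poll interval are my additions):

import time
import seleniumwire.undetected_chromedriver as uc

driver = uc.Chrome()
driver.get("{URL}")

all_logs = []  # every entry ever drained from the buffer
while True:
    # get_log returns the entries buffered since the last call,
    # then clears them, so we keep our own copy
    all_logs.extend(driver.get_log('performance'))
    print(len(all_logs))
    time.sleep(1)  # poll often enough that the buffer doesn't overflow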
I am using Firestore with queries and I am getting data, but something in the network tab is loading non-stop.
As you can see, it's been going for over a minute.
Should I be concerned about it?
It's hard to be certain without seeing the code that produces this problem, but most likely you're using a realtime listener (onSnapshot) in your code.
Such a listener keeps an active, open connection between the client and the server, so that the database can send a notification to the client when the data changes. The Chrome network tool shows this as an ongoing content download, because data does indeed trickle in over this connection.
So what you're seeing is likely a normal part of Firestore's protocol for realtime, bi-directional communication between the client and server; but again, it's hard to be certain without seeing the code.
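As an illustration of the pattern (here with the google-cloud-firestore Python client; the web SDK's onSnapshot behaves the same way, and the "orders" collection name is made up):

from google.cloud import firestore

db = firestore.Client()

def on_change(col_snapshot, changes, read_time):
    # Called once with the initial results, then again on every change
    print("%d documents at %s" % (len(col_snapshot), read_time))

# Attaching the listener opens the long-lived connection
# you see in the network tab
watch = db.collection("orders").on_snapshot(on_change)

# ...later: detaching the listener closes that connection
watch.unsubscribe()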
Running a React app in dev mode on localhost:3000 stalls, with the initial network calls never getting a response.
To my knowledge, I didn't change anything between when the app was working and now. I've tried resetting to a previous git commit at which I know the project worked, but the behavior is the same.
I've also tried building the project and navigating to the built version.
I don't even know where to look to troubleshoot this problem.
First of all, as Icepickle pointed out, this is fine. It's less than 1/500 of a second; it's nothing. :)
If you are still curious, you can look these terms and their explanations up in the Google Network Analysis Reference. Here is the relevant info (and, after the list, a sketch of how to read the same phases programmatically):
Queueing. The browser queues requests when:
There are higher priority requests.
There are already six TCP connections open for this origin, which is the limit. Applies to HTTP/1.0 and HTTP/1.1 only.
The browser is briefly allocating space in the disk cache.
Stalled. The request could be stalled for any of the reasons described in Queueing.
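If you'd rather read those phases programmatically than eyeball DevTools, the browser's Resource Timing API exposes them; a sketch via Selenium, assuming a local Chrome and reusing the dev-server URL from the question:

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("http://localhost:3000")

# Each entry carries the timing phases DevTools visualizes; queueing/stall
# shows up as the gap between startTime and requestStart
entries = driver.execute_script(
    "return performance.getEntriesByType('resource').map(e => e.toJSON())")
for e in entries:
    if not e["requestStart"]:
        continue  # cross-origin resource without Timing-Allow-Origin
    stalled = e["requestStart"] - e["startTime"]
    print("%8.1f ms stalled  %s" % (stalled, e["name"]))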
The OPTIONS/POST request fails inconsistently with a console error of ERR_TIMED_OUT. The issue is only observed sometimes; otherwise the request gets a proper response from the back end. When it times out, the request doesn't even reach the server.
I have done some research and found that, because of the maximum of six connections per origin, a request may have to wait for a connection to be released. But I don't see any other requests pending; all the other requests have completed.
In the timeline I can always see that the request stalled, almost always for exactly 20.00 seconds. The timeline shows nothing else, only that it stalled.
The status of the request shows failed with ERR_CONNECTION_TIMED_OUT. Please help.
(Screenshots: the network timing and the console error.)
I've seen this issue when using an authenticated proxy server, and usually a refresh of the page fixes it.
Are you using an authenticated proxy server where you see this behavior? Have you tried on a PC with direct access to the Internet (i.e. without a proxy)?
I got the same problem when I switched to another ISP. I thought I would only have to enter my new ID and password, but that wasn't the case.
I have an ADSL modem on a dry loop.
All other services were fine (DNS resolution, IP telephony, FTP, etc.).
I ran a lot of tests (disabling the firewall, trying other browsers, trying under Linux, resetting the modem to factory defaults, etc.); none of them were successful.
To resolve the ERR_TIMED_OUT problem, I had to adjust the MTU and MRU values. I used 1458 rather than 1492, the default value.
It works for me. Maybe some ISPs use different values. Good luck.
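If you want to confirm the right value before touching the modem settings, you can probe the path MTU with ping's don't-fragment flag. A rough sketch using Linux ping syntax (example.com and the candidate values are just illustrations; the payload is the MTU minus 28 bytes of IP and ICMP headers):

import subprocess

def fits(host, mtu):
    """True if an unfragmented packet of this MTU gets through."""
    payload = mtu - 28  # 20-byte IP header + 8-byte ICMP header
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "1", "-s", str(payload), host],
        capture_output=True)
    return result.returncode == 0

for mtu in (1492, 1458, 1400):
    print(mtu, "ok" if fits("example.com", mtu) else "too big")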
In the app I work on, we have an "export from Highrise" feature that produces a .csv file.
Sometimes (depending on what needs to be exported) the request takes a long time (7-10 minutes). Everything works fine (as far as I can tell) server-side, but my client's browser (he tried Safari, Chrome, and Firefox) never gets a response. The browser just sits there with the loading animation indefinitely (well, we gave up after 45 minutes).
On my machine everything works fine in all browsers. Based on the logging I put in place, everything goes as planned: the task ends and the output is sent, but he gets no response.
Any idea is welcome; I have no idea where to look next.
EDIT: I did what #ceejayoz suggested but the problem persists: the browser doesn't receive a response, it just sits there waiting even though the file is generated correctly.
I'd have the export generate server-side, and have the user-facing page refresh or check via AJAX every 15-30 seconds to see if the export is complete. Once completed, redirect them to the file.
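A rough sketch of that pattern with Flask and a background thread; the route names and the generate_csv helper are made up, just to show the shape of the start/poll/redirect flow:

import threading
import uuid

from flask import Flask, jsonify, redirect

app = Flask(__name__)
jobs = {}  # job id -> path of the finished file, or None while running

def run_export(job_id):
    path = generate_csv()  # the existing (slow) export code
    jobs[job_id] = path

@app.route("/exports", methods=["POST"])
def start_export():
    job_id = str(uuid.uuid4())
    jobs[job_id] = None
    threading.Thread(target=run_export, args=(job_id,)).start()
    return jsonify(id=job_id), 202  # respond immediately, work continues

@app.route("/exports/<job_id>")
def poll_export(job_id):
    path = jobs.get(job_id)
    if path is None:
        return jsonify(ready=False)  # client retries in 15-30 seconds
    return redirect("/download/" + path)  # hypothetical download route

This way no single HTTP request ever has to stay open for 7-10 minutes, which is what proxies and browsers tend to give up on.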
I'm using urlfetch in my app, and while everything works perfectly in the development environment, I'm finding urlfetch to be VERY unreliable once actually deployed. Sometimes it works as it should (retrieving data), but a few minutes later it might return nothing, and then it will be working fine again a few minutes after that. This is unacceptable. I've checked to make sure it's not the source URL (YQL) that's the problem, and again, everything works as it should in the development environment.
Are there any third-party libraries I could try?
Example code:
import json
import urllib

from google.appengine.api import urlfetch

url = ("http://query.yahooapis.com/v1/public/yql?q=%s&format=json"
       % urllib.quote_plus(query))
result = urlfetch.fetch(url, deadline=10)
if result.status_code == 200:
    r = json.loads(result.content)
else:
    return
a = r['query']['results']
# Do stuff with 'a'
Sometimes it'll work as it should, but at other times, completely randomly and with no code changes, I'll get this error:
a = r['query']['results']
TypeError: 'NoneType' object is unsubscriptable
Sometimes it'll work as it should,
but other times completely randomly with no code changes
This is a common symptom that your application's requests have exceeded the Yahoo API rate limit.
Quoting the Yahoo developer documentation on rate limits:
IP Based Limits
Our service rate limits are imposed as a limit on the number of API calls made per IP address during a specific time window. If your IP address changes during that time period, you may find yourself with more "credit" available. However, if someone else had been using the address and hit the limit, you'll need to wait until the end of the time period to be allowed to make more API calls.
Google App Engine uses a pool of IP addresses for outgoing urlfetch requests, and your application shares those addresses with other applications calling the same Yahoo endpoint. When the rate limit is exceeded, the endpoint replies with a limit-exceeded error, which is why UrlFetch appears to fail at random.
Here is another case of the same problem, this time with the Twitter search API.
When you mix Google App Engine with third-party web APIs, you need to be sure the API offers authenticated calls that give your application its own quota (the StackApps API, for example).
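Short of that, retrying with a backoff at least smooths over the intermittent failures; a sketch wrapped around the same urlfetch call from the question (the attempt count and delays are arbitrary):

import json
import time
import urllib

from google.appengine.api import urlfetch

def fetch_json(url, attempts=3):
    """Retry the fetch a few times before giving up."""
    for attempt in range(attempts):
        result = urlfetch.fetch(url, deadline=10)
        if result.status_code == 200:
            return json.loads(result.content)
        time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s
    return None

# query as in the question's snippet
url = ("http://query.yahooapis.com/v1/public/yql?q=%s&format=json"
       % urllib.quote_plus(query))
data = fetch_json(url)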
import urllib2

# Plain urllib2 also works on App Engine (where it is backed by urlfetch)
response = urllib2.urlopen('http://python.org/')
html = response.read()
This isn't an error in UrlFetch; it's an issue with the JSON being returned. Either json.loads is returning None, or r['query'] is, and I'm guessing it's probably the latter. Try logging result.content to see what the service is returning. You probably also want to check result.status_code.
One possibility is that your request is being denied or rate-limited by Yahoo in production, but not on your development machine.
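Concretely, something like this makes the failure visible instead of crashing; a sketch continuing from the question's url variable:

import json
import logging

from google.appengine.api import urlfetch

result = urlfetch.fetch(url, deadline=10)  # url as in the question
if result.status_code != 200:
    logging.error("YQL returned %d: %s", result.status_code, result.content)
else:
    r = json.loads(result.content)
    results = (r.get('query') or {}).get('results')
    if results is None:
        # a rate-limit or error response can still be well-formed JSON
        logging.warning("no results in response: %s", result.content)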