PancakeSwap API - how to get APR? - cryptocurrency

I need info about pool APR/APY from API endpoints like:
https://api.pancakeswap.info/api/v2/summary
or
https://api.pancakeswap.info/api/v2/pairs
but this info is missing there.
How can I get it?
When going to the v3 API, e.g. https://api.pancakeswap.info/api/v3/pairs
I get the response:
{"message":"Missing Authentication Token"}
How can I solve that?
The PancakeSwap API page (https://syncwith.com/api/pancakeswap) does not have a reference to the v3 API though.
Calculating APR based on Volume 24h and Liquidity
I've found a way to calculate APR based on volume24h and liquidity. Yet when I calculate APR with the current values, I get the wrong APR:
Token        Volume 24h    APR (%)   Liquidity
WBNB+BUSD    174278887     23.15     467090610
USDT+WBNB    227057        0.06      241228503
USDT+BUSD    28411575      5.15      342070717
ETH+WBNB     38904         0.02      104583785
Cake+WBNB    37343         0.01      286549465
Compare these with the values shown on the website for pools/pairs.
What's wrong here?
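For reference, here is a minimal sketch of the calculation I mean, assuming the commonly quoted 0.17% share of each trade that goes to liquidity providers on PancakeSwap v2 (that fee share is an assumption on my part, not something the API reports):

# LP APR estimate from 24h volume and pool liquidity (Python).
# LP_FEE_SHARE = 0.0017 assumes 0.17% of trading volume is paid to liquidity providers.
LP_FEE_SHARE = 0.0017

def pool_apr(volume_24h, liquidity):
    daily_fees = volume_24h * LP_FEE_SHARE
    return daily_fees * 365 / liquidity * 100  # annualised, as a percentage

print(pool_apr(174278887, 467090610))  # WBNB+BUSD from the table above -> ~23.15

With the numbers from the table, this reproduces the APR column above (about 23.15% for WBNB+BUSD), so my values at least follow that formula.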

Related

Nutch 1.4 and Solr 3.6 - Nutch not crawling 301/302 redirects

I am having an issue where the initial page is crawled but the redirect is not being crawled or indexed.
I have the http.redirect.max property set to 5; I have also attempted values 0, 1, and 3.
<property>
  <name>http.redirect.max</name>
  <value>5</value>
  <description>The maximum number of redirects the fetcher will follow when
  trying to fetch a page. If set to negative or 0, fetcher won't immediately
  follow redirected URLs, instead it will record them for later fetching.
  </description>
</property>
I have also attempted to clear out a majority of what is in regex-urlfilter.txt and crawl-urlfilter.txt. Other than the website being crawled, these are the only other entries in these files.
# skip file: ftp: and mailto: urls
-^(file|ftp|mailto):
# skip image and other suffixes we can't yet parse
-\.(gif|GIF|jpg|JPG|png|PNG|ico|ICO|css|sit|eps|wmf|zip|ppt|mpg|xls|gz|rpm|tgz|mov|MOV|exe|jpeg|JPEG|bmp|BMP|PDF|pdf|js|JS|swf|SWF|ashx|css|CSS|wmv|WMV)$
Also, it seems like Nutch is crawling and pushing only pages that have querystring parameters.
This is what I see when looking at the output:
http://example.com/build Version: 7
Status: 4 (db_redir_temp)
Fetch time: Fri Sep 12 00:32:33 EDT 2014
Modified time: Wed Dec 31 19:00:00 EST 1969
Retries since fetch: 0
Retry interval: 2700 seconds (0 days)
Score: 0.04620983
Signature: null
Metadata: _pst_: temp_moved(13), lastModified=0: http://example.com/build/
There is a default IIS redirect occurring that throws a 302 to add the trailing slash. I have made sure this slash is already added on all pages, so I'm unsure why this is being redirected.
Just a bit more information, here are some parameters I have tried.
depth=5 (tried 1-10)
threads=30 (tried 1 - 30)
adddays=7 (tried 0, 7)
topN=500 (tried 500, 1000)
Try running Wireshark on the webserver to see exactly what is being served, and on the machine Nutch is on to see what's being requested. If they're on the same server, great. Try that and add HTTP to your filter box after the capture.
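If Wireshark feels like overkill, a quick sanity check of the raw response (with no redirect following) is a few lines of Python; the host and path below are just the placeholders from your output:

import http.client

# HEAD the URL that Nutch marks db_redir_temp; http.client does not follow
# redirects, so the raw status and Location header are visible.
conn = http.client.HTTPConnection("example.com")
conn.request("HEAD", "/build")
resp = conn.getresponse()
print(resp.status, resp.reason)        # expect 302 if the trailing-slash redirect fires
print(resp.getheader("Location"))      # expect http://example.com/build/
conn.close()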

Google Cloud Storage (gcs) Error 200 on non-final Chunk

I'm running into the following error when running an export-to-CSV job on App Engine using the new Google Cloud Storage library (appengine-gcs-client). I have about 30 MB of data I need to export on a nightly basis. Occasionally, I will need to rebuild the entire table. Today, I had to rebuild everything (~800 MB total) and I only actually pushed across ~300 MB of it. I checked the logs and found this exception:
/task/bigquery/ExportVisitListByDayTask
java.lang.RuntimeException: Unexpected response code 200 on non-final chunk: Request: PUT https://storage.googleapis.com/moose-sku-data/visit_day_1372392000000_1372898225040.csv?upload_id=AEnB2UrQ1cw0-Jbt7Kr-S4FD2fA3LkpYoUWrD3ZBkKdTjMq3ICGP4ajvDlo9V-PaKmdTym-zOKVrtVVTrFWp9np4Z7jrFbM-gQ
x-goog-api-version: 2
Content-Range: bytes 4718592-4980735/*
262144 bytes of content
Response: 200 with 0 bytes of content
ETag: "f87dbbaf3f7ac56c8b96088e4c1747f6"
x-goog-generation: 1372898591905000
x-goog-metageneration: 1
x-goog-hash: crc32c=72jksw==
x-goog-hash: md5=+H27rz96xWyLlgiOTBdH9g==
Vary: Origin
Date: Thu, 04 Jul 2013 00:43:17 GMT
Server: HTTP Upload Server Built on Jun 28 2013 13:27:54 (1372451274)
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Google-Cache-Control: remote-fetch
Via: HTTP/1.1 GWA
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService.put(OauthRawGcsService.java:254)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService.continueObjectCreation(OauthRawGcsService.java:206)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl$2.run(GcsOutputChannelImpl.java:147)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl$2.run(GcsOutputChannelImpl.java:144)
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:78)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:123)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl.writeOut(GcsOutputChannelImpl.java:144)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl.waitForOutstandingWrites(GcsOutputChannelImpl.java:186)
at com.moose.task.bigquery.ExportVisitListByDayTask.doPost(ExportVisitListByDayTask.java:196)
The task is pretty straightforward, but I'm wondering if there is something wrong with the way I'm using waitForOutstandingWrites() or the way I'm serializing my outputChannel for the next task run. One thing to note is that each task is broken into daily groups, each outputting its own individual file. The day tasks are scheduled to run 10 minutes apart, concurrently, to push out all 60 days.
In the task, I create a PrintWriter like so:
OutputStream outputStream = Channels.newOutputStream( outputChannel );
PrintWriter printWriter = new PrintWriter( outputStream );
and then write data out to it 50 lines at a time and call the waitForOutstandingWrites() function to push everything over to GCS. When I'm coming up to the open-file limit (~22 seconds) I put the outputChannel into Memcache and then reschedule the task with the data iterator's cursor.
printWriter.print( outputString.toString() );
printWriter.flush();
outputChannel.waitForOutstandingWrites();
This seems to be working most of the time, but I'm getting these errors, which are creating corrupted and incomplete files on GCS. Is there anything obvious I'm doing wrong in these calls? Can I only have one channel open to GCS at a time per application? Is there some other issue going on?
Appreciate any tips you could lend!
Thanks!
Evan
A 200 response indicates that the file has been finalized. If this occurs on an API call other than close, the library throws an error, as this is not expected.
This is likely occurring due to the way you are rescheduling the task. It may be that when you reschedule the task, the task queue is duplicating the delivery of the task for some reason. (This can happen.) If there are no checks to prevent this, there could be two instances attempting to write to the same file at the same time; when one closes the file, the other sees an error. The net result is a corrupt file.
The simple solution is not to reschedule the task. There is no time limit on how long a file can be held open with the GCS client (unlike the deprecated Files API).

ImageMosaic-JDBC Error

I'm trying to get PostGIS raster layers into geoserver-2.3.1, with postgresql-8.4, postgis-2.0 and gt-imagemosaic-jdbc-9.1.jar support. I'm going through the tutorial with a PNG raster that I have. In the last step I'm stuck with a Java exception and can't really understand what it is. I've tried another JDBC PostgreSQL driver and fewer tiles, but the error seems to come up every time.
Here is the output. Any interpretations? Any help is welcome. Thanks
java -jar ~rdfs_run/geoserver/geoserver-2.3.1/webapps/geoserver/WEB-INF/lib/gt-imagemosaic-jdbc-9.1.jar import -config ~rdfs_run/geoserver/geoserver-2.3.1/data_dir/coverages/postgis/aveiro.postgis.xml -spatialTNPrefix tileaveiro -tileTNPrefix tileaveiro -dir tiles -ext png
Apr 28, 2013 8:50:34 PM org.geotools.gce.imagemosaic.jdbc.Import logInfo
INFO: Truncating table : tileaveiro_0
Apr 28, 2013 8:50:34 PM org.geotools.gce.imagemosaic.jdbc.Import logInfo
INFO: Number of tiles to import: 48
Apr 28, 2013 8:50:34 PM org.geotools.gce.imagemosaic.jdbc.Import logInfo
INFO: Inserted tile AveiroRDFS_rgb_5_8.png : 1/48
...
INFO: Inserted tile AveiroRDFS_rgb_3_8.png : 48/48
java.sql.BatchUpdateException: Batch entry 0 INSERT INTO tileaveiro_0 (location,geom,data) VALUES ('AveiroRDFS_rgb_5_8.png',geomfromwkb(?,4326),?) was aborted. Call getNextException to see the cause.
at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2746)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1887)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:405)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2893)
at org.geotools.gce.imagemosaic.jdbc.Import.sqlCommit(Import.java:1026)
at org.geotools.gce.imagemosaic.jdbc.Import.fillSpatialTable(Import.java:856)
at org.geotools.gce.imagemosaic.jdbc.Import.start(Import.java:401)
at org.geotools.gce.imagemosaic.jdbc.Toolbox.main(Toolbox.java:46)
Found the answer: the gt-imagemosaic-jdbc-9.1 plugin (from GeoTools) probably doesn't support PostGIS 2.0. The geomfromwkb function is not defined there; in newer PostGIS versions it is st_geomfromwkb.
Right now I'm using an older PostGIS/PostgreSQL database.

Timeout on bigquery v2 from GAE

I am doing a query to BigQuery from my app in Google App Engine, and sometimes receive a weird result from BQ (discovery#restDescription). It took me some time to understand that the problem occurs only when the amount of data I am querying is high, which somehow makes my query time out within 10 sec.
I found a good description of my problem here:
Bad response to a BigQuery query
After reading the GAE docs again, I found out that HTTP requests should be handled within a few seconds. So I guess, and this is only a guess, that BigQuery might also be limiting itself in the same way, and therefore has to respond to my queries "within seconds".
If this is the case, first of all, I will be a bit surprised, because my BigQuery requests are for sure going to take more than a few seconds... But anyway, I did a test by forcing a timeout of 1 second on my query, and then getting the query result by polling the getQueryResults API call.
The outcome is very interesting. BigQuery returns something within 3 secs, more or less (not 1 as I asked), and then I get my results later on, within 26 secs, by polling. This seems to circumvent the 10-sec timeout issue.
But I hardly see myself doing this trick in production.
Has anyone encountered the same problem with BigQuery? What am I supposed to do when the query lasts more than "a few seconds"?
Here is the code I use to query:
query_config = {
    'timeoutMs': 1000,
    "defaultDataset": {
        "datasetId": self.dataset,
        "projectId": self.project_id
    },
}
query_config.update(params)
result_json = (self.service.jobs()
               .query(projectId=project,
                      body=query_config)
               .execute())
And to retrieve the results, I poll with this:
self.service.jobs().getQueryResults(projectId=project,jobId=jobId).execute()
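Spelled out, the polling is essentially a loop like the following (a minimal sketch, reusing the same self.service client, project and jobId as above; the 5-second sleep is an arbitrary choice):

import time

# Poll getQueryResults until BigQuery reports the job as complete.
while True:
    result = self.service.jobs().getQueryResults(
        projectId=project, jobId=jobId).execute()
    if result.get('jobComplete'):
        break
    time.sleep(5)  # arbitrary pause between polls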
And those are the logs of what happens on BigQuery:
2012-12-03 12:31:19.835 /api/xxxxx/ 200 4278ms 0kb Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1
xx.xx.xx.xx - - [03/Dec/2012:02:31:19 -0800] "GET /api/xxxxx/ HTTP/1.1" 200 243 ....... ms=4278 cpu_ms=346 cpm_usd=0.000426 instance=00c61b117c1169753678c6d5dac736b223809b
I 2012-12-03 12:31:16.060
URL being requested: https://www.googleapis.com/discovery/v1/apis/bigquery/v2/rest?userIp=xx.xx.xx.xx
I 2012-12-03 12:31:16.061
Attempting refresh to obtain initial access_token
I 2012-12-03 12:31:16.252
URL being requested: https://www.googleapis.com/bigquery/v2/projects/xxxxxxxxxxxx/queries?alt=json
I 2012-12-03 12:31:19.426
URL being requested: https://www.googleapis.com/bigquery/v2/projects/xxxxxxxx/jobs/job_a1e74a6769f74cb997d998623b1b6b2e?alt=json
I 2012-12-03 12:31:19.500
This is what my query API call returns me. And in the metadata, the status is 'RUNNING':
{u'kind': u'bigquery#queryResponse', u'jobComplete': False, u'jobReference': {u'projectId': u'xxxxxxxxxxx', u'jobId': u'job_a1e74a6769f74cb997d998623b1b6b2e'}}
With the jobId I am able to retrieve the results 26 secs later, when they are ready.
There must be another way! What am I doing wrong?

GAE/J request log format breakdown

Here is a sample of a GAE Console log record:
http://i.stack.imgur.com/M2iJX.png (for a readable high-res version).
I would like to provide a breakdown of the fields, displayed both in the collapsed (summary) view and the expanded (detail) view. I will fill in the fields whose meaning I know and would appreciate assistance with deciphering the rest. This post will be updated once new information is available.
Thank you,
Maxim.
Open issues:
How do I read the timestamp in [...-prod/0-0-39.346862139187007139]?
Why does the summary say the request took 343ms, but the details say 344ms?
If the request spent 123ms on CPU and 30ms on API calls, where did the rest of the time go? Why is the total request time 343/344ms?
Summary
12-14 : Date of the request. 12 is the month (December), 14 is the day of the month (Tuesday).
05:21AM : Time of the request, PST offset. 05 is the hour. 21 is the minute.
57.593 : Time of request, PST offset. 57 is the second. 593 is the millisecond.
/match/... : HTTP request path
200 : HTTP return code. (200 = OK)
343ms : The total time (in milliseconds) it took to calculate and return the response to the user
123cpu_ms : The time (in milliseconds) the request spent on CPU calculation
30api_cpu_ms : The time (in milliseconds) the request spent on API calls (Datastore get and co...)
1kb : The size (in kilobytes) of the response that was sent to the user
Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.44 Safari/534.7,gzip(gfe) : User agent; note that gzip(gfe) is added by the App Engine front end.
Details
IP yellow masked out : The IP address of the client initiating the request
HTTP Referrer : Note that it's empty on this request because it's a direct hit
[14/Dec/2010:05:21:57 -0800] : Date, including timestamp offset specification.
"GET /match/... HTTP/1.1" : The HTTP GET URI.
200 : HTTP return code. (200 = OK)
1036 : The size (in bytes) of the response that was sent to the user
Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.44 Safari/534.7,gzip(gfe) : User agent; note that gzip(gfe) is added by the App Engine front end.
ms=344 : The total time (in milliseconds) it took to calculate and return the response to the user
cpu_ms=123 : The time (in milliseconds) the request spent on CPU calculation
api_cpu_ms=30 : The time (in milliseconds) the request spent on API calls (Datastore get and co...)
cpm_usd=0.003648 : The amount (in us $) that 1000 requests such as this one would cost. ref
log record
12-14 : Date of this specific application emitted log entry. 12 is the month (December), 14 is the day of the month (Tuesday).
05:21AM : Time of this specific application emitted log entry, PST offset.
57.833 : Time of request, PST offset. 57 is the second. 833 is the millisecond.
[...-prod/0-0-39.346862139187007139] : The identifier of current version of the application that emitted this log message. Note: ...-prod is the application name. 0-0-39 is the deployed version name (app.yaml). .346862139187007139 is the time? (in what format?) when this version was deployed to appengine cloud.
stdout : The channel to which the application emitted this log message. Can be either stdout or stderr.
INFO ....Matcher - ... Id 208 matched. : Application level output. Can be done via either System.out.print or (as in this case) using a logging framework, logback
Isn't 57.593 seconds.milliseconds?
And cpm_usd represents an estimate of what 1,000 requests similar to this request would cost in US dollars.
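For the record above, that means cpm_usd=0.003648 works out to roughly $0.0000036 for this single request (0.003648 / 1000).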
