Coinbase Pro API client_oid always empty - uuid

I've run into an issue using the Coinbase Pro sandbox API to test my software.
When placing orders, I POST a client_oid field along with the rest of the body to the REST API. The order gets filled properly, but when the received message arrives through the websocket stream, the client_oid is always an empty string.
Does anyone know why that is and how to fix it?
Example data POSTed when placing the order:
{
"type": "market",
"side": "buy",
"product_id": "BTC-EUR",
"funds": "1000",
"client_oid": "dev_node-order-1"
}
And here's the matching websocket message of type received:
{
"type": "received",
"side": "buy",
"product_id": "BTC-EUR",
"time": "2021-08-15T16:57:29.079657Z",
"sequence": 52030416,
"profile_id": "[MY-PROFILE-ID]",
"user_id": "[USER-ID]",
"order_id": "d1f60730-8960-495e-a7eb-cd37baa46768",
"order_type": "market",
"funds": "995.0245866076",
"client_oid": ""
}
As you can see, the received client_oid is empty. Any idea why?

It turned out that the client_oid needs to be in UUID format, for example 9bffcb70-13ea-11ec-abc7-7dfab310af81; if it is not in this format, the field is silently ignored.
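For example, a valid client_oid can be generated with Python's uuid module before POSTing the order (a minimal sketch of the request body only; endpoint, authentication and error handling are omitted):

import uuid

# client_oid must be a UUID string, otherwise Coinbase Pro drops it
# and the websocket "received" message reports it as "".
order = {
    "type": "market",
    "side": "buy",
    "product_id": "BTC-EUR",
    "funds": "1000",
    # uuid4() gives a random UUID; the time-based uuid1() seen in the
    # example above should presumably work as well.
    "client_oid": str(uuid.uuid4()),
}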

Related

Solr COLSTATUS and LIST show deleted collection that cannot be deleted

For some of our collections, when we run a Collections API DELETE synchronously, followed immediately by a Configset API DELETE for the underlying configset, we end up with a corrupted collection state.
I have been unable to reproduce this issue in a test environment; it only happens inconsistently on the live production instances, so it may be load- or race-condition related.
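For reference, the deletion sequence is roughly the following (a hedged sketch in Python; the host is taken from the error below, the configset name is a placeholder, and error handling is minimal):

import requests

SOLR = "http://solr1.prod-internal:8983/solr"

# 1. Delete the collection synchronously via the Collections API
requests.get(SOLR + "/admin/collections",
             params={"action": "DELETE", "name": "collection_19744"}).raise_for_status()

# 2. Immediately afterwards, delete the underlying configset via the Configset API
requests.get(SOLR + "/admin/configs",
             params={"action": "DELETE", "name": "<CONFIGSET_NAME>"}).raise_for_status()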
Running a COLSTATUS against the broken collection provides the following response,
{
"responseHeader": {
"status": 404,
"QTime": 33
},
"collection_19744": {
"stateFormat": 2,
"znodeVersion": 51,
"properties": {
"autoAddReplicas": "false",
"maxShardsPerNode": "1",
"nrtReplicas": "3",
"pullReplicas": "0",
"replicationFactor": "3",
"router": {
"name": "compositeId"
},
"tlogReplicas": "0"
},
"activeShards": 1,
"inactiveShards": 0
},
"error": {
"metadata": [
"error-class",
"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException",
"root-error-class",
"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException"
],
"msg": "Error from server at http://solr1.prod-internal:8983/solr/collection_19744_shard1_replica_n4: Expected mime type application/octet-stream but got text/html. <html>\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/html;charset=utf-8\"/>\n<title>Error 404 Not Found</title>\n</head>\n<body><h2>HTTP ERROR 404</h2>\n<p>Problem accessing /solr/collection_19744_shard1_replica_n4/admin/segments. Reason:\n<pre> Not Found</pre></p>\n</body>\n</html>\n",
"code": 404
}
}
The underlying shard data for the collection has been successfully removed from disk and is not present on any of the Solr nodes. The /collections/collection_19744 node has also been successfully deleted from ZooKeeper, which I verified using the zkcli script; it returns a NoNode for /collections/collection_19744 message.
As COLSTATUS is broken, we cannot delete the associated configset; attempting to do so results in a "Can not delete ConfigSet as it is currently being used by collection [collection_19744]" message, which is false.
Where exactly does COLSTATUS get its collection metadata, given that the /collections/collection_19744 node is absent from ZooKeeper?
I want to remove the broken collection metadata so I can then remove the configset and recreate the collection with its original name.

How can I go about avoiding verification for a TikTok bot with proxies?

I am making a TikTok bot for fun. It does not automate posts/follows; it only scrapes information from the site. The issue I am finding is that after about 10-15 requests to the TikTok API it requires verification, returning the following JSON:
{
"code":"10000",
"from":"",
"type":"verify",
"version":"",
"region":"sg",
"subtype":"slide",
"detail":"Hj4wDrDKZhDyu*bE94NlMgd3uQfAXw2eZJGOyoJXO-X9iLbeynU-spQiwbxyOkhJkGKbHNCyGHKuZ4jnJaJfnGedLadLrz8UMPAV*sriWIzRIEwj0PdWEmtZ25SbcEoytp4G631fwjn7y0498dMxisxkA8QnSTTGfswOFlkQBfyyMFYf5TlvDkfxmkjG7qKRHdCOhsnmSLbTCOd6MLcNFJA9WhlmcnhBrJnnVCs-HvoRzOdbpGbOmZ55HjpWIRz0JrQp2EdEjr8-qtQd5jpdpzuXxcfzrLbGFZTjWkyMHPf4vMb3J*q8hIs0zX2gP6IyCsa2et5BQPsB1KU2YyRA5VEvd*8*lZyRR60ZVs46UwtEXAu0l41Y2q0agUrayqnPnj8zpq7H7aK2VS46RZO0W3N7nZ-Jjq4QbAs.",
"verify_event":"",
"fp":"verify_kxe9l4xj_3jaJngfM_UEUu_47yj_Au6M_Kp0jwEVrqCJb",
"scene":"",
"verify_ticket":"",
"channel_mobile":"",
"sms_content":"",
"mobile":"",
"email":""
}
I am aware of sneaker bots using proxies to avoid these sorts of issues; however, TikTok requires cookie authentication from a signed-in account, so I'm not sure whether that will work. Some answers talk about using headless requests, etc., but I've had no luck so far in preventing this verification process.
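For illustration only, a proxied request that reuses the signed-in session cookies would look something like the sketch below (the proxy URL, cookie value, and endpoint are placeholders, and I don't know whether this actually avoids the verification wall):

import requests

# Placeholder proxy and session cookie from a signed-in account
proxies = {"https": "http://user:pass@proxy.example.com:8080"}
cookies = {"sessionid": "<SESSION_COOKIE>"}
API_URL = "https://www.tiktok.com/api/..."  # placeholder endpoint

resp = requests.get(
    API_URL,
    cookies=cookies,
    proxies=proxies,
    headers={"User-Agent": "Mozilla/5.0"},
    timeout=10,
)
print(resp.status_code, resp.text[:200])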
Does anyone with experience scraping TikTok have a resolution for this issue?
Thanks.

GraphAPI Schema Extensions don't appear for Messages

I would like to add some custom data to emails and to be able to filter on it using the Graph API.
So far, I was able to create a Schema Extension and it gets returned successfully when I query https://graph.microsoft.com/v1.0/schemaExtensions/ourdomain_EmailCustomFields:
{
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#schemaExtensions/$entity",
"id": "ourdomain_EmailCustomFields",
"description": "Custom data for emails",
"targetTypes": [
"Message"
],
"status": "InDevelopment",
"owner": "hiding",
"properties": [
{
"name": "MailID",
"type": "String"
},
{
"name": "ProcessedAt",
"type": "DateTime"
}
]
}
Then I patched a specific message https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages/hidingmessageid:
PATCH Request
{"ourdomain_EmailCustomFields":{"MailID":"12","ProcessedAt":"2020-05-27T16:21:19.0204032-07:00"}}
The problem is that the added custom data doesn't appear when I select the message with a GET request: https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages?$top=1&$select=id,subject,ourdomain_EmailCustomFields
Also, the following GET request gives me an error.
Request: https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages?$filter=ourdomain_EmailCustomFields/MailID eq '12'
Response:
{
"error": {
"code": "RequestBroker--ParseUri",
"message": "Could not find a property named 'e2_someguid_ourdomain_EmailCustomFields' on type 'Microsoft.OutlookServices.Message'.",
"innerError": {
"request-id": "someguid",
"date": "2020-05-29T01:04:53"
}
}
}
Do you have any ideas on how to resolve the issues?
Thank you!
I took your schema extension and copied and pasted it into my tenant, except with a random app registration I created as the owner, then patched an email with your statement, and it does work correctly.
A couple of things here:
I would verify using Microsoft Graph Explorer that everything is correct, e.g. log into Graph Explorer with an admin account: https://developer.microsoft.com/en-us/graph/graph-explorer#
First, make sure the schema extension exists.
Run a GET request for
https://graph.microsoft.com/v1.0/schemaExtensions/DOMAIN_EmailCustomFields
It should return the schema extension you created.
Then:
Run a GET request for the actual message you patched (not all messages that you filtered) for now:
https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/Messages/MESSAGEID?$select=DOMAIN_EmailCustomFields
Here the response should be the email you patched, and your EmailCustomFields data should be somewhere in it; if it is not, your PATCH did not work.
Then you can run the PATCH again from Graph Explorer.
I did all this from Graph Explorer; it's the easiest way to confirm.
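Outside Graph Explorer, the same two checks can also be scripted; here is a minimal sketch with Python requests, assuming you already have an OAuth access token and the id of the patched message (both placeholders):

import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer <ACCESS_TOKEN>"}  # placeholder token

# 1. Confirm the schema extension exists
ext = requests.get(GRAPH + "/schemaExtensions/DOMAIN_EmailCustomFields", headers=headers)
print(ext.status_code, ext.json())

# 2. Fetch the specific message you patched, selecting only the extension
msg = requests.get(
    GRAPH + "/me/mailFolders/Inbox/Messages/<MESSAGE_ID>",
    params={"$select": "DOMAIN_EmailCustomFields"},
    headers=headers,
)
print(msg.status_code, msg.json())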
Two other things:
1) Maybe the message returned by ?$top=1 in your GET isn't the same message that you patched?
2) As per the documentation, you cannot use $filter on schema extension properties with the message entity (https://learn.microsoft.com/en-us/graph/known-issues#filtering-on-schema-extension-properties-not-supported-on-all-entity-types), so that second GET will never work.
Hopefully this helps you troubleshoot.

Alexa INTERNAL_SERVICE_EXCEPTION [Error reading entity from input stream]

I am using the Alexa Skills Kit, but sometimes this error appears and I do not know why. If I repeat exactly the same sentence that generated the error, the error does not recur; it happens intermittently, but frequently.
{
"header": {
"namespace": "System",
"name": "Exception",
"messageId": "d54f1559-f45e-47a8-b558-d391793c2030"
},
"payload": {
"code": "INTERNAL_SERVICE_EXCEPTION",
"description": "Error reading entity from input stream."
}
}
I don't know whether the problem is in my Lambda or an error from the skill. I need to know what the problem is in order to fix it.
Thanks a lot!

Translating JSON file with Angular Gettext

I am using gettext to translate my AngularJS site - it all works fine where I have HTML elements that I can add the 'translate' attribute to.
However I also have quite a large and complex JSON file which needs translating, which includes arrays and objects.
Is there any way to include this in the translation that gettext does, into the PO file? Or would I need to rethink the whole idea of using a JSON file to segment the customer flow?
I have included an initial extract of the JSON file below
{
"version": "1.1",
"name": "MVP",
"description": "Initial customer segmenting flow",
"enabled": true,
"funnel": [
{
"text": "I am...",
"image": "",
"help": "",
"options": [
{
"text": "Placing an order",
"image": "image1.png",
"next": 2
},
{
"text": "E-mailing customer service",
"image": "image2.png",
"next": 2
},
Thanks
James
Process the JSON file yourself with a script at build time and dump all translatable messages into a dummy source file with the syntax expected by your string extractor, probably something like this:
<translate>I am ...</translate>
<translate>Placing an order</translate>
<translate>E-mailing customer service</translate>
