I am using gettext to translate my AngularJS site, and it works fine wherever I can add the translate attribute to an HTML element.
However, I also have a fairly large and complex JSON file that needs translating, which includes arrays and objects.
Is there any way to include this file in the extraction that gettext does, so its strings end up in the PO file? Or would I need to rethink the whole idea of using a JSON file to segment the customer flow?
I have included an initial extract of the JSON file below:
{
  "version": "1.1",
  "name": "MVP",
  "description": "Initial customer segmenting flow",
  "enabled": true,
  "funnel": [
    {
      "text": "I am...",
      "image": "",
      "help": "",
      "options": [
        {
          "text": "Placing an order",
          "image": "image1.png",
          "next": 2
        },
        {
          "text": "E-mailing customer service",
          "image": "image2.png",
          "next": 2
        },
Thanks
James
Process the JSON file yourself with a script at build time and dump all translatable messages into a dummy source file with the syntax expected by your string extractor, probably something like this:
<translate>I am ...</translate>
<translate>Placing an order</translate>
<translate>E-mailing customer service</translate>
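A build-time extraction script along those lines might look like this. This is a minimal sketch in Python: the inline sample stands in for the real funnel file (in practice you would read it with json.load from whatever path your project uses), and the idea is simply to collect every "text" value recursively and emit one dummy translate element per string:

```python
import json

# A trimmed stand-in for the question's funnel JSON; in a real build step
# this would be loaded from the actual file (path is project-specific).
raw = """
{
  "version": "1.1",
  "name": "MVP",
  "funnel": [
    {
      "text": "I am...",
      "options": [
        {"text": "Placing an order", "image": "image1.png", "next": 2},
        {"text": "E-mailing customer service", "image": "image2.png", "next": 2}
      ]
    }
  ]
}
"""

def collect_texts(node, out):
    """Recursively collect every non-empty "text" value from the JSON tree."""
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "text" and isinstance(value, str) and value:
                out.append(value)
            else:
                collect_texts(value, out)
    elif isinstance(node, list):
        for item in node:
            collect_texts(item, out)

texts = []
collect_texts(json.loads(raw), texts)

# Write these lines to a dummy .html file that your extractor scans.
dummy_html = "\n".join("<translate>%s</translate>" % t for t in texts)
print(dummy_html)
```

The dummy file only exists so the extractor picks the strings up into the PO file; at runtime you would still look the translated strings up through your normal gettext catalogue.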
I've run into an issue using the Coinbase Pro sandbox API to test my software.
When placing orders, I POST a client_oid field along with the rest of the body to the REST API. The order gets filled properly, but when the received message arrives through the websocket stream, the client_oid is always an empty string.
Does anyone know why that is and how to fix it?
Example data POSTed when placing the order:
{
  "type": "market",
  "side": "buy",
  "product_id": "BTC-EUR",
  "funds": "1000",
  "client_oid": "dev_node-order-1"
}
And here's the matching websocket message of type received:
{
  "type": "received",
  "side": "buy",
  "product_id": "BTC-EUR",
  "time": "2021-08-15T16:57:29.079657Z",
  "sequence": 52030416,
  "profile_id": "[MY-PROFILE-ID]",
  "user_id": "[USER-ID]",
  "order_id": "d1f60730-8960-495e-a7eb-cd37baa46768",
  "order_type": "market",
  "funds": "995.0245866076",
  "client_oid": ""
}
As you can see, the received client_oid is empty. Any idea why?
It turns out the client_oid needs to be in UUID format, for example 9bffcb70-13ea-11ec-abc7-7dfab310af81; if it is not in this format, the field is silently ignored.
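In other words, generate the client_oid as a proper UUID before POSTing. A minimal sketch in Python (the payload mirrors the question; endpoint, authentication, and the actual HTTP call are omitted):

```python
import uuid

# Build the order body with a spec-compliant client_oid. uuid4() yields a
# random UUID string such as "9bffcb70-13ea-11ec-abc7-7dfab310af81"; any
# valid UUID is accepted and echoed back in the "received" message.
order = {
    "type": "market",
    "side": "buy",
    "product_id": "BTC-EUR",
    "funds": "1000",
    "client_oid": str(uuid.uuid4()),
}

print(order["client_oid"])
```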
I am completely new to JSON and Java in general.
I have a task with a similar block of code:
{
  "name": "Chew Barka",
  "breed": "Bichon",
  "age": "2 years",
  "weight": 8,
  "bio": "The park, The pool or the Playground - I love to go anywhere!",
  "filename": ""
},
And I would like to have the contents of a folder, for example "C:/Temp", stored in "filename", so that when I read "filename" I get the contents of "C:/Temp".
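One way to do what the question describes, sketched in Python rather than Java (the folder and file names below are made up for illustration): list the folder's contents and store the listing under the "filename" key:

```python
import json
import os
import tempfile

# Hypothetical setup: a temporary folder with a couple of files standing
# in for "C:/Temp".
folder = tempfile.mkdtemp()
open(os.path.join(folder, "photo1.png"), "w").close()
open(os.path.join(folder, "photo2.png"), "w").close()

record = {
    "name": "Chew Barka",
    "breed": "Bichon",
    # Store the folder listing in the "filename" field.
    "filename": sorted(os.listdir(folder)),
}

print(json.dumps(record, indent=2))
```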
I am getting the "value, object or array expected." syntax error when I test my JSON-LD code with Google's Structured Data Testing Tool. The error appears on line 143 of my code.
I am implementing Schema.org for a local business website with JSON-LD. I have tried replacing the [ brackets with } brackets for the image object, and even tried removing the comma on line 143. I either get the same error, or new errors appear. I have searched other questions related to this error, but they all had different code.
{
  "#context": "https://schema.org",
  "#type": "LocalBusiness",
  "image": [
    "http://secureservercdn.net/166.62.110.232/kkk.bd6.myftpupload.com/wp-content/uploads/2019/05/360webclicks-logo2-4.fw_.png",
  ],
Highlighted error in the SDTT:
The last value must not be followed by a comma.
So, this
"image": [
"image.png",
],
should be this
"image": [
"image.png"
],
If you only have one value, you could omit the array ([…]):
"image": "image.png",
I'm trying to build a simple Atom package that displays some file types associated with Squarespace development with the correct syntax highlighting. I've tried reading the docs, looking at related packages, and mirroring mine off of theirs, but no matter what I do, Atom won't pick up that these file types have an associated language package installed, and when I manually apply my language it doesn't even highlight them correctly.
The associations I'm trying to build are
.block -> html
.region -> html
.list -> html
.item -> html
.conf -> json
.preset -> json
my package.json looks like
{
  "name": "language-squarespace",
  "version": "0.4.0",
  "description": "Syntax Highlighting for SquareSpace files",
  "repository": {
    "type": "git",
    "url": "https://github.com/blaketarter/language-squarespace"
  },
  "license": "MIT",
  "engines": {
    "atom": "*",
    "node": "*"
  }
}
and an example of one of my grammar files is
'filetypes': [
  'block'
]
'name': 'block (squarespace)'
'patterns': [
  {
    'include': 'source.html'
  }
]
'scopeName': 'source.block'
I feel like I'm missing something important, because I based mine off of https://github.com/rgbkrk/language-ipynb and things seem to match.
I am using Solr to search institutions. My Solr database has around 400k documents, each of which has multiple fields like "name", "id", "city", and so on.
A document in my DB looks like this:
"docs":
{
  "id": "91348",
  "p_code": "71637",
  "name": "University of Toronto - Mississauga",
  "ext_name": "",
  "city": "Mississauga",
  "country": "CA",
  "state": "ON",
  "type": "academic/campus",
  "alt_name": "",
  "ext_city": "",
  "zip": "L5L 1C6",
  "alt_ext_city": "",
}
I query with something like {name: (university of toronto)}. The top two matches are:
"docs":
{
  "id": "91348",
  "p_code": "71637",
  "name": "University of Toronto - Mississauga",
  "ext_name": "",
  "city": "Mississauga",
  "country": "CA",
  "state": "ON",
  "type": "academic/campus",
  "alt_name": "",
  "ext_city": "",
  "zip": "L5L 1C6",
  "alt_ext_city": "",
  "_version_": 1473710223400108000,
  "score": 1.499069
},
{
  "id": "10624",
  "p_code": "7938",
  "name": "University of Toronto",
  "ext_name": "",
  "city": "Toronto",
  "country": "CA",
  "state": "ON",
  "type": "academic",
  "alt_name": "Saint George Downtown Campus",
  "ext_city": "",
  "zip": "M5S 1A1",
  "alt_ext_city": "",
  "_version_": 1473710220148473900,
  "score": 1.4967358
}
I am really surprised to see that "University of Toronto - Mississauga" gets a higher score than "University of Toronto". Intuitively, the document whose field contains "University of Toronto - Mississauga" should get a lower score, since that field is longer.
I was also very surprised to see that Solr gives different queryNorm values: 0.03198291 for the top document and 0.03203078 for the second-ranked one. I presumed that the query norm should be exactly the same for all documents, as it is a function of the query alone.
I am not sure whether I have misunderstood how Solr works, or whether something is wrong in my indexing or configuration. Has anybody faced the same problem?
Make sure that omitNorms is set to false for that field and that your collection is using the latest version of the schema. Then re-index all of your documents for the change to take effect.
I've found that some schema modifications are best handled with a complete wipe of the index before indexing new content; I am not sure, but I believe this may be one of them. For most changes you can simply re-index all of your content and overwrite the old documents.
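For reference, the setting lives on the field definition in schema.xml. This is only a sketch: the field name matches the question, but the field type and the other attributes are assumptions to be adapted to your schema:

```xml
<!-- schema.xml: keep norms for the "name" field so length normalization
     affects scoring. Field type and other attributes are illustrative. -->
<field name="name" type="text_general" indexed="true" stored="true"
       omitNorms="false"/>
```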