How to set an OCR language in Google Drive - google-app-engine

I need to upload an image to Google Drive with the OCR feature enabled.
The problem is with the setOcrLanguage method. I need to specify Hebrew, and based on the ISO 639-2 codes it should be "heb".
I'm getting this response:
"code" : 400,
"errors" : [ {
"domain" : "global",
"location" : "ocrLanguage",
"locationType" : "parameter",
"message" : "Invalid Value",
"reason" : "invalid"
} ],
Any idea what the code should be, or where I can get the list of available codes?
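A minimal sketch of the upload with the Drive v2 Java client (assuming an authorized Drive instance named drive). Since ocrLanguage appears to expect a short two-letter tag, "he" (or the legacy "iw" that some Google APIs use for Hebrew) is worth trying instead of the three-letter "heb":
import com.google.api.client.http.FileContent;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.model.File;

// Upload an image and ask Drive to OCR it in Hebrew.
File metadata = new File();
metadata.setTitle("scan.jpg");
FileContent media = new FileContent("image/jpeg", new java.io.File("scan.jpg"));
File uploaded = drive.files().insert(metadata, media)
    .setOcr(true)
    .setOcrLanguage("he")   // two-letter tag; "heb" produces the 400 above
    .execute();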

Related

Not able to fetch the FileSystem from Hitachi_NAS_File_Storage

When I try to execute the curl command below, I get an error:
curl -vk -H "X-Api-Key: zrxvSDAv9x.RIP4gkmKarG3beF.or.4Tc2im7oeqYN88C9XPGHxbXC" https://172.17.11.11:8444/v7/storage/filesystems
Error:
{
  "errorCode" : 1081427,
  "errorDetail" : {
    "detail" : "[no detail]",
    "fault" : "SOAP-ENV:Receiver",
    "fileName" : "FileSystemMgmntProvider.cpp",
    "function" : "getAllFileSystems",
    "lineNumber" : 54,
    "message" : "Failed to find a list of file system IDs on the server.",
    "reason" : "Failed to find a list of file system IDs on the server.",
    "returnedValue" : 2,
    "subCode" : 8992
  },
  "errorMsg" : "An internal HNAS error which usually results from an object not found or an invalid parameter."
}
But I am able to see the file systems on that specific server. Am I missing anything?
Please let me know if this is a permissions issue or a problem with the API version.
Reference:
https://knowledge.hitachivantara.com/Documents/Storage/NAS_Platform/13.9/Hitachi_NAS_File_Storage_REST_API_Reference/File_system_resource/02_Get_file_systems
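To reproduce the call outside the shell (and rule out quoting problems), here is the same request as a Java sketch; the host, port, and header come from the question, and the trust-all SSL context plus disabled hostname verification only mirror curl's -k against the self-signed certificate:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class HnasFileSystems {
    public static void main(String[] args) throws Exception {
        // Mirror curl -k: accept the self-signed certificate and skip hostname checks.
        System.setProperty("jdk.internal.httpclient.disableHostnameVerification", "true");
        TrustManager[] trustAll = { new X509TrustManager() {
            public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            public void checkClientTrusted(X509Certificate[] chain, String authType) {}
            public void checkServerTrusted(X509Certificate[] chain, String authType) {}
        }};
        SSLContext ssl = SSLContext.getInstance("TLS");
        ssl.init(null, trustAll, new SecureRandom());

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://172.17.11.11:8444/v7/storage/filesystems"))
            .header("X-Api-Key", "<api-key>")
            .GET()
            .build();
        HttpResponse<String> response = HttpClient.newBuilder().sslContext(ssl).build()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + "\n" + response.body());
    }
}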

Error on send email endpoint. Precondition check failed

I have an application up and running that sends emails through the API with no problem at all. But for specific customers I am getting the following error, and I cannot manage to reproduce it; it works for most of my customers. Could you give me further details about the problem?
{
  "code" : 400,
  "errors" : [ {
    "domain" : "global",
    "message" : "Precondition check failed.",
    "reason" : "failedPrecondition"
  } ],
  "message" : "Precondition check failed.",
  "status" : "FAILED_PRECONDITION"
}
I am using Google's Java client library, and this is how my code looks (I followed the Google guides):
MimeMessage content = emailService.createEmail(sendEmailRequest);
Message message = createMessageWithEmail(content);
gmail.users().messages().send(ME, message).execute();
The path the client is hitting is {userId}/messages/send, and these are the scopes the application requests:
openid email https://www.googleapis.com/auth/gmail.send
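For completeness, createMessageWithEmail follows the helper from Google's guide, which base64url-encodes the MIME message into the raw field (a sketch of that helper, assuming the commons-codec Base64 the guide uses):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.mail.MessagingException;
import javax.mail.internet.MimeMessage;
import org.apache.commons.codec.binary.Base64;
import com.google.api.services.gmail.model.Message;

// Wrap a MimeMessage in the Gmail API's Message type, base64url-encoded as the API requires.
private static Message createMessageWithEmail(MimeMessage emailContent)
        throws MessagingException, IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    emailContent.writeTo(buffer);
    Message message = new Message();
    message.setRaw(Base64.encodeBase64URLSafeString(buffer.toByteArray()));
    return message;
}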
Thanks in advance

Azure Cognitive Search - Create Data Source via API

I have an Azure Cognitive Search service to which I am trying to add an Azure Blob Storage data source via the API. Creating it works fine via the portal.
Here is the uri:
https://xxxxxx.search.windows.net/datasources?api-version=2019-05-06
Here are the headers:
User-Agent: Fiddler
Content-Type: application/json
api-key: XXXXXXXXXXXXXXXXXXXXXXXXXXXX
Host: XXXXXXXXXX.search.windows.net
Content-Length: 412
Here is the body:
{
  "name" : "documents",
  "description" : "documents data source",
  "type" : "'azureblob",
  "credentials" : {
    "connectionString" : "XXXXXXXXX"
  },
  "container" : { "name" : "documents" }
}
When I run it, I get a 400 error code with the following message:
{"error":{"code":"","message":"Data source type ''azureblob' is not
supported"}}
I got the enum value straight from the docs here. Am I missing something?
Thanks in advance
So it was a copy/paste error from the docs: the type was "'azureblob" (with a stray leading quote) instead of "azureblob".

Lily with Morphline and HBase

I'm trying to follow a tutorial from Cloudera (http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/search_hbase_batch_indexer.html).
I have code that inserts objects in Avro format into HBase, and I want to index them in Solr, but nothing gets there.
I have been taking a look at the logs:
15/06/12 00:45:00 TRACE morphline.ExtractHBaseCellsBuilder$ExtractHBaseCells: beforeNotify: {lifecycle=[START_SESSION]}
15/06/12 00:45:00 TRACE morphline.ExtractHBaseCellsBuilder$ExtractHBaseCells: beforeProcess: {_attachment_body=[keyvalues={0Name178721/data:avroUser/1434094131495/Put/vlen=237/seqid=0}], _attachment_mimetype=[application/java-hbase-result]}
15/06/12 00:45:00 DEBUG indexer.Indexer$RowBasedIndexer: Indexer _default_ will send to Solr 0 adds and 0 deletes
15/06/12 00:45:00 TRACE morphline.ExtractHBaseCellsBuilder$ExtractHBaseCells: beforeNotify: {lifecycle=[START_SESSION]}
15/06/12 00:45:00 TRACE morphline.ExtractHBaseCellsBuilder$ExtractHBaseCells: beforeProcess: {_attachment_body=[keyvalues={1Name134339/data:avroUser/1434094131495/Put/vlen=237/seqid=0}], _attachment_mimetype=[application/java-hbase-result]}
So the cells are being read, but I don't know why nothing gets indexed in Solr.
I guess that my morphline.conf is wrong:
morphlines : [
  {
    id : morphline1
    importCommands : ["org.kitesdk.**", "org.apache.solr.**", "com.ngdata.**"]

    commands : [
      {
        extractHBaseCells {
          mappings : [
            {
              inputColumn : "data:avroUser"
              outputField : "_attachment_body"
              type : "byte[]"
              source : value
            }
          ]
        }
      }

      # for avro use with type : "byte[]" in extractHBaseCells mapping above
      { readAvroContainer {} }

      {
        extractAvroPaths {
          flatten : true
          paths : {
            name : /name
          }
        }
      }

      # note: the whole-record reference in morphlines is "@{}"
      { logTrace { format : "output record: {}", args : ["@{}"] } }
    ]
  }
]
I wasn't sure whether I needed an "_attachment_body" field in Solr, but it seems it isn't necessary, so I guess that readAvroContainer or extractAvroPaths is wrong.
I have a "name" field in Solr, and my avroUser has a "name" field as well:
{"namespace": "example.avro",
"type": "record",
"name": "User",
"fields": [
{"name": "name", "type": "string"},
{"name": "favorite_number", "type": ["int", "null"]},
{"name": "favorite_color", "type": ["string", "null"]}
]
}
I have all of this working well here.
These are the steps I followed:
1) Install hbase-solr-indexer as a service.
First of all you have to install hbase-solr-indexer (see the Cloudera guide on installing hbase-solr-indexing as a service). Add the Cloudera repos to your yum repos for this, then run:
sudo yum install hbase-solr-indexer
2) Create the morphline files.
OK, you already did that.
3) Set the replication scope for every column family and register an hbase-indexer configuration (see "Using the Lily HBase NRT Indexer Service" in the Cloudera docs; a registration sketch follows the shell commands below):
$ hbase shell
hbase shell> disable 'record'
hbase shell> alter 'record', {NAME => 'data', REPLICATION_SCOPE => 1}
hbase shell> enable 'record'
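The registration itself looks roughly like this (a sketch based on the Cloudera-documented hbase-indexer CLI; the indexer name, mapper XML path, ZooKeeper address, and Solr collection are placeholders you must adapt):
hbase-indexer add-indexer \
  --name myIndexer \
  --indexer-conf $HOME/morphline-hbase-mapper.xml \
  --connection-param solr.zk=localhost:2181/solr \
  --connection-param solr.collection=collection1 \
  --zookeeper localhost:2181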
Try to follow the other tutorials above. ;)
I had problems with an NRT solution too, but when I followed that tutorial step by step it worked.
I hope this helps someone.

Exception handling from my custom filter in google cloud endpoints

Whenever a Cloud Endpoints method throws an exception, App Engine handles it and sends a standard response like the following:
{
  "error" : {
    "message" : "java.lang.ArithmeticException: / by zero",
    "code" : 503,
    "errors" : [ {
      "domain" : "global",
      "reason" : "backendError",
      "message" : "java.lang.ArithmeticException: / by zero"
    } ]
  }
}
But I want to handle those exceptions in my custom filters and set a relevant status code on the response.
Sometimes I also want to redirect to a different URL. How can I do that with Endpoints?
What you should do is set up an error handler as explained here. That way you can catch all of the standard exceptions thrown by Endpoints, or whatever custom exceptions you throw yourself.
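If all you need is a different status code (rather than a full filter), the Endpoints Java framework also lets you subclass ServiceException and pick the HTTP code yourself; this sketch mirrors the pattern from Google's docs (PaymentRequiredException is just an example name):
import com.google.api.server.spi.ServiceException;

// Endpoints writes the status code passed to ServiceException onto the HTTP response
// instead of the generic 503 backendError shown above.
public class PaymentRequiredException extends ServiceException {
    public PaymentRequiredException(String message) {
        super(402, message);
    }
}
Throwing new PaymentRequiredException("...") from an endpoint method then yields a 402 response.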
