IBM Watson PDF document is damaged by the translation API - ibm-watson

I keep getting PDF errors when I run the translation API.
Submitting a document to the translate API endpoint succeeds, and downloading the translated document also works. But when I try to view the PDF file, it will not open; I have tried 4-5 different PDF viewers, to no avail.
Everything was working fine a couple of hours ago, but now I get errors.
I use these APIs, tested on a simple sample document:
curl -X POST \
--user "apikey:{apikey}" \
--form "file=#sample.pdf" \
--form "source=en" \
--form "target=fr" \
https://gateway-lon.watsonplatform.net/language-translator/api/v3/documents?version=2018-05-01
and the document download endpoint is as follows:
curl -X GET \
--user "apikey:{apikey}" \
--output "curriculum-fr.pdf" \
https://gateway-lon.watsonplatform.net/language-translator/api/v3/documents/75a0b3b9-f123-43cf-937f-9899871b62f3/translated_document?version=2018-05-01
The downloaded PDF is always corrupted, even though the same calls were working perfectly a few hours ago.
e.g. just a simple file like this: https://res.cloudinary.com/palsplate/image/upload/v1573769184/sample_rjlxrk.pdf
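When a translated download will not open in any viewer, it is often not a PDF at all but an error body returned by the service. A quick diagnostic sketch (not specific to Watson; the filename comes from the download example above):

```shell
# A valid PDF begins with the magic bytes "%PDF-". If the service returned
# a JSON error payload instead of a document, this check makes it obvious.
if [ "$(head -c 5 curriculum-fr.pdf 2>/dev/null)" = "%PDF-" ]; then
  echo "looks like a PDF"
else
  echo "not a PDF - first bytes follow:"
  head -c 200 curriculum-fr.pdf
fi
```

If the first bytes turn out to be `{"code": ...}` or similar, the API response itself describes the failure.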

Related

Using curl with AngularJS and bash

I’m trying to fill out web forms using curl via a bash script, on a website that uses AngularJS. I can’t find any documentation on how to do this. Is it even possible to use curl to POST data to web forms that use AngularJS? I’m not even sure I’m asking the right question, or whether there’s a better method.
In most cases AngularJS uses AJAX calls with a JSON payload instead of old-school multipart POSTs.
You can use your browser's developer tools to send a test POST and save the request information "as cURL".
Most likely you will then have a ready-to-use command to add to your bash file.
Quite often, though, such posts are tied to an authenticated user, so you will need to put an up-to-date session cookie into your request.
The first thing to check is whether your command works with cookies cleared.
If it does, your task is done.
Just call the API with something like this:
curl -X POST -H "Content-Type:application/json" \
http://some-server/handle-form \
-d '{"parameter1":["41","34"],"another_parameter":"val1"}'
But if your curl request is rejected by the server when cookies are absent, you need to set up a proper cookie before invoking the API request.
This call will authenticate you against the server and store the session cookie in a jar file:
curl -b ./jar -c ./jar -i -X POST -H "Content-Type:application/json" \
http://some-server/login \
-d '{"login":"my-user-name", "password":"my-password"}'
Such saved session cookies can then be reused for subsequent API calls:
curl -b ./jar -c ./jar -i -X POST -H "Content-Type:application/json" \
http://some-server/handle-form \
-d '{"parameter1":["41","34"],"another_parameter":"val1"}'

Retrieve latest artifact from JFROG artifactory

I have a few versions of my code in JFrog Artifactory, which are provided to clients. How do I specify a generic way to pull the latest version (artifact)? It is not Maven code. I looked this up on the JFrog page:
GET http://localhost:8081/artifactory/ivy-local/org/acme/[RELEASE]/acme-[RELEASE].jar
How do I get [RELEASE]?
The [RELEASE] in this case is the version number of the latest artifact you want to download. To get that number you can use the REST API call for Artifact Latest Version Search Based on Layout. For example:
GET /api/search/latestVersion?g=org.acme&a=artifact&repos=libs-snapshot-local
This returns a string with the latest release version you have.
You can follow the JFrog documentation.
#!/bin/bash
# Note that we don't enable the 'e' option, which would cause the script to
# immediately exit
set -uo pipefail
HOST=myartcloud.jfrog.io
USER=thisIsMyUser
PASS=thisismypass
SNAPSHOT_LAST_VERSION=$(curl --silent --show-error --fail \
-u "$USER:$PASS" \
-X POST https://$HOST/artifactory/api/search/aql \
-H "content-type: text/plain" \
-d 'items.find({ "repo": {"$eq":"my-repo-snapshot"}, "name": {"$match" : "my-project-package-name*"}})' \
| grep -E -o 'my-project-package-name-[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+' | sort -V | uniq | tail -1 \
| grep -E -o '[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+')
Explanation: the script uses curl with the options
--fail (HTTP) Fail silently (no output at all) on server errors. This is mostly done to let scripts deal better with failed attempts.
--silent Silent or quiet mode. Don't show a progress meter or error messages; makes curl mute.
--show-error When used with -s, makes curl show an error message if it fails.
Then a POST request (-X POST) is made using basic authentication to the API path. The content type of the request is -H "content-type: text/plain".
-d, --data (HTTP) Sends the specified data in a POST request to the HTTP server, in the same way a browser does when a user has filled in an HTML form.
The data sent is an AQL query that filters by repository and artifact name.
The API results are then piped through the grep command-line utility, which searches for the version pattern and returns only the version number.
Example: 0.1.1
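To see what the grep/sort part of the pipeline does in isolation, here is a minimal sketch run against two simulated AQL result lines (the package names are hypothetical; note that sort -V, version sort, is a GNU coreutils extension):

```shell
# Simulated artifact names; sort -V compares numeric runs numerically,
# so 0.1.10 correctly outranks 0.1.9 (a plain lexicographic sort would not).
printf 'my-project-package-name-0.1.9.jar\nmy-project-package-name-0.1.10.jar\n' \
  | grep -E -o 'my-project-package-name-[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+' \
  | sort -V | tail -1 \
  | grep -E -o '[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+'
# prints: 0.1.10
```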

How to automatically create a Sentry release and upload source-maps to Sentry in a react project?

I have a create-react-app project, and I'd like the deploy process to generate a Sentry release and upload the source maps to Sentry as well.
This script will create a Sentry release for version specified in the package.json file, and upload the source maps to Sentry.
It will work for any JS project, not just React.
Create a file in your project root and name it deploy.sh:
#!/bin/bash
#1) Read the version from package.json
SENTRY_TOKEN="YOUR_TOKEN"
PACKAGE_VERSION=`cat package.json \
| grep version \
| head -1 \
| awk -F: '{ print $2 }' \
| sed 's/[",]//g' \
| tr -d '[[:space:]]'`
printf "\nBuilding version $PACKAGE_VERSION...\n\n"
#2) Build for dev and cd to build directory
npm run build # or whatever your build command is
cd build/static/js # or whatever your build folder is
#3) create Sentry release
SOURCE_MAP=`find . -maxdepth 1 -mindepth 1 -name '*.map' | awk '{ gsub("./", "") ; print $0 }'`
printf "\nCreating a Sentry release for version $PACKAGE_VERSION...\n"
curl https://sentry.io/api/0/projects/:sentry_organization_slug/:sentry_project_slug/releases/ \
-X POST \
-H "Authorization: Bearer ${SENTRY_TOKEN}" \
-H 'Content-Type: application/json' \
-d "{\"version\": \"${PACKAGE_VERSION}\"}" \
#4) Upload a file for the given release
printf "\n\nUploading sourcemap file to Sentry: ${SOURCE_MAP}...\n"
curl "https://sentry.io/api/0/projects/:sentry_organization_slug/:sentry_project_slug/releases/$PACKAGE_VERSION/files/" \
-X POST \
-H "Authorization: Bearer ${SENTRY_TOKEN}" \
-F file=@"${SOURCE_MAP}" \
-F name="https://THE_URL_OF_THE_MAIN_JS_FILE/$SOURCE_MAP"
#5) IMPORTANT: Delete the sourcemaps before deploying
rm $SOURCE_MAP
#6) upload to your cloud provider
...
Replace:
:sentry_organization_slug and :sentry_project_slug with the correct values from Sentry (taken from the URL of any page inside your Sentry account website),
SENTRY_TOKEN with your token from Sentry,
THE_URL_OF_THE_MAIN_JS_FILE with the URL where your React build file is publicly accessible.
Then run the script.
Make sure you don't forget to bump the package.json version on every release.
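As a sanity check, the version-extraction pipeline from the top of deploy.sh can be tried against a minimal sample package.json (written to a temp directory here; the project name is a placeholder):

```shell
# Exercise the grep/awk/sed/tr pipeline from deploy.sh on a sample file.
TMP=$(mktemp -d)
printf '{\n  "name": "my-app",\n  "version": "1.2.3"\n}\n' > "$TMP/package.json"
PACKAGE_VERSION=$(cat "$TMP/package.json" \
  | grep version \
  | head -1 \
  | awk -F: '{ print $2 }' \
  | sed 's/[",]//g' \
  | tr -d '[[:space:]]')
echo "$PACKAGE_VERSION"   # prints: 1.2.3
```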
I had the same problem recently, and although there is no official solution for Create React App from Sentry, their tooling is great and it's quite easy to automate the release process yourself. You need to generate a release name, build the app, use this name to initialize the Sentry library, create a Sentry release, and upload the sourcemaps.
I wrote an article which explains in detail how to do it: https://medium.com/@vshab/create-react-app-and-sentry-cde1f15cbaa
Or you can go straight to an example of a configured project: https://github.com/vshab/create-react-app-and-sentry-example

Is it possible to get to a local directory in the Google Cloud Shell?

I am trying to run a TensorFlow object classifier on Google Cloud. The problem is that the training command asks for a local path to a cloud.yaml file. The code, taken from the Google Cloud documentation instructions, is the following:
# From tensorflow/models/research/
gcloud ml-engine jobs submit training object_detection_`date +%s` \
--job-dir=gs://${TRAIN_DIR} \
--packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \
--module-name object_detection.train \
--region us-central1 \
--config ${PATH_TO_LOCAL_YAML_FILE} \
-- \
--train_dir=gs://${TRAIN_DIR} \
--pipeline_config_path=gs://${PIPELINE_CONFIG_PATH}
I solved it by putting the cloud.yaml file in the home (root) directory of my Google Cloud Shell. It is possible to reference a path in that directory from the Google Cloud terminal.
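In other words, in Cloud Shell the home directory plays the role of the "local" path, so a config file uploaded there can be referenced by absolute path. A minimal sketch (the cloud.yml filename and its contents are placeholders):

```shell
# Put a (placeholder) training config in the home directory...
printf 'trainingInput:\n  scaleTier: BASIC\n' > "$HOME/cloud.yml"
# ...and reference it by absolute path, e.g. for gcloud's --config flag.
PATH_TO_LOCAL_YAML_FILE="$HOME/cloud.yml"
test -f "$PATH_TO_LOCAL_YAML_FILE" && echo "config found at $PATH_TO_LOCAL_YAML_FILE"
```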

CURL batch doesn't do anything - infinite loop

Hi guys, I'm new to working with curl; I've already played with wget but never with curl.
My problem is that I want to download a CSV from a URL with GET params, but when I make the request curl goes into an infinite loop, does nothing, and gives no error.
curl --verbose -X 136.17.0.23:83 -U myProxyUser:myProxyPassword -u myHttpUser:myHttpPassword -o C:\outfile "http://136.16.120.13/webtrac/universal_ajax.aspx?tableIdx=54&mode=export&filters=%%22groupOp%%22:%%22AND%%22,%%22rules%%22:\\[{%%22fnield%%22:%%22Year%%22,%%22op%%22:%%22bw%%22,%%22data%%22:%%222014%%22}\\]}&selection="
