Watson Visual Recognition "Cannot execute learning task. : no classifier name given" - ibm-watson

Getting cURL error: {"code":400,"error":"Cannot execute learning task. : no classifier name given"}
Getting the same result whether I use the beta GUI tool or a cURL entry:
curl -X POST \
-F "Airplanes_positive_examples=#Airplanes.zip" \
-F "Biking_positive_examples=#Biking.zip" \
-F "GolfPuttingGreens_positive_examples=#GolfPuttingGreens.zip" \
-F "name=AllJpegClassifier" \
"https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers?api_key={my-api-key}&version=2016-05-20"
I have read all previous SO questions for this problem and made sure of the following:
Classifier name is alphanumeric only
Zip filenames are alphanumeric only
Image filenames are alphanumeric with _ - . only
Zip files contain between 27 and 49 images each
All image files are the same format (JPEG)
All images conform to pixel size and file size limits

Your command looks fine, and when I try it with my API key and my own zip files, it works. So I suspect there is something in your zip files that the system is having trouble with. If you could provide the "owner" guid field (also called your instance-id), I could look into our logs to try to diagnose it. It is displayed when you do a GET /classifiers/{cid} on an existing classifier. Alternatively, you could let me know one of your other existing classifier_ids.
Another way would be to open a Bluemix support ticket and include copies of the zip files you're using in this example, so that we can reproduce the problem.
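For reference, a minimal sketch of that GET request, using the same placeholder style as the question ({classifier_id} stands in for one of your existing classifier ids); the owner field appears in the JSON response:
curl -X GET "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers/{classifier_id}?api_key={my-api-key}&version=2016-05-20"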

Related

Batch convert dae to scn

Is there a way to batch convert Collada dae files to Scenekit scn files?
My project uses approximately 50 models created in SketchUp that are updated regularly. These are exported to DAE but also need to be converted to SCN files for use in Xcode. I know it can be done manually via Xcode and "Convert to SceneKit scene file format (scn)", but this takes too much manual labour.
Based on https://the-nerd.be/2014/11/07/dynamically-load-collada-files-in-scenekit-at-runtime/ I figured out that scntool is able to do the conversion from the command line, and wrote the following script:
# Convert every .dae file under ./dae and write the result into ./scn
find ./dae -name "*.dae" | while read -r f ; do
    inputfilename=$(basename "$f")
    echo "$inputfilename"
    ./scntool --convert "$f" --format scn --output "./scn/$inputfilename"
done
# The converted files keep the .dae name, so rename them to .scn
for file in ./scn/*.dae; do
    mv "$file" "./scn/$(basename "$file" .dae).scn"
done
@HixField has a good shell script for invoking scntool. Another way to do that is to leverage Xcode's build system, which does the same thing for any .dae files you put in a project's scnassets folder. Even if you're not bundling those files in your app, you can create a dummy Xcode target or project that contains all the assets you want to convert, and it'll convert them all whenever you build the target. (Which you could then integrate into a CI system or other automation.)
I agree with @HixField about everything, except that you need to add one more option to scntool to get your materials correct without having to re-add them all manually:
scntool --convert INPUT.dae --format scn --output OUT.scn --asset-catalog-path .
The dot at the end of the command line is very important: it sets the resources to the same location.
If you don't set --asset-catalog-path . you will have no materials.
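Putting the two answers together, here is a sketch of the conversion loop with the materials option added. It assumes scntool writes to exactly the output filename it is given (which the rename step in the first script suggests), so the .scn extension can be set directly:
# Convert every .dae under ./dae into ./scn, keeping material references
# by pointing --asset-catalog-path at the current directory
find ./dae -name "*.dae" | while read -r f ; do
    base=$(basename "$f" .dae)
    ./scntool --convert "$f" --format scn --output "./scn/$base.scn" --asset-catalog-path .
done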

curl output file replacing path separator with underscore when writing the file

I'm trying to create a fairly simple batch file to download a file from our ftp site and store it in a specific directory. I would like to be able to put the batch file in the path so I can call it from anywhere. Currently I have a line like the following:
curl -v -u %FTP_USER%:%FTP_PASS% -Q "TYPE I" -o %OUTPUT_PATH% "ftp://%FTP_HOST%/%JAR_FILE_NAME%"
What I've discovered is that no matter the value of the output path, the file is always written to the current directory, and any path separators are converted to underscores in the file name. This happens whether I use '\' or '/', and if I try to escape with double slashes I just end up with two underscores. Quotes around things don't seem to help either.
My question is: does the -o option allow outputting to a folder other than the current working directory? I guess the next step in the script could be "move the file to its destination", but that seems really kludgy.
This sounds exactly like a regression (reported here) that we unfortunately introduced in curl 7.47.0. We hope to release an updated, fixed version soon, and in the meantime you can consider downgrading to an earlier version to work around this annoying issue.
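In the meantime, here is a sketch of the "download, then move" fallback the question mentions, reusing the same batch variables (move is the standard cmd command):
rem Download into the current directory (no path separators in -o), then move it to the target
curl -v -u %FTP_USER%:%FTP_PASS% -Q "TYPE I" -o "%JAR_FILE_NAME%" "ftp://%FTP_HOST%/%JAR_FILE_NAME%"
move /Y "%JAR_FILE_NAME%" "%OUTPUT_PATH%"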

Batch Script - Find and replace text in multiple files in a directory

I am new to writing batch scripts. I need a simple batch file that searches for a string (e.g. FOO) and replaces it with another (e.g. BAR) in all the text files within a folder and its sub-folders.
I need to give this batch file to the user, so it is not possible to ask them to install anything else. Can someone please help me with this?
I've used this tool extensively to accomplish similar tasks: http://fart-it.sourceforge.net/
(Despite its name, it is a very handy tool).
For example, this command searches all TXT files in "C:\Dir\To\Files" (and its subfolders) and replaces all occurrences of FOO with BAR. The -i switch performs a case-insensitive search.
FART -i -r "C:\Dir\To\Files\*.txt" FOO BAR
I believe your question has already been answered. At least for replacing text.
How can you find and replace text in a file using the Windows command-line environment?
It would also help to add more information about your problem, such as whether the text files are created by a script.
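If nothing can be installed, here is a minimal sketch of an install-free batch file, assuming PowerShell 3.0 or later is present (it ships with Windows 8 and later); the folder path and the FOO/BAR strings are the ones from the question:
@echo off
rem Replace FOO with BAR in every .txt file under the folder, including sub-folders
for /r "C:\Dir\To\Files" %%F in (*.txt) do (
    powershell -NoProfile -Command "(Get-Content -Raw -LiteralPath '%%F') -replace 'FOO','BAR' | Set-Content -LiteralPath '%%F'"
)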

Arelle: locating the ratio extraction command that I cannot find in the docs (~2 pages)

The basic command when working with Arelle's command-line operation is:
python arelleCmdLine.py arguments
provided we cd into the folder where Arelle is installed.
I have devoted a lot of time to this, but I cannot find whether there is a command in the documentation (about ~2 pages) that can output ratios (e.g. Current Ratio) or metrics (e.g. Revenue), instead of having to download all the data in columns and filter it. I must admit that I cannot understand some commands in the documentation.
What I am doing to download data is:
python arelleCmdLine.py -f http://www.sec.gov/Archives/edgar/data/1009672/000119312514065056/crr-20131231.xml -v --facts D:\Save_in_File.html --factListCols "Label Name contextRef unitRef Dec Prec Lang Value EntityScheme EntityIdentifier Period Dimensions"
-f is the option that pulls the data; it is followed by the web location of the filing
-v is the option to validate the data that is pulled
--facts saves the data to an HTML file in the designated directory
--factListCols lists the columns I choose to include (the command above requests all the available columns).
There are essentially zero tutorials on this.
Arelle only runs on Python 3 and can be downloaded without any hassle by following these quick and simple steps.
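If no ratio-specific flag exists, one workaround (a sketch that uses only the flags already shown in the question; the output filename and the smaller column subset are just examples) is to save a narrower facts listing and compute the ratio afterwards, e.g. Current Ratio = current assets / current liabilities from the extracted values:
python arelleCmdLine.py -f http://www.sec.gov/Archives/edgar/data/1009672/000119312514065056/crr-20131231.xml -v --facts D:\CurrentRatioFacts.html --factListCols "Label Name Period Value"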

IBM i PASE tar - Excluding files or directories

I want to exclude some directories from an archive using the PASE tar command on an IBM i, but the [-X Exclude File] option doesn't seem to work for me.
I tried using an exclude file that just contained a file name (/home/JSMITH/data/sub2/file2.txt) and then one that just contained a pattern (*.txt), and neither archive operation omitted anything.
Given the following directory structure:
/home/JSMITH/data
/home/JSMITH/data/sub1
/home/JSMITH/data/sub1/file1.txt
/home/JSMITH/data/sub2
/home/JSMITH/data/sub2/file2.txt
/home/JSMITH/data/sub3
/home/JSMITH/data/sub3/file3.txt
and the following command:
/qopensys/usr/bin/tar -cvf /home/JSMITH/test.tar -X /home/JSMITH/excludes.txt /home/JSMITH/data
The entire /home/JSMITH/data structure gets included in the resulting archive.
I have tried using the /home/JSMITH/excludes.txt file with either of these contents:
/home/JSMITH/data/sub2/file2.txt
or
*.txt
How does one exclude files/directories/patterns from the IBMi PASE tar command?
You need the full path in the exclude file.
I created mine via ls /home/JSMITH/data/*.txt > /home/JSMITH/excludes.txt
If you're doing it by hand, make certain you haven't got any trailing whitespace.
Also, I used Notepad++ when I created mine by hand. I found that the green-screen EDTF editor created an EBCDIC file with CRLF line endings, and that didn't exclude for me.
IBM i 7.1
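Putting it together, here is a sketch of the full sequence for the directory layout in the question (the wildcard is adjusted to reach the .txt files in the sub-directories, and the exclude list is generated from PASE so it is an ASCII stream file with full paths):
# Build the exclude list with full paths, one per line, no trailing whitespace
ls /home/JSMITH/data/*/*.txt > /home/JSMITH/excludes.txt
# Archive the directory, omitting everything listed in the exclude file
/qopensys/usr/bin/tar -cvf /home/JSMITH/test.tar -X /home/JSMITH/excludes.txt /home/JSMITH/data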

Resources