ODK Briefcase command-line form export

I am trying to automatically back up submitted forms by exporting them to CSV.
I am using this command line:
java -jar ./ODK_Briefcase_v1.4.5_Production.jar --form_id NameOfTheForm \
  --odk_username USER --odk_password PASSWORD \
  --export_directory /var/www/data --storage_directory /var/www/data \
  --export_filename A_Chaufferie.csv --overwrite_csv_export \
  --export_start_date 2014/02/05 --export_end_date 2016/02/06
I get the error GRAVE: Form not found.
I have no idea what the purpose of --storage_directory is, and I can't find any form submissions on my server (I tried with the Linux find command).
Do you know what I am missing?
This is the --help output:
java -jar briefcase.jar
 -ed,--export_directory </path/to/dir>      Directory to export the CSV and
                                            media files into (relative path
                                            unless it begins with / or C:\)
 -em,--exclude_media_export                 Flag to exclude media on export
 -end,--export_end_date <yyyy/MM/dd>        Include submission dates before
                                            (exclusive) this date in export
                                            to CSV
 -f,--export_filename <name.csv>            File name for exported CSV
 -h,--help                                  Print help information (this
                                            screen)
 -id,--form_id <form_id>                    Form ID of form to download and
                                            export
 -oc,--overwrite_csv_export                 Flag to overwrite CSV on export
 -od,--odk_directory </path/to/dir>         /odk directory from ODK Collect
                                            (relative path unless it begins
                                            with / or C:\)
 -p,--odk_password <password>               ODK password
 -pf,--pem_file </path/to/file.pem>         PEM private key file (relative
                                            path unless it begins with / or
                                            C:\)
 -sd,--storage_directory </path/to/dir>     Directory to create or find ODK
                                            Briefcase Storage directory
                                            (relative path unless it begins
                                            with / or C:\)
 -start,--export_start_date <yyyy/MM/dd>    Include submission dates after
                                            (inclusive) this date in export
                                            to CSV
 -u,--odk_username <username>               ODK username
 -url,--aggregate_url <url>                 ODK Aggregate URL (must start
                                            with http:// or https://)
 -v,--version                               Print version information

I didn't know I had to pass --aggregate_url even though Briefcase and ODK Aggregate are on the same server. Don't omit the http:// prefix or it won't work:
java -jar ./ODK_Briefcase_v1.4.5_Production.jar --form_id NameOfTheForm \
  --odk_username USER --odk_password PASSWORD \
  --export_directory /var/www/data --storage_directory /var/www/briefcase \
  --export_filename A_Chaufferie.csv --overwrite_csv_export \
  --export_start_date 2014/02/05 --export_end_date 2016/02/06 \
  --aggregate_url http://your.odk-aggregate.site
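
Since the goal is an automated backup, the working command can be wrapped in a small script and scheduled with cron. A minimal sketch reusing the parameters above; the script location, jar path, and log file are placeholders, and the date filters are dropped so new submissions keep being exported:

#!/bin/sh
# /usr/local/bin/briefcase-export.sh -- nightly Briefcase CSV export (sketch)
java -jar /opt/briefcase/ODK_Briefcase_v1.4.5_Production.jar \
  --form_id NameOfTheForm --odk_username USER --odk_password PASSWORD \
  --export_directory /var/www/data --storage_directory /var/www/briefcase \
  --export_filename A_Chaufferie.csv --overwrite_csv_export \
  --aggregate_url http://your.odk-aggregate.site

Crontab entry to run it nightly at 02:00:

0 2 * * * /usr/local/bin/briefcase-export.sh >> /var/log/briefcase-export.log 2>&1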

Related

Image Extractor by AI Habitat produces a configuration error when importing Matterport dataset

I need help understanding the error message, which seems to say the file name is changed to .json because the configuration fails. The full error message is long, so I pasted the part that is repeated throughout it:
/Users/kyra/Documents/GitHub/habitat-sim/matterport/scans/house1/8194nk5LbLH 13/poisson_meshes/8194nk5LbLH_10.stage_config.json
I0412 19:04:17.735939 42397184 AttributesManagerBase.h:296] AttributesManager::createFromJsonOrDefaultInternal (Stage) : Proposing JSON name : /Users/kyra/Documents/GitHub/habitat-sim/matterport/scans/house1/8194nk5LbLH 13/poisson_meshes/8194nk5LbLH_10.stage_config.json from original name : /Users/kyra/Documents/GitHub/habitat-sim/matterport/scans/house1/8194nk5LbLH 13/poisson_meshes/8194nk5LbLH_10.ply | This file does not exist.
I0412 19:04:17.736085 42397184 AbstractObjectAttributesManagerBase.h:182] AbstractObjectAttributesManager::createObject (Stage) : Done making attributes with handle : /Users/kyra/Documents/GitHub/habitat-sim/matterport/scans/house1/8194nk5LbLH 13/poisson_meshes/8194nk5LbLH_10.ply
I0412 19:04:17.736093 42397184 AbstractObjectAttributesManagerBase.h:189] File (/Users/kyra/Documents/GitHub/habitat-sim/matterport/scans/house1/8194nk5LbLH 13/poisson_meshes/8194nk5LbLH_10.ply) exists but is not a recognized config filename extension, so new default Stage attributes created and registered.
I0412 19:04:17.736124 42397184 SceneDatasetAttributes.cpp:46]
What I did: I ran the image extractor after activating the Conda env. I modified the image extractor to change the file path to point to a .ply file in the Matterport dataset.
Setup:
1) Facebook's AI Habitat-Sim built from source,
2) MacBook Air M1,
3) Conda environment with the dependencies (using pip install -r requirements.txt), but habitat-sim is not installed by Conda,
4) Matterport3D dataset (downloaded one house).
Thank you.

Rundeck - Failed to read SSH Private Key stored at path - Path does not exist

I am running the Rundeck war file directly:
java -jar rundeck-3.0.17-20190311.war
I get this error message when I trigger a build:
Failed to read SSH Private key stored at path: keys/rundeck.pem:
org.rundeck.storage.api.StorageException: Path does not exist: keys/rundeck.pem
Failed: ConfigurationFailure: Failed to read SSH Private key stored at path: keys/rundeck.pem
It makes sense that the reference in the Default Node Executor is invalid and that Rundeck cannot find the .pem file.
I've tried:
referencing the full working directory (/home/user/rundeck/keys/rundeck.pem), but it wants the location to start with keys/;
referencing it by its relative path (keys/rundeck.pem);
copying the keys directory to /home/user/;
and, in desperation, running chmod 700 on the pem file.
Most of the questions and examples I found were for older versions of Rundeck.
I'd like to know where the .pem file must be configured and how it should be referenced. Any other information that could help me configure the SSH keys will be appreciated.
You must add the key using the GUI and use the path that you defined in your resources.xml.
To add your key, you can follow this video. Although it is based on Rundeck 2.x, it is still valid for Rundeck 3.x:
https://www.youtube.com/watch?v=qOA-kWse22g
To generate your resources.xml file, select your new project and go to Project Settings > Edit Nodes > click the "Configure Nodes" button (top right) > click the "Add Sources +" button > select the "+ File" option > in the "Format" field select "resourcexml" and fill in the "File Path" field (put the file name at the end, usually "resources.xml") > then select the "Generate", "Include Server Node" and "Writeable" checkboxes and click the "Save" button. A sketch of the resulting node entry follows.
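
For reference, a node entry in resources.xml that points at that Key Storage path looks roughly like this; the node name, hostname, and username are placeholder values:

<project>
  <node name="mynode"
        hostname="192.168.1.50"
        username="rundeck"
        ssh-key-storage-path="keys/rundeck.pem" />
</project>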

Solr extract text from image and imagePdf files

I am working with Solr 6.5.1 and I want to extract text from image files and image PDF files. For this I installed Tesseract OCR and configured it with Solr in two ways:
1. I set the environment variable TESSDATA_PREFIX = C:\Program Files (x86)\Tesseract-OCR and used the /update/extract request handler to index an image with content.
2. I modified the tesseractOCRConfig.properties file in the tika-parsers-1.13 jar file in the Solr lib to tesseractPath=C:/Program Files (x86)/Tesseract-OCR and used the /update/extract request handler to index the image/image PDF with content.
In both cases I'm not getting any content; the response gives only attr_x_parsed_by=org.apache.tika.parser.ocr.TesseractOCRParser.
Is there any other configuration I need to set for Solr or Tesseract OCR to extract content from image/image PDF files?
Thanks in advance.
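
For testing the handler directly, here is a minimal sketch of an /update/extract request against a local core; the core name, document id, and file name are placeholders:

# send an image to the extract handler; OCR'd text should land in the indexed content
curl "http://localhost:8983/solr/mycore/update/extract?literal.id=img1&uprefix=attr_&commit=true" \
  -F "myfile=@scanned-page.png"

If the Tesseract integration is working, the OCR'd text should appear in the indexed content rather than only the attr_x_parsed_by field.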

Cakephp 3 - Plugin - translation file - not working

Using Cake version 3.4.5:
1) I've written a plugin:
/plugins/Accounting/
2) Then, to create the pot file from the view files, I run:
bin/cake i18n extract --plugin Accounting
3) This generates /plugins/Accounting/src/Locale/default.pot
But the translated text does not appear.
My locale is es_AR, and I've tried copying the file as:
/plugins/Accounting/src/Locale/accounting.pot
or
/plugins/Accounting/src/Locale/es_AR/default.pot
or
/plugins/Accounting/src/Locale/es_AR/accounting.pot
I also tried saving the files as accounting.po, but nothing happens.
Still no text from the plugin views is translated (it does work for the app's views).
I've found it!
The problem was the file/directory permissions.
By default, cake i18n extract --plugin MyPlugin does the following:
creates the src/Locale/ directory inside the plugin structure
creates the template translation file default.pot instead of my_plugin.pot
creates all of these with mode 750, owned by the Linux user currently logged in (not www-data)
So in order to make it work:
change the permissions of the Locale structure to 755
rename default.pot to my_plugin.po
use __d('my_plugin', 'Text to be translated')
A shell sketch of these steps follows.
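
Concretely, for the Accounting plugin above, the fix might look like this, assuming the es_AR locale directory tried earlier and a webserver running as www-data (both assumptions):

# make the Locale tree readable by the webserver (e.g. www-data)
chmod -R 755 plugins/Accounting/src/Locale
# place a .po file named after the plugin's translation domain in the locale directory
mkdir -p plugins/Accounting/src/Locale/es_AR
mv plugins/Accounting/src/Locale/default.pot plugins/Accounting/src/Locale/es_AR/accounting.po

After that, __d('accounting', 'Text to be translated') in the plugin views should pick up the translations.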

Loading .owl files in MarkLogic

Is it possible to load .owl files using mlcp?
I tried with -input_file_type rdf, but it gives the error below:
bin/mlcp.sh import -host localhost -port 9010 -username uname \
  -password pwd -mode local -input_file_path /home/user/semantics/data \
  -input_file_type rdf -input_file_pattern '.*.owl'
FATAL contentpump.RDFReader: dbpedia1.owl: Element or attribute do not match QName production: QName::=(NCName':')?NCName.
FATAL contentpump.RDFReader: dbpedia2.owl: Element or attribute do not match QName production: QName::=(NCName':')?NCName.
What am I missing here?
MarkLogic documentation lists the supported triples file formats:
.rdf
.ttl
.json
.n3
.nt
.nq
.trig
Maybe you could convert your .owl file to one of those formats, at which point you could use MLCP to load it. I tried plugging your example into a format converter, but that didn't work; perhaps that's because we only have a snippet here.
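
One possible route (an assumption, not verified against these files): if the .owl files contain RDF/XML, Apache Jena's riot command-line tool can rewrite them as N-Triples, which is on the supported list above:

# convert an RDF/XML-serialized OWL file to N-Triples with Apache Jena
riot --output=ntriples dbpedia1.owl > dbpedia1.nt

The resulting .nt file could then be loaded with mlcp using -input_file_type rdf.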
MarkLogic should be able to process .owl files, but I think Joshua is right that MarkLogic is expecting .owl files to contain RDF/XML. You can also see that from the list of Mimetypes in the Admin interface. It lists the .owl extension as 'application/owl+xml', and RDF/XML seems to be the more common serialization of OWL.
It might just be that if you rename the file to .nt, it works.
HTH!
