On an SFTP location, there is a day-wise directory structure as below:
2014/06/29/
2014/06/30/
2014/07/01/
2014/07/02/
The following route works fine: every 30 seconds the consumer checks the SFTP location and downloads .txt files.
from("sftp://user#host?antInclude=*/*/*/*.txt"
+ "&password=xxx" + "&recursive=true" + "&idempotent=true"
+ "&scheduler=spring&scheduler.cron=0/30+*+*+*+*+?")
.to("file:/home/user/data");
But the above route will scan ALL the directories in the SFTP location, and it MAY be a performance issue. So I need to scan only today's and the previous day's directories, like below:
from("sftp://user#host?antInclude=2014/07/03/*.txt,2014/07/02/*.txt"
+ "&password=xxx" + "&recursive=true" + "&idempotent=true"
+ "&scheduler=spring&scheduler.cron=0/30+*+*+*+*+?")
.to("file:/home/user/data");
But I need to use a dynamic directory pattern for the antInclude option. I have tried several approaches, but without success. Can you please give me an idea from your experience?
The source endpoint is not dynamic. If you need to change it, proceed as described here. So one possible solution could be to use a scheduler to update the route every day. Not very elegant, I know.
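For instance, a rough sketch of that daily update, assuming Camel 2.x and an arbitrary route id of "sftpRoute" (both are my assumptions, not from the original post): a timer route removes the SFTP consumer once a day and re-adds it with antInclude patterns computed for today and yesterday.
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;

public class DailySftpRebuild extends RouteBuilder {

    // Matches the day-wise directory layout, e.g. 2014/07/03
    private static final DateTimeFormatter DIR_FORMAT =
            DateTimeFormatter.ofPattern("yyyy/MM/dd");

    @Override
    public void configure() throws Exception {
        // Fires shortly after startup and then once every 24 hours
        from("timer://rebuildSftp?period=86400000")
            .process(exchange -> rebuild(exchange.getContext()));
    }

    private void rebuild(CamelContext context) throws Exception {
        LocalDate today = LocalDate.now();
        String include = today.format(DIR_FORMAT) + "/*.txt,"
                + today.minusDays(1).format(DIR_FORMAT) + "/*.txt";

        // Drop yesterday's version of the route, if it exists
        if (context.getRoute("sftpRoute") != null) {
            context.stopRoute("sftpRoute");
            context.removeRoute("sftpRoute");
        }

        // Re-create the consumer with the freshly computed patterns
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("sftp://user@host?antInclude=" + include
                        + "&password=xxx&recursive=true&idempotent=true"
                        + "&scheduler=spring&scheduler.cron=0/30+*+*+*+*+?")
                    .routeId("sftpRoute")
                    .to("file:/home/user/data");
            }
        });
    }
}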
I'm starting to test an Angular2 project and I want to retrieve the URL of the browser in some tests to check that redirections are made correctly.
The problem is that the only method I can see in the API to retrieve the current URL is webdriver.WebDriver.getCurrentUrl, but it returns the absolute path. This can be a pain if for some reason you change the testing port or hostname in the future. Is there any way to retrieve only the relative URL like you can do with browser.get? Thanks.
To keep things simple: to handle future changes to host and port, you can just do this:
expect(browser.getCurrentUrl()).toContain("/some/path/resource");
I'm not sure that there is a built-in method for this.
Suppose there is a URL, for example:
http://example.com/some/path/resource
Just as a last resort, you can use the code below
browser.executeScript("return '/' + window.location.href.split('/').slice(3).join('/');")
to get just
/some/path/resource
or
browser.executeScript("return window.location.href.split('/')[window.location.href.split('/').length -1];")
to get
resource
I have the following folder hierarchy:
D
    D1
        D1doc1.txt
        D1doc2.otherext
        Readme.txt
    D2
        D2doc1.txt
        othertext.txt
Using the Camel file component, I would like to send the directory D1 and its content to another endpoint. So far I have managed to send files independently, or the whole content of a directory, but I don't know how to send the directory D1 itself together with its content (not just the content), keeping the previous structure.
To send all the content of D1, I am writing:
from("file://D/D1/?noop=true&recursive=true").to(.....)
and it sends everything inside D1 correctly. Now, to send D1 as a full directory with its contents, I tried:
from("file://D/?fileName=D1&noop=true&recursive=true").to(.....)
which of course does not work, as the Camel file component is apparently designed to work only on files and not on directories, as I saw in this link:
http://grokbase.com/t/camel/users/1485bjq5zr/polling-a-directory-for-inner-directories
However, it looks annoying and strange to me that I have to hack around this by changing the previous hierarchy into:
D
    D1
        D1
            D1doc1.txt
            D1doc2.otherext
            Readme.txt
    D2
        D2
            D2doc1.txt
            othertext.txt
so that when I use:
from("file://D/D1/?noop=true&recursive=true").to(.....)
it finally does what I want, sending the directory as well.
Is there really no cleaner way to do this? If not, what is the reason behind it?
Use recursive to tell Camel to traverse sub-directories. And you can use the minDepth/maxDepth options to control from where, and how far down, it goes.
This is the clean solution, using the correct options for what they are intended to do.
For example, on Unix the find command also has min/max depth options, and it is a similar concept in the Camel file component.
More details at: http://camel.apache.org/file2
And if you do not want the directory structure rebuilt on the 'other side', you can use the flatten option.
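For instance, a minimal sketch of those options on the hierarchy above (the target endpoint is a placeholder of mine):
from("file://D?noop=true&recursive=true&minDepth=1&maxDepth=2")
    // flatten=true on the producer side would instead write every file
    // directly into the target directory, discarding the relative paths
    .to("file://target?flatten=false");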
It's a kind of old thread, but I'm sure this will help someone:
from("file:D:\\INPUTFOLDER?noop=false&recursive=true&maxDepth=NUMBEROFSUBDIR").process(new MyProcessor()).to("file:D:\\OUTPUTFOLDER");
Here NUMBEROFSUBDIR will be one more than the main directory (INPUTFOLDER), and it won't copy a folder unless a file is present in it. The same approach works for FTP as well.
I have values in a file:
en-us, de-de, es-es, cs-cz, fr-fr, it-it, ja-jp, ko-kr, pl-pl, pt-br, ru-ru, tr-tr, zh-cn, zh-tw.
How can I get these values, one per request?
I want to create a query that takes each of these values in turn and writes it to a variable.
This scenario can be achieved using the JMeter component "CSV Data Set Config".
Please refer to the link below:
JMeter CSV Data Set Config
Hope this helps.
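For illustration, a minimal CSV Data Set Config could look like this, assuming the values are put one per line in the file (locales.csv is my placeholder name):
Filename: locales.csv
Variable Names: locale
Delimiter: ,
Recycle on EOF?: True
Stop thread on EOF?: False
Each sampler iteration then reads the next line, and the value can be referenced in the request as ${locale}.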
Can't comment, not enough karma. In response to the questions above: your path is probably wrong. If you use a Debug Sampler to show what path the CSV reader is taking, I think you will find it is looking at something like C:/Jmeter/C:/path/to/CSV/file.
Another option for accomplishing this is to use the inline __CSVRead function. In your HTTP request, use code like this:
${__CSVRead(etc/filters.csv,0)}${__CSVRead(etc/filters.csv,next)}
etc/filters.csv is the RELATIVE path from JMeter's active running directory. In my case this evaluates to
C:/git/JmeterScripts/etc/filters.csv
In either case, I am sure your problem is that JMeter's active running directory is not what you think it is. I have had this problem several times, with the same error.
I run IMDbAPI.com and have been using Bing's Search API for finding IMDb IDs from title searches. Bing is currently changing their API over to the Azure Marketplace (August 1st), and it is no longer available for free. I started testing my API using Freebase to resolve these IDs and hit their 100k limit in the first 8 hours (my site currently gets about 3 million requests a day, but only 200-300k are title searches).
This is exactly why they offer the data dump files. I downloaded most of the files in the Film folder but cannot find where they are storing the "/authority/imdb/title" IMDb ID namespace data.
https://www.googleapis.com/freebase/v1/mqlread?query={"type":"/film/film","name":"True%20Grit","imdb_id":null,"initial_release_date>=":"1969-01","limit":1}
This is how I'm currently accessing the ID.
Does anyone know which file contains this information, and how to link back to it from the film title/ID?
That imdb_id property is backed by a key in the /authority/imdb/title namespace, so you're looking for the line:
/m/015gxt /type/object/key /authority/imdb/title tt0065126
in the file http://download.freebase.com/datadumps/latest/freebase-datadump-quadruples.tsv.bz2
That's a 4 GB file, so be prepared to wait a little while for the download. Note that everything is keyed by MID, so you'll need to figure that out first if you don't have it in your database.
The equivalent query using MQL instead of the data dumps is https://www.googleapis.com/freebase/v1/mqlread?query=%7B%22type%22%3a%22/film/film%22,%22name%22%3a%22True%20Grit%22,%22imdb_id%22%3anull,%22initial_release_date%3E=%22%3a%221969-01%22,%22mid%22:null,%22key%22:[{%22namespace%22:%22/authority/imdb/title%22}],%22limit%22:1%7D&indent=1
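Decoded, that URL-encoded query reads:
{
  "type": "/film/film",
  "name": "True Grit",
  "imdb_id": null,
  "initial_release_date>=": "1969-01",
  "mid": null,
  "key": [{"namespace": "/authority/imdb/title"}],
  "limit": 1
}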
EDIT: p.s. I'm pretty sure the files in the Browse directory are going away, so I wouldn't depend on them even if you could find the info there.
The previous answer works fine; it's just that a snappier version of such a query could be:
query = [{
'type': '/film/film',
'name': 'prometheus',
'imdb_id': null,
...
}];
The rest of the MQL request isn't mentioned, as it doesn't differ from the aforementioned. Hope that helps.
I am using the Drupal 7 Migrate module to create a series of nodes from JPG and EPS files. I can get them to import just fine. But I notice that when I am done importing them, if I look at the nodes it creates, none of the attached filefield and thumbnail files contain filename information.
Upon inspecting the file_managed table, I see that both the filename and filemime fields are empty ONLY for the files that I attached via the Migrate module. This also creates an issue with downloading the files.
Now I think the problem has to do with the fact that I am using "file_link" instead of "file_copy" as the file operation I specify. The problem is that I am importing around 2 TB (that's terabytes) of image files. We had to put in a special request with Rackspace just to get access to that much disk space on our server. So I can't go around copying files from one directory to the next because of space issues. So "file_link" seems like the obvious choice.
Now you probably want to see how I am doing this exactly, so here is the code snippet:
$jpg_arguments = MigrateFileFieldHandler::arguments(NULL,
    'file_link', FILE_EXISTS_RENAME, 'en',
    array('source_field' => 'jpg_name'),
    array('source_field' => 'jpg_filename'),
    array('source_field' => 'jpg_filename'));
$this->addFieldMapping('field_image', 'jpg_uri')
->arguments($jpg_arguments);
As you can see I am specifying no base path (just like the beer.inc example file does). I have set file_link, the language, and the source fields for the description, title, and alt.
It is able to generate thumbnails from the JPGs, but those columns of data are still missing in the db table. I traced through the functions as best I could, but I don't see what is causing this. I tried running the URI in the table through the functions that generate the filename and the filemime, and they output just fine. It is as if something is removing just those segments of data.
Does anyone have any idea what this could be? I am using the Drupal 7 Migrate module version 2.2. It is running on Drupal 7.8.
Thanks,
Patrick
OK, so I have found the answer to yet another question of mine. This is actually an issue with the Migrate module itself. The issue is documented here. I will be rescinding this bounty (as soon as I figure out how).