Talend tGSCopy selective file copy to another bucket - google-app-engine

Using Talend, I am trying to move App Engine datastore backup files to a new folder, specifically skipping the file whose name ends with ".backup_info".
I have to load only files 2 and 3, skipping file 1.
File 1: ahFzfnZpcmdpbi1yZWQtdGVzdHJACxIcX0FFX0RhdGFzdG9yZUFkbWluX09wZXJhdGlvbhiRyH8MCxIWX0FFX0JhY2t1cF9JbmZvcm1hdGlvbhgBDA.backup_info
File 2: ahFzfnZpcmdpbi1yZWQtdGVzdHJACxIcX0FFX0RhdGFzdG9yZUFkbWluX09wZXJhdGlvbhiRyH8MCxIWX0FFX0JhY2t1cF9JbmZvcm1hdGlvbhgBDA.MasterContentType.backup_info
File 3: ahFzfnZpcmdpbi1yZWQtdGVzdHJBCxIcX0FFX0RhdGFzdG9yZUFkbWluX09wZXJhdGlvbhjSz7UDDAsSFl9BRV9CYWNrdXBfSW5mb3JtYXRpb24YAQw.Timeline.backup_info
There are around 100 objects. How do I key in the "Source Object Key" of tGSCopy in the component configuration to select a particular file? This seems challenging; please assist.
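For illustration only, here is a minimal sketch of the selection logic outside Talend, using the google-cloud-storage Python client. The bucket names and the exact filename test are assumptions, not from the question; adjust the predicate to whatever actually distinguishes the files you want to keep.

# Sketch only: select and copy GCS objects by name pattern.
# Bucket names and the filename test below are assumptions for illustration.
from google.cloud import storage

def copy_selected_backups(src_bucket_name, dst_bucket_name, prefix=""):
    client = storage.Client()
    src = client.bucket(src_bucket_name)
    dst = client.bucket(dst_bucket_name)
    for blob in client.list_blobs(src_bucket_name, prefix=prefix):
        name = blob.name.rsplit("/", 1)[-1]
        # Keep per-kind files such as "<hash>.MasterContentType.backup_info"
        # and skip the top-level "<hash>.backup_info" (no kind segment).
        if name.endswith(".backup_info") and name.count(".") >= 2:
            src.copy_blob(blob, dst, new_name=blob.name)

copy_selected_backups("source-bucket", "target-bucket")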

Related

How to download more than 100MB of data into CSV from Snowflake's database table

Is there a way to download more than 100MB of data from Snowflake into Excel or CSV?
I'm able to download up to 100MB through the UI by clicking the 'download or view results' button.
You'll need to consider using what we call "unload", a.k.a. COPY INTO LOCATION
which is documented here:
https://docs.snowflake.net/manuals/sql-reference/sql/copy-into-location.html
Other options might be to use a different type of client (python script or similar).
I hope this helps...Rich
.....EDITS AS FOLLOWS....
Using the unload (COPY INTO LOCATION) isn't quite as overwhelming as it may appear to be. If you can use the SnowSQL client (instead of the web UI), you can "grab" the files from what we call an "INTERNAL STAGE" fairly easily. Example as follows.
-- create a session-scoped stage (discarded when the session closes)
CREATE TEMPORARY STAGE my_temp_stage;

-- unload the query results into gzip-compressed CSV files in that stage
COPY INTO @my_temp_stage/output_filex
FROM (SELECT * FROM databaseNameHere.SchemaNameHere.tableNameHere)
FILE_FORMAT = (
    TYPE='CSV'
    COMPRESSION=GZIP
    FIELD_DELIMITER=','
    ESCAPE=NONE
    ESCAPE_UNENCLOSED_FIELD=NONE
    DATE_FORMAT='AUTO'
    TIME_FORMAT='AUTO'
    TIMESTAMP_FORMAT='AUTO'
    BINARY_FORMAT='UTF-8'
    FIELD_OPTIONALLY_ENCLOSED_BY='"'
    NULL_IF=''
    EMPTY_FIELD_AS_NULL=FALSE
)
OVERWRITE=TRUE
SINGLE=FALSE
MAX_FILE_SIZE=5368709120
HEADER=TRUE;

-- list the files that were produced
ls @my_temp_stage;

-- download the files to the local machine
GET @my_temp_stage file:///tmp/;
This example:
- Creates a temporary stage object in Snowflake, which will be discarded when you close your session.
- Takes the results of your query and loads them into one (or more) CSV files in that internal temporary stage, depending on the size of your output. Notice how I didn't create another database object called a "FILE FORMAT"; it's considered a best practice to do so, but you can do these one-off extracts without creating that separate object if you don't mind the command being so long.
- Lists the files in the stage, so you can see what was created.
- Pulls the files down using the GET. In this case it was run on my Mac and the file(s) were placed in /tmp; if you are using Windows you will need to modify it a little bit.
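As a rough sketch of the "different type of client" route mentioned earlier, the same extract could be scripted with the snowflake-connector-python package. All connection parameters below are placeholders, and the table name is the same example used above.

# Sketch: stream a large query result to a local CSV in batches.
# All credentials and object names below are placeholders.
import csv
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",
    warehouse="my_wh", database="my_db", schema="my_schema",
)
try:
    with conn.cursor() as cur, open("/tmp/output.csv", "w", newline="") as f:
        cur.execute("SELECT * FROM databaseNameHere.SchemaNameHere.tableNameHere")
        writer = csv.writer(f)
        writer.writerow(col[0] for col in cur.description)  # header row
        while True:
            rows = cur.fetchmany(10000)  # fetch in batches to keep memory bounded
            if not rows:
                break
            writer.writerows(rows)
finally:
    conn.close()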

How to Import Just The FileName into SQL via SSIS

I am trying to write a file name to a table in my database - at the moment all I am achieving is importing the whole file path.
I have a Foreach Loop whose Collection looks in a specific folder for a specific file type (the retrieved file name is fully qualified).
This has a variable mapping to "ImportInvoiceFilePath".
Within that is a Data Flow Task which includes the flat file source and a derived column that writes the file path to the database.
This works fine - but what I am trying really hard to do, and can't work out, is how do I get just the file name (no extension) to write to the database as well?
Literally worked it out. Set my Foreach Loop to retrieve "Name only", then in the connection to my source file, under Expressions, put:
@[User::ProcessingInvoiceFilePath] + "\\" + @[User::ImportInvoiceFileName] + ".saf"
where .saf is the file extension.
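Purely as an illustration of the same name-only idea outside SSIS (the path below is made up), the logic is: keep the base name without its extension for the database column, and rebuild the full path from the folder, the name, and the extension when the connection needs it.

# Illustration only: mirrors the "name only" variable plus rebuilt-path expression.
from pathlib import Path

full_path = Path(r"C:\Invoices\INV_20240101.saf")  # made-up example path
name_only = full_path.stem                          # "INV_20240101", no extension
rebuilt = full_path.parent / (name_only + ".saf")   # folder + "\" + name + ".saf"

print(name_only)  # the value to write to the database
print(rebuilt)    # the equivalent of the connection-string expression above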

Migrating files from a Dev environment folder to stage

I need to do a migration from one environment to another. It searches for files with a given extension and copies them into the other environment.
Is it possible to automate this? The file and folder names are dynamic in nature.
Can we take as input a file containing this information (file name, environment name (Dev -> stage, or stage -> prod), and folder name)?
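One way this could be automated is to drive the copy from a small manifest file. The sketch below assumes a CSV manifest with columns file_pattern, source_folder, and target_folder; none of these names come from the question.

# Hypothetical sketch: copy files matching an extension between environment
# folders, driven by a CSV manifest with assumed columns
# file_pattern, source_folder, target_folder.
import csv
import shutil
from pathlib import Path

def migrate(manifest_path):
    with open(manifest_path, newline="") as f:
        for row in csv.DictReader(f):
            src = Path(row["source_folder"])
            dst = Path(row["target_folder"])
            dst.mkdir(parents=True, exist_ok=True)
            for file in src.glob(row["file_pattern"]):  # e.g. "*.csv"
                shutil.copy2(file, dst / file.name)

migrate("migration_manifest.csv")  # e.g. a Dev -> stage run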

AS400: How to know which program created a file?

I am not an expert on AS400, I just know some commands, and I exported some files from AS400 (iSeries) into SQL Server 2005.
I actually need to know which RPG program created a file in a library, because that file contains statistical data from other files stored in other AS400 libraries.
This screenshot shows the file STTMVF in the library DAT_4DWH (from DSPLIB DAT_4DWH).
So, is there a command that lets me know which RPG program created the file STTMVF?
If yes, I need to open the RPG or CL source and try to understand which physical files are used to compose this statistics file.
Thanks in advance!
You can use journal management or program references to determine what is writing to the file.
Journal management
Starting the journal
To create a basic journal you need to create a journal receiver, a journal, and activate journalling for the file. Replace RECEIVER-LIB, RECEIVER-FILE, JOURNAL-LIB, JOURNAL-FILE, FILE-LIB and FILE with values appropriate for your system.
CRTJRNRCV JRNRCV(RECEIVER-LIB/RECEIVER-FILE)
CRTJRN JRN(JOURNAL-LIB/JOURNAL-FILE) JRNRCV(RECEIVER-LIB/RECEIVER-FILE)
STRJRNPF FILE(FILE-LIB/FILE) JRN(JOURNAL-LIB/JOURNAL-FILE) OMTJRNE(*OPNCLO)
Dumping the journal
DSPJRN JRN(JOURNAL-LIB/JOURNAL-FILE) FILE(FILE-LIB/FILE) RCVRNG(*CURCHAIN) JRNCDE(R) ENTTYP(PT PX DL UP) OUTPUT(*OUTFILE) OUTFILFMT(*TYPE1) OUTFILE(QTEMP/QADSPJRN)
Querying the journal
The field JOPGM will contain the program name that inserted, updated, or deleted records from the file.
Removing the journal
ENDJRNPF FILE(FILE-LIB/FILE)
DLTJRN JRN(JOURNAL-LIB/JOURNAL-FILE)
Program references
Dumping the references
DSPPGMREF PGM(*ALLUSR/*ALL) OUTPUT(*OUTFILE) OUTFILE(QTEMP/QADSPPGM)
Querying the references
Search the outfile for all references where the field WHFNAM equals FILE. The field WHPNAM will contain the program name. Due to file overrides, etc., this method is not as accurate as using a journal.
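As a rough sketch of how the two outfiles could then be queried from a PC: this assumes the outfiles were written to a permanent library (shown here as MYLIB rather than QTEMP, since QTEMP is private to the job that created it) and that the IBM i Access ODBC driver is installed. The host, credentials, and library names are placeholders.

# Sketch only: query the DSPJRN and DSPPGMREF outfiles over ODBC.
# Assumes the outfiles were written to a permanent library (MYLIB) instead of
# QTEMP, because QTEMP is private to the job that created it.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={IBM i Access ODBC Driver};SYSTEM=my-as400-host;UID=MYUSER;PWD=secret"
)
cur = conn.cursor()

# Programs that wrote journal entries for the file (field JOPGM).
cur.execute("SELECT JOPGM, COUNT(*) FROM MYLIB.QADSPJRN GROUP BY JOPGM ORDER BY 2 DESC")
for program, entries in cur.fetchall():
    print(program, entries)

# Programs that reference the file by name (fields WHPNAM / WHFNAM).
cur.execute("SELECT WHPNAM FROM MYLIB.QADSPPGM WHERE WHFNAM = 'STTMVF'")
for (program,) in cur.fetchall():
    print(program)

conn.close()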

Need to extract/consolidate info from database files

Here's a summary of my problem:
Our company's old software had a large database of contacts in it.
We switched to a new program and have no way to easily transfer those contacts to it.
The contacts database appears to have 4 files, all of which can be opened in Excel, but not MS Access. The four files contain the following:
File 1: A nicely formatted spreadsheet of names and some other basic info for each contact. There is an ID number on each one, but the numbers do not seem to correspond to anything in File 2.
File 2: Info on each contact, but not in rows. Instead it looks something like this:
JHGH_CONTACT_BLOB: 1426367745
EMAIL: SMITH
WEB:
PHONE_COUNT: 1
FAX_COUNT: 0
ADDRESS_COUNT: 0
NOTE_COUNT: 0
555-7364
(I changed some info for privacy reasons)
Each blob of info is on a separate spreadsheet row. Each starts off with the same first line, and even the number is the same, so it can't be some sort of ID number.
File 3: A file containing a lot of gobbledygook, interspersed with a few readable bits of text here and there. The readable text looks like it belongs to the database (ie, it is info on contacts like place of work and other notes.)
File 4: Contains one row and one column labeled ID, with the number 12725 in it.
I need to somehow get the info from File 2 into the nicely formatted File 1. In essence, I need to add the phone numbers, emails, etc., included in a messy fashion in File 2, to their proper rows in File 1.
This probably makes little sense and I thank you for even reading down this far. If you have any suggestions, I'd love to hear them.
Thanks
We have established that you have a DBF file, an FPT file and a CDX file. These are likely to all relate to Visual FoxPro (a now discontinued Microsoft product).
The .dbf file can be opened in Excel via the standard file open dialog by changing "Files of type" to "dBase files (*.dbf)". Going by your original post, Excel seems to be able to open this sensibly in the first place.
The combination of all three files might be accessible by downloading this OLE DB provider for FoxPro which would let you access the database from Excel using the methods outlined here
You can get more info on the specific file structures at the following links: DBF, FPT and CDX. The DBF contains most of the data, the FPT contains binary memo data and the CDX is an index file.
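If Excel or the OLE DB provider route proves awkward, another option (not from the original answer) is to read the .dbf directly with the third-party dbfread Python package, which also picks up FoxPro memo data from the accompanying .fpt file. The file names and encoding below are placeholders.

# Sketch: dump the FoxPro table to CSV with the third-party dbfread package.
# File names and the encoding are assumptions for illustration.
import csv
from dbfread import DBF

table = DBF("contacts.dbf", encoding="latin-1")  # memo fields come from the .fpt

with open("contacts_export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(table.field_names)      # column headers from the DBF
    for record in table:                    # each record behaves like a dict
        writer.writerow(record.values())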
