Programmatically read queue.xml file - google-app-engine

This question is related to this problem: Programmatically read a Queue's parameters
Is there a way to read the queue.xml file's contents programmatically on App Engine? As far as I know, all filesystem-related operations are prohibited on GAE.

The prohibited operations are the ones that write to the file system (writing simply does not exist in the sandbox), but the read operations are available without problems.
A new File() object resolves relative paths against your war folder (or webapp for a Maven project), so you can open any file under that folder.
You can create new File("WEB-INF/queue.xml") and then read it with any of the common ways to parse XML.
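A minimal sketch of that approach using the JDK's built-in DOM parser. The <name> and <rate> elements are standard queue.xml entries; the null check hedges against pull queues, which define no rate:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class QueueXmlReader {
    public static void main(String[] args) throws Exception {
        // Relative paths resolve against the war root on App Engine.
        File queueXml = new File("WEB-INF/queue.xml");
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(queueXml);
        // List every <queue> entry with its name and (optional) rate.
        NodeList queues = doc.getElementsByTagName("queue");
        for (int i = 0; i < queues.getLength(); i++) {
            Element queue = (Element) queues.item(i);
            String name = queue.getElementsByTagName("name").item(0).getTextContent();
            NodeList rates = queue.getElementsByTagName("rate");
            String rate = rates.getLength() > 0 ? rates.item(0).getTextContent() : "(no rate)";
            System.out.println(name + " -> " + rate);
        }
    }
}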

Related

Moving file to another directory in ABAP

I have a service running against a specific directory in 5-second intervals, which picks up an XML file created in that directory, sends it to another client for the necessary authorization checks, and then requests a response file.
My issue is that my Z_PROGRAM creating the XML file might take longer than 5 seconds because of the file's size, so creating the file directly in that directory is not preferable. I thought about creating a new folder in that directory called "temporary", creating the file inside that folder, and then, once I'm done with it, moving it back outside for the service to pick up.
Is there any way to move files from one directory to another via ABAP code only?
Copying the file manually is not an option, since the problem I have during file creation would still persist. I need two alternatives: one for local directories and one for application server directories. Any ideas?
Generally, we create an additional empty file after the file creation process ends. Third parties must first check that this empty marker file is there before reading the data file. Example:
data file.csv
data file.ok
If you have already completed your integration and it is not easy to change anything with the third parties, I prefer using OS-level file move commands: mv on a Linux server and move on Windows. If your file is big, you will hit the same problem with the OPEN DATASET approach. There is the ARCHIVFILE_SERVER_TO_SERVER function module for moving files, but it also uses OPEN DATASET internally.
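The marker-file handshake itself is language-agnostic; here is a minimal sketch of the producer side in Java, reusing the file names from the example above:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MarkerFileProducer {
    public static void main(String[] args) throws IOException {
        Path data = Paths.get("data file.csv");
        Path marker = Paths.get("data file.ok");

        // Writing the (possibly large) data file may take a while.
        Files.write(data, "col1;col2\nvalue1;value2\n".getBytes());

        // Only after the data file is complete, create the empty marker.
        // Consumers poll for the .ok file and only then read the .csv.
        Files.createFile(marker);
    }
}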
There is no explicit move command in ABAP that moves or copies files between directories on the application server.
Two tips can be helpful in your case. First, if you are writing a big file, separate the logic that collects the data from the logic that writes the file. Do not execute TRANSFER inside your data-collection loop; instead, collect your data into an internal table, and once you're done, loop over that table and write the strings directly, without any delay in between. That way you should be able to write big files of up to several hundred MB in under a second.
The second tip is to leave your program (or the function modules you use to construct the XML) unchanged and write to a temporary directory. After finishing, have another program open the file in the source directory with READ DATASET and write the data straight to the new directory, again just strings, without interruptions.
You should be fine as long as you just write strings.
You can simply use a system call command to perform actions in the application server directory.
CALL 'SYSTEM'
  ID 'COMMAND'
  FIELD 'mv /usr/sap/temporary/File.xml /usr/sap/final/file.xml'.

How to access the source directory in Codename One

I am currently trying to create a plugin-like library for my company.
I need to check if four directories exist within the project structure. As java.io.File is not available, I am pretty confused about how to check for the existence of a file that needs to exist within the project structure.
The concrete use case:
There will be four directories:
/entities
/converter
/attributes
/caches
Now if a developer uses this library and wants to access all, let's say, "Person" entities from the server, they should be able to call
RestGet.getAll("Person");
and the library checks the project's source directory for these files:
/entities/PersonEntity.java //<-- Stores the actual data
/converter/PersonConverter.java //<-- Converts the JSON answer of the server to the Object
/attributes/PersonAttributes.java //<-- An enum that is used to set the attributes of the object
/caches/PersonCache.java //<-- A simple Cache
How can I do this? I tried FileSystemStorage, but it only tells me that I should use getAppHome()...
I don't quite understand the usage of the source directory, which obviously won't exist on the device where your application is running.
You can get access to files in the root of your SRC directory which get packaged into the JAR using Display.getInstance().getResourceAsStream(...).
The replacement to java.io.File is FileSystemStorage which is covered in the developer guide.
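A minimal sketch of that call; the file name entities.txt is hypothetical and stands for anything you place in the root of src so that it gets packaged into the JAR:

import com.codename1.io.Util;
import com.codename1.ui.Display;
import java.io.IOException;
import java.io.InputStream;

public class BundledResourceCheck {
    // Returns the content of a file bundled in the JAR root, or null if absent.
    public static String readBundled(String name) {
        InputStream is = Display.getInstance()
                .getResourceAsStream(BundledResourceCheck.class, "/" + name);
        if (is == null) {
            return null; // the file was not packaged with the app
        }
        try {
            return Util.readToString(is);
        } catch (IOException e) {
            return null;
        }
    }
}

Since the .java sources themselves never reach the device, checking for the four directories at runtime is only possible indirectly, e.g. via a manifest file like this generated at build time and shipped inside the JAR.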

Data loss on concurrent file write in camel

I am using Camel for my file operations. My system runs in a cluster environment.
Let's say I have 4 instances:
Instance A
Instance B
Instance C
Instance D
Folder structure:
Input Folder: C:/app/input
Output Folder: C:/app/output
All four instances point to the input folder location. As per my business requirement, 8 files will be placed in the input folder and the output should be a single consolidated file. Here Camel loses data when the instances write to the output file concurrently.
Route:
from("file://C:/app/input")
.setHeader(Exchange.File_Name,simple("output.txt"))
.to("file://C:/app/output?fileExist=Append")
.end();
Kindly help me to resolve this issue. Is there anything like a write lock in Camel to avoid concurrent file writes? Thanks in advance.
You can use the doneFileName option of the file component, see http://camel.apache.org/file2.html for more information. In particular, note this warning from the documentation:
Avoid reading files currently being written by another application
Beware the JDK File IO API is a bit limited in detecting whether another application is currently writing/copying a file. And the implementation can be different depending on the OS platform as well. This could lead to Camel thinking the file is not locked by another process and starting to consume it. Therefore you have to do your own investigation of what suits your environment. To help with this, Camel provides different readLock options and the doneFileName option that you can use. See also the section Consuming files from folders where others drop files directly.
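A sketch of how those options could combine on the consumer side, reusing the folders from the question; readLock=changed is just one of the documented readLock values:

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class ConsolidateRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Only pick up a file once its companion done file exists
        // (doneFileName) and its length has stopped changing (readLock=changed).
        from("file://C:/app/input?doneFileName=${file:name}.done&readLock=changed")
            .setHeader(Exchange.FILE_NAME, simple("output.txt"))
            .to("file://C:/app/output?fileExist=Append");
    }
}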

Apache Camel 2.10.7 - monitor deletion of files from file system

I am using Camel 2.10.7 with great success from ServiceMix to feed files from the local file system to my application.
The files shall remain on the file system, hence I use a configuration like this one.
from uri="file:../ange-data/vessels?noop=true&idempotentKey=${file:name}-${file:modified}"
This works great if I touch/update a file on the file system.
Only one issue remains: how can I then detect in my Java code that a file has been removed from the file system by some other person or process?
Could not find any hint by studying the manual pages http://camel.apache.org/file-language.html or http://camel.apache.org/file2.html - but I believe it should be possible to get a message on file deletion?
You would need to use Java 7 NIO.2, which has a file watcher API where you can get notifications when files are added/removed, etc.
Search the web / SO for details on this api, for example
http://docs.oracle.com/javase/tutorial/essential/io/notification.html
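A minimal sketch of that API watching the directory from the route above for deletions:

import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class DeletionWatcher {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("../ange-data/vessels");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_DELETE);

        while (true) {
            WatchKey key = watcher.take(); // blocks until an event arrives
            for (WatchEvent<?> event : key.pollEvents()) {
                // The context of an ENTRY_DELETE event is the deleted file's name.
                System.out.println("Deleted: " + event.context());
            }
            if (!key.reset()) {
                break; // directory no longer accessible
            }
        }
    }
}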

Sharing file locks

I am currently working on a file processing service that looks at a file share, where files are uploaded via FTP.
For scalability I've been asked to make this service load-balanceable, so the service has to expect that other services on different machines may also be trying to process these files.
OK, so I thought I should be able to achieve this by obtaining an exclusive lock for my process before processing a file, and skipping any files that may already be locked by another process.
The crux of this approach is shown below (I've left out the error handling for simplicity):
using (FileStream fs = File.Open(myFile, FileMode.Open, FileAccess.ReadWrite, FileShare.Read | FileShare.Delete))
{
    // Do work
}
Q1: My process now has a lock on this file. I thought this would mean I could then access the same file (without using the stream) and still have the correct access to it, but based on testing it seems I only get the benefits of the lock through the stream. Is this correct?
(For example, before I included FileShare.Delete, File.Delete(myFile) failed)
The above lock ultimately uses the 'Write' permission to determine which service has the file, but is intended to allow other processes to still read the file. This is because the process that has the lock attempts to verify that the file is a valid zip file, which uses a third-party library (Xceed.Zip). However, this fails, saying the file "is being used by another process". Using Reflector, I ultimately found the problematic call is:
stream = this.m_info.Open(FileMode.Open, FileAccess.Read, FileShare.Read);
Now I would have expected this to work as it only wants to read the file, but it fails. The reason appears to be outlined in a similar question. However, as this is a 3rd party API I can't change their code to use ReadWrite.
Q2: Is there a way I can correctly lock the file so it will not be picked up by the other services, but it can still be verified as a zip file using the external API?
I feel like there should be a 'correct' way to do this, but at the moment the best I can come up with is to lock the file, move it away from the shared directory, and then verify it at the new location.
If you're planning to reactively handle this situation by handling UnauthorizedAccessException, I think you're making a serious mistake.
This can be handled by proactively renaming files. For example you can configure your service to only read files whose name is in the format 'Filename.YYYYMMDD.txt'. Prior to processing the file, you can rename it to 'Filename.YYYYMMDD.processing'. Then after processing the file you rename it to 'Filename.YYYYMMDD.done'.
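The rename-to-claim trick works because a rename within the same volume is atomic on common file systems, so exactly one competing service wins it. The question is about .NET, but the pattern itself is language-agnostic; a minimal sketch in Java, with the name suffixes taken from the convention above:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class FileClaimer {
    // Tries to claim a file by renaming 'Filename.YYYYMMDD.txt' to
    // 'Filename.YYYYMMDD.processing'. Only one competing service can win
    // the rename; the losers get an IOException and skip the file.
    public static Path tryClaim(Path file) {
        Path claimed = Paths.get(file.toString().replace(".txt", ".processing"));
        try {
            return Files.move(file, claimed, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException e) {
            return null; // another instance claimed it first
        }
    }
}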
You can even take it a step further by making another service that enqueues the filenames. This service would use a FileSystemWatcher that listens for file-creation events. Once it receives such an event, it proceeds to queue the filename on a global message queue. Then each of your services just dequeues filenames and no longer has to worry about concurrent access.
HTH
