A batch file is executed and it gives a few parameters as output. I need to write those output parameters to the Virtual Table Server (VTS), as I need to pass them to my LR script.
Output of Batch file
USING API KEY : Android
Base URL = /user/authorization
HEADERS :
accessKey = 45k907its35dooeo182dm0guy8k0dv8o
signature = Tdo0ZBfZazTvYd8UwmHT+haq2vM=
timestamp = 1455397355435
The major concern is that the output of the batch file is valid only for a few minutes (HMAC-SHA1), so I can't wait to copy it into a CSV file and then upload it to VTS. Any ideas how I can write it directly to VTS? Thanks in advance :)
Have you considered having your batch file automatically output the CSV and then also automatically upload it to VTS? Otherwise, the API in use is a set of LoadRunner libraries.
An alternate path that I often use is RabbitMQ. With the management plugin there is an HTTP interface available, so you could push the data with curl and then pull it using a standard HTTP call from within your virtual user. I use RabbitMQ for its open back end, which lets things outside of a LoadRunner virtual user seed a queue, and also because the HTTP interface is simpler than the VTS API set.
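As a rough sketch of the push side only (none of this is LoadRunner-specific, and the host, port, credentials and queue name vts_feed are all assumptions): the management plugin's HTTP API lets you publish to the default exchange with a routing key equal to the queue name, so the batch step could run a small program like this right after it generates the credentials. Build with -lcurl.

/* Hedged sketch: publish the batch output to a RabbitMQ queue through the
   management plugin's HTTP API.  URL, credentials and queue name are
   assumptions; adjust to your environment. */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    const char *url  = "http://localhost:15672/api/exchanges/%2F/amq.default/publish";
    const char *body =
        "{\"properties\":{},\"routing_key\":\"vts_feed\","
        "\"payload\":\"accessKey=...;signature=...;timestamp=...\","
        "\"payload_encoding\":\"string\"}";
    struct curl_slist *headers = NULL;
    CURL *curl;
    CURLcode res;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (curl != NULL) {
        headers = curl_slist_append(headers, "Content-Type: application/json");
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_USERPWD, "guest:guest");   /* default credentials */
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

        res = curl_easy_perform(curl);                            /* POSTs the message */
        if (res != CURLE_OK)
            fprintf(stderr, "publish failed: %s\n", curl_easy_strerror(res));

        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}

The virtual user can then pull the message back with a plain HTTP request against the same API's queue "get" endpoint.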
I'm trying to read a binary file from a local filesystem, send it over HTTP, then in a different application I need to receive the file and write it out to the local file system, all using Apache Camel.
My (simplified) client code looks like this:
from("file:<path_to_local_directory>")
.setHeader(Exchange.HTTP_PATH, header("CamelFileNameOnly"))
.setHeader(Exchange.CONTENT_TYPE, constant("application/octet-stream"))
.to("http4:localhost:9095");
And my server code is:
restConfiguration()
.component("spark-rest")
.port(9095);
rest("/{fileName}")
.post()
.consumes("application/octet-stream")
.to("file:<path_to_output_dir>?fileName=${header.fileName}");
As you can see, I'm using the Camel HTTP4 Component to send the file and the Spark-Rest component to receive it.
When I run this, and drop a file into the local directory, both the client and server applications work and the file is transmitted, received and written out again. The problem I'm seeing is that the original file is 5860kb, but the received file is 9932kb. As it's a binary file it's not really readable, but when I open it in a text editor I can easily see that it has changed and many characters are different.
It feels like it's being treated as a text file and it's being received and written out in a different character set to that in which it is written. As a binary file, I don't want it to be treated as a text file which is why I'm handling it as application/octet-stream, but this doesn't seem to be honoured. Or maybe it's not a character set problem and could be something else? Plain text files are transmitted and received correctly, with no corruption, which leads me to think that it is the special characters in the binary file that are causing the problem.
I'd like to resolve this so that the received file is identical to the sent file, so any help would be appreciated.
I had the same issue. By default, Camel will serialize the body as a String when producing to the http endpoint.
You should explicitly convert the GenericFile to byte[] with a simple .convertBodyTo(byte[].class) before your .to("http4:..").
Here I have a Tmote Sky node, and I print the RSSI to the terminal with printf. Now I want to store this RSSI data on my computer. I have tried CFS, which is used to operate the external flash of a node. So how can I save the data to my computer with Contiki?
/platform/sky/Makefile.common provides a target serialdump, which will also print the output to a file named serialdump-<current time>. Therefore you want to run make serialdump TARGET=sky.
Or do you want to get the data from the external flash? In that case you need to add a function that dumps the file contents to the serial (e.g. when pushing the button or sending a special command via serial). You can then save that output to a file.
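A minimal sketch of such a dump process, assuming the RSSI samples were written to a CFS file named rssi.log (the file name and the button trigger are assumptions):

/* Hedged sketch (Contiki): dump a CFS file over the serial line when the
 * user button is pressed, so the host can capture it with
 * "make serialdump TARGET=sky". */
#include "contiki.h"
#include "cfs/cfs.h"
#include "dev/button-sensor.h"
#include <stdio.h>

PROCESS(dump_process, "CFS dump process");
AUTOSTART_PROCESSES(&dump_process);

PROCESS_THREAD(dump_process, ev, data)
{
  static int fd;
  static char buf[64];
  static int len;

  PROCESS_BEGIN();

  SENSORS_ACTIVATE(button_sensor);

  while(1) {
    /* Wait for a button press before dumping. */
    PROCESS_WAIT_EVENT_UNTIL(ev == sensors_event && data == &button_sensor);

    fd = cfs_open("rssi.log", CFS_READ);
    if(fd >= 0) {
      while((len = cfs_read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[len] = '\0';
        printf("%s", buf);        /* goes out over the serial line */
      }
      cfs_close(fd);
    }
  }

  PROCESS_END();
}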
When I try to establish an XCB connection to a given display stored in a string, e.g. dpy, I know I can do it in two different ways:
Simply call xcb_connect(dpy, NULL), or
Set environment variable DISPLAY to the value of dpy and call xcb_connect(NULL, NULL).
However, if my X server requires an Xauthority file, I can only establish an XCB connection if I set the environment variable XAUTHORITY to the Xauthority file path and then call xcb_connect(dpy, NULL).
I would like to establish this connection without having to set the environment variable XAUTHORITY. I know there's a function in the XCB API called xcb_connect_to_display_with_auth_info() which receives an xcb_auth_info_t struct, but I have absolutely no idea how to build this struct given an Xauthority file path.
How could I do it?
The contents of an xcb_auth_info_t struct are the same as the parameters to XSetAuthorization.
Unfortunately, that's not well documented either.
name is the authorization method name (e.g. "MIT-MAGIC-COOKIE-1"), and data is the authentication data (e.g. a 128-bit cookie).
If you want to avoid using the XAUTHORITY env var, but have an .Xauthority file, I think you could use XauReadAuth to parse the .Xauthority file and locate the entry corresponding to the display you are connecting to, and extract the authentication method and data.
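Something along these lines should work (a sketch only, assuming libXau is available and linking with -lXau -lxcb; real code should also match the entry's family and address, not just the display number):

/* Hedged sketch: build an xcb_auth_info_t from an explicit Xauthority file
 * using libXau.  It simply picks the first entry whose display number
 * matches. */
#include <stdio.h>
#include <string.h>
#include <X11/Xauth.h>
#include <xcb/xcb.h>

xcb_connection_t *connect_with_authority(const char *dpy,
                                         const char *xauth_path,
                                         const char *display_number) /* e.g. "0" */
{
    FILE *f = fopen(xauth_path, "rb");
    Xauth *entry;
    xcb_connection_t *conn = NULL;

    if (f == NULL)
        return NULL;

    while ((entry = XauReadAuth(f)) != NULL) {
        if (entry->number_length == strlen(display_number) &&
            memcmp(entry->number, display_number, entry->number_length) == 0) {

            xcb_auth_info_t auth;
            auth.namelen = entry->name_length;   /* e.g. "MIT-MAGIC-COOKIE-1" */
            auth.name    = entry->name;
            auth.datalen = entry->data_length;   /* the cookie */
            auth.data    = entry->data;

            conn = xcb_connect_to_display_with_auth_info(dpy, &auth, NULL);
            XauDisposeAuth(entry);
            break;
        }
        XauDisposeAuth(entry);
    }

    fclose(f);
    return conn;
}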
I am designing a logger plugin for my tool. I have a BusyBox syslog on a target board, and I want to get the syslog data from it so I can forward it to my host (not via remote port forwarding of syslog) using my own communication framework. Initially I made use of syslog's ability to forward the messages it receives to a named pipe, but this only works via a patch addition, which is not feasible in my case. So now my idea is to write a configuration in syslog to forward all log messages it receives to a file, and to track that file to get my data. I could use tail to monitor file changes, but my BusyBox tail does not support the "--follow" option, and syslog performs logrotate, which causes "tail -f" to fail. I am also not sure if this is a good method. So what I wanted to ask is: is there another way to get the modified data from a file? I can use inotify, but that can only be used to track file changes. Is there a way to do this?
You could try the "diff" utility (or git-diff, which has more facilities).
You could write a script/program that receives the inotify event; when the event arrives, it reopens the file and reads to EOF from the previously saved read position.
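A minimal sketch of that idea, assuming the messages land in /var/log/messages (the path is an assumption): it resets the saved offset when the file shrinks, which covers truncation-based rotation; if syslog rotates by renaming, you would additionally need to re-add the watch, since inotify follows the inode rather than the path.

/* Hedged sketch: follow a log file with inotify, reading from the last saved
 * offset on every modification. */
#include <sys/inotify.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

#define LOG_PATH "/var/log/messages"

static void read_new_data(off_t *offset)
{
    char buf[4096];
    ssize_t n;
    struct stat st;
    int fd = open(LOG_PATH, O_RDONLY);

    if (fd < 0)
        return;

    /* Rotated/truncated file: start again from the beginning. */
    if (fstat(fd, &st) == 0 && st.st_size < *offset)
        *offset = 0;

    lseek(fd, *offset, SEEK_SET);
    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        fwrite(buf, 1, n, stdout);   /* forward to your own framework here */
        *offset += n;
    }
    close(fd);
}

int main(void)
{
    char events[4096];
    off_t offset = 0;
    int in_fd = inotify_init();

    if (in_fd < 0 || inotify_add_watch(in_fd, LOG_PATH, IN_MODIFY) < 0) {
        perror("inotify");
        return 1;
    }

    read_new_data(&offset);                       /* catch up first */
    while (read(in_fd, events, sizeof(events)) > 0)
        read_new_data(&offset);                   /* read what was appended */

    return 0;
}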
I have a top-level minifilter driver and a user-mode service, which is similar to the Scanner MSDN example.
I want my user-mode service to replace the A.txt file contents, when it's opened in the Notepad.
So, in the IRP_MJ_CREATE post-operation callback I'm sending a notification to the service and waiting for it to write the new data to the file.
But the service cannot open A.txt, because it's already locked by Notepad.
How to allow my service to write the data without using the kernel FltWriteFile?
What is the best way of doing this?
Maybe cancel the file open, let the service write the data, and reopen the file with the same parameters without leaving the post-operation callback?
Maybe I should overwrite the desired access in the pre-op?
---
Any info will be appreciated. If you think this question lacks details, please let me know.
Instead of notifying your service in the PostOperation callback, do it in the PreOperation callback. By the time you do it in PostOperation, the file has already been opened for Notepad.exe, which is why the open in your service fails.
Also, if you are not doing so already, you will have to wait in PreOperation while your service writes the new data to the file.
I don't really agree with Rogan's answer, as the file could very well be locked by any other process before Notepad.
That is not the issue here or at least not how you should look at this problem.
If you want Notepad to have a certain view of A.txt, simply use Notepad's FILE_OBJECT and do the writing yourself from the kernel. Just remember to use ObReferenceObjectByPointer and ask for WRITE access. Since the access mode will be kernel mode, you will be allowed.
Alternatively, if you really want it to be done by your service, open the file yourself from the driver and provide a handle to your service. Opening the file from kernel mode can suppress share modes and so on; you will need to read the documentation for FltCreateFileEx2 to make sure you have all the necessary parameters.
Use ObOpenObjectByPointer on the FILE_OBJECT you have just opened, with access mode UserMode. Make sure you are attached to your user-mode service's process address space via KeStackAttachProcess.
Order of operations in PostCreate (a rough sketch follows after the list):
FltCreateFileEx2(the_file, ignore_share_access, etc.)
KeStackAttachProcess(your_service_eprocess)
ObOpenObjectByPointer(access_mode = UserMode) -> now your user-mode process has a handle to the file
KeUnstackDetachProcess()
Send the HANDLE value to the user-mode process, as it is now able to use it.
Wait for the user-mode service to write the data and close the handle.
Dereference the FILE_OBJECT you obtained and close the handle from FltCreateFileEx2.
Let the Create go through for Notepad.
Profit.
Good luck.
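A rough, unverified sketch of those steps (gFilterHandle, gServiceProcess, the hard-coded path and the access rights are all assumptions; error handling and the communication-port plumbing are omitted):

#include <fltKernel.h>

/* Assumptions: the filter handle from FltRegisterFilter, the EPROCESS of the
   user-mode service, and a hard-coded path to A.txt. */
extern PFLT_FILTER gFilterHandle;
extern PEPROCESS   gServiceProcess;

NTSTATUS GiveServiceAHandle(PCFLT_RELATED_OBJECTS FltObjects, PHANDLE UserHandle)
{
    UNICODE_STRING    fileName = RTL_CONSTANT_STRING(L"\\??\\C:\\A.txt");
    OBJECT_ATTRIBUTES oa;
    IO_STATUS_BLOCK   ioStatus;
    HANDLE            kernelHandle = NULL;
    PFILE_OBJECT      fileObject = NULL;
    KAPC_STATE        apcState;
    NTSTATUS          status;

    InitializeObjectAttributes(&oa, &fileName,
                               OBJ_KERNEL_HANDLE | OBJ_CASE_INSENSITIVE, NULL, NULL);

    /* 1. Open the file from the filter, ignoring share access. */
    status = FltCreateFileEx2(gFilterHandle, FltObjects->Instance,
                              &kernelHandle, &fileObject,
                              FILE_GENERIC_WRITE, &oa, &ioStatus,
                              NULL,                         /* AllocationSize     */
                              FILE_ATTRIBUTE_NORMAL,
                              FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                              FILE_OPEN,
                              FILE_NON_DIRECTORY_FILE | FILE_SYNCHRONOUS_IO_NONALERT,
                              NULL, 0,                      /* EaBuffer, EaLength */
                              IO_IGNORE_SHARE_ACCESS_CHECK,
                              NULL);                        /* DriverContext      */
    if (!NT_SUCCESS(status)) {
        return status;
    }

    /* 2. Attach to the service's address space so the new handle is created
          in its handle table. */
    KeStackAttachProcess((PRKPROCESS)gServiceProcess, &apcState);

    /* 3. Create a UserMode handle for the service from the FILE_OBJECT. */
    status = ObOpenObjectByPointer(fileObject,
                                   0,                       /* HandleAttributes  */
                                   NULL,                    /* PassedAccessState */
                                   FILE_GENERIC_WRITE,
                                   *IoFileObjectType,
                                   UserMode,
                                   UserHandle);

    /* 4. Detach again. */
    KeUnstackDetachProcess(&apcState);

    /* 5./6. Send *UserHandle to the service and wait until it has written the
             data and closed the handle (communication port code not shown). */

    /* 7. Drop the kernel references. */
    ObDereferenceObject(fileObject);
    FltClose(kernelHandle);

    /* 8. Return and let the Create continue for Notepad. */
    return status;
}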
// Declarations
// Note: Data (PFLT_CALLBACK_DATA) is passed to your pre-operation callback as an
// argument, so it does not need to be declared explicitly.
PFLT_FILE_NAME_INFORMATION nameInfo = NULL;   // must be declared
NTSTATUS status;

// File name queries should be performed at PASSIVE_LEVEL.
if (KeGetCurrentIrql() == PASSIVE_LEVEL)
{
    status = FltGetFileNameInformation(Data,
                                       FLT_FILE_NAME_OPENED | FLT_FILE_NAME_QUERY_ALWAYS_ALLOW_CACHE_LOOKUP,
                                       &nameInfo);
    if (NT_SUCCESS(status))
    {
        status = FltParseFileNameInformation(nameInfo);
        // ... use the parsed parts (Volume, ParentDir, FinalComponent, Extension, ...)
        FltReleaseFileNameInformation(nameInfo);
    }
}
// Now the file's name information is available in the nameInfo structure.
// Reading the documentation for the structures used above will help you further,
// especially FLT_FILE_NAME_INFORMATION.