Why is this code taking very long to execute on a server - c

We use LoadRunner to run performance tests for an application. The tests are generally written in ANSI C.
We have a simple base64 encoding function:
void base64encode(char *src, char *dest, int srcLen, int destLen)
{
    int i = 0;
    int slen = srcLen;
    for (i = 0; i < slen && i < destLen; i += 3, src += 3)
    {
        *(dest++) = base64encode_lut[(*src & 0xFC) >> 0x2];
        *(dest++) = base64encode_lut[(*src & 0x3) << 0x4 | (*(src + 1) & 0xF0) >> 0x4];
        *(dest++) = ((i + 1) < slen) ? base64encode_lut[(*(src + 1) & 0xF) << 0x2 | (*(src + 2) & 0xC0) >> 0x6] : '=';
        *(dest++) = ((i + 2) < slen) ? base64encode_lut[*(src + 2) & 0x3F] : '=';
    }
    *dest = '\0';
}
When run on a developer machine (64-bit Windows 10), this code completes in under a second for a simple image (srcLen around 7k).
When run on the production server (32-bit Windows 2012, a VM), the same image takes between 10 and 20 minutes to encode.
Can anyone explain why, and how to avoid this issue? I'm not sure whether LoadRunner or the code is to blame.
EDIT: adding the code that calls the encoding function:
long infile;                        // file pointer
char *buffer;                       // buffer to read file contents into
char *filename = "DFC_COLOR.jpg";   // file to read
int fileLen;                        // file size
int bytesRead;                      // bytes read from file
char *encoded;
int dest_size;

web_set_max_html_param_len("999999999");

infile = fopen(filename, "rb");
if (!infile) {
    lr_error_message("Unable to open file %s", filename);
    return;
}

// get the file length
fseek(infile, 0, SEEK_END);
fileLen = ftell(infile);
fseek(infile, 0, SEEK_SET);
lr_log_message("File length is: %9d bytes.", fileLen);

// Allocate memory for buffer to read file
buffer = (char *)malloc(fileLen + 1);
if (!buffer) {
    lr_error_message("Could not malloc %10d bytes", fileLen + 1);
    fclose(infile);
    return;
}

// Read file contents into buffer
bytesRead = fread(buffer, 1, fileLen, infile);
if (bytesRead != fileLen)
{
    lr_error_message("File length is %10d bytes but only read %10d bytes", fileLen, bytesRead);
}
else
{
    lr_log_message("Successfully read %9d bytes from file: ", bytesRead);
}
fclose(infile);

// Save the buffer to a loadrunner parameter
lr_save_var(buffer, bytesRead, 0, "fileDataParameter");

// calculate the destination size
dest_size = 1 + ((bytesRead + 2) / 3 * 4);
encoded = (char *)malloc(dest_size);
memset(encoded, 0, dest_size);

// encode the buffer
base64encode(buffer, encoded, bytesRead, dest_size);

Run a single virtual user on a non-shared, non-brokered physical machine and see if you get the same performance. You now want to determine what your priority and resource pool are within the virtual machine environment. You could be running on a VM host which is heavily overloaded, in which case you are simply waiting while the hypervisor decides who gets a given resource set at which time. Moving a single virtual user to a single hardware host gives you a like-for-like basis for the comparison.
LoadRunner also includes an RFC-compliant base64 encoding and decoding algorithm as part of its core set, so you do not need to recode this. Normally it is used as part of the SMTP or DNS virtual users, but you can load the DLL, prototype the functions and move forward. You can find existing function prototypes in the \include\mic_socket.h header file. Here is a great article on how this is accomplished (I know the editors are going to scream and demote this as an external link, so you may want to capture the link fast):
https://northwaysolutions.com/blog/loadrunner-vugen-encoding-and-decoding-base64/#.WL1vZxLytlc
The other way to handle this is as a Data Format Extension. Decode is already covered as a base extension, so you would only have to handle the encode if you so desired. Use Google to pull references to LoadRunner, DFE and base64 for the appropriate items. You can also find a DFE developer's guide in your local documentation set, installed with your copy of LoadRunner.
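As a first isolation step, one option is to time the encoder by itself, outside of LoadRunner. The sketch below is not LoadRunner-specific: it assumes a plain C compiler, is compiled together with base64encode() and its base64encode_lut table from the script, and uses a dummy 7 KB buffer roughly matching the image size in the question. If this finishes in milliseconds on the production VM while the script still takes minutes, the time is going into the environment (hypervisor scheduling, LoadRunner logging and parameter handling) rather than into the encoding loop itself.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

extern void base64encode(char *src, char *dest, int srcLen, int destLen);

int main(void)
{
    int srcLen = 7 * 1024;                       /* roughly the image size in question */
    int dest_size = 1 + ((srcLen + 2) / 3 * 4);  /* same sizing formula as the script */
    char *src = malloc(srcLen);
    char *dest = malloc(dest_size);
    clock_t start, end;

    if (!src || !dest)
        return 1;
    memset(src, 'A', srcLen);                    /* dummy payload */

    start = clock();
    base64encode(src, dest, srcLen, dest_size);
    end = clock();

    printf("encode took %.3f ms\n", 1000.0 * (end - start) / CLOCKS_PER_SEC);
    free(src);
    free(dest);
    return 0;
}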

Related

in-memory FILE* (without disk access)

We have a library which accepts a FILE* (CImg). For performance reasons we wish to handle data that is already in memory, without touching the disk. The target platform is Windows, which unfortunately does not support fmemopen() (or funopen()).
char* buf = new char[sz];
FILE *fp = fopen("C:\\test.dat", "wb");
int r = setvbuf(fp, buf, _IOFBF, sz);
r = fwrite(src_buf, 1, sz, fp); // Here r contains right size
fseek(fp, 0, SEEK_END);
size_t sz2 = ftell(fp); // sz2 contains right size as well
rewind(fp);
// Test (something like this actually is somewhere deep inside library)
char* read_buf = new char[sz];
r = fread(read_buf, 1, sz, fp); // Zero!
Final fread() can't read anything... Any suggestions?
Possibly because you opened the file with "wb" (write-only) instead of "wb+" (read + write).
fopen() function
To help further, you can #include <errno.h> and print out the error code and string:
printf( "errno: %d, error:'%s'\n", errno, strerror( errno ) );
For our specific case we found a CImg plugin that does the job.
As for the sample in the original question: at least the MSVC runtime flushes the written content to disk by itself, so it is not a valid replacement for fmemopen().
https://github.com/Snaipe/fmem is a wrapper for different platform/version specific implementations of in-memory files
It tries in sequence the following implementations:
open_memstream.
fopencookie, with growing dynamic buffer.
funopen, with growing dynamic buffer.
WinAPI temporary memory-backed file.
When no other means is available, fmem falls back to tmpfile().

Proper way to get file size in C

I am working on an assignment in socket programming in which I have to send a file between a SPARC and a Linux machine. Before sending the file as a char stream, I have to get the file size and tell the client. Here are some of the ways I tried to get the size, but I am not sure which one is the proper one.
For testing purposes, I created a file with the content " test" (a space followed by the string "test").
Method 1 - Using fseeko() and ftello()
This is a method I found on https://www.securecoding.cert.org/confluence/display/c/FIO19-C.+Do+not+use+fseek()+and+ftell()+to+compute+the+size+of+a+regular+file
While fseek() has the problem that "Setting the file position indicator to end-of-file, as with fseek(file, 0, SEEK_END), has undefined behavior for a binary stream", fseeko() is said to tackle this problem, but it only works on POSIX systems (which is fine because the environment I am using is SPARC and Linux).
fd = open(file_path, O_RDONLY);
fp = fopen(file_path, "rb");

/* Ensure that the file is a regular file */
if ((fstat(fd, &st) != 0) || (!S_ISREG(st.st_mode))) {
    /* Handle error */
}
if (fseeko(fp, 0, SEEK_END) != 0) {
    /* Handle error */
}
file_size = ftello(fp);
fseeko(fp, 0, SEEK_SET);
printf("file size %zu\n", file_size);
This method works fine and gets the size correctly. However, it is limited to regular files only. I tried to google the term "regular file" but I still don't quite understand it, and I do not know whether this function is reliable for my project.
Method 2 - Using strlen()
Since the max. size of a file in my project is 4MB, I can just calloc a 4MB buffer. After that, the file is read into the buffer, and I use strlen() to get the file size (or, more correctly, the length of the content). Since strlen() is portable, can I use this method instead? The code snippet is like this:
fp = fopen(file_path, "rb");
fread(file_buffer, 1024*1024*4, 1, fp);
printf("strlen %zu\n", strlen(file_buffer));
This method works too and returns
strlen 8
However, I couldn't see any similar approach on the Internet using this method. So I am thinking maybe I have missed something or there are some limitations of this approach which I haven't realized.
A regular file is nothing special like a device, socket, pipe etc., but a "normal" file.
From your task description it seems that, before sending, you must retrieve the size of a normal file.
So your way is right:
FILE* fp = fopen(...);
if (fp) {
    fseek(fp, 0, SEEK_END);
    long fileSize = ftell(fp);
    fseek(fp, 0, SEEK_SET); // needed for next read from beginning of file
    ...
    fclose(fp);
}
but you can do it without opening the file:
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

struct stat buffer;
int status;

status = stat("path to file", &buffer);
if (status == 0) {
    // size of file is in member buffer.st_size;
}
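For completeness, a minimal self-contained version of the stat() approach might look like this (POSIX only; the file name is a placeholder, and the S_ISREG() check mirrors the regular-file test from Method 1):
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;

    if (stat("file.bin", &st) != 0) {   /* placeholder path */
        perror("stat");
        return 1;
    }
    if (!S_ISREG(st.st_mode)) {         /* only regular files have a meaningful size here */
        fprintf(stderr, "not a regular file\n");
        return 1;
    }
    printf("file size: %lld bytes\n", (long long)st.st_size);   /* st_size is an off_t */
    return 0;
}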
OP can do it the easy way, since the "max. size of a file in my project is 4MB".
Rather than using strlen(), use the return value from fread(). strlen() stops on the first null character, so it may report too small a value (as @Sami Kuhmonen noted). Also, we do not know whether the data read contains any null character, so it may not be a string. Append a null character (and allocate +1) if the code needs to use the data as a string; but in that case, I'd expect the file to be opened in text mode.
Note that many OS's do not even use allocated memory until it is written.
Why is malloc not "using up" the memory on my computer?
fp = fopen(file_path, "rb");
if (fp) {
    #define MAX_FILE_SIZE 4194304
    char *buf = malloc(MAX_FILE_SIZE);
    if (buf) {
        size_t numread = fread(buf, sizeof *buf, MAX_FILE_SIZE, fp);
        // shrink if desired
        char *tmp = realloc(buf, numread);
        if (tmp) {
            buf = tmp;
            // Use buf with numread char
        }
        free(buf);
    }
    fclose(fp);
}
Note: Reading the entire file into memory may not be the best idea to begin with.

C sockets receive file

I'm developing a very simple FTP client. I have created a data connection socket, but I can't transfer a file successfully:
FILE *f = fopen("got.png", "w");
int total = 0;
while (1) {
    memset(temp, 0, BUFFSIZE);
    int got = recv(data, temp, sizeof(temp), 0);
    fwrite(temp, 1, BUFFSIZE, f);
    total += got;
    if (total == 1568) {
        break;
    }
}
fclose(f);
BUFFSIZE = 1568
I know that my file is 1568 bytes in size, so I try to download it just as a test. Everything is fine when I download .xml or .html files, but nothing good happens when I try to download png or avi files. The original file size is 1568 bytes, but the got.png file size is 1573. I can't figure out what might cause that.
EDIT:
I have modified my code, so now it looks like this (it can accept any file size):
FILE *f = fopen("got.png", "w");
while (1) {
    char *temp = (char *)malloc(BUFFSIZE);
    int got = recv(data, temp, BUFFSIZE, 0);
    fwrite(temp, 1, got, f);
    if (got == 0) {
        break;
    }
}
fclose(f);
The received file is still 2 bytes too long.
You are opening the file in text mode, so bare CR/LF characters are going to get translated to CRLF pairs when written to the file. You need to open the file in binary mode instead:
FILE *f = fopen("got.png", "wb");
You are always writing a whole buffer even if you have received only a partial one. This is the same problem as in roughly half of all TCP questions.
The memset is not necessary. And I hope that temp is an array, so that sizeof(temp) does not evaluate to the native pointer size; better to use BUFFSIZE there as well.
Seeing your edit, after fixing the first problem there is another one: Open the file in binary mode.
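Putting both fixes together, the receive logic might look like the sketch below. It reuses data and BUFFSIZE from the question and keeps error handling minimal; on POSIX it needs <stdio.h> and <sys/socket.h>, on Windows <winsock2.h>.
/* Drop-in replacement for the loop in the question. */
static int receive_to_file(int data, const char *path)
{
    char temp[BUFFSIZE];
    FILE *f = fopen(path, "wb");                /* binary mode: fix #1 */
    int got;

    if (!f)
        return -1;
    while ((got = recv(data, temp, sizeof(temp), 0)) > 0)
        fwrite(temp, 1, (size_t)got, f);        /* write only what was received: fix #2 */
    fclose(f);
    return got;                                 /* 0 on orderly close, < 0 on error */
}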

how to write .hex file on internal Flash ROM of microcontroller?

I want to write a .hex file of 28 KB onto the internal flash ROM, which is 32 KB in size. The above-mentioned question is about initialising the flash ROM. What I am doing to write that file, and the code, are given below:
I have a .hex file in Intel HEX format that I have to read and write to the internal flash ROM of an AT91 (8051) microcontroller.
Open the .hex file in "rb+" mode.
Get the length of the file and set the pointer to the start address (zeroth address).
As I need to write the file page by page, and the page size in my case is 256 bytes, I have to divide the file into chunks of 256 bytes.
After that I try to write the file.
Please let me know where I am going wrong. The code is given below.
int a, b;
int size, last_chunk;
FILE *file;
char *buffer1, name[20];
unsigned long fileLen;

file = fopen("flashROM.hex", "rb+");
if (!file)
{
    fprintf(stderr, "Unable to open file %s", name);
    return;
}

fseek(file, 0, SEEK_END);
fileLen = ftell(file);
printf("the file length is:%d\n", fileLen);
fseek(file, 0, SEEK_SET);

//Allocate memory
buffer1 = (char *)malloc(fileLen + 1);
if (!buffer1)
{
    fprintf(stderr, "Memory error!");
    fclose(file);
    return;
}

//Read file contents into buffer
fread(buffer1, fileLen, 1, file);

/* We have to divide the entire file into chunks of 256 bytes */
size = fileLen / 256;
printf("\nsize = %d\n", size);
last_chunk = fileLen % 256;
printf("\nlast chunk = %d bytes\n", last_chunk);

address = 0x0000;
printf("File upgradation should start from :%.4x", address);

for (a = 0; a <= size; a++)
{
    write(fd, &buffer1, size);
    printf("Iteration=[%d]\t Data=[%x]\n", a, *buffer1);
    usleep(5000);
}
for (b = 0; b <= last_chunk; b++)
{
    write(fd, &buffer1, 1);
    usleep(5000);
}
After executing the binary of above mentioned program, my result is mentioned below:
Writing upgrade file
the file length is:30855
size = 120
last chunk = 135 bytes
File upgradation should start from :0000
Iteration=[0] Data=[3a]
Iteration=[1] Data=[3a]
Iteration=[2] Data=[3a]
Iteration=[3] Data=[3a]
Iteration=[4] Data=[3a]
Iteration=[5] Data=[3a]
Iteration=[6] Data=[3a]
Iteration=[7] Data=[3a]
I don't know why the data is always "3a"; it's not clear.
Please let me know where I have gone wrong in the program.
You need a special tool that reads the .hex file and does whatever is necessary to write it into the flash memory of your controller (JTAG, talk to a bootloader via whatever means of communication, ...).
The tool depends on your specific microcontroller family. 8051 is not enough information, there is a huge variety of 8051s from many vendors.
Try using "wb+" as your mode for fopen
Opening the file as writable will create it if it doesn't already exist.
The code where you write the data also looks suspect. You're passing a pointer to a pointer to the buffer into write() rather than a plain pointer to your buffer, and the pointer is never incremented, so you're writing the same data repeatedly. Note also that size is the number of full 256-byte pages, not a byte count, so each full-page write should be 256 bytes long.
You could try replacing your writing code with something like the following:
char *ptr = buffer1;
for (a = 0; a < size; a++)               /* `size` full 256-byte pages */
{
    write(fd, ptr, 256);                 /* write one 256-byte page, not `size` bytes */
    printf("Iteration=[%d]\t Data=[%x]\n", a, *ptr);   /* first byte of the page just written */
    ptr += 256;                          /* advance to the next page */
    usleep(5000);
}
for (b = 0; b < last_chunk; b++)         /* remaining bytes, one at a time */
{
    write(fd, ptr, 1);
    ptr++;
    usleep(5000);
}
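The same idea can be factored into a small helper that writes any buffer in fixed-size pages. This is only a sketch: fd, the 256-byte page size and the 5 ms delay are taken from the question, and in real code the return value of write() deserves more careful handling (partial writes, EINTR).
#include <unistd.h>   /* write(), usleep() */

/* Write `len` bytes from `buf` to `fd` in pages of at most `page` bytes,
   pausing between pages as the original code does.
   Returns 0 on success, -1 on a short or failed write. */
static int write_in_pages(int fd, const char *buf, size_t len, size_t page)
{
    while (len > 0) {
        size_t chunk = (len < page) ? len : page;
        ssize_t written = write(fd, buf, chunk);
        if (written != (ssize_t)chunk)
            return -1;
        buf += chunk;
        len -= chunk;
        usleep(5000);          /* same pacing as the original loops */
    }
    return 0;
}

Called as write_in_pages(fd, buffer1, fileLen, 256), this replaces both loops above.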

C function to insert text at particular location in file without over-writing the existing text

I have written a program which takes a file as input and, whenever it finds a line longer than 80 characters, adds a \ and a \n to that file so that lines are at most 80 characters wide.
The problem is that I have used fseek to insert the \ and \n whenever the length exceeds 80, so it overwrites two characters of the line that exceeds length 80. Is there a way I can insert text without overwriting the existing text?
Here is my code:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>   /* for exit() */

int main(int argc, char *argv[])
{
    FILE *fp1, *fp2;
    int prev = 0, now = 0;
    int ch;           /* int, so EOF can be distinguished from a character */
    int flag = 0;
    long cur;

    fp1 = fopen(argv[1], "r+");
    if (fp1 == NULL) {
        printf("Unable to open the file to read. Program will exit.");
        exit(0);
    }
    else {
        while ((ch = fgetc(fp1)) != EOF) {
            if (ch != ' ' && ch != '\n') {
                now = now + 1;
            }
            else {
                if (now >= 80) {
                    fseek(fp1, cur, SEEK_SET);
                    fputc('\\', fp1);
                    fputc('\n', fp1);
                    now = 0;
                    continue;
                }
                if (ch == '\n') {
                    flag = 0;
                    now = 0;
                    continue;
                }
                else {
                    prev = now;
                    cur = ftell(fp1);
                }
                now = now + 1;
            }
        }
    }
    fclose(fp1);
    return 0;
}
To run it, you need to do the following:
user#ubuntu$ cc xyz.c
user#ubuntu$ ./a.out file_to_check.txt
While there are a couple of techniques to do it in-place, you're working with a text file and want to perform insertions. Operating systems typically don't support text file insertions as a file system primitive and there's no reason they should do that.
The best way to do that kind of thing is to open your file for reading, open a new file for writing, copy the part of the file before the insertion point, insert the data, copy the rest, and then move the new file over the old one.
This is a common technique and it has a purpose. If anything goes wrong (e.g. with your system), you still have the original file and can repeat the transaction later. If you start two instances of the process and use a specific pattern, the second instance is able to detect that the transaction has already been started. With exclusive file access, it can even detect whether the transaction was interrupted or is still running.
That way is much less error prone than any of the techniques performed directly on the original file and is used by all of those traditional tools like sed even if you ask them to work in-place (sed -i). Another bonus is that you can always rename the original file to one with a backup suffix before overwriting it (sed offers such an option as well).
The same technique is often used for configuration files even if your program is writing an entirely new version and doesn't use the original file for that. It hasn't been long since many internet magazines claimed that ext4 accidentally truncates configuration files to zero length. This was exactly because some applications kept the configuration files open and truncated while the system was forcedly shut down. Those application often tampered with the original configuration files before they had the data ready and then even kept them open without syncing them, which made the window for data corruption much larger.
TL;DR version:
When you value your data, don't destroy it before you have the replacement data ready.
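Applied to the original 80-column problem, a minimal sketch of this copy-to-a-new-file approach might look like the code below. The temporary name wrapped.tmp is just a placeholder, the wrapping rule is simplified (a line is broken so that the \ becomes its 80th character), and the original file is only replaced by rename() once the new file is complete.
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    FILE *in, *out;
    int ch, col = 0;

    if (argc < 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    in = fopen(argv[1], "r");
    out = fopen("wrapped.tmp", "w");      /* placeholder temporary name */
    if (!in || !out) {
        perror("fopen");
        return 1;
    }
    while ((ch = fgetc(in)) != EOF) {
        if (ch == '\n') {                 /* existing line break: reset the column counter */
            fputc(ch, out);
            col = 0;
            continue;
        }
        if (col == 79) {                  /* make '\' the 80th character, then break the line */
            fputc('\\', out);
            fputc('\n', out);
            col = 0;
        }
        fputc(ch, out);
        col++;
    }
    fclose(in);
    fclose(out);
    /* Replace the original only after the new file is completely written. */
    if (rename("wrapped.tmp", argv[1]) != 0) {
        perror("rename");
        return 1;
    }
    return 0;
}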
No, there's no way to insert characters into an existing file. You will need to use a second file to do that.
This is the function I use for this kind of thing:
int finsert(FILE *file, const char *buffer) {
    long int insert_pos = ftell(file);
    if (insert_pos < 0) return insert_pos;

    // Grow from the bottom
    int seek_ret = fseek(file, 0, SEEK_END);
    if (seek_ret) return seek_ret;
    long int total_left_to_move = ftell(file);
    if (total_left_to_move < 0) return total_left_to_move;

    char move_buffer[1024];
    long int amount_to_grow = strlen(buffer);
    if (amount_to_grow >= sizeof(move_buffer)) return -1;

    total_left_to_move -= insert_pos;

    for (;;) {
        size_t amount_to_move = sizeof(move_buffer);
        if (total_left_to_move < amount_to_move) amount_to_move = total_left_to_move;

        long int read_pos = insert_pos + total_left_to_move - amount_to_move;

        seek_ret = fseek(file, read_pos, SEEK_SET);
        if (seek_ret) return seek_ret;
        fread(move_buffer, amount_to_move, 1, file);
        if (ferror(file)) return ferror(file);

        seek_ret = fseek(file, read_pos + amount_to_grow, SEEK_SET);
        if (seek_ret) return seek_ret;
        fwrite(move_buffer, amount_to_move, 1, file);
        if (ferror(file)) return ferror(file);

        total_left_to_move -= amount_to_move;
        if (!total_left_to_move) break;
    }

    seek_ret = fseek(file, insert_pos, SEEK_SET);
    if (seek_ret) return seek_ret;
    fwrite(buffer, amount_to_grow, 1, file);
    if (ferror(file)) return ferror(file);
    return 0;
}
Use it like this (assert() requires <assert.h>):
#include <assert.h>

FILE *file = fopen("test.data", "r+");
assert(file);
const char *to_insert = "INSERT";
fseek(file, 3, SEEK_SET);
finsert(file, to_insert);
assert(ferror(file) == 0);
fclose(file);
This (as others here have mentioned) can theoretically corrupt the file if there is an error, but here is some code that actually does it. Doing it in-place like this is usually fine, but you should back up the file first if you are worried about it.
No, there is no way. You have to create a new file, or shift the rest of the file's contents by two characters to make room.
You can load the file in chunks (in your case 80 characters), append the two characters (backslash and newline), and write the content into another file.
Another implementation uses tmpfile():
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

FILE *tmp_buf;

int finsert(FILE *f, const char *msg) {
    fseek(tmp_buf, 0, SEEK_SET);
    fpos_t f_pos;
    assert(fgetpos(f, &f_pos) == 0);

    char buf[50];
    while (fgets(buf, 50, f))
        fputs(buf, tmp_buf);
    long tmp_buf_pos = ftell(tmp_buf);

    fsetpos(f, &f_pos);
    fputs(msg, f);

    fseek(tmp_buf, 0, SEEK_SET);
    while (--tmp_buf_pos >= 0)
        fputc(fgetc(tmp_buf), f);

    return ferror(f);
}

int main()
{
    FILE *f = fopen("result.txt", "wb+");
    assert(f != NULL);
    fputs("some text", f);

    tmp_buf = tmpfile();
    assert(tmp_buf != NULL);

    assert(finsert(f, "another text") == 0);

    fclose(f);
    perror("");
}
tested in Cygwin64
