According to this site:
http://pedrowa.weba.sk/docs/ApiDoc/apidoc_ap_get_client_block.html
This function:
ap_get_client_block(request_rec *r, char *buffer, int bufsiz);
Reads a chunk of POST data coming in from the network when a user requests a webpage.
So far, I have the following core code that reads the data:
while (len_read > 0) {
    len_read = ap_get_client_block(r, argsbuffer, length);
    if ((rpos + len_read) > length) { rsize = length - rpos; }
    else { rsize = len_read; }
    memcpy((char *)*rbuf + rpos, (char *)argsbuffer, (size_t)rsize);
    rpos += rsize;
}
argsbuffer is a character array of length bytes, rbuf is a valid pointer, and the rest of the variables are of type apr_off_t.
If I changed this code to this:
while (len_read > 0) {
    len_read = ap_get_client_block(r, argsbuffer, length);
    if ((rpos + len_read) > length) { rsize = length - rpos; }
    else { rsize = len_read; }
    memcpy((char *)*rbuf + rpos, (char *)argsbuffer, (size_t)rsize);
    rpos += rsize;
    if (rpos > wantedlength) { len_read = 0; }
}
would I be able to close the stream in some way and keep the processing speed up, without corrupt data coming in?
I already executed ap_setup_client_block(r, REQUEST_CHUNKED_ERROR) and made sure ap_should_client_block(r) returned true before running the first block of code above. So that is, in a sense, like opening a file. ap_get_client_block(r, argsbuffer, length) is like reading a file. Now, is there some ap_ function equivalent to close?
What I want to avoid is corrupt data.
The incoming data comes in various sizes, and I only want to capture a certain piece of it without having a loop go through the entire set of data every time. That's why I posted this question.
For example: if I wanted to look for "A=123" as the input data within the first fixed 15 bytes, and the first set of data is something like:
S=1&T=2&U=35238952958&V=3468348634683963869&W=fjfdslhjsdflhjsldjhsljsdlkj
Then I want the program to examine only:
S=1&T=2&U=35238
I'd be tempted to use the second block of code. The first block works, but it goes through everything.
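In other words, something like the sketch below is what I have in mind. The names mirror my code above, wantedlength is the number of bytes I actually care about, the variable types are my own choice here, and the trailing drain loop is just my guess at the missing "close" step:
// Sketch of the capped loop. Assumes the same setup as above
// (ap_setup_client_block / ap_should_client_block already done), that
// argsbuffer is char[length], and *rbuf has room for wantedlength bytes.
apr_off_t rpos = 0;
long len_read = 1;

while (rpos < wantedlength && len_read > 0) {
    len_read = ap_get_client_block(r, argsbuffer, length);
    if (len_read > 0) {
        apr_off_t rsize = len_read;
        if (rpos + rsize > wantedlength)
            rsize = wantedlength - rpos;      /* keep only the first part */
        memcpy((char *)*rbuf + rpos, argsbuffer, (size_t)rsize);
        rpos += rsize;
    }
}

/* There is no explicit "close" call; the body is simply consumed. To avoid
   leaving unread data on the connection, either keep reading into argsbuffer
   (without copying) until ap_get_client_block() returns 0, or try
   ap_discard_request_body(r), which reads and throws away what is left. */
while (len_read > 0)
    len_read = ap_get_client_block(r, argsbuffer, length);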
Anyone have any idea? I want this to execute on a live server, as I am improving a security detection system. If anyone knows any other functionality that I should add to or remove from my code, let me know. I want to optimize for speed.
Using the Windows API, I want to implement something like the following, i.e. getting the current microphone input level.
I am not allowed to use external audio libraries, but I can use Windows libraries. So I tried using waveIn functions, but I do not know how to process audio input data in real time.
This is the method I am currently using:
Record for 100 milliseconds
Select highest value from the recorded data buffer
Repeat forever
But I think this is way too hacky, and not a recommended way. How can I do this properly?
Having built a tuning wizard for a very dated, but well known, A/V conferencing application, I can say that what you describe is nearly identical to what I did.
A few considerations:
Enqueue 5 to 10 of those 100ms buffers into the audio device via waveInAddBuffer. IIRC, when the waveIn queue goes empty, weird things happen. Then, as each waveInProc callback occurs, search the completed buffer for the sample with the highest absolute value, as you describe. Then plot that onto your visualization. Requeue the completed buffers.
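If it helps, a rough skeleton of that setup in plain C might look like the sketch below. Treat it as an illustration rather than working code: 16-bit mono at 44.1 kHz is assumed, error checking is omitted, and the function and variable names are mine, not part of any API.
/* Sketch only: open the default capture device, queue several ~100 ms
   buffers, and track the peak sample of each completed buffer.
   Link with winmm.lib. */
#include <windows.h>
#include <mmsystem.h>
#include <stdlib.h>

#define NUM_BUFFERS 8
#define SAMPLES_PER_BUFFER 4410            /* ~100 ms at 44.1 kHz */

static volatile LONG g_peak = 0;           /* latest peak, 0..32768 */

static void CALLBACK waveInProc(HWAVEIN hwi, UINT uMsg, DWORD_PTR dwInstance,
                                DWORD_PTR dwParam1, DWORD_PTR dwParam2)
{
    WAVEHDR *hdr;
    const short *samples;
    DWORD i, count;
    LONG peak = 0;

    if (uMsg != WIM_DATA)
        return;

    hdr = (WAVEHDR *)dwParam1;
    samples = (const short *)hdr->lpData;
    count = hdr->dwBytesRecorded / sizeof(short);

    for (i = 0; i < count; ++i) {          /* highest absolute sample value */
        LONG v = samples[i] >= 0 ? samples[i] : -samples[i];
        if (v > peak)
            peak = v;
    }
    InterlockedExchange(&g_peak, peak);    /* hand the peak to the UI thread */

    /* Requeue the buffer so the waveIn queue never runs dry. Strictly,
       MSDN warns against calling waveIn* from inside the callback; a safer
       variant sets an event and requeues from your own thread. */
    waveInAddBuffer(hwi, hdr, sizeof(WAVEHDR));
}

static void start_capture(HWAVEIN *out)
{
    WAVEFORMATEX wfx = {0};
    HWAVEIN hwi;
    int i;

    wfx.wFormatTag      = WAVE_FORMAT_PCM;
    wfx.nChannels       = 1;
    wfx.nSamplesPerSec  = 44100;
    wfx.wBitsPerSample  = 16;
    wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

    waveInOpen(&hwi, WAVE_MAPPER, &wfx, (DWORD_PTR)waveInProc, 0,
               CALLBACK_FUNCTION);

    for (i = 0; i < NUM_BUFFERS; ++i) {
        WAVEHDR *hdr = calloc(1, sizeof(WAVEHDR));
        hdr->lpData = malloc(SAMPLES_PER_BUFFER * sizeof(short));
        hdr->dwBufferLength = SAMPLES_PER_BUFFER * sizeof(short);
        waveInPrepareHeader(hwi, hdr, sizeof(WAVEHDR));
        waveInAddBuffer(hwi, hdr, sizeof(WAVEHDR));
    }

    waveInStart(hwi);
    *out = hwi;
}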
It might seem obvious to map the sample value onto your visualization linearly. For example, to plot a 16-bit sample:
// convert sample magnitude from 0..32768 to 0..N
length = (sample * N) / 32768;
DrawLine(length);
But then when you speak into the microphone, that visualization won't seem as "active" or "vibrant".
A better approach is to give more strength to those lower-energy samples. An easy way to do this is to re-plot along the μ-law curve (or use a table lookup).
length = (sample * N) / 32768;              // linear map to 0..N
length = N * log(1 + length) / log(1 + N);  // re-plot along a log curve
length = min(length, N);                    // clamp to the meter length
DrawLine(length);
You can tweak the above approach to whatever looks good.
Instead of computing the values yourself, you can rely on the values from Windows. These are actually the values displayed in your screenshot from the Windows Settings.
See the following sample for the IAudioMeterInformation interface:
https://learn.microsoft.com/en-us/windows/win32/coreaudio/peak-meters.
It is written for playback, but you can use it for capture as well.
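For illustration, the capture-side version might look roughly like this in plain C. It is a sketch, not the Microsoft sample itself: error handling is omitted, and depending on your toolchain you may need initguid.h (as below) and/or uuid.lib so that the CLSID/IID constants resolve at link time.
/* Sketch: poll the peak level of the default capture endpoint via
   IAudioMeterInformation. Link with ole32.lib. */
#define COBJMACROS
#include <windows.h>
#include <initguid.h>
#include <mmdeviceapi.h>
#include <endpointvolume.h>
#include <stdio.h>

int main(void)
{
    IMMDeviceEnumerator *enumerator = NULL;
    IMMDevice *mic = NULL;
    IAudioMeterInformation *meter = NULL;
    int i;

    CoInitialize(NULL);
    CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL,
                     &IID_IMMDeviceEnumerator, (void **)&enumerator);
    /* eCapture = default recording device (the microphone) */
    IMMDeviceEnumerator_GetDefaultAudioEndpoint(enumerator, eCapture, eConsole, &mic);
    IMMDevice_Activate(mic, &IID_IAudioMeterInformation, CLSCTX_ALL, NULL,
                       (void **)&meter);

    for (i = 0; i < 50; ++i) {
        float peak = 0.0f;
        IAudioMeterInformation_GetPeakValue(meter, &peak);  /* 0.0 .. 1.0 */
        printf("peak = %.3f\n", peak);
        Sleep(100);
    }

    IAudioMeterInformation_Release(meter);
    IMMDevice_Release(mic);
    IMMDeviceEnumerator_Release(enumerator);
    CoUninitialize();
    return 0;
}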
One remark: if you open IAudioMeterInformation for a microphone but no application has opened a stream from that microphone, the level will be 0.
This means that while you want to display your microphone peak meter, you will need to keep a microphone stream open, as you already do.
Also, read the documentation for IAudioMeterInformation; it may not be what you need, as it reports the peak value. It depends on what you want to do with it.
I am trying to implement a pulse generator in SIMULINK that needs to know the previous 2 input values, i.e. I need to know the previous 2 state values of the input signal. Also, I need to know the previous output value.
My pseudo code is:
IF !input AND input_prevValue AND !input_prevValue2
output = !output_pv
ELSE
output = output_pv;
I know that I can use legacy function importer and use C code to do this job in SIMULINK. However, the problem arises when you apply a configuration reference set to your model. The key problem is the flexibility. When you use this model somewhere else (say share it with a colleague or whoever), unless you have used a configuration reference set, you can rebuild the code (i.e. from S-Function Block) and run your model. But you cannot rebuild the code if the configuration reference set is applied.
My solution would be to implement the logic in a way that I can do the same without C functions. I tried to use the memory block in SIMULINK but apparently it doesn't do it. Does anyone know how to hold previous values for input and output in SIMULINK (for as long as the model is open)?
Have you tried with a MATLAB Function block? Alternatively, if you have a Stateflow license, this would lend itself nicely to a state chart.
EDIT
Based on your pseudo-code, I would expect the code in the MATLAB Function block to look like this
function op = logic_fcn(ip,ip_prev,ip_prev2,op_prev)
%#codegen
if ~ip && ip_prev && ~ip_prev2
    op = ~op_prev;
else
    op = op_prev;
end
where ip, ip_prev, ip_prev2 and op_prev are defined as boolean inputs and op as a boolean output. If you are using a fixed-step discrete solver, the memory block should work so that you would for example feed the output of the MATLAB Function block to a memory block (with the correct sample time), and the output of the memory block to the op_prev input of the MATLAB Function block.
You could (and should) test your function in MATLAB first (and/or a test Simulink model) to make sure it works and produces the output you expect for a given input.
This is reasonably straightforward to do with fundamental blocks.
Note that for the Switch block the "Criteria for passing first input:" has been changed to "u2~=0".
Please keep in mind that I am a TOTAL NOOB. I am not asking anyone to code my entire project, but I am easily lost in many cases. Your help and patience are much appreciated ...
My AS3 code looks something like this:
var totalFeatureImageDownloads:Number = 0; //total file size of all feature images, in bytes
var totalFeatureImageDownloaded:Number = 0; //total downloaded of all feature images, in bytes
for each (var featureimage:XML in xml.featureimages.imgurl)
{
    trace(featureimage);
    var featureImageRequest:URLRequest = new URLRequest(featureimage);
    var featureImageLoader:URLLoader = new URLLoader(featureImageRequest);
    totalFeatureImageDownloads = totalFeatureImageDownloads + featureImageLoader.bytesTotal;
    trace(featureImageLoader.bytesTotal);
}
Currently I have 3 images in the XML file, and their URLs are traced properly. However, bytesTotal always equals 0. Also, I know that this loop creates 3 identically named URLRequests and 3 identically named URLLoaders; I probably need to create 3 differently named ones.
I would like all the images to be downloaded simultaneously, and I need to calculate the combined file size for all three, along with the download status as a fraction (bytesLoaded / bytesTotal).
I have multiple scenarios like this, and am stuck until I get this finished. Your help is much appreciated. Cheers.
It is not possible to read bytesTotal from a newly created URLLoader, because the client first needs to connect to the web server to learn the size of the file to be downloaded. You need to listen for the first ProgressEvent.PROGRESS event sent by the URLLoader; once that event has been sent, the bytesTotal property is more likely to be accurate. I say "more likely" because it is not possible to get a total file size if the data is dynamically generated on the server, but in the general case where you download a file from the server it works.
I recognize the general scenario. There are libraries to aid with this, such as LoaderMax or BulkLoader. I personally use BulkLoader: https://github.com/arthur-debert/BulkLoader
I have this code
n_userobject inv_userobject[]
For i = 1 to dw_1.Rowcount()
inv_userobject[i] = create n_userobject
.
.
.
NEXT
dw_1.RowCount() returns only 210 rows. It's odd that somewhere in the range of 170 and up, the application stops and crashes on inv_userobject[i] = create n_userobject.
My question: is there any limit on arrays or on creating user objects in arrays?
I already tried destroying the objects after the loop to check whether that would be a possible solution, but it still crashes.
Or how can I somehow refresh the user object?
Has anyone else out there encountered this?
Thanks for all your help.
First, your memory problem. You're definitely not running into an array limit. If I were to take a guess, one of the instance variables in n_userobject isn't being cleaned up properly (i.e. it points to a class that isn't being destroyed when the parent class is destroyed), or it points to a class that similarly doesn't clean itself up. If you've got PB Enterprise, I'd do a profiling trace with a smaller loop and see what is being garbage collected (there's a utility called CDMatch that really helps with this process).
Secondly, let's face it, you're just doing this to avoid writing a reset method. Even if you get this functional, it will never be as efficient as writing your own reset method and reusing the same instance over again. Yes, it's another method you'll have to maintain whenever the instance variable list changes or the defaults change, but you'll easily gain that back in performance.
Good luck,
Terry.
I'm assuming the crash you're facing is at the PBVM level, and not a regular PB exception (which you can catch in your code). If I'm wrong, please add the exception details.
A loop of 170-210 iterations really isn't a large one. However, crashes within loops are usually the result of resource exhaustion. What we usually do in long loops is call GarbageCollect() occasionally. How often it should be called depends on what your code does: calling it frequently could allow the use of less memory, but it will slow down the run. Read this for more.
If this doesn't help, make sure the error does not come from some non-PB code (imported DLL or so). You can check the stack trace during the crash to see the exception's origin.
Lastly, if you're supported by Sybase (or a local representative), you can send them a crash dump. They can analyze it, and see if it's a bug in PB, and if so, let you know when it was (or will be) fixed.
What I would normally do with a DataWindow is to create an object that processes the data in a row and call it for each row.
The only suggestion I have is to take the RowCount() call out of the FOR statement (For i = 1 to dw_1.Rowcount()), since this causes the code to recount the rows every time it checks the loop condition. Get the count into a variable first and then use the variable. It should run a bit better and be far easier to debug.
First Stack Overflow question! I've searched... I promise. I haven't found any answers to my predicament. I have a severely aggravating problem, to say the least. To make a very long story short, I am developing the infrastructure for a game where mobile applications (an Android app and an iOS app) communicate with a server using sockets to send data to a database.
The back end server script (which I call BES, or Back End Server) is several thousand lines of code long. Essentially, it has a main method that accepts incoming connections on a socket and forks them off, and a method that reads the input from the socket and determines what to do with it. Most of the code lies in the methods that exchange data with the database and send it back to the mobile apps. All of them work fine, except for the newest method I have added.
This method grabs a large amount of data from the database, encodes it as a JSON object, and sends it back to the mobile app, which decodes the JSON object and does what it needs to do. My problem is that this data is very large and most of the time does not make it across the socket in one write. Thus, I added one additional write to the socket that informs the app of the size of the JSON object it is about to receive. However, after this write happens, the next write sends empty data to the mobile app.
The odd thing is, when I remove this first write that sends the size of the JSON object, the actual sending of the JSON object works fine. It's just very unreliable and I have to hope that it sends it all in one read. To add more oddity to the situation, when I make the size of the data that the second write sends a huge number, the iOS app will read it properly, but it will have the data in the middle of an otherwise empty array.
What in the world is going on? Any insight is greatly appreciated! Below is just a basic snippet of my two write commands on the server side.
Keep in mind that EVERYWHERE else in this script the reads and writes work fine; this is the only place where I do 2 write operations back to back.
The server script runs on an Ubuntu server and is written in native C using Berkeley sockets; the iOS app uses a wrapper class called AsyncSocket.
int n;
//outputMessage contains a string that tells the mobile app how long the next message
//(returnData) will be
n = write(sock, outputMessage, sizeof(outputMessage));
if(n < 0)
//error handling is here
//returnData is a JSON encoded string (well, char[] to be exact, this is native-C)
n = write(sock, returnData, sizeof(returnData));
if(n < 0)
//error handling is here
The mobile app makes two read calls and gets outputMessage just fine, but returnData is always just a bunch of empty data, unless I replace sizeof(returnData) with some hugely large number, in which case the iOS app will receive the data in the middle of an otherwise empty data object (an NSData object, to be exact). It may also be important to note that the method I use on the iOS side in my AsyncSocket class reads data up to the length that it receives from the first write call. So if I tell it to read, say, 10000 bytes, it will create an NSData object of that size and use it as the buffer when reading from the socket.
Any help is greatly, GREATLY appreciated. Thanks in advance everyone!
It's just very unreliable and I have to hope that it sends it all in one read.
The key to successful programming with TCP is that there is no concept of a TCP "packet" or "block" of data at the application level. The application only sees a stream of bytes, with no boundaries. When you call write() on the sending end with some data, the TCP layer may choose to slice and dice your data in any way it sees fit, including coalescing multiple blocks together.
You might write 10 bytes two times and read 5 then 15 bytes. Or maybe your receiver will see 20 bytes all at once. What you cannot do is just "hope" that some chunks of bytes you send will arrive at the other end in the same chunks.
What might be happening in your specific situation is that the two back-to-back writes are being coalesced into one, and your reading logic simply can't handle that.
Thanks for all of the feedback! I incorporated everyone's answers into the solution. I created a method that writes an iovec struct to the socket using writev instead of write. The wrapper class I'm using on the iOS side, AsyncSocket (which is fantastic, by the way... check it out here --> AsyncSocket Google Code Repo), handles the data sent this way just fine, apparently behind the scenes, as it does not require any additional effort on my part to read all of the data correctly. The AsyncSocket class now does not call my delegate method didReadData until it receives all of the data specified in the iovec struct.
Again, thank you all! This helped greatly. Literally overnight I got responses for an issue I've been up against for a week now. I look forward to becoming more involved in the stackoverflow community!
Sample code for solution:
//returnData is the JSON encoded string I am returning
//sock is my predefined socket descriptor
struct iovec iov[1];
int iovcnt = 0;
ssize_t n;

iov[0].iov_base = returnData;
iov[0].iov_len = strlen(returnData);
iovcnt = sizeof(iov) / sizeof(struct iovec);

n = writev(sock, iov, iovcnt);
if (n < 0)
    //error handling here
while (n < (ssize_t)iov[0].iov_len)
    //rebuild the iovec with the remaining data from returnData
    //(from position n to the end of the string) and call writev again
You should really define a function write_complete that completely writes a buffer to a socket. Check the return value of write; it might also be a positive number that is smaller than the size of the buffer. In that case you need to write the remaining part of the buffer again.
Oh, and using sizeof is error-prone, too. In the above write_complete function you should therefore print the given size and compare it to what you expect.
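For example, a sketch of such a write_complete might look like this (untested, and a blocking socket is assumed):
// Keeps calling write() until the whole buffer has gone out or an error occurs.
#include <unistd.h>
#include <errno.h>
#include <sys/types.h>

ssize_t write_complete(int sock, const char *buf, size_t len)
{
    size_t sent = 0;

    while (sent < len) {
        ssize_t n = write(sock, buf + sent, len - sent);
        if (n < 0) {
            if (errno == EINTR)
                continue;           /* interrupted by a signal: retry */
            return -1;              /* real error: let the caller handle it */
        }
        sent += (size_t)n;          /* partial write: send the rest next pass */
    }
    return (ssize_t)sent;
}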
Ideally, on the server you want to write the header (the size) and the data atomically; I'd do that using the scatter/gather call writev(). Also, if there is any chance that multiple threads can write to the same socket concurrently, you may want to protect the write call with a mutex.
writev() will also write all the data before returning (if you are using blocking I/O).
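As a sketch of what I mean (the fixed 10-digit decimal length header and the function name are just assumptions for illustration; use whatever header format your protocol already defines, and still check for errors and short writes):
// Send a length header and the JSON payload in one writev() call so the two
// never get separated by another thread's write.
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

ssize_t send_framed(int sock, const char *json)
{
    char header[11];
    struct iovec iov[2];

    /* fixed-width, zero-padded decimal length, e.g. "0000001234" */
    snprintf(header, sizeof(header), "%010zu", strlen(json));

    iov[0].iov_base = header;
    iov[0].iov_len  = 10;               /* don't send the trailing '\0' */
    iov[1].iov_base = (void *)json;
    iov[1].iov_len  = strlen(json);

    return writev(sock, iov, 2);
}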
On the client you may have to have a state machine that reads the length of the buffer then sits in a loop reading until all the data has been received, as large buffers will be fragmented and come in in various sized blocks.
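In plain C, the receiving side of that state machine might look roughly like the sketch below (built around the 10-digit header assumed above; on iOS, AsyncSocket's read-to-a-fixed-length call does essentially the same job for you):
// First read the fixed-size length header, then keep reading until the whole
// payload has arrived, since large buffers come in in various sized blocks.
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static int read_full(int sock, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(sock, buf + got, len - got);
        if (n <= 0)
            return -1;          /* error or peer closed the connection */
        got += (size_t)n;       /* partial read: loop until we have it all */
    }
    return 0;
}

char *recv_framed(int sock)     /* returns a malloc'd JSON string, or NULL */
{
    char header[11] = {0};
    if (read_full(sock, header, 10) < 0)
        return NULL;

    size_t len = (size_t)strtoul(header, NULL, 10);
    char *json = malloc(len + 1);
    if (json == NULL || read_full(sock, json, len) < 0) {
        free(json);
        return NULL;
    }
    json[len] = '\0';
    return json;
}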