Aux Scan descriptors in Sybase

I am writing a query for Sybase (a UNION query). According to a Sybase article, there is one aux scan descriptor for each table in the query, so given that I have 100 queries with 3 tables per query, it should need 300 aux scan descriptors.
But when I run the query, it says that it requires 303 aux scan descriptors. The 3 extra aux scan descriptors seem to be a variable number; previously I used to get an error saying it required 305 descriptors. I went through the documentation but did not find anything helpful. Can anyone explain where this variable extra number comes from?
In another case, when I set aux scan descriptors to 278 and run the query, the query runs successfully, and 25 descriptors are added to the value that I had set using the command below:
sp_configure "aux scan descriptors", x
Can anyone give some pointers on this?

The 'extra' scan descriptors are likely for worktables, the number of which can vary based on the query plan generated by the optimizer.

Related

Fast way to get information from a huge logfile on unix

I have a 6 GB application log file. The log lines have the following format (shortened):
[...]
timestamp;hostname;sessionid-ABC;type=m
timestamp;hostname;sessionid-ABC;set_to_TRUE
[...]
timestamp;hostname;sessionid-HHH;type=m
timestamp;hostname;sessionid-HHH;set_to_FALSE
[...]
timestamp;hostname;sessionid-ZZZ;type=m
timestamp;hostname;sessionid-ZZZ;set_to_FALSE
[...]
timestamp;hostname;sessionid-WWW;type=s
timestamp;hostname;sessionid-WWW;set_to_TRUE
I have a lot of sessions with more than these two lines.
I need to find all sessions with type=m and set_to_TRUE.
My first attempt was to grep all session IDs with type=m and write them to a file, then loop over every line of that file (one session ID per line) and grep the big log file for sessionID;set_to_TRUE.
This method takes a very long time. Can anyone give me a hint on how to solve this in a much better and faster way?
Thanks a lot!

grep 200 GB SQL file

We have a roughly 200 GB .sql file and we are grepping it for some tables; it takes around an hour and a half. Is there any method to reduce this time, or any other efficient way to filter for some tables? Any help will be appreciated.
The GNU parallel program can split the input across multiple child processes, each of which runs grep over its respective part of the input. By using multiple processes (presumably you have enough CPU cores to apply to this work), it can finish faster by running in parallel.
cat 200-gb-table.sql | parallel --pipe grep '<pattern>'
But if you need to know the context of where the pattern occurs (e.g. the line number in the input), this might not be what you need.

Logstash unique output to file

Does anybody know how to configure logstash to obtain distinct results in the output file?
For example, if I have a couple of equal lines in the input source, I will get duplicated results in my output file. I am looking for something like 'uniq' on Unix.
Thanks in advance for your help.
The only way you could do this with logstash would be to write a plugin for it, but since logstash is a long-running process, you'd have to be very careful about how you go about it. To output only unique items, you have to keep track in memory of every item you have ever seen, so without some way to purge that data, your logstash process will consume all of the JVM's memory and eventually crash.
So it may make more sense to preprocess your log file with some external process that does a sort/unique and then writes the output to a log file that you can then ingest with logstash.
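If the duplicates are whole identical lines in a plain text log, that preprocessing step can be as simple as the sketch below (file names are placeholders, not from the original question):
sort app.log | uniq > app.dedup.log
logstash would then be pointed at app.dedup.log instead of the raw file.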

Tokyo Cabinet: .tcb.wal file created along with .tcb file; DB size does not decrease when deleting records

I am using Tokyo Cabinet's B+ tree API to create a lookup database. In a Linux environment I see a .tcb.wal file created along with the actual .tcb database file. The size of this file is 0. I wonder whether it is a lock file created to help with synchronization. Also, when I delete records from the database, the size of the file does not decrease. Is there a reason why it behaves like that?
The extension .wal stands for Write Ahead Logging file. This file is only relevant if you use any transaction functions; most applications do not use these. (For details, search for "ahead" in the documentation.)
The file size does not change for every deletion for efficiency reasons. Similarly, if you create an empty database, it will reserve space for faster insertions.
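As an illustration of when the .wal file actually comes into play, here is a minimal sketch against the standard tcbdb.h B+ tree API (the lookup.tcb file name is a placeholder): the write-ahead log is only used between tcbdbtranbegin and tcbdbtrancommit/tcbdbtranabort, and the delete does not shrink the .tcb file.

#include <stdio.h>
#include <tcbdb.h>

int main(void) {
    TCBDB *bdb = tcbdbnew();
    if (!tcbdbopen(bdb, "lookup.tcb", BDBOWRITER | BDBOCREAT)) {
        fprintf(stderr, "open error: %s\n", tcbdberrmsg(tcbdbecode(bdb)));
        return 1;
    }
    /* The .tcb.wal file is only relevant while a transaction is open. */
    if (tcbdbtranbegin(bdb)) {
        tcbdbput2(bdb, "key-1", "value-1");  /* store a record */
        tcbdbout2(bdb, "key-1");             /* delete it again; the .tcb file does not shrink */
        tcbdbtrancommit(bdb);
    }
    tcbdbclose(bdb);
    tcbdbdel(bdb);
    return 0;
}

Compile and link against libtokyocabinet (e.g. with -ltokyocabinet).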

Sending multiple record with one call using hiredis

I hope this is the right place to ask questions about the Redis client hiredis.
I want to achieve the same thing that I am doing below with the Redis client. As can be seen, Redis sends three different records with one rpush call:
redis 127.0.0.1:6379> rpush test kemal erdem husyin
(integer) 3
redis 127.0.0.1:6379> lrange test 0 -1
1) "kemal"
2) "erdem"
3) "husyin"
In my project I use hiredis; an example:
reply = (redisReply*)(redisCommand(c, "RPUSH %s %s" , channelName, message));
But now I have a big log file where every line is held in a buffer like char[][].
I need to send each line as a different record, but for performance I also want to call rpush only once. Do you have any advice for me?
It would be a bad idea to send a single command to push more than a few thousand items. It would saturate the communication buffers, and a large command would block all other concurrent commands due to the single-threaded nature of Redis.
I suggest building your push commands in batches of small packets of n items (n between 10 and 100), and grouping your push commands into pipelines of m commands (m between 10 and 100).
The algorithm would be something like this:
While there are still lines to read:
    New Redis pipeline, i = 0
    While there are still lines to read and i < m:
        Read at most n lines
        Build push command for the read lines
        Pipeline push command
        ++i
    Flush Redis pipeline, check return status if needed
It will only generate N / (n*m) roundtrips (N being the number of lines in the input file).
To build commands with arbitrary numbers of parameters, you can use the redisAppendCommandArgv function.
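Here is a minimal sketch of that algorithm with hiredis (the buffer layout, the batch sizes, and the push_lines name are assumptions for illustration, not part of the original code):

#include <stdio.h>
#include <string.h>
#include <hiredis/hiredis.h>

#define MAX_LINE 1024   /* assumed maximum line length                  */
#define N_ITEMS  100    /* lines per RPUSH command (the "n" above)      */
#define M_CMDS   10     /* RPUSH commands per pipeline (the "m" above)  */

/* Push `count` lines to the list `channelName`, batching as described above. */
int push_lines(redisContext *c, const char *channelName,
               char lines[][MAX_LINE], size_t count) {
    size_t pos = 0;
    while (pos < count) {
        int pipelined = 0;
        while (pos < count && pipelined < M_CMDS) {
            const char *argv[2 + N_ITEMS];
            size_t argvlen[2 + N_ITEMS];
            int argc = 0;
            argv[argc] = "RPUSH";     argvlen[argc] = 5;                   argc++;
            argv[argc] = channelName; argvlen[argc] = strlen(channelName); argc++;
            /* Read at most N_ITEMS lines into this one RPUSH command. */
            while (pos < count && argc < 2 + N_ITEMS) {
                argv[argc] = lines[pos];
                argvlen[argc] = strlen(lines[pos]);
                argc++;
                pos++;
            }
            /* Queue the command in the local output buffer; nothing is sent yet. */
            if (redisAppendCommandArgv(c, argc, argv, argvlen) != REDIS_OK)
                return -1;
            pipelined++;
        }
        /* Flush the pipeline: read one reply per queued command. */
        for (int i = 0; i < pipelined; i++) {
            redisReply *reply = NULL;
            if (redisGetReply(c, (void **)&reply) != REDIS_OK)
                return -1;
            freeReplyObject(reply);
        }
    }
    return 0;
}

After redisConnect(), a single call such as push_lines(c, channelName, buff, lineCount) replaces the per-line redisCommand calls: redisAppendCommandArgv only queues each command locally, and the replies are drained with redisGetReply when the pipeline is flushed.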
