I am using a Collector node on IIB to collect messages. Can someone guide me with sample ESQL after the Collector node for processing a message collection?

I'm using a FileOutput node to write the data to a file. I have tried writing the collection messages to the file, but every time the file that is created is 0 bytes and contains no data.
SET OutputRoot.Properties = InputRoot.Properties;
CREATE FIELD OutputRoot.Collection.IN;
DECLARE i INTEGER 1;
DECLARE refCollection REFERENCE TO InputRoot.Collection.IN[1];
WHILE LASTMOVE(refCollection) DO
    SET OutputRoot.Collection.IN[i] = refCollection;
    SET i = i + 1;
    MOVE refCollection NEXTSIBLING REPEAT TYPE NAME;
END WHILE;
RETURN TRUE;

It is very difficult to provide help without knowing what your input message tree looks like.
You should follow the instructions here: https://www.ibm.com/support/knowledgecenter/en/SSMKHH_10.0.0/com.ibm.etools.mft.doc/bc16130_.htm
If you need further help, you should:
- Add Trace nodes to your message flow before and after the Compute node and set the Pattern property on both nodes to ${Root}. This will allow you to see (and share) the structure of InputRoot and OutputRoot.
- Enable user trace using the console commands mqsichangetrace, mqsireadlog and mqsiformatlog; a typical sequence is sketched below. This will show you exactly what the message flow is doing, and the trace will contain the full text of any errors that are being reported.
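For reference, a typical user-trace sequence with those commands looks like this (the integration node name MYNODE and the integration server name default are placeholders, not taken from your environment):

mqsichangetrace MYNODE -u -e default -l debug -r
(run the flow to reproduce the problem)
mqsireadlog MYNODE -u -e default -f -o trace.xml
mqsiformatlog -i trace.xml -o trace.txt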


making discord bot command to store message content (python)

So this is my first time coding an actual project that isn't a small coding task. I've got a bot that runs and responds to a message if it says "hello". I've read the API documentation up and down, and I really only have a vague understanding of it; I'm not sure how to implement what I want.
My question right now is how I would go about creating a command that takes information from the message the command is replying to (sender's name, message content) and stores it as an object. Also, what would be the best way to store that information?
I want to learn while doing this and not just have the answers handed to me, of course, but I feel very lost. I'm not sure where to even begin.
I tried to find tutorials on coding Discord bots with functions similar to what I want to do, but couldn't find anything.
Intro:
Hi NyssaDuke!
First of all, prefer pasting your code over a picture: it's easier for us to take your code and try to reproduce what you want.
Second, I see an issue in your code: you declare the bot twice. You can specify the intents when you declare your bot, as bot = commands.Bot(command_prefix="!", intents=intents).
Finally, as stated by @stijndcl, it's against the TOS, but I will try to answer you as best I can.
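For illustration, declaring the bot a single time with its intents could look like this (a minimal sketch assuming discord.py 2.x; the message_content intent is my assumption about what a content-reading command needs):

import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True   # needed to read message text in commands
bot = commands.Bot(command_prefix="!", intents=intents)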
filesystem
My bot needs to store data, like user IDs, language, and content related to a game, so users can be contacted later. Since we can get quite a big load of requests in a short time, I preferred to use a file for storage instead of a list that would disappear on a crash; a file also allows us to build statistics later. So I decided to use pytables, which you can get via pip install pytables. It works like a small DB in a single file; the file format is HDF5.
Let's say we want to create a table containing a user name and user id in a file:
import tables

class CUsers(tables.IsDescription):
    user_name = tables.StringCol(32)   # 32-byte string column
    user_id = tables.IntCol()          # integer column

with tables.open_file("UsersTable.h5", mode="w") as h5file:
    groupUser = h5file.create_group("/", "Users", "users entries")
    tableUser = h5file.create_table(groupUser, "Users", CUsers, "users table")
We now have a file UsersTable.h5 with an internal table located at /Users/Users that accepts CUsers entries, where user_name and user_id are the columns of that table.
getting user info and storing it
Let's now code a function that registers user info; I'll call it register. We will get the required data from the Context that is passed with the command, and we'll store it in the file.
@bot.command(name='register')
async def FuncRegister(ctx):
    with tables.open_file("UsersTable.h5", mode="a") as h5file:
        tableUser = h5file.root.Users.Users
        particle = tableUser.row
        particle['user_name'] = str(ctx.author)
        particle['user_id'] = ctx.author.id
        particle.append()
        tableUser.flush()
The last two lines append the particle, i.e. the active row (a CUsers entry), to the table and flush it to the file.
An issue I hit here is that special characters in a nickname can break the code. That's true for "é", "ü", etc., but also for Cyrillic characters. What I did to counter this is to encode the user name into bytes, which you can do with:
particle['user_name'] = str(ctx.author).encode()
reading file
This is where it starts to get interesting. The HDF5 file lets you use SQL-like query conditions. Something to keep in mind is that strings in the file are UTF-8 encoded, so when you extract them you have to call .decode('utf-8'). Let's now code a function where a user can remove their own entry, based on their id:
#bot.command(name="remove")
async def FuncRemove(ctx) :
with tables.open_file("UsersTable.h5", mode="a") as h5file :
tableUser = h5file.root.Users.Users
positions = tableUser.get_where_list("(user_id == '%d')" % ctx.author.id)
nameuser = tableUser[positions[0]]['user_name'].decode('utf-8')
tableUser.remove_row(positions[0])
.get_where_list() returns a list of row positions in the table, which I then use to address the right row.
bot.fetch_user(id)
If possible, prefer saving the ID over the name: names complicate the code with encode() and decode(), and bots have access to a wonderful function, fetch_user(). Let's code one last function that gets the last entry in the table and, from the stored id, prints the user name via the fetch method:
#bot.command(name="last")
async def FuncLast(ctx) :
with tables.open_file("UsersTable.h5", mode="r") as h5file :
tableUser = h5file.root.Users.Users
lastUserIndex = len(tableUser) - 1
iduser = tableUser[lastUserIndex]['user_id']
member = await bot.fetch_user(iduser)
await ctx.send(member.display_name)
For further documentation, check the discord.py manual, and this link to Context in particular.

camel: split the message into multiple processing paths

My input message:
<file>
<node1>
...
</node1>
.....
<node10>
.....
</node10>
</file>
I want to:
- Process the whole file using a stylesheet and output the result to Dest A
- For a few elements in the file (say, node1, node3 and node7), extract them and output the content of each one individually to Dest B
I know how to process the file using a stylesheet, but I'm at a loss how to do the second part, let alone how to combine the two.
I'm looking for something like:
from("direct:start").magic_split(
    to("xslt:mysheet").to("destA"),
    setBody(xpath("//node1")).to("destB"),
    setBody(xpath("//node3")).to("destB"),
    setBody(xpath("//node7")).to("destB")
).transform(constant(responseOK));
If you can split the XML, then each node is put in its own exchange. If you can identify the node after the split, you can use a Content Based Router to route each exchange to the appropriate destination. This might require a custom splitter bean, or you might be able to do it from XPath if the nodes are named nodeX where X is a number; see the sketch below.
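For illustration, one way to combine the stylesheet path with per-node extraction is a wire tap plus an XPath splitter (a minimal sketch, not a tested route; the endpoint names direct:destA and direct:destB are placeholders):

import org.apache.camel.builder.RouteBuilder;

public class SplitRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Tap the whole file off to the stylesheet route (Dest A),
        // then split the selected elements out for Dest B.
        from("direct:start")
            .wireTap("direct:wholeFile")
            .split(xpath("/file/node1|/file/node3|/file/node7"))
                .to("direct:destB");

        from("direct:wholeFile")
            .to("xslt:mysheet")
            .to("direct:destA");
    }
}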
Use the Wire Tap pattern. This pattern allows you to route messages to a separate location while they are being forwarded to the ultimate destination:
from("cxf:bean:submitOrder")
.wireTap("direct:tap")
.beanRef("customBean2");
from("direct:tap")
.to("xslt:my.xsl")
.beanRef("customBean1);
For what I needed, custom beans did the job well. The only thing I had to figure out was how to restore the message to its original content. I guess there is a more elegant way of doing it, but this works fine:
from("cxf:bean:submitOrder")
.setProperty("originalData", simple("${in.body}")) //save original input msg
.to("xslt:my.xsl").beanRef("customBean1)
.setBody(simple("${property.originalData}")) //restore original message
.beanRef("customBean2");

boost log every hour

I'm using Boost.Log and I want to implement a basic log-file policy: a new error log at the beginning of each hour (if errors exist), named like "file_%Y%m%d%H.log".
I have 2 problems with this boost library:
1. How to rotate the file at the beginning of each hour?
This isn't possible with the rotation_at_time_interval parameter, because it creates the new file relative to the first record written to it, so the hour in the file name doesn't match that rule. Is it possible to have multiple rotation_at_time_point values for one file in a sink, or is there some other solution?
2. When the file exceeds some size, I want to start a new file and append an index to the file name. With the rotation_size parameter and %N in the file name, N keeps incrementing the whole time the application is running. I want N to reset at the beginning of each hour, just as my file name changes. Does anybody have an idea how to do that with Boost.Log?
This is a basic principle of log-file handling in industry; I really don't understand how it can't be done with a library dedicated to creating log files.
The library itself doesn't provide a way to rotate the file at the beginning of every hour, but I had the same problem, so I used a function wrapper which returns true at the beginning of every hour.
I find this way better for me, because I can control the efficiency of the code.
Adapted from boost.org:
#include <boost/log/sinks/text_file_backend.hpp>
#include <boost/make_shared.hpp>

namespace sinks = boost::log::sinks;
namespace keywords = boost::log::keywords;

bool is_it_time_to_rotate();  // returns true when the file should be rotated

void init_logging()
{
    boost::shared_ptr< sinks::text_file_backend > backend =
        boost::make_shared< sinks::text_file_backend >(
            keywords::file_name = "file_%5N.log",
            keywords::time_based_rotation = &is_it_time_to_rotate
        );
}
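The predicate itself is not shown above; a possible implementation (my own sketch, not from the answer or the Boost documentation) remembers the hour of the previous record and reports a rotation whenever the hour changes:

#include <ctime>

// Checked by the file backend when a record is about to be written;
// returning true triggers a rotation.
bool is_it_time_to_rotate()
{
    static int last_hour = -1;
    std::time_t now = std::time(nullptr);
    std::tm local = *std::localtime(&now);
    bool rotate = (last_hour != -1 && local.tm_hour != last_hour);
    last_hour = local.tm_hour;
    return rotate;
}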
As for the second question, I don't really understand it well.

Saving a Binary tree to a file in C

I am currently writing a Customers program where a user can add/edit/search/list customers. I decided to use a binary tree as the backbone of this program. My idea was to save each item in the tree to "customers.dat" right before the program closes, then on start-up load everything from the file back into the tree. So far so good; however, after finally managing to save the binary search tree into a file, I have one bug.
Let's say I add 3 customers the first time. I then close the program, and when I reopen it I find the same 3 customers in the tree. However, the next time I open the file, it gives me one of my predefined errors, the one raised when a node cannot identify whether to go left or right, maybe because it's empty or uncomparable. Here are some code snippets. I also tried file-open modes other than "a+b", and then I had no such error, but the way I designed my program I need the append mode, or else only one record gets saved.
Customers are stored in Cstmr in the header:
typedef struct customer
{
    char Name[MAXNAME];
    char Surname[MAXNAME];
    char ID[MAXID];
    char Address[MAXADDRESS];
} Cstmr;
And elsewhere:
void CustomerTreeToFile(Tree *pt)
{
    if (TreeIsEmpty(pt))
        puts("Nothing to save!");
    else
        Traverse(pt, saveItem);  /* traverses the tree and applies saveItem to each node */
}
void saveItem(Cstmr C)
{
    save = C;
    customers = fopen("customers.dat", "ab+");
    fwrite(&C, sizeof(Cstmr), 1, customers);
    fclose(customers);
}
The problem is that you are always appending all the data to the same file, so everything from the previous run gets duplicated.
One solution would be to delete (unlink) the file before saving the data with your current approach.
However, as Freezerburn pointed out earlier, it's more economical not to open and close the file for each item. Just open the file once in overwrite mode (i.e. not append), write all the data, then close the file, as in the sketch below. It should be much faster, too.
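A minimal sketch of that suggestion, reusing the question's Tree/Traverse API (the error handling and the static file handle are my additions):

#include <stdio.h>

static FILE *customers;   /* opened once, shared by the traversal callback */

void saveItem(Cstmr C)
{
    fwrite(&C, sizeof(Cstmr), 1, customers);
}

void CustomerTreeToFile(Tree *pt)
{
    customers = fopen("customers.dat", "wb");  /* overwrite instead of append */
    if (customers == NULL)
    {
        puts("Cannot open customers.dat!");
        return;
    }
    Traverse(pt, saveItem);
    fclose(customers);
}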
Another problem is that you save your data in a binary format. Try to define an easy-to-read textual format; that would have made the problem obvious, because you could have inspected the file in any editor.
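Along those lines, a textual variant of saveItem might look like this (a sketch; the semicolon-separated layout is my own choice, reusing the customers handle from the sketch above):

/* Hypothetical text-format variant: one semicolon-separated line per customer. */
void saveItemText(Cstmr C)
{
    fprintf(customers, "%s;%s;%s;%s\n", C.Name, C.Surname, C.ID, C.Address);
}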

Retrieving Specific Active Directory Record Attributes using C#

I've been asked to set up a process which monitors Active Directory, specifically certain accounts, to check that they are not locked, so that should this happen the support team gets an early warning.
I've found some code to get me started, which basically sets up requests and adds them to a notification queue. A handler is then attached to the change event and receives an ObjectChangedEventArgs object.
Currently, it iterates through the attributes and writes them to a text file, like so:
private static void NotifierObjectChanged(object sender,
    ObjectChangedEventArgs e)
{
    if (e.ResultEntry.Attributes.AttributeNames == null)
    {
        return;
    }

    // write the data for the user to a text file...
    using (var file = new StreamWriter(@"C:\Temp\UserDataLog.txt", true))
    {
        file.WriteLine("{0} {1}", DateTime.UtcNow.ToShortDateString(), DateTime.UtcNow.ToShortTimeString());
        foreach (string attrib in e.ResultEntry.Attributes.AttributeNames)
        {
            foreach (object item in e.ResultEntry.Attributes[attrib].GetValues(typeof(string)))
            {
                file.WriteLine("{0}: {1}", attrib, item);
            }
        }
    }
}
What I'd like is to check the object and, if a specific field such as name has a specific value, check whether the IsAccountLocked attribute is True; otherwise skip the record and wait until the next notification comes in. I'm struggling with how to access specific attributes of the ResultEntry without having to iterate through them all.
I hope this makes sense - please ask if I can provide any additional information.
Thanks
Martin
This could get gnarly depending upon your exact business requirements. If you want to talk in more detail ping me offline and I'm happy to help over email/phone/IM.
So the first thing I'd note is that, depending upon what the query looks like before this, this could be quite expensive or error prone (i.e. missing results). This worries me somewhat, as most sample code out there gets this wrong. :) How are you getting the things that have changed? While this sounds simple, it is actually a somewhat tricky question in directory land, given the semantics supported by AD and the fact that it is a multi-master system where writes happen all over the place (and replicate in after the fact).
Other variables would be things like how often you're going to run this, how large the data set could be in AD, and so on.
AD has some APIs built to help you here (the big one that comes to mind is called DirSync) but this can be somewhat complicated if you haven't used it before. This is where the "ping me offline" part comes in.
To your exact question: I'm assuming your result is actually a SearchResultEntry (if not, tell me what you have in hand and I can revise). If that is the case, then you'll find an Attributes field hanging off of it, and from there you have AttributeNames and Values. I think you'll see how it works from there once you have Values in hand, for example:
foreach (var attr in sre.Attributes.Values)
{
    var da = (DirectoryAttribute)attr;
    Console.WriteLine(da.Name);
    foreach (var val in da.GetValues(typeof(byte[])))
    {
        // Handle a byte[] val ...
    }
}
As I said, if you have something other than a SearchResultEntry in hand, let us know and I can revise the code sample.
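To make the "specific attribute" part concrete: a sketch under the assumption that you have a System.DirectoryServices.Protocols SearchResultEntry, whose attribute collection can be indexed by name (the account name "someAccount" and the lockoutTime check are placeholders, not taken from the question):

var attrs = e.ResultEntry.Attributes;
if (attrs.Contains("name"))
{
    var name = (string)attrs["name"].GetValues(typeof(string))[0];
    if (name == "someAccount" && attrs.Contains("lockoutTime"))
    {
        var lockoutTime = (string)attrs["lockoutTime"].GetValues(typeof(string))[0];
        // A non-zero lockoutTime generally indicates the account is locked.
        Console.WriteLine("{0} lockoutTime: {1}", name, lockoutTime);
    }
}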
