I'm using the C API to interact with Lotus Notes and Lotus Domino. I'm running into issues when reading existing Notes out of an NSF, specifically when reading TYPE_OBJECT fields, and even more specifically $FILE fields (though I'm sure all TYPE_OBJECT fields would fail if I had any others).
I'm using NSFItemInfo to get the summary data on the $FILE field (I don't need the saved file itself, just information about it such as its size, name, etc.).
If I create the Note in memory, Commit it, then read the $FILE field, everything works. If I change my unit test to read an existing Note (instead of creating it in memory), Lotus PANICS with an Invalid Handle Lookup message.
So I'm left feeling like there is something different about loading those fields when I create a Note from scratch vs. opening one that already exists. Even reading in a Note that my own code previously created and saved gives me the same error, so I think I'm creating the Notes correctly.
I've explored NSFNoteOpenExt's flag options and have attempted to open the Note with every possible flag described in OPEN_xxx, and I always get the panics except when I open the Note with OPEN_ABSTRACT or OPEN_NOOBJECTS. The reason those don't error, though, is that they open the Note without the $FILE fields at all, so when I check whether the field exists I get false and the code that reads TYPE_OBJECT fields is never executed.
Any ideas what I'm missing?
I'd provide code, but I'm actually using .NET interop to accomplish all this, and the code is spread across multiple files. If you have any questions, please ask and I'll provide as much detail as I can.
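To give a rough idea without dumping all the interop files, the C-side pattern my wrappers reproduce looks roughly like this (a simplified sketch, not my actual code; error handling and most includes omitted):

```c
/* Simplified sketch of the C-side pattern my interop wrappers reproduce
   (not my actual code; error handling abbreviated). */
#include <string.h>
#include <global.h>    /* Notes C API headers - exact set abbreviated */
#include <nsfnote.h>
#include <osmem.h>

void read_file_item(DBHANDLE hDb, NOTEID noteId)
{
    NOTEHANDLE hNote;
    BLOCKID itemBlockId, valueBlockId;
    WORD dataType;
    DWORD valueLen;

    /* Open the existing Note - this is where I tried the various OPEN_xxx flags. */
    if (NSFNoteOpenExt(hDb, noteId, 0, &hNote) != NOERROR)
        return;

    /* Get the summary info on the $FILE item: its datatype, value BLOCKID and length. */
    if (NSFItemInfo(hNote, "$FILE", (WORD)strlen("$FILE"),
                    &itemBlockId, &dataType, &valueBlockId, &valueLen) == NOERROR
        && dataType == TYPE_OBJECT)
    {
        /* Lock the value to read the object descriptor - this is the step that
           PANICs with "Invalid Handle Lookup" on Notes read back from disk. */
        void *pValue = OSLockBlock(char, valueBlockId);
        /* ... read the attachment's name, size, etc. from the descriptor ... */
        OSUnlockBlock(valueBlockId);
    }

    NSFNoteClose(hNote);
}
```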
Craig
I figured out the issue. It came from the fact that when using interop from C#, you can't call C macros. OSLockBlock is defined as a macro to another macro to a function. Essentially, it locks BlockId.Pool (which yields a pointer), then increments that pointer by BlockId.BlockHandle. I was misinterpreting the macro logic as: first increment BlockId.Pool by BlockId.BlockHandle, then lock.
Essentially:
Lock(BlockId.Pool) + BlockId.BlockHandle vs. Lock(BlockId.Pool + BlockId.BlockHandle)
It's interesting that the latter would work when creating a new note with new attachments. I finally figured that out as well: BlockId.BlockHandle was always zero in that case, so both interpretations produced the same pointer, and that's why it always worked.
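In C terms, the difference between the two readings looks roughly like this (a sketch only; "NotesBlockId" mirrors my C# interop struct, not the exact SDK BLOCKID declaration):

```c
/* Sketch of the two readings of OSLockBlock. "NotesBlockId" mirrors my C#
   interop struct (Pool / BlockHandle); it is not the SDK's BLOCKID declaration. */
#include <global.h>   /* DHANDLE, WORD */
#include <osmem.h>    /* OSLockObject() */

typedef struct {
    DHANDLE Pool;        /* handle of the memory pool              */
    WORD    BlockHandle; /* offset of the block within that pool   */
} NotesBlockId;

/* What the macro really does: lock the pool handle first,
   then advance the returned pointer by the block part. */
void *lock_block_correct(NotesBlockId id)
{
    return (char *)OSLockObject(id.Pool) + id.BlockHandle;
}

/* What my interop code was effectively doing: add the block part to the
   handle and lock that. That handle is invalid unless BlockHandle is 0,
   which is why it only PANICked on Notes read back from disk. */
void *lock_block_wrong(NotesBlockId id)
{
    return OSLockObject((DHANDLE)(id.Pool + id.BlockHandle));
}
```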
I'm getting an error when uploading my customized policy, which is based on Microsoft's SocialAccounts example ([tenant] is a placeholder I added):
Policy "B2C_1A_TrustFrameworkExtensions" of tenant "[tenant].onmicrosoft.com" makes a reference to ClaimType with id "client_id" but neither the policy nor any of its base policies contain such an element
I've done some customization to the file, including adding local account signon, but comparing copies of TrustFrameworkExtensions.xml in the examples, I can't see where this element is defined. It is not defined in TrustFrameworkBase.xml, which is where I would expect it.
I figured it out, although it doesn't make sense to me. Hopefully this helps someone else running into the same issue.
The TrustFrameworkBase.xml is not the same in each scenario. When the Microsoft documentation said not to modify it, I assumed that meant the "base" was always the same. The implication of this design is: if you try to mix and match between scenarios, you also need to find the supporting pieces in that scenario's TrustFrameworkBase.xml and move them into your extensions document. It also means that if Microsoft updates their reference policies and you want to pick up the update, you need to remember which scenario you implemented originally (and potentially which other ones you pulled pieces from), or do a line-by-line comparison. Not the end of the world, but also not how I'd design an inheritance structure.
This also explains why I had to work through previous validation errors, including missing <DisplayName> and <Protocol> elements in the <TechnicalProfile> element.
Yes - I agree that is a problem.
My suggestion is always to use the "SocialAndLocalAccountsWithMfa" scenario as the sample.
That way you will always have the correct attributes and you know which one to use if there is an update.
It's easy enough to comment out the MFA stuff in the user journeys if you don't want it.
There is one exception. If you want to use "username" instead of "email", the reads/writes etc. are only in the username sample.
I was reading about the .settings file on MSDN and I noticed they give two examples of how to set the value of an item in the settings. My question is: what is the real difference between the two, and when would you use one instead of the other? To me they seem pretty much the same.
To Write and Persist User Settings at Run Time
Access the user setting and assign it a new value, as shown in the following example:
Properties.Settings.Default.myColor = Color.AliceBlue;
If you want to persist changes to user settings between application sessions, call the Save method, as shown in the following code:
Properties.Settings.Default.Save();
The first statement updates the value of the setting in memory. The second statement updates the persisted value in the user.config file on the disk. That second statement is required to get the value back when you restart the program.
It is very, very important to realize that these two statements must be separate and never be written close together in your code. Keeping them close is harakiri-code. Settings tend to implement unsubtle features in your code, making it operate differently, and that different behavior isn't always perfectly tested. What you strongly want to avoid is persisting a setting value that subsequently crashes your program.
That's the harakiri angle: if you saved that value, then it is highly likely that the program will immediately crash again when the user restarts it. Or in other words, your program will never run correctly again.
The Save() call must be made when you have a reasonable guarantee that nothing bad happened while the new setting value was used. It belongs at the end of your Main() method, reached only when the program terminates normally.
I often find myself reading other developer's C code containing expressions like
ptr->member1.member2[i].another_member.final_member = 42;
and needing to find out what type final_member is. Usually what I do is track down the chain of types using ctags, starting at the declaration of ptr and digging my way through the chain of members. This is cumbersome, and often I'm stuck somewhere scratching my head, asking myself "What was the next member in the chain?" To make matters worse, a simple grep for final_member in the source tree turns up too many false positives because the name is reused in more than one struct.
Is there a way to make vim give me the answer directly? I'm willing to install any plugin and even type a few characters while the cursor is on the final_member or select the whole expression :-) Non-GUI solutions preferred.
If I'm working on a project with several nested structs, I add preview to the completeopt option (:set completeopt+=preview).
In combination with the excellent omnicppcomplete plugin, a tiny scratch window pops up when you select an entry in the completion menu. That window shows some properties of the selected tag; among other things, it contains the search pattern for the tag, which in the case of a struct member usually includes its datatype.
I really suggest you use the clang_complete plugin (or some other plugin powered by clang) for completion. It gives you accurate completion of C/C++/Objective-C code from a real compiler, rather than the rough tag-based approach. Each item in the completion menu also shows the type of the field (which is what you are looking for).
omnicppcomplete often fails on complicated expressions. Clang works great, since it is a real compiler.
I have the following question:
The calendar text file and binary file should have a name with a fixed part and a variable part. Use the time function (in time.h) or some other automatic mechanism to make sure that, when you write the files back out after updating the calendar, you do not overwrite the files you read in but write a new version of the file that is clearly more recent.
For context: I have a program that manages a calendar.
Is it possible to create a file name with a fixed part and a variable part using the time.h library?
Thank you in advance!
Your question is vague, so the answer can only be equally vague.
From your specification, I guess you need filenames like "calendar-YYYYMMDDhhmmss.bin" and "calendar-YYYYMMDDhhmmss.txt".
When you "man time.h", you can see, that the time-"library" provides all these data. At the bottom of the man-page you see some related functions like "time()" and "strftime()", which help you to get a timestamp and to format a time to your needs.
If you "http://www.whathaveyoutried.com" and are stuck again, please update your question, and we will help you further.
EDIT (in response to the comment):
That depends on whether you want a lot of files, each containing one "calendar", where the most recently dated file is the current calendar and the older ones are backups; or one calendar file with a new section for each "calendar", in which case you have to define (for yourself) how to organise the current and historical sections.
As a matter of fact, I would prefer the first solution: each time you update your calendar, you call fopen(path_filename_timestamp_txt, "w"). In the second case you would call fopen(path_filename_txt, "a") and fwrite() a timestamp as your section header. Both variants are sketched below.
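In code, the two variants look roughly like this (a sketch; the file names are the same assumptions as above):

```c
/* Sketch of both variants; file names are assumptions. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    char stamp[32], name[64];
    time_t now = time(NULL);
    strftime(stamp, sizeof stamp, "%Y%m%d%H%M%S", localtime(&now));

    /* Variant 1 (preferred): a new, timestamped file per update;
       older files remain as backups. */
    snprintf(name, sizeof name, "calendar-%s.txt", stamp);
    FILE *fresh = fopen(name, "w");
    if (fresh) {
        /* ... write the whole updated calendar here ... */
        fclose(fresh);
    }

    /* Variant 2: one file, appending a timestamped section header per update. */
    FILE *log = fopen("calendar.txt", "a");
    if (log) {
        fprintf(log, "=== %s ===\n", stamp);
        /* ... write the updated calendar as a new section ... */
        fclose(log);
    }

    return 0;
}
```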
Please show us what you have done so far (as short as possible, according to http://sscce.org/)!
When you take your first look at an Oracle database, one of the first questions is often "where's the alert log?". Grid Control can tell you, but it's often not available in the environment.
I posted some bash and Perl scripts to find and tail the alert log on my blog some time back, and I'm surprised to see that post still getting lots of hits.
The technique used is to look up background_dump_dest from v$parameter. But I've only tested this on Oracle Database 10g.
Is there a better approach than this? And does anyone know if this still works in 11g?
I'm sure it will work in 11g; that parameter has been around for a long time.
Seems like the correct way to find it to me.
If the background_dump_dest parameter isn't set, the alert.log will be put in $ORACLE_HOME/RDBMS/trace.
Once you've got the log open, I would consider using File::Tail or File::Tail::App to display it as it's being written, rather than sleeping and reading. File::Tail::App is particularly clever, because it will detect the file being rotated and switch, and will remember where you were up to between invocations of your program.
I'd also consider locking your cache file before using it. The race condition may not bother you, but having multiple people try to start your program at once could result in nasty fights over who gets to write to the cache file.
However, both of these are nit-picks. My brief glance over your code doesn't reveal any glaring mistakes.