I'm trying to implement a C application that will monitor write/modification/new-document events happening on a remote Couchbase cluster, coming from a different application. I am familiar with the Couchbase C SDK and synchronous instances, but I have trouble combining it with libevent for asynchronous I/O.
I read the Couchbase libevent plugin documentation and the external event loop integration example, but I cannot grasp how I would tell my event_base, for instance:
Monitor this file on this bucket and send me a callback when it's modified
Here's what I do so far:
First, I create my libevent IO options:
struct event_base *mEvbase = event_base_new();
lcb_t instance;
lcb_error_t err;
struct lcb_create_io_ops_st ciops;
lcb_io_opt_t ioops;
memset(&ciops, 0, sizeof(ciops));
ciops.v.v0.type = LCB_IO_OPS_LIBEVENT;
ciops.v.v0.cookie = mEvbase;
err = lcb_create_libevent_io_opts(0, &ioops, mEvbase);
if (err != LCB_SUCCESS) {
    ERRORMSG0("Failed to create an IOOPS structure for libevent: %s\n", lcb_strerror(NULL, err));
}
and then I create my instance:
struct lcb_create_st create_options;
memset(&create_options, 0, sizeof(create_options));
std::string host = std::string("couchbase://192.168.130.10/");
host.append(bucket);
const char password[] = "password";
create_options.version = 3;
create_options.v.v3.connstr = host.c_str();
create_options.v.v3.passwd = password;
create_options.v.v3.io = ioops;
//Creating a lcb instance
err = lcb_create(&instance, &create_options);
if (err != LCB_SUCCESS) {
    die(NULL, "Couldn't create couchbase handler\n", err);
    return;
}
/* Assign the handlers to be called for the operation types */
lcb_set_bootstrap_callback(instance, bootstrap_callback);
lcb_set_get_callback(instance, generic_get_callback);
lcb_set_store_callback(instance, generic_store_callback);
and then I schedule a connection.
//We now schedule a connection to the server
err = lcb_connect(instance);
if (err != LCB_SUCCESS) {
    die(instance, "Couldn't schedule connection\n", err);
    lcb_destroy(instance);
}
lcb_set_cookie(instance, mEvbase);
I use libcouchbase version 2.0.17, libevent core version 2.0.so.5.1.9 and libevent extra version 2.0.so.5.1.9. With the code above, my instance cannot connect to Couchbase, and I get the following warnings:
event_pending: event has no event_base set.
event_add: event has no event_base set.
So, two problems here: I can't connect using the above code, and I don't know which direction to go to start receiving events. If anyone could point me towards a link or a coded example of this simple case, that would unblock me.
Ensure that you are using the same libevent version for both the library and your application. If you installed the packages from a repository, you will need to align with the libevent version used there (check with ldd /usr/lib64/libcouchbase_libevent.so). Keep in mind that this must be the same ABI: for example, using libevent's 2.0 -> 1.4 compat layer will not work, as the two versions have different ABIs, and a libcouchbase_libevent.so linked against 1.4 will break under 2.0.
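Note also that with an external event loop, libcouchbase only schedules its I/O; nothing happens until you run the loop yourself, and the bootstrap/get/store callbacks fire from inside it. A minimal sketch, reusing instance and mEvbase from the question's code:
/* after lcb_connect(instance) has scheduled the connection */
event_base_dispatch(mEvbase); /* runs until no events remain pending */
/* or drive it from your own main loop:
 * event_base_loop(mEvbase, EVLOOP_NONBLOCK); */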
For the full exchange, see the comments on the question :)
I have heard rumours that performing a catalog update correctly in FDT 1.x is quite complex. There seem to be more steps than the obvious ones, which in pseudocode are:
foreach (progid in Registry having component category "FDT DTM")
{
    dtm = CoCreateInstance(progid);
    StartDTMAccordingStateMachine(dtm);
    info = dtm.GetInformation("FDT");
    catalog.Add(info);
    ShutdownDTMAccordingStateMachine(dtm);
    Release(dtm);
}
I could not find any hints in the FDT specification that would require a more complex catalog update procedure, so are the rumours true? What makes a correct catalog update procedure so complex?
Basically the idea for the catalog update is correct. Unfortunately the rumours are also true: doing a catalog update involves more thought in several areas:
Frame application interface considerations
During the catalog update, the DTM is not part of a project yet. Therefore the frame application could be implemented without project-specific interfaces such as IFdtTopology or IFdtBulkData. However, many DTMs will query for those interfaces immediately and throw an exception if the frame application does not support them.
Also, during the catalog update, the frame application could expect that the DTM works without user interface, because this is a batch operation which should not require user interaction. This means the frame application could be implemented without the IFdtActiveX and IFdtDialog interfaces. Unfortunately there are also DTMs that use those interfaces during catalog update time.
.NET considerations
Doing a catalog update on a system with many DTMs installed could require a lot of memory. Therefore some frame applications do the catalog update in an external process. While this is a good idea, you need to consider the FDT .NET specifications and best practice documents.
The bottom line here is: the external process must be a .NET 2.0 process, independent of the actual implementation technology of your frame application. If you have a C++ implementation, you'll need a very small .NET 2.0 object to be loaded before any DTM is started.
Memory considerations
Since FDT 1.x is a conglomerate of COM and .NET, there will be pinned objects. This makes it likely that your application suffers from small object heap fragmentation. In addition FDT passes XMLs as strings which makes it more likely that your application suffers from large object heap fragmentation. The overall combination is very dangerous.
One solution might be to start a limited number of DTMs in the same process and then restart the process, e.g.:
updateProcess = StartProcess();
dtmCount = 0;
foreach (progid in Registry having component category "FDT DTM")
{
    dtmCount++;
    if (dtmCount % 10 == 0)
    {
        // restart process to avoid out-of-memory situations
        updateProcess.SignalShutdown();
        updateProcess.WaitForExit();
        updateProcess = StartProcess();
    }
    updateProcess.StartDTM(progid);
    info = updateProcess.GetDtmInformation();
    catalog.Add(info);
    updateProcess.ShutdownDTM();
}
In the update process you'll need to create the COM object and follow the state machine etc.
FDT 1.2.1 scanning information
In FDT 1.2.1, additional information was introduced to better recognize devices during a hardware scan. Although there is no fully FDT 1.2.1 compliant DTM at the time of writing, many FDT 1.2.0 DTMs implement the additional interface IDtmInformation2 to support device detection.
For you as the frame application developer, this means that you have to extend the GetDtmInformation() method in the update process:
T GetDtmInformation()
{
    var result = new T(); // a type defined by you
    result.info = dtm.GetInformation();
    foreach (deviceType in result.info)
    {
        foreach (protocol in deviceType)
        {
            deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
            result.deviceinfo.Add(deviceInfo);
        }
    }
    return result;
}
Schema path updates
FDT 1.2.0 had the problem that the user needed to install XDR schema definitions manually, which was very uncomfortable. FDT 1.2.1 solves this problem by letting the DTM bring XDR schemas with it. The definition is in the XML from GetInformation(), at the XML elements <FDT>, <DtmInfo>, <DtmSchemaPaths>. The DTM will publish a directory name there. In theory, this is an easy task: to install the XDR schemas, we need to update GetDtmInformation() a little bit:
T GetDtmInformation()
{
    var result = new T(); // a type defined by you
    result.info = dtm.GetInformation();
    schemaPaths = result.info.SelectNodes("/FDT/DtmInfo/DtmSchemaPaths/DtmSchemaPath");
    foreach (dtmSchemaPath in schemaPaths)
    {
        CopyFiles(from dtmSchemaPath to frameSchemaPath);
    }
    // *) read on, more code needed here
    foreach (deviceType in result.info)
    {
        foreach (protocol in deviceType)
        {
            deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
            result.deviceinfo.Add(deviceInfo);
        }
    }
    return result;
}
Unfortunately there is a logical bug in the sequence now. Since the DTM was already started, it has already asked the frame application for the schema path (using IFdtContainer::GetXmlSchemaPath()) and it has already set up the schema cache to validate XMLs. The DTM cannot be notified about updates in the schema path.
Therefore you need to restart the DTM in order to be sure that it gets the latest version of XDR schemas. In code, this means you have to update the whole code to:
T GetDtmInformation()
{
    var result = new T(); // a type defined by you
    result.info = dtm.GetInformation();
    schemaPaths = result.info.SelectNodes("/FDT/DtmInfo/DtmSchemaPaths/DtmSchemaPath");
    schemasUpdated = false;
    foreach (dtmSchemaPath in schemaPaths)
    {
        schemasUpdated |= CopyFiles(from dtmSchemaPath to frameSchemaPath);
    }
    if (schemasUpdated)
    {
        // restart the DTM to make sure it uses the latest versions of the schemas
        ShutdownDTMAccordingStateMachine(dtm);
        Release(dtm);
        dtm = CoCreateInstance(progid);
        StartDTMAccordingStateMachine(dtm);
        result.info = dtm.GetInformation("FDT");
    }
    foreach (deviceType in result.info)
    {
        foreach (protocol in deviceType)
        {
            deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
            result.deviceinfo.Add(deviceInfo);
        }
    }
    return result;
}
XDR schema version information issue
In the previous chapter, I used a simple CopyFiles() operation to update the XDR schema files. This operation is not as simple as it seems, because it needs to perform a version number check.
The version is given in the XDR schema like this:
<AttributeType name="schemaVersion" dt:type="number" default="1.0"/>
The attribute #default defines the version number of the schema. #schemaVersion itself is not used anywhere else.
Version numbers that are used at the time of writing:
1.0 // e.g. FDTCIPCommunicationSchema CIP version 1.1-02
1.1 // e.g. FDTCIPChannelParameterSchema CIP version 1.1-02
1.00 // e.g. DTMIOLinkDeviceSchema IO Link version 1.0-1
1.21 // e.g. FDTIOLinkChannelParameterSchema IO Link version 1.0-1
1.22 // e.g. FDTHART_ExtendedCommunicationSchema
Version 1.21 strongly suggests that it correlates to FDT version 1.2.1, which brings up the question of how to interpret the version number. There are three possible ways of interpreting it:
a) as a simple float number as defined in the datatype of XDR (dt:type="number")
b) as a version number in format major.minor
c) as a version number in format major.minorbuild where minor and build are simply concatenated
Ok, I'll leave that puzzle up to the reader. I have suggested a document clarifying this version number issue.
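To see why it matters, here is a purely illustrative comparison routine in C implementing interpretation (b); nothing here is from the FDT spec. Under (a), "1.21" parses as the float 1.21 and is older than "1.3", while under (b), minor version 21 is newer than minor version 3.
#include <stdio.h>
/* Illustrative only: split "major.minor" at the dot and compare numerically. */
int compareVersions(const char *a, const char *b)
{
    int amaj = 0, amin = 0, bmaj = 0, bmin = 0;
    sscanf(a, "%d.%d", &amaj, &amin);
    sscanf(b, "%d.%d", &bmaj, &bmin);
    if (amaj != bmaj) return amaj - bmaj;
    return amin - bmin; /* "1.21" beats "1.3" here, unlike strtod() */
}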
Anyway, this is our CopyFiles() method:
bool CopyFiles(sourceDir, destinationDir)
{
    filesCopied = false;
    foreach (filename in sourceDir)
    {
        existingVersion = ExtractVersion(destinationDir + filename);
        newVersion = ExtractVersion(sourceDir + filename);
        if (newVersion > existingVersion)
        {
            File.Copy(sourceDir + filename, destinationDir + filename);
            filesCopied = true;
        }
    }
    return filesCopied;
}
XDR schema update impact on other DTMs
In the last chapter we return a flag from CopyFiles() in order to determine whether or not the DTM needs to be restarted in GetDtmInformation(). However, this update may not only affect the current DTM, it may also affect other DTMs of the same protocol which have been added to the catalog before.
While you could simply restart the whole catalog update from scratch, this would imply a huge performance impact. The better way seems to be to do it selectively.
To apply a selective approach, you need to maintain a list of protocols that were updated (in T GetDtmInformation()):
foreach (dtmSchemaPath in schemaPaths)
{
    schemasUpdated = CopyFiles(from dtmSchemaPath to frameSchemaPath);
    if (schemasUpdated)
    {
        listOfChangedProtocols.Add(ExtractProtocolId(destinationDir));
    }
}
And of course, don't forget to re-update the catalog for affected DTMs:
affectedDtms = catalog.GetDtmsForProtocols(listOfChangedProtocols);
// TODO: perform catalog update again
// NOTE: remember that this might apply recursively
Getting protocol groups right
Next, you need to know the concept of protocol groups. A protocol group shares XDR schema files across different protocols, where each protocol is identified by a protocol ID. A good example is the CIP protocol family, which consists of the single protocols DeviceNet, CompoNet and Ethernet/IP.
These protocols share a common set of XDR schema files, so you'll find the same file three times on your hard disk. This duplication also has some impact on the catalog update since you need to update all copies even if the DTM comes for a single protocol only.
The reason is in the way a schema cache is constructed: when adding XDR schemas to the schema cache, the first file will win. Other files with the same name will not be added any more. Therefore it is important to ensure that the first file added to the cache is the one with the highest version number. This can only be achieved by updating all copies to the latest version.
This results in an update of the CopyFiles() method:
List<protocolID> CopyFiles(sourceDir, destinationDir)
{
    protocolsChanged = new List<protocolID>();
    foreach (filename in sourceDir)
    {
        foreach (subdirectory in destinationDir)
        {
            files = GetFiles(subdirectory, pattern = filename);
            if (files.Count == 1)
            {
                UpdateXDRConsideringVersionNumber(sourceDir, subdirectory, filename);
                protocolsChanged.Add(ExtractProtocolId(subdirectory));
            }
        }
    }
    return protocolsChanged;
}
void UpdateXDRConsideringVersionNumber(sourceDir, destinationDir, filename)
{
    existingVersion = ExtractVersion(destinationDir + filename);
    newVersion = ExtractVersion(sourceDir + filename);
    if (newVersion > existingVersion)
    {
        File.Copy(sourceDir + filename, destinationDir + filename);
    }
}
Device DTMs and schema paths
For whatever reason, it is defined that only communication DTMs and device DTMs need to bring XDR schemas with them. The rationale behind that probably was that you cannot use a device DTM without a communication or gateway DTM.
Unfortunately, when querying the Windows Registry for DTMs, you cannot predict the order in which you get DTMs. This may lead to the case that you get a device DTM first. Starting this DTM and getting information from it may result in errors or invalid XML if there is no XDR schema for the protocol of the DTM yet.
So you need to continue the catalog update, hopefully find a communication DTM or gateway DTM of the same protocol which brings the XDR schemas, and then start the device DTM again so that it delivers useful information.
This does not need an update to any code. It should already work if you followed all the steps described before. The only thing to consider here is good error handling (but I'll not do that in pseudocode here).
Conclusion
Hopefully I could cover all the topics which are important to know in conjunction with the FDT 1.x catalog update. As you can see, this is not only a rumour.
I am returning to a C API Couchbase 2.0 project which was working a year ago and now fails to connect to the database, on a computer and with source code that I believe have not changed, apart from Ubuntu updates.
I now get the following error when I attempt to call lcb_create:
Failed to create libcouchbase instance: The administrative account can
no longer be used for data access
Have lcb_create parameters or behavior changed? What do I now need to provide?
My code is:
// Create the database instance:
lcb_error_t err;
struct lcb_create_st create_options;
struct lcb_create_io_ops_st io_opts;
io_opts.version = 0;
io_opts.v.v0.type = LCB_IO_OPS_DEFAULT;
io_opts.v.v0.cookie = NULL;
memset(&create_options, 0, sizeof(create_options));
err = lcb_create_io_ops(&create_options.v.v0.io, &io_opts);
if (err != LCB_SUCCESS) {
    fprintf(stderr, "Failed to create libcouchbase IO instance: %s\n",
            lcb_strerror(NULL, err));
    return 1;
}
create_options.v.v0.host = Host;
create_options.v.v0.user = Username;
create_options.v.v0.passwd = Password;
create_options.v.v0.bucket = Bucket;
err = lcb_create(&Instance, &create_options);
if (err != LCB_SUCCESS) {
    fprintf(stderr, "Failed to create libcouchbase instance: %s\n",
            lcb_strerror(NULL, err));
    return 1;
}
The Username I am passing has been the Administrator name, and as I said, this used to work. Reading around, it sounds like we now use the bucket name as the user name? Is the bucket field then redundant? And the password is now a bucket password? I don't see a place to set that; maybe I need to update Couchbase Server past 2.0, and the updates have made my API out of sync with the server?
You should use the bucket name and password to connect to the bucket, not the Administrator credentials. This is covered in the release notes for the Couchbase C client 2.1.1.
The use of Administrator credentials was never documented, was not a good security practice and was rarely used, so Couchbase decided to address the issue.
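A minimal sketch of the corrected options under that scheme, assuming Bucket and BucketPassword hold the bucket's name and password (the user field carries the bucket name; the password may be NULL for unprotected buckets):
struct lcb_create_st create_options;
memset(&create_options, 0, sizeof(create_options));
/* io setup via lcb_create_io_ops as before, if needed */
create_options.v.v0.host = Host;
create_options.v.v0.user = Bucket;           /* bucket name, not Administrator */
create_options.v.v0.passwd = BucketPassword; /* bucket password, or NULL */
create_options.v.v0.bucket = Bucket;
err = lcb_create(&Instance, &create_options);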
My software is "C" based and uses libcouchbase to talk to the Couchbase server.
I know how to query Couchbase views using libcouchbase.
But to be able to query the view I need to create one.
I understand that the view can be created through the Couchbase GUI.
But when the software is shipped as a product, I don't want to give instructions for creating the view separately.
Hence I am looking for a libcouchbase API which can create the view from the Couchbase C client itself. This will be a one-time activity when the product starts up (in other words, it's an idempotent operation).
Any code snippets are also welcome.
See man lcb_make_http_request for more info about making RESTful requests to Couchbase.
You can also find the doc sources in the repo: https://github.com/couchbase/libcouchbase/blob/master/man/man3couchbase/lcb_make_http_request.3couchbase.txt#L147-163
const char *docid = "_design/test";
const char *doc = "{\"views\":{\"all\":{\"map\":\"function (doc, meta) { emit(meta.id, null); }\"}}}";
lcb_http_cmd_t cmd;
lcb_http_request_t req;
memset(&cmd, 0, sizeof(cmd));
cmd.version = 0;
cmd.v.v0.path = docid;
cmd.v.v0.npath = strlen(docid);
cmd.v.v0.body = doc;
cmd.v.v0.nbody = strlen(doc);
cmd.v.v0.method = LCB_HTTP_METHOD_PUT;
cmd.v.v0.content_type = "application/json";
lcb_error_t err = lcb_make_http_request(instance, NULL,
                                        LCB_HTTP_TYPE_VIEW,
                                        &cmd, &req);
if (err != LCB_SUCCESS) {
    /* ... failed to schedule request ... */
}
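To know whether the design document was actually stored, you will also want a completion callback for the response. A minimal sketch, assuming the 2.x callback API (the status field is from lcb_http_resp_t's v0 struct):
static void http_complete(lcb_http_request_t req, lcb_t instance,
                          const void *cookie, lcb_error_t err,
                          const lcb_http_resp_t *resp)
{
    /* anything below 300 means the design document was stored */
    if (err != LCB_SUCCESS || resp->v.v0.status >= 300) {
        fprintf(stderr, "view creation failed: %s (HTTP %d)\n",
                lcb_strerror(instance, err), (int)resp->v.v0.status);
    }
}
Register it with lcb_set_http_complete_callback(instance, http_complete) before scheduling the request, then run the event loop (e.g. lcb_wait(instance)).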
We are currently implementing a security exit for our SVRCONN channels. This exit will authenticate against our LDAP (AD or UNIX). Our current implementation of the exit only works for connections coming from MQ Explorer.
When we write code to connect and pass the user ID/password, the security exit picks up the user account logged in on the client machine instead.
Here is a snippet on how we connect to MQ
Code:
MQCNO ConnectOptions = {MQCNO_DEFAULT};
MQCD ClientConn = {MQCD_CLIENT_CONN_DEFAULT};
MQCSP mqCSP = {MQCSP_DEFAULT};
MQHCONN HConn;
MQLONG CompCode;
MQLONG Reason;
char QMName[MQ_Q_MGR_NAME_LENGTH+1]="QMGRNAME";
char channelName[MQ_CHANNEL_NAME_LENGTH+1]="MY_CHANNEL";
char hostname[1024]="MQSERVER(PORT)";
char UserId[32+1]="MyID";
char Password[32+1]="MyPWD";
strncpy(ClientConn.ConnectionName, hostname, MQ_CONN_NAME_LENGTH);
strncpy(ClientConn.ChannelName, channelName, MQ_CHANNEL_NAME_LENGTH);
mqCSP.AuthenticationType = MQCSP_AUTH_USER_ID_AND_PWD;
mqCSP.Version = MQCSP_VERSION_1;
mqCSP.CSPUserIdPtr = UserId;
mqCSP.CSPUserIdOffset = 0;
mqCSP.CSPUserIdLength = strlen(UserId);
mqCSP.CSPPasswordPtr = Password;
mqCSP.CSPPasswordOffset = 0;
mqCSP.CSPPasswordLength = strlen(Password);
ConnectOptions.SecurityParmsPtr = &mqCSP;
ConnectOptions.SecurityParmsOffset = 0;
ConnectOptions.ClientConnPtr = &ClientConn;
ConnectOptions.Version = MQCNO_VERSION_5;
MQCONNX (QMName, &ConnectOptions, &HConn, &CompCode, &Reason);
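For completeness, the result of the connect call can be checked right away (a minimal sketch using the standard cmqc.h definitions), which helps distinguish a failure in the security exit from a plain connection failure:
if (CompCode == MQCC_FAILED) {
    fprintf(stderr, "MQCONNX failed: CompCode=%ld Reason=%ld\n",
            (long)CompCode, (long)Reason);
}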
Then we use this code to retrieve the userID/PWD on the security exit.
Code:
memset (User, 0, pChDef->LongRemoteUserIdLength);
memset (Pass, 0, MQ_PASSWORD_LENGTH);
MakeCString(User,pChDef->LongRemoteUserIdPtr,pChDef->LongRemoteUserIdLength);
MakeCString(Pass,pChDef->RemotePassword,MQ_PASSWORD_LENGTH);
MQ Server: 7.1.0.2
Why in the world would you re-invent the wheel when there is a cheap product that does authentication against an LDAP server? If you spent more than a day programming this, you could instead have purchased a license for MQ Authenticate Security Exit and done something else.
MQ only flows the password in plain text. It is very easy to grab that password if you are familiar with MQ, or to just use Wireshark, since it knows/understands the MQ protocol.
MQ uses 2 different styles of flowing the UserID and Password between a client and server: "old" and "new" style. Different platforms support different styles: some support both directly, some support both indirectly, and some platforms do a conversion and flow both at the same time (very weird!).
It is fine if all of your applications can be rebuilt to use MQCONNX, but what if an application cannot be rebuilt? Or the application team does not want to do it, or the source code is lost? What about 3rd-party applications that do not support MQCONNX? What are you going to do?
What about MQ JNDI or CCDT (Client Channel Definition Tables) or the new "MQClient.ini" file? Have you thought about how you are going to handle these implementations?
You will easily spend 6 months (i.e. 1000 hours) building a working prototype that covers some of the issues I have highlighted. You are in for a world of hurt. I know; I've been there, done that.
We have the following snippet.
OSStatus createErr = PasteboardCreate(kPasteboardClipboard, &m_pboard);
if (createErr != noErr) {
    LOG((CLOG_DEBUG "failed to create clipboard reference: error %i", createErr));
}
This compiles fine; however, it fails when run from SSH, because there is no pasteboard available in the SSH session. The idea here, though, is to share clipboards between computers.
When run from desktop terminal, this works just fine. But when run from SSH, PasteboardCreate returns -4960 (aka, coreFoundationUnknownErr). I assume that the only way around this issue is to run the application from within the same environment as the pasteboard, but is this possible?
Synergy+ issue 67
Accessing the pasteboard directly looks to be a no-go. First, launchd won't register the processes [1] with the pasteboard server's mach port. You'd first need to find a way to get the pasteboard server's mach port (mach_port_names?). Also, direct communication between user sessions is prohibited [2], and other communication is limited. I'm not sure if your program will have the rights to connect to the pasteboard server.
Here's a first shot at an illustrative example of using Apple events to get and set the clipboard as a string. Error handling is minimal to nonexistent (I'm not certain how I feel about require_noerr). If you're going to get/set clipboard data multiple times during a run, you can save the Apple events and, when copying to the clipboard, use AECreateDesc & AEPutParamDesc or (maybe) AEBuildParameters. AEVTBuilder might be of use.
NSString* paste() {
    NSString *content = nil;
    AppleEvent paste, reply = { typeNull, 0L };
    AEBuildError buildError = { typeNull, 0L };
    AEDesc clipDesc = { typeNull, 0L };
    OSErr err;

    err = AEBuildAppleEvent(kAEJons, kAEGetClipboard,
            typeApplicationBundleID, "com.apple.finder", strlen("com.apple.finder"),
            kAutoGenerateReturnID, kAnyTransactionID,
            &paste, &buildError,
            "");
    require_noerr(err, paste_end);

    err = AESendMessage(&paste, &reply, kAEWaitReply, kAEDefaultTimeout);
    require_noerr(err, pasteErr_sending);

    err = AEGetParamDesc(&reply, keyDirectObject, typeUTF8Text, &clipDesc);
    require_noerr(err, pasteErr_getReply);

    Size dataSize = AEGetDescDataSize(&clipDesc);
    char* clipData = malloc(dataSize);
    if (clipData) {
        err = AEGetDescData(&clipDesc, clipData, dataSize);
        if (noErr == err) {
            // the descriptor data is not NUL-terminated, so build the string from bytes
            content = [[[NSString alloc] initWithBytes:clipData length:dataSize encoding:NSUTF8StringEncoding] autorelease];
        }
        free(clipData);
    }
    AEDisposeDesc(&clipDesc);

pasteErr_getReply:
    AEDisposeDesc(&reply);
pasteErr_sending:
    AEDisposeDesc(&paste);
paste_end:
    return content;
}
OSStatus copy(NSString* clip) {
    AppleEvent copy, reply = { typeNull, 0L };
    AEBuildError buildError = { typeNull, 0L };
    OSErr err = AEBuildAppleEvent(kAEJons, kAESetClipboard,
            typeApplicationBundleID, "com.apple.finder", strlen("com.apple.finder"),
            kAutoGenerateReturnID, kAnyTransactionID,
            &copy, &buildError,
            "'----':utf8(#)",
            AEPARAMSTR([clip UTF8String])
            /*
            "'----':obj {form: enum(prop), want: type(#), seld: type(#), from: null()}"
            "data:utf8(#)",
            AEPARAM(typeUTF8Text),
            AEPARAM(pClipboard),
            AEPARAMSTR([clip UTF8String])
            */
            );
    if (aeBuildSyntaxNoErr != buildError.fError) {
        return err;
    }
    AESendMessage(&copy, &reply, kAENoReply, kAEDefaultTimeout);
    AEDisposeDesc(&reply);
    AEDisposeDesc(&copy);
    return noErr;
}
I'm leaving the Core Foundation approach above, but you'll probably want to use NSAppleEventDescriptor to extract the clipboard contents from the Apple Event reply.
    err = AESendMessage(&paste, &reply, kAEWaitReply, kAEDefaultTimeout);
    require_noerr(err, pasteErr_sending);

    // nsReply takes ownership of reply
    NSAppleEventDescriptor *nsReply = [[NSAppleEventDescriptor alloc] initWithAEDescNoCopy:&reply];
    content = [[nsReply descriptorAtIndex:1] stringValue];
    [nsReply release];

pasteErr_sending:
    AEDisposeDesc(&paste);
paste_end:
    return content;
}
An NSAppleEventDescriptor is also easier to examine in a debugger than an AEDesc. To examine replies, you can also set the AEDebugReceives environment variable when using osascript or Script Editor.app:
AEDebugReceives=1 osascript -e 'tell application "Finder" to get the clipboard'
References:
"Configuring User Sessions"
"Communicating Across Login Sessions"
Mach Kernel Interface, especially:
mach_msg_header
mach_msg
CFMessagePort Reference (mach port wrapper):
CFMessagePortCreateRemote
CFMessagePortSendRequest
Apple Events Programming Guide
Apple Event Manager Reference
AEBuild*, AEPrint* and Friends
AEBuildAppleEvent on CocoaDev
Mac OS X Debugging Magic (for AEDebugSends and other AEDebug* environment variables)
I tried doing it in AppleScript, and it worked (even when invoked via SSH). My script is as follows:
#!/usr/bin/osascript
on run
    tell application "Finder"
        display dialog (get the clipboard)
    end tell
end run
This definitely isn't an ideal solution, but perhaps if you worked out how AppleScript does it then it'd help you implement it yourself.
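If a stopgap from C is acceptable, the same trick can be driven through osascript with popen (a sketch; the helper name readClipboard is made up, and error handling is omitted):
#include <stdio.h>
/* Read the clipboard by delegating to AppleScript, mirroring the script above. */
int readClipboard(char *buf, size_t bufsize)
{
    FILE *p = popen("osascript -e 'tell application \"Finder\" to get the clipboard'", "r");
    if (!p) return -1;
    size_t n = fread(buf, 1, bufsize - 1, p);
    buf[n] = '\0';
    return pclose(p);
}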
Take a look at pbpaste (getting the contents of the clipboard) and pbcopy (copying contents TO the clipboard). Works fine, also over SSH. :)
[screenshots: terminal output on Mac OS X Snow Leopard and on Ubuntu 9.04 (source: hillrippers.ch)]
You can access the pasteboard with PasteboardCreate via SSH on SnowLeopard but not on Leopard or Tiger.
You probably don't want to use pbcopy and pbpaste for a full pasteboard sync since those only deal with plain text, RTF, and EPS. If, for example, you copy an image and then try to write it out with pbpaste, you'll get no output.
Assuming you have an app running in the user's session on both computers, you could serialize the pasteboard data out to a file, transfer it over SSH, read it from your app on the remote side, and then put the pasteboard data on the remote pasteboard. However, getting the pasteboard serialization right may be tricky and I'm not sure how portable pasteboard data is between OSes and architectures.
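A minimal sketch of the serialization direction, using the same Carbon Pasteboard API as the question (dumpFirstItem is a made-up name; a real sync tool would also record the flavor names and item structure so the remote side can rebuild the items):
#include <ApplicationServices/ApplicationServices.h>
/* Walk every flavor of the first pasteboard item; writing the raw
 * CFData out to files is left as a comment. */
void dumpFirstItem(PasteboardRef pboard)
{
    ItemCount count = 0;
    PasteboardSynchronize(pboard);          /* pick up any pending changes */
    PasteboardGetItemCount(pboard, &count);
    if (count == 0) return;
    PasteboardItemID item;
    PasteboardGetItemIdentifier(pboard, 1, &item);  /* item indices are 1-based */
    CFArrayRef flavors;
    if (PasteboardCopyItemFlavors(pboard, item, &flavors) != noErr) return;
    for (CFIndex i = 0; i < CFArrayGetCount(flavors); ++i) {
        CFStringRef flavor = (CFStringRef)CFArrayGetValueAtIndex(flavors, i);
        CFDataRef data;
        if (PasteboardCopyItemFlavorData(pboard, item, flavor, &data) == noErr) {
            /* ... write CFDataGetBytePtr(data) / CFDataGetLength(data)
             * to a file named after the flavor ... */
            CFRelease(data);
        }
    }
    CFRelease(flavors);
}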