Create FTP mount using GIO library - c

I'm trying to use GIO. I figured out how to use GVolumeMonitor to catch volume changes and get a list of volumes. The g_volume_monitor_get_mounts function gives me a list of existing GMounts. Each of them can represent an HDD partition or a mounted network share (FTP, SMB, SFTP, etc.). Mounting an HDD partition seems to be possible using g_volume_mount. But how do I create a GMount representing a network share? Which classes are responsible for this?
Here is my code:
GVolumeMonitor* monitor = g_volume_monitor_get();
GList* list = g_volume_monitor_get_mounts(monitor);
for (; list; list = list->next) {
    GMount* mount = static_cast<GMount*>(list->data);
    GFile* file = g_mount_get_root(mount);
    qDebug() << "Mount(" << g_mount_get_name(mount)
             << ", " << g_file_get_path(file) << ")";
}
(I know there should be g_object_unref and g_list_free calls in there.)
Output:
Mount( SFTP for ri on host.org , /home/ri/.gvfs/SFTP for ri on host.org )
Mount( Yellow hard disk , /media/Yellow hard disk )
I created the first SFTP mount using Nautilus. Now I want to implement this functionality myself. The target OS is Ubuntu 12.04.

I think you might be looking for g_file_mount_enclosing_volume()
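A minimal sketch of how that call could be used, compiled as C++ like the snippet above. The ftp:// URI and the GMainLoop scaffolding are illustrative assumptions, not taken from the question; in a Qt app you would already have a running event loop:

#include <gio/gio.h>

static void mount_ready(GObject* source, GAsyncResult* res, gpointer user_data)
{
    GError* error = NULL;
    // Finish the async call; on success a new GMount appears in the GVolumeMonitor.
    if (!g_file_mount_enclosing_volume_finish(G_FILE(source), res, &error)) {
        g_printerr("Mount failed: %s\n", error->message);
        g_error_free(error);
    }
    g_main_loop_quit(static_cast<GMainLoop*>(user_data));
}

int main()
{
    GMainLoop* loop = g_main_loop_new(NULL, FALSE);
    GFile* file = g_file_new_for_uri("ftp://host.org/");  // hypothetical share
    GMountOperation* op = g_mount_operation_new();        // carries the credentials
    g_mount_operation_set_anonymous(op, TRUE);
    g_file_mount_enclosing_volume(file, G_MOUNT_MOUNT_NONE, op,
                                  NULL, mount_ready, loop);
    g_main_loop_run(loop);  // wait for the async result
    g_object_unref(op);
    g_object_unref(file);
    g_main_loop_unref(loop);
    return 0;
}

For shares that need credentials, set them on the GMountOperation (g_mount_operation_set_username() / g_mount_operation_set_password()) or connect to its "ask-password" signal before starting the mount; this is essentially what Nautilus does behind its "Connect to Server" dialog.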

Permissions of a Windows service regarding modified timestamp reading

I have a Qt app running on Windows Server 2016 that monitors files on a mapped network drive for changes via QFileSystemWatcher. Change notifications do not work, so I want to fall back to regularly polling the "lastModified" timestamp. The problem is: if I run the app as a desktop app, the timestamp is read correctly. If I run the same app as a service, the timestamp can't be read. And I have to run it as a service to keep it alive permanently.
Is this a restriction of a Windows service?
If yes, is there a workaround to read out the timestamp?
Here is how I try to read the timestamp:
for (int i = 0; i < conf_.files_.count(); i++)
{
    QString f = conf_.files_.at(i);
    QFileInfo finfo(f);
    if (finfo.lastModified().isNull() || !finfo.lastModified().isValid()) {
        QString line(conf_.iniFileGroupName_ + ": Could not read lastModified-timestamp of file " + f);
        ApplicationLogger::instance().log(line);
        // TODO: Use winapi directly and see if it helps
        // https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getfileattributesa
    }
    else {
        if (!lastModified_.contains(f) ||
            lastModified_.value(f) < finfo.lastModified())
        {
            QString line(conf_.iniFileGroupName_ + ": Adding file " + f + " to backlog based on modification timer. Last modified = " + finfo.lastModified().toString());
            ApplicationLogger::instance().log(line);
            addToBacklog(f, finfo.lastModified());
            lastModified_.insert(f, finfo.lastModified());
            checkBacklogAndCopy();
        }
    }
}
Ideally I'd like to get notified and not poll, but this is out of scope for this question.
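For reference, a minimal sketch of the Win32 fallback the TODO hints at, using GetFileAttributesExW (the wide-character Ex variant of the GetFileAttributesA API linked in the comment). The function name is mine and this is untested under a service account; note also that mapped drive letters are per logon session, so a service typically needs the UNC path instead of the drive letter:

#include <windows.h>
#include <QDateTime>
#include <QString>

// Reads the last-write time directly via the Win32 API.
// Returns an invalid QDateTime on failure, mirroring QFileInfo's behaviour.
QDateTime lastModifiedViaWinApi(const QString& path)
{
    WIN32_FILE_ATTRIBUTE_DATA attr;
    if (!GetFileAttributesExW(reinterpret_cast<const wchar_t*>(path.utf16()),
                              GetFileExInfoStandard, &attr)) {
        return QDateTime();
    }
    // FILETIME counts 100-ns intervals since 1601-01-01 (UTC);
    // convert that to milliseconds since the Unix epoch.
    ULARGE_INTEGER t;
    t.LowPart = attr.ftLastWriteTime.dwLowDateTime;
    t.HighPart = attr.ftLastWriteTime.dwHighDateTime;
    const qint64 msecs = (t.QuadPart - 116444736000000000ULL) / 10000ULL;
    return QDateTime::fromMSecsSinceEpoch(msecs, Qt::UTC);
}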

Deploy OVA to VCenter with Terraform

I am by no means a knowledgeable VMware user at this point. This might just be a case where I don't understand some essential concepts.
I'm trying to deploy a VM into vCenter; I have an OVA (a template?) that I want to deploy with.
Currently I have unpacked the OVA, uploaded the VMDKs I found therein to a datastore, and then used this Terraform definition:
provider "vsphere" {
user = "${var.vsphere_user}"
password = "${var.vsphere_password}"
vsphere_server = "${var.vsphere_server}"
allow_unverified_ssl = true
}
resource "vsphere_virtual_machine" "primary" {
name = "myvm"
vcpu = 2
memory = 16384
datacenter = "${var.vsphere_datacenter}"
resource_pool = "/DATA_CENTER/host/10.132.260.000"
network_interface = {
label = "Private Network - vmnic0 vmnic2"
ipv4_address = "10.132.260.001"
ipv4_gateway = "10.132.260.002"
ipv4_prefix_length = 26
}
disk {
datastore = "datastore1"
vmdk = "/path/to/vmdk/"
bootable = true
type = "thin"
}
}
This gets stuck because it can't open the VMDK. When I deploy the OVA with ovftool instead, the VMDK the VM is deployed with is very different.
An error was received from the ESX host while powering on VM myvm.
Failed to start the virtual machine. Module DiskEarly power on failed.
Cannot open the disk
'/vmfs/volumes/557fc17b-c078f45c-f5bf-002590faf644/template_folder/my_vm.vmdk'
or one of the snapshot disks it depends on. The file specified is not
a virtual disk
Should I be uploading the OVA file to the datastore instead and change my disk block to look like:
disk {
  datastore = "datastore1"
  template  = "/path/to/ova/"
  type      = "thin"
}
Or am I just out of luck here? Also, the Terraform provider for vSphere doesn't correctly receive the error from vCenter and just continues to poll even though the VM failed.
The OVA contains streamOptimized disks. If you upload one directly to the datastore, vSphere doesn't recognize it as a VMDK a VM can run from.
You can use the vmware-vdiskmanager tool to convert the streamOptimized disk to a sparse disk (-t 0 requests a single growable sparse disk):
vmware-vdiskmanager -r ova_extracted.vmdk -t 0 destination.vmdk
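(An OVA is just a tar archive, by the way, so something like
tar -xvf appliance.ova
extracts the streamOptimized VMDKs to feed into the conversion; the file name is illustrative.)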

How to perform a catalog update correctly in FDT 1.x?

I have heard rumours that performing a catalog update correctly in FDT 1.x is quite complex. There seem to be more steps than the obvious ones, which are, in pseudocode:
foreach (progid in Registry having component category "FDT DTM")
{
    dtm = CoCreateInstance(progid);
    StartDTMAccordingStateMachine(dtm);
    info = dtm.GetInformation("FDT");
    catalog.Add(info);
    ShutdownDTMAccordingStateMachine(dtm);
    Release(dtm);
}
I could not find any hints in the FDT specification that would require a more complex catalog update procedure, so are the rumours true? What makes a correct catalog update procedure so complex?
Basically your idea of the catalog update is correct. Unfortunately the rumours are also true: doing a catalog update involves quite a bit more thought, in the following areas:
Frame application interface considerations
During the catalog update, the DTM is not part of a project yet. Therefore the frame application could be implemented without project specific interfaces such as IFdtTopology or IFdtBulkData. However, many DTMs will query for those interfaces immediately and throw an exception if the frame application does not support those interfaces.
Also, during the catalog update, the frame application could expect that the DTM works without user interface, because this is a batch operation which should not require user interaction. This means the frame application could be implemented without the IFdtActiveX and IFdtDialog interfaces. Unfortunately there are also DTMs that use those interfaces during catalog update time.
.NET considerations
Doing a catalog update on a system with many DTMs installed could require a lot of memory. Therefore some frame applications do the catalog update in an external process. While this is a good idea, you need to consider the FDT .NET specifications and best practice documents.
The bottom line here is: the external process must be a .NET 2.0 process, independent of the actual implementation technology of your frame application. If you have a C++ implementation, you'll need a very small .NET 2.0 object loaded before any DTM is started.
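One way to do that from a C++ frame application is to bind the update process to CLR 2.0 explicitly before any DTM is started. A sketch using the legacy .NET hosting API (the function name is mine; error handling omitted):

#include <mscoree.h>
#pragma comment(lib, "mscoree.lib")

// Loads CLR 2.0 into the process so that any .NET DTM started later
// runs against it rather than pulling in a different runtime.
bool PinClr20()
{
    ICLRRuntimeHost* host = NULL;
    HRESULT hr = CorBindToRuntimeEx(L"v2.0.50727", NULL, 0,
                                    CLSID_CLRRuntimeHost, IID_ICLRRuntimeHost,
                                    reinterpret_cast<LPVOID*>(&host));
    if (FAILED(hr))
        return false;
    return SUCCEEDED(host->Start());
}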
Memory considerations
Since FDT 1.x is a conglomerate of COM and .NET, there will be pinned objects. This makes it likely that your application suffers from small object heap fragmentation. In addition FDT passes XMLs as strings which makes it more likely that your application suffers from large object heap fragmentation. The overall combination is very dangerous.
One solution might be to start a limited number of DTMs in the same process and then restart the process, e.g. like
updateProcess = StartProcess();
dtmCount = 0;
foreach (progid in Registry having component category "FDT DTM")
{
    dtmCount++;
    if (dtmCount % 10 == 0)
    {
        // restart process to avoid out of memory situation
        updateProcess.SignalShutdown();
        updateProcess.WaitForExit();
        updateProcess = StartProcess();
    }
    updateProcess.StartDTM(progid);
    info = updateProcess.GetDtmInformation();
    catalog.Add(info);
    updateProcess.ShutdownDTM();
}
// don't forget to shut down the last process instance
updateProcess.SignalShutdown();
updateProcess.WaitForExit();
In the update process you'll need to create the COM object and follow the state machine etc.
FDT 1.2.1 scanning information
In FDT 1.2.1, additional information was introduced to better recognize devices during a hardware scan. Although there is no fully FDT 1.2.1 compliant DTM at the time of writing, many FDT 1.2.0 DTMs implement the additional interface IDtmInformation2 to support device detection.
For you as the frame application developer, this means that you have to extend the GetDtmInformation() method in the update process:
T GetDtmInformation()
{
    var result = new T(); // a type defined by you
    result.info = dtm.GetInformation();
    foreach (deviceType in result.info)
    {
        foreach (protocol in deviceType)
        {
            deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
            result.deviceinfo.Add(deviceInfo);
        }
    }
    return result;
}
Schema path updates
FDT 1.2.0 had the problem that the user needed to install XDR schema definitions manually, which was very inconvenient. FDT 1.2.1 solves this problem by letting the DTM bring XDR schemas with it. The definition is in the XML from GetInformation() at the XML elements <FDT>, <DtmInfo>, <DtmSchemaPaths>. The DTM publishes a directory name there. In theory, this makes installing the XDR schemas an easy task: we just need to update GetDtmInformation() a little:
T GetDtmInformation()
{
    var result = new T(); // a type defined by you
    result.info = dtm.GetInformation();
    schemaPaths = result.info.SelectNodes("/FDT/DtmInfo/DtmSchemaPaths/DtmSchemaPath");
    foreach (dtmSchemaPath in schemaPaths)
    {
        CopyFiles(from dtmSchemaPath to frameSchemaPath);
    }
    // *) read on, more code needed here
    foreach (deviceType in result.info)
    {
        foreach (protocol in deviceType)
        {
            deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
            result.deviceinfo.Add(deviceInfo);
        }
    }
    return result;
}
Unfortunately there is a logical bug in the sequence now. Since the DTM was already started, it has already asked the frame application for the schema path (using IFdtContainer::GetXmlSchemaPath()) and it has already set up the schema cache to validate XMLs. The DTM cannot be notified about updates in the schema path.
Therefore you need to restart the DTM to be sure that it gets the latest version of the XDR schemas. In code, this means updating the whole method to:
T GetDtmInformation()
{
    var result = new T(); // a type defined by you
    result.info = dtm.GetInformation();
    schemaPaths = result.info.SelectNodes("/FDT/DtmInfo/DtmSchemaPaths/DtmSchemaPath");
    schemasUpdated = false;
    foreach (dtmSchemaPath in schemaPaths)
    {
        schemasUpdated |= CopyFiles(from dtmSchemaPath to frameSchemaPath);
    }
    if (schemasUpdated)
    {
        // restart the DTM to make sure it uses latest versions of the schemas
        ShutdownDTMAccordingStateMachine(dtm);
        Release(dtm);
        dtm = CoCreateInstance(progid);
        StartDTMAccordingStateMachine(dtm);
        result.info = dtm.GetInformation("FDT");
    }
    foreach (deviceType in result.info)
    {
        foreach (protocol in deviceType)
        {
            deviceInfo = dtm.GetDeviceIdentificationInformation(deviceType, protocol);
            result.deviceinfo.Add(deviceInfo);
        }
    }
    return result;
}
XDR schema version information issue
In the previous chapter I used a simple CopyFiles() operation to update the XDR schema files. This method is not as simple as it seems, because it needs to perform a version number check.
The version is given in the XDR schema like this:
<AttributeType name="schemaVersion" dt:type="number" default="1.0"/>
The attribute #default defines the version number of the schema. #schemaVersion itself is not used anywhere else.
Version numbers that are used at the time of writing:
1.0 // e.g. FDTCIPCommunicationSchema CIP version 1.1-02
1.1 // e.g. FDTCIPChannelParameterSchema CIP version 1.1-02
1.00 // e.g. DTMIOLinkDeviceSchema IO Link version 1.0-1
1.21 // e.g. FDTIOLinkChannelParameterSchema IO Link version 1.0-1
1.22 // e.g. FDTHART_ExtendedCommunicationSchema
Version 1.21 strongly suggests that it correlates to FDT version 1.2.1, which brings up the question of how to interpret the version number. There are three possible ways of interpreting it:
a) as a simple float number as defined in the datatype of XDR (dt:type="number")
b) as a version number in format major.minor
c) as a version number in format major.minorbuild where minor and build are simply concatenated
Ok, I'll leave that puzzle up to the reader. I have suggested a document clarifying this version number issue.
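For illustration, an ExtractVersion() that follows interpretation a) might look like this (a C++ sketch; the regex-based parsing and the 0.0 fallback are my assumptions, not something the specification prescribes):

#include <fstream>
#include <regex>
#include <sstream>
#include <string>

// Pulls the schemaVersion #default attribute out of an XDR schema file, e.g.
// <AttributeType name="schemaVersion" dt:type="number" default="1.0"/>,
// and compares it as a plain number per interpretation a).
double ExtractVersion(const std::string& path)
{
    std::ifstream in(path);
    std::stringstream buf;
    buf << in.rdbuf();
    const std::string text = buf.str();

    static const std::regex re("name=\"schemaVersion\"[^>]*default=\"([0-9.]+)\"");
    std::smatch m;
    if (std::regex_search(text, m, re))
        return std::stod(m[1].str());
    return 0.0; // no version found: treat as oldest so any real version wins
}

Note that under interpretation a) a hypothetical 1.3 would count as newer than 1.21, while under b) (minor 3 vs. minor 21) it would not; that is exactly the ambiguity described above.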
Anyway, this is our CopyFiles() method:
bool CopyFiles(sourceDir, destinationDir)
{
    filesCopied = false;
    foreach (filename in sourceDir)
    {
        existingVersion = ExtractVersion(destinationDir + filename);
        newVersion = ExtractVersion(sourceDir + filename);
        if (newVersion > existingVersion)
        {
            File.Copy(sourceDir + filename, destinationDir + filename);
            filesCopied = true;
        }
    }
    return filesCopied;
}
XDR schema update impact on other DTMs
In the last chapter we return a flag from CopyFiles() in order to determine whether or not the DTM needs to be restarted in GetDtmInformation(). However, this update may not only affect the current DTM, it may also affect other DTMs of the same protocol which have been added to the catalog before.
While you can simply restart the whole catalog update from scratch, this would imply a huge performance impact. The better way seems to do it selectively.
To apply a selective approach, you need to maintain a list of protocols that were updated (in T GetDtmInformation()):
foreach (dtmSchemaPath in schemaPaths)
{
    schemasUpdated = CopyFiles(from dtmSchemaPath to frameSchemaPath);
    if (schemasUpdated)
    {
        listOfChangedProtocols.Add(ExtractProtocolId(dtmSchemaPath));
    }
}
And of course, don't forget to re-update the catalog for affected DTMs:
affectedDtms = catalog.GetDtmsForProtocols(listOfChangedProtocols);
// TODO: perform catalog update again
// NOTE: remember that this might apply recursively
Getting protocol groups right
Next, you need to know the concept of protocol groups. A protocol group shares XDR schema files across different protocols, where each protocol is identified by a protocol ID. A good example is the CIP protocol family, which consists of the single protocols DeviceNet, CompoNet and Ethernet/IP.
These protocols share a common set of XDR schema files, so you'll find the same file three times on your hard disk. This duplication also has some impact on the catalog update since you need to update all copies even if the DTM comes for a single protocol only.
The reason is in the way a schema cache is constructed: when adding XDR schemas to the schema cache, the first file will win. Other files with the same name will not be added any more. Therefore it is important to ensure that the first file added to the cache is the one with the highest version number. This can only be achieved by updating all copies to the latest version.
This results in an update of the CopyFiles() method:
List<protocolID> CopyFiles(sourceDir, destinationDir)
{
    protocolsChanged = new List<protocolID>();
    foreach (filename in sourceDir)
    {
        foreach (subdirectory in destinationDir)
        {
            files = GetFiles(subdirectory, pattern = filename);
            if (files.Count == 1)
            {
                if (UpdateXDRConsideringVersionNumber(sourceDir, subdirectory, filename))
                {
                    protocolsChanged.Add(ExtractProtocolId(subdirectory));
                }
            }
        }
    }
    return protocolsChanged;
}

bool UpdateXDRConsideringVersionNumber(sourceDir, destinationDir, filename)
{
    existingVersion = ExtractVersion(destinationDir + filename);
    newVersion = ExtractVersion(sourceDir + filename);
    if (newVersion > existingVersion)
    {
        File.Copy(sourceDir + filename, destinationDir + filename);
        return true;
    }
    return false;
}
Device DTMs and schema paths
For whatever reason, it is defined that only communication DTMs and gateway DTMs need to bring XDR schemas with them. The rationale behind that was probably that you cannot use a device DTM without a communication or gateway DTM.
Unfortunately, when querying the Windows Registry for DTMs, you cannot predict the order in which you get DTMs. This may lead to the case that you get a device DTM first. Starting this DTM and getting information from it may result in errors or invalid XML if there is no XDR schema for the protocol of the DTM yet.
So you need to continue the catalog update, hopefully find a communication DTM or gateway DTM of the same protocol which brings the XDR schemas. Then you start the device DTM again and it will deliver useful information.
This doesn't require any code changes. It should already work if you followed all the steps described before. The only thing to consider here is good error handling (but I'll not do that in pseudocode here).
Conclusion
Hopefully I have covered all the topics that are important to know in conjunction with the FDT 1.x catalog update. As you can see, this is not just a rumour.

Is it possible to access the Mac OS X pasteboard when logged in via SSH?

We have the following snippet.
OSStatus createErr = PasteboardCreate(kPasteboardClipboard, &m_pboard);
if (createErr != noErr) {
    LOG((CLOG_DEBUG "failed to create clipboard reference: error %i", createErr));
}
This compiles fine; however, it fails at runtime when called from SSH, because no pasteboard is available in the SSH session. The idea here is to share clipboards between computers.
When run from a desktop terminal, this works just fine. But when run from SSH, PasteboardCreate returns -4960 (a.k.a. coreFoundationUnknownErr). I assume the only way around this issue is to run the application within the same environment as the pasteboard, but is this possible?
Synergy+ issue 67
Accessing the pasteboard directly looks to be a no-go. First, launchd won't register the process[1] with the pasteboard server's Mach port. You'd first need to find a way to get the pasteboard server's Mach port (mach_port_names?). Also, direct communication between user sessions is prohibited[2], and other communication is limited. I'm not sure whether your program will have the rights to connect to the pasteboard server.
Here's a first shot at an illustrative example of using Apple events to get and set the clipboard as a string. Error handling is minimal to nonexistent (I'm not certain how I feel about require_noerr). If you're going to get/set clipboard data multiple times during a run, you can save the Apple events and, when copying to the clipboard, use AECreateDesc & AEPutParamDesc or (maybe) AEBuildParameters. AEVTBuilder might be of use.
NSString* paste() {
    NSString *content = nil;
    AppleEvent paste, reply = { typeNull, 0L };
    AEBuildError buildError = { typeNull, 0L };
    AEDesc clipDesc = { typeNull, 0L };
    OSErr err;

    err = AEBuildAppleEvent(kAEJons, kAEGetClipboard,
        typeApplicationBundleID, "com.apple.finder", strlen("com.apple.finder"),
        kAutoGenerateReturnID, kAnyTransactionID,
        &paste, &buildError,
        ""
    );
    require_noerr(err, paste_end);

    err = AESendMessage(&paste, &reply, kAEWaitReply, kAEDefaultTimeout);
    require_noerr(err, pasteErr_sending);

    err = AEGetParamDesc(&reply, keyDirectObject, typeUTF8Text, &clipDesc);
    require_noerr(err, pasteErr_getReply);

    Size dataSize = AEGetDescDataSize(&clipDesc);
    char* clipData = malloc(dataSize);
    if (clipData) {
        err = AEGetDescData(&clipDesc, clipData, dataSize);
        if (noErr == err) {
            content = [NSString stringWithCString:clipData encoding:NSUTF8StringEncoding];
        }
        free(clipData);
    }
    AEDisposeDesc(&clipDesc);
pasteErr_getReply:
    AEDisposeDesc(&reply);
pasteErr_sending:
    AEDisposeDesc(&paste);
paste_end:
    return content;
}
OSStatus copy(NSString* clip) {
    AppleEvent copy, reply = { typeNull, 0L };
    AEBuildError buildError = { typeNull, 0L };
    OSErr err = AEBuildAppleEvent(kAEJons, kAESetClipboard,
        typeApplicationBundleID, "com.apple.finder", strlen("com.apple.finder"),
        kAutoGenerateReturnID, kAnyTransactionID,
        &copy, &buildError,
        "'----':utf8(#)",
        AEPARAMSTR([clip UTF8String])
        /*
        "'----':obj {form: enum(prop), want: type(#), seld: type(#), from: null()}"
        "data:utf8(#)",
        AEPARAM(typeUTF8Text),
        AEPARAM(pClipboard),
        AEPARAMSTR([clip UTF8String])
        */
    );
    if (aeBuildSyntaxNoErr != buildError.fError) {
        return err;
    }
    AESendMessage(&copy, &reply, kAENoReply, kAEDefaultTimeout);
    AEDisposeDesc(&reply);
    AEDisposeDesc(&copy);
    return noErr;
}
I'm leaving the Core Foundation approach above, but you'll probably want to use NSAppleEventDescriptor to extract the clipboard contents from the Apple Event reply.
    err = AESendMessage(&paste, &reply, kAEWaitReply, kAEDefaultTimeout);
    require_noerr(err, pasteErr_sending);

    // nsReply takes ownership of reply
    NSAppleEventDescriptor *nsReply = [[NSAppleEventDescriptor alloc] initWithAEDescNoCopy:&reply];
    content = [[nsReply descriptorAtIndex:1] stringValue];
    [nsReply release];
pasteErr_sending:
    AEDisposeDesc(&paste);
paste_end:
    return content;
}
An NSAppleEventDescriptor is also easier to examine in a debugger than an AEDesc. To examine replies, you can also set the AEDebugReceives environment variable when using osascript or Script Editor.app:
AEDebugReceives=1 osascript -e 'tell application "Finder" to get the clipboard'
References:
"Configuring User Sessions"
"Communicating Across Login Sessions"
Mach Kernel Interface, especially:
mach_msg_header
mach_msg
CFMessagePort Reference (mach port wrapper):
CFMessagePortCreateRemote
CFMessagePortSendRequest
Apple Events Programming Guide
Apple Event Manager Reference
AEBuild*, AEPrint* and Friends
AEBuildAppleEvent on CocoaDev
Mac OS X Debugging Magic (for AEDebugSends and other AEDebug* environment variables)
I tried doing it in AppleScript, and it worked (even when invoked via SSH). My script is as follows:
#!/usr/bin/osascript
on run
    tell application "Finder"
        display dialog (get the clipboard)
    end tell
end run
This definitely isn't an ideal solution, but perhaps if you worked out how AppleScript does it then it'd help you implement it yourself.
Take a look at pbpaste (getting the contents of the clipboard) and pbcopy (copying contents TO the clipboard). Works fine, also over SSH. :)
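For example, to pull and push the remote machine's clipboard over SSH (host name hypothetical):
ssh me@my-mac.local pbpaste > remote-clipboard.txt
cat local-file.txt | ssh me@my-mac.local pbcopy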
(Screenshots demonstrating this on Mac OS X Snow Leopard and on Ubuntu 9.04; source: hillrippers.ch)
You can access the pasteboard with PasteboardCreate via SSH on SnowLeopard but not on Leopard or Tiger.
You probably don't want to use pbcopy and pbpaste for a full pasteboard sync since those only deal with plain text, RTF, and EPS. If, for example, you copy an image and then try to write it out with pbpaste, you'll get no output.
Assuming you have an app running in the user's session on both computers, you could serialize the pasteboard data out to a file, transfer it over SSH, read it from your app on the remote side, and then put the pasteboard data on the remote pasteboard. However, getting the pasteboard serialization right may be tricky and I'm not sure how portable pasteboard data is between OSes and architectures.

Can I download an SQLite db on /sdcard and access it from my Android app?

I already found out that there is no way to bundle files in an .apk and have them on /sdcard; the best option so far is to download the large files on first run. I came across a tutorial describing how to bundle an SQLite db with the apk and then copy it so that it can be accessed with SQLiteDatabase (thus doubling the space needed, and not using /sdcard at all).
http://developer.android.com/guide/topics/data/data-storage.html#db says all databases MUST be in /data/data/package_name/databases.
Is that really so? Is there a way to trick the framework into opening a database that is placed on the /sdcard partition? Is there a way to use another SQLite java wrapper/framework to access such databases?
If the answer to the above is 'No', what other options do I have? My data is very well represented in a relational model, but is just too big, plus, I want to be able to update it without the need to reinstall/upgrade the entire app.
Sure you can. The docs are a little conflicting about this, as they also say that no limitations are imposed. I think they should say that relative paths resolve to the location above, and that databases there ARE private. Here is the code you want:
File dbfile = new File("/sdcard/mydb.sqlite" );
SQLiteDatabase db = SQLiteDatabase.openOrCreateDatabase(dbfile, null);
System.out.println("Its open? " + db.isOpen());
Try this:
String dir = Environment.getExternalStorageDirectory().getPath();
File dbfile = new File(dir+"/mydb.sqlite");
SQLiteDatabase db = SQLiteDatabase.openOrCreateDatabase(dbfile, null);
System.out.println("Its open? " + db.isOpen());
Just an idea:
You can put your database.db into the assets folder.
If your database.db file is larger than 2 MB, the system is unable to open it as a compressed asset, so you need another option:
You can rename your database.db to, for example, database.jit or database.mp3 (extensions that are not compressed in the APK); then, on first run, you can rename it back to database.db.
check this out ...
storing android application data on SD Card
I'm sharing the following code. Declare opcionesMenu as a Vector<String>:
Vector<String> opcionesMenu = new Vector<String>();

// the database lives on the SD card; I adapted this from BlackBerry (net.rim) code
String rutaDB = "/storage/extSdCard/Android/databases/Offshore.db";
SQLiteDatabase db = SQLiteDatabase.openOrCreateDatabase(rutaDB, null);

Cursor cursor = db.rawQuery("SELECT Codigo, Nombre FROM Ruta ORDER BY Codigo, Nombre", null);
if (cursor.moveToFirst())
{
    do
    {
        opcionesMenu.add(cursor.getString(0) + " - " + cursor.getString(1));
    } while (cursor.moveToNext());
}
cursor.close(); // don't leak the cursor
db.close();
