Cannot read network files in my OPC server (C++)

A little background info:
The PC is a Windows 7 64-bit machine with all service packs and Windows updates.
I wrote an OPC server using LightOPC as the base, and, as the title says, it has issues reading network files. If I run the program manually, it reads the files just fine.
So there is obviously no permissions issue when running it manually, but something else happens when the client (FactoryTalk View 8.0) launches it.
The problem occurs when the program is started during the normal OPC connection launch. In the code, std::strerror(errno) returns
no such file or directory
which is obviously a lie. So hopefully someone can shed some light on what I can do to adjust the code (maybe raise some security level or something like that) to let it read from these network files (about 10 total). I log data to the Windows Event Viewer, including the user name used to attach to the server; at program launch it is NT AUTHORITY\SYSTEM.
Here is one of the pieces of code used to read one of the files (fin.is_open() returns false, so it falls into the else branch, where I write the error to the OPC tag for debugging):
ifstream fin;
fin.open("\\\\SCS\\C\\Sorter Software\\Data\\TocStatus.dat"); // open a file | std::ios::binary | std::ios::app
if (fin.is_open())
{
    char buf[512];
    fin.getline(buf, 512);
    char *theString = TrimCString(buf);
    if (strlen(theString))
        mbstowcs_s(NULL, V_BSTR(&tagValues[TAGS_MOTIONTL_STATUS_TOC].tvValue),
                   strlen(theString) * 2, theString, strlen(theString));
    if (use_window)
        ListView_SetItemText(TheListView, TAGS_MOTIONTL_STATUS_TOC - 1, 1, theString);
    if (tagValues[TAGS_MOTIONTL_STATUS_TOC].tvState.tsQuality == OPC_QUALITY_BAD)
    {
        tagValues[TAGS_MOTIONTL_STATUS_TOC].tvState.tsQuality = OPC_QUALITY_GOOD;
        SetListViewItem(TAGS_MOTIONTL_STATUS_TOC, tagActivationFlags[TAGS_MOTIONTL_STATUS_TOC]);
    }
    if (tagValues[TAGS_MOTIONTL_STATUS_TOC].tvState.tsQuality != OPC_QUALITY_GOOD)
    {
        tagValues[TAGS_MOTIONTL_STATUS_TOC].tvState.tsQuality = OPC_QUALITY_GOOD;
        SetListViewItem(TAGS_MOTIONTL_STATUS_TOC, 1);
    }
}
else
{
    V_BSTR(&tagValues[TAGS_MOTIONTL_STATUS_TOC].tvValue) = SysAllocString(L"UNKNOWN ERROR9 ");
    mbstowcs_s(NULL, V_BSTR(&tagValues[TAGS_MOTIONTL_STATUS_TOCLOADED].tvValue),
               strlen(std::strerror(errno)) * 2, std::strerror(errno), strlen(std::strerror(errno)));
    tagValues[TAGS_MOTIONTL_STATUS_TOC].tvState.tsQuality = OPC_QUALITY_GOOD;
    if (use_window)
    {
        ListView_SetItemText(TheListView, TAGS_MOTIONTL_STATUS_TOC - 1, 1, "UNKNOWN");
        SetListViewItem(TAGS_MOTIONTL_STATUS_TOC, 2);
    }
    UpdateConnStatus(false);
}
fin.close();
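
As a debugging aid, probing the same path with the raw Win32 API would surface the underlying error code that the CRT collapses into "no such file or directory". This is only a sketch (not part of the server above), but CreateFileW and GetLastError are standard Win32 calls:

HANDLE h = CreateFileW(L"\\\\SCS\\C\\Sorter Software\\Data\\TocStatus.dat",
                       GENERIC_READ, FILE_SHARE_READ, NULL,
                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
if (h == INVALID_HANDLE_VALUE)
{
    // e.g. ERROR_ACCESS_DENIED (5) vs ERROR_BAD_NETPATH (53) tell very
    // different stories about what the launching account can see.
    DWORD err = GetLastError(); // log this to the Event Viewer
}
else
{
    CloseHandle(h);
}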

Related

How do I read OpenVINO IR models from memory with the OpenVINO C API

I am having trouble reading OpenVINO IR networks (XML and bin) from memory using ie_core_read_network_from_memory() in the OpenVINO 2021.4 C API (ie_c_api.h).
I suspect that I am creating the network weight blob incorrectly, but I cannot find any information on how to create weight blobs for networks correctly.
I have read the OpenVINO C API docs but cannot deduce from them what I am doing wrong. The OpenVINO code repo contains some C code samples, but none of the samples seem to use ie_core_read_network_from_memory().
Below is a cut-out of the code I am having trouble with.
// void* dmem->data - network memory buffer (float32)
// size_t dmem->size - size of network memory buffer (bytes)
ie_core_t* ov_core = NULL;
IEStatusCode status = ie_core_create("", &ov_core);
if (status != OK)
{
    // error handling
}

const dimensions_t weights_tensor_dims =
    { 4, { 1, 1, 1, dmem->size / sizeof(float) } };
tensor_desc_t weights_tensor_desc = { OIHW, weights_tensor_dims, FP32 };
ie_blob_t* ov_model_weight_blob = NULL;
status = ie_blob_make_memory_from_preallocated(
    &weights_tensor_desc, dmem->data, dmem->size, &ov_model_weight_blob);
if (status != OK)
{
    // error handling
}

// char* model_xml_desc - the model's XML string
uint8_t* ov_model_xml_content = (uint8_t*)model_xml_desc;
ie_network_t* ov_network = NULL;
size_t xml_sz = strlen((const char*)ov_model_xml_content);
status = ie_core_read_network_from_memory(
    ov_core, ov_model_xml_content, xml_sz, ov_model_weight_blob, &ov_network);
if (status != OK)
{
    // Always get "GENERAL_ERROR (-1)"
}
The code works fine down to the ie_core_read_network_from_memory() call which results in "GENERAL_ERROR".
I have tried two models that were converted from Tensorflow. One is a simple [X] -> [Y] regression model (single input value, single output value). The other is also a regression model [X_1, X_2, ..., X_9] -> [Y] (nine input values, single output value). They work fine when reading them from file with ie_core_read_network(), but for my use case I must provide the network as a binary memory buffer and XML string.
I would appreciate any help, either by pointing out what I am getting wrong or directing me to some code samples that use ie_core_read_network_from_memory().
System information:
Windows 10
OpenVINO v2021.4.689
Microsoft Visual Studio 2019
UPDATE: An Intel employee reached out to me in another forum and pointed out that there is a unit test for ie_core_read_network_from_memory(). The unit test successfully reads a network from memory and made clear that I was indeed using a faulty tensor description to produce the weight blob, just as I suspected. Apparently the weight-blob descriptor should be one-dimensional, have memory layout ANY, and have datatype U8, even though the model weights are fp32.
From the unit test:
std::string bin_std = TestDataHelpers::generate_model_path("test_model", "test_model_fp32.bin");
const char* bin = bin_std.c_str();
//...
std::vector<uint8_t> weights_content(content_from_file(bin, true));
tensor_desc_t weights_desc { ANY, { 1, { weights_content.size() } }, U8 };
However, simply changing the tensor descriptor was not enough to get my code to work, so it remains for me to properly translate the C++ code from the unit test to my C environment before the issue can be considered solved.
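For reference, a minimal sketch of what that translation might look like in C, reusing the ie_c_api.h types from the snippet above (untested; dmem is the same hypothetical buffer as before):

// Corrected weight blob per the unit test: 1-D dims, ANY layout, U8 precision.
dimensions_t weights_dims = { 1, { dmem->size } };
tensor_desc_t weights_desc = { ANY, weights_dims, U8 };
ie_blob_t* weight_blob = NULL;
status = ie_blob_make_memory_from_preallocated(
    &weights_desc, dmem->data, dmem->size, &weight_blob);
if (status != OK)
{
    // error handling
}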
Thanks
Refer to the tensor_desc struct and the standard layout formats.
Apart from that, it is recommended to use the benchmark_app tool to test inference performance.

Can we host multiple VNC servers (using the LibVNCServer library) in the same process?

There is an example called camera.c in the LibVNCServer library which captures camera snapshots and populates the framebuffer used by the VNC server at intervals. My requirement is to do the same with MPEG transport streams (many sources instead of a single source like a camera); therefore, one VNC server per transport stream is required.
I read in the RFB protocol that we can host multiple VNC servers on the same host on ports starting from 5900 (5900+x). However, it would be better to host multiple VNC servers in the same process, so that unwanted I/O between the VNC servers and the process generating the data can be avoided.
Does LibVNCServer support that use case, or do I have to launch a VNC server process per video stream?
Note: I went through the library and saw that rfbScreenInfoPtr is passed around everywhere and is not static, but I could not conclude whether LibVNCServer is thread-safe because I am not familiar with C.
I tried to write a VNC server with server-side downscale ability, which serves one source as multiple streams:
int main(int argc, char** argv)
{
    ...
    rfbScreenInfoPtr rfbScreen_1080 = rfbGetScreen(&argc, argv, 1920, 1080, 8, 3, bpp);
    rfbScreenInfoPtr rfbScreen_720  = rfbGetScreen(&argc, argv, 1280, 720, 8, 3, bpp);

    rfbScreen_1080->frameBuffer = (char*)_aligned_malloc(1920*1080*bpp, 256);
    rfbScreen_720->frameBuffer  = (char*)_aligned_malloc(1280*720*bpp, 256);

    rfbScreen_1080->progressiveSliceHeight = 1080/2;
    rfbScreen_720->progressiveSliceHeight  = 720/2;

    rfbScreen_1080->cursor = rfbMakeXCursor(0, 0, NULL, NULL);
    rfbScreen_720->cursor  = rfbMakeXCursor(0, 0, NULL, NULL);

    rfbScreen_1080->port = 5900;
    rfbScreen_720->port  = 5901;

    rfbScreen_1080->alwaysShared = 1;
    rfbScreen_720->alwaysShared  = 1;

    rfbInitServer(rfbScreen_1080);
    rfbInitServer(rfbScreen_720);

    int begin = clock();
    while (rfbIsActive(rfbScreen_1080) || rfbIsActive(rfbScreen_720))
    {
        int end = clock();
        if (end - begin >= UPDATE_INTERVAL)
        {
            //printf("%d\n", end - begin);
            begin = clock() - (end - begin - UPDATE_INTERVAL);
            CaptureScreen(rfbScreen_1080, rfbScreen_720);
            rfbMarkRectAsModified(rfbScreen_1080, 0, 0, 1920, 1080);
            rfbMarkRectAsModified(rfbScreen_720, 0, 0, 1280, 720);
        }
        rfbProcessEvents(rfbScreen_1080, 40);
        rfbProcessEvents(rfbScreen_720, 40);
        //Sleep(1);
    }
    ...
}

void CaptureScreen(rfbScreenInfoPtr rfbScreen1, rfbScreenInfoPtr rfbScreen2)
{
    //capture screen to bmp, resize and copy data to rfbScreen->frameBuffer;
}
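
The same pattern should generalize beyond two screens: keep one rfbScreenInfoPtr per stream and service them all from a single loop in one process. Below is a rough sketch only; NUM_STREAMS, the fixed 1280x720 geometry, and the decode step are made up for illustration:

#include <rfb/rfb.h>
#include <stdlib.h>

#define NUM_STREAMS 4 /* hypothetical number of transport streams */

int serve_streams(int argc, char** argv)
{
    rfbScreenInfoPtr screens[NUM_STREAMS];
    int i;
    for (i = 0; i < NUM_STREAMS; i++)
    {
        screens[i] = rfbGetScreen(&argc, argv, 1280, 720, 8, 3, 4);
        screens[i]->frameBuffer = (char*)malloc(1280 * 720 * 4);
        screens[i]->port = 5900 + i; /* one port per server (5900+x) */
        screens[i]->alwaysShared = 1;
        rfbInitServer(screens[i]);
    }
    for (;;)
    {
        for (i = 0; i < NUM_STREAMS; i++)
        {
            /* decode the next frame of stream i into
               screens[i]->frameBuffer here */
            rfbMarkRectAsModified(screens[i], 0, 0, 1280, 720);
            rfbProcessEvents(screens[i], 1000); /* timeout in microseconds */
        }
    }
    return 0;
}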

SDLNet Networking Not Working

I am working on a game written in C using SDL. Given that it already uses SDL, SDL_image, and SDL_ttf, I decided to add SDL_mixer and SDL_net to my engine. Getting SDL_mixer set up and working was very easy, but I am having a lot of trouble with SDL_net.
To test it I created a very simple application with the following rules:
Run without arguments: act as a TCP server on port 9999
Run with an argument: try to connect to the server at the given IP address on port 9999
Here are some of the key lines of the program (I'm not going to post my whole event-driven SDL engine because it's too long):
char *host = NULL;
if (argc > 1) host = argv[1];
and...
IPaddress ip;
TCPsocket server = NULL;
TCPsocket conn = NULL;

if (host) { /* client mode */
    if (SDLNet_ResolveHost(&ip, host, port) < 0)
        return NULL; // this is actually inside an engine method
    if (!(conn = SDLNet_TCP_Open(&ip)))
        return NULL;
} else { /* server mode */
    if (SDLNet_ResolveHost(&ip, NULL, port) < 0)
        return NULL;
    if (!(server = SDLNet_TCP_Open(&ip)))
        return NULL;
}
and... inside the event loop
if (server) {
    if (!conn)
        conn = SDLNet_TCP_Accept(server);
}
if (conn) {
    void *buf = malloc(size); // server, conn, size are actually members of a weird struct
    while (SDLNet_TCP_Recv(conn, buf, size))
        onReceive(buf); // my engine uses a callback system to handle things
    free(buf);
}
The program seems to start up just fine. However, when I run it in client mode, trying to connect to my home computer (which is on a different IP) from my laptop, the call to SDLNet_TCP_Open blocks the program for a while (5-10 seconds) and then returns NULL. Can anybody see what I did wrong? Should I post more of the code? Let me know.
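
To rule out the engine, a standalone round trip along these lines may help; this is a sketch built only from documented SDL_net calls, and it prints SDLNet_GetError(), which the engine code above discards:

#include <stdio.h>
#include "SDL_net.h"

int main(int argc, char **argv)
{
    IPaddress ip;
    TCPsocket sock;
    if (SDLNet_Init() < 0) {
        fprintf(stderr, "SDLNet_Init: %s\n", SDLNet_GetError());
        return 1;
    }
    if (argc > 1) { /* client: connect and send a few bytes */
        if (SDLNet_ResolveHost(&ip, argv[1], 9999) < 0 ||
            !(sock = SDLNet_TCP_Open(&ip))) {
            fprintf(stderr, "client: %s\n", SDLNet_GetError());
            return 1;
        }
        SDLNet_TCP_Send(sock, "ping", 5);
    } else { /* server: accept one connection and print what arrives */
        TCPsocket server;
        char buf[16] = {0};
        if (SDLNet_ResolveHost(&ip, NULL, 9999) < 0 ||
            !(server = SDLNet_TCP_Open(&ip))) {
            fprintf(stderr, "server: %s\n", SDLNet_GetError());
            return 1;
        }
        while (!(sock = SDLNet_TCP_Accept(server)))
            SDL_Delay(100); /* SDLNet_TCP_Accept does not block */
        if (SDLNet_TCP_Recv(sock, buf, sizeof(buf) - 1) > 0)
            printf("got: %s\n", buf);
        SDLNet_TCP_Close(server);
    }
    SDLNet_TCP_Close(sock);
    SDLNet_Quit();
    return 0;
}

If this minimal pair also stalls and then fails, the usual suspect is a firewall or router blocking port 9999 rather than the code itself.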

Arduino Serial.println() outputs a blank line if not in loop()

I'm attempting to write a function that will pull text from different sources (Ethernet client/Serial/etc.) into a single line, then compare the lines and run other functions based on them. Simple enough.
And while this works, I am having issues when trying to call a simple Serial.println() from any function OTHER than loop().
So far I have around 140 lines of code, but here's a trimmed-down version of the portion that's causing me problems:
boolean fileTerm;

void setup() {
    Serial.begin(9600); // assumed from the full sketch; required for Serial I/O
    fileTerm = false;
}

void loop() {
    char character;
    String content = "";
    while (Serial.available()) {
        character = Serial.read();
        content.concat(character);
        delay(1);
    }
    if (content != "") {
        Serial.println("> " + content);
        /** Error from Serial command string.
         * 0 = No error
         * 1 = Invalid command
         */
        int err = testInput(content);
    }
}

int testInput(String content) {
    if (content == "term") {
        fileTerm = true;
        Serial.println("Starting Terminal Mode");
        return 0;
    }
    if (content == "exit" && fileTerm == true) {
        fileTerm = false;
        Serial.println("Exiting Terminal Mode");
        return 0;
    }
    return 1;
}
(full source at http://pastebin.com/prEuBaRJ)
So the point is to catch the "term" command and enter some sort of filesystem terminal mode (eventually to access and manipulate files on the SD card). The "exit" command will leave the terminal mode.
However, when I actually compile and type these commands (among others) into the Serial monitor, I see:
> hello
> term
> test for index.html
> exit
> test
> foo
> etc...
I figure the function is catching those reserved terms and actually processing them properly, but for whatever reason it is not sending the desired responses over the Serial bus.
Just for the sake of proper syntax, I am also declaring the testInput() function in a separate header, though I doubt this has any bearing on whether this particular error occurs.
Any explainable reason for this?
Thanks.
Model: Arduino Uno R3; IDE version 1.0.4, though this behavior also happened on v1.0.5 in some instances.
It is kind of guessable how you ended up putting delay(1) in your code: it was a workaround for a bug, but you didn't solve the bug properly. What you probably saw was that your code was too eager to process the command, before you were done typing it. So you slowed it down.
But that wasn't the right fix. What you really want to do is wait until the entire command has been typed, that is, until you press the Enter key on your keyboard.
Which is the bug in your code right now: the content variable doesn't just contain "term", it also contains the character(s) generated by your terminal's Enter key. Which is why you don't get a match.
So fix your code: add a test to check that you got the Enter key character, and only then process the command.
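
A minimal version of that fix might look like this (a sketch; it assumes the Serial monitor is set to send a newline, and String::trim() also strips a stray carriage return):

void loop() {
    static String content = "";
    while (Serial.available()) {
        char character = Serial.read();
        if (character == '\n') { // Enter key: the command is complete
            content.trim();      // drop a trailing '\r' if the terminal sends CR+LF
            if (content != "") {
                Serial.println("> " + content);
                testInput(content);
            }
            content = "";
        } else {
            content.concat(character);
        }
    }
}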

CreateDesktop() with Vista and UAC on (C, Windows)

I asked this in CreateDesktop() with Vista UAC (C Windows).
I set a bounty, but in trying to vote down the only answer, the "accept" button was pressed by mistake (I've been awake for more than 48 hours), so I am asking it again.
I'm using CreateDesktop() to create a temporary desktop where an application will run, perform a cleanup action (while remaining out of the way), and terminate. I close that desktop once the application is gone. Everything is fine when using Windows XP and even Vista. The problem arises when you enable the (annoying) UAC.
Everything is OK when you create the desktop, but when you call CreateProcess() to open a program on that desktop, the opened application crashes with an exception in user32.dll.
I've been reading a lot about the different desktops and layers in Windows and the memory restrictions. Most of the programs I open (as test scenarios) are OK, but a few (like IE, Notepad, Calc, and my own application) cause the crash.
Does anyone have any idea why this happens on Vista with UAC, or more specifically for those particular programs, and how to fix it?
Does anyone have a good, solid example of how to create a desktop and open an application there without switching to it, under Vista with UAC on?
Code is appreciated.
Thanks
The code used is:
SECURITY_ATTRIBUTES sa;
HDESK dOld;
HDESK dNew;
BOOL switchdesk, switchdesk2, closedesk;
int AppPid;

sa.bInheritHandle = TRUE;
sa.lpSecurityDescriptor = NULL;
sa.nLength = sizeof(SECURITY_ATTRIBUTES);

//Get handle to current desktop
dOld = OpenDesktopA("default", 0, TRUE, DESKTOP_SWITCHDESKTOP |
                                        DESKTOP_WRITEOBJECTS |
                                        DESKTOP_READOBJECTS |
                                        DESKTOP_ENUMERATE |
                                        DESKTOP_CREATEWINDOW |
                                        DESKTOP_CREATEMENU);
if (!dOld)
{
    printf("Failed to get current desktop handle !!\n\n");
    return 0;
}

//Make a new desktop
dNew = CreateDesktopA("kaka", 0, 0, 0, DESKTOP_SWITCHDESKTOP |
                                       DESKTOP_WRITEOBJECTS |
                                       DESKTOP_READOBJECTS |
                                       DESKTOP_ENUMERATE |
                                       DESKTOP_CREATEWINDOW |
                                       DESKTOP_CREATEMENU, &sa);
if (!dNew)
{
    printf("Failed to create new desktop !!\n\n");
    return 0;
}

AppPid = PerformOpenApp(SomeAppPath);
if (AppPid == 0)
{
    printf("failed to open app, err = %d\n", GetLastError());
}
else
{
    printf("App pid = %d\n", AppPid);
}

closedesk = CloseDesktop(dNew);
if (!closedesk)
{
    printf("Failed to close new desktop !!\n\n");
    return 0;
}
return 0;
The correct solution is given in a short comment by ChristianWimmer above:
The desktop must have a security descriptor that allows access to lower integrity level like IE has. Otherwise the GUI cannot access the desktop. – ChristianWimmer Jul 22 '10 at 17:00
Since the answer is a little bit hidden and there's no source code example, let me state it clearly here:
If IE runs in protected mode, the browser tabs are created as low-integrity processes. A low-integrity tab process will fail to initialize if the desktop does not have a low-integrity mandatory label.
As a consequence, the main IE process terminates, too. An interesting observation: if you start IE with a command-line URL from the secure zone, IE will start successfully, because protected mode is disabled by default for the secure zone.
I checked the integrity level of the default desktop, and indeed I was able to verify that the default desktop has a low-integrity label! So the easiest solution to the problem is to (1) create the new desktop, (2) get the mandatory label from the default desktop, and (3) copy it to the new desktop. For (2) and (3), you can use the following code:
PACL pSacl;
PSECURITY_DESCRIPTOR pSecurityDescriptor;
DWORD dwResult;

// (2) Read the mandatory label (stored in the SACL) from the default desktop.
dwResult = GetSecurityInfo(hDefaultDesktop, SE_WINDOW_OBJECT,
                           LABEL_SECURITY_INFORMATION, NULL, NULL, NULL,
                           &pSacl, &pSecurityDescriptor);
if (dwResult == ERROR_SUCCESS) {
    if (pSacl != NULL) {
        // (3) Copy it onto the new desktop.
        dwResult = SetSecurityInfo(hNewDesktop, SE_WINDOW_OBJECT,
                                   LABEL_SECURITY_INFORMATION, NULL, NULL, NULL, pSacl);
        if (dwResult != ERROR_SUCCESS)
            _tprintf(_T("SetSecurityInfo(hNewDesktop) failed, error = %d"), dwResult);
    }
    LocalFree(pSecurityDescriptor);
} else {
    _tprintf(_T("GetSecurityInfo(hDefaultDesktop) failed, error = %d"), dwResult);
}
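
For completeness, the two desktop handles used above might be obtained roughly as follows (a sketch; READ_CONTROL is required to read the label and WRITE_OWNER to set it):

HDESK hDefaultDesktop = OpenDesktop(_T("Default"), 0, FALSE, READ_CONTROL);
HDESK hNewDesktop = CreateDesktop(_T("kaka"), NULL, NULL, 0,
                                  DESKTOP_CREATEWINDOW | READ_CONTROL | WRITE_OWNER,
                                  NULL);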
@ChristianWimmer: Thanks for providing the hint to the correct solution. This saved me a lot of time!!
You appear to have come across a bug in IE as it interacts with UAC. If protected mode is set to on you cannot run IE as an ordinary user in any desktop except the default one. In order to run IE in an alternate desktop you must be running as administrator or have protected mode set to off. This is true for Vista, W2K8 and Win7.
As to the other programs that you cannot run, unfortunately I can't confirm anything. I tried upwards of thirty different programs including notepad, calc, all the office apps, visual studio 2005, 2008 and 2010, MSDN help and a number of others and all worked as expected with the noted exception of IE. Is there something truly unusual about your app that might make it behave in an unexpected manner?
One note - if you attempt to run an application like this that needs elevation (such as regedit, etc.) it will fail in CreateProcess with the last error set to ERROR_ELEVATION_REQUIRED.
For your reference, in case I'm doing something different from you, the code I used is:
#ifndef _WIN32_WINNT         // Specifies that the minimum required platform is Windows Vista.
#define _WIN32_WINNT 0x0600  // Change this to the appropriate value to target other versions of Windows.
#endif

#include <stdio.h>
#include <tchar.h>
#include "windows.h"

HANDLE PerformOpenApp(TCHAR* appPath);

int _tmain(int argc, _TCHAR* argv[])
{
    HDESK dNew;
    BOOL closedesk;
    HANDLE hApp;

    //Make a new desktop
    dNew = CreateDesktop(_T("kaka"), 0, 0, 0, DESKTOP_SWITCHDESKTOP |
                                              DESKTOP_WRITEOBJECTS |
                                              DESKTOP_READOBJECTS |
                                              DESKTOP_ENUMERATE |
                                              DESKTOP_CREATEWINDOW |
                                              DESKTOP_CREATEMENU, NULL);
    if (!dNew)
    {
        _tprintf(_T("Failed to create new desktop !!\n\n"));
        return 0;
    }

    TCHAR path[MAX_PATH];
    _putts(_T("Enter the path of a program to run in the new desktop:\n"));
    _getts(path);
    while (_tcslen(path) > 0)
    {
        hApp = PerformOpenApp(path);
        if (hApp == 0)
        {
            _tprintf(_T("Failed to open app, err = %d\n"), GetLastError());
        }
        else
        {
            _tprintf(_T("App pid = %d\n"), GetProcessId(hApp));
            _putts(_T("Press any key to close the app.\n"));
            _gettchar();
            TerminateProcess(hApp, 0);
            CloseHandle(hApp);
        }
        _putts(_T("Enter the path of a program to run in the new desktop:\n"));
        _getts(path);
    }

    closedesk = CloseDesktop(dNew);
    if (!closedesk)
    {
        _tprintf(_T("Failed to close new desktop !!\n\n"));
        return 0;
    }
    return 0;
}

HANDLE PerformOpenApp(TCHAR* appPath)
{
    STARTUPINFO si = {0};
    PROCESS_INFORMATION pi = {0};   // zero-init so hProcess is NULL if CreateProcess fails
    si.cb = sizeof(si);
    si.lpDesktop = _T("kaka");      // launch onto the new desktop by name
    BOOL retVal = CreateProcess(NULL, appPath, NULL, NULL, FALSE, CREATE_NEW_CONSOLE,
                                NULL, NULL, &si, &pi);
    if (retVal)
    {
        CloseHandle(pi.hThread);
    }
    return pi.hProcess;
}
