Is there a way I can send a notification/message to another PC in C/C++? I'm thinking of something like net send, but I don't know if there is another way to send a notification/message. I created an application which will run on every PC, and I want it to send a notification to my PC when it has finished running. I don't know if there is a solution for this, but if there is, could someone tell me how to do it?
Thanks,
kampi
How about using sockets?
http://www.alhem.net/Sockets/tutorial/
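For illustration, a minimal Winsock sketch of the sending side (the hostname "my-pc" and port "5555" are placeholders I made up, not anything from the question): each worker PC connects to a small listener running on your machine and sends one line when the job is done.

```c
// Minimal sketch of the "notify my PC over a socket" idea (Winsock 2).
// Assumes a listener is already accepting connections on my-pc:5555.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <string.h>
#pragma comment(lib, "ws2_32.lib")

int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    struct addrinfo hints = {0}, *result = NULL;
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_protocol = IPPROTO_TCP;

    // "my-pc" and "5555" are placeholders for your machine and a port you pick.
    if (getaddrinfo("my-pc", "5555", &hints, &result) != 0) {
        WSACleanup();
        return 1;
    }

    SOCKET s = socket(result->ai_family, result->ai_socktype, result->ai_protocol);
    if (s != INVALID_SOCKET &&
        connect(s, result->ai_addr, (int)result->ai_addrlen) == 0)
    {
        const char *msg = "job finished";
        send(s, msg, (int)strlen(msg), 0);
        closesocket(s);
    }

    freeaddrinfo(result);
    WSACleanup();
    return 0;
}
```

The listener on your PC would just accept(), recv() the line, and display it however you like.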
Start by learning about WCF. http://msdn.microsoft.com/en-us/netframework/aa663324.aspx
We ended up building a system for alerting all of our retail locations of emergency situations: a service that opens up a TCP channel using .NET Remoting. It just sits there and listens for notifications. Our command center has a program that can send out notifications to this service. The service is responsible for displaying the message.
The code is proprietary, so I can't share it, but that's the general idea. Remoting has been rolled into WCF, which is why I started by suggesting learning that.
It has been working very well for us for many years; it works fine on newer versions of Windows (unlike Net Send) and it's faster.
Edit - added
I hadn't heard of this until now, but you could also look into msg.exe. It looks easier.
http://www.appscout.com/2009/03/vistas_msgexe_replaces_net_sen.php
If you want something like "NET SEND", use mailslots!
Here's more info on MSDN: http://msdn.microsoft.com/en-us/library/aa365576.aspx
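As a rough illustration (the mailslot name and the machine name "MYPC" are placeholders, not from the question): the receiver on your PC creates the mailslot and blocks on ReadFile, while each worker PC opens the remote mailslot like a file and writes one message to it.

```c
// Receiver (runs on your PC): create the mailslot and wait for a message.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE slot = CreateMailslotA("\\\\.\\mailslot\\job_notify",
                                  0,                      // no maximum message size
                                  MAILSLOT_WAIT_FOREVER,  // block until a message arrives
                                  NULL);
    if (slot == INVALID_HANDLE_VALUE)
        return 1;

    char buf[512];
    DWORD read = 0;
    if (ReadFile(slot, buf, sizeof(buf) - 1, &read, NULL)) {
        buf[read] = '\0';
        printf("Notification: %s\n", buf);
    }
    CloseHandle(slot);
    return 0;
}
```

```c
// Sender (runs on the worker PCs): open the remote mailslot by machine name and write to it.
#include <windows.h>

int main(void)
{
    // "MYPC" is a placeholder for the receiving machine's name.
    HANDLE slot = CreateFileA("\\\\MYPC\\mailslot\\job_notify",
                              GENERIC_WRITE, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (slot == INVALID_HANDLE_VALUE)
        return 1;

    const char msg[] = "application finished";
    DWORD written = 0;
    WriteFile(slot, msg, sizeof(msg), &written, NULL);
    CloseHandle(slot);
    return 0;
}
```

The ability to address a remote machine by name in the mailslot path is what makes this feel like NET SEND.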
If you can't use net send, how about just creating a date-stamped temp file of some sort that your other PC looks for?
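Something like this, for example (the share path \\MYPC\dropbox is made up; any folder the other PC can see would do):

```c
// Tiny sketch of the "drop a date-stamped file" idea.
#include <windows.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    char path[MAX_PATH];
    snprintf(path, sizeof(path),
             "\\\\MYPC\\dropbox\\done_%04d%02d%02d_%02d%02d%02d.txt",
             t->tm_year + 1900, t->tm_mon + 1, t->tm_mday,
             t->tm_hour, t->tm_min, t->tm_sec);

    FILE *f = fopen(path, "w");   // the file's existence (and name) is the notification
    if (f) {
        fputs("finished\n", f);
        fclose(f);
    }
    return 0;
}
```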
Make your application a Growl client
Net Send is an option, but I think it will annoy the crap out of you, as it sends console toast to your computer, which pops up in front of the tasks you are working on. Personally, I would find that incredibly annoying.
If you created the application, you have the ability to include notification code. As an example, you can set up a service on your box and write the code to contact that service. On a windows machine, this can be a WCF service. You can also wrap this in a windows service if you want to fire up non-annoying toast.
I am not sure how to set up C to access a service, so another option might be to drop something in a folder and have a file watcher tell you. A bit kludgy, of course.
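If it helps, a minimal sketch of the watching side using the Win32 change-notification API (the folder C:\notify_drop is just a made-up example of wherever the other apps drop their files):

```c
// Watch a folder and wake up whenever a file is created, renamed, or deleted in it.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE change = FindFirstChangeNotificationA(
        "C:\\notify_drop",                 // placeholder folder the other apps write into
        FALSE,                             // don't watch subdirectories
        FILE_NOTIFY_CHANGE_FILE_NAME);     // fire on file create/rename/delete
    if (change == INVALID_HANDLE_VALUE)
        return 1;

    for (;;) {
        if (WaitForSingleObject(change, INFINITE) != WAIT_OBJECT_0)
            break;
        printf("Something changed in the drop folder - a job probably finished.\n");
        if (!FindNextChangeNotification(change))
            break;
    }
    FindCloseChangeNotification(change);
    return 0;
}
```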
I'm planning on doing an experiment where we will set up a Google Assistant or Alexa device and see how people interact with voice assistants in a certain environment. It's basically a Wizard of Oz experiment (https://en.wikipedia.org/wiki/Wizard_of_Oz_experiment). Is it possible to intercept the voice commands before they get passed to the Assistant or Alexa? This would help me decide whether I want to handle the user input myself or let Google/Alexa handle it.
Will you be using a purchased "original" device, or will you use, e.g., a Raspberry Pi and build it yourself?
For the former this won't be possible out of the box. However, I recently stumbled upon an article describing a new device that might help you: it allows you to "reprogram" the activation word for Alexa and Google Assistant. The article mentions that the device's hardware is a Raspberry Pi, so I guess you could build something similar yourself. That was also the first idea that came to mind.
I would imagine something like this:
On your Raspberry Pi you have a script (Python would probably be easiest) that listens for the wake word, e.g. "Alexa", and records the voice command that follows. However, Alexa itself is not running at this point, so it doesn't get triggered. Your script also contains the logic for deciding when to pass a command on to Alexa and what to do with it otherwise. When it decides that a command should be passed on, the script starts Alexa and replays the recording, thus triggering it the same way the user would have triggered it in the first place.
Another idea would be to use two microphones, one for your script and one for Alexa, with your script able to mute/unmute them.
Please take into account that these are just spontaneous ideas. It's completely possible that I've missed something and this wouldn't work. But until somebody who has done this before comes along, I'd give it a try!
Is there any solution for logging to a file other than using SharedObject?
FileReference works only in Adobe AIR. The File library isn't good because it opens a dialog box.
I want to write error logs to a file. Right now I'm using SharedObject, but that isn't really what SharedObject is meant for.
So if someone has any solutions, I would be glad to hear them.
Maybe someone will find this interesting: I solved this problem.
I wrote a simple WCF service which runs on the same computer, so I can send log messages to it through localhost, and the service writes the logs to a file.
The best part is that it doesn't require an internet connection. ;)
I have registered my Windows Phone 7 app as a Share Picker Extension. It works: my app is in the list of Share options, it gets launched, and I can load the chosen image. Okay, great.
But then things go wrong in my code. I would like to be able to debug the issues, but I can't seem to keep the debugger attached.
I cannot debug this in the simulator, since the Pictures app (and thus the Share Picker functionality) is not present in the simulator.
I cannot debug this on the phone because as soon as I pick my app from the Share list, the debugger detaches... right as my app is "launching" again.
Is it possible to attach the debugger to a running WP7 app? Is it possible to keep the debugger attached? Am I doing it wrong? Any suggestions, advice or guesses are welcome because I'm tearing my hair out.
When doing M+V hub integration (sorry, haven't done any Pictures hub integration yet) I initially used a crude debug technique (MessageBox.Show, etc., like Justin mentioned) to verify what was being passed in the NavigationEventArgs of OnNavigatedTo, and wrapped the whole method in a try..catch block to learn what was going on. I then refactored the code once I knew what to expect. (Remember OnNavigatedTo will be called when your app is launched normally too, so e won't be populated in the same way.)
When the app is launched from a/the hub it creates a new instance of the app and there is currently no way to connect to this for debugging while the main page is being navigated to.
Great question. I'm unsure if that's possible. As far as I know, there's no way to attach the debugger when the WP7 OS starts an app that wasn't launched by the debugger.
Photo Share Picker extensibility, Music + Videos hub extensibility and other OS extensibility points don't seem to play nicely with the VS debugger. Normally I resort to MessageBox.Show to debug any problems with WP7 OS integration.
1) Connect the Device
2) Turn off Zune
3) Start C:\Program Files\Microsoft SDKs\Windows Phone\v7.1\Tools\WPConnect\x86\WPConnect.exe
To properly debug your application that uses the Media Library, you'll need to use the Windows Phone Connect Tool (WPConnect.exe) as described on MSDN. Jaime has some additional tips on his blog.
Once you are connected, you should be able to debug your application. Fingers crossed anyway. If that doesn't help, I'll dig a bit further.
It's not so much about the WPConnect tool. The nature of your application means that it has to be closed while the user picks a photo; only after that is the data returned to the application.
You should read about the application execution model on Windows Phone 7. Also a good explanation is available here.
Initially, I would say that you should look at tombstoning (a good explanation here) but then again, the image returned will re-start the app and won't allow you to directly attach the debugger.
Yeah, looks like this is impossible...
All the answers above seem to be missing the point: I presume you're able to debug your app in "standalone" mode (when it's launched normally), but not when it's launched via the Share Picker Extension. Am I right? This is the wall I'm hitting... :-(
I thought the proper way would be to attach to the process once it's launched.
I tried Debug > Attach to Process, then selected Smart Device as the Transport and Windows Phone Device as the Qualifier... but in return I get the ugly "Unable to connect to 'Windows Phone Device'. Not implemented" message.
Bummer :-(
I want to make a project for my final year in college, and someone suggested that I make a remote desktop in C.
I know the basic socket functions for Windows in C, i.e. I know how to write an echo server in C, but I don't know what to do next. I searched the internet but couldn't find anything informative.
Could someone suggest how to approach this from this point - any tutorial or any source?
I think this is do-able. For a college project, you don't need to have something as complex and as full-featured as VNC. Even demonstrating simple keyboard and mouse control and screen feedback would be enough, in my opinion, and that's well within reach.
If you're doing everything from scratch and using Win32, you can get the remote screen using the regular "printscreen" examples found all around the internet.
http://www.codeproject.com/KB/cpp/Screen_Capture__Win32_.aspx has it, for one. You can then compress the image with a third-party library, or just send it raw; this wouldn't be very efficient but it would still be a viable demonstration.
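Just as a rough sketch of what those samples boil down to (this is not the CodeProject code itself), a bare-bones BitBlt grab of the desktop looks something like this:

```c
// Bare-bones Win32 screen grab: copy the desktop into a memory bitmap.
// Compressing/sending the pixels is up to you.
#include <windows.h>

HBITMAP CaptureScreen(void)
{
    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(NULL);                    // DC for the whole screen
    HDC memDC    = CreateCompatibleDC(screenDC);   // memory DC to copy into
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);

    HGDIOBJ old = SelectObject(memDC, bmp);
    BitBlt(memDC, 0, 0, w, h, screenDC, 0, 0, SRCCOPY);
    SelectObject(memDC, old);

    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return bmp;   // caller extracts pixels (e.g. with GetDIBits) and frees with DeleteObject
}
```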
Apart from capturing the screen data remotely and showing it in the local window, you'll need to listen for local window messages for mouse and keyboard events, send them to the remote host, and then play them back. http://msdn.microsoft.com/en-us/library/ms646310%28VS.85%29.aspx will probably do that for you.
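For the playback half, SendInput is the usual Win32 call (I believe that's what the MSDN link above covers); a rough sketch with made-up example values:

```c
// Replay a mouse move and a keystroke locally with SendInput.
// The coordinates and the 'A' key are just example values.
#include <windows.h>

void PlayBackInput(void)
{
    INPUT in[3] = {0};

    // Move the cursor to an absolute position (0..65535 maps to the whole screen).
    in[0].type       = INPUT_MOUSE;
    in[0].mi.dx      = 32768;   // roughly the middle of the screen horizontally
    in[0].mi.dy      = 32768;   // and vertically
    in[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;

    // Press and release the 'A' key.
    in[1].type   = INPUT_KEYBOARD;
    in[1].ki.wVk = 'A';

    in[2].type       = INPUT_KEYBOARD;
    in[2].ki.wVk     = 'A';
    in[2].ki.dwFlags = KEYEVENTF_KEYUP;

    SendInput(3, in, sizeof(INPUT));
}
```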
Check out TightVNC. TightVNC is a free remote control software package, and the source code is also available.
For sending the image of the screen I would probably use RTP. JRTPLIB is really handy for that.
And yes, as KevinDTimm says, an echo server is the very easiest part.
KevinDTimm may well be right; writing an RDP client would be a fairly significant undertaking. To give you some idea, the current spec, available at the top of this page, is 419 pages long and includes references to several additional documents for specific aspects of RDP like Audio Redirection and Clipboards.
I am using the Syndicated Client Experience (SCE) SDK. Has anyone had success with creating custom datafeeds for this? I want to be able to host the masterfeed and other feeds at a URL instead of compiling them in as embedded resources like the example does. For instance, the client application would gather its feeds from http://somesite/masterfeed.xml.
I believe this can be done, but I keep getting an AccessViolation exception during the debugging of the SCE client.
The application just happened to be writing to a bad memory space.