Can Silverlight (SLOOB) start a process even with full trust?

I have been tasked with writing an installer as a Silverlight out-of-browser application. I need to:
1. get the version of a local EXE
2. check a web service to see that it is the most recent version
3. download a zip if not
4. unpack the zip
5. overwrite the old EXE
6. start the EXE
This installer app is written in .NET WinForms now, but the .NET Framework is an obstacle for people to download.
The recommended solution is to use a SLOOB; however, I am not sure how to assign full trust. If I assign full trust, can I start a process?
Thanks

Looking into this, I suspect you're going to have to create the process using WMI through the COM interface. At the end of the day, that makes this a very difficult option and very subject to failure due to a host of reasons (WMI being disabled or secured, user won't give full trust, etc.) I suspect you would be much better off creating a .msi deployment package or something similar that was able to go out and download the framework, if necessary. There are a lot of deployment models available, almost all of which feel superior to this one.
That said, if you're going to do this:
To get the COM object, you're going to want to use the AutomationFactory.CreateObject(...) API. Tim Heuer provides a sample here.
To actually do the WMI scripting, you're going to want to create the WbemScripting.SWbemLocator object as the root. From there, use the ConnectServer method to get a wmi service on the named machine. You can then interrogate the Win32_Process module to create new processes.
Edit: I spent a little time working on this and, even on my local machine as Admin I'm running into security problems. The correct code would be something similar to:
// Get the WMI scripting locator via COM automation (requires elevated trust).
dynamic locatorService = AutomationFactory.CreateObject("WbemScripting.SWbemLocator");
// ConnectServer takes a machine name and a namespace; the moniker-string
// form ("winmgmts:{impersonationLevel=...}") belongs to GetObject, not here.
dynamic wmiService = locatorService.ConnectServer(".", @"root\cimv2");
dynamic process = wmiService.Get("Win32_Process");
dynamic createParameters = process.Methods_["Create"].InParameters.SpawnInstance_();
createParameters.CommandLine = "cmd.exe";
wmiService.ExecMethod("Win32_Process", "Create", createParameters);

Silverlight 4 will have support for something like this: http://timheuer.com/blog/archive/2010/03/15/whats-new-in-silverlight-4-rc-mix10.aspx#sllauncher

Related

Work with Database using Spock and Geb

I hope someone has already faced the issue of verifying that an application shows correct data from a database. I reviewed how Groovy uses SQL, but I have no idea where and how I should do that. I'm just starting to use Gradle+Spock+Geb for testing an application. I have a few files where I describe a couple of pages from the application, a couple of modules, and a file with the Spock specification. Where and how do I connect to the Oracle DB, run SQL, and compare the query results with the data the application shows?
P.S. I write everything in Notepad++ and launch from the command line with 'gradlew firefoxTest'. Is there a more comfortable way to work with Gradle+Spock+Geb?
Thanks in advance.
Because there are no other answers, I wanted to provide a solution someone at my company thought of. This assumes you already have a project that uses some sort of JDBC; in our case it is JDBI.
The idea is to extend ClassLoader and then use that to directly access the data access object class via the JVM. That idea should work.
I have not tested it because it doesn't completely fit our use case. I'll admit that this does not completely apply to your use case either, but technically you could just run the jar of an existing project which can access the database.

Issue With DAO 3.6 on VB6 database

I am currently in the process of trying to launch a database application that has a VB6 front end connected to an Access 2000 database. On certain computers we are experiencing a problem where the data being pulled from the database does not show up, or does not show up correctly.
The computers that work seem to have a dao360.dll with the same modified date in both system32 and Common Files\Microsoft Shared\DAO, while the ones that are not working do not have the same modified date.
Is this what's causing the error? How can I correct this? Or is it something else that is happening?
There shouldn't be two copies of the DLL on the system. It sounds like a poorly designed install of some application had been previously done on these systems. There is no telling what the full extent of this has been.
Packaging as an isolated application can insulate your programs from these kinds of bad installs that create DLL Hell. Sadly MDAC/DAC and related components are very difficult to isolate.
This is another reason to have moved to ADO back in 1998, if not in the time since then. While you can't isolate the ADO-related parts of MDAC/DAC any more than you can DAO, those libraries are now shipped as part of Windows. You don't need to deploy them and they are protected from bad installers by the increasingly better system file protection mechanisms in Windows.
However, providing specific assistance will probably require a more specific and detailed description of what is going on than "does not show up or does not show up correctly."
I'd create a minimal test case using DAO to begin exploring where (and what) the problems really are. To begin with, perhaps just a simple query displaying the returned rowset without data binding.
I suggest installing the latest version of MDAC and Jet. While Jet used to be part of MDAC, I'm pretty sure they dropped it into its own install/update/service pack at this point. Perhaps start here: http://support.microsoft.com/kb/239114

How to determine at runtime if I am connected to production database?

OK, so I did the dumb thing and released production code (C#, VS2010) that targeted our development database (SQL Server 2008 R2). Luckily we are not using the production database yet so I didn't have the pain of trying to recover and synchronize everything...
But, I want to prevent this from happening again when it could be much more painful. My idea is to add a table I can query at startup and determine what database I am connected to by the value returned. Production would return "PROD" and dev and test would return other values, for example.
If it makes any difference, the application talks to a WCF service to access the database so I have endpoints in the config file, not actual connection strings.
Does this make sense? How have others addressed this problem?
Thanks,
Dave
The easiest way to solve this is to not have access to production accounts. Those are stored in the Machine.config file for our .NET applications. In non-.NET applications this is easily duplicated by having a config file in a common location, or (dare I say) a registry entry which holds the account information.
Most of our servers are accessed through aliases too, so no one really needs to change the connection string from environment to environment. Just grab the user from the config and the server alias in the hosts file points you to the correct server. This also removes the headache from us having to update all our config files when we switch db instances (change hardware etc.)
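For illustration, here is what that aliasing scheme might look like in a hosts file; the alias and addresses are hypothetical. The connection string always names the alias, and each environment's hosts file decides where it points:
# Hypothetical hosts entries: "appdb" is the alias the connection string uses.
# On a development machine:
10.1.2.3    appdb
# On a production machine:
10.20.30.40 appdb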
So even with the ClickOnce deployment and the endpoints, you can publish a new endpoint URI in a machine config on the end user's desktop (I'm assuming this is an internal application), and then reference that in the code.
If you absolutely can't do this because it would be a lot of work (the last place I worked had 2,000 call center people, so this push was a lot more difficult, but still possible), you can always set up an automated build server which modifies the app.config file for you as the last step of building the application. You then ALWAYS publish the compiled code from the automated build server. Never make a change like this in the app.config a manual step in the developer's process; that will always lead to problems at some point.
Now if none of this works, your final option (I've done this one too), which I hated, but it worked, is to look up the value off of a mapped drive. Essentially, everyone in the company has a mapped drive, say R:. This is where you have your production configuration files, etc. The production account people map one drive location with the production values, and the devs etc. map another with the development values. I hate this option compared to the others, but it works, and it can save you in a pinch when the others become tedious and difficult (due to, say, office politics or setting up a build server).
I'm assuming your production server has a different name than your development server, so you could simply SELECT @@SERVERNAME AS ServerName.
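For illustration, a minimal sketch of a startup guard built on that query, run wherever you do have a connection string (e.g., inside the WCF service); the class, method, and expected-server value are all hypothetical, and you would pull the expected name from per-environment config:
// Sketch: compare @@SERVERNAME against the server this build expects,
// and fail fast on a mismatch.
using System;
using System.Data.SqlClient;

public static class EnvironmentGuard
{
    public static void AssertExpectedServer(string connectionString, string expectedServer)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("SELECT @@SERVERNAME", connection))
        {
            connection.Open();
            string actual = (string)command.ExecuteScalar();
            if (!string.Equals(actual, expectedServer, StringComparison.OrdinalIgnoreCase))
            {
                throw new InvalidOperationException(string.Format(
                    "Connected to '{0}' but this build expects '{1}'.", actual, expectedServer));
            }
        }
    }
}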
Not sure if this answer helps you in an assumed .NET environment, but within a *nix/PHP environment, this is how I handle the same situation.
OK, so I did the dumb thing and released production code
There are times when some app behavior is environment dependent, as you alluded to. In order to provide the ability to check between development and production environments, I added the following line to the global /etc/profile.d/custom.sh config (CentOS):
SERVICE_ENV=dev
And in code I have a wrapper method which grabs an environment variable by name and localizes its value, making it accessible to my application code. Below is a snippet demonstrating how to check the current environment and react accordingly (in PHP):
public function __call($method, $params)
{
    // Reduce chatter on production envs:
    // only display debug messages if an override told us to
    if (($method === 'debug') &&
        (CoreLib_Api_Environment_Package::getValue(CoreLib_Api_Environment::VAR_LABEL_SERVICE) === CoreLib_Api_Environment::PROD) &&
        (!in_array(CoreLib_Api_Log::DEBUG_ON_PROD_OVERRIDE, $params))) {
        return;
    }
}
Remember, you don't want to pepper your application logic with environment checks, save for a few extreme use cases as demonstrated in the snippet. Rather, you should be controlling access to your production databases using DNS. For example, within your development environment the db hostname mydatabase-db would resolve to a local server instead of your actual production server. When you push your code to the production environment, DNS will correctly resolve the hostname, so your code should "just work" without any environment checks.
After hours of wading through textbooks and tutorials on MSBuild and app.config manipulation, I stumbled across something called SlowCheetah - XML Transforms http://visualstudiogallery.msdn.microsoft.com/69023d00-a4f9-4a34-a6cd-7e854ba318b5 that did what I needed it to do less than an hour after I first stumbled across it. Definitely recommended! From the article:
This package enables you to transform your app.config or any other XML file based on the build configuration. It also adds additional tooling to help you create XML transforms.
This package was created by Sayed Ibrahim Hashimi, Chuck England and Bill Heibert, the same Hashimi who authored THE book on MSBuild. If you're looking for a simple, ubiquitous way to transform your app.config, web.config or any other XML file based on the build configuration, look no further -- this VS package will do the job.
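For a flavor of what those transforms look like, here is a minimal sketch of an XDT transform file (e.g. app.Release.config) that swaps an endpoint address for Release builds; the setting name and URL are hypothetical:
<!-- Replaces the ServiceEndpoint value when building in Release configuration. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="ServiceEndpoint" value="https://prod.example.com/MyService"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>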
Yeah I know I answered my own question but I already gave points to the answer that eventually pointed me to the real answer. Now I need to go back and edit the question based on my new understanding of the problem...
Dave
I'm assuming your production server has a different IP address. You can simply use:
SELECT CONNECTIONPROPERTY('local_net_address') AS local_net_address

I need help understanding Silverlight 4 security

Does anyone else think Silverlight 4 security is a bit screwball?
Look at the following scenario:
Silverlight, when set up as a trusted app running in out-of-browser mode, allows you to browse for a file using the file open dialog.
You require the full path of the file to open it from any COM automation, for example Excel or Word, but this could be anything.
It is impossible to get the full path of the file from the dialog because of security restrictions.
You can, however, using the COM FileSystemObject, do whatever you want to the user's file system, including creating folders, and moving and deleting files.
So in other words, why all the fuss about security in Silverlight, which actually hinders real business use cases, when it's possible to access any file anyway using COM?
To say it another way: if a user runs a malicious Silverlight app, it's unlikely they'll say "oh well, it was COM at fault." COM was, after all, being called by a Silverlight app.
Here is what I mean....
User browses for file - c:\myFile.xls
Silverlight prevents you from getting the path (for security reasons)
Silverlight only lets you work with my documents
Using COM you can do whatever you want to the file system in the background anyway, including copying that file to My Documents, if only you knew the name! Besides that, you can potentially wipe any file if it's not in use.
In my opinion the Silverlight security model is flawed: either they should have given developers full trust and allowed us to run apps as if they were running locally,
or
not allowed Silverlight to access COM.
Is it just me, or can anyone else see that it's a bad implementation?
This triggers security alerts:
OpenFileDialog flDialog = new OpenFileDialog();
FileInfo fs = flDialog.File;
string fileName = fs.FullName;
This doesn't
dynamic fileSystem = AutomationFactory.CreateObject("Scripting.FileSystemObject");
fileSystem.CopyFile(anyFileName,anyDestination);
I don't agree with your point of view. The fact that you can do pretty much anything that an installed COM object will allow you to do is not a reason to modify a whole bunch of existing Silverlight code to allow you to do those same things.
Why? Because in the process of opening up that code there is also an increased chance that, in some unintended way, that same code could get run when the Silverlight component is not running in trusted mode. If that were to happen even once, the media would be all over it in a shot and Silverlight's reputation would, probably unfairly, be in tatters.
Personally I'm quite happy with the very cautious approach to security that MS are taking with Silverlight.
Some Silverlight controls, such as the OpenFileDialog, work in both trusted and untrusted mode. These controls were ported from previous versions of Silverlight, where the new levels of elevated trust were not a consideration.
Thank you to Anthony for pointing this out.
Developers need to be aware of the definition of trust we are discussing here. Running a Silverlight application in full trust with elevated privileges IS NOT the same thing as running a local Windows application. It is also far more restrictive than ActiveX.
It's possible that the trust provided in Silverlight suits your particular business requirement. It is, however, likely that there are scenarios where you will find Silverlight too restrictive; it's best to do your research upfront and run code samples to ensure you can do the critical stuff before jumping in head over heels.
Microsoft guarantees that the public Silverlight API has the same behavior on both the Windows and Mac OS platforms. So the functionality is in many ways limited by the common denominator and technical feasibility. Please treat COM interop as a specific case addressing only the Windows platform, and only in full-trust mode; it is not going to work the same on other platforms. So the security restrictions are valid, as they keep the API the same for both worlds.
I agree with the original poster. I think it's a bad implementation. We are given a built-in dialog to browse for a file, including the directory structure. We can select a file and get a FileInfo object, but security prevents us from getting the FullName (directory and file name). Why? How does that improve security? What's the point of the open file dialog to begin with?
And as the original poster mentioned, with those dynamic objects, we can modify the local file system... which seems like the possible security hole.
All I want to do is read some data from an Excel file... a way for my users to import Excel data into the application, and the file could be saved anywhere on their machine. These are sales reps using an Excel file to record orders locally until they can get to an internet connection. Who knows where they all save that file... so I'm not going to suggest we tell them all to store it in the same place in My Documents. I'll get laughed at if I suggest that.
It seems like it should be incredibly simple. But that "security measure" that keeps us from getting the directory the user chose from the built-in open file dialog means we can't use the dialog for the purpose it was created for.
So what's the alternative? Is there a way to pick files using those dynamic objects? Do I have to write my own file-selection tool using the objects that can modify the file system? Since I don't need anything but to read the file, and because I read somewhere that we do have access to the file stream... is there a way to use the file stream to open the file for reading using the AutomationFactory?
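For what it's worth, here is a minimal sketch of reading a cell through Excel COM automation from an elevated-trust out-of-browser app. It assumes Excel is installed on the client; the file path and cell address are hypothetical, and the path still has to come from somewhere, which is exactly the restriction discussed above:
// Requires elevated trust; AutomationFactory lives in
// System.Runtime.InteropServices.Automation, and dynamic needs Microsoft.CSharp.
dynamic excel = AutomationFactory.CreateObject("Excel.Application");
dynamic workbook = excel.Workbooks.Open(@"C:\orders.xls");
object firstCell = workbook.Worksheets[1].Cells[1, 1].Value;  // read a single cell
workbook.Close(false);  // close without saving
excel.Quit();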

Logging when application is running as XBAP?

Has anybody here actually implemented a logging strategy for an application running as an XBAP? Any suggestions (as code) on how to implement a simple strategy, based on your experience?
In desktop mode my app logs to a rolling log file using the integrated log4net implementation, but as an XBAP I can't log that way because it stores the file in the cache (an app2.0-or-something folder). So I check whether the app is browser-hosted and don't log in that case, since I don't even know if it ever logs (why, when it's the same codebase?)... if only there were a way to push this log to a service, like a web service, or post errors to some endpoint...
My xbap is full trust intranet mode.
I would log to isolated storage and provide a way for users to submit the log back to the server using either a simple PUT/POST with HttpWebRequest or, if you're feeling frisky, via a WCF service.
Keep in mind an XBAP only gets 512 KB of isolated storage, so you may actually want to push those event logs back to the server automatically. Also remember that the XBAP can only speak back to its origin server, so the service that accepts the log files must run under the same domain.
Here's some quick sample code that shows how to set up a TextWriterTraceListener on top of an IsolatedStorageFileStream, at which point you can just use the standard Trace.Write[XXX] methods to do your logging.
// Requires System.Diagnostics, System.IO and System.IO.IsolatedStorage.
IsolatedStorageFileStream traceFileStream = new IsolatedStorageFileStream("Trace.log", FileMode.OpenOrCreate, FileAccess.Write);
TraceListener traceListener = new TextWriterTraceListener(traceFileStream);
Trace.Listeners.Add(traceListener);
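And a minimal sketch of pushing that log back with a plain HttpWebRequest POST; the endpoint URL is hypothetical, and you'd flush and close the trace listener before reading the file back:
// Read the trace log out of isolated storage and POST it to the origin server.
using System.IO;
using System.IO.IsolatedStorage;
using System.Net;

public static void SubmitLog()
{
    byte[] logBytes;
    using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
    using (Stream logStream = new IsolatedStorageFileStream("Trace.log", FileMode.Open, FileAccess.Read, store))
    using (MemoryStream buffer = new MemoryStream())
    {
        logStream.CopyTo(buffer);
        logBytes = buffer.ToArray();
    }

    // An XBAP may only call back to its origin server.
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://origin-server/logs/submit");
    request.Method = "POST";
    request.ContentType = "text/plain";
    using (Stream body = request.GetRequestStream())
    {
        body.Write(logBytes, 0, logBytes.Length);
    }
    request.GetResponse().Close();
}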
UPDATE
Here is a revised answer due to the revision you've made to your question with more details.
Since you mention you're using log4net in your desktop app, we can build upon that dependency you are already comfortable working with, as it is entirely possible to continue using log4net in the XBAP version as well. Log4net does not come with an implementation that solves this problem out of the box, but it is possible to write an implementation of a log4net IAppender which communicates with WCF.
I took a look at the implementation the other answerer linked to by Joachim Kerschbaumer (all credit due) and it looks like a solid implementation. My first concern was that, in a sample, someone might be logging back to the service on every event and perhaps synchronously, but the implementation actually has support for queuing up a certain number of events and sending them back to the server in batch form. Also, when it does send to the service, it does so using an async invocation of an Action delegate which means it will execute on a thread pool thread and not block the UI. Therefore I would say that implementation is quite solid.
Here are the steps I would take from here:
Download Joachim's WCF appender implementation
Add his projects to your solution.
Reference the WCFAppender project from your XBAP
Configure log4net to use the WCF appender. Now, there are several settings for this logger, so I suggest checking out his sample app's config; the most important ones, however, are QueueSize and FlushLevel. You should set QueueSize high enough that, based on how much you actually log, you won't be chattering with the WCF service too much. If you're just logging warnings/errors, you can probably set this to something low. If you're logging informational messages, you want to set this a little higher. As for FlushLevel, you should probably just set it to ERROR, as this guarantees that, no matter how big the queue is at the time an error occurs, everything is flushed the moment the error is logged. (See the config sketch after this list.)
The sample appears to use LINQ2SQL to log to a custom DB inside of the WCF service. You will need to replace this implementation to log to whatever data source best suits your needs.
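For reference, a hedged sketch of what the log4net config wiring might look like; the appender type name here is a placeholder (check the sample's own config for the exact type and assembly), while QueueSize and FlushLevel are the settings discussed above:
<!-- Placeholder type name; the param values are illustrative only. -->
<log4net>
  <appender name="WcfAppender" type="WcfLogAppender.WcfAppender, WcfLogAppender">
    <param name="QueueSize" value="50" />
    <param name="FlushLevel" value="ERROR" />
  </appender>
  <root>
    <level value="WARN" />
    <appender-ref ref="WcfAppender" />
  </root>
</log4net>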
Now, Joachim's sample is written in a way that's intended to be very easy for someone to download, run and understand very quickly. I would definitely change a couple things about it if I were putting it into a production solution:
Separate the WCF contracts into a separate library which you can share between the client and the server. This would allow you to stop using a Visual Studio service reference in the WCFAppender library and just reference the same contract library for the data types. Likewise, since the contracts would no longer be in the service itself, you would reference the contract library from the service. (A sketch of such a contract library follows this list.)
I don't know that wsHttpBinding is really necessary here. It comes with a couple more knobs and switches than one probably needs for something as simple as this. I would probably go with the simpler basicHttpBinding and if you wanted to make sure the log data was encrypted over the wire I would just make sure to use HTTPS.
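To make the first point concrete, here is a sketch of what the shared contract library might contain; every name is hypothetical:
// Both the appender project and the service reference this one assembly
// instead of relying on a generated service reference.
using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class LogEntry
{
    [DataMember] public DateTime TimestampUtc { get; set; }
    [DataMember] public string Level { get; set; }
    [DataMember] public string Message { get; set; }
}

[ServiceContract]
public interface ILogService
{
    [OperationContract(IsOneWay = true)]
    void Submit(LogEntry[] entries);
}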
My approach has been to log to a remote service, keyed by a unique user ID or GUID. The overhead isn't very high with the usual async calls.
You can cache messages locally, too, either in RAM or in isolated storage -- perhaps as a backup in case the network isn't accessible.
Be sure to watch for duplicate events within a certain time window. You don't want to log 1,000 copies of the same Exception over a period of a few seconds.
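A minimal sketch of that duplicate-suppression window (the names and the ten-second window are arbitrary choices):
using System;
using System.Collections.Generic;

public static class DuplicateFilter
{
    private static readonly Dictionary<string, DateTime> LastSeen = new Dictionary<string, DateTime>();
    private static readonly TimeSpan Window = TimeSpan.FromSeconds(10);

    // Returns false when the same message text was already logged within the window.
    public static bool ShouldLog(string message)
    {
        lock (LastSeen)
        {
            DateTime now = DateTime.UtcNow;
            DateTime last;
            if (LastSeen.TryGetValue(message, out last) && now - last < Window)
            {
                return false;
            }
            LastSeen[message] = now;
            return true;
        }
    }
}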
Also, I like to log more than just errors. You can also log performance data, such as how long certain functions take to execute (particularly out-of-process calls), or more detailed data in response to the user explicitly entering into a "debug and report" mode. Checking for calls that take longer than a certain threshold is also useful to help catch regressions and preempt user complaints.
If you are running your XBAP under partial trust, you are only allowed to write to IsolatedStorage on the client machine. And it's just 512 KB, which you would probably want to use in a more valuable way (than for logging), like storing the user's preferences.
You are not allowed to do any Remoting stuff under partial trust either, so you can't use the log4net RemotingAppender.
Finally, under partial trust XBAP you have WebPermission to talk to the server of your app origin only. I would recommend using a WCF service, like described in this article. We use similar configuration in my current project and it works fine.
Then, basically, on the WCF server side you can do logging to any place appropriate: file, database, etc. You may also want to keep your log4net logging code and try to use one of the wcf log appenders available on the internets (this or this).
