WCF File upload from Silverlight client

I am trying to upload files to my WCF service using streaming so that I can handle large files. All of this works fine using a normal client (like an ASP.NET page). In Silverlight, however, I get the following error:
Timeouts are not supported on this stream
I am uploading via a MemoryStream, and I assume the issue is that in Silverlight I am forced to call the async method instead of the synchronous one, and it is this that doesn't like a normal MemoryStream. I have tried to find some other stream to use, but it seems that either they are not supported in Silverlight (BufferedStream, NetworkStream) or they break the method (a generic Stream, which for some reason MUST be the only parameter of the method to be used). Am I missing something here? I originally was using a byte array, but there are too many size limitations there for what I need to allow to be uploaded.
I can insert my code here, but since everything works flawlessly with my ASP.NET test client I am assuming my bindings and code are fine.

There are three separate issues here:
1) Can you use the Stream type in your contract?
2) Can you get true streaming behavior on the client? (E.g. upload a 2 GB file without allocating 2 GB of memory anywhere in the stack - including the underlying HTTP stack.)
3) Can you get true streaming behavior on the server?
As far as I remember, the answers to #1 and #2 are "no" in Silverlight (though that may have changed in SL 4.0), so the best you can achieve is #3. For example, you can try a trick such as having a byte[]-based contract on the Silverlight side that results in the same XML projection as a Stream-based contract on the server side. Or, have byte[] on the client side and read from the Message class directly on the server side.
But my recollection about #1/#2 may be wrong...
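To illustrate the byte[]/Stream trick, here is a minimal sketch. The interface and operation names are made up for this example, and whether the two projections really line up depends on your wrapper element and namespace settings, so treat it as a starting point rather than a drop-in solution.

// Server-side contract: true streaming on the server (requires a streamed transferMode on the binding).
// Assumes: using System.IO; using System.ServiceModel;
[ServiceContract]
public interface IUploadService
{
    [OperationContract]
    void UploadFile(Stream fileData);
}

// Silverlight-side contract: same service and operation names, but byte[] instead of Stream,
// and the async pattern Silverlight requires. Both a Stream and a byte[] are emitted as
// base64 content on the wire, which is what makes the two projections comparable.
[ServiceContract(Name = "IUploadService")]
public interface IUploadServiceClient
{
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginUploadFile(byte[] fileData, AsyncCallback callback, object state);
    void EndUploadFile(IAsyncResult result);
}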

Related

pollEnrich from FTP

I'm trying to enrich using dynamically selected remote files on an FTP server using pollEnrich. The remote files must remain in place and can be selected again and again, so the endpoint has noop=true and idempotent=false. Everything seems to work fine until multiple requests start coming in that use the same remote file for enrichment, and this results in all but a few of the requests receiving a null body for the new exchange argument in the aggregation strategy. Here is the relevant part of the route, which has been modified slightly to post here:
.pollEnrich()
.simple("ftp://username:password#ftp.example.com/path/files?fileName=${header.sourceFilename}&passiveMode=true&noop=true&idempotent=false")
.timeout(0)
.cacheSize(-1)
.aggregationStrategy(myEnrichmentAggregationStrategy)
I switched to using file:// instead of ftp:// as a test and still experienced the same problems. I also tried different values for timeout and cacheSize, and enabled streamCaching since the body is an InputStream. I'm now thinking about implementing a custom read-lock mechanism (processStrategy), but it feels like a long-shot workaround. Has anyone else come across this problem and can shed some light on what's wrong?
I believe I've found a solution: setting the inProgressRepository property on the polling consumer to a dummy implementation of IdempotentRepository, which lets me disable the in-progress check for files.

How do I put and replace a file on a client machine in a Silverlight app?

I'm trying to make a Silverlight app which has a local SQLite file to search some data when the app goes offline. I found the following library, http://code.google.com/p/csharp-sqlite/, and it seems pretty nice.
So, what I want to know is: what is a good approach to placing a file on the client which can be replaced automatically when the data on the server gets updated at some point?
I've tried to put a file into a folder under the app, but I couldn't access the file by using csSQLite.sqlite3_open (this method is from the library above). Sorry, I'm pretty new to Silverlight, so my question might be very odd.
Thanks in advance,
yokyo
It doesn't look like this library has been specifically coded for Silverlight. Despite being a pure C# implementation, it's still likely to assume the full .NET API is available. This is not true in Silverlight.
Specifically, Silverlight cannot ordinarily access the local file system. The SQLite code would need to be modified to understand Silverlight's IsolatedStorage. It would also have to limit its file operations to those that are supported by the streams available in isolated storage.
The creation of a DB-esque data source in Silverlight is typically done by creating classes that represent records and collections of records, using LINQ to query them and XML serialization into isolated storage to persist them.
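As a rough illustration of that approach (the Record type and file name are invented for the example):

// Assumes: using System.Collections.Generic; using System.IO;
// using System.IO.IsolatedStorage; using System.Linq; using System.Xml.Serialization;
public class Record
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class RecordStore
{
    public static void Save(List<Record> records)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = new IsolatedStorageFileStream("records.xml", FileMode.Create, store))
        {
            new XmlSerializer(typeof(List<Record>)).Serialize(stream, records);
        }
    }

    public static List<Record> Load()
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = new IsolatedStorageFileStream("records.xml", FileMode.Open, store))
        {
            return (List<Record>)new XmlSerializer(typeof(List<Record>)).Deserialize(stream);
        }
    }
}

// Query in memory with LINQ, e.g.:
// var matches = RecordStore.Load().Where(r => r.Name.Contains("foo")).ToList();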
Here is a hacked version of the SQLite code that works with Silverlight; you can use it for some ideas on what to do: http://www.itwriting.com/blog/1695-proof-of-concept-c-sqlite-running-in-silverlight.html

Logging when application is running as XBAP?

Has anybody here actually implemented a logging strategy for an application running as an XBAP? Any suggestions (as code) on how to implement a simple strategy, based on your experience?
In desktop mode my app logs to a rolling log file using an integrated log4net implementation, but as an XBAP I can't log because the file gets stored in the cache (the app2.0 folder or something like that). So I check whether the app is browser hosted and don't log, since I don't even know whether it ever logs (it's the same codebase). If only there were a way to push this log to a service, like a web service, or to post errors to some endpoint...
My xbap is full trust intranet mode.
I would log to isolated storage and provide a way for users to submit the log back to the server using either a simple PUT/POST with HttpWebRequest or, if you're feeling frisky, via a WCF service.
Keep in mind an XBAP only gets 512 KB of isolated storage, so you may actually want to push those event logs back to the server automatically. Also remember that the XBAP can only talk back to its origin server, so the service that accepts the log files must run under the same domain.
Here's some quick sample code that shows how to set up a TextWriterTraceListener on top of an IsolatedStorageFileStream, at which point you can just use the standard Trace.Write[XXX] methods to do your logging.
// Open (or create) a trace log file in the XBAP's isolated storage.
IsolatedStorageFileStream traceFileStream = new IsolatedStorageFileStream("Trace.log", FileMode.OpenOrCreate, FileAccess.Write);
// Route Trace output to that file.
TraceListener traceListener = new TextWriterTraceListener(traceFileStream);
Trace.Listeners.Add(traceListener);
UPDATE
Here is a revised answer, based on the additional details you've added to your question.
Since you mention you're using log4net in your desktop app, we can build upon that dependency you are already comfortable working with, as it is entirely possible to continue to use log4net in the XBAP version as well. log4net does not come with an implementation that will solve this problem out of the box, but it is possible to write an implementation of a log4net IAppender which communicates with WCF.
I took a look at the implementation the other answerer linked to by Joachim Kerschbaumer (all credit due) and it looks like a solid implementation. My first concern was that, in a sample, someone might be logging back to the service on every event and perhaps synchronously, but the implementation actually has support for queuing up a certain number of events and sending them back to the server in batch form. Also, when it does send to the service, it does so using an async invocation of an Action delegate which means it will execute on a thread pool thread and not block the UI. Therefore I would say that implementation is quite solid.
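To illustrate the general shape of such an appender, here is a minimal sketch. This is not Joachim's implementation; LogServiceClient stands in for whatever WCF proxy you generate, and the batching/flush logic is deliberately simplified.

// Assumes: using System.Collections.Generic; using System.Threading; plus a reference to log4net.
// LogServiceClient is a hypothetical generated WCF proxy with a LogBatch(string[]) operation.
public class BatchingWcfAppender : log4net.Appender.AppenderSkeleton
{
    private readonly List<string> _queue = new List<string>();

    // Flush to the service once this many events have been queued.
    public int QueueSize { get; set; }

    protected override void Append(log4net.Core.LoggingEvent loggingEvent)
    {
        lock (_queue)
        {
            _queue.Add(loggingEvent.RenderedMessage);

            // Send when the queue is full, or immediately when an error (or worse) is logged.
            if (_queue.Count < QueueSize && loggingEvent.Level < log4net.Core.Level.Error)
                return;

            string[] batch = _queue.ToArray();
            _queue.Clear();

            // Hand the batch to a thread pool thread so logging never blocks the UI.
            ThreadPool.QueueUserWorkItem(state => Send(batch));
        }
    }

    private static void Send(string[] batch)
    {
        var client = new LogServiceClient();   // hypothetical proxy
        client.LogBatch(batch);
        client.Close();
    }
}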
Here's the steps I would take from here:
Download Joachim's WCF appender implementation
Add his projects to your solution.
Reference the WCFAppender project from your XBAP
Configure log4net to use the WCF appender. Now, there are several settings for this logger, so I suggest checking out his sample app's config. The most important ones, however, are QueueSize and FlushLevel. You should set QueueSize high enough so that, based on how much you actually are logging, you won't be chattering with the WCF service too much. If you're just logging warnings/errors, you can probably set this to something low. If you're logging informational messages, you want to set this a little higher. As for FlushLevel, you should probably just set it to ERROR, as this guarantees that no matter how big the queue is when an error occurs, everything will be flushed the moment the error is logged.
The sample appears to use LINQ2SQL to log to a custom DB inside of the WCF service. You will need to replace this implementation to log to whatever data source best suits your needs.
Now, Joachim's sample is written in a way that's intended to be very easy for someone to download, run and understand very quickly. I would definitely change a couple things about it if I were putting it into a production solution:
Separate the WCF contracts into a separate library which you can share between the client and the server. This would allow you to stop using a Visual Studio service reference in the WCFAppender library and just reference the same contract library for the data types. Likewise, since the contracts would no longer be in the service itself, you would reference the contract library from the service.
I don't know that wsHttpBinding is really necessary here. It comes with a couple more knobs and switches than one probably needs for something as simple as this. I would probably go with the simpler basicHttpBinding and if you wanted to make sure the log data was encrypted over the wire I would just make sure to use HTTPS.
My approach has been to log to a remote service, keyed by a unique user ID or GUID. The overhead isn't very high with the usual async calls.
You can cache messages locally, too, either in RAM or in isolated storage -- perhaps as a backup in case the network isn't accessible.
Be sure to watch for duplicate events within a certain time window. You don't want to log 1,000 copies of the same Exception over a period of a few seconds.
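One simple way to do that is shown below (a sketch only; the ten-second window and keying on the message text are arbitrary choices for the example):

// Assumes: using System; using System.Collections.Generic;
// Suppress repeats of the same message seen within a short window.
private static readonly Dictionary<string, DateTime> _lastSeen = new Dictionary<string, DateTime>();
private static readonly TimeSpan _window = TimeSpan.FromSeconds(10);

private static bool ShouldLog(string message)
{
    lock (_lastSeen)
    {
        DateTime last;
        if (_lastSeen.TryGetValue(message, out last) && DateTime.UtcNow - last < _window)
            return false;                       // same message logged recently - skip it
        _lastSeen[message] = DateTime.UtcNow;
        return true;
    }
}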
Also, I like to log more than just errors. You can also log performance data, such as how long certain functions take to execute (particularly out-of-process calls), or more detailed data in response to the user explicitly entering into a "debug and report" mode. Checking for calls that take longer than a certain threshold is also useful to help catch regressions and preempt user complaints.
If you are running your XBAP under partial trust, you are only allowed to write to the IsolatedStorage on the client machine. And it's just 512 KB, which you would probably want to use in a more valuable way (than for logging), like for storing user's preferences.
You are not allowed to do any Remoting stuff under partial trust either, so you can't use log4net's RemotingAppender.
Finally, under a partial trust XBAP you have WebPermission to talk only to the server of your app's origin. I would recommend using a WCF service, as described in this article. We use a similar configuration in my current project and it works fine.
Then, basically, on the WCF server side you can do logging to any place appropriate: file, database, etc. You may also want to keep your log4net logging code and try to use one of the wcf log appenders available on the internets (this or this).
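For reference, the server side of such a setup can be quite small. This is just a sketch; the contract name and the choice to hand everything to log4net on the server are illustrative, not prescriptive.

// Assumes: using System.ServiceModel; plus a reference to log4net on the server.
[ServiceContract]
public interface IClientLogService
{
    [OperationContract(IsOneWay = true)]
    void Log(string level, string message);
}

public class ClientLogService : IClientLogService
{
    private static readonly log4net.ILog _log =
        log4net.LogManager.GetLogger(typeof(ClientLogService));

    public void Log(string level, string message)
    {
        // Re-log on the server using whatever appenders (file, database, ...) are configured there.
        _log.Info(level + ": " + message);
    }
}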

Silverlight streaming upload

I have a Silverlight application that needs to upload large files to the server. I've looked at uploading using both WebClient as well as HttpWebRequest; however, I don't see an obvious way to stream the upload with either option. Due to the size of the files, loading the entire contents into memory before uploading is not reasonable. Is this possible in Silverlight?
You could go with a "chunking" approach. The Silverlight File Uploader on Codeplex uses this technique:
http://www.codeplex.com/SilverlightFileUpld
Given a chunk size (e.g. 10k, 20k, 100k, etc.), you can split up the file and send each chunk to the server using an HTTP request. The server will need to handle each chunk and re-assemble the file as the chunks arrive. In a web farm scenario where there are multiple web servers, be careful not to use the local file system on the web server for this approach.
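To make the idea concrete, here is a rough sketch of the client-side chunk loop. The upload URL, query string parameters and chunk size are invented for this example, and the server-side reassembly (and any cross-domain policy) is assumed to be handled elsewhere.

// Assumes: using System; using System.IO; using System.Net;
// fileStream can come from an OpenFileDialog (dialog.File.OpenRead()).
private const int ChunkSize = 64 * 1024;

private void SendNextChunk(Stream fileStream, string fileName, int chunkIndex)
{
    byte[] buffer = new byte[ChunkSize];
    int read = fileStream.Read(buffer, 0, buffer.Length);
    if (read <= 0) { fileStream.Close(); return; }   // all chunks have been sent

    // Hypothetical handler URL; name and chunk index tell the server where this piece belongs.
    Uri uri = new Uri(string.Format("http://example.com/upload?name={0}&chunk={1}", fileName, chunkIndex));
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
    request.Method = "POST";

    request.BeginGetRequestStream(ar =>
    {
        using (Stream body = request.EndGetRequestStream(ar))
        {
            body.Write(buffer, 0, read);              // only this one chunk is held in memory
        }
        request.BeginGetResponse(ar2 =>
        {
            request.EndGetResponse(ar2).Close();
            SendNextChunk(fileStream, fileName, chunkIndex + 1);   // move on to the next chunk
        }, null);
    }, null);
}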
It does seem extraordinary that the WebClient in Silverlight fails to provide a means to pump a Stream to the server with progress events. It's especially amazing since this is offered for a string upload!
It is possible to write code that would appear to be doing what you want with an HttpWebRequest.
In the callback for BeginGetRequestStream you can get the Stream for the outgoing request and then read chunks from your file's Stream and write them to the output stream. Unfortunately, Silverlight does not start sending the output to the server until the output stream has been closed. Where all this data ends up being stored in the meantime I don't know; it's possible that if it gets large enough SL might use a temporary file so as not to stress the machine's memory, but then again it might just store it all in memory anyway.
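For what it's worth, the pattern being described looks roughly like this (a sketch only, with fileStream assumed to come from an OpenFileDialog; the buffering caveat above still applies):

// Sketch: single request, writing the file to the request stream in chunks.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri("http://example.com/upload"));
request.Method = "POST";
request.BeginGetRequestStream(ar =>
{
    using (Stream body = request.EndGetRequestStream(ar))
    {
        byte[] buffer = new byte[64 * 1024];
        int read;
        while ((read = fileStream.Read(buffer, 0, buffer.Length)) > 0)
            body.Write(buffer, 0, read);   // nothing goes on the wire until the stream is closed
    }
    request.BeginGetResponse(ar2 => request.EndGetResponse(ar2).Close(), null);
}, null);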
The only solution to this that might be possible is to write the HTTP protocol via sockets.

Writing data into a database using a fully REST web service

How would one create a REST web service to write a row into a database table? Use the following scenario:
The table is called Customer - the data to be inserted into the row would be the name, address, telephone number, and email.
I think it's impossible to describe the whole thing end to end in Java or C#, and I would never expect that, but here are the questions I have popping into my head as I prepare for coding:
How would the URI look (e.g. if you use this URL - http://www.example.com/)?
What info would go into the HTTP envelope?
Would I use POST when writing to the database in this way?
Do I use a resource to store the posted data from the client? Is this even necessary if the data is being written to a database anyway?
When the data to be written into the DB is received by the server - how do I physically insert it into the database? Do I call some method on the server to actually write the data (in Java)? This doesn't seem to fit with a truly REST architecture - shunning RPC calls.
Should I even bother writing to a DB - should I be storing my data as a resource?
As you can see I need a few issues clearing in my head. Any help much appreciated.
First of all, I'm neither a Java nor a C# expert, and I don't know exactly what means these languages have to support REST design, but in general:
http://www.example.com/customers - customers is a collection of resources and you want to add a new resource to this collection
It depends on various things - you should probably set the Content-Type header (according to the data format in which you are sending the representation) and set some authentication headers if you need them.
Yes, you always use POST to create a new entry in a collection of resources.
I don't fully understand this question, to be honest. What do you mean by "immediately writing data into the database"?
REST is primarily just a style of communication between a server and a client. It doesn't say anything about how you should handle the data received by using it. The usual way modern web approaches (MVC-style frameworks) solve it is by routing every REST action to a method of some class (usually a controller instance), where you handle the received parameters (e.g. save them to the database) and generate a response to be sent back.
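To make that concrete in C#, here is a sketch using ASP.NET Web API purely as an example of such a framework; the Customer type is taken from the question, and the actual database insert is left as a placeholder.

// Assumes: using System; using System.Net; using System.Net.Http; using System.Web.Http;
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public string TelephoneNumber { get; set; }
    public string Email { get; set; }
}

public class CustomersController : ApiController
{
    // Handles POST to the customers collection (the exact URI depends on your routing configuration).
    public HttpResponseMessage Post(Customer customer)
    {
        // ... insert the row into the Customer table here (ADO.NET, an ORM, etc.) ...

        // Reply with 201 Created and the URI of the new resource.
        var response = Request.CreateResponse(HttpStatusCode.Created, customer);
        response.Headers.Location = new Uri(Request.RequestUri, customer.Id.ToString());
        return response;
    }
}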
For a very brief and very clear introduction to REST have a look at this short video.
RESTful Web Services, published by O'Reilly and Associates, seems to fit the bill you're looking for.
As far as doing it in Java, Sun has a page on it.
