How to do cross-domain calls from Silverlight?

What's needed to successfully make a cross-domain call from Silverlight?

If I understand your question correctly, you need a clientaccesspolicy.xml file in the web root of the domain you wish to call (i.e. www.example.com/clientaccesspolicy.xml) that declares it is OK for clients hosted on other domains to call services on that domain.
Read the How to Make a Service Available Across Domain Boundaries MSDN article for more detailed information.
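For reference, a minimal clientaccesspolicy.xml that opens the whole site to Silverlight callers from any domain looks roughly like this (in practice you would tighten the domain uri and resource path to what you actually need):

    <?xml version="1.0" encoding="utf-8"?>
    <access-policy>
      <cross-domain-access>
        <policy>
          <!-- Which calling domains are allowed (here: all of them) -->
          <allow-from http-request-headers="SOAPAction">
            <domain uri="*"/>
          </allow-from>
          <!-- Which parts of this site they may call -->
          <grant-to>
            <resource path="/" include-subpaths="true"/>
          </grant-to>
        </policy>
      </cross-domain-access>
    </access-policy>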

See Jon Galloway's blog post on this as well
http://weblogs.asp.net/jgalloway/archive/2008/12/12/silverlight-crossdomain-access-workarounds.aspx

Intellisense helper file and walk-through: http://silverlight.net/learn/learnvideo.aspx?video=47174

Maybe also check out JSONP (http://www.west-wind.com/weblog/posts/107136.aspx). For example, this is how you can get Twitter updates in JavaScript on the client side even though Twitter is on a different domain than your web page.

Why couldn't I see any text in "http://crawlservice.appspot.com/?key=123456&url=http://mydomain.com#!article"?

OK, I found this link https://code.google.com/p/gwt-platform/wiki/CrawlerSupport#Using_gwtp-crawler-service that explains how you can make your GWTP app crawlable.
I have some GWTP experience, but I know nothing about AppEngine.
Google said its "crawlservice.appspot.com" can parse any Ajax page. Now I have a page "http://mydomain.com#!article" that shows an article pulled from the database. Say that page has the text "this is my article". Now I open this link:
crawlservice.appspot.com/?key=123456&url=http://mydomain.com#!article, and I can see all the JavaScript, but I couldn't find the text "this is my article".
Why?
Now let's check with a real-life example:
Open this link https://groups.google.com/forum/#!topic/google-web-toolkit/Syi04ArKl4k and you will see the text "If i open that url in IE".
Now open http://crawlservice.appspot.com/?key=123456&url=https://groups.google.com/forum/#!topic/google-web-toolkit/Syi04ArKl4k and you can see all the JavaScript, but there is no text "If i open that url in IE".
Why is that?
So if I use http://crawlservice.appspot.com/?key=123456&url=mydomain#!article, will the Google crawler be able to see the text in mydomain#!article?
Also, why key=123456? Does that mean everyone can use this service? Do we have our own key? Does Google limit the number of calls to their service?
Could you explain all these things?
Extra Info:
Christopher suggested that I use this example:
https://github.com/ArcBees/GWTP-Samples/tree/master/gwtp-samples/gwtp-sample-crawler-service
However, I ran into another problem. My app is pure GWTP; it doesn't have appengine-web.xml in WEB-INF. I have no idea what App Engine (GAE) is, or what Maven is.
Do I need to register for App Engine?
My app may have a lot of traffic. Also, I am using a GoDaddy VPS. I don't want to register for App Engine since I would have to pay Google for extra traffic.
Everything in my GWTP app is OK right now except the crawler function.
So if I don't use Google App Engine, how can I build the crawler function for GWTP?
I tried to use HTMLUnit for my app, but HTMLUnit doesn't work for GWTP (see details here: Why HTMLUnit always shows the HostPage no matter what url I type in (Crawlable GWT APP)?).
I believe you are not allowed to crawl Google Groups. Probably they are actively trying to prevent this, so you do not see the expected content.
There are a couple of points I wish to elaborate on:
The Google Code documentation is no longer maintained. You should look on Github instead: https://github.com/ArcBees/GWTP/wiki/Crawler-Support
You shouldn't use http://crawlservice.appspot.com. This isn't a Google service; it's out of date, and we may decide to delete it down the road. It only serves as a public example. You should create your own application on App Engine (https://appengine.google.com/).
There is a sample here (https://github.com/ArcBees/GWTP-Samples/tree/master/gwtp-samples/gwtp-sample-crawler-service) using GWTP's Crawler Service. You can basically copy-paste it. Just make sure you update the <application> tag in appengine-web.xml to the name of your application and use your own service key in CrawlerModule.
Finally, if your client uses GWTP and you followed the documentation, it will work. If you want to try it manually, you must encode the Query Parameters.
For example, http://crawlservice.appspot.com/?key=123456&url=http://www.arcbees.com#!service will not work, because the hash (everything including and after the #) is not sent to the server.
On the other hand, http://crawlservice.appspot.com/?key=123456&url=http%3A%2F%2Fwww.arcbees.com%2F%23!service will work.

Authentication/HTTP headers support in forge.file trigger.io module?

In the official trigger.io docs there seems to be no provision for custom HTTP headers when it comes to the forge.file module. I need this so I can download files behind an HTTP authentication scheme. This seems like an easy thing to add, if support is not already there.
Any workarounds? Any chance of a quick fix in the next update? I know I could use forge.request instead, but I'd like to keep a local copy (saveURL).
Thanks
Unfortunately the file module just uses simple "download url" methods rather than a full HTTP request library, which makes it a fairly big task to add support for custom headers.
I've added a task to our backlog for this, but I don't have a timeframe for it being added.
Currently on iOS you can do basic auth by using URLs in the form http://user:password@url.com, in case that helps.
Maybe to avoid this you can configure your server differently, or put a proxy server in front that allows you to pass authentication details as GET parameters?

How to use google.maps within an asmx web service

I need to calculate distances for some logic within a web service, and I assume the google.maps API is appropriate. Everything I've seen is JScript and requires a reference to the script in HTML <script> tags, which does not apply here. A .dll would make things obvious to me, but that does not seem to be available...
How do you access google.maps within a C# .asmx?
You will have to do the same thing that would be done by the JavaScript code you're seeing as examples. You'll want to use the WebClient class or maybe the WebRequest class to do the network I/O, but you've got to send and receive HTTP messages.
"Add Service Reference" won't work, of course.
Note that this problem is not specific to ASMX web services. You would have the exact same issue in a console program or Winforms application.
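For instance, here is a rough server-side C# sketch using WebClient. It assumes you call Google's Distance Matrix web service over HTTP; check Google's current documentation for the exact endpoint, parameters, key requirements and terms of use:

    using System;
    using System.Net;

    public static class DistanceLookup
    {
        // Returns the raw JSON response; parse it with a JSON library
        // (e.g. Json.NET) to pull out the distance/duration values.
        public static string GetDistanceJson(string origin, string destination, string apiKey)
        {
            string url = "https://maps.googleapis.com/maps/api/distancematrix/json"
                       + "?origins=" + Uri.EscapeDataString(origin)
                       + "&destinations=" + Uri.EscapeDataString(destination)
                       + "&key=" + Uri.EscapeDataString(apiKey);

            using (var client = new WebClient())
            {
                return client.DownloadString(url);
            }
        }
    }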

.NET Windows Forms Client: Capturing SOAP Request/Response from an ASMX Web Service

Before I decided to post this question I went through several articles and questions on here... none of those seem to be a solution for me, or I am doing something wrong.
I went through this article, suggested on this site:
http://www.codeproject.com/Articles/38986/Trace-SOAP-Request-Response-XML-with-TraceExtensio?msg=4152902#xx4152902xx
That's not working, not even the source code I downloaded.
Then I found this other article:
http://blog.encoresystems.net/articles/how-to-capture-soap-envelopes-when-consuming-a-web-service.aspx
This is simple.
I have a client (WinForms) that interacts with a web service I have no control over, and I need to be able to capture the SOAP request and response. I have followed about five tutorials so far (SoapExtensions, SoapAttributes, etc.), and nothing seems to work for me. I have modified app.config and done everything by the book... nothing.
Question: does anyone have a WORKING example of this? The two examples I found don't work. :)
I am using Visual Studio 2010.
Have you tried using tracing in the config file as described in the MSDN articles How to: Configure Network Tracing and Configuring Tracing?
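For reference, the app.config fragment from those articles boils down to something like this (a trimmed sketch; see MSDN for the full list of sources and switch values):

    <configuration>
      <system.diagnostics>
        <sources>
          <!-- Log HTTP traffic, which includes the SOAP envelopes on the wire -->
          <source name="System.Net" tracemode="includehex" maxdatasize="1024">
            <listeners>
              <add name="TraceFile"/>
            </listeners>
          </source>
        </sources>
        <switches>
          <add name="System.Net" value="Verbose"/>
        </switches>
        <sharedListeners>
          <add name="TraceFile"
               type="System.Diagnostics.TextWriterTraceListener"
               initializeData="network.log"/>
        </sharedListeners>
        <trace autoflush="true"/>
      </system.diagnostics>
    </configuration>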
Simply use a "Service Reference" instead of a "Web Reference" then see WCF Tracing.
SOAP extensions need to be registered on the service side (that's why all posts asking you to do configuration inside web.config).
If you want to print out the SOAP messages inside your WinForms client, you will have to call web service in the "raw" way,
http://mikehadlow.blogspot.com/2006/05/making-raw-web-service-calls-with.html
#James demonstrates System.Net tracing, which is another way to see the SOAP messages in an external log file, but that's only useful for troubleshooting, as you won't receive the tracing data inside your client.
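To illustrate the "raw" approach mentioned above: you build the SOAP envelope yourself and post it with HttpWebRequest, so the exact request and response strings pass through your code. A sketch (the service URL, SOAPAction and envelope are placeholders for your real service):

    using System.IO;
    using System.Net;
    using System.Text;

    public static class RawSoapClient
    {
        public static string Invoke(string serviceUrl, string soapAction, string soapEnvelope)
        {
            var request = (HttpWebRequest)WebRequest.Create(serviceUrl);
            request.Method = "POST";
            request.ContentType = "text/xml; charset=utf-8";
            request.Headers.Add("SOAPAction", soapAction);

            // soapEnvelope is the captured request XML
            byte[] body = Encoding.UTF8.GetBytes(soapEnvelope);
            using (Stream requestStream = request.GetRequestStream())
            {
                requestStream.Write(body, 0, body.Length);
            }

            // ...and the return value is the captured response XML
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
    }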

Using a subdomain to identify a client

I'm working on building a Silverlight application where we want a client to be able to hit a URL like:
http://{client}.domain.com/
and log in, where the {client} part is their business name. So, for example, Google's would be:
http://google.domain.com/
What I was wondering is whether anyone has been able, in Silverlight, to use this subdomain model to make decisions on the call to the web server, so that you can switch to a specific database to run a query. Unfortunately, it's something that is quite necessary for the project, as we are trying to make it easy for their employees to get their company-specific information from our software.
Wouldn't it work to put the service on a specific subdomain itself, such as wcf.example.com, and then set up a cross-domain policy file on the service to allow access to it?
As long as that works, you could just load the Silverlight app in the proper subdomain and then pass that subdomain to your service and let it do its thing.
Some examples of this below:
Silverlight Cross Domain Services
Silverlight Cross Domain Policy Helpers
On the server side you can check the HTTP 1.1 Host header to see how the user came to your server and do the necessary customization based on that.
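For example, a rough ASP.NET-side sketch (names are illustrative; adapt to however your service exposes the current request):

    using System.Web;

    public static class TenantResolver
    {
        // Derive the {client} part from the incoming Host header.
        public static string GetClientFromHost()
        {
            string host = HttpContext.Current.Request.Headers["Host"]; // e.g. "google.domain.com"
            return host.Split('.')[0];                                  // e.g. "google"
        }
    }

You can then use that value to pick the connection string or database for the tenant.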
I think you cannot do this with Silverlight alone; I know you cannot do it without problems in JavaScript, Ajax, etc. That is because, for security reasons, browsers treat a subdomain differently from a sub-page.
What about the following idea: insert a rewrite rule into your web server software, so that if http://google.domain.com is called, the web server itself rewrites the URL to something like http://www.domain.com/google/ (or better: http://www.domain.com/customers/google/). Would that help?
Georgi:
That would help if it were static, but alas, it's all going to be dynamic. My hope was to have a single deployment of the application, and to use the http://google.domain.com/ idea to switch to the correct database for the user. I recall doing this once when we built an ASP.NET website, using the domain context to figure out which skin to use, etc.
Ates: Can you explain more about what you are saying? It sounds like you are close to what I am trying to come up with. Have you seen a tutorial for this?
The only other way I have come up with to make this work is to have a metabase so that when the user logs in, it switches them to the appropriate database as required... I was also thinking that telling client X to hit:
http://ClientX.domain.com/ would have been sweeter than saying to hit http://www.domain.com/ and log in. Letting them hit their own name, and showing it personalized for them right from the login screen, would have been much more appealing to the client base.
#Richard B: No, I can't think of any such tutorial that I've seen before. I'll try to be more verbose.
The server-side approach in more detail:
Direct *.example.com to the same IP in your DNS settings.
The backend app that handles login checks the Host HTTP header (e.g. the "HTTP_HOST" server variable on some platforms). It will contain the exact subdomain.example.com that the client used to reach your server. Extract the subdomain part and continue...
There can also be a client-side-only approach. I don't know much about Silverlight, but I'm assuming you can interface Silverlight with JavaScript. You could read document.location with JavaScript and pass it to your Silverlight applet, whereupon further data-fetching logic would rely on the subdomain passed in by JavaScript.
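If it helps, in Silverlight you may not even need hand-written JavaScript for this: the System.Windows.Browser HTML bridge can read the hosting page's URI directly. A sketch, assuming the HTML bridge is enabled (which it is by default):

    using System;
    using System.Windows.Browser;

    public static class ClientSubdomain
    {
        public static string Get()
        {
            Uri pageUri = HtmlPage.Document.DocumentUri; // e.g. http://google.domain.com/
            return pageUri.Host.Split('.')[0];           // e.g. "google"
        }
    }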
#Ates:
That is what we did when we wrote the ASP.NET system... we pointed a slew of *.example.com hosts at the web server and handled it using the HTTP headers. The hold-up comes when dealing with WCF pushing the info between the client and the server... it can only exist in one domain...
So, for example, when you have {client}.example.com and {sandbox}.example.com, the WCF service can't be registered to both. It also cannot be registered to just *.example.com or example.com, so that's where the catch-22 comes in. Everything else I have prior knowledge of handling.
I recall a method by which an application can "spoof" another domain name in certain instances. I take it that in this case I would need such a configuration? Much to research yet, I believe.
