I'm working on an Alexa skill and adding an image to the card in my response. The image is not showing up on the device, but I can view it in the simulator (screenshot attached).
What could be the reason? I enabled CORS on my S3 bucket, though I'm not sure if I did that properly; before enabling it, the image was not available in the simulator either. How can I check that?
The CORS policy was recently updated, so make sure yours reads as follows:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>http://ask-ifr-download.s3.amazonaws.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>https://ask-ifr-download.s3.amazonaws.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
  </CORSRule>
</CORSConfiguration>
I ran into this exact same problem while running the Alexa app on Android. You need to clear the app cache: first close the Alexa app, then go to Settings -> Apps -> Amazon Alexa -> Storage -> Clear Cache.
You don't need CORS on your bucket; just make it publicly accessible (which you probably already did), and make sure you are using https to access it.
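A quick way to check is to open the object URL directly in a browser over https, e.g. https://your-bucket.s3.amazonaws.com/your-image.png (placeholder names); if S3 returns an AccessDenied error instead of the image, the object isn't public.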
I also had a problem with images not being shown.
In my case, the images were not made public in S3.
You can make images public in 3 steps:
1. Open S3 and select the file.
2. Click the 'More' menu.
3. Click 'Make public'.
I am new to Salesforce and trying to set up my first call to a Salesforce sandbox.
I generated an Enterprise WSDL file and created the project in SoapUI.
The login request has the URL set to https://test.salesforce.com/services/Soap/c/48.0/0DFr0000XXXXXXX
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:enterprise.soap.sforce.com">
  <soapenv:Header>
  </soapenv:Header>
  <soapenv:Body>
    <urn:login>
      <urn:username>XXXX#XXXXXXX</urn:username>
      <urn:password>Password+Token</urn:password>
    </urn:login>
  </soapenv:Body>
</soapenv:Envelope>
I am getting java.net.SocketTimeoutException.
If I put the same URL in a browser
https://test.salesforce.com/services/Soap/c/48.0/0DFr0000XXXXXXX
I get the response
Only POST allowed
Does that mean it might not be a firewall issue?
Please help.
(not 100% an answer but too long for a comment)
It might be that you're behind a proxy or firewall. Maybe your browser automatically downloads some configuration script but in SoapUI and other tools like that you'd need to specify the proxy details manually. Do you have any other app that can connect OK? Data Loader? maybe simple ping test.salesforce.com? curl if you're familiar with it? Or tell SoapUI to GET www.google.com and see what happens? If these fail - talk with your IT about proxy details. Or try from another machine?
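To rule SoapUI itself out, you could also run a bare-bones reachability test from plain Java on the same machine. This is just a sketch assuming no JVM proxy settings are in play; the class name and timeout values are arbitrary:

import java.net.HttpURLConnection;
import java.net.URL;

public class SalesforceReachabilityCheck {
    public static void main(String[] args) throws Exception {
        // Same host the browser could reach; GET is enough to test connectivity.
        URL url = new URL("https://test.salesforce.com/services/Soap/c/48.0");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(10000); // fail fast instead of hanging
        conn.setReadTimeout(10000);
        // A reachable Salesforce endpoint answers GET with an error status
        // ("Only POST allowed"); a SocketTimeoutException here means the
        // network/proxy, not SoapUI, is the problem.
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}

If this times out as well, the proxy/firewall theory is confirmed.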
See if anything hit Salesforce: go to your user in Setup, scroll all the way down to Login History, and check whether there's any non-browser activity.
It's possible your SF admin disabled logging in from the generic test.salesforce.com. You can verify it in Setup -> My Domain. (But if you can log in via browser just fine with a normal SF username and password, no Single Sign-On tricks, ignore this point.)
As stupid as it sounds: what timeout do you have set in SoapUI? ;)
I am trying to include an HTML template with Angular like this:
<div ng-include="http://SOME_OTHER_DOMAIN/template.html"></div>
As shown, the template is on another domain, more specifically in an S3 bucket.
I have full control of this bucket and I have already applied a CORS configuration like this:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
As expected, the template loads correctly, and when inspecting the network traffic Chrome shows these response headers:
access-control-allow-methods:GET
access-control-allow-origin:*
access-control-max-age:3000
content-encoding:gzip
content-type:text/html
........
Now the strange thing is that, without any modification or change, the template stops loading and the browser complains that there is no access-control-allow-origin header on the response. And indeed, when I inspect the network again, the CORS headers are missing. This happens randomly; even checking the website in Chrome and Firefox at the same time, one browser will get the CORS headers as expected and the other will not.
I have read about the browser's Same-Origin Policy and Cross-Origin Resource Sharing (CORS), but I find this behavior really strange.
I'd like your suggestions on this: is it a browser problem, something cache-related, an S3 bug, or something else?
I have found some solutions involving a proxy server that just adds CORS headers to responses that lack them, but keeping a server just for this is a little awkward, let alone that S3 already implements CORS by default.
Given the intermittent nature of this failure, my guess would be that the browser has cached the HTML file you're trying to load, and this cached version was not loaded with CORS.
So basically: did you load the file earlier without going through CORS? A second, cross-domain request would then require the CORS headers, but it never even leaves the browser because the resource has already been cached.
Are you loading the template file directly in your browser by typing in its URL (not a cross-domain request, because you're accessing it directly out of its bucket)?
One possible fix is to add a cache-busting pattern to the template URL, which means the browser cannot reuse a cached copy, e.g. append ?nocache to the end of the template URL.
See this answer: https://stackoverflow.com/a/14238351/808532
Edit: also, maybe Angular now works without this, but shouldn't the URL string be single-quoted inside your ng-include attribute?
e.g. ng-include="'http://SOME_OTHER_DOMAIN/template.html'"
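For reference, a minimal sketch of the cache-busting idea in an Angular controller (the module and property names here are made up; ng-include can bind to any scope expression):

// Expose a cache-busted template URL on the scope. The changing query
// string stops the browser from reusing a copy that was cached without
// CORS response headers.
angular.module('app').controller('MainCtrl', function ($scope) {
  $scope.templateUrl = 'http://SOME_OTHER_DOMAIN/template.html?nocache=' + Date.now();
});

and in the view: <div ng-include="templateUrl"></div>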
I am building an Angular app to upload files directly to S3 storage. The app seems to work, with one issue:
When it starts to upload, it first sends a request with the OPTIONS method, which fails with the errors below:
In Chrome: OPTIONS https://{my-domain}.s3.amazonaws.com/ net::ERR_INSECURE_RESPONSE
In Firefox: Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://{my-domain}.s3.amazonaws.com/. This can be fixed by moving the resource to the same domain or enabling CORS.
However, when I visit https://{my-domain}.s3.amazonaws.com in the browser, it says the connection is untrusted, and if I choose to add an exception, everything goes very well afterwards.
So I guess it has to do with the SSL certificate, which is issued for s3.amazonaws.com but not for {my-domain}.s3.amazonaws.com. It shouldn't be CORS, otherwise it would still be a problem after adding the exception.
How can I fix it? Do I need another SSL certificate?
Thanks,
Ron
After days of effort, I finally found out it is because I have a dot "." in the bucket name; check this for more details: Amazon S3 - HTTPS/SSL - Is it possible?
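For context: the S3 wildcard certificate *.s3.amazonaws.com only covers one subdomain level, so a bucket named e.g. my.bucket (hypothetical name) yields my.bucket.s3.amazonaws.com, which fails certificate validation. A path-style URL such as https://s3.amazonaws.com/my.bucket/key avoids the mismatch.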
I'm starting to use Cloud Endpoints in my GAE project but have been running into issues with the API not updating on the server.
localhost:8888/_ah/api/explorer is OK.
But when I deploy, nothing changes.
myapp.appspot.com/_ah/api/explorer is bad.
Further investigation shows that the URL endpoints do update,
for example: https://myapp.appspot.com/_ah/api/myapp/v1/foo/list
But the loaded client API is still incorrect,
for example: gapi.client.load('myapp', 'v1', callback, url);
gapi.client.myapp.foo.list();
If I change the call from foo/list to foo/list2, the REST URL updates, but the API package does not.
I'll try to cover the two cases people could run into:
Client Side:
The Google APIs Explorer web app caches aggressively, so you'll need to clear your cache or force a refresh after you update your API server-side to see the changes in the client.
Server Side (In Deployed Production App Engine App):
If you're having deployment issues, there are two places to look when debugging:
Check your Admin Logs (https://appengine.google.com/adminlogs?&app_id=s~YOUR-APP-ID) after deployment. After a successful deployment of your application code, you should see the message:
Completed update of a new default version
and shortly after that you should see:
Successfully updated API configuration
If this message instead indicates that the API configuration update failed, you should deploy again. If the error is persistent, you should notify us of a bug. If you don't see any message about your API configuration, check that the path /_ah/spi/.* is explicitly named in your routing config (app.yaml for Python, web.xml for Java); see the sketch at the end of this answer.
Check your Application Logs (https://appengine.google.com/logs?&app_id=s~YOUR-APP-ID) after deployment. After the deployment finishes, Google's API infrastructure makes a request to /_ah/spi/BackendService.getApiConfigs in your application so that your API configuration (as JSON) can be registered with Google's API infrastructure and all the discovery-related configs can be created. If this request does not complete with a 200, then your API changes will not show up since Google's API infrastructure will have nothing to register.
If you are consistently getting a 302 redirect for requests to /_ah/spi/BackendService.getApiConfigs, it is because you (or your generated API config) have specified a "bns adapter" that uses http: as the protocol in your API root, but your web.xml (Java) or app.yaml (Python) requires that paths under /_ah/spi be secure. Requests using http: then get redirected (with a 302) to the same path using https:. This was discussed on the Trusted Tester forum before going to Experimental.
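For reference, the Java routing config mentioned above would look something like this in web.xml (a sketch; com.example.MyEndpoint stands in for your own endpoint class):

<servlet>
  <!-- The standard Endpoints servlet that serves /_ah/spi requests -->
  <servlet-name>SystemServiceServlet</servlet-name>
  <servlet-class>com.google.api.server.spi.SystemServiceServlet</servlet-class>
  <init-param>
    <param-name>services</param-name>
    <param-value>com.example.MyEndpoint</param-value>
  </init-param>
</servlet>
<servlet-mapping>
  <servlet-name>SystemServiceServlet</servlet-name>
  <url-pattern>/_ah/spi/*</url-pattern>
</servlet-mapping>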
This is what happened to me.
I tested my endpoint on localhost and it worked fine.
I deployed my endpoint on appspot, and when I made requests to it I received the message 'Not found' in the browser.
So I looked in the logs, and when I made requests to the endpoint I saw a 404 HTTP error code on the favicon file. Indeed, I had forgotten to include that file in my deploy.
So I redeployed my WAR with the favicon file, the 404 disappeared, and the endpoint worked fine on appspot too!
I realize that this may sound silly, but it is what I experienced.
I noticed that if you upload your app for the first time without the following in your web.xml:
<security-constraint>
  <web-resource-collection>
    <web-resource-name>spi</web-resource-name> <!-- any name works here -->
    <url-pattern>/_ah/spi/*</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
then your bns adapter will be set to http going forward. When I added the above afterwards, I got a 302 on /_ah/spi/BackendService.getApiConfigs and the endpoints never updated.
So for now I have reverted to not using https on /_ah/spi, and my endpoints are updating. I guess those who see their endpoints not being updated should revert to the first SSL configuration they had for /_ah/spi/.
Yaw.
I had the same 'Not Found' (404) error when I was calling my API using this URL:
https://MY_APP_ID.appspot.com/_ah/api/MY_SERVICE/v1/user
I tried everything and finally fixed it by removing the discovery files from WEB-INF, keeping only MY_SERVICE-v1.api, and then redeploying the API. It works fine now.
I was also getting a stale API discovery doc after deploying a new version; it took a couple of minutes for GAE to start serving the new one to me.
I had the same problem: I checked the admin logs, the other logs, etc., but still my API wasn't updating to the latest version.
So I decided to check the API code for the last method I had written (I am writing in Java 7), and I found out that GAE doesn't like statements like:
if (!blocked){ .... }
I switched that to:
if (blocked == false) { ... }
And it worked like a charm. So by the looks of it, GAE scans the new API methods and doesn't accept some shortcuts.
My Silverlight application is currently hosted in IIS and is set up to only use HTTPS.
The Silverlight web project is the root of the IIS website, and the web services project is a separate web application mapped to /Services.
I can navigate to my site by using "" and "",
but if I use the second option the site loads fine, yet I get an error when attempting to access any of my services.
An error occurred while trying to make a request to URI 'https://localhost/Services/Services/Authentication.svc'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute.
I have Crossdomain.xml and clientaccesspolicy.xml files in the root of my web services application and also in the root of the Silverlight web project.
Crossdomain.xml
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <allow-access-from domain="https://*" secure="true" />
  <allow-http-request-headers-from domain="*" headers="*"/>
</cross-domain-policy>
clientaccesspolicy.xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from>
        <domain uri="https://*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
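(If SOAP calls still fail once the policy file is served, note the error's hint about SOAP-related HTTP headers: the allow-from element can whitelist them, e.g. <allow-from http-request-headers="SOAPAction">. This is a guess based on the error text, not something confirmed by your setup.)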
I'm not really sure what the problem is.
Thanks
Edit
The following is what Fiddler shows after calling the service.
With Fiddler set up to decode HTTPS, IE didn't show any extra entries, but with Chrome I get the following output:
As the error message says, "This could be due to attempting to access a service in a cross-domain way...". Try using a tool such as Fiddler on the client to see what the actual response from the server is. That will give you more information about the issue.
As shown by Fiddler, your reference files for the service have pointers to localhost:444; this usually happens when you have both projects in the same solution and add the service reference.
I resolved this by right-clicking the frontEnd.Web part of my solution, going to Properties and then the Web tab, and, instead of using the auto-assign port option, changing it to use the local IIS server. This got rid of the error.