I'm developing a recorder in Silverlight, and I need to upload data from a stream to the web server after the recording process is completed.
On the server side I'm using ASP.NET MVC 3, and I have created a controller with a FileUpload action.
public class FileUploaderController : Controller
{
    [HttpPost]
    public ActionResult FileUpload(string fileName)
    {
        ....
    }
}
In the Silverlight applet, the upload is made in parts of about 20,000 bytes at a time. The server's web.config is configured to accept a larger amount of data.
The server returns an exception: "The remote server returned an error: NotFound.".
In this case the request has not reached the action, and I can't understand why.
Here is an example of the code used to start the upload:
UriBuilder httpHandlerUrlBuilder = new UriBuilder("http://localhost:37386/FileUploader/FileUpload/?fileName=" + Guid.NewGuid() + ".wav");
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(httpHandlerUrlBuilder.Uri);
webRequest.Method = "POST";
webRequest.ContentType = "multipart/form-data"; // This solved my problem
webRequest.BeginGetRequestStream(new AsyncCallback(WriteToStreamCallback), webRequest);
EDIT
My route configuration is the default:
routes.MapRoute(
    "Default", // Route name
    "{controller}/{action}/{id}", // URL with parameters
    new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
);
When a small amount of data is sent, everything goes well and the server receives the data. But when the data to be sent is larger, I just get a NotFound response. This doesn't make any sense to me. What I'm doing is:
use HttpWebRequest to send 20,000 bytes
close the request stream (obtained from request.EndGetRequestStream)
wait for the server response (from webRequest.EndGetResponse) — this is where the error occurs.
In my case I never send more than 20,000 bytes, so it's strange that this works sometimes and not others.
I don't know a better way to explain this problem. If you need I can provide more code and more information.
Any help is very much appreciated.
With Fiddler I was able to get more detailed information about the error. It was "A potentially dangerous Request.Form value was detected from the client...".
To solve this, I specified the content type of the webRequest as "multipart/form-data".
Our front-end uploads documents to S3 using pre-signed URLs and seems to be failing randomly. This part of the functionality is very critical to us.
Our pre-signed URLs are generated by the back-end using boto3.
[...]
@classmethod
def get_presigned_url(cls, filename, user, content_type, size=None):
    client = cls.get_s3_client()
    import logging
    logging.info(cls.generate_keyname(filename, user))
    key = cls.generate_keyname(filename, user)
    params = {'Bucket': cls.s3_staging_bucket, 'Key': key,
              'ContentType': content_type}
    if size:
        params['ContentLength'] = size
    # It's private as default
    if cls.is_private:
        params['ACL'] = 'private'
    else:
        params['ACL'] = 'public-read'
    return client.generate_presigned_url(
        'put_object',
        Params=params,
        ExpiresIn=600
    ), cls.get_url(key, cls.s3_staging_bucket)
[...]
So the front-end sends the following information to request an upload link:
[...]
// Request Presigned url
Restangular.all('upload').all('get_presigned_url').post(
{
'resource_type': 'candidate-cv',
'filename': vm.file.name,
'size': vm.file.size || null,
'content_type': vm.file.type || 'application/octet-stream'
}
).then(
[...]
Things to note in the above example: the size and type are not available in all browsers, so I have to fall back to defaults.
Once the link is retrieved, the front-end attempts to upload directly to the S3 bucket:
[...]
$http.put(
data['presigned_url'],
vm.file,
{
headers: {
'Content-Type': vm.file.type || 'application/octet-stream',
'Authorization': undefined // Needed to remove default ApiKey
}
}
).then(
[...]
The above code sometimes gives a -1 response. "Sometimes" is a problem, because it happens way too often, probably in around 3% of cases.
We have inserted a debug logger that sends debug information on every bad response, but everything really seems to be all right there.
Our facts so far:
In the beginning it seemed to me like a connectivity issue, but shouldn't the response status then be 0 instead of -1?
It happens way too often for a connectivity issue (~3%).
It happens on a whole range of user agents: Windows/Mac, Chrome/Edge, mobile/desktop, old and new.
It happens with a whole range of document formats: docx/doc/pdf.
The same users tried several times in a row during a 1-hour period, and all attempts failed with -1.
The same users with the same user agents seem to be able to upload successfully the day before or the day after.
We are unable to replicate it.
What are we doing wrong? What direction should we take to investigate this problem? What next steps should we follow to solve the issue?
Thanks for your input.
EDIT:
@tcrite suggested that -1 means a client-side timeout, and that seemed correct when I replicated the problem in my local env. We updated the production server, adding long client timeouts: 250 seconds.
But just recently we got several -1 responses again. The user tried to submit a file 6 times in 2 minutes, all attempts resulting in a -1 response code, even though the timeout config was present:
Response:
{
    "data": null,
    "status": -1,
    "config": {
        "method": "PUT",
        "transformRequest": [
            null
        ],
        "transformResponse": [
            null
        ],
        "jsonpCallbackParam": "callback",
        "headers": {
            "Content-Type": "application/msword",
            "Accept": "application/json, text/plain, */*"
        },
        "timeout": 250000,
        "url": "https://stackoverflow-question.s3.amazonaws.com/uploads/files/a-b-a36b9b2f216..."
    }
}
It can't be an S3 timeout, as in my local env I tried uploading a file over a slow connection for ~5 minutes and it was uploaded successfully.
I think you should make a server-side web application to upload files (rather than browser-based Angular), because browsers are sometimes restricted by company policy.
Check this Python Django application; I believe you are already using Python.
https://testdriven.io/blog/storing-django-static-and-media-files-on-amazon-s3/
I'm new to the Single Page Application area and I'm trying to develop an app using AngularJS and the Spark framework. I get a 400 Bad Request error when I POST JSON from my website. Here is a code fragment from the client side:
app.controller('PostTripCtrl', function($scope, $http) {
    $scope.newTrip = {};
    $scope.submitForm = function() {
        $http({
            method : 'POST',
            url : 'http://localhost:4567/trips/add',
            data : $scope.newTrip,
            headers : {
                'Content-Type' : 'application/x-www-form-urlencoded'
            }
        }).success(function(data) {
            console.log("ok");
        }).error(function(data) {
            console.log("error");
            console.log($scope.newTrip);
        });
    };
});
The values assigned to newTrip are read from the appropriate inputs in the HTML file. Here is the server-side fragment:
post("/trips/add", (req, res) -> {
String tripOwner = req.queryParams("tripOwner");
String startDate = req.queryParams("startDate");
String startingPlace = req.queryParams("startingPlace");
String tripDestination = req.queryParams("tripDestination");
int tripPrice = Integer.parseInt(req.queryParams("tripPrice"));
int maxNumberOfSeats = Integer.parseInt(req.queryParams("maxNumberOfSeats"));
int seatsAlreadyOccupied = Integer.parseInt(req.queryParams("seatsAlreadyOccupied"));
tripService.createTrip(tripOwner, startDate, startingPlace, tripDestination, tripPrice, maxNumberOfSeats,
seatsAlreadyOccupied);
res.status(201);
return null;
} , json());
In the end I get a 400 Bad Request error. What is strange is that when I print this to the console:
System.out.println(req.queryParams());
I get a JSON array of objects with the values I entered on the website. However, when I print this:
System.out.println(req.queryParams("tripOwner"));
I get null. Does anyone have an idea what is wrong here?
I think the main problem is that you are sending data to your Spark web service with the 'Content-Type' : 'application/x-www-form-urlencoded' header. Try sending it as 'Content-Type' : 'application/json' instead; then in your Java code declare a String to receive req.body(), and you'll see all your data in there.
Note: when you try to access your data like this: req.queryParams("tripOwner"), you're not accessing POST data; you're looking for a GET parameter called tripOwner, one that could be sent like this: http://localhost:8080/trips/add?tripOwner=MyValue.
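For illustration only, here is a minimal sketch of that approach, assuming a simple Trip POJO whose fields mirror the question's query parameters (the class name and POJO are made up; the call into tripService is left as a comment):
import com.fasterxml.jackson.databind.ObjectMapper;

import static spark.Spark.post;

public class TripRoutes {

    // Hypothetical POJO mirroring the JSON the Angular client sends.
    public static class Trip {
        public String tripOwner;
        public String startDate;
        public String startingPlace;
        public String tripDestination;
        public int tripPrice;
        public int maxNumberOfSeats;
        public int seatsAlreadyOccupied;
    }

    public static void main(String[] args) {
        ObjectMapper mapper = new ObjectMapper();

        post("/trips/add", (req, res) -> {
            // With 'Content-Type' : 'application/json' the payload arrives in
            // the request body, not in the query string.
            Trip trip = mapper.readValue(req.body(), Trip.class);
            // ... pass trip's fields to tripService.createTrip(...) here ...
            res.status(201);
            return "";
        });
    }
}
Remember to also change the Angular side to send 'Content-Type' : 'application/json' (which is $http's default when posting an object); otherwise the body will still arrive URL-encoded.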
I would advise using Postman to post a request to your server and see if it works. Try a different content type too. Try using curl and play with the various headers you are sending. A 400 suggests that the wrong data is being sent, that expected data is missing, or that the data is of the wrong type, but based on the code you've provided I can see nothing wrong (but see below).
When your server receives a request, log all the request headers being received and see what changing them does. If it works in Postman, then you can change your client code to mirror the headers Postman is using.
Does your Spark server validate the data being sent before your controller code is hit? If so, ensure you are adhering to all validation rules.
Also, looking at your code again: your client is sending the data in the POST body, but your server is expecting the data in the query string rather than in the POST body.
What happens if your server just sends a 201 response and does nothing else? Does your client get a 201 back? If so, it suggests the hook-up is working but there is something wrong with the code before you return a 201; build it up slowly to fix this.
OK, I managed to cope with that using another approach. I used Jackson and ObjectMapper according to the Spark documentation. Thanks for your answers.
You can see more about that here: https://sparktutorials.github.io/2015/04/03/spark-lombok-jackson-reduce-boilerplate.html
You probably just needed to enable CORS (Cross-Origin Resource Sharing) in your Spark server, which would have allowed you to access the REST resources outside the original domain of the request.
Spark.options("/*", (request,response)->{
String accessControlRequestHeaders = request.headers("Access-Control-Request-Headers");
if (accessControlRequestHeaders != null) {
response.header("Access-Control-Allow-Headers", accessControlRequestHeaders);
}
String accessControlRequestMethod = request.headers("Access-Control-Request-Method");
if(accessControlRequestMethod != null){
response.header("Access-Control-Allow-Methods", accessControlRequestMethod);
}
return "OK";
});
Spark.before((request,response)->{
response.header("Access-Control-Allow-Origin", "*");
});
Read more about pre-flighted requests here.
I have an AngularJS app trying to submit a form to a Java backend deployed in Tomcat 7.0.54. My AngularJS app seems to be submitting the form correctly. This is the content type header as recorded by Chrome's inspector:
Content-Type:application/json;charset=UTF-8
The request payload, once again as recorded by Chrome's inspector, is:
{"newProject":{"title":"título","deadline":"30/Maio/2014", .....
That is, the AngularJS app is putting the request on the wire correctly. However, I'm unable to read this payload correctly on the server side. Characters like "í" are being printed as "?".
Just for the purpose of testing, I modified my second filter in the chain (the first is Spring Security) to print the content of the request. This is to be sure that neither my server-side application nor any of the frameworks I'm using is interfering with my data.
@Override
public void doFilter( ServletRequest request, ServletResponse response, FilterChain chain ) throws IOException, ServletException {
    try {
        HttpServletRequest hsr = (HttpServletRequest)request;
        if( "POST".equalsIgnoreCase( hsr.getMethod() ) && "http://localhost:8080/profile/createproject".equalsIgnoreCase( hsr.getRequestURL().toString() ) ) {
            hsr.setCharacterEncoding( "UTF-8" );
            BufferedReader reader = new BufferedReader( new InputStreamReader( request.getInputStream(), "UTF-8" ) );
            System.out.println( reader.readLine() );
        }
        chain.doFilter( request, response );
    } finally {
        MDC.remove( CHAVE_ID_REQUEST );
    }
}
Even when reading the request in the second filter of the chain, I'm getting "t?tulo" instead of "título". If the same AngularJS app submits to a Node backend, the payload is read correctly and printed in the terminal.
Does anyone have any clue as to why Tomcat can't read my UTF-8 request correctly?
It seems like you already solved your particular problem, but I wanted to add that a key reference for this kind of issue is http://wiki.apache.org/tomcat/FAQ/CharacterEncoding,
which is referenced in https://stackoverflow.com/a/470320/830737.
In particular, I was able to solve a similar issue by setting URIEncoding="UTF-8" on my <Connector> in server.xml.
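For reference, that attribute goes on the HTTP connector; a sketch of the relevant piece of conf/server.xml (the port and other attributes below are just the stock defaults, not taken from the question):
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           URIEncoding="UTF-8" />
Note that URIEncoding only governs how Tomcat decodes the request URI and query string; the encoding of a POST body is still taken from the request's charset, which is what the setCharacterEncoding("UTF-8") call in the filter above addresses.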
I am a newbie with GWT. I wrote an application on abc.com, and I have another application, xyz.com; xyz.com?id=1 provides data in JSON format. I was trying to find a way to get that JSON into abc.com via an RPC call, because I have seen tutorials in which RPC calls are used to get data from the server. Any help will be appreciated.
EDIT
I am trying to implement this in the StockWatcher tutorial.
I changed my code slightly, to this:
private static final String JSON_URL = "http://localhost/stockPrices.php?q=";
AND
private void refreshWatchList() {
    if (stocks.size() == 0) {
        return;
    }
    String url = JSON_URL;
    // Append watch list stock symbols to query URL.
    Iterator iter = stocks.iterator();
    while (iter.hasNext()) {
        url += iter.next();
        if (iter.hasNext()) {
            url += "+";
        }
    }
    url = URL.encode(url);
    MyJSONUtility.makeJSONRequest(url, new JSONHandler() {
        @Override
        public void handleJSON(JavaScriptObject obj) {
            if (obj == null) {
                displayError("Couldn't retrieve JSON");
                return;
            }
            updateTable(asArrayOfStockData(obj));
        }
    });
}
Before, when I was requesting my URL via RequestBuilder, it was giving me the exception "Couldn't retrieve JSON". Now the JSON is fetched and the status code is 200, as I saw in Firebug, but the table is not updating. Kindly help me with this.
First, you need to understand the Same-Origin Policy, which explains how browsers implement a security model where JavaScript code running on a web page may not interact with any resource not originating from the same web site.
While GWT's HTTP client and RPC calls can only fetch data from the same site where your application was loaded, you can get data from another server if it returns JSON in the right format. You must be interacting with a JSON service that can invoke user-defined callback functions with the JSON data as the argument (i.e. JSONP).
Second, see How to Fetch JSON Data.
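For example, if the xyz.com service can wrap its JSON in a callback (JSONP), a minimal sketch using GWT's JsonpRequestBuilder could look like the following; the callback parameter name and the reuse of the question's displayError/updateTable/asArrayOfStockData helpers are assumptions:
// Uses com.google.gwt.jsonp.client.JsonpRequestBuilder from gwt-user.jar;
// drop this into the same class as refreshWatchList() above.
private void fetchJsonAcrossDomains(String url) {
    JsonpRequestBuilder builder = new JsonpRequestBuilder();
    // The remote service must wrap its JSON in the function named by this
    // parameter, e.g. http://xyz.com/?id=1&callback=__gwt_jsonp__.P0.onSuccess
    builder.setCallbackParam("callback");
    builder.requestObject(url, new AsyncCallback<JavaScriptObject>() {
        public void onFailure(Throwable caught) {
            displayError("Couldn't retrieve JSON");
        }
        public void onSuccess(JavaScriptObject obj) {
            if (obj == null) {
                displayError("Couldn't retrieve JSON");
                return;
            }
            updateTable(asArrayOfStockData(obj));
        }
    });
}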
Well, I am wondering how I can post a multipart request in chunked mode. I have 3 parts, and the files can be big, so they must be sent in chunks.
Here is what I do:
MultipartEntity multipartEntity = new MultipartEntity() {
    @Override
    public boolean isChunked() {
        return true;
    }
};
multipartEntity.addPart("theText", new StringBody("some text", Charset.forName("UTF-8")));
FileBody fileBody1 = new FileBody(file1);
multipartEntity.addPart("theFile1", fileBody1);
FileBody fileBody2 = new FileBody(file2);
multipartEntity.addPart("theFile2", fileBody2);
httppost.setEntity(multipartEntity);
HttpParams params = new BasicHttpParams();
HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
HttpClient httpClient = new DefaultHttpClient(params);
HttpResponse httpResponse = httpClient.execute(httppost);
On the server side, I do receive the 3 parts, but the files, for example, are not chunked; they are received as one piece. Basically, in total I only see 4 boundaries appearing: 3 --xxx, and 1 at the end, --xxx--.
I thought the override of isChunked would do the trick, but no... ;(
Is what I am trying to do feasible? How could I make that work?
Thanks a lot.
Fab
To generate a chunked multipart body, one of the parts must have its size unavailable, like a part that is streaming.
For example, let's assume your file2 is a really big video. You could replace this part of your code:
FileBody fileBody2 = new FileBody(file2);
multipartEntity.addPart("theFile2", fileBody2);
with this code:
final InputStreamBody binVideo = new InputStreamBody(new FileInputStream(file2), "video/mp4", file2.getName());
multipartEntity.addPart("video", binVideo);
Since the third part is now an InputStream instead of a File, your multipart HTTP request will have the header Transfer-Encoding: chunked.
Usually any decent server-side HTTP framework (such as the Java EE Servlet API) hides transport details such as transfer coding from the application code. Just because you are not seeing chunk delimiters when reading from the content stream does not mean the chunk coding was not used by the underlying HTTP transport.
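If you want to confirm on the server which transfer coding was actually used, one way (a sketch against the plain Servlet API, with a made-up servlet name) is to look at the request headers rather than the body; a chunked request carries Transfer-Encoding: chunked and no Content-Length:
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UploadInspectorServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // "chunked" here means the client used chunked transfer coding, even
        // though the container re-assembles the body before you read it.
        String transferEncoding = req.getHeader("Transfer-Encoding");
        // getContentLength() returns -1 when no Content-Length header was sent,
        // which is the normal case for a chunked request.
        int contentLength = req.getContentLength();
        resp.getWriter().printf("Transfer-Encoding=%s, Content-Length=%d%n",
                transferEncoding, contentLength);
    }
}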
You can see exactly what kind of HTTP packets HttpClient generates by activating the wire logging as described here:
http://hc.apache.org/httpcomponents-client-ga/logging.html
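As a quick sketch, the SimpleLog-based setup from that page boils down to setting a few system properties before the first HttpClient call; with the wire log enabled you should see the individual chunks (and the terminating zero-length chunk) as the request actually goes over the socket:
// Route commons-logging to SimpleLog and switch on header/wire logging.
System.setProperty("org.apache.commons.logging.Log",
        "org.apache.commons.logging.impl.SimpleLog");
System.setProperty("org.apache.commons.logging.simplelog.showdatetime", "true");
System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http", "DEBUG");
System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http.wire", "DEBUG");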