I have a Spring MVC 3 app (that uses JSP) running on Google App Engine and saving information in the Datastore. I'm using the Google Maps API v3 to project some of the data on maps by drawing shapes, colouring, etc. My database will potentially hold millions of entries.
I was wondering what the best way is to keep pulling data from the datastore and projecting it on the map until there are no more database entries left to project. I need to do this to avoid hitting the 30-second request limit (and getting a DeadlineExceededException), but also for a good user experience.
Is it worth using GWT?
Any advice would be great.
Thanks!
You could use a cursor similar to the pagination technique described here:
Pagination in Google App Engine with Java
When your page with the map loads, have it make an AJAX request with a blank cursor parameter. The request handler would fetch a small number of entities, then return a response containing them and a cursor (if there are entities remaining).
From the client JavaScript, after displaying the items on the map, if there is a cursor in the response, start a new request with the cursor as an argument. In the request handler, if a cursor is provided, use it when making the query.
This will set up a continuous loop of AJAX requests until all items have been fetched and displayed on the map.
Update:
You could write a service which returns JSON something like this:
{
    items: [
        { lat: 1.23, lon: 3.45, abc: 'def' },
        { lat: 2.34, lon: 4.56, abc: 'ghi' }
    ],
    cursor: '1234abcd'
}
So, it contains an array of items (with lat/lon and whatever other info you need per item), as well as a cursor (which would be null when the last entity has been fetched).
Then, on the client side I would recommend using jQuery's ajax function to make the ajax calls, something like this:
$(document).ready(function ()
{
    // first you may need to initialise the map - then start fetching items
    fetchItems(null);
});

function fetchItems(cursor)
{
    // build the url to request the items - include the cursor as an argument
    // if one is specified
    var url = "/path/getitems";
    if (cursor != null)
        url += "?cursor=" + encodeURIComponent(cursor);

    // start the ajax request
    $.ajax({
        url: url,
        dataType: 'json',
        success: function (response)
        {
            // now handle the response - first loop over the items
            for (var i = 0; i < response.items.length; i++)
            {
                var item = response.items[i];
                // add something to the map using item.lat, item.lon, etc
            }
            // if there is a cursor in the response then there are more items,
            // so start fetching them
            if (response.cursor != null)
                fetchItems(response.cursor);
        }
    });
}
Related
I am currently using a React-hook-powered component to record my screen and subsequently upload the recording to Google Cloud Storage. However, when it finishes, the file created inside Google Cloud appears to be corrupt.
This is the gist of the code within my React component, where useMediaRecorder is from here: https://github.com/wmik/use-media-recorder -
let {
    error,
    status,
    mediaBlob,
    stopRecording,
    getMediaStream,
    startRecording,
    liveStream,
} = useMediaRecorder({
    onCancelScreenShare: () => {
        stopRecording();
    },
    onDataAvailable: (chunk) => {
        // do the uploading here:
        onChunk(chunk);
    },
    recordScreen: true,
    blobOptions: { type: "video/webm;codecs=vp8,opus" },
    mediaStreamConstraints: { audio: audioEnabled, video: true },
});
As data becomes available through this hook, it calls onChunk(chunk), passing a binary Blob through to that method. I tie in with this section of code to perform the upload:
const onChunk = (binaryData) => {
    var formData = new FormData();
    formData.append("data", binaryData);
    let customerApi = new CustomerVideoApi();
    customerApi.uploadRecording(
        videoUUID,
        formData,
        (res) => {},
        (err) => {}
    );
};
customerApi.uploadRecording looks like this (using axios).
const uploadRecording = (uuid, data, fn, fnErr) => {
    axios
        .post(endpoint + "/stream/upload", data, {
            headers: {
                "Content-Type": "multipart/form-data",
            },
        })
        .then(function (response) {
            fn(response);
        })
        .catch(function (error) {
            fnErr(error.response);
        });
};
The HTTP request succeeds, and all is well with the world. The server-side code for the upload is based on Laravel:
// this is inside the controller.
public function index(Request $request)
{
    // Set file attributes.
    $filepath = '/public/chunks/';
    $file = $request->file('data');
    $filename = $uuid . ".webm";

    // streamupload
    File::streamUpload($filepath, $filename, $file, true);

    return response()->json(['uploaded' => true, 'uuid' => $uuid]);
}
// A service provider is used to create a new macro on the File facade,
// providing the facility for appropriately handling the stream:
public function boot()
{
    File::macro('streamUpload', function ($path, $fileName, $file, $overWrite = true) {
        $resource = fopen($file->getRealPath(), 'r+');

        $storageClient = new StorageClient([
            'projectId'   => 'myprjectid',
            'keyFilePath' => '/my/path/to/servicejson.json',
        ]);
        $bucket = $storageClient->bucket('mybucket');
        $adapter = new GoogleStorageAdapter($storageClient, $bucket);
        $filesystem = new Filesystem($adapter);

        return $overWrite
            ? $filesystem->putStream($fileName, $resource)
            : $filesystem->writeStream($fileName, $resource);
    });
}
So to reiterate:
1) The React app chunks out blobs,
2) the server side determines whether it should create or append in Google Cloud Storage,
3) the server side succeeds,
4) the video inside Google Cloud Storage is corrupted.
However, the video file inside the Google Cloud container is corrupted and won't play. I'm unsure exactly why it is corrupted, but my guesses so far:
Some sort of dodgy MIME type problem: different browsers seem to handle the codec/filetype from the MediaRecorder differently, e.g. Chrome seems to produce x-matroska (.mkv?), Firefox something different again. Ideally I would have a .webm container. Notice how I set the file name server side; it isn't coming from the client. Should it? I'm also unsure how to force the MediaRecorder to use a specific mimeType. I thought the blobOptions option should do it, but changing the extension and MIME type seems to have little to no impact on the corruption occurring (see the sketch below).
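For reference, the raw MediaRecorder API can at least tell you which container/codec the current browser is willing to record to. Whether the hook passes recorder options through is something I'd have to check in its docs, so treat this as a sketch:

// Sketch: check which recording mimeTypes this browser actually supports.
// The candidate list and variable names here are illustrative only.
const candidates = [
    "video/webm;codecs=vp9,opus",
    "video/webm;codecs=vp8,opus",
    "video/webm",
    "video/x-matroska;codecs=avc1,opus", // what Chrome can fall back to
];
const supported = candidates.filter((t) => MediaRecorder.isTypeSupported(t));
console.log("supported recording types:", supported);

// With the raw API the type is forced in the constructor options, e.g.:
// const recorder = new MediaRecorder(stream, { mimeType: supported[0] });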
Some sort of problem during upload where an HTTP request doesn't execute and finish in order - e.g.
1 onDataAvailable completes second
2 onDataAvailable completes first
3 onDataAvailable completes third
I've sort of ruled this out because I think the chunks should be small enough.
Some sort of problem with Google Cloud Storage APIs that I'm using, perhaps in the wrong way? Does the cloud platform support streaming, and does this library send the correct params to do so?
Some sort of problem with how I'm uploading - should the axios headers be multipart formdata, or something else?
This is the package I'm using for the Server side: https://github.com/Superbalist/flysystem-google-cloud-storage
Can anyone shed any light on how to achieve this goal of streaming up into Google Cloud without the video from the MediaRecorder being corrupted? Hopefully there's enough detail here in the question to help figure it out. The problem, as illustrated, isn't getting the file as far as Google Cloud, but rather that the resulting file is unplayable in any video player.
Update
I've ordered my chunks client side now, and queued them properly before letting them reach the server. No difference to the output. As some have suggested - a single blob upload request works fine.
Tried using the resumable config param (from reading the source code it seems like chunks need to be a certain size before Google recognises them as a resumable upload):
$filesystem = new Filesystem($adapter, [
    'resumable' => true
]);
I'm not sure how https://cloud.google.com/storage/docs/performing-resumable-uploads is implemented within the libraries I'm using (or within the Google Cloud APIs themselves, if at all). Do I need to implement that myself? Documentation is light on Google's part.
Short version:
The first thing you should do is buffer the whole video locally and send a single payload to the server and on to Google Cloud Storage. This will validate that your code for a small video is actually correct. Once you can verify this, you can move on to handling multi-chunk uploads.
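As a rough sketch of what I mean (uploadWhole and the endpoint are placeholders; this assumes the whole recording fits in memory):

// Sketch: collect every chunk in memory and upload once when recording stops.
const chunks = [];

const onDataAvailable = (chunk) => {
    chunks.push(chunk); // no network call per chunk
};

const uploadWhole = () => {
    const whole = new Blob(chunks, { type: "video/webm" });
    const formData = new FormData();
    formData.append("data", whole, "recording.webm");
    formData.append("uuid", videoUUID); // don't forget the uuid
    return axios.post(endpoint + "/stream/upload", formData);
};

If that single file plays back fine, the corruption is coming from how the chunks are being split and reassembled, not from the recorder, the bucket, or the transport.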
Longer version:
For starters, you aren't passing the uuid with the request, even though it's being used server side:
const uploadRecording = (uuid, data, fn, fnErr) => {
    axios
        .post(endpoint + "/stream/upload", data, {
            headers: {
                "Content-Type": "multipart/form-data",
            },
        })
        .then(function (response) {
            fn(response);
        })
        .catch(function (error) {
            fnErr(error.response);
        });
};
Next, you can't trust how chunking will work; I think you verified this behaviour with the out-of-order chunk logging. You need to assume that on your server you will get chunks out of order and handle them correctly.
Each chunk you get on the server needs to be put in the right place; you can't just "writeStream", you need to write to the explicit binary block. Specifically, on every request specify the byte range. From the Google docs:
curl -i -X PUT --data-binary @CHUNK_LOCATION \
    -H "Content-Length: CHUNK_SIZE" \
    -H "Content-Range: bytes CHUNK_FIRST_BYTE-CHUNK_LAST_BYTE/TOTAL_OBJECT_SIZE" \
    "SESSION_URI"
CHUNK_LOCATION is the local path to the chunk that you're currently uploading.
CHUNK_SIZE is the number of bytes you're uploading in the current request. For example, 524288.
CHUNK_FIRST_BYTE is the starting byte in the overall object that the chunk you're uploading contains.
CHUNK_LAST_BYTE is the ending byte in the overall object that the chunk you're uploading contains.
TOTAL_OBJECT_SIZE is the total size of the object you are uploading.
SESSION_URI is the value returned in the Location header when you initiated the resumable upload.
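Translated to browser-side JavaScript, one iteration of that loop would look roughly like this. This is only a sketch: it assumes the resumable session has already been initiated (normally server side, with proper credentials) and that every chunk except the last is a multiple of 256 KiB:

// Sketch: PUT a single chunk into an already-initiated resumable session.
// sessionUri is the Location header returned when the session was created.
async function putChunk(sessionUri, chunk, firstByte, totalSize) {
    const lastByte = firstByte + chunk.size - 1;
    const total = totalSize != null ? totalSize : "*"; // "*" while the total is still unknown
    const res = await fetch(sessionUri, {
        method: "PUT",
        headers: {
            "Content-Range": "bytes " + firstByte + "-" + lastByte + "/" + total,
        },
        body: chunk, // a Blob slice; Content-Length is set automatically
    });
    // 308 means "chunk accepted, keep going"; 200/201 means the object is complete.
    return res.status;
}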
Try to eliminate as many variables as possible and pinpoint where exactly the file is getting corrupted.
Since you are using a React (JS) -> Laravel (PHP) -> Google Cloud path, the first thing I would suggest is to test each step separately:
React -> Laravel: save the file on your server and check if it's corrupted at this point.
Laravel -> Google Cloud: load a file from the server filesystem, upload it to the cloud, and see if it gets corrupted.
I don't have experience with Google cloud, but I did something very similar with AWS and found that their video uploading service was extremely picky about the requests (including order of headers that were sent).
Try to compare the specs on the service you are using with your input, make the smallest possible thing that works and start adding variables until you get to the final state.
Also I don't see any kind of data ordering in your code.
If your chunks are sent close to each other, which with streaming is highly likely, then there is a chance that they will arrive in a different order than they were originally sent. If you just append them to a file without any control over the ordering, the file will indeed get corrupted. I'm not sure whether, for webm, that would cause just parts of the video to be broken or the entire thing to die.
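If the chunked approach is kept, one way to make the ordering explicit is to number each chunk on the client and have the server assemble by index once the final chunk has arrived. A sketch, where the extra form fields are made-up names the Laravel side would need to read:

// Sketch: tag every chunk with its position so the server can reassemble
// them in the right order regardless of arrival order.
let chunkIndex = 0;

const onChunk = (binaryData, isLast = false) => {
    const formData = new FormData();
    formData.append("data", binaryData);
    formData.append("uuid", videoUUID);
    formData.append("index", String(chunkIndex++));
    formData.append("last", isLast ? "1" : "0");
    return axios.post(endpoint + "/stream/upload", formData);
};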
Okay. I'm kinda new to React and I'm having a major issue. I can't really find any solution out there.
I've built an app that renders a list of objects. The list comes from my mock API for now. The list of objects is stored inside a store. The store action to fetch the objects is done by the components.
My issue is when showing these objects. When a user clicks show, it renders a page with details on the object. Store-wise this means firing a getSpecific function that retrieves the object, from the store, based on an ID.
This is all fine, the store still has the objects. Until I reload the page. That is when the store gets wiped, a new instance is created (this is my guess). The store is now empty, and getting that specific object is now impossible (in my current implementation).
So, I read somewhere that this is by design. Are the solutions to:
Save the store in local storage, to keep the data?
Make the API call again and get all the objects once again?
And in case 2, when/where is this supposed to happen?
How should a store make sure it always has the expected data?
Any hints?
Some of the implementation:
//List.js
componentDidMount() {
    //The fetchOffers function will trigger a change event
    //which will trigger the listener in componentWillMount
    OfferActions.fetchOffers();
}

componentWillMount() {
    //Listen for changes in the store
    offerStore.addChangeListener(this.retrieveOffers);
}

retrieveOffers() {
    this.setState({
        offers: offerStore.getAll()
    });
}
.
//OfferActions.js
fetchOffers() {
    let url = 'http://localhost:3001/offers';
    axios.get(url).then(function (data) {
        dispatch({
            actionType: OfferConstants.RECIVE_OFFERS,
            payload: data.data
        });
    });
}
.
//OfferStore.js
var _offers = [];

receiveOffers(payload) {
    _offers = payload || [];
    this.emitChange();
}

handleActions(action) {
    switch (action.actionType) {
        case OfferConstants.RECIVE_OFFERS:
            this.receiveOffers(action.payload);
            break;
    }
}

getAll() {
    return _offers;
}

getOffer(requested_id) {
    var result = this.getAll().filter(function (offer) {
        return offer.id == requested_id;
    });
    //filter returns an array, so hand back the first match
    return result[0];
}
.
//Show.js
componentWillMount() {
    this.state = {
        offer: offerStore.getOffer(this.props.params.id)
    };
}
That is correct: Redux (or Flux) stores, like any other JavaScript objects, do not survive a refresh. During a refresh you are resetting the memory of the browser window.
Both of your approaches would work, however I would suggest the following:
Save to local storage only information that is semi-persistent, such as the authentication token, the user's first name/last name, UI settings, etc.
During app start (or component load), load any auxiliary information such as sales figures, message feeds, and offers. This information generally changes quickly and it makes little sense to cache it in local storage.
For 1. you can utilize the redux-persist middleware. It lets you save to and retrieve from your browser's local storage during app start. (This is just one of many ways to accomplish this; a minimal setup sketch follows after point 2.)
For 2. your approach makes sense. Load the required data on componentWillMount asynchronously.
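If you go the redux-persist route, the wiring is roughly the following. This is a sketch against redux-persist's standard API; the reducer slice names are placeholders:

// Sketch: persist only the slices that should survive a refresh.
import { createStore } from "redux";
import { persistStore, persistReducer } from "redux-persist";
import storage from "redux-persist/lib/storage"; // defaults to localStorage
import rootReducer from "./reducers";

const persistConfig = {
    key: "root",
    storage: storage,
    whitelist: ["auth", "uiSettings"], // keep volatile data like offers out
};

const store = createStore(persistReducer(persistConfig, rootReducer));
const persistor = persistStore(store);
// With React, wrap the app in <PersistGate persistor={persistor}> so rendering
// waits until the persisted state has been rehydrated.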
Furthermore, regarding being "up-to-date" with data: this entirely depends on your application needs. A few ideas to help you get started exploring your problem domain:
With each request to get offers, also send or save a timestamp. Have the application decide when a timestamp is "too old" and request again (see the sketch after this list).
Implement real time communication, for example socket.io which pushes the data to the client instead of the client requesting it.
Request the data at an interval suitable to your application. You could pass along the last time you requested the information and the server could decide if there is new data available or return an empty response in which case you display the existing data.
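For the first idea, the staleness check can be as small as this (the five-minute threshold is an arbitrary placeholder):

// Sketch: only refetch offers when the cached copy is older than MAX_AGE_MS.
const MAX_AGE_MS = 5 * 60 * 1000;
let lastFetched = 0;

function ensureFreshOffers() {
    if (Date.now() - lastFetched > MAX_AGE_MS) {
        OfferActions.fetchOffers(); // the existing action from the question
        lastFetched = Date.now();
    }
}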
I am using AngularJS to show a loading screen. It works for all the REST service calls except the REST service that downloads a file. I understand why it is not working: for the download I am not making the call using $resource; instead I am using a plain approach to download the file, so the AngularJS code has no hook on the start/finish of the request. I tried to use $resource to hit this REST service, and I do get the data back (and in that case the loading screen works fine), but I am not sure how to use this data to offer the download to the user in an Angular way. The relevant details follow. Please help.
Approach 1 using iframe approach:
/*Download file */
scope.downloadFile = function (fileId) {
    //Show loading screen. (Somehow it is not working)
    scope.loadingProjectFiles = true;
    var fileDownloadURL = "/api/files/" + fileId + "/download";
    downloadURL(fileDownloadURL);
    //Hide loading screen
    scope.loadingProjectFiles = false;
};

var $idown; // Keep it outside of the function, so it's initialized once.
var downloadURL = function (url) {
    if ($idown) {
        $idown.attr('src', url);
    } else {
        $idown = $('<iframe>', { id: 'idown', src: url }).hide().appendTo('body');
    }
};
Approach 2 using $resource (Not sure how to display data on screen to download)
/*Download file */
scope.downloadFile = function (fileId) {
    //Show loading screen (Here loading screen works).
    scope.loadingProjectFiles = true;
    //File download object
    var fileDownloadObj = new DownloadFile();
    //Make server call to create new File
    fileDownloadObj.$get({ fileid: fileId }, function (response) {
        //Q? How to use the response data to display on UI as download popup
        //Hide loading screen
        scope.loadingProjectFiles = false;
    });
};
This is the correct pattern with the $resource service:
scope.downloadFile = function (fileId) {
    //Show loading screen (Here loading screen works).
    scope.loadingProjectFiles = true;
    var FileResource = $resource('/api/files/:idParam', { idParam: '@id' });
    //Make server call to retrieve a file
    var yourFile = FileResource.get({ id: fileId }, function () {
        //Now (inside this callback) the response data is loaded inside the yourFile variable
        //I know it's an ugly pattern but that's what $resource is about...
        DoSomethingWithYourFile(yourFile);
        //Hide loading screen
        scope.loadingProjectFiles = false;
    });
};
I agree with you that this is a weird pattern, different from other APIs where the downloaded data is assigned to a parameter in a callback function, hence your confusion.
Pay attention to the names and the cases of the parameters, and note that there are two mappings involved here: one between the caller and the $resource object, and another between that object and the URL it constructs for downloading the actual data.
Here are some ideas for the second approach; you could present the user with a link after the download has happened:
With a "data url". Probably not a good idea for large files.
With a URL like "filesystem:mydownload.zip". You'd first have to save the file with the filesystem API; you can find some inspiration on html5rocks.
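A third option, assuming reasonably modern browsers: fetch the file as a Blob with $http and hand it to the user through an object URL. A sketch (the file name is a placeholder and $http must be injected into your controller/directive):

//Download the file as a Blob, then trigger a save via a temporary link.
$http.get('/api/files/' + fileId + '/download', { responseType: 'blob' })
    .then(function (response) {
        var url = URL.createObjectURL(response.data);
        var a = document.createElement('a');
        a.href = url;
        a.download = 'myfile.ext'; // placeholder file name
        document.body.appendChild(a);
        a.click();
        document.body.removeChild(a);
        URL.revokeObjectURL(url);
        //The loading flag can be cleared here, once the data has arrived
        scope.loadingProjectFiles = false;
    });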
What's the best way to pre-fetch the next page in backbone.js?
Is there a built-in mechanism to do that, or do I have to take care of it myself by making Ajax calls and storing the results?
Also, is there a way to preload an entire page, like in jQuery Mobile (http://jquerymobile.com/demos/1.2.0/docs/pages/page-cache.html)?
There is no built-in support for such a thing. It's dependent on your use case, but you could do a number of things.
1) Use setTimeout() to wait a short time before fetching the data you might be needing shortly. (Probably not a good solution on its own; see the sketch after the fetch example below.)
2) Set up an event handler to fetch the data on a specific event, or something similar:
$('#my-button').on('mouseenter', function () {
    //fetch data
});
To fetch the data you can use the fetch() function on a backbone model or collection, which will return a jqXHR (or you can use a straight $.ajax() call). You can then wait and see if it failed or passed:
var fetch = myModel.fetch();

fetch.done(function (data) {
    // Do something with data, maybe store it in a variable that you can later use
})
.fail(function (jqXHR) {
    // Handle the failed ajax call
    // Use the jqXHR to get the response text and/or response status to do something useful
});
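Combining 1) and 2): a simple prefetch is just an early fetch() whose promise you keep around and reuse when the user actually navigates. A sketch, with the collection, page parameter and delay as placeholders:

// Sketch: start fetching the next page shortly after the current one renders,
// then reuse the in-flight (or finished) request when it's actually needed.
var nextPagePromise = null;

function prefetchNextPage(collection, page) {
    setTimeout(function () {
        nextPagePromise = collection.fetch({ data: { page: page }, remove: false });
    }, 500); // small delay so it doesn't compete with the current page
}

function whenNextPageReady(collection, page, callback) {
    var promise = nextPagePromise || collection.fetch({ data: { page: page } });
    promise.done(callback).fail(function (jqXHR) {
        // handle the failed ajax call (retry, message, etc.)
    });
}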
No built-in support, but it's actually easy to add. Please refer to the concept of a View Manager; it is able to handle both "view-keeping" tasks and transitions.
In short, the concept is: a view manager is a component which is responsible for switching from one application view to another. It will dispose of the existing view, so it prevents zombies and memory leaks. It can also handle the transitions between view switches.
Here is how I handle the loading of pages into an "endless scrolling" list.
Make your backend paginate-aware
First of all, you require a DB backend which is capable of handling page load requests.
As an example, refer to my modelloader project on GitHub, which provides a small CoffeeScript-based framework integrated into a Node.js/Mongoose environment.
The Model Loader project on GitHub contains additional information and samples.
Here are the most important points:
Your backend should support the following pagination features:
Each request will return only a partial response, limited to, for example, 20 records (the size of a page).
By default, the last JSON record entry returned by a request will contain additional technical and meta information about the request, which consumers need in order to implement paging.
{
    _maxRec: "3",
    _limit: "20",
    _offset: "0"
}
_maxRec lists the total number of records in the collection.
_limit lists the maximum number of records given back per request.
_offset tells you which set of records was passed back, i.e. an _offset of 200 means the result list skipped the first 200 records and presents records 201-220.
The backend should support the following pagination control parameters:
http(s)://<yourDomainName>/<versionId>/<collection>?offset=<number>
Use offset to skip a number of records; for example, with a limit of 20 you would send a first request with offset=0, then offset=20, then offset=40, etc., until you reach _maxRec.
In order to reduce the DB activity you should provide a possibility to skip the _maxRec calculation for subsequent requests:
http(s)://<yourDomainName>/<versionId>/<collection>?maxRec=<number>
By passing in a maxRec parameter (normally the one obtained from an earlier paging request), the request handler will bypass the database count statement, which results in one less DB activity (a performance optimization). The passed-in number will be passed back via the _maxRec entry. Normally a consumer will fetch the _maxRec number in the first request and pass it back for subsequent requests, resulting in faster data access.
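To make that contract concrete, here is a plain JavaScript sketch of a client walking through all pages with these parameters (URL handling and the callback are illustrative only):

// Sketch: page through the collection using offset, _limit and _maxRec.
function loadAllPages(baseURL, onPage) {
    var limit = 20;

    function loadFrom(offset, maxRec) {
        var url = baseURL + "?offset=" + offset + (maxRec ? "&maxRec=" + maxRec : "");
        $.getJSON(url, function (records) {
            var meta = records.pop(); // the last entry carries _maxRec/_limit/_offset
            onPage(records);
            var total = parseInt(meta._maxRec, 10);
            if (offset + limit < total) {
                loadFrom(offset + limit, total); // skip the count query next time
            }
        });
    }

    loadFrom(0);
}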
Fire off Backbone model requests when necessary
Now you have to implement, on the Backbone side, the firing of page-loading requests when necessary.
In the example below we assume we have a Backbone.View whose list is loaded into a jquery.tinyscrollbar-based HTML element. The list initially contains the first 20 records, loaded via the URL:
http(s)://<yourDomainName>/<versionId>/<collection>?offset=0
In this case the View would listen to the following scrolling events:
events:
    'mousewheel': 'checkScroll'
    'mousemove': 'checkScroll'
The goal is: as soon as the user has scrolled down to the bottom of the scrollable list (e.g. he reaches a point which is 30px above the end of the scrollable list), a request is fired to load the next 20 entries. The following code sample describes the necessary steps:
checkScroll: () =>
    # We calculate the actual scroll point within the list
    ttop = $(".thumb").css("top")
    ttop = ttop.substring(0, ttop.length - 2)
    theight = $(".thumb").css("height")
    theight = theight.substring(0, theight.length - 2)
    triggerPoint = 30   # 30px from the bottom
    fh = parseFloat(ttop) + parseFloat(theight) + triggerPoint
    # The if will turn true if the end of the scrollable list
    # is below 30 pixel, as well as no other loading request is
    # actually ongoing
    if fh > @scrollbarheight && !@isLoading && @endLessScrolling
        # Make sure that during the fetch phase no other request intercepts
        @isLoading = true
        log.debug "BaseListView.checkscroll " + ttop + "/" + theight + "/" + fh
        # So let's increase the offset by the limit
        # BTW these two variables (limit, offset) will be retrieved
        # and updated by the render method when it's getting back
        # the response of the request (not shown here)
        skipRec = @offset + @limit
        # Now set the model URL and trigger the fetch
        @model.url = @baseURL + "?offset=" + skipRec + "&maxRec=" + @maxRec
        @model.fetch
            update: true
            remove: false
            merge: false
            add: true
            success: (collection, response, options) =>
                # Set isLoading to false, as well as
                # the URL to the original one
                @isLoading = false
                @model.url = @baseURL
            error: (collection, xhr, options) =>
                @isLoading = false
                @model.url = @baseURL
The render method of the view will get the response back and update the scrollable list, which will grow in size and allow the user to keep scrolling down through the newly loaded entries.
This loads all the data nicely, in a paged manner.
Is there a simple way to export a grid's data to XLS in ExtJS?
If not, I am trying the following approach.
I am trying to read the data store inside a controller. The data store is already being used by the grid. I want to read the data on a button click and send it to the server through AJAX. Later, on the server, I would retrieve the data and write it to XLS. In this case, what is the way to read the data inside the controller?
Ext.define("MyApp.controller.GridController", {
extend : 'Ext.app.Controller',
views: ['performance.grid.PerformanceGrid'],
models: ['GridModel'],
stores: ['GridStore'],
refs : [{
ref : 'mainTabPanel',
selector : 'portal > tabpanel'
}],
init : function() {
this.control({
'portal toolbar > button[itemId=xls]' : {
click : this.onAddTab
},
'portal toolbar > button[itemId=pdf]' : {
click : this.onAddPortlet
}
});
},
onAddTab : function(btn, e) {
// I want to read the datastore here and make an AJAX call
},
});
onAddTab: function (btn, e) {
    var store = Ext.getStore('GridStore'); // or however you reference your store
    var records = store.data.items.map(function (r) { return r.data; });
    // send it all to your server as you want to, e.g.:
    // Ext.Ajax.request({
    //     url: 'the url',
    //     jsonData: records,
    //     method: 'POST'
    // });
}
I didnĀ“t test it but it have to work.
Good luck!
I think that process is not the best, because you will have 3 payloads (data round trips that don't make any sense):
You call your server method to get the data that will be populated into the grid.
The JSON object (containing the server data) will then travel again to the server
(THIS DOESN'T MAKE SENSE TO ME... WHY WOULD YOU WANT TO SEND DATA TO THE SERVER WHEN THE SERVER WAS THE SOURCE??)
The server will process your object from the JSON response, create the document on the fly, and send it back to the client.
What I think you should do is the following:
Get data from server and bind your grid.
Get your store's proxy URL and parse the method and extraParams, so you know which method served the grid and what you asked the server for.
Create a common method on the server that receives a method name and an array of parameters. Then, inside this method, add logic so that, depending on the method, you call your data repository (the same repository your first request got the data from), generate the document, and send the file back to the client.
This way you should have something like this:
webmethod(string method, object[] params) {
    object data = null;
    switch (method) {
        case "GetTestGridData":
            // here you call your Repository in order to get the same data
            GeneralRepo repo = new GeneralRepo();
            data = repo.GetTestGridData(params[0], params[1]);
            break;
    }
    byte[] fileStream = Reports.Common.Generate(data, ExportType.PDF);
    // return the stream to the client...
}