I want to know the relation between Velocity and IIS. If a request is satisfied by Velocity, will it still use a worker process? Or what happens? I'm confused.
I also want to store data like country, state, and city for auto-suggest in Velocity. This database could be around 3 GB. Now how will Velocity work, and how will IIS work? Is this going to affect IIS? Basically, my requirement is that I want to keep all country, state, and city data in Velocity, so that I don't hit the database and don't make IIS busy. What is the solution?
Please help
Velocity was the codename for Microsoft's AppFabric distributed caching technology. Very similar to memcached, it is used for caching objects across multiple computers.
This has no real bearing on how IIS processes requests. All requests are satisfied by IIS; AppFabric is a mechanism for storing data, not for processing requests.
In answer to your second question: you can use AppFabric as a first-call check for data. If the data does not exist in the cache, call the database to populate the cache, and then return the data.
// Requires the AppFabric caching client assemblies (Microsoft.ApplicationServer.Caching).
var factory = new DataCacheFactory();
var cache = factory.GetCache("AutoSuggest");

// Check the cache first; fall back to the database on a miss.
List<Region> regions = cache.Get("Regions") as List<Region>;
if (regions == null) {
    regions = // Get regions from database.
    cache.Add("Regions", regions);
}
return regions;
Checking the cache first enables the app to get a faster response, as the database is only hit on the first instance (ideally), and the result data is pushed back into the cache.
You could wrap this up a bit more:
// dataFactory here is a DataCacheFactory instance held by the containing class.
public T Get<T>(string cacheName, string keyName, Func<T> itemFactory)
    where T : class
{
    var cache = dataFactory.GetCache(cacheName);
    T value = cache.Get(keyName) as T;
    if (value == null) {
        value = itemFactory();
        cache.Add(keyName, value);
    }
    return value;
}
That way you can change your lookup calls to something similar to:
var regions = Get<List<Region>>("AutoSuggest", "Regions", () => GetRegions());
I am building a web application for showing vehicle positions in a map view, with AngularJS as the front end and Node.js as the server for real-time updating of the map. The events come from a middleware application to the Node server; the server then has to apply some logic to broadcast the data to the relevant connected clients.
Coming to the question: the issue I am currently facing is that around 20K vehicle records arrive at the Node server at a single time, and the server then has to decide which connected clients each record should be sent to. This is achieved by looping over each record against each connected client's map bounds. If the incoming record's position is within a connected client's bounds, that record is emitted to that client. So this entire process takes a long time when there are 1K clients and 20K records.
Are there any ways to reduce this server overload using Node techniques?
What I have tried: I read through Node clusters, but I think they deal with distributing connections across multiple workers. Is this a way to resolve my issue?
The sample code snippet is as follows:
Node server side logic
users  // array of connected users' socket ids, e.g. [userSocketId1, userSocketId2]
bounds // each user's bounds, keyed by socket id, e.g. {userSocketId1: boundsValue1, userSocketId2: boundsValue2}
app.post('/addObject', (req, res) => {
  for (var k = 0; k < Object.keys(req.body).length; k++) {
    var point = [{
      'lat': req.body[k].lat,
      'lng': req.body[k].lng,
      'message': req.body[k].message,
      'id': req.body[k].id
    }];
    for (var i = 0; i < users.length; i++) {
      var userBounds = bounds[users[i]];
      if (typeof userBounds !== 'undefined') {
        var inbounds = inBounds(point[0], userBounds); // check whether the current vehicle's position is within the user's bounds
        var user = users[i];
        if (inbounds) {
          io.to(user).emit('updateMap', point); // send that vehicle's data to this one client
        }
      }
    }
  }
  res.send('Event received in Node Server');
});
Client-side logic for plotting vehicle info to map
socket.on('updateMap', function (msg) {
  var point = msg[0]; // the server emits an array containing a single point
  L.marker([point.lat, point.lng]).addTo(map);
});
The first thing you can try is to make the code asynchronous, for example by using Promises.
Without any library, this should work better:
app.post('/addObject', (req, res) => {
  Promise.all(Object.keys(req.body).map((k) => {
    let point = [{
      'lat': req.body[k].lat,
      'lng': req.body[k].lng,
      'message': req.body[k].message,
      'id': req.body[k].id
    }];
    // Return the inner Promise.all so the outer one actually waits for it.
    return Promise.all(users.map((user) => {
      let userBounds = bounds[user];
      if (typeof userBounds !== 'undefined') {
        let inbounds = inBounds(point[0], userBounds); // check whether the current vehicle's position is within the user's bounds
        if (inbounds) {
          io.to(user).emit('updateMap', point); // send that vehicle's data to this one client
        }
      }
    }));
  })).then(() => {
    res.send('Event received in Node Server');
  }).catch((error) => {
    res.send(error);
  });
});
Other advantages include not having to deal with loop indexes and easier error handling.
It may not be enough, though: the mapped callbacks above still run synchronously, so the handler only responds once the whole batch is done. To truly avoid blocking the event loop, you also need to yield between batches, as in the sketch below.
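A minimal sketch of that batching idea, yielding to the event loop with setImmediate; processInChunks, chunkSize, and handle are made-up names, and handle stands in for the per-point bounds check and emit:

// Process incoming points in fixed-size chunks, yielding to the event loop
// between chunks so other requests and pending I/O can run.
function processInChunks(points, chunkSize, handle) {
  return new Promise(function (resolve) {
    var index = 0;
    function next() {
      var end = Math.min(index + chunkSize, points.length);
      for (; index < end; index++) {
        handle(points[index]); // e.g. the bounds check + io.to(...).emit(...)
      }
      if (index < points.length) {
        setImmediate(next); // let pending work run before the next chunk
      } else {
        resolve();
      }
    }
    next();
  });
}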
For the existing architecture, you need to do the following:
Use cluster.
Implement your logic with Promises.
Or you need to update your architecture: store each user's position (map bounds) together with their socket id and user id, and then query for all the socket ids that fall within your criteria.
Here the key performance player is Mongo.
How?
If you are using multiple separate objects like
Customer => {...} (l documents)
Clients => {...} (m documents)
data => {client_id: 'something', ...} (n documents)
then you need to loop over each data item and check it against each client and customer, which is on the order of l * m * n.
Here is the trick that saves a lot of work: embed the documents.
Customer: {
  _id: ObjectId('someid'),
  client: {
    _id: ObjectId('someid'),
    data: {
      ...
    }
  }
}
This decreases the looping factor and gives a result on the order of l * (number of clients inside) * (number of data items inside).
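Going a step further with Mongo, a hedged sketch (an assumption layered on the answer above, not part of it): store each client's viewport as a GeoJSON polygon in a hypothetical viewports collection with a 2dsphere index, and let a geospatial query find the matching sockets instead of looping in JavaScript. socketsForPoint and the collection layout are made-up names.

// "db" is assumed to be a connected mongodb driver Db instance; the
// "viewports" collection and its fields exist only for this sketch.
async function socketsForPoint(db, lng, lat) {
  const docs = await db.collection('viewports').find({
    bounds: {
      $geoIntersects: { $geometry: { type: 'Point', coordinates: [lng, lat] } }
    }
  }).project({ socketId: 1 }).toArray();
  return docs.map((doc) => doc.socketId);
}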
Okay, I'm kinda new to React and I'm having a major issue. I can't really find any solution out there.
I've built an app that renders a list of objects. The list comes from my mock API for now. The list of objects is stored inside a store. The store action to fetch the objects is done by the components.
My issue is when showing these objects. When a user clicks show, it renders a page with details on the object. Store-wise this means firing a getSpecific function that retrieves the object, from the store, based on an ID.
This is all fine; the store still has the objects. Until I reload the page, that is: then the store gets wiped and a new instance is created (this is my guess). The store is now empty, and getting that specific object is now impossible (in my current implementation).
So, I read somewhere that this is by design. Is the solution to:
Save the store in local storage, to keep the data?
Make the API call again and get all the objects once again?
And in case 2, when/where is this supposed to happen?
How should a store make sure it always has the expected data?
Any hints?
Some of the implementation:
//List.js
componentDidMount() {
  // The fetchOffers action will trigger a change event,
  // which will trigger the listener added in componentWillMount.
  OfferActions.fetchOffers();
}

componentWillMount() {
  // Listen for changes in the store.
  offerStore.addChangeListener(this.retrieveOffers);
}

retrieveOffers() {
  this.setState({
    offers: offerStore.getAll()
  });
}
.
//OfferActions.js
fetchOffers() {
  let url = 'http://localhost:3001/offers';
  axios.get(url).then(function (data) {
    dispatch({
      actionType: OfferConstants.RECIVE_OFFERS,
      payload: data.data
    });
  });
}
.
//OfferStore.js
var _offers = [];

receiveOffers(payload) {
  _offers = payload || [];
  this.emitChange();
}

handleActions(action) {
  switch (action.actionType) {
    case OfferConstants.RECIVE_OFFERS:
      this.receiveOffers(action.payload);
      break;
  }
}

getAll() {
  return _offers;
}

getOffer(requested_id) {
  var result = this.getAll().filter(function (offer) {
    return offer.id == requested_id;
  });
  return result.length > 0 ? result[0] : null; // return the first matching offer
}
.
//Show.js
componentWillMount() {
  this.state = {
    offer: offerStore.getOffer(this.props.params.id)
  };
}
That is correct: Redux stores, like any other JavaScript objects, do not survive a refresh. A refresh resets the memory of the browser window.
Both of your approaches would work, however I would suggest the following:
Save to local storage only information that is semi persistent such as authentication token, user first name/last name, ui settings, etc.
During app start (or component load), load any auxiliary information such as sales figures, message feeds, and offers. This information generally changes quickly and it makes little sense to cache it in local storage.
For 1. you can utilize the redux-persist middleware. It lets you save to and retrieve from your browser's local storage during app start. (This is just one of many ways to accomplish this; see the sketch after the next point.)
For 2. your approach makes sense. Load the required data on componentWillMount asynchronously.
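A minimal sketch of that redux-persist setup (v5+ API); rootReducer and the whitelisted slice names are assumptions:

import { createStore } from 'redux';
import { persistStore, persistReducer } from 'redux-persist';
import storage from 'redux-persist/lib/storage'; // defaults to localStorage on the web

const persistConfig = {
  key: 'root',
  storage,
  whitelist: ['auth', 'uiSettings'] // persist only the semi-permanent slices
};

const store = createStore(persistReducer(persistConfig, rootReducer));
const persistor = persistStore(store); // rehydrates the persisted slices on app start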
Furthermore, regarding being "up-to-date" with data: this entirely depends on your application needs. A few ideas to help you get started exploring your problem domain:
With each request to get offers, also send or save a time stamp. Have the application decide when a time stamp is "too old" and request again (see the sketch after this list).
Implement real time communication, for example socket.io which pushes the data to the client instead of the client requesting it.
Request the data at an interval suitable to your application. You could pass along the last time you requested the information, and the server could decide if there is new data available or return an empty response, in which case you display the existing data.
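A hedged sketch of the first idea, reusing the fetchOffers action and offerStore from the question (getFreshOffers and MAX_AGE_MS are made-up names):

// Re-fetch offers when the cached copy is older than MAX_AGE_MS.
const MAX_AGE_MS = 5 * 60 * 1000; // five minutes; tune to your application

let lastFetchedAt = 0;

function getFreshOffers() {
  if (Date.now() - lastFetchedAt > MAX_AGE_MS) {
    OfferActions.fetchOffers(); // dispatches RECIVE_OFFERS when the call returns
    lastFetchedAt = Date.now();
  }
  return offerStore.getAll(); // may be stale until the fetch lands
}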
I'm currently working on a project that requires me to make an API call. It only allows me to make 500 requests per 10 minutes, but the data returned (an object with ~800 properties) only changes every few months, so I'd rather just cache it somewhere.
I'm very new to this whole thing, and I'm wondering how I can make the call every few months and store the data somewhere so that I can retrieve it from the client whenever needed?
Thanks in advance!
Since you want to store your object for a longer period of time, I would suggest writing it to disk rather than caching it in memory (in case your Node app crashes).
You didn't mention it precisely, but I assume you are referring to a simple JavaScript object that you want to store? To store such an object to disk, you can do the following:
var fs = require("fs");

// with your object being stored in the variable "myObject", after your API call:
var myObject = ....

fs.writeFile("myFilename.json", JSON.stringify(myObject), "utf8", function (err) {
  if (err) {
    return console.log(err);
  }
  // do whatever you want to do after the file has been saved...
});
To read the object back from disk, simply do (note that require caches the result for the lifetime of the process):
myObject = require("./myFilename.json");
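To handle the "every few months" part, one hedged approach is to compare the file's modification time at startup (or on each request) and re-fetch when it is too old; isStale and MAX_AGE_MS are made-up names:

var fs = require("fs");

var MAX_AGE_MS = 60 * 24 * 60 * 60 * 1000; // ~60 days; tune as needed

function isStale(path) {
  try {
    return Date.now() - fs.statSync(path).mtimeMs > MAX_AGE_MS;
  } catch (e) {
    return true; // a missing file counts as stale
  }
}

if (isStale("myFilename.json")) {
  // ...call the API and write the result with fs.writeFile as above...
}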
I'm building an Angular Shop-Frontend which consumes a REST-API with Restangular.
To get the articles from the API, I use Restangular.all("articles") and I setup Restangular to cache this request.
When I want to get one article from the API, for example on the article detail page by its linkname and later somewhere else (on the cart summary) by its id, I would need 3 REST calls:
/api/articles
/api/articles?linkname=some_article
/api/articles/5
But actually, the data for the two latter calls is already available from the cached first call.
So instead I thought about using the cached articles and filter them to save the additional REST-calls.
I built these functions into my ArticleService and it works as expected:
function getOne(articleId) {
  var article = $q.defer();
  restangular.all("articles").getList().then(function (articles) {
    var filtered = $filter('filter')(articles, {id: articleId}, true);
    article.resolve((filtered.length == 1) ? filtered[0] : null);
  });
  return article.promise;
}
function getOneByLinkname(linkname) {
  var article = $q.defer();
  restangular.all("articles").getList().then(function (articles) {
    var filtered = $filter('filter')(articles, {linkname: linkname}, true);
    article.resolve((filtered.length == 1) ? filtered[0] : null);
  });
  return article.promise;
}
My questions concerning this approach:
Are there any downsides I don't see right now? What would be the correct way to go? Is my approach legitimate, to have as little REST-calls as possible?
Thanks for your help.
Are there any downsides I don't see right now?
That depends on the functionality of your application. If it requires real-time data, then performing REST calls to obtain the latest data would be a requirement.
What would be the correct way to go? Is my approach legitimate, to have as little REST-calls as possible?
It still depends. If you want, you can explore pushing data notifications, so that when the data on the server is changed or modified, you push that information to your clients. That way, the REST operations happen based on conditions you have defined; see the sketch below.
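For example, a hedged socket.io sketch (the 'articlesChanged' event and the 'articles' cache name are assumptions, not part of the question's setup):

// When the server announces a change, drop the cached response and re-fetch,
// so later cached reads serve fresh data.
socket.on('articlesChanged', function () {
  var cache = $cacheFactory.get('articles'); // the $http cache Restangular was configured with
  if (cache) {
    cache.removeAll();
  }
  restangular.all('articles').getList(); // warm the cache again
});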
I am running Grails 1.3.7 and using the Grails database migration plugin, version database-migration-1.0.
The problem I have is that I have a migration changeset that is pulling blobs out of a table and writing them to disk. When running through this migration, I run out of heap space. I was thinking I would need to flush and clear the session to free up some space; however, I am having difficulty getting access to the session from within the migration. BTW, the reason it's in a migration is that we are moving away from storing files in Oracle and putting them on disk instead.
I have tried
SessionFactoryUtils.getSession(sessionFactory, true)
I have also tried
SecurityRequestHolder.request.getSession(false) // request is null -> not surprising
changeSet(author: "userone", id: "saveFilesToDisk-1") {
    grailsChange {
        change {
            def fileIds = sql.rows("""SELECT id FROM erp_file""")
            for (row in fileIds) {
                def erpFile = ErpFile.get(row.id)
                erpFile.writeToDisk()
                session.flush()
                session.clear()
                propertyInstanceMap.get().clear()
            }
            ConfigurationHolder.config.erp.ErpFile.persistenceMode = previousMode
        }
    }
}
Any help would be greatly appreciated.
The application context will be automatically available in your migration as ctx. You can get the session like this:
def session = ctx.sessionFactory.currentSession
To access the session, you can use the withSession closure, like this:
Book.withSession { session ->
    session.clear()
}
But this may not be the reason why your app runs out of heap space. If the data volume is large, then
def fileIds = sql.rows("""SELECT id FROM erp_file""")
for (row in fileIds) {
..........
}
will use up your heap space. Try to process the data with pagination; don't load all the data at once.