I would like to know if the following is possible:
Step 1: Call web service A, which retrieves data from the database and stores it in a result set (dynamically).
Step 2: Web service A calls web service B to process the data stored in the result set.
Is it possible to share result sets, which could be large or small, between web services? If it's not possible, what are the best options?
/SR
There might be more elegant solutions depending on which web services you are specifically referring to, but most of them have built-in environment variables, functions, etc. for parsing the URL string, so you can pass data that way. Or with cookies. Or with XML or SOAP.
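For small payloads, query-string passing looks roughly like this; a minimal Node/TypeScript sketch in which the /process endpoint and the ids parameter are assumptions for illustration:

// Service B: reads IDs passed in the query string (hypothetical endpoint).
import express from "express";

const app = express();

app.get("/process", (req, res) => {
  // e.g. GET /process?ids=1,2,3 -- workable for small payloads only
  const ids = String(req.query.ids ?? "").split(",").filter(Boolean);
  res.json({ processed: ids.length });
});

app.listen(8081);

Note that URLs have practical length limits, so this only suits small result sets; for anything larger you'd POST the data instead.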
I don't see any problem here. As long as one web service can handle data "of large or small size", you can insert another web service in the middle.
However, there might be performance penalties in converting data to and from XML/JSON/SOAP several times.
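To make that concrete, here is a minimal sketch (the service B URL and the shape of the rows are assumptions): service A serializes the result set once and POSTs it to service B, so the data is converted to JSON exactly one time per hop.

// Service A: fetch rows from the database, then hand them to service B
// as a single JSON payload (requires Node 18+ for the built-in fetch).
type Row = Record<string, unknown>;

async function forwardResultSet(rows: Row[]): Promise<void> {
  const res = await fetch("http://service-b.example.com/process", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(rows), // one serialization, one network hop
  });
  if (!res.ok) throw new Error(`Service B failed: ${res.status}`);
}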
You don't say what your execution environment is -- please do, btw -- but in any environment I've heard of or worked in, there are several ways to do what you want to do. Leaving aside standalone programs, interprocess communication is (I claim, without thinking too much) the most common requirement on any OS.
-- b
Traditionally, in a non-serverless environment, I would have the following system. Say I have a custom ID generation protocol for all my models. Say I also have 20 servers scattered around. I give each server a slice of IDs to work with off the whole stack of IDs. When a server is done, or goes down, it returns its IDs to the system so they don't get wasted. The reason for sending each server a batch of IDs is that every time a new record is created you don't need to fetch from a central ID server to get the next ID. Instead, each server has a local set it can work with freely.
How would you do this sort of thing in a serverless system? I am deploying to Vercel and wondering what the appropriate architecture might be for such an ID-batching system. There are other use cases for needing a persistent copy of data on a local server, so if you don't like the ID example, just imagine another sort of system. How do you solve this optimization problem in a serverless environment?
Serverless is an approach. Like all such things (solutions), it should be matched to the problem, not the other way around. Is this simply a case where serverless is a good choice for dealing with 80% of your problem, and all you need to do is choose something appropriate to deal with the other 20%?
Assuming you have the freedom to do this, can't you just have the serverless parts of the solution consume non-serverless services - e.g. an ID Service (sketched below)?
Separately from this, caching comes to mind - just the general idea of having some data close by which might be mastered somewhere else. Caching patterns like Write Behind would allow you to work with local copies (i.e. immediate consumption) whilst farming out the cache-to-master communication.
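A minimal sketch of the ID Service idea, assuming a hypothetical /batch endpoint that leases a contiguous range of IDs; the serverless function keeps the leased range in module scope so warm invocations of the same instance reuse it:

// Hypothetical ID-service client for a serverless handler (e.g. on Vercel).
let pool: number[] = []; // module scope: survives warm invocations of this instance

async function nextId(): Promise<number> {
  if (pool.length === 0) {
    // Lease a fresh batch from the central ID service (URL and response shape assumed).
    const res = await fetch("https://id-service.example.com/batch?size=100");
    const { start, size } = (await res.json()) as { start: number; size: number };
    pool = Array.from({ length: size }, (_, i) => start + i);
  }
  return pool.shift()!;
}

One caveat: a serverless instance can be recycled without returning its unused IDs, so the batch size has to be chosen to tolerate some waste.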
I have a monitoring service that polls a REST API for information about the latest resources (a list of hosts / a list of licenses). The monitoring service caches all this data in a Redis database. Everything works great for discovering new resources.
However, the problem arises when a host drops off the network: I have no way of knowing that the host has disappeared from the list of hosts. The REST API only gives me a way of querying the current list of hosts.
One way I can come up with (theoretically) is taking a diff of the RDB at different time intervals. However, this does not seem efficient to me, and honestly I am not sure how I would do it with Redis.
The suggestions I am looking for are, perhaps, frameworks best suited for this kind of operation or, if need be, a different database that is as efficient as Redis yet gives me the functionality I need to take diffs. Time-series databases spring to mind, but I have no experience with them and am not sure how they could be used to solve this problem precisely.
There's no need to resort to anything besides Redis itself - it is robust enough to continue serving your requirements as long as you tell it what to do (like any other software ;)).
The following is an example, but as you didn't specify how you're caching your data, I'll assume for simplicity's sake that you have a key for every host/license in your list, where you store some string/binary value, like:
SET acme.org "some cached value"
You have a lot of such keys because the monitoring REST API returns a list, so a common way to keep everything in order is to use another key to store that list for each response returned by the API. You can achieve that with a Set:
SADD request:<timestamp> acme.org foo.bar ...
Sets are particularly useful here because you can perform Set operations - SDIFF and SINTER, and their STORE variants in your case - to keep track of the currently online and dropped hosts. For example:
MULTI
SINTERSTORE online:<timestamp> request:<timestamp> request:<previous-timestamp>
SDIFFSTORE dropped:<timestamp> request:<previous-timestamp> request:<timestamp>
EXEC
Note: as you're caching things, it is good practice to set expiry values (TTL) on all relevant keys and use an appropriate eviction policy.
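Wired into a client, the same idea looks roughly like this - a sketch using the ioredis library, with key names following the example above:

// Sketch with ioredis: record this poll's hosts, then diff against the previous poll.
import Redis from "ioredis";

const redis = new Redis();

async function recordPoll(hosts: string[], now: number, prev: number): Promise<string[]> {
  if (hosts.length > 0) await redis.sadd(`request:${now}`, ...hosts);
  await redis
    .multi()
    // still online: present in both the current and the previous poll
    .sinterstore(`online:${now}`, `request:${now}`, `request:${prev}`)
    // dropped: present in the previous poll but missing from this one
    .sdiffstore(`dropped:${now}`, `request:${prev}`, `request:${now}`)
    .expire(`request:${now}`, 3600) // the TTL housekeeping noted above
    .exec();
  return redis.smembers(`dropped:${now}`); // the hosts that disappeared
}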
I am building a mobile app using AngularJS and PhoneGap. The app allows the user to access a large number of data-items, which ship with the app in the form of a number of .json files.
One use-case is that a user can favorite any of those data-items.
Currently, I store the (IDs of the) items that have been favorited in localStorage. It works and it's great and very simple.
But now I would like to create an online backend for the app. By this I mean that the (IDs of the) items that have been favorited should also be stored on a server somewhere, in some form of database.
Now my question is:
How do I best do this?
How do I keep the localStorage data and the online-backend data in sync?
In particular, the user might not have an internet connection at the time when he favorites a data-item. Additionally, if the user favorites x data-items in a row, I would need to make x update calls to the server DB, which clearly isn't great.
So, how do people do it?
Does Angular have anything built-in for this?
Is there any plugin?
Any other framework?
This seems very much like a common problem that must have a well-known solution.
I think you've almost got the entire solution. All you need to do is periodically send the JSON out to a service that puts it in a database (on app start, load the data from the service if available, otherwise use the current local storage; then, perhaps with a timer and on app close, update the data if connected). I generally prefer PHP for the service, but Python, Java, Ruby, Perl, whatever floats your boat. If you're concerned with merging synchronization changes, you'll need to use timestamps in the data in local storage and in the database to make the right call on what should be inserted vs. what should be updated.
I don't think there's a one-size-fits-all solution to the problem, and though I imagine someone may have crafted a library that handles the different potential scenarios, its configuration may be as complicated as just writing the logic yourself.
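As a minimal sketch of the timestamp-based merge (the /api/favorites/sync endpoint and the record shape are assumptions, not part of Angular or PhoneGap):

// Sketch: batch-sync favorites with a last-write-wins merge keyed on timestamps.
interface Favorite {
  id: string;
  favoritedAt: number; // epoch millis, used to decide insert vs. update
}

function loadLocal(): Favorite[] {
  return JSON.parse(localStorage.getItem("favorites") ?? "[]");
}

async function syncFavorites(): Promise<void> {
  const local = loadLocal();
  // One batched call instead of x calls; the hypothetical endpoint
  // merges by timestamp and returns the server's view of the list.
  const res = await fetch("/api/favorites/sync", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(local),
  });
  const merged: Favorite[] = await res.json();
  localStorage.setItem("favorites", JSON.stringify(merged));
}

Calling this on app start, on a timer, and on app close (when connected) also covers the offline case: unsynced favorites simply wait in localStorage until the next successful call.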
For my new project I'm looking to use JSON data stored as text files rather than fetching data from the database. My idea is to save a JSON file on the server whenever the admin creates a new entry in the database.
As there is no issue of security, will this approach make user access to the data faster, or shall I go with the usual database queries?
JSON is typically used as a way to format the data for the purpose of transporting it somewhere. Databases are typically used for storing data.
What you've described may be perfectly sensible, but you really need to say a little bit more about your project before the community can comment on your approach.
What's the pattern of access? Is it always read-only for the user, editable only by site administrator for example?
You shouldn't worry about performance early on. Worry more about ease of development, maintenance and reliability, you can always optimise afterwards.
You may want to look at http://www.mongodb.org/. MongoDB is a document-centric store that uses JSON as its storage format.
JSON in combination with jQuery is a great option for fast, smooth web-page updating, but ultimately it still comes down to the same database query.
Just make sure your query is efficient. Use a stored proc.
JSON is just the way the data is sent from the server (a web controller in MVC, or code-behind in standard C#) to the client (jQuery or JavaScript).
Ultimately the database will be queried the same way.
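In other words, something like the following sketch (the entries table and the Express/Postgres setup are assumptions for illustration): the endpoint runs a normal parameterized query, and JSON is only the encoding of the response.

// Sketch: JSON is only the transport; the database is still queried as usual.
import express from "express";
import { Pool } from "pg";

const app = express();
const db = new Pool(); // connection settings come from the environment

app.get("/entries", async (_req, res) => {
  const { rows } = await db.query("SELECT id, title FROM entries ORDER BY id");
  res.json(rows); // serialize the query result as JSON for the client
});

app.listen(3000);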
You should stick with the classic method (database), because you'll face many problems with concurrency and with having too many files to handle.
I think you should go with the usual database query.
If you use JSON files, you'll have to sync them with the DB (that means extra work is needed) and you'll face I/O problems (if your site is super busy).
Architecture:
A database on a central server, which contains a complex hierarchical structure.
The clients should be able to insert data into tables through the API; the data would be inserted into multiple tables in the database at the same time, not only into one table.
The clients should be able to retrieve data by using a complex search query.
The clients can upload/download files to the server, which could have a size of multiple GBs.
Would SOAP be better for this job than REST? Can you please explain why?
Almost all the things you mention are equally achievable using either SOAP or REST, though perhaps a little easier with SOAP. Certainly it's easier to create client APIs for SOAP interfaces; client tooling support is significantly more advanced in the majority of languages.
However, you say that you want to deal with multi-gigabyte uploads and downloads. That's a crucial point, as REST is able to handle that sort of thing far more easily. SOAP is almost always tooled in terms of DOM processing, and that means building full messages in memory; you don't ever want to do that with a multi-GB payload.
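For illustration, a REST upload can be streamed from disk without ever holding the file in memory - a Node/TypeScript sketch, where the endpoint URL and file name are assumptions:

// Sketch: stream a multi-GB file to a REST endpoint chunk by chunk.
import { createReadStream } from "node:fs";
import { request } from "node:http";

const req = request("http://api.example.com/files/backup.bin", {
  method: "PUT",
  headers: { "Content-Type": "application/octet-stream" },
}, (res) => console.log("upload status:", res.statusCode));

// Piping keeps memory use roughly constant, unlike a DOM-built SOAP message.
createReadStream("backup.bin").pipe(req);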
So go with REST. That's definitely your best option for achieving all your listed objectives.