I am working on an assignment for a dummy phonebook app, which is an "extra points" part of a test for a local frontend job opening. I did some basic apps with Angular before, but I always used it along with PHP and MySQL. For this project the requirements state that I can't communicate with a server, so I need to store, edit, delete, and search through data without a real database.
I don't even know what options are out there to achieve something like that, nor which one I should choose. I am looking for the simplest tool that could help me meet those requirements, preferably one with decent documentation so I can get up and running as soon as possible.
You can use the simple local filesystem and store objects as JSON, using JSON.stringify() to serialize them and JSON.parse(jsonString) to parse them back.
To write the phonebook to your server's file:
var phonebook = {
'name1' : 234283409,
'name2' : 234253453,
'name3' : 234234236
};
var jsonStr = JSON.stringify(phonebook);
/*
__________________
contents of jsonStr
{"name1":234283409,"name2":234253453,"name3":234234236}
__________________
write logic here to save this JSON to a file on your server.
*/
To read the phonebook back from your server's file:
//write logic here to read the JSON back from your server's file
var jsonStr = getJSONDataFromServer();
var phonebook = JSON.parse(jsonStr);
//now you can use your phonebook as a regular JS object
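Since the assignment rules out talking to a server, one option is to persist that same JSON string in the browser instead. Here is a minimal sketch using the standard Web Storage API (the 'phonebook' key name is just an illustrative choice):
// save the phonebook to localStorage (persists across page reloads)
localStorage.setItem('phonebook', JSON.stringify(phonebook));
// load it back, falling back to an empty object when nothing is stored yet
var stored = localStorage.getItem('phonebook');
var loaded = stored !== null ? JSON.parse(stored) : {};
// edits work on the plain object; re-save after each change
loaded['name4'] = 234111222; // add or update an entry
delete loaded['name2'];      // remove an entry
localStorage.setItem('phonebook', JSON.stringify(loaded));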
You can use a CSV file to store your data.
To store data on the client you can use any of the local storage methods:
WebStorage: the Web Storage API (provides both sessionStorage and localStorage)
gears: Google Gears-based persistent storage (now discontinued)
whatwg db: the HTML database storage standard (Web SQL)
cookie: cookie-based persistent storage (RFC 6265)
The best choice depends on the kind of data you need to store and on how that data is used. The most common choice is WebStorage.
If you use Angular, the great ngStorage module is available, which makes Web Storage work in the Angular way.
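A minimal sketch of that approach, assuming the gsklee/ngStorage module (the module and controller names here are made up for illustration):
angular.module('phonebookApp', ['ngStorage'])
  .controller('PhonebookCtrl', ['$scope', '$localStorage', function($scope, $localStorage) {
    // $localStorage is persisted to window.localStorage automatically
    $scope.$storage = $localStorage.$default({ phonebook: {} });
    $scope.addEntry = function(name, number) {
      $scope.$storage.phonebook[name] = number;
    };
  }]);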
Be warned that:
you'll only be able to store data per user, of course (i.e., you won't be able to store any global state of the application).
any client-side storage solution imposes strict space limits, which often differ from browser to browser.
If instead you simply don't want to use any local server solution, you could try a cloud platform such as Firebase (just acquired by Google), or others.
You can use Google's Firebase. If Firebase is too complicated for you, then use simple localStorage.
I want to clear this issue up. I'm new to React JS, but I need to store some sensitive data in the frontend, such as a database name, database password, and database username. I have used universal-cookie and local storage as well, but neither seems secure, because anyone can edit that data if they inspect the page and open the cookie tab. I just want to know if there is a way to make these cookies uneditable, or if there is a better way to keep this data in the frontend?
Thanks in advance
Normally, sensitive data is not saved on the frontend.
The best way is to fetch it from the server using an HTTP request.
Or you can use local storage, cookies, session storage, etc.
An env setup is another way.
Or you can use third-party storage for this; many free and trusted resources are available.
Ideally you do not want to; you should always send it encrypted from your BE (backend). But if you must, you can create a .env.local file at the root of your React project and put all your variables there. The variable names must start with REACT_APP_, and there should be NO spaces/quotations around your values in this file:
REACT_APP_DB_PASS=your_pass
REACT_APP_DB_ID=your_id
and then you can access them from the process.env object, like this:
process.env.REACT_APP_DB_PASS
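Keep in mind, though, that REACT_APP_ variables are inlined into the JavaScript bundle at build time, so anyone who opens the shipped code can still read them; this keeps secrets out of your repository, not away from your users.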
I am building a web app using Node.js with an Angular-based frontend and a Firebase/AngularFire2 backend. I have a list of about 80 cities, and a couple of details about each of them, that I need to display with checkboxes for the user.
Should I save them as a JSON object in a .json file on the server and call it, or just store them in my Realtime Database and query it? Are there any speed/memory benefits to either?
There are two scenarios:
1. Your task is search oriented. You have to query the data and manipulate it, memory management is a key issue for you, and you want some complex search methods on your data. Then go for the database.
2. Your task requires the whole data set at once. You don't need to worry about memory management. Then load the data directly from the file. Obviously this method saves the time spent establishing a connection with your database, and it works as simply as a file stream (see the sketch below). [suggested for your case]
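For the second scenario, a minimal sketch could look like the following (it assumes the cities were exported to a static file at assets/cities.json; the path is an illustrative choice):
// fetch the static JSON file once and bind the result to the checkbox list
fetch('assets/cities.json')
  .then(function(response) { return response.json(); })
  .then(function(cities) {
    // cities is now a plain array you can render with checkboxes
    console.log(cities.length + ' cities loaded');
  })
  .catch(function(err) { console.error('Could not load cities', err); });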
Please presume that I do not know anything about any of the things I will be mentioning because I really do not.
Most OpenData sites offer the possibility of exporting the presented data in, for example, .csv or .json format (Example). They also always have an API tab (Example API).
I presume using the API would mean that if the data is updated you would receive the change whereas exporting it as .csv would mean the content will not be changed anymore.
My question is: how does one use this API to display the same table one would get when exporting a .csv file?
Would you use a database to extract this information? What kind of database and how do you link the API to the database?
I presume using the API would mean that if the data is updated you would receive the change whereas exporting it as .csv would mean the content will not be changed anymore.
You are correct in the sense that, if you download the csv to your computer, that csv file won't be updated any more.
An API is something you would call - in this case, you can call the API, saying "Hey, do you have the latest data on xxx?", and you will be given back the latest information about what you have asked. This does not mean though, that this site will notify you when there's a new update - you will have to keep calling the API (every hour, every day etc) to see if there are any changes.
My question is: how does one use this API to display the same table one would get when exporting a .csv file?
You would:
Call the API from a server code, or a cloud service
Let the server code or cloud service decipher (or "Parse") the response
Use the deciphered response to create a table made out of HTML, or to place it into a database
Would you use a database to extract this information? What kind of database and how do you link the API to the database?
You wouldn't necessarily need a database to extract information, although a database would be nice to place the final data inside.
You would first need some sort of way to "call the REST API". There are many ways to do this: using shell scripts, using Python, using Excel VBA, etc.
I understand this is hard to visualize, so here is an example of step 1, where you can retrieve information.
Try placing the below URL (taken from the site you showed us) into the address bar of your Chrome browser, and hit Enter:
http://opendata.brussels.be/api/records/1.0/search/?dataset=associations-clubs-sportifs
See how it gives back a lot of text with many brackets and commas? You've basically asked the site to give you some data, and this is the response they gave back (different browsers work differently - IE asks you to download the response as a .json file). You've basically called an API.
To see this data more cleanly, open the developer tools of your Chrome browser and enter the following JavaScript code:
var url = 'http://opendata.brussels.be/api/records/1.0/search/?dataset=associations-clubs-sportifs';
var xhr = new XMLHttpRequest();
xhr.open('GET', url);
xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
xhr.onload = function() {
    if (xhr.status === 200) {
        // success
        console.log(JSON.parse(xhr.responseText));
    } else {
        // error
        console.log(JSON.parse(xhr.responseText));
    }
};
xhr.send();
When you hit enter, a response will come back, stating "Object". If you click through the arrows, you can see this is a cleaner version of the data we just saw - more human readable.
In this case, I used JavaScript to retrieve the data, but you can use whatever code you want. You could proceed to use JavaScript to decipher the data, manipulate it, and push it into a database.
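To sketch what that "decipher" step might look like: OpenDataSoft-style endpoints such as this one return a records array whose entries keep their actual values under a fields object (the exact keys depend on the dataset, so treat the shape here as an assumption):
// assuming `data` is the parsed response from the request above
var rows = data.records.map(function(record) {
    return record.fields; // each record's values live under `fields`
});
// build a crude HTML table from the first row's keys (illustrative only)
var columns = Object.keys(rows[0] || {});
var html = '<table><tr>' +
    columns.map(function(c) { return '<th>' + c + '</th>'; }).join('') + '</tr>' +
    rows.map(function(r) {
        return '<tr>' + columns.map(function(c) {
            return '<td>' + (r[c] !== undefined ? r[c] : '') + '</td>';
        }).join('') + '</tr>';
    }).join('') + '</table>';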
Kintone is an online cloud database that you can customize to run JavaScript code and have it store the data in its own database, so you'll have the data stored online. This is just one example of a database you can use.
There are other cloud services which allow you to connect API end points of different services with each other, like IFTTT and Zapier, but I'm not sure if they connect with open data.
The page you linked to shows that the API returns values as a JSON object. To access the data you can just send an appropriate HTTP request, and the response will be the requested data as JSON. You can send requests like that from your browser if you want to.
Most languages allow JSON objects to be manipulated programmatically if you need to do work on the data.
RESTful APIs follow a "request and publish" model. When you request data via an API endpoint, you receive response strings as JSON objects, CSV tables, or XML.
The publisher, in this case opendata.brussels.be, updates their database on a regular basis and publishes the results via an API endpoint.
If you want to download the table as a relational data table in a CSV file, you'd need to parse the JSON objects into relational tables. This can be tricky, since each JSON response string can vary in its paths.
There are several ways to do it. You can either write scripts to flatten the JSON objects (see the sketch below) or use a tool to parse and flatten the objects for you.
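As a sketch of the script route, flattening nested JSON into CSV-ready rows can be as simple as recursively joining key paths (the sample record shape below is made up for illustration):
// flatten one nested object into a single-level map with dotted key paths
function flatten(obj, prefix, out) {
    out = out || {};
    Object.keys(obj).forEach(function(key) {
        var value = obj[key];
        var path = prefix ? prefix + '.' + key : key;
        if (value !== null && typeof value === 'object') {
            flatten(value, path, out); // recurse into nested objects/arrays
        } else {
            out[path] = value;         // each leaf value becomes one column
        }
    });
    return out;
}
// flatten({ fields: { name: 'Club A', geo: { lat: 50.8, lon: 4.3 } } })
// -> { 'fields.name': 'Club A', 'fields.geo.lat': 50.8, 'fields.geo.lon': 4.3 }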
I use a tool called Acho to turn API endpoints into CSV files. It can parse almost all API endpoints through their parameters and can even be configured for multiple requests, such as iterative and recursive requests.
I'm currently using angularJS and phonegap to build a test application for Android / iOS.
The app uses only text data stored in a Firebase database. I want the app to have its own local database (used when the device is offline) and sometimes (when the device is online)
sync with the Firebase database.
The offline mode uses the storage API of PhoneGap/Cordova. Could I just check the device's online state and back up the online database periodically?
Any clues on how I can achieve this ? Last time a similar question was asked, the answer was "not yet"... (here)... because it focused on a hypothetical Firebase feature.
If Firebase is online at the start and loses its connection temporarily, then reconnects later, it will sync the local data then. So in many cases, once Firebase is online, you can simply keep pushing to Firebase during an outage.
For true offline usage, you will probably want to monitor the device's state, and also watch .info/connected to know when Firebase connects.
new Firebase('URL/.info/connected').on('value', function(ss) {
    if (ss.val() === null) {
        /* firebase disconnected */
    } else {
        /* firebase reconnected */
    }
});
The way to achieve this with the current Firebase toolset, until it supports true offline storage, would be to:
keep the local data simple and small
when the device comes online, convert the locally stored data to JSON
use set() to save the data into Firebase at the appropriate path (see the sketch below)
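A minimal sketch of those steps, assuming the locally cached data sits in localStorage under a made-up 'pendingPhonebook' key and using the same legacy Firebase SDK as the snippet above (the URL is a placeholder):
// when connectivity returns, push the locally cached data to Firebase
function syncLocalData() {
    var cached = localStorage.getItem('pendingPhonebook');
    if (cached === null) return; // nothing to sync

    var ref = new Firebase('https://your-app.firebaseio.com/phonebook');
    ref.set(JSON.parse(cached), function(error) {
        if (error === null) {
            // write confirmed by the server; safe to clear the local copy
            localStorage.removeItem('pendingPhonebook');
        }
    });
}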
Additionally, if the app loads while the device is offline, for some reason, you can "prime" Firebase by calling set() to "initialize" the data. Then you can use Firebase as normal (just as if it were online) until it comes online at some point in the future (you would also want to store your local copy to handle the case where it never does).
Obviously, the simpler the better. Concurrent modifications, limits of local storage size, and many other factors will quickly accumulate to make any offline storage solution complex and time consuming.
After some time, I would like to add $0.03 to @Kato's answer:
Opt to call snapshot.exists() instead of calling snapshot.val() === null. As the documentation points out, exists() is slightly more efficient than comparing snapshot.val() to null.
And if you want to update data, prefer the update() method rather than set(), as the latter will overwrite your Firebase data. You can read more here.
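To illustrate the difference (the /users/123 path and the field are made up for the example):
var ref = new Firebase('https://your-app.firebaseio.com/users/123');
// update() merges: only `phone` changes, sibling children survive
ref.update({ phone: 234283409 });
// set() replaces: after this, the node contains ONLY `phone`
ref.set({ phone: 234283409 });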
I am using Azure Mobile Services to store images for a web application.
I have managed to successfully upload images to a private container. I've followed the logic in this introductory guide (http://code.msdn.microsoft.com/windowsapps/Upload-File-to-Windows-c9169190), i.e. when uploading the file, a SAS is generated by a Node script that is called when inserting a record into a table.
One of the reasons to use this approach from mobile apps is so that the storage key is not stored within the application source itself.
Conforming with that idea I am now struggling to find an example of how to download the images.
Perhaps I should update the read function for the same table and have that return a SAS which can be used to access the image.
Does this sound reasonable, or are there better approaches?
Any assistance is greatly appreciated.
It sounds to me like you are on the right track. If you are storing the image in a private container and want the mobile device to read it back then yes, you will want to produce a SAS that allows reading and get that back to the device. The device code can then make a call directly against BLOB storage using that SAS URL to retrieve the image.
This applies only if you want the container private. If the container is public then just returning the URL (like they have in the article you link to) should be fine.
It also depends on how private you want the image to be. For example, let's say you have a container created per user. If the container has a Shared Access Signature policy on it with a really far-off expiration date, then technically someone still needs the URL with the SAS to view it, but you can create that SAS and store it like the sample does. The mobile app can then be given the URL when it reads data from your service and get to the BLOB directly without creating an additional SAS. In my opinion this option only really works if the images aren't going to be around very long, or if you don't really care that someone who sniffs the URL from the network traffic can access it.
If you want it fairly secure and do not know how long the images will be around, then you should go with your stated approach of getting a SAS for read when the app reads from the related table data. The SAS can have a fairly short expiry on it and the mobile device can cache the result.
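As a rough sketch of what the server-side script might do, using the azure-storage Node package (the package choice, the account variables, and the five-minute expiry are assumptions; the exact API available inside a Mobile Services script may differ):
// generate a short-lived read-only SAS URL for one blob
var azure = require('azure-storage');
var blobService = azure.createBlobService(accountName, accountKey); // keep keys in app settings, not source

function getReadSasUrl(container, blobName) {
    var expiry = new Date();
    expiry.setMinutes(expiry.getMinutes() + 5); // short expiry; the device can cache the URL

    var token = blobService.generateSharedAccessSignature(container, blobName, {
        AccessPolicy: {
            Permissions: azure.BlobUtilities.SharedAccessPermissions.READ,
            Expiry: expiry
        }
    });
    // full URL the device can GET directly against blob storage
    return blobService.getUrl(container, blobName, token);
}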